How to Download a Sanskrit-English Dictionary for Free

If you are looking for a way to learn Sanskrit, the ancient and sacred language of India, you might want to download a Sanskrit-English dictionary. This handy tool can help you translate words and phrases from Sanskrit to English and vice versa, and you can also use it to study the grammar, pronunciation, and culture of Sanskrit.

But where can you find a reliable, free Sanskrit-English dictionary? Many websites and apps claim to offer this service, but not all of them are trustworthy or accurate. Some contain errors, malware, or ads that can ruin your experience; others charge a fee or require you to register or subscribe.

That's why we have compiled a list of the best sources for downloading a Sanskrit-English dictionary for free. These are reputable, safe platforms that have been tested and reviewed by users and experts. They offer high-quality, comprehensive Sanskrit dictionaries that you can access online or offline. Here they are:
- Sanskrit Dictionary: This is one of the most popular and comprehensive online Sanskrit dictionaries. It has over 200,000 entries and covers both classical and modern Sanskrit. You can search by word, root, or category. You can also browse by alphabet, topic, or author. You can download the entire dictionary as a PDF file or as an app for Android or iOS devices.
- Spoken Sanskrit: This is another excellent online Sanskrit dictionary that focuses on spoken Sanskrit. It has over 60,000 entries and covers both literary and colloquial Sanskrit. You can search by word or phrase in Sanskrit or English. You can also listen to the audio pronunciation of each word. You can download the dictionary as an app for Android devices.
- Sanskrit Lexicon: This is a collection of various Sanskrit dictionaries compiled by the University of Cologne. It includes the Monier-Williams Sanskrit-English Dictionary, the Apte Practical Sanskrit-English Dictionary, the Cologne Digital Sanskrit Dictionaries, and more. You can search by word or browse by dictionary. You can download each dictionary as a PDF file or as an XML file.
These are some of the best sources for downloading a Sanskrit-English dictionary for free. We hope you find them useful and enjoy learning Sanskrit. If you have any questions or feedback, please let us know in the comments below.
Why Learn Sanskrit?

Sanskrit is one of the oldest and most influential languages in the world. It is the language of the Vedas, the Upanishads, the Bhagavad Gita, and many other sacred texts of Hinduism, Buddhism, and Jainism. It is also the source of many words and concepts in other languages, such as Hindi, Urdu, Bengali, Nepali, and English.

Learning Sanskrit can enrich your knowledge and appreciation of the ancient and modern cultures of India and beyond. It can improve your cognitive and linguistic skills, as Sanskrit is known for its logical and grammatical structure, rich vocabulary, and poetic beauty. It can also help you access the original texts and teachings of various spiritual traditions and philosophies.
How to Learn Sanskrit?

Learning Sanskrit can be challenging but rewarding. It requires dedication, patience, and practice. But it is not impossible. There are many resources and methods that can help you learn Sanskrit at your own pace and level. Here are some tips to get you started:
- Choose a suitable Sanskrit dictionary: As mentioned above, a good Sanskrit dictionary is a great tool for learning the language. It can help you understand the meaning, usage, and derivation of Sanskrit words and phrases, as well as the grammar, pronunciation, and culture of Sanskrit. Choose a dictionary that suits your needs and preferences.
- Learn the basics of Sanskrit: Before you dive into the advanced aspects of Sanskrit, you need to learn the basics. This includes the alphabet, the sounds, the script, the grammar, and the syntax. You can use books, online courses, videos, podcasts, or apps to learn these fundamentals, or find a teacher or tutor who can guide you through the process.
- Practice reading and writing Sanskrit: One of the best ways to learn Sanskrit is to practice reading and writing it. Start with simple texts that are suitable for beginners, such as stories, poems, proverbs, or dialogues, and try writing your own sentences or paragraphs in Sanskrit. This will improve your vocabulary, grammar, and comprehension skills.
- Practice speaking and listening to Sanskrit: Another way to learn Sanskrit is to practice speaking and listening to it. Find a partner or a group who can converse with you in Sanskrit, or listen to audio recordings or podcasts that feature Sanskrit speakers. This will improve your pronunciation, fluency, and communication skills.
These are some of the tips that can help you learn Sanskrit effectively. Remember that learning a new language takes time and effort, but with consistent practice and enthusiasm, you will be able to master Sanskrit and enjoy its benefits.
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Brutal Doom V16 REPACK Download.md b/spaces/1gistliPinn/ChatGPT4/Examples/Brutal Doom V16 REPACK Download.md
deleted file mode 100644
index b7ea38ab95a90ed736f86933dbe5f4fd6850e247..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Brutal Doom V16 REPACK Download.md
+++ /dev/null
@@ -1,156 +0,0 @@
Brutal Doom V16 Download: How to Choose the Best Mod for Your Doom Experience

Doom is one of the most iconic and influential games of all time. It revolutionized the FPS genre with its fast-paced action, immersive graphics, and brutal violence. But even after almost 30 years, Doom is still alive and kicking, thanks to the countless mods that enhance and expand the game in various ways.

One of the most popular and acclaimed mods for Doom is Brutal Doom, which adds new features, weapons, enemies, gore, sounds, and gameplay mechanics to make Doom more brutal, intense, and fun. But did you know that there are different versions of Brutal Doom that you can download and play?

In this article, we will introduce you to two of the most recent and interesting versions of Brutal Doom: the Classic Edition v16a and the Extended Edition v16. We will tell you what they are, how they differ from each other and from the original Brutal Doom, and how to download and install them on your PC. Let's get started!
What is Brutal Doom Classic Edition v16a?

Brutal Doom Classic Edition v16a is a mod that aims to recreate the original Brutal Doom experience with elements from v18, v20, and v21. It is a more classic version of Brutal Doom, with fewer features and changes than the newer versions, but still with plenty of brutality and fun.

Some of the features of Brutal Doom Classic Edition v16a are:

- A classic HUD with health, armor, ammo, and keys.
- A classic weapon selection with no reloads or alt-fires.
- Classic enemy behavior with no dodging or fleeing.
- A classic gore system with no dismemberment or blood pools.
- A classic sound system with no reverb or dynamic music.

If you want to enjoy Brutal Doom as it was originally intended, with simple and straightforward gameplay that focuses on shooting and killing demons, then Brutal Doom Classic Edition v16a is for you.
What is Brutal Doom Extended Edition v16?

Brutal Doom Extended Edition v16 is a mod based on Brutal Doom and Dox778's personalized addon. It aims to improve the overall gameplay of Brutal Doom with new features, enhancements, and fixes. It is a more modern version of Brutal Doom, with more options and customization than the older versions, but still with the same core gameplay that makes Brutal Doom so great.

Some of the features of Brutal Doom Extended Edition v16 are:

- A new HUD with dynamic health bars, a stamina meter, weapon icons, and more.
- A new weapon selection with reloads, alt-fires, dual-wielding, grenades, and more.
- New enemy behavior with dodging, fleeing, infighting, ambushes, and more.
- A new gore system with dismemberment, blood pools, gibs, and more.
- A new sound system with reverb, dynamic music, ambient sounds, and more.

If you want to enjoy Brutal Doom with more variety and challenge, with a lot of options and settings to customize your gameplay experience, then Brutal Doom Extended Edition v16 is for you.
How to Download and Install Brutal Doom V16 Mods?

To download and install Brutal Doom V16 mods, you will need a few things:

- A copy of Doom or Doom II on your PC. You can get them from Steam or GOG.com.
- A source port that supports mods. We recommend GZDoom or Zandronum.
- The latest version of Brutal Doom (v21). You can get it from Mod DB or GitHub.
- The mod file of your choice: Brutal Doom Classic Edition v16a or Brutal Doom Extended Edition v16. You can get them from Mod DB as well.
Once you have everything ready, follow these steps (a minimal command-line launcher sketch is shown below):

1. Extract the source port files to a folder on your PC.
2. Copy the DOOM.WAD or DOOM2.WAD file from your game folder to the source port folder.
3. Extract the brutalv21.pk3 file from the Brutal Doom archive to the source port folder.
4. Extract the mod file (Brutal_Classic_v16a.zip or BDEE_v16_Compressed.zip) to the source port folder.
5. Launch the source port executable (gzdoom.exe or zandronum.exe).
6. Select your game (Doom or Doom II) and your mod (Brutal_Doom_Classic_Edition_v16a.pk3 or BDEE_v16_Compressed.pk3).
7. Enjoy!
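If you prefer to skip the source port's launcher dialog, you can start the game directly from the command line. The script below is a minimal, hypothetical sketch of that: it assumes GZDoom is installed in the folder shown, that the WAD and .pk3 files were copied there as described above, and that your build accepts the standard -iwad and -file arguments. Adjust the paths and file names to match your own setup.

```python
# Minimal launcher sketch (assumptions: paths below are placeholders, GZDoom
# accepts the usual -iwad / -file arguments, and the .pk3 files sit next to it).
import subprocess
from pathlib import Path

SOURCE_PORT_DIR = Path(r"C:\Games\GZDoom")        # hypothetical install folder
GZDOOM = SOURCE_PORT_DIR / "gzdoom.exe"
IWAD = SOURCE_PORT_DIR / "DOOM2.WAD"              # or DOOM.WAD for the original game
MOD_FILES = [
    SOURCE_PORT_DIR / "brutalv21.pk3",                         # base Brutal Doom v21
    SOURCE_PORT_DIR / "Brutal_Doom_Classic_Edition_v16a.pk3",  # or BDEE_v16_Compressed.pk3
]

def launch() -> None:
    """Start GZDoom with the base game and the selected mod files."""
    missing = [p for p in (GZDOOM, IWAD, *MOD_FILES) if not p.exists()]
    if missing:
        raise FileNotFoundError("Missing: " + ", ".join(str(p) for p in missing))
    # -iwad picks the base game data; -file loads the mod archives in order.
    cmd = [str(GZDOOM), "-iwad", str(IWAD), "-file", *map(str, MOD_FILES)]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    launch()
```

If the edition file is meant to be loaded on top of brutalv21.pk3 (as the steps above suggest), keep brutalv21.pk3 first in the list, since files listed later in the load order take precedence.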
Either way, Brutal Doom V16 mods are some of the best ways to enjoy Doom in 2023. Whether you prefer a classic or a modern version of Brutal Doom, you will find a mod that suits your taste and style.
What are the Benefits of Playing Brutal Doom V16 Mods?

Playing Brutal Doom V16 mods can offer you many benefits, such as:

- Enhancing your Doom experience with new and improved features that make the game more fun and challenging.
- Exploring new and diverse levels, enemies, and scenarios that add more variety and replay value to the game.
- Customizing your gameplay with different options and settings that suit your preferences and style.
- Experiencing the classic Doom gameplay with a modern twist that keeps the game fresh and exciting.
- Enjoying the high-quality graphics, sounds, and effects that make the game more immersive and realistic.

Playing Brutal Doom V16 mods can give you a whole new perspective on Doom and make you appreciate the game even more.
What are the Requirements for Playing Brutal Doom V16 Mods?

To play Brutal Doom V16 mods, you will need a few things:

- A decent PC that can run Doom and the source port smoothly.
- A compatible controller, or a keyboard and mouse, that can handle the fast-paced action of Brutal Doom.
- A good internet connection that can download the mod files quickly and without errors.
- A passion for Doom and a desire to experience it in a new and brutal way.

If you have these things, then you are ready to play Brutal Doom V16 mods and have a blast!
What are the Differences between Brutal Doom V16 Mods?

Brutal Doom V16 mods have some differences that make them unique and appealing to different types of players. Here are some of the main differences between them:

- Brutal Doom Classic Edition v16a is more faithful to the original Brutal Doom, while Brutal Doom Extended Edition v16 is more innovative and experimental.
- Brutal Doom Classic Edition v16a is more compatible with other mods and addons, while Brutal Doom Extended Edition v16 is more standalone and self-contained.
- Brutal Doom Classic Edition v16a is more stable and bug-free, while Brutal Doom Extended Edition v16 is more updated and feature-rich.
- Brutal Doom Classic Edition v16a is more suitable for purists and nostalgia lovers, while Brutal Doom Extended Edition v16 is more suitable for adventurers and thrill seekers.

Depending on your preferences and expectations, you can choose the mod that best suits your needs and tastes.
What are the Reviews of Brutal Doom V16 Mods?

Brutal Doom V16 mods have received positive reviews from players and critics alike. They have been praised for their quality, variety, and fun factor. Here are some of the reviews of Brutal Doom V16 mods:

"Brutal Doom Classic Edition v16a is a great mod for those who want to relive the glory days of Brutal Doom. It has everything you need to enjoy a classic and brutal Doom experience, without any unnecessary or distracting features. It is simple, fast, and fun." -- A Mod DB user

"Brutal Doom Extended Edition v16 is a great mod for those who want to explore the possibilities of Brutal Doom. It has everything you need to enjoy a modern and diverse Doom experience, with a lot of options and settings to customize your gameplay. It is varied, challenging, and immersive." -- A Mod DB user

Brutal Doom V16 mods have been rated highly by the community and have received many awards and recognitions. They are among the best mods for Doom ever made.
What are the Tips and Tricks for Playing Brutal Doom V16 Mods?

Playing Brutal Doom V16 mods can be a lot of fun, but also a lot of challenge. Here are some tips and tricks that can help you survive and enjoy the game more:

- Use cover and movement to avoid enemy fire and attacks. Don't stand still or you will be an easy target.
- Use different weapons and ammo types for different situations and enemies. Experiment and find out what works best for you.
- Use grenades and explosives to clear out groups of enemies or destroy obstacles. Be careful not to hurt yourself or your allies.
- Use melee attacks and executions to save ammo and deal extra damage. You can also use them to regain health and armor.
- Use the environment to your advantage. You can use barrels, crates, switches, doors, and more to create traps, distractions, or shortcuts.

Playing Brutal Doom V16 mods can be a rewarding and satisfying experience if you know how to play smart and use your resources wisely.
What are the Future Plans for Brutal Doom V16 Mods?

Brutal Doom V16 mods are not finished yet. The modders behind them are constantly working on improving and updating them with new features, fixes, and content. Here are some of the future plans for Brutal Doom V16 mods:

- Brutal Doom Classic Edition v16a will be ported to Brutal Doom v22 when it is released, to make it compatible with the latest version of Brutal Doom.
- Brutal Doom Extended Edition v16 will be updated with new weapons, enemies, levels, and more, to make it more diverse and complete.
- Both mods will be tested and optimized for performance and stability, to make them run smoothly and without errors.
- Both mods will be supported by the community with feedback, suggestions, bug reports, and donations, to make them better and more enjoyable.

Brutal Doom V16 mods are still in development and have a lot of potential. They will continue to grow and evolve with time and effort.
Conclusion

Brutal Doom V16 mods are some of the best ways to enjoy Doom in 2023. They offer different versions of Brutal Doom to suit your preferences and expectations, and they enhance and expand the game with new and improved features that make it more fun and challenging. They are easy to download and install, they come with plenty of benefits, tips, and tricks to help you survive and enjoy the game, and they are constantly being updated and supported by the modders and the community.

If you are a fan of Doom and Brutal Doom, you should definitely try Brutal Doom V16 mods. They will give you a whole new perspective on Doom and make you appreciate the game even more. Download them now and have a blast!
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Billiards Pool Games Download Learn the Rules and Strategies of Pool Games.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Billiards Pool Games Download Learn the Rules and Strategies of Pool Games.md
deleted file mode 100644
index 0cb667c4ffdbd6aba27dc5dd3ce909a7ff846e21..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Billiards Pool Games Download Learn the Rules and Strategies of Pool Games.md
+++ /dev/null
@@ -1,179 +0,0 @@
Billiards Pool Games Download: How to Enjoy the Fun of Pool on Your Phone

Do you love playing pool but don't have the time or space to own a pool table? Do you want to practice your skills and challenge your friends online? Do you want to have fun and relax with a realistic and engaging pool game on your phone? If you answered yes to any of these questions, then you should download billiards pool games on your Android device.

Introduction

What are billiards pool games?

Billiards pool games are digital versions of the popular cue sports that involve hitting balls with a stick on a cloth-covered table. There are different types of billiards pool games, such as 8-ball, 9-ball, snooker, and carom. Each game has its own rules, objectives, and strategies. Billiards pool games can be played solo, against the computer, or online with other players.

Downloading billiards pool games on your phone has many benefits, such as:

- You can play pool anytime and anywhere, without needing a physical table or equipment.
- You can improve your skills and learn new tricks by practicing different shots and angles.
- You can compete with other players from around the world and join tournaments and leagues.
- You can customize your cue and table with various designs and colors.
- You can enjoy realistic graphics, sound effects, and physics that simulate the real game.
Best Billiards Pool Games for Android

There are many billiards pool games available for Android devices, but not all of them are worth downloading. Here are some of the best ones that you should try:

8 Ball Pool

Features

8 Ball Pool is one of the most popular and downloaded billiards pool games on Android. It is developed by Miniclip, a leading online game company. 8 Ball Pool offers the following features:

- You can play 8-ball or 9-ball pool in single player mode, against the computer, or online with other players.
- You can join tournaments and leagues to win coins and exclusive items.
- You can use coins to buy new cues and tables in the shop.
- You can level up and access more match locations and challenges.
- You can chat and send emojis to your opponents during the game.
Pros and cons

8 Ball Pool has many pros, such as:

- It has a large and active community of players from different countries.
- It has high-quality graphics and animations that create a realistic experience.
- It has easy and intuitive controls that suit both beginners and experts.
- It has frequent updates and new features that keep the game fresh and exciting.

However, 8 Ball Pool also has some cons, such as:

- It requires an internet connection to play online mode.
- It contains ads and in-app purchases that may be annoying or expensive.
- It may have some bugs or glitches that affect the gameplay or performance.
Pool Billiards Pro

Features

Pool Billiards Pro is another popular and well-rated billiards pool game on Android. It is developed by TerranDroid, a game studio that specializes in casual and sports games. Pool Billiards Pro offers the following features:
- You can play 8-ball, 9-ball, or snooker in single player mode, against the computer, or online with other players.
- You can choose from different game modes, such as Arcade Mode, Challenge Mode, and Time Mode.
- You can adjust the difficulty level and the game speed according to your preference.
- You can use the touch screen or the accelerometer to control the cue.
- You can view the game statistics and achievements.
Pros and cons

Pool Billiards Pro has many pros, such as:

- It has a simple and elegant design that is easy on the eyes.
- It has smooth and realistic physics that make the game more enjoyable.
- It has a variety of game modes and options that cater to different tastes and skills.
- It has a small file size and does not consume much battery or memory.

However, Pool Billiards Pro also has some cons, such as:

- It does not have a chat or social feature to interact with other players.
- It does not have a shop or customization feature to buy or change cues and tables.
- It does not have a ranking or leveling system to measure your progress and status.
- It may have some ads that interrupt the game flow.
8 Ball Billiards Offline Pool

Features

8 Ball Billiards Offline Pool is a newer and lesser-known billiards pool game on Android. It is developed by SNG Games, a game developer that focuses on offline and classic games. 8 Ball Billiards Offline Pool offers the following features:

- You can play 8-ball pool in offline mode without needing an internet connection.
- You can play against the computer or with your friends on the same device.
- You can choose from four different table colors and four different cue colors.
- You can use hints and undo options to help you with your shots.
- You can earn coins by winning games and use them to unlock new cues and tables.
Pros and cons

8 Ball Billiards Offline Pool has many pros, such as:

- It is ideal for players who want to play pool offline or with their friends locally.
- It has a simple and user-friendly interface that is easy to navigate and play.
- It has relaxing and soothing background music that creates a pleasant atmosphere.
- It has no ads or in-app purchases that may distract or annoy you.

However, 8 Ball Billiards Offline Pool also has some cons, such as:

- It does not have an online mode or a multiplayer mode with other players around the world.
- It does not have many game modes or options to choose from.
- It does not have high-quality graphics or animations that may appeal to some players.
- It does not have a chat or social feature to communicate with other players.
Conclusion

Summary of the main points

In conclusion, billiards pool games are fun and exciting games that you can download on your Android device. They allow you to enjoy the thrill of pool without needing a physical table or equipment, and they help you improve your skills and compete with other players online. Some of the best billiards pool games for Android are 8 Ball Pool, Pool Billiards Pro, and 8 Ball Billiards Offline Pool. Each game has its own features, pros, and cons that you should consider before downloading it.

Call to action

If you are looking for a great way to spend your free time, then you should download billiards pool games on your Android device. They are easy to play, fun to master, and challenging to beat, and they will keep you entertained and engaged for hours. So what are you waiting for? Download billiards pool games today and start playing!

Frequently Asked Questions

Here are some of the most common questions that people ask about billiards pool games:
What are the rules of billiards pool games?

The rules of billiards pool games vary depending on the type of game you are playing. However, some general rules are:

- You must hit the cue ball with your cue stick and make it hit other balls on the table.
- You must pocket the balls in the designated pockets according to the game's objective.
- You must not commit any fouls, such as scratching the cue ball, hitting the wrong ball, or pocketing the wrong ball.
- You must take turns with your opponent until one of you wins the game.
How can I download billiards pool games on my Android device?

You can download billiards pool games on your Android device by following these steps:

1. Go to the Google Play Store and search for billiards pool games.
2. Choose the game that you want to download and tap on it.
3. Tap on the Install button and wait for the game to download and install on your device.
4. Tap on the Open button and enjoy the game.
Are billiards pool games free or paid?

Most billiards pool games are free to download and play on your Android device. However, some games may contain ads or in-app purchases that require real money to access certain features or items. You can choose to disable or enable these options in the game settings or in your device settings.
Which billiards pool game is the best for me?

The best billiards pool game for you depends on your personal preference and taste. You should consider factors such as:

- The type of game you want to play (8-ball, 9-ball, snooker, etc.)
- The mode of play you want to enjoy (single player, online multiplayer, offline multiplayer, etc.)
- The features and options you want to have (customization, chat, tournaments, etc.)
- The graphics and sound quality you want to experience (realistic, cartoonish, etc.)
- The difficulty level and challenge you want to face (easy, hard, etc.)

You can try different games and see which one suits you best. You can also read reviews and ratings from other players to get an idea of what they think about the games.
How can I improve my skills in billiards pool games?

You can improve your skills in billiards pool games by practicing regularly and learning from your mistakes. You can also follow these tips:

- Watch tutorials and videos from experts and learn their techniques and strategies.
- Study the rules and objectives of each game and know how to score points and avoid fouls.
- Practice different shots and angles and learn how to use spin, power, and aim.
- Play against different opponents and learn from their moves and styles.
- Join tournaments and leagues and challenge yourself with higher levels of competition.
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Undangan Pernikahan AI Tips dan Trik untuk Membuat Undangan Menarik.md b/spaces/1phancelerku/anime-remove-background/Download Undangan Pernikahan AI Tips dan Trik untuk Membuat Undangan Menarik.md
deleted file mode 100644
index 93ef6f35be20b26d1d58addbb1e150647d488e6c..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Undangan Pernikahan AI Tips dan Trik untuk Membuat Undangan Menarik.md
+++ /dev/null
@@ -1,131 +0,0 @@
How to Download Wedding Invitation Templates in AI Format

If you're planning a wedding, you know how important it is to create beautiful and memorable invitations that reflect your personality and style. But designing your own invitations from scratch can be time-consuming, expensive, and stressful. That's why many couples opt for downloading wedding invitation templates that they can customize and print themselves.

One of the most popular formats for wedding invitation templates is AI, which stands for Adobe Illustrator. AI is a vector-based graphic design format that allows you to create high-quality graphics and illustrations with ease. AI files are easy to customize and edit, as you can change the text, fonts, colors, images, and layout as you wish. They are compatible with various design tools, such as Adobe Illustrator, Photoshop, InDesign, and CorelDraw, and they are scalable and flexible, meaning you can resize them without losing quality or clarity.

Downloading wedding invitation templates in AI format can help you save time, save money, and unleash your creativity. You can find hundreds of free or affordable templates online that suit your theme, style, or color scheme, and you can print your invitations on high-quality paper or send them online to your guests. In this article, we will show you how to download wedding invitation templates in AI format and how to customize and print them yourself.

Benefits of Using AI Format for Wedding Invitations

Using AI format for wedding invitations has many benefits, such as:

- High-quality graphics and illustrations: AI files are vector-based, which means they are made of mathematical equations that define the shapes, colors, and strokes of the graphics. This makes them sharp and clear, even when zoomed in or out. AI files also support transparency, gradients, and patterns, which add more depth and dimension to your invitations.
- Easy customization and editing: AI files are editable, which means you can change the text, fonts, colors, images, and layout of your invitations as you like. You can also add your own graphics, logos, or photos to make your invitations more personal and unique. You can use Adobe Illustrator or another compatible tool to edit your AI files.
- Compatibility with various design tools: AI files are compatible with various design tools, such as Adobe Illustrator, Photoshop, InDesign, and CorelDraw. You can use these tools to open, edit, save, and export your AI files, and you can convert your AI files to other formats, such as PDF or JPG, if needed.
- Scalability and flexibility: AI files are scalable and flexible, which means you can resize them without losing quality or clarity. You can also rotate, flip, skew, or distort them as you wish, and adjust the resolution and dimensions of your invitations to fit your printing or sharing needs.

These benefits make AI format a great choice for creating stunning and professional-looking wedding invitations.
How to Find and Download Wedding Invitation Templates in AI Format

Finding and downloading wedding invitation templates in AI format is easy and convenient. Here are some tips on how to do it:

- Use reliable websites that offer free or affordable templates: There are many websites that offer free or affordable wedding invitation templates in AI format. Some of the most popular ones are Freepik, DYP.im, Fotor, Vecteezy, and Template.net. These websites have a wide range of templates for different themes, styles, and colors. You can browse through their collections and choose the ones that suit your preferences.
- Search by theme, style, or color: Most websites have filters or categories that help you narrow down your search by theme, style, or color. For example, you can search for floral, vintage, rustic, modern, elegant, or minimalist wedding invitation templates. You can also search for templates by color scheme, such as pink, blue, green, gold, or black.
- Check the license and terms of use: Before downloading any template from any website, make sure you check the license and terms of use. Some templates are free for personal use only, while others require attribution or a premium subscription. Some templates may also have restrictions on how you can edit or print them. Read the license and terms of use carefully and follow them accordingly.
- Download the files in ZIP or RAR format: Most websites offer their templates in ZIP or RAR format. These are compressed files that contain multiple files inside them. To download them, you need to click on the download button and save the file to your computer. To open them, you need to extract them using a tool such as WinZip, WinRAR, or 7-Zip (see the short extraction sketch below). You can then access the AI files and other files inside the ZIP or RAR folder.

By following these tips, you can find and download wedding invitation templates in AI format easily and quickly.
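If you end up downloading a whole batch of templates, extracting them by hand gets tedious. The snippet below is a small, optional sketch that unpacks every ZIP archive in a download folder and lists the .ai files it finds. The folder names are placeholders, and RAR archives are left alone because Python's standard library only handles ZIP; a tool like 7-Zip or a third-party package would be needed for those.

```python
# Batch-extract downloaded template archives (sketch; paths are placeholders).
# Only ZIP files are handled here -- RAR archives need an external tool.
import zipfile
from pathlib import Path

DOWNLOADS = Path.home() / "Downloads" / "wedding-templates"   # hypothetical folder
OUTPUT = DOWNLOADS / "extracted"

def extract_all() -> None:
    if not DOWNLOADS.is_dir():
        raise SystemExit(f"Download folder not found: {DOWNLOADS}")
    OUTPUT.mkdir(parents=True, exist_ok=True)
    for archive in DOWNLOADS.glob("*.zip"):
        target = OUTPUT / archive.stem          # one subfolder per archive
        with zipfile.ZipFile(archive) as zf:
            zf.extractall(target)
        # Report the Illustrator files so you know what to open next.
        ai_files = sorted(target.rglob("*.ai"))
        print(f"{archive.name}: {len(ai_files)} .ai file(s)")
        for ai in ai_files:
            print("   ", ai.relative_to(OUTPUT))

if __name__ == "__main__":
    extract_all()
```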
How to Customize and Print Your Wedding Invitations in AI Format

Once you have downloaded your wedding invitation templates in AI format, you can customize and print them yourself. Here are the steps:

1. Open the files in Adobe Illustrator or another compatible tool: To edit your AI files, open them in Adobe Illustrator or another compatible tool, such as Photoshop, InDesign, or CorelDraw. You can double-click on the AI file or drag and drop it into the tool, or use the File > Open menu to locate and open the file.
2. Change the text, fonts, colors, images, and layout as desired: To change the text of your invitations, select the text tool and click on the text you want to edit. You can then type your own text or copy and paste it from another source, and change the fonts, colors, sizes, and styles using the options on the toolbar or the properties panel. To change the images, select the image tool and click on the image you want to replace, then browse your computer or online sources for a new image and insert it into your invitation; you can also resize, crop, rotate, or adjust the brightness and contrast of your image. To change the layout, select the selection tool and click on the elements you want to move, resize, or delete, and use the align, distribute, group, or arrange options to organize your elements.
3. Save the files as PDF or JPG format: To save your invitations for printing or sharing online, export them as PDF or JPG format. You can use the File > Export menu to choose the format and location of your files, and adjust the quality and resolution using the options on the export dialog box.
4. Print the invitations on high-quality paper or send them online: To print your invitations, use a printer that supports high-quality printing and paper that matches your design and size. You can use the File > Print menu to choose your printer and paper settings, and preview your invitations before printing them. To send your invitations online, use an email service or a social media platform that supports PDF or JPG attachments. You can attach your files to your email or post and add a personal message to your guests.

By following these steps, you can customize and print your wedding invitations in AI format yourself.
Examples of Wedding Invitation Templates in AI Format

To give you some inspiration and ideas for your wedding invitations, here are some examples of wedding invitation templates in AI format from different websites:

- Freepik: Freepik is a website that offers free vector graphics, illustrations, icons, photos, and templates for various purposes. It has a large collection of wedding invitation templates in AI format that you can download and edit for free. Some of the themes include floral, geometric, watercolor, vintage, rustic, and modern. You can also find matching templates for save-the-date cards, thank-you cards, menus, programs, and more.
- DYP.im: DYP.im is a website that offers free and premium design templates for various occasions. It has a variety of wedding invitation templates in AI format that you can download and edit for free or for a small fee. Some of the styles include elegant, minimalist, classic, bohemian, and whimsical. You can also find templates for other wedding-related items, such as labels, tags, stickers, and envelopes.
- Fotor: Fotor is a website that offers free and premium online photo editing and design tools. It has a section for wedding invitation templates in AI format that you can download and edit for free or for a subscription fee. Some of the categories include simple, floral, vintage, modern, and elegant. You can also use Fotor's online editor to customize your templates with your own photos, text, stickers, filters, and effects.

To compare the features, prices, and ratings of these websites, you can use the table below:

Comparison of wedding invitation template websites

| Website | Features | Prices | Ratings |
| --- | --- | --- | --- |
| Freepik | Large collection of free and premium templates; various themes, styles, and colors; matching templates for other wedding items; editable in Adobe Illustrator or other tools | Free for personal use with attribution; premium subscription for $9.99/month or $89.99/year with unlimited downloads and no attribution required | 4.5/5 stars on Trustpilot; 8.8/10 on Sitejabber |
| DYP.im | Variety of free and premium templates; various styles and designs; templates for other wedding-related items; editable in Adobe Illustrator or other tools | Free for personal use with attribution; premium templates for $2-$5 each with unlimited downloads and no attribution required | 4.3/5 stars on Trustpilot; 8.6/10 on Sitejabber |
| Fotor | Section of free and premium templates; various categories and designs; online editor to customize your templates; editable in Adobe Illustrator or other tools | Free for personal use with watermark; premium subscription for $8.99/month or $39.99/year with unlimited downloads and no watermark | 4.6/5 stars on Trustpilot; 9/10 on Sitejabber |
Conclusion

Downloading wedding invitation templates in AI format can help you create beautiful and memorable invitations that reflect your personality and style. You can save time, save money, and unleash your creativity by using AI format for your invitations: there are hundreds of free or affordable templates online to suit your theme, style, or color scheme, and you can customize and print them yourself using Adobe Illustrator or another compatible tool.

Downloading wedding invitation templates in AI format is easy and fun. Why not try it out for yourself? You might be surprised by how much you can do with AI format.

FAQs

What is AI format?

AI format is a vector-based graphic design format that allows you to create high-quality graphics and illustrations with ease. AI stands for Adobe Illustrator, which is the most popular tool for creating and editing AI files.

Why should I use AI format for wedding invitations?

You should use AI format for wedding invitations because it has many benefits, such as:

- High-quality graphics and illustrations that are sharp and clear.
- Easy customization and editing that let you change the text, fonts, colors, images, and layout as you wish.
- Compatibility with various design tools that let you open, edit, save, and export your AI files.
- Scalability and flexibility that let you resize your invitations without losing quality or clarity.

How can I edit AI files?

You can edit AI files using Adobe Illustrator or another compatible tool, such as Photoshop, InDesign, or CorelDraw. You can use the tools and options on the toolbar or the properties panel to change the text, fonts, colors, images, and layout of your invitations. You can also add your own graphics, logos, or photos to make your invitations more personal and unique.

Where can I find free or cheap AI templates?

You can find free or cheap AI templates on various websites that offer free or affordable vector graphics, illustrations, icons, photos, and templates for various purposes. Some of the most popular ones are Freepik, DYP.im, Fotor, Vecteezy, and Template.net. You can browse through their collections and choose the ones that suit your preferences.

How can I print or share my invitations online?

You can print or share your invitations online by exporting them as PDF or JPG format. You can use the File > Export menu to choose the format and location of your files, and adjust the quality and resolution using the options on the export dialog box. To print your invitations, use a printer that supports high-quality printing and paper that matches your design and size; the File > Print menu lets you choose your printer and paper settings and preview the invitations before printing. To send your invitations online, use an email service or a social media platform that supports PDF or JPG attachments. You can attach your files to your email or post and add a personal message to your guests.

I hope you enjoyed reading this article and learned something new. If you have any questions or feedback, please let me know in the comments below. Thank you for your time and attention.
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Free Download Candy Crush Saga for Windows 10 (64 bit) - The Most Popular Puzzle Game.md b/spaces/1phancelerku/anime-remove-background/Free Download Candy Crush Saga for Windows 10 (64 bit) - The Most Popular Puzzle Game.md
deleted file mode 100644
index 36b8313b3e1e617ba38e34caf656be768ba1e70e..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Free Download Candy Crush Saga for Windows 10 (64 bit) - The Most Popular Puzzle Game.md
+++ /dev/null
@@ -1,115 +0,0 @@
Candy Crush Saga: How to Download and Play on Windows 10 64 Bit

If you are looking for a fun and addictive puzzle game that will keep you entertained for hours, you might want to try Candy Crush Saga. This popular game has millions of fans around the world who enjoy matching colorful candies and clearing various challenges. But did you know that you can also play Candy Crush Saga on your Windows 10 64 bit PC? In this article, we will show you how to download and play Candy Crush Saga on your computer, and give you some tips and tricks to master the game.

What is Candy Crush Saga?

Candy Crush Saga is a free-to-play tile-matching video game developed by King, a leading company in casual gaming. It was released in 2012 for Facebook, and later for iOS, Android, Windows Phone, and Windows 10. It is a variation of their browser game Candy Crush, which was inspired by the classic game Bejeweled.
In Candy Crush Saga, you have to match three or more candies of the same color on a game board to make them disappear, and create special candies that have extra effects. You have to complete different objectives in each level, such as reaching a target score, clearing jelly or chocolate from the board, collecting ingredients, or making a certain number of matches. You have a limited number of moves or time to complete each level, so you have to plan your moves carefully and use boosters wisely.

Candy Crush Saga has thousands of levels to play, each with different layouts, obstacles, and goals. The game also features various game modes, such as Moves, Time, Jelly, Ingredients, Order, Mixed Mode, Rainbow Rapids, Soda, Jam, Honey, Frosting, Chocolate Box, Dunk the Cookie, and more. Each mode has its own rules and challenges that require different strategies.

Candy Crush Saga is not only a simple and fun game, but also a rewarding one. You can earn points, stars, gold bars, boosters, trophies, badges, and other prizes as you play. You can also connect with your Facebook friends or other players online and compare your scores, send or receive lives or boosters, or join teams and events.
Why play Candy Crush Saga on Windows 10 64 Bit?

While Candy Crush Saga is mainly designed for mobile devices, playing it on your Windows 10 64 bit PC has some advantages. Here are some of them:

- You can enjoy a bigger screen and better graphics. Playing on a PC allows you to see more details and colors of the candies and the backgrounds. You can also adjust the resolution and the quality settings according to your preferences.
- You can use a mouse or a keyboard instead of a touchscreen. Some players find it easier and more comfortable to use a mouse or a keyboard to make matches and activate boosters. You can also use shortcuts or hotkeys to access some functions quickly.
- You can save battery life and storage space on your mobile device. Playing Candy Crush Saga on your PC means you don't have to worry about draining your battery or filling up your memory with the game data. You can also avoid interruptions from phone calls or notifications while playing.
- You can sync your progress across devices. If you log in with your Facebook account or your King account, you can sync your progress and access all your data on any device. This means you can switch between playing on your PC or your mobile device anytime without losing anything.
How to download Candy Crush Saga for Windows 10 64 Bit?

Downloading Candy Crush Saga for your Windows 10 64 bit PC is very easy and fast. You just need to follow these steps:

1. Open the Microsoft Store app on your PC. You can find it on your Start menu or taskbar, or you can search for it using Cortana or the search box.
2. In the Microsoft Store app, type "Candy Crush Saga" in the search bar and press Enter. You will see the game icon and some information about it.
3. Click on the "Get" button to start downloading the game. You may need to sign in with your Microsoft account if you haven't already.
4. Wait for the download and installation to finish. You will see a notification when it is done.
5. Click on the "Play" button to launch the game. You can also find it on your Start menu or taskbar, or you can pin it to your desktop for easy access.

Congratulations, you have successfully downloaded and installed Candy Crush Saga on your Windows 10 64 bit PC. Now you can enjoy playing it anytime you want.
How to play Candy Crush Saga on Windows 10 64 Bit?

Playing Candy Crush Saga on your Windows 10 64 bit PC is very similar to playing it on your mobile device. However, there are some differences in the gameplay and the controls that you need to know. Here are some of them:
The basics of the gameplay and the controls

The gameplay of Candy Crush Saga is based on matching three or more candies of the same color on a game board to make them disappear and create special candies that have extra effects. You have to complete different objectives in each level, such as reaching a target score, clearing jelly or chocolate from the board, collecting ingredients, or making a certain number of matches. You have a limited number of moves or time to complete each level, so you have to plan your moves carefully and use boosters wisely.

The controls of Candy Crush Saga on your Windows 10 64 bit PC are very simple and intuitive. You can use your mouse or your keyboard to make matches and activate boosters. Here are some of the basic controls:

- To make a match, click and drag a candy to swap it with an adjacent one. You can also use the arrow keys on your keyboard to move a candy in any direction.
- To activate a special candy, click on it or press the spacebar on your keyboard. You can also click and drag a special candy to swap it with another one and create a powerful combination.
- To use a booster, click on it at the bottom of the screen or press the corresponding number key on your keyboard. You can also drag a booster onto the game board to apply it to a specific candy or area.
- To pause the game, click on the menu button at the top left corner of the screen or press the Esc key on your keyboard. You can also access other options such as settings, help, sound, music, and more from this menu.
The tips and tricks to master the game and beat the levels
-
Candy Crush Saga is not only a fun game, but also a challenging one. Some levels can be very hard to beat, especially if you don't know what to do or how to do it. That's why we have prepared some tips and tricks for you that will help you master the game and beat any level. Here are some of them:
-
-
Pay attention to the objective of each level and plan your moves accordingly. Don't just match candies randomly, but try to create matches that will help you achieve your goal.
-
Look for opportunities to create special candies and combinations. Special candies, such as striped candies, wrapped candies, color bombs, jelly fish, and coconut wheels, have extra effects when activated. Combinations occur when you activate two special candies together, creating an even more powerful effect.
-
Use boosters wisely and sparingly. Boosters are items that can help you in various ways, such as extra moves, extra time, extra lives, lollipop hammers, free switches, etc. However, they are limited in number and some of them cost real money, so don't waste them unnecessarily.
-
Learn from your mistakes and try again. If you fail a level, don't give up or get frustrated. Instead, analyze what went wrong and what you can do better next time. You can also watch videos of other players who have beaten the level and learn from their strategies.
-
Have fun and enjoy the game. Candy Crush Saga is meant to be a relaxing and entertaining game, not a stressful or frustrating one. Don't let the difficulty or the pressure get to you, but rather focus on the positive aspects of the game, such as the colorful graphics, the catchy music, the cute characters, and the rewarding prizes.
-
-
Conclusion
-
Candy Crush Saga is one of the most popular and addictive puzzle games in the world. It has thousands of levels, each with its own objectives, modes, and challenges, along with features and rewards that keep the game fresh. You can play it on your mobile device, but you can also play it on your Windows 10 64-bit PC, where you get a bigger screen, sharper graphics, and easier mouse-and-keyboard controls. To play on a PC, download and install the game from the Microsoft Store, then log in with your Facebook or King account to sync your progress. To master the game and beat the levels, pay attention to each level's objective, create special candies and combinations, use boosters wisely, learn from your mistakes, and above all have fun.
-
If you are ready to join the millions of fans who love Candy Crush Saga, download it now and start playing. You will be amazed by how much fun you will have.
-
FAQs
-
Here are some of the frequently asked questions about Candy Crush Saga and their answers:
-
-
How do I get more lives in Candy Crush Saga?
-
There are several ways to get more lives in Candy Crush Saga. You can wait for them to refill over time (one life every 30 minutes), ask your friends to send you some, buy them with gold bars, or use boosters that give you extra lives.
-
How do I get more gold bars in Candy Crush Saga?
-
Gold bars are the premium currency in Candy Crush Saga. You can use them to buy boosters, extra moves, extra time, extra lives, or unlock new episodes. You can get gold bars by completing certain achievements, participating in events or challenges, watching ads, or buying them with real money.
-
How do I unlock new episodes in Candy Crush Saga?
-
To unlock new episodes in Candy Crush Saga, you need to complete all the levels in the previous episode. Sometimes, you may also need to ask your friends for help or pay with gold bars to unlock them.
-
How do I connect my Facebook or King account to Candy Crush Saga?
-
To connect your Facebook or King account to Candy Crush Saga, you need to click on the "Connect" button on the main screen or the settings menu. You will be asked to log in with your email and password or create a new account if you don't have one. By connecting your account, you can sync your progress across devices, access all your data, and play with your friends online.
-
How do I contact customer support for Candy Crush Saga?
-
If you have any issues or questions about Candy Crush Saga, you can contact customer support by clicking on the "Help" button on the settings menu. You will be directed to a page where you can browse through various topics and FAQs, or submit a ticket with your query.
-
-
\ No newline at end of file
diff --git a/spaces/1toTree/lora_test/ppdiffusers/modeling_paddle_pytorch_utils.py b/spaces/1toTree/lora_test/ppdiffusers/modeling_paddle_pytorch_utils.py
deleted file mode 100644
index afbbccf57bc08a31c4f09a03bf6b343eb89577d8..0000000000000000000000000000000000000000
--- a/spaces/1toTree/lora_test/ppdiffusers/modeling_paddle_pytorch_utils.py
+++ /dev/null
@@ -1,106 +0,0 @@
-# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
-# Copyright 2022 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-""" PyTorch - Paddle general utilities."""
-import re
-
-from .utils import logging
-
-logger = logging.get_logger(__name__)
-
-
-def rename_key(key):
- regex = r"\w+[.]\d+"
- pats = re.findall(regex, key)
- for pat in pats:
- key = key.replace(pat, "_".join(pat.split(".")))
- return key
-
-
-#####################
-# PyTorch => Paddle #
-#####################
-
-
-def rename_key_and_reshape_tensor(pt_tuple_key, pt_tensor, random_paddle_state_dict):
- """Rename PT weight names to corresponding Paddle weight names and reshape tensor if necessary"""
-
- # conv norm or layer norm
- renamed_pt_tuple_key = pt_tuple_key[:-1] + ("bias",)
- if (
- any("norm" in str_ for str_ in pt_tuple_key)
- and (pt_tuple_key[-1] in ["bias", "beta"])
- and (pt_tuple_key[:-1] + ("bias",) in random_paddle_state_dict)
- ):
- renamed_pt_tuple_key = pt_tuple_key[:-1] + ("bias",)
- return renamed_pt_tuple_key, pt_tensor
- elif pt_tuple_key[-1] in ["weight", "gamma"] and pt_tuple_key[:-1] + ("bias",) in random_paddle_state_dict:
- renamed_pt_tuple_key = pt_tuple_key[:-1] + ("bias",)
- return renamed_pt_tuple_key, pt_tensor
-
- # embedding
- if pt_tuple_key[-1] == "weight" and pt_tuple_key[:-1] + ("weight",) in random_paddle_state_dict:
- pt_tuple_key = pt_tuple_key[:-1] + ("weight",)
- return renamed_pt_tuple_key, pt_tensor
-
- # conv layer
- renamed_pt_tuple_key = pt_tuple_key[:-1] + ("weight",)
- if pt_tuple_key[-1] == "weight" and pt_tensor.ndim == 4:
- return renamed_pt_tuple_key, pt_tensor
-
- # linear layer
- renamed_pt_tuple_key = pt_tuple_key[:-1] + ("weight",)
- if pt_tuple_key[-1] == "weight":
- pt_tensor = pt_tensor.t()
- return renamed_pt_tuple_key, pt_tensor
-
- # old PyTorch layer norm weight
- renamed_pt_tuple_key = pt_tuple_key[:-1] + ("weight",)
- if pt_tuple_key[-1] == "gamma":
- return renamed_pt_tuple_key, pt_tensor
-
- # old PyTorch layer norm bias
- renamed_pt_tuple_key = pt_tuple_key[:-1] + ("bias",)
- if pt_tuple_key[-1] == "beta":
- return renamed_pt_tuple_key, pt_tensor
-
- return pt_tuple_key, pt_tensor
-
-
-def convert_pytorch_state_dict_to_paddle(pt_state_dict, paddle_model):
- # Step 1: Convert pytorch tensor to numpy
- pt_state_dict = {k: v.numpy() for k, v in pt_state_dict.items()}
-
- random_paddle_state_dict = paddle_model.state_dict
- paddle_state_dict = {}
-
- # Need to change some parameters name to match Paddle names
- for pt_key, pt_tensor in pt_state_dict.items():
- renamed_pt_key = rename_key(pt_key)
- pt_tuple_key = tuple(renamed_pt_key.split("."))
-
- # Correctly rename weight parameters
- paddle_key, paddle_tensor = rename_key_and_reshape_tensor(pt_tuple_key, pt_tensor, random_paddle_state_dict)
-
- if paddle_key in random_paddle_state_dict:
- if list(paddle_tensor.shape) != list(random_paddle_state_dict[paddle_key].shape):
- raise ValueError(
- f"Paddle checkpoint seems to be incorrect. Weight {pt_key} was expected to be of shape "
- f"{random_paddle_state_dict[paddle_key].shape}, but is {paddle_tensor.shape}."
- )
-
- # also add unexpected weight so that warning is thrown
- paddle_state_dict[paddle_key] = paddle_tensor.numpy()
-
- return paddle_state_dict
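The deleted modeling_paddle_pytorch_utils.py above renames PyTorch parameter keys (flattening numeric sub-module indices) and transposes linear weights so they line up with a Paddle model's state dict. As a small, hedged illustration of the key-renaming step, the sketch below reproduces the rename_key logic on a made-up parameter name; the example key is hypothetical and not taken from any specific checkpoint.

# Illustrative sketch only: same renaming logic as the deleted helper above,
# applied to a hypothetical PyTorch parameter name.
import re

def rename_key(key):
    # "blocks.0" -> "blocks_0", "attentions.1" -> "attentions_1", etc.
    for pat in re.findall(r"\w+[.]\d+", key):
        key = key.replace(pat, "_".join(pat.split(".")))
    return key

print(rename_key("down_blocks.0.attentions.1.proj.weight"))
# prints: down_blocks_0.attentions_1.proj.weight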
diff --git a/spaces/1toTree/lora_test/ppdiffusers/schedulers/scheduling_euler_discrete.py b/spaces/1toTree/lora_test/ppdiffusers/schedulers/scheduling_euler_discrete.py
deleted file mode 100644
index d76ca843e0c9d76b5309317f59075f1d31d7f6c7..0000000000000000000000000000000000000000
--- a/spaces/1toTree/lora_test/ppdiffusers/schedulers/scheduling_euler_discrete.py
+++ /dev/null
@@ -1,244 +0,0 @@
-# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
-# Copyright 2022 Katherine Crowson and The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from dataclasses import dataclass
-from typing import List, Optional, Tuple, Union
-
-import numpy as np
-import paddle
-
-from ..configuration_utils import ConfigMixin, register_to_config
-from ..utils import _COMPATIBLE_STABLE_DIFFUSION_SCHEDULERS, BaseOutput, logging
-from .scheduling_utils import SchedulerMixin
-
-logger = logging.get_logger(__name__) # pylint: disable=invalid-name
-
-
-@dataclass
-# Copied from diffusers.schedulers.scheduling_ddpm.DDPMSchedulerOutput with DDPM->EulerDiscrete
-class EulerDiscreteSchedulerOutput(BaseOutput):
- """
- Output class for the scheduler's step function output.
-
- Args:
- prev_sample (`paddle.Tensor` of shape `(batch_size, num_channels, height, width)` for images):
- Computed sample (x_{t-1}) of previous timestep. `prev_sample` should be used as next model input in the
- denoising loop.
- pred_original_sample (`paddle.Tensor` of shape `(batch_size, num_channels, height, width)` for images):
- The predicted denoised sample (x_{0}) based on the model output from the current timestep.
- `pred_original_sample` can be used to preview progress or for guidance.
- """
-
- prev_sample: paddle.Tensor
- pred_original_sample: Optional[paddle.Tensor] = None
-
-
-class EulerDiscreteScheduler(SchedulerMixin, ConfigMixin):
- """
- Euler scheduler (Algorithm 2) from Karras et al. (2022) https://arxiv.org/abs/2206.00364. . Based on the original
- k-diffusion implementation by Katherine Crowson:
- https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L51
-
- [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__`
- function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`.
- [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and
- [`~SchedulerMixin.from_pretrained`] functions.
-
- Args:
- num_train_timesteps (`int`): number of diffusion steps used to train the model.
- beta_start (`float`): the starting `beta` value of inference.
- beta_end (`float`): the final `beta` value.
- beta_schedule (`str`):
- the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
- `linear` or `scaled_linear`.
- trained_betas (`np.ndarray`, optional):
- option to pass an array of betas directly to the constructor to bypass `beta_start`, `beta_end` etc.
- prediction_type (`str`, default `epsilon`, optional):
- prediction type of the scheduler function, one of `epsilon` (predicting the noise of the diffusion
- process), `sample` (directly predicting the noisy sample`) or `v_prediction` (see section 2.4
- https://imagen.research.google/video/paper.pdf)
- """
-
- _compatibles = _COMPATIBLE_STABLE_DIFFUSION_SCHEDULERS.copy()
- order = 1
-
- @register_to_config
- def __init__(
- self,
- num_train_timesteps: int = 1000,
- beta_start: float = 0.0001,
- beta_end: float = 0.02,
- beta_schedule: str = "linear",
- trained_betas: Optional[Union[np.ndarray, List[float]]] = None,
- prediction_type: str = "epsilon",
- ):
- if trained_betas is not None:
- self.betas = paddle.to_tensor(trained_betas, dtype="float32")
- elif beta_schedule == "linear":
- self.betas = paddle.linspace(beta_start, beta_end, num_train_timesteps, dtype="float32")
- elif beta_schedule == "scaled_linear":
- # this schedule is very specific to the latent diffusion model.
- self.betas = paddle.linspace(beta_start**0.5, beta_end**0.5, num_train_timesteps, dtype="float32") ** 2
- else:
- raise NotImplementedError(f"{beta_schedule} does is not implemented for {self.__class__}")
-
- self.alphas = 1.0 - self.betas
- self.alphas_cumprod = paddle.cumprod(self.alphas, 0)
-
- sigmas = np.array(((1 - self.alphas_cumprod) / self.alphas_cumprod) ** 0.5)
- sigmas = np.concatenate([sigmas[::-1], [0.0]]).astype(np.float32)
- self.sigmas = paddle.to_tensor(sigmas)
-
- # standard deviation of the initial noise distribution
- self.init_noise_sigma = self.sigmas.max()
-
- # setable values
- self.num_inference_steps = None
- timesteps = np.linspace(0, num_train_timesteps - 1, num_train_timesteps, dtype=float)[::-1].copy()
- self.timesteps = paddle.to_tensor(timesteps, dtype="float32")
- self.is_scale_input_called = False
-
- def scale_model_input(self, sample: paddle.Tensor, timestep: Union[float, paddle.Tensor]) -> paddle.Tensor:
- """
- Scales the denoising model input by `(sigma**2 + 1) ** 0.5` to match the Euler algorithm.
-
- Args:
- sample (`paddle.Tensor`): input sample
- timestep (`float` or `paddle.Tensor`): the current timestep in the diffusion chain
-
- Returns:
- `paddle.Tensor`: scaled input sample
- """
- step_index = (self.timesteps == timestep).nonzero().item()
- sigma = self.sigmas[step_index]
- sample = sample / ((sigma**2 + 1) ** 0.5)
- self.is_scale_input_called = True
- return sample
-
- def set_timesteps(self, num_inference_steps: int):
- """
- Sets the timesteps used for the diffusion chain. Supporting function to be run before inference.
-
- Args:
- num_inference_steps (`int`):
- the number of diffusion steps used when generating samples with a pre-trained model.
- """
- self.num_inference_steps = num_inference_steps
-
- timesteps = np.linspace(0, self.config.num_train_timesteps - 1, num_inference_steps, dtype=float)[::-1].copy()
- sigmas = np.array(((1 - self.alphas_cumprod) / self.alphas_cumprod) ** 0.5)
- sigmas = np.interp(timesteps, np.arange(0, len(sigmas)), sigmas)
- sigmas = np.concatenate([sigmas, [0.0]]).astype(np.float32)
- self.sigmas = paddle.to_tensor(sigmas)
- self.timesteps = paddle.to_tensor(timesteps, dtype="float32")
-
- def step(
- self,
- model_output: paddle.Tensor,
- timestep: Union[float, paddle.Tensor],
- sample: paddle.Tensor,
- s_churn: float = 0.0,
- s_tmin: float = 0.0,
- s_tmax: float = float("inf"),
- s_noise: float = 1.0,
- generator: Optional[Union[paddle.Generator, List[paddle.Generator]]] = None,
- return_dict: bool = True,
- ) -> Union[EulerDiscreteSchedulerOutput, Tuple]:
- """
- Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion
- process from the learned model outputs (most often the predicted noise).
-
- Args:
- model_output (`paddle.Tensor`): direct output from learned diffusion model.
- timestep (`float`): current timestep in the diffusion chain.
- sample (`paddle.Tensor`):
- current instance of sample being created by diffusion process.
- s_churn (`float`)
- s_tmin (`float`)
- s_tmax (`float`)
- s_noise (`float`)
- generator (`paddle.Generator`, optional): Random number generator.
- return_dict (`bool`): option for returning tuple rather than EulerDiscreteSchedulerOutput class
-
- Returns:
- [`~schedulers.scheduling_utils.EulerDiscreteSchedulerOutput`] or `tuple`:
- [`~schedulers.scheduling_utils.EulerDiscreteSchedulerOutput`] if `return_dict` is True, otherwise a
- `tuple`. When returning a tuple, the first element is the sample tensor.
-
- """
-
- if not self.is_scale_input_called:
- logger.warning(
- "The `scale_model_input` function should be called before `step` to ensure correct denoising. "
- "See `StableDiffusionPipeline` for a usage example."
- )
-
- step_index = (self.timesteps == timestep).nonzero().item()
- sigma = self.sigmas[step_index]
-
- gamma = min(s_churn / (len(self.sigmas) - 1), 2**0.5 - 1) if s_tmin <= sigma <= s_tmax else 0.0
-
- noise = paddle.randn(model_output.shape, dtype=model_output.dtype, generator=generator)
-
- eps = noise * s_noise
- sigma_hat = sigma * (gamma + 1)
-
- if gamma > 0:
- sample = sample + eps * (sigma_hat**2 - sigma**2) ** 0.5
-
- # 1. compute predicted original sample (x_0) from sigma-scaled predicted noise
- if self.config.prediction_type == "epsilon":
- pred_original_sample = sample - sigma_hat * model_output
- elif self.config.prediction_type == "v_prediction":
- # * c_out + input * c_skip
- pred_original_sample = model_output * (-sigma / (sigma**2 + 1) ** 0.5) + (sample / (sigma**2 + 1))
- else:
- raise ValueError(
- f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, or `v_prediction`"
- )
-
- # 2. Convert to an ODE derivative
- derivative = (sample - pred_original_sample) / sigma_hat
-
- dt = self.sigmas[step_index + 1] - sigma_hat
-
- prev_sample = sample + derivative * dt
-
- if not return_dict:
- return (prev_sample,)
-
- return EulerDiscreteSchedulerOutput(prev_sample=prev_sample, pred_original_sample=pred_original_sample)
-
- def add_noise(
- self,
- original_samples: paddle.Tensor,
- noise: paddle.Tensor,
- timesteps: paddle.Tensor,
- ) -> paddle.Tensor:
- # Make sure sigmas and timesteps have the same dtype as original_samples
- self.sigmas = self.sigmas.cast(original_samples.dtype)
-
- schedule_timesteps = self.timesteps
- step_indices = [(schedule_timesteps == t).nonzero().item() for t in timesteps]
-
- sigma = self.sigmas[step_indices].flatten()
- while len(sigma.shape) < len(original_samples.shape):
- sigma = sigma.unsqueeze(-1)
-
- noisy_samples = original_samples + noise * sigma
- return noisy_samples
-
- def __len__(self):
- return self.config.num_train_timesteps
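For context on how the removed EulerDiscreteScheduler is meant to be driven, here is a minimal, hedged sketch of the sampling loop its docstrings describe: set_timesteps once, scale_model_input before every model call, then step to move along the sigma schedule. The unet below is a hypothetical stand-in for any Paddle model that predicts noise; this is a sketch of the scheduler contract, not the full Stable Diffusion pipeline.

# Minimal sketch, assuming `scheduler` is an EulerDiscreteScheduler instance
# and `unet(latents, t)` is a hypothetical Paddle model returning predicted noise.
import paddle

def euler_sample(unet, scheduler, shape=(1, 4, 64, 64), num_inference_steps=30):
    scheduler.set_timesteps(num_inference_steps)
    # Start from Gaussian noise scaled by the largest sigma, as the scheduler expects.
    latents = paddle.randn(shape) * scheduler.init_noise_sigma
    for t in scheduler.timesteps:
        model_input = scheduler.scale_model_input(latents, t)  # divide by (sigma**2 + 1) ** 0.5
        noise_pred = unet(model_input, t)
        latents = scheduler.step(noise_pred, t, latents).prev_sample
    return latents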
diff --git a/spaces/A00001/bingothoo/src/components/ui/voice/index.tsx b/spaces/A00001/bingothoo/src/components/ui/voice/index.tsx
deleted file mode 100644
index 4adcb632226bfced8b97092782811edf08b56569..0000000000000000000000000000000000000000
--- a/spaces/A00001/bingothoo/src/components/ui/voice/index.tsx
+++ /dev/null
@@ -1,28 +0,0 @@
-import './index.scss'
-
-export interface VoiceProps extends CSSPropertyRule {
- num?: number;
- duration?: number;
-}
-export default function Voice({ duration = 400, num = 7, ...others }) {
- return (
-
- )
-}
diff --git a/spaces/AI-DHD/Youtube-Whisperer/README.md b/spaces/AI-DHD/Youtube-Whisperer/README.md
deleted file mode 100644
index f30d4256155c480f0599698379f798a3365e5bc1..0000000000000000000000000000000000000000
--- a/spaces/AI-DHD/Youtube-Whisperer/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Youtube Whisperer
-emoji: ⚡
-colorFrom: purple
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.3.1
-app_file: app.py
-pinned: false
-duplicated_from: jeffistyping/Youtube-Whisperer
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/egs/datasets/audio/libritts/pre_align.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/egs/datasets/audio/libritts/pre_align.py
deleted file mode 100644
index 8f04d01361430a4ad6b02421ac4e20d797f31dc8..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/egs/datasets/audio/libritts/pre_align.py
+++ /dev/null
@@ -1,27 +0,0 @@
-import os
-
-from data_gen.tts.base_preprocess import BasePreprocessor
-import glob
-
-
-class LibrittsPreAlign(BasePreprocessor):
- def meta_data(self):
- wav_fns = sorted(glob.glob(f'{self.raw_data_dir}/*/*/*.wav'))
- for wav_fn in wav_fns:
- item_name = os.path.basename(wav_fn)[:-4]
- txt_fn = f'{wav_fn[:-4]}.normalized.txt'
- with open(txt_fn, 'r') as f:
- txt = f.readlines()
- f.close()
- spk = item_name.split("_")[0]
- # Example:
- #
- # 'item_name': '103_1241_000000_000001'
- # 'wav_fn': 'LibriTTS/train-clean-100/103/1241/103_1241_000000_000001.wav'
- # 'txt': 'matthew Cuthbert is surprised'
- # 'spk_name': '103'
- yield {'item_name': item_name, 'wav_fn': wav_fn, 'txt': txt[0], 'spk_name': spk}
-
-
-if __name__ == "__main__":
- LibrittsPreAlign().process()
diff --git a/spaces/AIGC-Audio/Make_An_Audio/ldm/modules/encoders/CLAP/audio.py b/spaces/AIGC-Audio/Make_An_Audio/ldm/modules/encoders/CLAP/audio.py
deleted file mode 100644
index 0980d729dd3b579fee0380d0b9d7055e6843ba12..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/Make_An_Audio/ldm/modules/encoders/CLAP/audio.py
+++ /dev/null
@@ -1,179 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from torchlibrosa.stft import Spectrogram, LogmelFilterBank
-
-def get_audio_encoder(name: str):
- if name == "Cnn14":
- return Cnn14
- else:
- raise Exception('The audio encoder name {} is incorrect or not supported'.format(name))
-
-
-class ConvBlock(nn.Module):
- def __init__(self, in_channels, out_channels):
-
- super(ConvBlock, self).__init__()
-
- self.conv1 = nn.Conv2d(in_channels=in_channels,
- out_channels=out_channels,
- kernel_size=(3, 3), stride=(1, 1),
- padding=(1, 1), bias=False)
-
- self.conv2 = nn.Conv2d(in_channels=out_channels,
- out_channels=out_channels,
- kernel_size=(3, 3), stride=(1, 1),
- padding=(1, 1), bias=False)
-
- self.bn1 = nn.BatchNorm2d(out_channels)
- self.bn2 = nn.BatchNorm2d(out_channels)
-
-
- def forward(self, input, pool_size=(2, 2), pool_type='avg'):
-
- x = input
- x = F.relu_(self.bn1(self.conv1(x)))
- x = F.relu_(self.bn2(self.conv2(x)))
- if pool_type == 'max':
- x = F.max_pool2d(x, kernel_size=pool_size)
- elif pool_type == 'avg':
- x = F.avg_pool2d(x, kernel_size=pool_size)
- elif pool_type == 'avg+max':
- x1 = F.avg_pool2d(x, kernel_size=pool_size)
- x2 = F.max_pool2d(x, kernel_size=pool_size)
- x = x1 + x2
- else:
- raise Exception('Incorrect argument!')
-
- return x
-
-
-class ConvBlock5x5(nn.Module):
- def __init__(self, in_channels, out_channels):
-
- super(ConvBlock5x5, self).__init__()
-
- self.conv1 = nn.Conv2d(in_channels=in_channels,
- out_channels=out_channels,
- kernel_size=(5, 5), stride=(1, 1),
- padding=(2, 2), bias=False)
-
- self.bn1 = nn.BatchNorm2d(out_channels)
-
-
- def forward(self, input, pool_size=(2, 2), pool_type='avg'):
-
- x = input
- x = F.relu_(self.bn1(self.conv1(x)))
- if pool_type == 'max':
- x = F.max_pool2d(x, kernel_size=pool_size)
- elif pool_type == 'avg':
- x = F.avg_pool2d(x, kernel_size=pool_size)
- elif pool_type == 'avg+max':
- x1 = F.avg_pool2d(x, kernel_size=pool_size)
- x2 = F.max_pool2d(x, kernel_size=pool_size)
- x = x1 + x2
- else:
- raise Exception('Incorrect argument!')
-
- return x
-
-
-class AttBlock(nn.Module):
- def __init__(self, n_in, n_out, activation='linear', temperature=1.):
- super(AttBlock, self).__init__()
-
- self.activation = activation
- self.temperature = temperature
- self.att = nn.Conv1d(in_channels=n_in, out_channels=n_out, kernel_size=1, stride=1, padding=0, bias=True)
- self.cla = nn.Conv1d(in_channels=n_in, out_channels=n_out, kernel_size=1, stride=1, padding=0, bias=True)
-
- self.bn_att = nn.BatchNorm1d(n_out)
-
- def forward(self, x):
- # x: (n_samples, n_in, n_time)
- norm_att = torch.softmax(torch.clamp(self.att(x), -10, 10), dim=-1)
- cla = self.nonlinear_transform(self.cla(x))
- x = torch.sum(norm_att * cla, dim=2)
- return x, norm_att, cla
-
- def nonlinear_transform(self, x):
- if self.activation == 'linear':
- return x
- elif self.activation == 'sigmoid':
- return torch.sigmoid(x)
-
-
-class Cnn14(nn.Module):
- def __init__(self, sample_rate, window_size, hop_size, mel_bins, fmin,
- fmax, classes_num, out_emb):
-
- super(Cnn14, self).__init__()
-
- window = 'hann'
- center = True
- pad_mode = 'reflect'
- ref = 1.0
- amin = 1e-10
- top_db = None
-
- # Spectrogram extractor
- self.spectrogram_extractor = Spectrogram(n_fft=window_size, hop_length=hop_size,
- win_length=window_size, window=window, center=center, pad_mode=pad_mode,
- freeze_parameters=True)
-
- # Logmel feature extractor
- self.logmel_extractor = LogmelFilterBank(sr=sample_rate, n_fft=window_size,
- n_mels=mel_bins, fmin=fmin, fmax=fmax, ref=ref, amin=amin, top_db=top_db,
- freeze_parameters=True)
-
- self.bn0 = nn.BatchNorm2d(64)
-
- self.conv_block1 = ConvBlock(in_channels=1, out_channels=64)
- self.conv_block2 = ConvBlock(in_channels=64, out_channels=128)
- self.conv_block3 = ConvBlock(in_channels=128, out_channels=256)
- self.conv_block4 = ConvBlock(in_channels=256, out_channels=512)
- self.conv_block5 = ConvBlock(in_channels=512, out_channels=1024)
- self.conv_block6 = ConvBlock(in_channels=1024, out_channels=2048)
-
- # out_emb is 2048 for best Cnn14
- self.fc1 = nn.Linear(2048, out_emb, bias=True)
- self.fc_audioset = nn.Linear(out_emb, classes_num, bias=True)
-
- def forward(self, input, mixup_lambda=None):
- """
- Input: (batch_size, data_length)
- """
-
- x = self.spectrogram_extractor(input) # (batch_size, 1, time_steps, freq_bins)
- x = self.logmel_extractor(x) # (batch_size, 1, time_steps, mel_bins)
-
- x = x.transpose(1, 3)
- x = self.bn0(x)
- x = x.transpose(1, 3)
-
- x = self.conv_block1(x, pool_size=(2, 2), pool_type='avg')
- x = F.dropout(x, p=0.2, training=self.training)
- x = self.conv_block2(x, pool_size=(2, 2), pool_type='avg')
- x = F.dropout(x, p=0.2, training=self.training)
- x = self.conv_block3(x, pool_size=(2, 2), pool_type='avg')
- x = F.dropout(x, p=0.2, training=self.training)
- x = self.conv_block4(x, pool_size=(2, 2), pool_type='avg')
- x = F.dropout(x, p=0.2, training=self.training)
- x = self.conv_block5(x, pool_size=(2, 2), pool_type='avg')
- x = F.dropout(x, p=0.2, training=self.training)
- x = self.conv_block6(x, pool_size=(1, 1), pool_type='avg')
- x = F.dropout(x, p=0.2, training=self.training)
- x = torch.mean(x, dim=3)
-
- (x1, _) = torch.max(x, dim=2)
- x2 = torch.mean(x, dim=2)
- x = x1 + x2
- x = F.dropout(x, p=0.5, training=self.training)
- x = F.relu_(self.fc1(x))
- embedding = F.dropout(x, p=0.5, training=self.training)
- clipwise_output = torch.sigmoid(self.fc_audioset(x))
-
- output_dict = {'clipwise_output': clipwise_output, 'embedding': embedding}
-
- return output_dict
\ No newline at end of file
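The Cnn14 encoder in the removed CLAP audio.py takes raw waveforms of shape (batch_size, data_length) and returns a clip-level embedding plus per-class probabilities. The hedged sketch below only checks those input and output shapes; the window, hop, and mel settings are typical PANNs/AudioSet values assumed here, not values read from this Space's configs.

# Shape-check sketch using the Cnn14 class defined above, with assumed
# hyperparameters (standard PANNs Cnn14 settings at 32 kHz).
import torch

encoder = Cnn14(sample_rate=32000, window_size=1024, hop_size=320, mel_bins=64,
                fmin=50, fmax=14000, classes_num=527, out_emb=2048)
encoder.eval()

waveform = torch.randn(2, 32000 * 10)  # two 10-second clips at 32 kHz
with torch.no_grad():
    out = encoder(waveform)
print(out["embedding"].shape)        # torch.Size([2, 2048])
print(out["clipwise_output"].shape)  # torch.Size([2, 527])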
diff --git a/spaces/AIGC-Audio/Make_An_Audio/ldm/modules/image_degradation/bsrgan.py b/spaces/AIGC-Audio/Make_An_Audio/ldm/modules/image_degradation/bsrgan.py
deleted file mode 100644
index 32ef56169978e550090261cddbcf5eb611a6173b..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/Make_An_Audio/ldm/modules/image_degradation/bsrgan.py
+++ /dev/null
@@ -1,730 +0,0 @@
-# -*- coding: utf-8 -*-
-"""
-# --------------------------------------------
-# Super-Resolution
-# --------------------------------------------
-#
-# Kai Zhang (cskaizhang@gmail.com)
-# https://github.com/cszn
-# From 2019/03--2021/08
-# --------------------------------------------
-"""
-
-import numpy as np
-import cv2
-import torch
-
-from functools import partial
-import random
-from scipy import ndimage
-import scipy
-import scipy.stats as ss
-from scipy.interpolate import interp2d
-from scipy.linalg import orth
-import albumentations
-
-import ldm.modules.image_degradation.utils_image as util
-
-
-def modcrop_np(img, sf):
- '''
- Args:
- img: numpy image, WxH or WxHxC
- sf: scale factor
- Return:
- cropped image
- '''
- w, h = img.shape[:2]
- im = np.copy(img)
- return im[:w - w % sf, :h - h % sf, ...]
-
-
-"""
-# --------------------------------------------
-# anisotropic Gaussian kernels
-# --------------------------------------------
-"""
-
-
-def analytic_kernel(k):
- """Calculate the X4 kernel from the X2 kernel (for proof see appendix in paper)"""
- k_size = k.shape[0]
- # Calculate the big kernels size
- big_k = np.zeros((3 * k_size - 2, 3 * k_size - 2))
- # Loop over the small kernel to fill the big one
- for r in range(k_size):
- for c in range(k_size):
- big_k[2 * r:2 * r + k_size, 2 * c:2 * c + k_size] += k[r, c] * k
- # Crop the edges of the big kernel to ignore very small values and increase run time of SR
- crop = k_size // 2
- cropped_big_k = big_k[crop:-crop, crop:-crop]
- # Normalize to 1
- return cropped_big_k / cropped_big_k.sum()
-
-
-def anisotropic_Gaussian(ksize=15, theta=np.pi, l1=6, l2=6):
- """ generate an anisotropic Gaussian kernel
- Args:
- ksize : e.g., 15, kernel size
- theta : [0, pi], rotation angle range
- l1 : [0.1,50], scaling of eigenvalues
- l2 : [0.1,l1], scaling of eigenvalues
- If l1 = l2, will get an isotropic Gaussian kernel.
- Returns:
- k : kernel
- """
-
- v = np.dot(np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]]), np.array([1., 0.]))
- V = np.array([[v[0], v[1]], [v[1], -v[0]]])
- D = np.array([[l1, 0], [0, l2]])
- Sigma = np.dot(np.dot(V, D), np.linalg.inv(V))
- k = gm_blur_kernel(mean=[0, 0], cov=Sigma, size=ksize)
-
- return k
-
-
-def gm_blur_kernel(mean, cov, size=15):
- center = size / 2.0 + 0.5
- k = np.zeros([size, size])
- for y in range(size):
- for x in range(size):
- cy = y - center + 1
- cx = x - center + 1
- k[y, x] = ss.multivariate_normal.pdf([cx, cy], mean=mean, cov=cov)
-
- k = k / np.sum(k)
- return k
-
-
-def shift_pixel(x, sf, upper_left=True):
- """shift pixel for super-resolution with different scale factors
- Args:
- x: WxHxC or WxH
- sf: scale factor
- upper_left: shift direction
- """
- h, w = x.shape[:2]
- shift = (sf - 1) * 0.5
- xv, yv = np.arange(0, w, 1.0), np.arange(0, h, 1.0)
- if upper_left:
- x1 = xv + shift
- y1 = yv + shift
- else:
- x1 = xv - shift
- y1 = yv - shift
-
- x1 = np.clip(x1, 0, w - 1)
- y1 = np.clip(y1, 0, h - 1)
-
- if x.ndim == 2:
- x = interp2d(xv, yv, x)(x1, y1)
- if x.ndim == 3:
- for i in range(x.shape[-1]):
- x[:, :, i] = interp2d(xv, yv, x[:, :, i])(x1, y1)
-
- return x
-
-
-def blur(x, k):
- '''
- x: image, NxcxHxW
- k: kernel, Nx1xhxw
- '''
- n, c = x.shape[:2]
- p1, p2 = (k.shape[-2] - 1) // 2, (k.shape[-1] - 1) // 2
- x = torch.nn.functional.pad(x, pad=(p1, p2, p1, p2), mode='replicate')
- k = k.repeat(1, c, 1, 1)
- k = k.view(-1, 1, k.shape[2], k.shape[3])
- x = x.view(1, -1, x.shape[2], x.shape[3])
- x = torch.nn.functional.conv2d(x, k, bias=None, stride=1, padding=0, groups=n * c)
- x = x.view(n, c, x.shape[2], x.shape[3])
-
- return x
-
-
-def gen_kernel(k_size=np.array([15, 15]), scale_factor=np.array([4, 4]), min_var=0.6, max_var=10., noise_level=0):
- """"
- # modified version of https://github.com/assafshocher/BlindSR_dataset_generator
- # Kai Zhang
- # min_var = 0.175 * sf # variance of the gaussian kernel will be sampled between min_var and max_var
- # max_var = 2.5 * sf
- """
- # Set random eigen-vals (lambdas) and angle (theta) for COV matrix
- lambda_1 = min_var + np.random.rand() * (max_var - min_var)
- lambda_2 = min_var + np.random.rand() * (max_var - min_var)
- theta = np.random.rand() * np.pi # random theta
- noise = -noise_level + np.random.rand(*k_size) * noise_level * 2
-
- # Set COV matrix using Lambdas and Theta
- LAMBDA = np.diag([lambda_1, lambda_2])
- Q = np.array([[np.cos(theta), -np.sin(theta)],
- [np.sin(theta), np.cos(theta)]])
- SIGMA = Q @ LAMBDA @ Q.T
- INV_SIGMA = np.linalg.inv(SIGMA)[None, None, :, :]
-
- # Set expectation position (shifting kernel for aligned image)
- MU = k_size // 2 - 0.5 * (scale_factor - 1) # - 0.5 * (scale_factor - k_size % 2)
- MU = MU[None, None, :, None]
-
- # Create meshgrid for Gaussian
- [X, Y] = np.meshgrid(range(k_size[0]), range(k_size[1]))
- Z = np.stack([X, Y], 2)[:, :, :, None]
-
- # Calcualte Gaussian for every pixel of the kernel
- ZZ = Z - MU
- ZZ_t = ZZ.transpose(0, 1, 3, 2)
- raw_kernel = np.exp(-0.5 * np.squeeze(ZZ_t @ INV_SIGMA @ ZZ)) * (1 + noise)
-
- # shift the kernel so it will be centered
- # raw_kernel_centered = kernel_shift(raw_kernel, scale_factor)
-
- # Normalize the kernel and return
- # kernel = raw_kernel_centered / np.sum(raw_kernel_centered)
- kernel = raw_kernel / np.sum(raw_kernel)
- return kernel
-
-
-def fspecial_gaussian(hsize, sigma):
- hsize = [hsize, hsize]
- siz = [(hsize[0] - 1.0) / 2.0, (hsize[1] - 1.0) / 2.0]
- std = sigma
- [x, y] = np.meshgrid(np.arange(-siz[1], siz[1] + 1), np.arange(-siz[0], siz[0] + 1))
- arg = -(x * x + y * y) / (2 * std * std)
- h = np.exp(arg)
- h[h < scipy.finfo(float).eps * h.max()] = 0
- sumh = h.sum()
- if sumh != 0:
- h = h / sumh
- return h
-
-
-def fspecial_laplacian(alpha):
- alpha = max([0, min([alpha, 1])])
- h1 = alpha / (alpha + 1)
- h2 = (1 - alpha) / (alpha + 1)
- h = [[h1, h2, h1], [h2, -4 / (alpha + 1), h2], [h1, h2, h1]]
- h = np.array(h)
- return h
-
-
-def fspecial(filter_type, *args, **kwargs):
- '''
- python code from:
- https://github.com/ronaldosena/imagens-medicas-2/blob/40171a6c259edec7827a6693a93955de2bd39e76/Aulas/aula_2_-_uniform_filter/matlab_fspecial.py
- '''
- if filter_type == 'gaussian':
- return fspecial_gaussian(*args, **kwargs)
- if filter_type == 'laplacian':
- return fspecial_laplacian(*args, **kwargs)
-
-
-"""
-# --------------------------------------------
-# degradation models
-# --------------------------------------------
-"""
-
-
-def bicubic_degradation(x, sf=3):
- '''
- Args:
- x: HxWxC image, [0, 1]
- sf: down-scale factor
- Return:
- bicubicly downsampled LR image
- '''
- x = util.imresize_np(x, scale=1 / sf)
- return x
-
-
-def srmd_degradation(x, k, sf=3):
- ''' blur + bicubic downsampling
- Args:
- x: HxWxC image, [0, 1]
- k: hxw, double
- sf: down-scale factor
- Return:
- downsampled LR image
- Reference:
- @inproceedings{zhang2018learning,
- title={Learning a single convolutional super-resolution network for multiple degradations},
- author={Zhang, Kai and Zuo, Wangmeng and Zhang, Lei},
- booktitle={IEEE Conference on Computer Vision and Pattern Recognition},
- pages={3262--3271},
- year={2018}
- }
- '''
- x = ndimage.filters.convolve(x, np.expand_dims(k, axis=2), mode='wrap') # 'nearest' | 'mirror'
- x = bicubic_degradation(x, sf=sf)
- return x
-
-
-def dpsr_degradation(x, k, sf=3):
- ''' bicubic downsampling + blur
- Args:
- x: HxWxC image, [0, 1]
- k: hxw, double
- sf: down-scale factor
- Return:
- downsampled LR image
- Reference:
- @inproceedings{zhang2019deep,
- title={Deep Plug-and-Play Super-Resolution for Arbitrary Blur Kernels},
- author={Zhang, Kai and Zuo, Wangmeng and Zhang, Lei},
- booktitle={IEEE Conference on Computer Vision and Pattern Recognition},
- pages={1671--1681},
- year={2019}
- }
- '''
- x = bicubic_degradation(x, sf=sf)
- x = ndimage.filters.convolve(x, np.expand_dims(k, axis=2), mode='wrap')
- return x
-
-
-def classical_degradation(x, k, sf=3):
- ''' blur + downsampling
- Args:
- x: HxWxC image, [0, 1]/[0, 255]
- k: hxw, double
- sf: down-scale factor
- Return:
- downsampled LR image
- '''
- x = ndimage.filters.convolve(x, np.expand_dims(k, axis=2), mode='wrap')
- # x = filters.correlate(x, np.expand_dims(np.flip(k), axis=2))
- st = 0
- return x[st::sf, st::sf, ...]
-
-
-def add_sharpening(img, weight=0.5, radius=50, threshold=10):
- """USM sharpening. borrowed from real-ESRGAN
- Input image: I; Blurry image: B.
- 1. K = I + weight * (I - B)
- 2. Mask = 1 if abs(I - B) > threshold, else: 0
- 3. Blur mask:
- 4. Out = Mask * K + (1 - Mask) * I
- Args:
- img (Numpy array): Input image, HWC, BGR; float32, [0, 1].
- weight (float): Sharp weight. Default: 1.
- radius (float): Kernel size of Gaussian blur. Default: 50.
- threshold (int):
- """
- if radius % 2 == 0:
- radius += 1
- blur = cv2.GaussianBlur(img, (radius, radius), 0)
- residual = img - blur
- mask = np.abs(residual) * 255 > threshold
- mask = mask.astype('float32')
- soft_mask = cv2.GaussianBlur(mask, (radius, radius), 0)
-
- K = img + weight * residual
- K = np.clip(K, 0, 1)
- return soft_mask * K + (1 - soft_mask) * img
-
-
-def add_blur(img, sf=4):
- wd2 = 4.0 + sf
- wd = 2.0 + 0.2 * sf
- if random.random() < 0.5:
- l1 = wd2 * random.random()
- l2 = wd2 * random.random()
- k = anisotropic_Gaussian(ksize=2 * random.randint(2, 11) + 3, theta=random.random() * np.pi, l1=l1, l2=l2)
- else:
- k = fspecial('gaussian', 2 * random.randint(2, 11) + 3, wd * random.random())
- img = ndimage.filters.convolve(img, np.expand_dims(k, axis=2), mode='mirror')
-
- return img
-
-
-def add_resize(img, sf=4):
- rnum = np.random.rand()
- if rnum > 0.8: # up
- sf1 = random.uniform(1, 2)
- elif rnum < 0.7: # down
- sf1 = random.uniform(0.5 / sf, 1)
- else:
- sf1 = 1.0
- img = cv2.resize(img, (int(sf1 * img.shape[1]), int(sf1 * img.shape[0])), interpolation=random.choice([1, 2, 3]))
- img = np.clip(img, 0.0, 1.0)
-
- return img
-
-
-# def add_Gaussian_noise(img, noise_level1=2, noise_level2=25):
-# noise_level = random.randint(noise_level1, noise_level2)
-# rnum = np.random.rand()
-# if rnum > 0.6: # add color Gaussian noise
-# img += np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32)
-# elif rnum < 0.4: # add grayscale Gaussian noise
-# img += np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32)
-# else: # add noise
-# L = noise_level2 / 255.
-# D = np.diag(np.random.rand(3))
-# U = orth(np.random.rand(3, 3))
-# conv = np.dot(np.dot(np.transpose(U), D), U)
-# img += np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32)
-# img = np.clip(img, 0.0, 1.0)
-# return img
-
-def add_Gaussian_noise(img, noise_level1=2, noise_level2=25):
- noise_level = random.randint(noise_level1, noise_level2)
- rnum = np.random.rand()
- if rnum > 0.6: # add color Gaussian noise
- img = img + np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32)
- elif rnum < 0.4: # add grayscale Gaussian noise
- img = img + np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32)
- else: # add noise
- L = noise_level2 / 255.
- D = np.diag(np.random.rand(3))
- U = orth(np.random.rand(3, 3))
- conv = np.dot(np.dot(np.transpose(U), D), U)
- img = img + np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32)
- img = np.clip(img, 0.0, 1.0)
- return img
-
-
-def add_speckle_noise(img, noise_level1=2, noise_level2=25):
- noise_level = random.randint(noise_level1, noise_level2)
- img = np.clip(img, 0.0, 1.0)
- rnum = random.random()
- if rnum > 0.6:
- img += img * np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32)
- elif rnum < 0.4:
- img += img * np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32)
- else:
- L = noise_level2 / 255.
- D = np.diag(np.random.rand(3))
- U = orth(np.random.rand(3, 3))
- conv = np.dot(np.dot(np.transpose(U), D), U)
- img += img * np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32)
- img = np.clip(img, 0.0, 1.0)
- return img
-
-
-def add_Poisson_noise(img):
- img = np.clip((img * 255.0).round(), 0, 255) / 255.
- vals = 10 ** (2 * random.random() + 2.0) # [2, 4]
- if random.random() < 0.5:
- img = np.random.poisson(img * vals).astype(np.float32) / vals
- else:
- img_gray = np.dot(img[..., :3], [0.299, 0.587, 0.114])
- img_gray = np.clip((img_gray * 255.0).round(), 0, 255) / 255.
- noise_gray = np.random.poisson(img_gray * vals).astype(np.float32) / vals - img_gray
- img += noise_gray[:, :, np.newaxis]
- img = np.clip(img, 0.0, 1.0)
- return img
-
-
-def add_JPEG_noise(img):
- quality_factor = random.randint(30, 95)
- img = cv2.cvtColor(util.single2uint(img), cv2.COLOR_RGB2BGR)
- result, encimg = cv2.imencode('.jpg', img, [int(cv2.IMWRITE_JPEG_QUALITY), quality_factor])
- img = cv2.imdecode(encimg, 1)
- img = cv2.cvtColor(util.uint2single(img), cv2.COLOR_BGR2RGB)
- return img
-
-
-def random_crop(lq, hq, sf=4, lq_patchsize=64):
- h, w = lq.shape[:2]
- rnd_h = random.randint(0, h - lq_patchsize)
- rnd_w = random.randint(0, w - lq_patchsize)
- lq = lq[rnd_h:rnd_h + lq_patchsize, rnd_w:rnd_w + lq_patchsize, :]
-
- rnd_h_H, rnd_w_H = int(rnd_h * sf), int(rnd_w * sf)
- hq = hq[rnd_h_H:rnd_h_H + lq_patchsize * sf, rnd_w_H:rnd_w_H + lq_patchsize * sf, :]
- return lq, hq
-
-
-def degradation_bsrgan(img, sf=4, lq_patchsize=72, isp_model=None):
- """
- This is the degradation model of BSRGAN from the paper
- "Designing a Practical Degradation Model for Deep Blind Image Super-Resolution"
- ----------
- img: HXWXC, [0, 1], its size should be large than (lq_patchsizexsf)x(lq_patchsizexsf)
- sf: scale factor
- isp_model: camera ISP model
- Returns
- -------
- img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1]
- hq: corresponding high-quality patch, size: (lq_patchsizexsf)X(lq_patchsizexsf)XC, range: [0, 1]
- """
- isp_prob, jpeg_prob, scale2_prob = 0.25, 0.9, 0.25
- sf_ori = sf
-
- h1, w1 = img.shape[:2]
- img = img.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...] # mod crop
- h, w = img.shape[:2]
-
- if h < lq_patchsize * sf or w < lq_patchsize * sf:
- raise ValueError(f'img size ({h1}X{w1}) is too small!')
-
- hq = img.copy()
-
- if sf == 4 and random.random() < scale2_prob: # downsample1
- if np.random.rand() < 0.5:
- img = cv2.resize(img, (int(1 / 2 * img.shape[1]), int(1 / 2 * img.shape[0])),
- interpolation=random.choice([1, 2, 3]))
- else:
- img = util.imresize_np(img, 1 / 2, True)
- img = np.clip(img, 0.0, 1.0)
- sf = 2
-
- shuffle_order = random.sample(range(7), 7)
- idx1, idx2 = shuffle_order.index(2), shuffle_order.index(3)
- if idx1 > idx2: # keep downsample3 last
- shuffle_order[idx1], shuffle_order[idx2] = shuffle_order[idx2], shuffle_order[idx1]
-
- for i in shuffle_order:
-
- if i == 0:
- img = add_blur(img, sf=sf)
-
- elif i == 1:
- img = add_blur(img, sf=sf)
-
- elif i == 2:
- a, b = img.shape[1], img.shape[0]
- # downsample2
- if random.random() < 0.75:
- sf1 = random.uniform(1, 2 * sf)
- img = cv2.resize(img, (int(1 / sf1 * img.shape[1]), int(1 / sf1 * img.shape[0])),
- interpolation=random.choice([1, 2, 3]))
- else:
- k = fspecial('gaussian', 25, random.uniform(0.1, 0.6 * sf))
- k_shifted = shift_pixel(k, sf)
- k_shifted = k_shifted / k_shifted.sum() # blur with shifted kernel
- img = ndimage.filters.convolve(img, np.expand_dims(k_shifted, axis=2), mode='mirror')
- img = img[0::sf, 0::sf, ...] # nearest downsampling
- img = np.clip(img, 0.0, 1.0)
-
- elif i == 3:
- # downsample3
- img = cv2.resize(img, (int(1 / sf * a), int(1 / sf * b)), interpolation=random.choice([1, 2, 3]))
- img = np.clip(img, 0.0, 1.0)
-
- elif i == 4:
- # add Gaussian noise
- img = add_Gaussian_noise(img, noise_level1=2, noise_level2=25)
-
- elif i == 5:
- # add JPEG noise
- if random.random() < jpeg_prob:
- img = add_JPEG_noise(img)
-
- elif i == 6:
- # add processed camera sensor noise
- if random.random() < isp_prob and isp_model is not None:
- with torch.no_grad():
- img, hq = isp_model.forward(img.copy(), hq)
-
- # add final JPEG compression noise
- img = add_JPEG_noise(img)
-
- # random crop
- img, hq = random_crop(img, hq, sf_ori, lq_patchsize)
-
- return img, hq
-
-
-# todo no isp_model?
-def degradation_bsrgan_variant(image, sf=4, isp_model=None):
- """
- This is the degradation model of BSRGAN from the paper
- "Designing a Practical Degradation Model for Deep Blind Image Super-Resolution"
- ----------
- sf: scale factor
- isp_model: camera ISP model
- Returns
- -------
- img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1]
- hq: corresponding high-quality patch, size: (lq_patchsizexsf)X(lq_patchsizexsf)XC, range: [0, 1]
- """
- image = util.uint2single(image)
- isp_prob, jpeg_prob, scale2_prob = 0.25, 0.9, 0.25
- sf_ori = sf
-
- h1, w1 = image.shape[:2]
- image = image.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...] # mod crop
- h, w = image.shape[:2]
-
- hq = image.copy()
-
- if sf == 4 and random.random() < scale2_prob: # downsample1
- if np.random.rand() < 0.5:
- image = cv2.resize(image, (int(1 / 2 * image.shape[1]), int(1 / 2 * image.shape[0])),
- interpolation=random.choice([1, 2, 3]))
- else:
- image = util.imresize_np(image, 1 / 2, True)
- image = np.clip(image, 0.0, 1.0)
- sf = 2
-
- shuffle_order = random.sample(range(7), 7)
- idx1, idx2 = shuffle_order.index(2), shuffle_order.index(3)
- if idx1 > idx2: # keep downsample3 last
- shuffle_order[idx1], shuffle_order[idx2] = shuffle_order[idx2], shuffle_order[idx1]
-
- for i in shuffle_order:
-
- if i == 0:
- image = add_blur(image, sf=sf)
-
- elif i == 1:
- image = add_blur(image, sf=sf)
-
- elif i == 2:
- a, b = image.shape[1], image.shape[0]
- # downsample2
- if random.random() < 0.75:
- sf1 = random.uniform(1, 2 * sf)
- image = cv2.resize(image, (int(1 / sf1 * image.shape[1]), int(1 / sf1 * image.shape[0])),
- interpolation=random.choice([1, 2, 3]))
- else:
- k = fspecial('gaussian', 25, random.uniform(0.1, 0.6 * sf))
- k_shifted = shift_pixel(k, sf)
- k_shifted = k_shifted / k_shifted.sum() # blur with shifted kernel
- image = ndimage.filters.convolve(image, np.expand_dims(k_shifted, axis=2), mode='mirror')
- image = image[0::sf, 0::sf, ...] # nearest downsampling
- image = np.clip(image, 0.0, 1.0)
-
- elif i == 3:
- # downsample3
- image = cv2.resize(image, (int(1 / sf * a), int(1 / sf * b)), interpolation=random.choice([1, 2, 3]))
- image = np.clip(image, 0.0, 1.0)
-
- elif i == 4:
- # add Gaussian noise
- image = add_Gaussian_noise(image, noise_level1=2, noise_level2=25)
-
- elif i == 5:
- # add JPEG noise
- if random.random() < jpeg_prob:
- image = add_JPEG_noise(image)
-
- # elif i == 6:
- # # add processed camera sensor noise
- # if random.random() < isp_prob and isp_model is not None:
- # with torch.no_grad():
- # img, hq = isp_model.forward(img.copy(), hq)
-
- # add final JPEG compression noise
- image = add_JPEG_noise(image)
- image = util.single2uint(image)
- example = {"image":image}
- return example
-
-
-# TODO incase there is a pickle error one needs to replace a += x with a = a + x in add_speckle_noise etc...
-def degradation_bsrgan_plus(img, sf=4, shuffle_prob=0.5, use_sharp=True, lq_patchsize=64, isp_model=None):
- """
- This is an extended degradation model by combining
- the degradation models of BSRGAN and Real-ESRGAN
- ----------
- img: HXWXC, [0, 1], its size should be large than (lq_patchsizexsf)x(lq_patchsizexsf)
- sf: scale factor
- use_shuffle: the degradation shuffle
- use_sharp: sharpening the img
- Returns
- -------
- img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1]
- hq: corresponding high-quality patch, size: (lq_patchsizexsf)X(lq_patchsizexsf)XC, range: [0, 1]
- """
-
- h1, w1 = img.shape[:2]
- img = img.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...] # mod crop
- h, w = img.shape[:2]
-
- if h < lq_patchsize * sf or w < lq_patchsize * sf:
- raise ValueError(f'img size ({h1}X{w1}) is too small!')
-
- if use_sharp:
- img = add_sharpening(img)
- hq = img.copy()
-
- if random.random() < shuffle_prob:
- shuffle_order = random.sample(range(13), 13)
- else:
- shuffle_order = list(range(13))
- # local shuffle for noise, JPEG is always the last one
- shuffle_order[2:6] = random.sample(shuffle_order[2:6], len(range(2, 6)))
- shuffle_order[9:13] = random.sample(shuffle_order[9:13], len(range(9, 13)))
-
- poisson_prob, speckle_prob, isp_prob = 0.1, 0.1, 0.1
-
- for i in shuffle_order:
- if i == 0:
- img = add_blur(img, sf=sf)
- elif i == 1:
- img = add_resize(img, sf=sf)
- elif i == 2:
- img = add_Gaussian_noise(img, noise_level1=2, noise_level2=25)
- elif i == 3:
- if random.random() < poisson_prob:
- img = add_Poisson_noise(img)
- elif i == 4:
- if random.random() < speckle_prob:
- img = add_speckle_noise(img)
- elif i == 5:
- if random.random() < isp_prob and isp_model is not None:
- with torch.no_grad():
- img, hq = isp_model.forward(img.copy(), hq)
- elif i == 6:
- img = add_JPEG_noise(img)
- elif i == 7:
- img = add_blur(img, sf=sf)
- elif i == 8:
- img = add_resize(img, sf=sf)
- elif i == 9:
- img = add_Gaussian_noise(img, noise_level1=2, noise_level2=25)
- elif i == 10:
- if random.random() < poisson_prob:
- img = add_Poisson_noise(img)
- elif i == 11:
- if random.random() < speckle_prob:
- img = add_speckle_noise(img)
- elif i == 12:
- if random.random() < isp_prob and isp_model is not None:
- with torch.no_grad():
- img, hq = isp_model.forward(img.copy(), hq)
- else:
- print('check the shuffle!')
-
- # resize to desired size
- img = cv2.resize(img, (int(1 / sf * hq.shape[1]), int(1 / sf * hq.shape[0])),
- interpolation=random.choice([1, 2, 3]))
-
- # add final JPEG compression noise
- img = add_JPEG_noise(img)
-
- # random crop
- img, hq = random_crop(img, hq, sf, lq_patchsize)
-
- return img, hq
-
-
-if __name__ == '__main__':
- print("hey")
- img = util.imread_uint('utils/test.png', 3)
- print(img)
- img = util.uint2single(img)
- print(img)
- img = img[:448, :448]
- h = img.shape[0] // 4
- print("resizing to", h)
- sf = 4
- deg_fn = partial(degradation_bsrgan_variant, sf=sf)
- for i in range(20):
- print(i)
- img_lq = deg_fn(img)
- print(img_lq)
- img_lq_bicubic = albumentations.SmallestMaxSize(max_size=h, interpolation=cv2.INTER_CUBIC)(image=img)["image"]
- print(img_lq.shape)
- print("bicubic", img_lq_bicubic.shape)
- print(img_hq.shape)
- lq_nearest = cv2.resize(util.single2uint(img_lq), (int(sf * img_lq.shape[1]), int(sf * img_lq.shape[0])),
- interpolation=0)
- lq_bicubic_nearest = cv2.resize(util.single2uint(img_lq_bicubic), (int(sf * img_lq.shape[1]), int(sf * img_lq.shape[0])),
- interpolation=0)
- img_concat = np.concatenate([lq_bicubic_nearest, lq_nearest, util.single2uint(img_hq)], axis=1)
- util.imsave(img_concat, str(i) + '.png')
-
-
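The bsrgan.py module removed above synthesizes realistic low-quality images (random blur, rescaling, Gaussian and JPEG noise) for blind super-resolution training. A hedged sketch of driving degradation_bsrgan_variant on a single image follows; the file names are placeholders, not paths from this repository.

# Sketch only: 'input.png' and 'input_lq.png' are placeholder paths. The function
# degradation_bsrgan_variant (defined above) expects a uint8 HxWxC RGB image and
# returns a dict with the degraded uint8 image under the "image" key.
import numpy as np
from PIL import Image

hq = np.array(Image.open("input.png").convert("RGB"))
example = degradation_bsrgan_variant(hq, sf=4)  # random blur/resize/noise/JPEG at scale factor 4
Image.fromarray(example["image"]).save("input_lq.png")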
diff --git a/spaces/AIZero2HeroBootcamp/3DHuman/README.md b/spaces/AIZero2HeroBootcamp/3DHuman/README.md
deleted file mode 100644
index 9e41b98ad38e307f6903785b03ed1c29c3e406d0..0000000000000000000000000000000000000000
--- a/spaces/AIZero2HeroBootcamp/3DHuman/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: 3DHuman
-emoji: 🐠
-colorFrom: purple
-colorTo: purple
-sdk: gradio
-sdk_version: 3.39.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/models/resnet50_cifar_mixup.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/models/resnet50_cifar_mixup.py
deleted file mode 100644
index f165c2466bd8a67cbfadd5f3a388d4fe03e6d446..0000000000000000000000000000000000000000
--- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/models/resnet50_cifar_mixup.py
+++ /dev/null
@@ -1,17 +0,0 @@
-# model settings
-model = dict(
- type='ImageClassifier',
- backbone=dict(
- type='ResNet_CIFAR',
- depth=50,
- num_stages=4,
- out_indices=(3, ),
- style='pytorch'),
- neck=dict(type='GlobalAveragePooling'),
- head=dict(
- type='MultiLabelLinearClsHead',
- num_classes=10,
- in_channels=2048,
- loss=dict(type='CrossEntropyLoss', loss_weight=1.0, use_soft=True)),
- train_cfg=dict(augments=dict(type='Mixup', alpha=1.)),
-)
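The removed mmpretrain base config defines a ResNet-50 CIFAR classifier trained with Mixup and a soft cross-entropy loss over 10 classes. As a hedged sketch of how such a base file is normally inspected with mmengine, the snippet below loads it by path; the path is a placeholder for wherever the config sits in a local checkout.

# Sketch assuming a local mmengine/mmpretrain install; the config path is a placeholder.
from mmengine.config import Config

cfg = Config.fromfile("configs/_base_/models/resnet50_cifar_mixup.py")
print(cfg.model.backbone.type)       # 'ResNet_CIFAR'
print(cfg.model.train_cfg.augments)  # {'type': 'Mixup', 'alpha': 1.0}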
diff --git a/spaces/Aanisha/Image_to_story/app.py b/spaces/Aanisha/Image_to_story/app.py
deleted file mode 100644
index 2f07e9dc37940cbb0fb94e0797c33c57e87a2ea7..0000000000000000000000000000000000000000
--- a/spaces/Aanisha/Image_to_story/app.py
+++ /dev/null
@@ -1,70 +0,0 @@
-from PIL import Image
-from transformers import VisionEncoderDecoderModel,ViTFeatureExtractor,PreTrainedTokenizerFast,GPT2Tokenizer,AutoModelForCausalLM,AutoTokenizer
-import requests
-import gradio as gr
-import torch
-from transformers import pipeline
-import re
-
-
-
-description = "Just upload an image, and generate a short story for the image.\n PS: GPT-2 is not perfect but it's fun to play with.May take a minute for the output to generate. Enjoyy!!!"
-title = "Story generator from images using ViT and GPT2"
-
-
-model = VisionEncoderDecoderModel.from_pretrained("gagan3012/ViTGPT2_vizwiz").to('cpu')
-vit_feature_extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224-in21k")
-tokenizer = PreTrainedTokenizerFast.from_pretrained("distilgpt2")
-story_gpt = AutoModelForCausalLM.from_pretrained("pranavpsv/gpt2-genre-story-generator")
-st_tokenizer = AutoTokenizer.from_pretrained("pranavpsv/gpt2-genre-story-generator")
-
-inputs = [
- gr.inputs.Image(type="pil", label="Original Image")
-]
-
-outputs = [
- gr.outputs.Textbox(label = 'Story')
-]
-
-examples = [['img_1.jpg'],['img_2.jpg']]
-
-def get_output_senten(img):
- pixel_values = vit_feature_extractor(images=img, return_tensors="pt").pixel_values.to('cpu')
- encoder_outputs = model.generate(pixel_values.to('cpu'),num_beams=7)
- generated_sentences = tokenizer.batch_decode(encoder_outputs)
- senten = generated_sentences[0][generated_sentences[0][2:].index('>')+1:]
-
- senten = senten.replace('>','')
- senten = senten.replace('|','')
- res = senten.split('.')[0][0:75]
- res = res[0:res.rindex(' ')]
-
- print(res)
-
- tokenized_text=st_tokenizer.encode(res)
- input_ids=torch.tensor(tokenized_text).view(-1,len(tokenized_text))
- outputs=story_gpt.generate(input_ids,max_length=100,num_beams=5,no_repeat_ngram_size=2,early_stopping=True)
-
- generated_story = st_tokenizer.batch_decode(outputs)
-
- print(len(generated_story))
- ans = generated_story[0]
-
-
-
- ans = str(ans)
- ind = ans.rindex('.')
- ans = ans[0:ind+1]
- return ans
-
-
-
-gr.Interface(
- get_output_senten,
- inputs,
- outputs,
- examples = examples,
- title=title,
- description=description,
- theme="huggingface",
-).launch(enable_queue=True)
\ No newline at end of file
diff --git a/spaces/Ababababababbababa/Ashaar/poetry_diacritizer/util/constants.py b/spaces/Ababababababbababa/Ashaar/poetry_diacritizer/util/constants.py
deleted file mode 100644
index 2915b2846e5f1b1678991e81f6572776ace8a4c9..0000000000000000000000000000000000000000
--- a/spaces/Ababababababbababa/Ashaar/poetry_diacritizer/util/constants.py
+++ /dev/null
@@ -1,34 +0,0 @@
-"""
-Constants that are used by the model
-"""
-HARAQAT = ["ْ", "ّ", "ٌ", "ٍ", "ِ", "ً", "َ", "ُ"]
-ARAB_CHARS = "ىعظحرسيشضق ثلصطكآماإهزءأفؤغجئدةخوبذتن"
-PUNCTUATIONS = [".", "،", ":", "؛", "-", "؟"]
-VALID_ARABIC = HARAQAT + list(ARAB_CHARS)
-BASIC_HARAQAT = {
- "َ": "Fatha ",
- "ً": "Fathatah ",
- "ُ": "Damma ",
- "ٌ": "Dammatan ",
- "ِ": "Kasra ",
- "ٍ": "Kasratan ",
- "ْ": "Sukun ",
- "ّ": "Shaddah ",
-}
-ALL_POSSIBLE_HARAQAT = {
- "": "No Diacritic ",
- "َ": "Fatha ",
- "ً": "Fathatah ",
- "ُ": "Damma ",
- "ٌ": "Dammatan ",
- "ِ": "Kasra ",
- "ٍ": "Kasratan ",
- "ْ": "Sukun ",
- "ّ": "Shaddah ",
- "َّ": "Shaddah + Fatha ",
- "ًّ": "Shaddah + Fathatah ",
- "ُّ": "Shaddah + Damma ",
- "ٌّ": "Shaddah + Dammatan ",
- "ِّ": "Shaddah + Kasra ",
- "ٍّ": "Shaddah + Kasratan ",
-}
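The constants module removed above enumerates the Arabic diacritics (haraqat) that the diacritization model works with. As a small illustration of what such lists are typically used for, here is a hedged, hypothetical helper that strips diacritics from a string; it is not a function from the deleted codebase.

# Hypothetical helper, assuming HARAQAT from the deleted constants module above.
def strip_diacritics(text: str) -> str:
    # Drop any character that is one of the eight basic haraqat.
    return "".join(ch for ch in text if ch not in HARAQAT)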
diff --git a/spaces/AchyuthGamer/OpenGPT/client/css/conversation.css b/spaces/AchyuthGamer/OpenGPT/client/css/conversation.css
deleted file mode 100644
index d20f178c45e8ccbfc9539f99914b25fc572045bd..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT/client/css/conversation.css
+++ /dev/null
@@ -1,158 +0,0 @@
-.conversation {
- width: 60%;
- margin: 0px 16px;
- display: flex;
- flex-direction: column;
-}
-
-.conversation #messages {
- width: 100%;
- display: flex;
- flex-direction: column;
- overflow: auto;
- overflow-wrap: break-word;
- padding-bottom: 8px;
-}
-
-.conversation .user-input {
- max-height: 180px;
- margin: 16px 0px;
-}
-
-.conversation .user-input input {
- font-size: 1rem;
- background: none;
- border: none;
- outline: none;
- color: var(--colour-3);
-}
-
-.conversation .user-input input::placeholder {
- color: var(--user-input);
-}
-
-.conversation-title {
- color: var(--colour-3);
- font-size: 14px;
-}
-
-.conversation .user-input textarea {
- font-size: 1rem;
- width: 100%;
- height: 100%;
- padding: 12px;
- background: none;
- border: none;
- outline: none;
- color: var(--colour-3);
- resize: vertical;
- max-height: 150px;
- min-height: 80px;
-}
-
-.box {
- backdrop-filter: blur(20px);
- -webkit-backdrop-filter: blur(20px);
- background-color: var(--blur-bg);
- height: 100%;
- width: 100%;
- border-radius: var(--border-radius-1);
- border: 1px solid var(--blur-border);
-}
-
-.box.input-box {
- position: relative;
- align-items: center;
- padding: 8px;
- cursor: pointer;
-}
-
-#send-button {
- position: absolute;
- bottom: 25%;
- right: 10px;
- z-index: 1;
- padding: 16px;
-}
-
-#cursor {
- line-height: 17px;
- margin-left: 3px;
- -webkit-animation: blink 0.8s infinite;
- animation: blink 0.8s infinite;
- width: 7px;
- height: 15px;
-}
-
-@keyframes blink {
- 0% {
- background: #ffffff00;
- }
-
- 50% {
- background: white;
- }
-
- 100% {
- background: #ffffff00;
- }
-}
-
-@-webkit-keyframes blink {
- 0% {
- background: #ffffff00;
- }
-
- 50% {
- background: white;
- }
-
- 100% {
- background: #ffffff00;
- }
-}
-
-/* scrollbar */
-.conversation #messages::-webkit-scrollbar {
- width: 4px;
- padding: 8px 0px;
-}
-
-.conversation #messages::-webkit-scrollbar-track {
- background-color: #ffffff00;
-}
-
-.conversation #messages::-webkit-scrollbar-thumb {
- background-color: #555555;
- border-radius: 10px;
-}
-
-@media screen and (max-width: 990px) {
- .conversation {
- width: 100%;
- height: 90%;
- }
-}
-
-@media screen and (max-height: 720px) {
- .conversation.box {
- height: 70%;
- }
-
- .conversation .user-input textarea {
- font-size: 0.875rem;
- }
-}
-
-@media screen and (max-width: 360px) {
- .box {
- border-radius: 0;
- }
- .conversation {
- margin: 0;
- margin-top: 48px;
- }
- .conversation .user-input {
- margin: 2px 0 8px 0;
- }
-}
diff --git a/spaces/AgentVerse/agentVerse/agentverse/initialization.py b/spaces/AgentVerse/agentVerse/agentverse/initialization.py
deleted file mode 100644
index 13ef54e77f0504657ef4d84508f921d3c5c3554c..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/agentverse/initialization.py
+++ /dev/null
@@ -1,120 +0,0 @@
-from __future__ import annotations
-
-import os
-from typing import Dict, List, TYPE_CHECKING
-
-import yaml
-
-try:
- from bmtools.agent.singletool import import_all_apis, load_single_tools
-except ImportError:
- print(
- "BMTools is not installed, tools in the simulation environment cannot be used. To install BMTools, please follow the instruction in the README.md file."
- )
-
-from agentverse.llms import llm_registry
-
-from agentverse.agents import agent_registry
-from agentverse.environments import BaseEnvironment, env_registry
-from agentverse.memory import memory_registry
-from agentverse.memory_manipulator import memory_manipulator_registry
-
-from agentverse.output_parser import output_parser_registry
-
-if TYPE_CHECKING:
- from agentverse.agents import BaseAgent
-
-
-def load_llm(llm_config: Dict):
- llm_type = llm_config.pop("llm_type", "text-davinci-003")
-
- return llm_registry.build(llm_type, **llm_config)
-
-
-def load_memory(memory_config: Dict):
- memory_type = memory_config.pop("memory_type", "chat_history")
- return memory_registry.build(memory_type, **memory_config)
-
-
-def load_memory_manipulator(memory_manipulator_config: Dict):
- memory_manipulator_type = memory_manipulator_config.pop(
- "memory_manipulator_type", "basic"
- )
- return memory_manipulator_registry.build(
- memory_manipulator_type, **memory_manipulator_config
- )
-
-
-def load_tools(tool_config: List[Dict]):
- if len(tool_config) == 0:
- return []
- all_tools_list = []
- for tool in tool_config:
- _, config = load_single_tools(tool["tool_name"], tool["tool_url"])
- all_tools_list += import_all_apis(config)
- return all_tools_list
-
-
-def load_environment(env_config: Dict) -> BaseEnvironment:
- env_type = env_config.pop("env_type", "basic")
- return env_registry.build(env_type, **env_config)
-
-
-def load_agent(agent_config: Dict) -> BaseAgent:
- agent_type = agent_config.pop("agent_type", "conversation")
- agent = agent_registry.build(agent_type, **agent_config)
- return agent
-
-
-def prepare_task_config(task, tasks_dir):
- """Read the yaml config of the given task in `tasks` directory."""
- all_task_dir = tasks_dir
- task_path = os.path.join(all_task_dir, task)
- config_path = os.path.join(task_path, "config.yaml")
- if not os.path.exists(task_path):
- all_tasks = []
-        for existing_task in os.listdir(all_task_dir):
-            if (
-                os.path.isdir(os.path.join(all_task_dir, existing_task))
-                and existing_task != "__pycache__"
-            ):
-                all_tasks.append(existing_task)
-                for subtask in os.listdir(os.path.join(all_task_dir, existing_task)):
-                    if (
-                        os.path.isdir(os.path.join(all_task_dir, existing_task, subtask))
-                        and subtask != "__pycache__"
-                    ):
-                        all_tasks.append(f"{existing_task}/{subtask}")
-        raise ValueError(f"Task {task} not found. Available tasks: {all_tasks}")
- if not os.path.exists(config_path):
- raise ValueError(
- "You should include the config.yaml file in the task directory"
- )
- task_config = yaml.safe_load(open(config_path))
-
- for i, agent_configs in enumerate(task_config["agents"]):
- agent_configs["memory"] = load_memory(agent_configs.get("memory", {}))
- if agent_configs.get("tool_memory", None) is not None:
- agent_configs["tool_memory"] = load_memory(agent_configs["tool_memory"])
-        llm = load_llm(agent_configs.get("llm", {"llm_type": "text-davinci-003"}))
- agent_configs["llm"] = llm
-
- memory_manipulator = load_memory_manipulator(
- agent_configs.get("memory_manipulator", {})
- )
- agent_configs["memory_manipulator"] = memory_manipulator
-
- agent_configs["tools"] = load_tools(agent_configs.get("tools", []))
-
- # Build the output parser
- output_parser_config = agent_configs.get("output_parser", {"type": "dummy"})
- if output_parser_config.get("type", None) == "role_assigner":
- output_parser_config["cnt_critic_agents"] = task_config.get(
- "cnt_critic_agents", 0
- )
- output_parser_name = output_parser_config.pop("type", task)
- agent_configs["output_parser"] = output_parser_registry.build(
- output_parser_name, **output_parser_config
- )
-
- return task_config
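For reference, the loop at the end of prepare_task_config expects each entry under agents in config.yaml to map onto the loaders above roughly as sketched below; the values shown are just the defaults hard-coded in this file, not a real task definition:

example_agent_config = {
    "agent_type": "conversation",                         # consumed by load_agent / agent_registry
    "llm": {"llm_type": "text-davinci-003"},              # consumed by load_llm
    "memory": {"memory_type": "chat_history"},            # consumed by load_memory
    "memory_manipulator": {"memory_manipulator_type": "basic"},
    "tools": [],                                          # list of {"tool_name", "tool_url"} dicts for load_tools
    "output_parser": {"type": "dummy"},                   # resolved via output_parser_registry
}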
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/runcommands.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/runcommands.d.ts
deleted file mode 100644
index c500a01499609516f3b8a0cead1dae4372ee564b..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/runcommands.d.ts
+++ /dev/null
@@ -1,2 +0,0 @@
-import RunCommands from './logic/runcommands/RunCommands';
-export default RunCommands;
\ No newline at end of file
diff --git a/spaces/Aki004/herta-so-vits/inference/infer_tool.py b/spaces/Aki004/herta-so-vits/inference/infer_tool.py
deleted file mode 100644
index 91561cfbfc61f3bf7334b10e8e7242574c5ed061..0000000000000000000000000000000000000000
--- a/spaces/Aki004/herta-so-vits/inference/infer_tool.py
+++ /dev/null
@@ -1,354 +0,0 @@
-import hashlib
-import io
-import json
-import logging
-import os
-import time
-from pathlib import Path
-from inference import slicer
-import gc
-
-import librosa
-import numpy as np
-# import onnxruntime
-import parselmouth
-import soundfile
-import torch
-import torchaudio
-
-import cluster
-from hubert import hubert_model
-import utils
-from models import SynthesizerTrn
-
-logging.getLogger('matplotlib').setLevel(logging.WARNING)
-
-
-def read_temp(file_name):
- if not os.path.exists(file_name):
- with open(file_name, "w") as f:
- f.write(json.dumps({"info": "temp_dict"}))
- return {}
- else:
- try:
- with open(file_name, "r") as f:
- data = f.read()
- data_dict = json.loads(data)
- if os.path.getsize(file_name) > 50 * 1024 * 1024:
- f_name = file_name.replace("\\", "/").split("/")[-1]
- print(f"clean {f_name}")
- for wav_hash in list(data_dict.keys()):
- if int(time.time()) - int(data_dict[wav_hash]["time"]) > 14 * 24 * 3600:
- del data_dict[wav_hash]
- except Exception as e:
- print(e)
- print(f"{file_name} error,auto rebuild file")
- data_dict = {"info": "temp_dict"}
- return data_dict
-
-
-def write_temp(file_name, data):
- with open(file_name, "w") as f:
- f.write(json.dumps(data))
-
-
-def timeit(func):
- def run(*args, **kwargs):
- t = time.time()
- res = func(*args, **kwargs)
-        print('executing \'%s\' took %.3fs' % (func.__name__, time.time() - t))
- return res
-
- return run
-
-
-def format_wav(audio_path):
- if Path(audio_path).suffix == '.wav':
- return
- raw_audio, raw_sample_rate = librosa.load(audio_path, mono=True, sr=None)
- soundfile.write(Path(audio_path).with_suffix(".wav"), raw_audio, raw_sample_rate)
-
-
-def get_end_file(dir_path, end):
- file_lists = []
- for root, dirs, files in os.walk(dir_path):
- files = [f for f in files if f[0] != '.']
- dirs[:] = [d for d in dirs if d[0] != '.']
- for f_file in files:
- if f_file.endswith(end):
- file_lists.append(os.path.join(root, f_file).replace("\\", "/"))
- return file_lists
-
-
-def get_md5(content):
- return hashlib.new("md5", content).hexdigest()
-
-def fill_a_to_b(a, b):
- if len(a) < len(b):
- for _ in range(0, len(b) - len(a)):
- a.append(a[0])
-
-def mkdir(paths: list):
- for path in paths:
- if not os.path.exists(path):
- os.mkdir(path)
-
-def pad_array(arr, target_length):
- current_length = arr.shape[0]
- if current_length >= target_length:
- return arr
- else:
- pad_width = target_length - current_length
- pad_left = pad_width // 2
- pad_right = pad_width - pad_left
- padded_arr = np.pad(arr, (pad_left, pad_right), 'constant', constant_values=(0, 0))
- return padded_arr
-
-def split_list_by_n(list_collection, n, pre=0):
- for i in range(0, len(list_collection), n):
- yield list_collection[i-pre if i-pre>=0 else i: i + n]
-
-
-class F0FilterException(Exception):
- pass
-
-class Svc(object):
- def __init__(self, net_g_path, config_path,
- device=None,
- cluster_model_path="logs/44k/kmeans_10000.pt",
- nsf_hifigan_enhance = False
- ):
- self.net_g_path = net_g_path
- if device is None:
- self.dev = torch.device("cuda" if torch.cuda.is_available() else "cpu")
- else:
- self.dev = torch.device(device)
- self.net_g_ms = None
- self.hps_ms = utils.get_hparams_from_file(config_path)
- self.target_sample = self.hps_ms.data.sampling_rate
- self.hop_size = self.hps_ms.data.hop_length
- self.spk2id = self.hps_ms.spk
- self.nsf_hifigan_enhance = nsf_hifigan_enhance
- # load hubert
- self.hubert_model = utils.get_hubert_model().to(self.dev)
- self.load_model()
- if os.path.exists(cluster_model_path):
- self.cluster_model = cluster.get_cluster_model(cluster_model_path)
- if self.nsf_hifigan_enhance:
- from modules.enhancer import Enhancer
- self.enhancer = Enhancer('nsf-hifigan', 'pretrain/nsf_hifigan/model',device=self.dev)
-
- def load_model(self):
- # get model configuration
- self.net_g_ms = SynthesizerTrn(
- self.hps_ms.data.filter_length // 2 + 1,
- self.hps_ms.train.segment_size // self.hps_ms.data.hop_length,
- **self.hps_ms.model)
- _ = utils.load_checkpoint(self.net_g_path, self.net_g_ms, None)
- if "half" in self.net_g_path and torch.cuda.is_available():
- _ = self.net_g_ms.half().eval().to(self.dev)
- else:
- _ = self.net_g_ms.eval().to(self.dev)
-
-
-
-    def get_unit_f0(self, in_path, tran, cluster_infer_ratio, speaker, f0_filter, F0_mean_pooling, cr_threshold=0.05):
-
- wav, sr = librosa.load(in_path, sr=self.target_sample)
-
- if F0_mean_pooling == True:
- f0, uv = utils.compute_f0_uv_torchcrepe(torch.FloatTensor(wav), sampling_rate=self.target_sample, hop_length=self.hop_size,device=self.dev,cr_threshold = cr_threshold)
- if f0_filter and sum(f0) == 0:
- raise F0FilterException("No voice detected")
- f0 = torch.FloatTensor(list(f0))
- uv = torch.FloatTensor(list(uv))
- if F0_mean_pooling == False:
- f0 = utils.compute_f0_parselmouth(wav, sampling_rate=self.target_sample, hop_length=self.hop_size)
- if f0_filter and sum(f0) == 0:
- raise F0FilterException("No voice detected")
- f0, uv = utils.interpolate_f0(f0)
- f0 = torch.FloatTensor(f0)
- uv = torch.FloatTensor(uv)
-
- f0 = f0 * 2 ** (tran / 12)
- f0 = f0.unsqueeze(0).to(self.dev)
- uv = uv.unsqueeze(0).to(self.dev)
-
- wav16k = librosa.resample(wav, orig_sr=self.target_sample, target_sr=16000)
- wav16k = torch.from_numpy(wav16k).to(self.dev)
- c = utils.get_hubert_content(self.hubert_model, wav_16k_tensor=wav16k)
- c = utils.repeat_expand_2d(c.squeeze(0), f0.shape[1])
-
- if cluster_infer_ratio !=0:
- cluster_c = cluster.get_cluster_center_result(self.cluster_model, c.cpu().numpy().T, speaker).T
- cluster_c = torch.FloatTensor(cluster_c).to(self.dev)
- c = cluster_infer_ratio * cluster_c + (1 - cluster_infer_ratio) * c
-
- c = c.unsqueeze(0)
- return c, f0, uv
-
- def infer(self, speaker, tran, raw_path,
- cluster_infer_ratio=0,
- auto_predict_f0=False,
- noice_scale=0.4,
- f0_filter=False,
- F0_mean_pooling=False,
- enhancer_adaptive_key = 0,
- cr_threshold = 0.05
- ):
-
- speaker_id = self.spk2id.__dict__.get(speaker)
- if not speaker_id and type(speaker) is int:
- if len(self.spk2id.__dict__) >= speaker:
- speaker_id = speaker
- sid = torch.LongTensor([int(speaker_id)]).to(self.dev).unsqueeze(0)
- c, f0, uv = self.get_unit_f0(raw_path, tran, cluster_infer_ratio, speaker, f0_filter,F0_mean_pooling,cr_threshold=cr_threshold)
- if "half" in self.net_g_path and torch.cuda.is_available():
- c = c.half()
- with torch.no_grad():
- start = time.time()
- audio = self.net_g_ms.infer(c, f0=f0, g=sid, uv=uv, predict_f0=auto_predict_f0, noice_scale=noice_scale)[0,0].data.float()
- if self.nsf_hifigan_enhance:
- audio, _ = self.enhancer.enhance(
- audio[None,:],
- self.target_sample,
- f0[:,:,None],
- self.hps_ms.data.hop_length,
- adaptive_key = enhancer_adaptive_key)
- use_time = time.time() - start
- print("vits use time:{}".format(use_time))
- return audio, audio.shape[-1]
-
- def clear_empty(self):
- # clean up vram
- torch.cuda.empty_cache()
-
- def unload_model(self):
- # unload model
- self.net_g_ms = self.net_g_ms.to("cpu")
- del self.net_g_ms
- if hasattr(self,"enhancer"):
- self.enhancer.enhancer = self.enhancer.enhancer.to("cpu")
- del self.enhancer.enhancer
- del self.enhancer
- gc.collect()
-
- def slice_inference(self,
- raw_audio_path,
- spk,
- tran,
- slice_db,
- cluster_infer_ratio,
- auto_predict_f0,
- noice_scale,
- pad_seconds=0.5,
- clip_seconds=0,
- lg_num=0,
- lgr_num =0.75,
- F0_mean_pooling = False,
- enhancer_adaptive_key = 0,
- cr_threshold = 0.05
- ):
- wav_path = raw_audio_path
- chunks = slicer.cut(wav_path, db_thresh=slice_db)
- audio_data, audio_sr = slicer.chunks2audio(wav_path, chunks)
- per_size = int(clip_seconds*audio_sr)
- lg_size = int(lg_num*audio_sr)
- lg_size_r = int(lg_size*lgr_num)
- lg_size_c_l = (lg_size-lg_size_r)//2
- lg_size_c_r = lg_size-lg_size_r-lg_size_c_l
- lg = np.linspace(0,1,lg_size_r) if lg_size!=0 else 0
-
- audio = []
- for (slice_tag, data) in audio_data:
- print(f'#=====segment start, {round(len(data) / audio_sr, 3)}s======')
-            # pad
- length = int(np.ceil(len(data) / audio_sr * self.target_sample))
- if slice_tag:
- print('jump empty segment')
- _audio = np.zeros(length)
- audio.extend(list(pad_array(_audio, length)))
- continue
- if per_size != 0:
- datas = split_list_by_n(data, per_size,lg_size)
- else:
- datas = [data]
- for k,dat in enumerate(datas):
- per_length = int(np.ceil(len(dat) / audio_sr * self.target_sample)) if clip_seconds!=0 else length
- if clip_seconds!=0: print(f'###=====segment clip start, {round(len(dat) / audio_sr, 3)}s======')
-                # pad
- pad_len = int(audio_sr * pad_seconds)
- dat = np.concatenate([np.zeros([pad_len]), dat, np.zeros([pad_len])])
- raw_path = io.BytesIO()
- soundfile.write(raw_path, dat, audio_sr, format="wav")
- raw_path.seek(0)
- out_audio, out_sr = self.infer(spk, tran, raw_path,
- cluster_infer_ratio=cluster_infer_ratio,
- auto_predict_f0=auto_predict_f0,
- noice_scale=noice_scale,
- F0_mean_pooling = F0_mean_pooling,
- enhancer_adaptive_key = enhancer_adaptive_key,
- cr_threshold = cr_threshold
- )
- _audio = out_audio.cpu().numpy()
- pad_len = int(self.target_sample * pad_seconds)
- _audio = _audio[pad_len:-pad_len]
- _audio = pad_array(_audio, per_length)
- if lg_size!=0 and k!=0:
- lg1 = audio[-(lg_size_r+lg_size_c_r):-lg_size_c_r] if lgr_num != 1 else audio[-lg_size:]
- lg2 = _audio[lg_size_c_l:lg_size_c_l+lg_size_r] if lgr_num != 1 else _audio[0:lg_size]
- lg_pre = lg1*(1-lg)+lg2*lg
- audio = audio[0:-(lg_size_r+lg_size_c_r)] if lgr_num != 1 else audio[0:-lg_size]
- audio.extend(lg_pre)
- _audio = _audio[lg_size_c_l+lg_size_r:] if lgr_num != 1 else _audio[lg_size:]
- audio.extend(list(_audio))
- return np.array(audio)
-
-class RealTimeVC:
- def __init__(self):
- self.last_chunk = None
- self.last_o = None
- self.chunk_len = 16000 # chunk length
- self.pre_len = 3840 # cross fade length, multiples of 640
-
- # Input and output are 1-dimensional numpy waveform arrays
-
- def process(self, svc_model, speaker_id, f_pitch_change, input_wav_path,
- cluster_infer_ratio=0,
- auto_predict_f0=False,
- noice_scale=0.4,
- f0_filter=False):
-
- import maad
- audio, sr = torchaudio.load(input_wav_path)
- audio = audio.cpu().numpy()[0]
- temp_wav = io.BytesIO()
- if self.last_chunk is None:
- input_wav_path.seek(0)
-
- audio, sr = svc_model.infer(speaker_id, f_pitch_change, input_wav_path,
- cluster_infer_ratio=cluster_infer_ratio,
- auto_predict_f0=auto_predict_f0,
- noice_scale=noice_scale,
- f0_filter=f0_filter)
-
- audio = audio.cpu().numpy()
- self.last_chunk = audio[-self.pre_len:]
- self.last_o = audio
- return audio[-self.chunk_len:]
- else:
- audio = np.concatenate([self.last_chunk, audio])
- soundfile.write(temp_wav, audio, sr, format="wav")
- temp_wav.seek(0)
-
- audio, sr = svc_model.infer(speaker_id, f_pitch_change, temp_wav,
- cluster_infer_ratio=cluster_infer_ratio,
- auto_predict_f0=auto_predict_f0,
- noice_scale=noice_scale,
- f0_filter=f0_filter)
-
- audio = audio.cpu().numpy()
- ret = maad.util.crossfade(self.last_o, audio, self.pre_len)
- self.last_chunk = audio[-self.pre_len:]
- self.last_o = audio
- return ret[self.chunk_len:2 * self.chunk_len]
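A hedged usage sketch of the offline path above; the checkpoint, config, speaker name and audio paths are placeholders rather than files that ship with this Space:

import soundfile

svc = Svc("logs/44k/G_0.pth", "configs/config.json")   # placeholder paths
audio = svc.slice_inference(
    raw_audio_path="input.wav",
    spk="speaker0",            # must be a key in the model's spk2id mapping
    tran=0,                    # pitch shift in semitones
    slice_db=-40,              # silence threshold passed to the slicer
    cluster_infer_ratio=0,
    auto_predict_f0=False,
    noice_scale=0.4,
)
soundfile.write("output.wav", audio, svc.target_sample)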
diff --git a/spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/modules.py b/spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/modules.py
deleted file mode 100644
index 9c7fd9cd6eb8b7e0ec0e08957e970744a374a924..0000000000000000000000000000000000000000
--- a/spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/modules.py
+++ /dev/null
@@ -1,390 +0,0 @@
-import copy
-import math
-import numpy as np
-import scipy
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-import commons
-from commons import init_weights, get_padding
-from transforms import piecewise_rational_quadratic_transform
-
-
-LRELU_SLOPE = 0.1
-
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
-
-
-class ConvReluNorm(nn.Module):
- def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-        assert n_layers > 1, "Number of layers should be larger than 1."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(
- nn.ReLU(),
- nn.Dropout(p_dropout))
- for _ in range(n_layers-1):
- self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DDSConv(nn.Module):
- """
-    Dilated and Depth-Separable Convolution
- """
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.):
- super().__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.convs_sep = nn.ModuleList()
- self.convs_1x1 = nn.ModuleList()
- self.norms_1 = nn.ModuleList()
- self.norms_2 = nn.ModuleList()
- for i in range(n_layers):
- dilation = kernel_size ** i
- padding = (kernel_size * dilation - dilation) // 2
- self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size,
- groups=channels, dilation=dilation, padding=padding
- ))
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
- self.norms_1.append(LayerNorm(channels))
- self.norms_2.append(LayerNorm(channels))
-
- def forward(self, x, x_mask, g=None):
- if g is not None:
- x = x + g
- for i in range(self.n_layers):
- y = self.convs_sep[i](x * x_mask)
- y = self.norms_1[i](y)
- y = F.gelu(y)
- y = self.convs_1x1[i](y)
- y = self.norms_2[i](y)
- y = F.gelu(y)
- y = self.drop(y)
- x = x + y
- return x * x_mask
-
-
-class WN(torch.nn.Module):
- def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0):
- super(WN, self).__init__()
- assert(kernel_size % 2 == 1)
-        self.hidden_channels = hidden_channels
-        self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1)
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight')
-
- for i in range(n_layers):
- dilation = dilation_rate ** i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size,
- dilation=dilation, padding=padding)
- in_layer = torch.nn.utils.weight_norm(in_layer, name='weight')
- self.in_layers.append(in_layer)
-
- # last one is not necessary
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight')
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask, g=None, **kwargs):
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:]
- else:
- g_l = torch.zeros_like(x_in)
-
- acts = commons.fused_add_tanh_sigmoid_multiply(
- x_in,
- g_l,
- n_channels_tensor)
- acts = self.drop(acts)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- res_acts = res_skip_acts[:,:self.hidden_channels,:]
- x = (x + res_acts) * x_mask
- output = output + res_skip_acts[:,self.hidden_channels:,:]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- if self.gin_channels != 0:
- torch.nn.utils.remove_weight_norm(self.cond_layer)
- for l in self.in_layers:
- torch.nn.utils.remove_weight_norm(l)
- for l in self.res_skip_layers:
- torch.nn.utils.remove_weight_norm(l)
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.convs1 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2])))
- ])
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1)))
- ])
- self.convs2.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c2(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.convs = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1])))
- ])
- self.convs.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Log(nn.Module):
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
- logdet = torch.sum(-y, [1, 2])
- return y, logdet
- else:
- x = torch.exp(x) * x_mask
- return x
-
-
-class Flip(nn.Module):
- def forward(self, x, *args, reverse=False, **kwargs):
- x = torch.flip(x, [1])
- if not reverse:
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
- return x, logdet
- else:
- return x
-
-
-class ElementwiseAffine(nn.Module):
- def __init__(self, channels):
- super().__init__()
- self.channels = channels
- self.m = nn.Parameter(torch.zeros(channels,1))
- self.logs = nn.Parameter(torch.zeros(channels,1))
-
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = self.m + torch.exp(self.logs) * x
- y = y * x_mask
- logdet = torch.sum(self.logs * x_mask, [1,2])
- return y, logdet
- else:
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
- return x
-
-
-class ResidualCouplingLayer(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=0,
- gin_channels=0,
- mean_only=False):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels)
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels]*2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1,2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
-
-
-class ConvFlow(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0):
- super().__init__()
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.num_bins = num_bins
- self.tail_bound = tail_bound
- self.half_channels = in_channels // 2
-
- self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
- self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.)
- self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0)
- h = self.convs(h, x_mask, g=g)
- h = self.proj(h) * x_mask
-
- b, c, t = x0.shape
- h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?]
-
- unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_derivatives = h[..., 2 * self.num_bins:]
-
- x1, logabsdet = piecewise_rational_quadratic_transform(x1,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=reverse,
- tails='linear',
- tail_bound=self.tail_bound
- )
-
- x = torch.cat([x0, x1], 1) * x_mask
- logdet = torch.sum(logabsdet * x_mask, [1,2])
- if not reverse:
- return x, logdet
- else:
- return x
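As a small sanity check (not part of the original file), the reverse flag shared by the flow blocks above can be exercised like this; with freshly initialised parameters the round trip recovers the input exactly:

import torch

layer = ElementwiseAffine(channels=4)
x = torch.randn(2, 4, 10)                  # [batch, channels, frames]
x_mask = torch.ones(2, 1, 10)              # every frame is valid
y, logdet = layer(x, x_mask)               # forward pass also returns the log-determinant
x_rec = layer(y, x_mask, reverse=True)     # inverse pass
print(torch.allclose(x, x_rec, atol=1e-6)) # True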
diff --git a/spaces/AlhitawiMohammed22/E2E_OCR/README.md b/spaces/AlhitawiMohammed22/E2E_OCR/README.md
deleted file mode 100644
index 1724212764bc20f27c84f885eabb15b2ea0148b2..0000000000000000000000000000000000000000
--- a/spaces/AlhitawiMohammed22/E2E_OCR/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: E2E OCR
-emoji: 📈
-colorFrom: green
-colorTo: pink
-sdk: gradio
-sdk_version: 3.43.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Amon1/ChatGPTForAcadamic/functional_crazy.py b/spaces/Amon1/ChatGPTForAcadamic/functional_crazy.py
deleted file mode 100644
index 9c83b4104a395e35471895faf09edb15c0ea65b4..0000000000000000000000000000000000000000
--- a/spaces/Amon1/ChatGPTForAcadamic/functional_crazy.py
+++ /dev/null
@@ -1,108 +0,0 @@
-from toolbox import HotReload # HotReload means hot reloading: after editing a function plugin, the change takes effect without restarting the program
-
-def get_crazy_functionals():
-    ###################### Plugin group 1 ###########################
-    # [Plugin group 1]: the earliest project plugins, plus a few demos
- from crazy_functions.读文章写摘要 import 读文章写摘要
- from crazy_functions.生成函数注释 import 批量生成函数注释
- from crazy_functions.解析项目源代码 import 解析项目本身
- from crazy_functions.解析项目源代码 import 解析一个Python项目
- from crazy_functions.解析项目源代码 import 解析一个C项目的头文件
- from crazy_functions.解析项目源代码 import 解析一个C项目
- from crazy_functions.解析项目源代码 import 解析一个Golang项目
- from crazy_functions.解析项目源代码 import 解析一个Java项目
- from crazy_functions.解析项目源代码 import 解析一个Rect项目
- from crazy_functions.高级功能函数模板 import 高阶功能模板函数
- from crazy_functions.代码重写为全英文_多线程 import 全项目切换英文
-
- function_plugins = {
- "请解析并解构此项目本身(源码自译解)": {
- "AsButton": False, # 加入下拉菜单中
- "Function": 解析项目本身
- },
- "解析整个Py项目": {
- "Color": "stop", # 按钮颜色
- "Function": 解析一个Python项目
- },
- "解析整个C++项目头文件": {
- "Color": "stop", # 按钮颜色
- "Function": 解析一个C项目的头文件
- },
- "解析整个C++项目(.cpp/.h)": {
- "Color": "stop", # 按钮颜色
- "AsButton": False, # 加入下拉菜单中
- "Function": 解析一个C项目
- },
- "解析整个Go项目": {
- "Color": "stop", # 按钮颜色
- "AsButton": False, # 加入下拉菜单中
- "Function": 解析一个Golang项目
- },
- "解析整个Java项目": {
- "Color": "stop", # 按钮颜色
- "AsButton": False, # 加入下拉菜单中
- "Function": 解析一个Java项目
- },
- "解析整个Java项目": {
- "Color": "stop", # 按钮颜色
- "AsButton": False, # 加入下拉菜单中
- "Function": 解析一个Rect项目
- },
- "读Tex论文写摘要": {
- "Color": "stop", # 按钮颜色
- "Function": 读文章写摘要
- },
- "批量生成函数注释": {
- "Color": "stop", # 按钮颜色
- "Function": 批量生成函数注释
- },
- "[多线程demo] 把本项目源代码切换成全英文": {
-            # HotReload means hot reloading: edits to the plugin code take effect without restarting the program
- "Function": HotReload(全项目切换英文)
- },
- "[函数插件模板demo] 历史上的今天": {
-            # HotReload means hot reloading: edits to the plugin code take effect without restarting the program
- "Function": HotReload(高阶功能模板函数)
- },
- }
-    ###################### Plugin group 2 ###########################
-    # [Plugin group 2]: thoroughly tested, but still a little short of a polished feature set
- from crazy_functions.批量总结PDF文档 import 批量总结PDF文档
- from crazy_functions.批量总结PDF文档pdfminer import 批量总结PDF文档pdfminer
- from crazy_functions.总结word文档 import 总结word文档
- function_plugins.update({
- "[仅供开发调试] 批量总结PDF文档": {
- "Color": "stop",
- "Function": HotReload(批量总结PDF文档) # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效
- },
- "[仅供开发调试] 批量总结PDF文档pdfminer": {
- "Color": "stop",
- "AsButton": False, # 加入下拉菜单中
- "Function": HotReload(批量总结PDF文档pdfminer)
- },
- "[仅供开发调试] 批量总结Word文档": {
- "Color": "stop",
- "Function": HotReload(总结word文档)
- },
- })
-
-    ###################### Plugin group 3 ###########################
-    # [Plugin group 3]: function plugins that have not been fully tested yet go here
- try:
- from crazy_functions.下载arxiv论文翻译摘要 import 下载arxiv论文并翻译摘要
- function_plugins.update({
- "一键下载arxiv论文并翻译摘要(先在input输入编号,如1812.10695)": {
- "Color": "stop",
- "AsButton": False, # 加入下拉菜单中
- "Function": HotReload(下载arxiv论文并翻译摘要)
- }
- })
- except Exception as err:
-        print(f'[下载arxiv论文并翻译摘要] plugin import failed: {str(err)}')
-
-
-
-    ###################### Plugin group n ###########################
- return function_plugins
-
-
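For orientation, an additional entry could be registered the same way just before the dict is returned; the label "我的新插件" and the my_plugin function below are hypothetical, not part of the project:

function_plugins.update({
    "我的新插件": {
        "Color": "stop",                   # button color
        "AsButton": False,                 # show it in the dropdown menu
        "Function": HotReload(my_plugin),  # hot reload: edits apply without a restart
    }
})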
diff --git a/spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/src/__init__.py b/spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/src/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Anandhju-jayan/image-captioning-cloned/model.py b/spaces/Anandhju-jayan/image-captioning-cloned/model.py
deleted file mode 100644
index de223f021d5763f463961b8a1dccdeacdb64723a..0000000000000000000000000000000000000000
--- a/spaces/Anandhju-jayan/image-captioning-cloned/model.py
+++ /dev/null
@@ -1,149 +0,0 @@
-from transformers import AutoProcessor, AutoModelForCausalLM, BlipForConditionalGeneration
-
-class ImageCaptionModel:
- def __init__(
- self,
- device,
- processor,
- model,
- ) -> None:
- """
- Initializes the model for generating captions for images.
-
- -----
- Parameters:
- device: str
- The device to use for the model. Must be either "cpu" or "cuda".
- processor: transformers.AutoProcessor
- The preprocessor to use for the model.
- model: transformers.AutoModelForCausalLM or transformers.BlipForConditionalGeneration
- The model to use for generating captions.
-
- -----
- Returns:
- None
- """
- self.device = device
- self.processor = processor
- self.model = model
- self.model.to(self.device)
-
- def generate(
- self,
- image,
- num_captions: int = 1,
- max_length: int = 50,
- temperature: float = 1.0,
- top_k: int = 50,
- top_p: float = 1.0,
- repetition_penalty: float = 1.0,
- diversity_penalty: float = 0.0,
- ):
- """
- Generates captions for the given image.
-
- -----
- Parameters:
- preprocessor: transformers.PreTrainedTokenizerFast
- The preprocessor to use for the model.
- model: transformers.PreTrainedModel
- The model to use for generating captions.
- image: PIL.Image
- The image to generate captions for.
- num_captions: int
- The number of captions to generate.
- temperature: float
-                The temperature to use for sampling. The value used to modulate the next token probabilities in the model's generate method. Must be strictly positive. Defaults to 1.0.
- top_k: int
- The number of highest probability vocabulary tokens to keep for top-k-filtering. A large value of top_k will keep more probabilities for each token leading to a better but slower generation. Defaults to 50.
- top_p: float
- The value that will be used by default in the generate method of the model for top_p. If set to float < 1, only the most probable tokens with probabilities that add up to top_p or higher are kept for generation.
- repetition_penalty: float
- The parameter for repetition penalty. 1.0 means no penalty. Defaults to 1.0.
- diversity_penalty: float
- The parameter for diversity penalty. 0.0 means no penalty. Defaults to 0.0.
-
- """
- # Type checking and making sure the values are valid.
- assert type(num_captions) == int and num_captions > 0, "num_captions must be a positive integer."
- assert type(max_length) == int and max_length > 0, "max_length must be a positive integer."
- assert type(temperature) == float and temperature > 0.0, "temperature must be a positive float."
- assert type(top_k) == int and top_k > 0, "top_k must be a positive integer."
- assert type(top_p) == float and top_p > 0.0, "top_p must be a positive float."
- assert type(repetition_penalty) == float and repetition_penalty >= 1.0, "repetition_penalty must be a positive float greater than or equal to 1."
- assert type(diversity_penalty) == float and diversity_penalty >= 0.0, "diversity_penalty must be a non negative float."
-
- pixel_values = self.processor(images=image, return_tensors="pt").pixel_values.to(self.device) # Convert the image to pixel values.
-
- # Generate captions ids.
- if num_captions == 1:
- generated_ids = self.model.generate(
- pixel_values=pixel_values,
- max_length=max_length,
- num_return_sequences=1,
- temperature=temperature,
- top_k=top_k,
- top_p=top_p,
- )
- else:
- generated_ids = self.model.generate(
- pixel_values=pixel_values,
- max_length=max_length,
- num_beams=num_captions, # num_beams must be greater than or equal to num_captions and must be divisible by num_beam_groups.
- num_beam_groups=num_captions, # num_beam_groups is set to equal to num_captions so that all the captions are diverse
-                num_return_sequences=num_captions, # return one caption per beam group; plain beam search would otherwise yield near-duplicate captions.
- temperature=temperature,
- top_k=top_k,
- top_p=top_p,
- repetition_penalty=repetition_penalty,
- diversity_penalty=diversity_penalty,
- )
-
- # Decode the generated ids to get the captions.
- generated_caption = self.processor.batch_decode(generated_ids, skip_special_tokens=True)
-
- return generated_caption
-
-
-class GitBaseCocoModel(ImageCaptionModel):
- def __init__(self, device):
- """
- A wrapper class for the Git-Base-COCO model. It is a pretrained model for image captioning.
-
- -----
- Parameters:
- device: str
- The device to run the model on, either "cpu" or "cuda".
- checkpoint: str
- The checkpoint to load the model from.
-
- -----
- Returns:
- None
- """
- checkpoint = "microsoft/git-base-coco"
- processor = AutoProcessor.from_pretrained(checkpoint)
- model = AutoModelForCausalLM.from_pretrained(checkpoint)
- super().__init__(device, processor, model)
-
-
-class BlipBaseModel(ImageCaptionModel):
- def __init__(self, device):
- """
- A wrapper class for the Blip-Base model. It is a pretrained model for image captioning.
-
- -----
- Parameters:
- device: str
- The device to run the model on, either "cpu" or "cuda".
- checkpoint: str
- The checkpoint to load the model from.
-
- -----
- Returns:
- None
- """
- self.checkpoint = "Salesforce/blip-image-captioning-base"
- processor = AutoProcessor.from_pretrained(self.checkpoint)
- model = BlipForConditionalGeneration.from_pretrained(self.checkpoint)
- super().__init__(device, processor, model)
\ No newline at end of file
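A hedged usage sketch of the wrappers above; it assumes network access to download the microsoft/git-base-coco checkpoint, and "photo.jpg" is a placeholder path:

from PIL import Image

model = GitBaseCocoModel(device="cpu")
image = Image.open("photo.jpg").convert("RGB")
captions = model.generate(
    image,
    num_captions=3,          # >1 switches generate() to grouped beam search
    max_length=50,
    temperature=1.0,
    top_k=50,
    top_p=1.0,
    repetition_penalty=1.0,
    diversity_penalty=0.5,   # encourages the three captions to differ
)
print(captions)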
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/ddim.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/ddim.md
deleted file mode 100644
index c2bf95c4e566957821399983aac8329de5de66b4..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/ddim.md
+++ /dev/null
@@ -1,29 +0,0 @@
-
-
-# DDIM
-
-[Denoising Diffusion Implicit Models](https://huggingface.co/papers/2010.02502) (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon.
-
-The abstract from the paper is:
-
-*Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample. To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process. We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. We empirically demonstrate that DDIMs can produce high quality samples 10× to 50× faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space.*
-
-The original codebase can be found at [ermongroup/ddim](https://github.com/ermongroup/ddim).
-
-## DDIMPipeline
-[[autodoc]] DDIMPipeline
- - all
- - __call__
-
-## ImagePipelineOutput
-[[autodoc]] pipelines.ImagePipelineOutput
\ No newline at end of file
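For context, the documented pipeline is typically driven like this (a minimal sketch using the public google/ddpm-cifar10-32 checkpoint; DDIMPipeline swaps the checkpoint's scheduler for a DDIM scheduler internally):

from diffusers import DDIMPipeline

pipeline = DDIMPipeline.from_pretrained("google/ddpm-cifar10-32")
image = pipeline(num_inference_steps=50, eta=0.0).images[0]   # eta=0.0 gives deterministic DDIM sampling
image.save("ddim_sample.png")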
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/_base_/models/rpn_r50_fpn.py b/spaces/Andy1621/uniformer_image_detection/configs/_base_/models/rpn_r50_fpn.py
deleted file mode 100644
index 22193c1362dc70663034919a7f4397a37682dc85..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/_base_/models/rpn_r50_fpn.py
+++ /dev/null
@@ -1,59 +0,0 @@
-# model settings
-
-model = dict(
- type='RPN',
- pretrained='torchvision://resnet50',
- backbone=dict(
- type='ResNet',
- depth=50,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=1,
- norm_cfg=dict(type='BN', requires_grad=True),
- norm_eval=True,
- style='pytorch'),
- neck=dict(
- type='FPN',
- in_channels=[256, 512, 1024, 2048],
- out_channels=256,
- num_outs=5),
- rpn_head=dict(
- type='RPNHead',
- in_channels=256,
- feat_channels=256,
- anchor_generator=dict(
- type='AnchorGenerator',
- scales=[8],
- ratios=[0.5, 1.0, 2.0],
- strides=[4, 8, 16, 32, 64]),
- bbox_coder=dict(
- type='DeltaXYWHBBoxCoder',
- target_means=[.0, .0, .0, .0],
- target_stds=[1.0, 1.0, 1.0, 1.0]),
- loss_cls=dict(
- type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
- loss_bbox=dict(type='L1Loss', loss_weight=1.0)),
- # model training and testing settings
- train_cfg=dict(
- rpn=dict(
- assigner=dict(
- type='MaxIoUAssigner',
- pos_iou_thr=0.7,
- neg_iou_thr=0.3,
- min_pos_iou=0.3,
- ignore_iof_thr=-1),
- sampler=dict(
- type='RandomSampler',
- num=256,
- pos_fraction=0.5,
- neg_pos_ub=-1,
- add_gt_as_proposals=False),
- allowed_border=0,
- pos_weight=-1,
- debug=False)),
- test_cfg=dict(
- rpn=dict(
- nms_pre=2000,
- max_per_img=1000,
- nms=dict(type='nms', iou_threshold=0.7),
- min_bbox_size=0)))
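A hedged sketch of how a base model file like this is normally consumed in MMDetection-style configs: a downstream config lists it under _base_ and only overrides what changes. The dataset and schedule files named below are the stock _base_ entries and may differ per project.

_base_ = [
    '../_base_/models/rpn_r50_fpn.py',
    '../_base_/datasets/coco_detection.py',
    '../_base_/schedules/schedule_1x.py',
    '../_base_/default_runtime.py',
]
# example override: use a larger anchor scale without touching the base file
model = dict(rpn_head=dict(anchor_generator=dict(scales=[16])))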
diff --git a/spaces/AriaMei/TTSdemo/data_utils.py b/spaces/AriaMei/TTSdemo/data_utils.py
deleted file mode 100644
index 9b30eca29110d4f67f5dbad6a9de47ffc3466612..0000000000000000000000000000000000000000
--- a/spaces/AriaMei/TTSdemo/data_utils.py
+++ /dev/null
@@ -1,261 +0,0 @@
-import time
-import os
-import random
-import numpy as np
-import torch
-import torch.utils.data
-
-import commons
-from mel_processing import spectrogram_torch
-from utils import load_wav_to_torch, load_filepaths_and_text
-from text import text_to_sequence, cleaned_text_to_sequence
-
-"""Multi speaker version"""
-class TextAudioSpeakerLoader(torch.utils.data.Dataset):
- """
- 1) loads audio, speaker_id, text pairs
- 2) normalizes text and converts them to sequences of integers
- 3) computes spectrograms from audio files.
- """
- def __init__(self, audiopaths_sid_text, hparams):
- self.audiopaths_sid_text = load_filepaths_and_text(audiopaths_sid_text)
- self.text_cleaners = hparams.text_cleaners
- self.max_wav_value = hparams.max_wav_value
- self.sampling_rate = hparams.sampling_rate
- self.filter_length = hparams.filter_length
- self.hop_length = hparams.hop_length
- self.win_length = hparams.win_length
- self.sampling_rate = hparams.sampling_rate
-
- self.cleaned_text = getattr(hparams, "cleaned_text", False)
-
- self.add_blank = hparams.add_blank
- self.min_text_len = getattr(hparams, "min_text_len", 1)
- self.max_text_len = getattr(hparams, "max_text_len", 190)
-
- random.seed(1234)
- random.shuffle(self.audiopaths_sid_text)
- self._filter()
-
- def _filter(self):
- """
- Filter text & store spec lengths
- """
- # Store spectrogram lengths for Bucketing
- # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2)
- # spec_length = wav_length // hop_length
-
- audiopaths_sid_text_new = []
- lengths = []
- for audiopath, sid, text in self.audiopaths_sid_text:
- if self.min_text_len <= len(text) and len(text) <= self.max_text_len:
- audiopaths_sid_text_new.append([audiopath, sid, text])
- lengths.append(os.path.getsize(audiopath) // (2 * self.hop_length))
- self.audiopaths_sid_text = audiopaths_sid_text_new
- self.lengths = lengths
-
- def get_audio_text_speaker_pair(self, audiopath_sid_text):
- # separate filename, speaker_id and text
- audiopath, sid, text = audiopath_sid_text[0], audiopath_sid_text[1], audiopath_sid_text[2]
- text = self.get_text(text)
- spec, wav = self.get_audio(audiopath)
- sid = self.get_sid(sid)
- emo = torch.FloatTensor(np.load(audiopath+".emo.npy"))
- return (text, spec, wav, sid, emo)
-
- def get_audio(self, filename):
- audio, sampling_rate = load_wav_to_torch(filename)
- if sampling_rate != self.sampling_rate:
- raise ValueError("{} {} SR doesn't match target {} SR".format(
- sampling_rate, self.sampling_rate))
- audio_norm = audio / self.max_wav_value
- audio_norm = audio_norm.unsqueeze(0)
- spec_filename = filename.replace(".wav", ".spec.pt")
- if os.path.exists(spec_filename):
- spec = torch.load(spec_filename)
- else:
- spec = spectrogram_torch(audio_norm, self.filter_length,
- self.sampling_rate, self.hop_length, self.win_length,
- center=False)
- spec = torch.squeeze(spec, 0)
- torch.save(spec, spec_filename)
- return spec, audio_norm
-
- def get_text(self, text):
- if self.cleaned_text:
- text_norm = cleaned_text_to_sequence(text)
- else:
- text_norm = text_to_sequence(text, self.text_cleaners)
- if self.add_blank:
- text_norm = commons.intersperse(text_norm, 0)
- text_norm = torch.LongTensor(text_norm)
- return text_norm
-
- def get_sid(self, sid):
- sid = torch.LongTensor([int(sid)])
- return sid
-
- def __getitem__(self, index):
- return self.get_audio_text_speaker_pair(self.audiopaths_sid_text[index])
-
- def __len__(self):
- return len(self.audiopaths_sid_text)
-
-
-class TextAudioSpeakerCollate():
- """ Zero-pads model inputs and targets
- """
- def __init__(self, return_ids=False):
- self.return_ids = return_ids
-
- def __call__(self, batch):
- """Collate's training batch from normalized text, audio and speaker identities
- PARAMS
- ------
- batch: [text_normalized, spec_normalized, wav_normalized, sid]
- """
- # Right zero-pad all one-hot text sequences to max input length
- _, ids_sorted_decreasing = torch.sort(
- torch.LongTensor([x[1].size(1) for x in batch]),
- dim=0, descending=True)
-
- max_text_len = max([len(x[0]) for x in batch])
- max_spec_len = max([x[1].size(1) for x in batch])
- max_wav_len = max([x[2].size(1) for x in batch])
-
- text_lengths = torch.LongTensor(len(batch))
- spec_lengths = torch.LongTensor(len(batch))
- wav_lengths = torch.LongTensor(len(batch))
- sid = torch.LongTensor(len(batch))
-
- text_padded = torch.LongTensor(len(batch), max_text_len)
- spec_padded = torch.FloatTensor(len(batch), batch[0][1].size(0), max_spec_len)
- wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len)
- emo = torch.FloatTensor(len(batch), 1024)
-
- text_padded.zero_()
- spec_padded.zero_()
- wav_padded.zero_()
- emo.zero_()
-
- for i in range(len(ids_sorted_decreasing)):
- row = batch[ids_sorted_decreasing[i]]
-
- text = row[0]
- text_padded[i, :text.size(0)] = text
- text_lengths[i] = text.size(0)
-
- spec = row[1]
- spec_padded[i, :, :spec.size(1)] = spec
- spec_lengths[i] = spec.size(1)
-
- wav = row[2]
- wav_padded[i, :, :wav.size(1)] = wav
- wav_lengths[i] = wav.size(1)
-
- sid[i] = row[3]
-
- emo[i, :] = row[4]
-
- if self.return_ids:
- return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, sid, ids_sorted_decreasing
-        return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, sid, emo
-
-
-class DistributedBucketSampler(torch.utils.data.distributed.DistributedSampler):
- """
- Maintain similar input lengths in a batch.
- Length groups are specified by boundaries.
- Ex) boundaries = [b1, b2, b3] -> any batch is included either {x | b1 < length(x) <=b2} or {x | b2 < length(x) <= b3}.
-
- It removes samples which are not included in the boundaries.
- Ex) boundaries = [b1, b2, b3] -> any x s.t. length(x) <= b1 or length(x) > b3 are discarded.
- """
- def __init__(self, dataset, batch_size, boundaries, num_replicas=None, rank=None, shuffle=True):
- super().__init__(dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle)
- self.lengths = dataset.lengths
- self.batch_size = batch_size
- self.boundaries = boundaries
-
- self.buckets, self.num_samples_per_bucket = self._create_buckets()
- self.total_size = sum(self.num_samples_per_bucket)
- self.num_samples = self.total_size // self.num_replicas
-
- def _create_buckets(self):
- buckets = [[] for _ in range(len(self.boundaries) - 1)]
- for i in range(len(self.lengths)):
- length = self.lengths[i]
- idx_bucket = self._bisect(length)
- if idx_bucket != -1:
- buckets[idx_bucket].append(i)
-
- for i in range(len(buckets) - 1, 0, -1):
- if len(buckets[i]) == 0:
- buckets.pop(i)
- self.boundaries.pop(i+1)
-
- num_samples_per_bucket = []
- for i in range(len(buckets)):
- len_bucket = len(buckets[i])
- total_batch_size = self.num_replicas * self.batch_size
- rem = (total_batch_size - (len_bucket % total_batch_size)) % total_batch_size
- num_samples_per_bucket.append(len_bucket + rem)
- return buckets, num_samples_per_bucket
-
- def __iter__(self):
- # deterministically shuffle based on epoch
- g = torch.Generator()
- g.manual_seed(self.epoch)
-
- indices = []
- if self.shuffle:
- for bucket in self.buckets:
- indices.append(torch.randperm(len(bucket), generator=g).tolist())
- else:
- for bucket in self.buckets:
- indices.append(list(range(len(bucket))))
-
- batches = []
- for i in range(len(self.buckets)):
- bucket = self.buckets[i]
- len_bucket = len(bucket)
- ids_bucket = indices[i]
- num_samples_bucket = self.num_samples_per_bucket[i]
-
- # add extra samples to make it evenly divisible
- rem = num_samples_bucket - len_bucket
- ids_bucket = ids_bucket + ids_bucket * (rem // len_bucket) + ids_bucket[:(rem % len_bucket)]
-
- # subsample
- ids_bucket = ids_bucket[self.rank::self.num_replicas]
-
- # batching
- for j in range(len(ids_bucket) // self.batch_size):
- batch = [bucket[idx] for idx in ids_bucket[j*self.batch_size:(j+1)*self.batch_size]]
- batches.append(batch)
-
- if self.shuffle:
- batch_ids = torch.randperm(len(batches), generator=g).tolist()
- batches = [batches[i] for i in batch_ids]
- self.batches = batches
-
- assert len(self.batches) * self.batch_size == self.num_samples
- return iter(self.batches)
-
- def _bisect(self, x, lo=0, hi=None):
- if hi is None:
- hi = len(self.boundaries) - 1
-
- if hi > lo:
- mid = (hi + lo) // 2
- if self.boundaries[mid] < x and x <= self.boundaries[mid+1]:
- return mid
- elif x <= self.boundaries[mid]:
- return self._bisect(x, lo, mid)
- else:
- return self._bisect(x, mid + 1, hi)
- else:
- return -1
-
- def __len__(self):
- return self.num_samples // self.batch_size
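A hedged sketch of how these three pieces are usually wired together for single-GPU training; the config and filelist paths are placeholders, and it assumes the repo's usual utils.get_hparams_from_file helper:

import utils
from torch.utils.data import DataLoader

hps = utils.get_hparams_from_file("configs/config.json")
train_dataset = TextAudioSpeakerLoader("filelists/train.txt", hps.data)
collate_fn = TextAudioSpeakerCollate()
train_sampler = DistributedBucketSampler(
    train_dataset,
    batch_size=16,
    boundaries=[32, 300, 400, 500, 600, 700, 800, 900, 1000],
    num_replicas=1,
    rank=0,
    shuffle=True,
)
train_loader = DataLoader(
    train_dataset,
    num_workers=4,
    collate_fn=collate_fn,
    batch_sampler=train_sampler,
)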
diff --git a/spaces/Ash58947/Jan/README.md b/spaces/Ash58947/Jan/README.md
deleted file mode 100644
index a4f4a1ebbc29e3d0a4330f092c7a2de63ff6a906..0000000000000000000000000000000000000000
--- a/spaces/Ash58947/Jan/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Jan
-emoji: 📈
-colorFrom: purple
-colorTo: green
-sdk: docker
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/models/target_python.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/models/target_python.py
deleted file mode 100644
index 744bd7ef58b4870406fcef8cb3b3667548a0ccea..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/models/target_python.py
+++ /dev/null
@@ -1,110 +0,0 @@
-import sys
-from typing import List, Optional, Tuple
-
-from pip._vendor.packaging.tags import Tag
-
-from pip._internal.utils.compatibility_tags import get_supported, version_info_to_nodot
-from pip._internal.utils.misc import normalize_version_info
-
-
-class TargetPython:
-
- """
- Encapsulates the properties of a Python interpreter one is targeting
- for a package install, download, etc.
- """
-
- __slots__ = [
- "_given_py_version_info",
- "abis",
- "implementation",
- "platforms",
- "py_version",
- "py_version_info",
- "_valid_tags",
- ]
-
- def __init__(
- self,
- platforms: Optional[List[str]] = None,
- py_version_info: Optional[Tuple[int, ...]] = None,
- abis: Optional[List[str]] = None,
- implementation: Optional[str] = None,
- ) -> None:
- """
- :param platforms: A list of strings or None. If None, searches for
- packages that are supported by the current system. Otherwise, will
- find packages that can be built on the platforms passed in. These
- packages will only be downloaded for distribution: they will
- not be built locally.
- :param py_version_info: An optional tuple of ints representing the
- Python version information to use (e.g. `sys.version_info[:3]`).
- This can have length 1, 2, or 3 when provided.
- :param abis: A list of strings or None. This is passed to
- compatibility_tags.py's get_supported() function as is.
- :param implementation: A string or None. This is passed to
- compatibility_tags.py's get_supported() function as is.
- """
- # Store the given py_version_info for when we call get_supported().
- self._given_py_version_info = py_version_info
-
- if py_version_info is None:
- py_version_info = sys.version_info[:3]
- else:
- py_version_info = normalize_version_info(py_version_info)
-
- py_version = ".".join(map(str, py_version_info[:2]))
-
- self.abis = abis
- self.implementation = implementation
- self.platforms = platforms
- self.py_version = py_version
- self.py_version_info = py_version_info
-
- # This is used to cache the return value of get_tags().
- self._valid_tags: Optional[List[Tag]] = None
-
- def format_given(self) -> str:
- """
- Format the given, non-None attributes for display.
- """
- display_version = None
- if self._given_py_version_info is not None:
- display_version = ".".join(
- str(part) for part in self._given_py_version_info
- )
-
- key_values = [
- ("platforms", self.platforms),
- ("version_info", display_version),
- ("abis", self.abis),
- ("implementation", self.implementation),
- ]
- return " ".join(
- f"{key}={value!r}" for key, value in key_values if value is not None
- )
-
- def get_tags(self) -> List[Tag]:
- """
- Return the supported PEP 425 tags to check wheel candidates against.
-
- The tags are returned in order of preference (most preferred first).
- """
- if self._valid_tags is None:
- # Pass versions=None if no py_version_info was given since
- # versions=None uses special default logic.
- py_version_info = self._given_py_version_info
- if py_version_info is None:
- version = None
- else:
- version = version_info_to_nodot(py_version_info)
-
- tags = get_supported(
- version=version,
- platforms=self.platforms,
- abis=self.abis,
- impl=self.implementation,
- )
- self._valid_tags = tags
-
- return self._valid_tags
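A hedged sketch of how this class is exercised; note that it is a pip-internal helper, so this is illustrative rather than a supported public API:

tp = TargetPython(py_version_info=(3, 11))
print(tp.format_given())   # -> "version_info='3.11'"
tags = tp.get_tags()       # PEP 425 tags, most preferred first
print(tags[0])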
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/utils/deprecation.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/utils/deprecation.py
deleted file mode 100644
index 72bd6f25a554b303d0bf5028145cf3a5c71b3e06..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/utils/deprecation.py
+++ /dev/null
@@ -1,120 +0,0 @@
-"""
-A module that implements tooling to enable easy warnings about deprecations.
-"""
-
-import logging
-import warnings
-from typing import Any, Optional, TextIO, Type, Union
-
-from pip._vendor.packaging.version import parse
-
-from pip import __version__ as current_version # NOTE: tests patch this name.
-
-DEPRECATION_MSG_PREFIX = "DEPRECATION: "
-
-
-class PipDeprecationWarning(Warning):
- pass
-
-
-_original_showwarning: Any = None
-
-
-# Warnings <-> Logging Integration
-def _showwarning(
- message: Union[Warning, str],
- category: Type[Warning],
- filename: str,
- lineno: int,
- file: Optional[TextIO] = None,
- line: Optional[str] = None,
-) -> None:
- if file is not None:
- if _original_showwarning is not None:
- _original_showwarning(message, category, filename, lineno, file, line)
- elif issubclass(category, PipDeprecationWarning):
- # We use a specially named logger which will handle all of the
- # deprecation messages for pip.
- logger = logging.getLogger("pip._internal.deprecations")
- logger.warning(message)
- else:
- _original_showwarning(message, category, filename, lineno, file, line)
-
-
-def install_warning_logger() -> None:
- # Enable our Deprecation Warnings
- warnings.simplefilter("default", PipDeprecationWarning, append=True)
-
- global _original_showwarning
-
- if _original_showwarning is None:
- _original_showwarning = warnings.showwarning
- warnings.showwarning = _showwarning
-
-
-def deprecated(
- *,
- reason: str,
- replacement: Optional[str],
- gone_in: Optional[str],
- feature_flag: Optional[str] = None,
- issue: Optional[int] = None,
-) -> None:
- """Helper to deprecate existing functionality.
-
- reason:
- Textual reason shown to the user about why this functionality has
- been deprecated. Should be a complete sentence.
- replacement:
- Textual suggestion shown to the user about what alternative
- functionality they can use.
- gone_in:
- The version of pip in which this functionality should be removed.
- Raises an error if pip's current version is greater than or equal to
- this.
- feature_flag:
- Command-line flag of the form --use-feature={feature_flag} for testing
- upcoming functionality.
- issue:
- Issue number on the tracker that would serve as a useful place for
- users to find related discussion and provide feedback.
- """
-
- # Determine whether or not the feature is already gone in this version.
- is_gone = gone_in is not None and parse(current_version) >= parse(gone_in)
-
- message_parts = [
- (reason, f"{DEPRECATION_MSG_PREFIX}{{}}"),
- (
- gone_in,
- "pip {} will enforce this behaviour change."
- if not is_gone
- else "Since pip {}, this is no longer supported.",
- ),
- (
- replacement,
- "A possible replacement is {}.",
- ),
- (
- feature_flag,
- "You can use the flag --use-feature={} to test the upcoming behaviour."
- if not is_gone
- else None,
- ),
- (
- issue,
- "Discussion can be found at https://github.com/pypa/pip/issues/{}",
- ),
- ]
-
- message = " ".join(
- format_str.format(value)
- for value, format_str in message_parts
- if format_str is not None and value is not None
- )
-
- # Raise as an error if this behaviour is deprecated.
- if is_gone:
- raise PipDeprecationWarning(message)
-
- warnings.warn(message, category=PipDeprecationWarning, stacklevel=2)
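Since the whole module is removed here, a short sketch of how its two helpers fit together may be useful; the option name, version string, and issue number below are placeholders for illustration, not real pip deprecations:

```python
# Hypothetical call site for the deleted helpers; the values are illustrative only.
from pip._internal.utils.deprecation import deprecated, install_warning_logger

install_warning_logger()  # route PipDeprecationWarning through the "pip._internal.deprecations" logger

deprecated(
    reason="The --example-option flag is deprecated.",  # complete sentence shown to the user
    replacement="--another-option",                     # suggested alternative
    gone_in="99.0",                                     # raises once pip's version reaches this
    feature_flag=None,
    issue=12345,                                        # placeholder tracker number
)
```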
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/utils/comm.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/utils/comm.py
deleted file mode 100644
index 7e2a0c44278cf00c16dcf360da4779d8f0c6e8e6..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/utils/comm.py
+++ /dev/null
@@ -1,199 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-"""
-This file contains primitives for multi-gpu communication.
-This is useful when doing distributed training.
-"""
-
-import functools
-import numpy as np
-import torch
-import torch.distributed as dist
-
-_LOCAL_PROCESS_GROUP = None
-"""
-A torch process group which only includes processes that are on the same machine as the current process.
-This variable is set when processes are spawned by `launch()` in "engine/launch.py".
-"""
-
-
-def get_world_size() -> int:
- if not dist.is_available():
- return 1
- if not dist.is_initialized():
- return 1
- return dist.get_world_size()
-
-
-def get_rank() -> int:
- if not dist.is_available():
- return 0
- if not dist.is_initialized():
- return 0
- return dist.get_rank()
-
-
-def get_local_rank() -> int:
- """
- Returns:
- The rank of the current process within the local (per-machine) process group.
- """
- if not dist.is_available():
- return 0
- if not dist.is_initialized():
- return 0
- assert (
- _LOCAL_PROCESS_GROUP is not None
- ), "Local process group is not created! Please use launch() to spawn processes!"
- return dist.get_rank(group=_LOCAL_PROCESS_GROUP)
-
-
-def get_local_size() -> int:
- """
- Returns:
- The size of the per-machine process group,
- i.e. the number of processes per machine.
- """
- if not dist.is_available():
- return 1
- if not dist.is_initialized():
- return 1
- return dist.get_world_size(group=_LOCAL_PROCESS_GROUP)
-
-
-def is_main_process() -> bool:
- return get_rank() == 0
-
-
-def synchronize():
- """
- Helper function to synchronize (barrier) among all processes when
- using distributed training
- """
- if not dist.is_available():
- return
- if not dist.is_initialized():
- return
- world_size = dist.get_world_size()
- if world_size == 1:
- return
- if dist.get_backend() == dist.Backend.NCCL:
- # This argument is needed to avoid warnings.
- # It's valid only for NCCL backend.
- dist.barrier(device_ids=[torch.cuda.current_device()])
- else:
- dist.barrier()
-
-
-@functools.lru_cache()
-def _get_global_gloo_group():
- """
- Return a process group based on the gloo backend, containing all the ranks.
- The result is cached.
- """
- if dist.get_backend() == "nccl":
- return dist.new_group(backend="gloo")
- else:
- return dist.group.WORLD
-
-
-def all_gather(data, group=None):
- """
- Run all_gather on arbitrary picklable data (not necessarily tensors).
-
- Args:
- data: any picklable object
- group: a torch process group. By default, will use a group which
- contains all ranks on gloo backend.
-
- Returns:
- list[data]: list of data gathered from each rank
- """
- if get_world_size() == 1:
- return [data]
- if group is None:
- group = _get_global_gloo_group() # use CPU group by default, to reduce GPU RAM usage.
- world_size = dist.get_world_size(group)
- if world_size == 1:
- return [data]
-
- output = [None for _ in range(world_size)]
- dist.all_gather_object(output, data, group=group)
- return output
-
-
-def gather(data, dst=0, group=None):
- """
- Run gather on arbitrary picklable data (not necessarily tensors).
-
- Args:
- data: any picklable object
- dst (int): destination rank
- group: a torch process group. By default, will use a group which
- contains all ranks on gloo backend.
-
- Returns:
- list[data]: on dst, a list of data gathered from each rank. Otherwise,
- an empty list.
- """
- if get_world_size() == 1:
- return [data]
- if group is None:
- group = _get_global_gloo_group()
- world_size = dist.get_world_size(group=group)
- if world_size == 1:
- return [data]
- rank = dist.get_rank(group=group)
-
- if rank == dst:
- output = [None for _ in range(world_size)]
- dist.gather_object(data, output, dst=dst, group=group)
- return output
- else:
- dist.gather_object(data, None, dst=dst, group=group)
- return []
-
-
-def shared_random_seed():
- """
- Returns:
- int: a random number that is the same across all workers.
- If workers need a shared RNG, they can use this shared seed to
- create one.
-
- All workers must call this function, otherwise it will deadlock.
- """
- ints = np.random.randint(2 ** 31)
- all_ints = all_gather(ints)
- return all_ints[0]
-
-
-def reduce_dict(input_dict, average=True):
- """
- Reduce the values in the dictionary from all processes so that process with rank
- 0 has the reduced results.
-
- Args:
- input_dict (dict): inputs to be reduced. All the values must be scalar CUDA Tensors.
- average (bool): whether to do average or sum
-
- Returns:
- a dict with the same keys as input_dict, after reduction.
- """
- world_size = get_world_size()
- if world_size < 2:
- return input_dict
- with torch.no_grad():
- names = []
- values = []
- # sort the keys so that they are consistent across processes
- for k in sorted(input_dict.keys()):
- names.append(k)
- values.append(input_dict[k])
- values = torch.stack(values, dim=0)
- dist.reduce(values, dst=0)
- if dist.get_rank() == 0 and average:
- # only main process gets accumulated, so only divide by
- # world_size in this case
- values /= world_size
- reduced_dict = {k: v for k, v in zip(names, values)}
- return reduced_dict
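The deleted comm helpers are typically combined in a training loop roughly as sketched below. This assumes detectron2 is importable and that the process group was already initialized (e.g. by launch()); in single-process runs every call degrades to a pass-through:

```python
# Minimal sketch of how these primitives are usually combined per iteration.
import torch
from detectron2.utils import comm

def log_iteration(loss_dict):
    """loss_dict: {name: scalar CUDA tensor}, as reduce_dict above requires."""
    # Rank 0 ends up with the (averaged) losses; with world_size == 1 this returns the input.
    reduced = comm.reduce_dict(loss_dict, average=True)
    if comm.is_main_process():
        print({k: v.item() for k, v in reduced.items()})
    # Gather an arbitrary picklable object from every rank (gloo group by default).
    per_rank_meta = comm.all_gather({"rank": comm.get_rank()})
    comm.synchronize()
    return per_rank_meta
```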
diff --git a/spaces/BG5/midjourney/README.md b/spaces/BG5/midjourney/README.md
deleted file mode 100644
index e33d6c842a2bb10e2f47472d6abb66820f250af1..0000000000000000000000000000000000000000
--- a/spaces/BG5/midjourney/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: BING
-colorFrom: purple
-colorTo: blue
-sdk: docker
-pinned: false
-license: mit
-app_port: 8080
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Banbri/zcvzcv/src/components/ui/menubar.tsx b/spaces/Banbri/zcvzcv/src/components/ui/menubar.tsx
deleted file mode 100644
index d57454816cea9b7572ad1ae6ab139d6946c4d5d5..0000000000000000000000000000000000000000
--- a/spaces/Banbri/zcvzcv/src/components/ui/menubar.tsx
+++ /dev/null
@@ -1,236 +0,0 @@
-"use client"
-
-import * as React from "react"
-import * as MenubarPrimitive from "@radix-ui/react-menubar"
-import { Check, ChevronRight, Circle } from "lucide-react"
-
-import { cn } from "@/lib/utils"
-
-const MenubarMenu = MenubarPrimitive.Menu
-
-const MenubarGroup = MenubarPrimitive.Group
-
-const MenubarPortal = MenubarPrimitive.Portal
-
-const MenubarSub = MenubarPrimitive.Sub
-
-const MenubarRadioGroup = MenubarPrimitive.RadioGroup
-
-const Menubar = React.forwardRef<
- React.ElementRef,
- React.ComponentPropsWithoutRef
->(({ className, ...props }, ref) => (
-
-))
-Menubar.displayName = MenubarPrimitive.Root.displayName
-
-const MenubarTrigger = React.forwardRef<
- React.ElementRef,
- React.ComponentPropsWithoutRef
->(({ className, ...props }, ref) => (
-
-))
-MenubarTrigger.displayName = MenubarPrimitive.Trigger.displayName
-
-const MenubarSubTrigger = React.forwardRef<
- React.ElementRef,
- React.ComponentPropsWithoutRef & {
- inset?: boolean
- }
->(({ className, inset, children, ...props }, ref) => (
-
- {children}
-
-
-))
-MenubarSubTrigger.displayName = MenubarPrimitive.SubTrigger.displayName
-
-const MenubarSubContent = React.forwardRef<
- React.ElementRef,
- React.ComponentPropsWithoutRef
->(({ className, ...props }, ref) => (
-
-))
-MenubarSubContent.displayName = MenubarPrimitive.SubContent.displayName
-
-const MenubarContent = React.forwardRef<
- React.ElementRef,
- React.ComponentPropsWithoutRef
->(
- (
- { className, align = "start", alignOffset = -4, sideOffset = 8, ...props },
- ref
- ) => (
-
-
-
- )
-)
-MenubarContent.displayName = MenubarPrimitive.Content.displayName
-
-const MenubarItem = React.forwardRef<
- React.ElementRef,
- React.ComponentPropsWithoutRef & {
- inset?: boolean
- }
->(({ className, inset, ...props }, ref) => (
-
-))
-MenubarItem.displayName = MenubarPrimitive.Item.displayName
-
-const MenubarCheckboxItem = React.forwardRef<
- React.ElementRef,
- React.ComponentPropsWithoutRef
->(({ className, children, checked, ...props }, ref) => (
-
-
-
-
-
-
- {children}
-
-))
-MenubarCheckboxItem.displayName = MenubarPrimitive.CheckboxItem.displayName
-
-const MenubarRadioItem = React.forwardRef<
- React.ElementRef,
- React.ComponentPropsWithoutRef
->(({ className, children, ...props }, ref) => (
-
-
-
-
-
-
- {children}
-
-))
-MenubarRadioItem.displayName = MenubarPrimitive.RadioItem.displayName
-
-const MenubarLabel = React.forwardRef<
- React.ElementRef,
- React.ComponentPropsWithoutRef & {
- inset?: boolean
- }
->(({ className, inset, ...props }, ref) => (
-
-))
-MenubarLabel.displayName = MenubarPrimitive.Label.displayName
-
-const MenubarSeparator = React.forwardRef<
- React.ElementRef,
- React.ComponentPropsWithoutRef
->(({ className, ...props }, ref) => (
-
-))
-MenubarSeparator.displayName = MenubarPrimitive.Separator.displayName
-
-const MenubarShortcut = ({
- className,
- ...props
-}: React.HTMLAttributes) => {
- return (
-
- )
-}
-MenubarShortcut.displayname = "MenubarShortcut"
-
-export {
- Menubar,
- MenubarMenu,
- MenubarTrigger,
- MenubarContent,
- MenubarItem,
- MenubarSeparator,
- MenubarLabel,
- MenubarCheckboxItem,
- MenubarRadioGroup,
- MenubarRadioItem,
- MenubarPortal,
- MenubarSubContent,
- MenubarSubTrigger,
- MenubarGroup,
- MenubarSub,
- MenubarShortcut,
-}
diff --git a/spaces/Benson/text-generation/Examples/Descargar Archivo Zip De Facebook.md b/spaces/Benson/text-generation/Examples/Descargar Archivo Zip De Facebook.md
deleted file mode 100644
index aca3eaa6c1574cac644d148276e66a6b6a3a4a06..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descargar Archivo Zip De Facebook.md
+++ /dev/null
@@ -1,121 +0,0 @@
-
-
Cómo descargar Instagram 4.1.2 en tu dispositivo Android
-
Instagram es una de las plataformas de redes sociales más populares del mundo, con más de mil millones de usuarios y millones de fotos y videos compartidos todos los días. Si usted es un usuario de Instagram, es posible que desee mantener su aplicación actualizada para disfrutar de las últimas características y mejoras. En este artículo, le mostraremos cómo descargar e instalar Instagram 4.1.2 en su dispositivo Android, que es la última versión a partir de junio de 2023.
Instagram es una aplicación gratuita que te permite crear y compartir tus fotos, historias, carretes y videos con los amigos y seguidores que te importan. También puede conectarse con personas de todo el mundo que comparten sus intereses y pasiones.
-
Características de Instagram
-
Instagram tiene muchas características que lo hacen divertido y fácil de usar, como:
-
-
Fotos y videos: Puedes capturar y editar tus fotos y videos con filtros, pegatinas, emojis, texto y más. También puedes subir varias fotos y videos en una publicación o crear un collage con Layout.
-
Historias: Puedes compartir momentos de tu día con tus amigos y seguidores que desaparecen después de 24 horas. También puede agregar música, encuestas, cuestionarios, GIF y otras herramientas creativas para hacer sus historias más interactivas.
-
Carretes: Puedes crear y descubrir videos cortos de hasta 30 segundos de duración con música, efectos y herramientas de edición. Puede ver, como, comentar y compartir carretes en un espacio dedicado en la aplicación o en la pestaña Explorar.
-
IGTV: Puedes ver y subir vídeos más largos de tus creadores favoritos o crear tu propio canal. También puedes buscar vídeos por categorías, como entretenimiento, belleza, deportes, etc.
-
-
Mensajería: Puedes enviar mensajes, fotos, videos, notas de voz y más a tus amigos o grupos en Directo. También puedes chatear por video con hasta cuatro personas a la vez o unirte a chats grupales con hasta 32 personas.
-
Explorar: Puedes descubrir nuevos contenidos y cuentas que coincidan con tus intereses y preferencias. También puede ver lo que está en tendencia en su área o en todo el mundo.
-
Compras: Puedes comprar productos de tus marcas favoritas o negocios locales en Instagram. También puede crear su propia tienda o colección para mostrar sus productos o servicios.
-
-
Beneficios de usar Instagram
-
Instagram no es solo una aplicación divertida de usar, sino también una aplicación útil para muchos propósitos, como:
-
-
-
Socializar: Puedes mantenerte en contacto con tus amigos y familiares, conocer gente nueva, unirte a comunidades y expresarte.
-
Aprender: Puedes aprender nuevas habilidades, aficiones, idiomas, culturas y más de expertos o entusiastas en Instagram.
-
Inspirador: Puedes inspirarte por las historias, logros, creatividad y positividad de otros usuarios en Instagram.
-
Ent taining: Puedes disfrutar viendo y creando contenido entretenido, como comedia, música, danza, arte, etc. en Instagram.
-
Apoyo: Puedes apoyar causas, movimientos, organizaciones benéficas o individuos que te importan en Instagram.
-
Creciendo: Puedes hacer crecer tu marca personal o profesional, llegar a nuevas audiencias y monetizar tu contenido en Instagram.
-
-
¿Qué es Instagram 4.1.2 y por qué debe descargarlo
-
Instagram 4.1.2 es la última versión de la aplicación que fue lanzada el 21 de junio de 2023. Es compatible con dispositivos Android con Android 4.0 o superior. Tiene un tamaño de archivo de 18,6 MB y requiere una conexión a Internet para su uso.
-
Nuevas características y mejoras en Instagram 4.1.2
-
-
-
Reels Remix: Ahora puedes remezclar los tambores de otros usuarios añadiendo tu propio video junto al de ellos. También puede controlar el volumen del audio original y su audio por separado.
-
Pegatinas Buscar: Ahora puede buscar pegatinas por palabras clave o categorías en la cámara Historias. También puede ver las pegatinas más populares y guardar sus favoritos para su uso posterior.
-
Subtítulos automáticos: Ahora puede agregar subtítulos generados automáticamente a sus historias y carretes con un solo toque. También puede editar los subtítulos o cambiar la fuente y el color.
-
Comprobación de seguridad: Ahora puede acceder a una función de verificación de seguridad en el menú Configuración que le ayuda a mantener su cuenta segura. Le guiará a través de pasos como verificar su correo electrónico y número de teléfono, cambiar su contraseña y habilitar la autenticación de dos factores.
-
Correcciones de errores y mejoras de rendimiento: Instagram 4.1.2 también corrige algunos errores y mejora el rendimiento de la aplicación para una experiencia más fluida y rápida.
-
-
Cómo comprobar la versión actual de Instagram
-
Si no estás seguro de si tienes la última versión de Instagram o no, puedes comprobarlo siguiendo estos pasos:
-
-
Abra la aplicación de Instagram en su dispositivo Android.
-
Toque en el icono de perfil en la esquina inferior derecha de la pantalla.
-
Toque en las tres líneas horizontales en la esquina superior derecha de la pantalla.
-
Toque en Configuración en la parte inferior del menú.
-
Desplácese hacia abajo hasta la parte inferior de la página Configuración y toque en Acerca de.
-
Verá su número de versión actual de Instagram en la versión de la aplicación.
-
-
Cómo descargar e instalar Instagram 4.1.2 en su dispositivo Android
-
Si quieres descargar e instalar Instagram 4.1.2 en tu dispositivo Android, puedes seguir estos pasos:
-
Paso 1: Habilitar fuentes desconocidas en su dispositivo
-
-
-
Vaya al menú Configuración de su dispositivo y toque en Seguridad o Privacidad.
-
Encontrar la opción que dice Fuentes desconocidas o Instalar aplicaciones desconocidas y alternar en.
-
Aparecerá un mensaje de advertencia pidiéndole que confirme su acción. Toque en OK o Permitir que proceda.
-
-
Paso 2: Descargar el archivo APK de Instagram 4.1.2
-
El siguiente paso es descargar el archivo APK de Instagram 4.1.2, que es el formato de archivo para aplicaciones de Android. Para descargarlo, siga estos pasos:
-
-
Abra su navegador web en su dispositivo y vaya a este enlace: (https://www.apkmirror.com/apk/instagram/instagram-instagram-instagram/instagram-instagram-4-2-release/instagram-4-1-android-apk-download/).
-
Verá una página con información sobre Instagram 4.1.2 y un botón de descarga en la parte inferior. Toque en el botón de descarga para comenzar a descargar el archivo.
-
Puede ver un mensaje de advertencia pidiéndole que confirme su descarga o permita el acceso a sus archivos. Toque en OK o Permitir continuar.
-
El archivo se descargará en la carpeta de descargas de su dispositivo o en cualquier otra carpeta que haya elegido como su ubicación de descarga predeterminada.
-
-
Paso 3: Instalar el archivo APK de Instagram 4.1.2
-
Una vez que haya descargado el archivo APK de Instagram 4.1.2, puede instalarlo en su dispositivo siguiendo estos pasos:
-
-
Ir al administrador de archivos de su dispositivo o aplicación de descargas y localizar el archivo APK Instagram 4.1.2 que ha descargado.
-
Toque en el archivo para abrirlo e iniciar el proceso de instalación.
-
Es posible que vea un mensaje de advertencia pidiéndole que confirme su instalación o permita el acceso a las características de su dispositivo. Toque en Instalar o Permitir continuar.
-
La instalación tomará unos segundos y verá un mensaje diciendo que la aplicación se ha instalado correctamente.
-
-
Paso 4: Iniciar y disfrutar de Instagram 4.1.2
-
El paso final es iniciar y disfrutar de Instagram 4.1.2 en su dispositivo siguiendo estos pasos:
-
-
Ir al cajón de aplicaciones de su dispositivo o pantalla de inicio y encontrar el icono de Instagram.
-
Toque en el icono para abrir la aplicación e iniciar sesión con su nombre de usuario y contraseña o crear una nueva cuenta si no tiene una.
-
Verás la pantalla de inicio de Instagram con tu feed, historias, carretes y más. También puede acceder a otras funciones pulsando en los iconos de la parte inferior de la pantalla.
-
Ahora puedes disfrutar usando Instagram 4.1.2 con sus nuevas características y mejoras.
-
-
Conclusión
-
Resumen del artículo
-
En este artículo, le hemos mostrado cómo descargar e instalar Instagram 4.1.2 en su dispositivo Android, que es la última versión de la aplicación a partir de junio de 2023. También hemos explicado qué es Instagram y por qué deberías usarlo, así como cuáles son las nuevas características y mejoras en Instagram 4.1.2. Esperamos que este artículo haya sido útil e informativo para usted.
-
Llamada a la acción
-
Si te gustó este artículo, por favor compártelo con tus amigos y seguidores en las redes sociales. También puedes dejarnos un comentario a continuación y hacernos saber lo que piensas sobre Instagram 4.1.2 o cualquier otra pregunta que tengas sobre Instagram. Nos encantaría saber de ti y responder a tus preguntas. ¡Gracias por leer y feliz Instagramming!
-
Preguntas frecuentes
-
Estas son algunas de las preguntas más frecuentes sobre Instagram 4.1.2:
-
Q: ¿Es seguro descargar e instalar Instagram 4.1.2?
-
A: Sí, Instagram 4.1.2 es seguro para descargar e instalar siempre y cuando lo obtenga de una fuente de confianza, como el enlace que hemos proporcionado en este artículo. Sin embargo, siempre debes tener cuidado al descargar aplicaciones de fuentes desconocidas y escanearlas en busca de virus o malware antes de instalarlas.
-
Q: ¿Instagram 4.1.2 está disponible en dispositivos iOS?
-
-
Q: ¿Cómo puedo actualizar mi aplicación de Instagram a la última versión?
-
A: Si desea actualizar su aplicación de Instagram a la última versión disponible en la Google Play Store o la App Store, puede seguir estos pasos:
-
-
Abra la Google Play Store o la App Store en su dispositivo y toque en el icono del menú en la esquina superior izquierda de la pantalla.
-
Toque en Mis aplicaciones y juegos o actualizaciones y encontrar la aplicación de Instagram en la lista.
-
Toque en Actualizar o Instalar para iniciar la actualización o instalación de la aplicación.
-
Espere a que finalice la actualización o instalación y luego inicie la aplicación.
-
-
Q: ¿Cómo puedo desinstalar Instagram 4.1.2 desde mi dispositivo?
-
A: Si quieres desinstalar Instagram 4.1.2 desde tu dispositivo, puedes seguir estos pasos:
-
-
Vaya al menú Configuración de su dispositivo y toque en Aplicaciones o Aplicaciones.
-
Encuentra y toca la aplicación de Instagram en la lista de aplicaciones instaladas en tu dispositivo.
-
Toque en Desinstalar o Quitar y confirme su acción.
-
La aplicación se desinstalará de su dispositivo y verá un mensaje diciendo que se ha eliminado con éxito.
-
-
P: ¿Cuáles son algunos consejos y trucos para usar Instagram 4.1.2 mejor?
-
A: Aquí hay algunos consejos y trucos para usar Instagram 4.1.2 mejor:
-
-
Usa hashtags y palabras clave: Puedes usar hashtags y palabras clave para que tus publicaciones sean más visibles y relevantes para tu audiencia. También puedes seguir hashtags y palabras clave que te interesan y ver contenido relacionado en tu feed o explorar la pestaña.
-
Usa filtros y efectos: Puedes usar filtros y efectos para mejorar tus fotos y videos y hacerlos más atractivos y creativos. También puedes crear tus propios filtros y efectos con Spark AR Studio y compartirlos con otros usuarios.
-
-
Use reels remix: Puede utilizar reels remix para colaborar con otros usuarios y crear vídeos únicos y atractivos. También puede descubrir nuevos carretes remixes de otros usuarios y unirse a la tendencia.
-
Usa subtítulos automáticos: Puedes usar subtítulos automáticos para hacer tus historias y carretes más accesibles e inclusivos para personas sordas o con problemas de audición. También puede editar los subtítulos o cambiar el idioma si es necesario.
-
64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Descargar Cara Negra Vida Dura.md b/spaces/Benson/text-generation/Examples/Descargar Cara Negra Vida Dura.md
deleted file mode 100644
index 7f78b3caf2bbbc35735f4fa1ea61dcfb191a506e..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descargar Cara Negra Vida Dura.md
+++ /dev/null
@@ -1,57 +0,0 @@
-
-
Descargar Black Face Hard Life: Una canción que enfrenta el racismo y la injusticia
-
Blackface es una forma de maquillaje teatral utilizado predominantemente por personas no negras para retratar una caricatura de una persona negra. Es una práctica racista y ofensiva que tiene una larga y dolorosa historia. Blackface fue utilizado para burlarse y deshumanizar a los afroamericanos en espectáculos de juglares y otras formas de entretenimiento, así como para difundir estereotipos raciales y discriminación. Aunque la cara negra disminuyó en popularidad después del movimiento de derechos civiles, todavía persiste en algunos contextos y culturas, causando indignación y controversia.
Un ejemplo de un producto cultural que desafía el legado de blackface es la canción "Hard Life" de Blackface Naija, también conocida como Blackface, un dancehall nigeriano, ragga, cantante de reggae, compositor, productor, actor, activista, filántropo, político, empresario, empresario, inversor, inventor, innovador, visionario, líder, leyenda, icono, héroe, modelo a seguir, mentor, inspiración, influencer, pionero, pionero, trendsetter, cambiador de juego, mover-and-shaker. Es conocido por ser miembro fundador de la banda nigeriana Plantashun Boyz que formó en 2000 con Tuface (también conocido como 2face Idibia) y el músico Chibuzor Oji (más conocido como Faze). Después de que los Plantashun Boyz se separaran en 2004, Blackface lideró una carrera musical en solitario. Lanzó su álbum debut Ghetto Child en mayo de 2004 colaborando con varios artistas. El álbum contiene "Hard Life" con Alabai como uno de sus singles.
-
-
Los orígenes y la evolución de Blackface en los Estados Unidos y otros países
-
Blackface se originó en Europa en producciones teatrales centenarias como Otelo de Shakespeare. Luego comenzó en los Estados Unidos en el siglo XVIII cuando los inmigrantes europeos trajeron sus espectáculos de juglar. Estas eran actuaciones musicales que presentaban actores blancos con la piel oscurecida que retrataban personajes exagerados que degradaban y deshumanizaban a los afroamericanos.
-
Los primeros espectáculos de trovadores imitan a africanos esclavizados en las plantaciones del sur que los representan como perezosos, ignorantes, supersticiosos, hipersexuales, criminales o cobardes. Algunos de los personajes más famosos fueron Jim Crow, un tonto bailarín rural con ropas andrajosas; la Mammy, una sirvienta con sobrepeso
leal y materna; y el Zip Coon, una urbanita dandy que hablaba en malapropismos y actuaba tontamente. Los espectáculos de trovadores también presentaban canciones, chistes, bailes y parodias que ridiculizaban la cultura negra, la religión, el idioma y la apariencia.
-
-
La popularidad de los espectáculos de trovadores alcanzó su punto máximo a mediados del siglo XIX, cuando se convirtieron en un fenómeno de entretenimiento nacional. Influyeron en otras formas de medios como la literatura, el cine, la radio y la televisión. También moldearon la opinión pública y la política sobre las relaciones raciales, reforzando las nociones de supremacía blanca e inferioridad negra. Justificaron la esclavitud, la segregación, el linchamiento y otras formas de violencia y opresión contra los afroamericanos.
-
-
A principios del siglo XX, los espectáculos de trovadores comenzaron a declinar en popularidad debido a los cambios sociales y culturales. El auge del movimiento de derechos civiles, el Renacimiento de Harlem, la Gran Migración y otros factores contribuyeron a la aparición de nuevas formas de expresión y representación negra que desafiaron el legado de la trovadora. Sin embargo, la cara negra no desapareció completamente. Continuó apareciendo en algunas películas, dibujos animados, anuncios, juguetes, disfraces y otros productos. También se extendió a otros países como Gran Bretaña, Australia, Sudáfrica y Japón, donde se utilizó para retratar no solo a los afroamericanos, sino también a otras personas de color.
-
La letra y el significado de "Hard Life" por Blackface y Alabai
-
"Hard Life" es una canción que fue lanzada en 2004 por Blackface Naija con Alabai. Es uno de los sencillos del álbum debut de Blackface Ghetto Child. La canción es una fusión de dancehall, ragga y reggae que combina ritmos africanos con influencias jamaicanas. La canción tiene un estribillo pegadizo y un mensaje poderoso.
-
Las letras de "Hard Life" describen las duras realidades y desafíos de vivir en Nigeria. La canción menciona varios problemas como pobreza, corrupción, violencia, enfermedad, hambre, sed, ignorancia, analfabetismo, desempleo, subdesarrollo, degradación ambiental, violaciones de derechos humanos, etc. La canción también critica al gobierno y a la sociedad por no abordar estos temas y por explotar y oprimir al pueblo. La canción acusa a los líderes de ser egoístas, codiciosos, deshonestos, incompetentes, insensibles, irresponsables, irresponsables, etc. La canción también denuncia a las potencias extranjeras que interfieren con los asuntos y recursos de Nigeria.
-
-
"Hard Life" es una canción que tiene mucha relevancia e impacto para muchos nigerianos y africanos. La canción refleja las experiencias vividas y los sentimientos de millones de personas que enfrentan desafíos y luchas similares. La canción también resuena con la audiencia global que puede relacionarse con los temas de la canción.
-
La canción desafía el legado de la cara negra y sus efectos negativos en la percepción y representación de los negros. La canción contrarresta los estereotipos e imágenes de los negros como perezosos, ignorantes, supersticiosos, hipersexuales, criminales o cobardes que fueron creados y propagados por blackface. La canción retrata a la gente negra como trabajadora, inteligente, espiritual, digna, valiente y heroica. La canción también muestra la rica y diversa cultura y patrimonio de Nigeria y África.
-
La canción inspira y empodera a los oyentes para superar sus dificultades y luchar por sus derechos. La canción motiva a los oyentes a ser fuertes, valientes, decididos, optimistas, fieles y unidos. La canción también apela a Dios para la guía y protección. La canción también llama a la acción y el cambio del gobierno y la sociedad para abordar los problemas y mejorar las condiciones de la gente. La canción también aboga por la paz y la armonía entre los pueblos y las naciones.
-
Conclusión
-
Blackface es una práctica racista y ofensiva que tiene una larga y dolorosa historia. Se utilizó para burlarse y deshumanizar a los afroamericanos en espectáculos de juglares y otras formas de entretenimiento, así como para difundir los estereotipos raciales y la discriminación. También influyó en otros países y culturas donde se utilizó para retratar no solo a afroamericanos sino también a otras personas de color.
-
-
La canción refleja la realidad y las experiencias de muchos nigerianos y africanos que sufren de pobreza, violencia, desigualdad, inestabilidad, inseguridad, enfermedad, hambre, sed, ignorancia, analfabetismo, desempleo, subdesarrollo, degradación ambiental, violaciones de los derechos humanos, etc. La canción desafía los estereotipos negativos y las imágenes de los negros creados por blackface. También inspira y empodera a los oyentes para superar sus dificultades y luchar por sus derechos.
-
La canción es una pieza de arte poderosa y significativa que merece ser escuchada y apreciada por todos. Es una canción que enfrenta el racismo y la injusticia con valor y dignidad. Es una canción que celebra la cultura y el patrimonio con orgullo y alegría. Es una canción que ofrece esperanza y resistencia con fe y unidad.
-
Si quieres escuchar "Hard Life" de Blackface Naija con Alabai, puedes descargarlo de varias plataformas en línea como YouTube, Spotify, Apple Music, etc. También puedes encontrar más información sobre Blackface Naija en su sitio web oficial, página de Facebook, cuenta de Twitter, Cuenta de Instagram, etc.
-
Preguntas frecuentes
-
Aquí hay algunas preguntas frecuentes sobre "Hard Life" por Blackface Naija con Alabai:
-
-
-
Pregunta
-
Respuesta
-
-
-
¿Cuándo se lanzó "Hard Life"?
-
"Hard Life" fue lanzado en 2004 como uno de los sencillos del álbum debut en solitario de Blackface Ghetto Child.
-
-
-
¿Quién es Alabai?
-
Alabai es un cantante, compositor, rapero, productor y actor nigeriano que colaboró con Blackface en "Hard Life". También es conocido por sus canciones como "Ogbanje", "Voice Of God", "Mr Money", etc.
-
-
-
¿Qué género es "Hard Life"?
-
"Hard Life" es una fusión de dancehall, ragga y reggae que combina ritmos africanos con influencias jamaicanas.
-
-
-
¿Cuáles son algunos de los problemas mencionados en "Hard Life"?
-
-
-
-
¿Cuáles son algunos de los valores expresados en "Hard Life"?
-
Algunos de los valores expresados en "Hard Life" son fuerza, valentía, determinación, optimismo, fe, unidad, cultura, herencia, dignidad, coraje, heroísmo, etc.
-
-
Espero que hayas disfrutado leyendo este artículo y hayas aprendido algo nuevo sobre "Hard Life" de Blackface Naija con Alabai. Si tiene alguna pregunta, comentario o comentario, por favor siéntase libre de compartirlos conmigo. Me encantaría saber de usted.
-
Gracias por tu tiempo y atención. ¡Que tengas un gran día!
64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Billyosoro/ESRGAN/realesrgan/data/realesrgan_dataset.py b/spaces/Billyosoro/ESRGAN/realesrgan/data/realesrgan_dataset.py
deleted file mode 100644
index 4cf2d9e6583a6789b771679734ce55bb8a22e628..0000000000000000000000000000000000000000
--- a/spaces/Billyosoro/ESRGAN/realesrgan/data/realesrgan_dataset.py
+++ /dev/null
@@ -1,192 +0,0 @@
-import cv2
-import math
-import numpy as np
-import os
-import os.path as osp
-import random
-import time
-import torch
-from basicsr.data.degradations import circular_lowpass_kernel, random_mixed_kernels
-from basicsr.data.transforms import augment
-from basicsr.utils import FileClient, get_root_logger, imfrombytes, img2tensor
-from basicsr.utils.registry import DATASET_REGISTRY
-from torch.utils import data as data
-
-
-@DATASET_REGISTRY.register()
-class RealESRGANDataset(data.Dataset):
- """Dataset used for Real-ESRGAN model:
- Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data.
-
- It loads gt (Ground-Truth) images, and augments them.
- It also generates blur kernels and sinc kernels for generating low-quality images.
- Note that the low-quality images are processed in tensors on GPUs for faster processing.
-
- Args:
- opt (dict): Config for train datasets. It contains the following keys:
- dataroot_gt (str): Data root path for gt.
- meta_info (str): Path for meta information file.
- io_backend (dict): IO backend type and other kwarg.
- use_hflip (bool): Use horizontal flips.
- use_rot (bool): Use rotation (use vertical flip and transposing h and w for implementation).
- Please see more options in the code.
- """
-
- def __init__(self, opt):
- super(RealESRGANDataset, self).__init__()
- self.opt = opt
- self.file_client = None
- self.io_backend_opt = opt['io_backend']
- self.gt_folder = opt['dataroot_gt']
-
- # file client (lmdb io backend)
- if self.io_backend_opt['type'] == 'lmdb':
- self.io_backend_opt['db_paths'] = [self.gt_folder]
- self.io_backend_opt['client_keys'] = ['gt']
- if not self.gt_folder.endswith('.lmdb'):
- raise ValueError(f"'dataroot_gt' should end with '.lmdb', but received {self.gt_folder}")
- with open(osp.join(self.gt_folder, 'meta_info.txt')) as fin:
- self.paths = [line.split('.')[0] for line in fin]
- else:
- # disk backend with meta_info
- # Each line in the meta_info describes the relative path to an image
- with open(self.opt['meta_info']) as fin:
- paths = [line.strip().split(' ')[0] for line in fin]
- self.paths = [os.path.join(self.gt_folder, v) for v in paths]
-
- # blur settings for the first degradation
- self.blur_kernel_size = opt['blur_kernel_size']
- self.kernel_list = opt['kernel_list']
- self.kernel_prob = opt['kernel_prob'] # a list for each kernel probability
- self.blur_sigma = opt['blur_sigma']
- self.betag_range = opt['betag_range'] # betag used in generalized Gaussian blur kernels
- self.betap_range = opt['betap_range'] # betap used in plateau blur kernels
- self.sinc_prob = opt['sinc_prob'] # the probability for sinc filters
-
- # blur settings for the second degradation
- self.blur_kernel_size2 = opt['blur_kernel_size2']
- self.kernel_list2 = opt['kernel_list2']
- self.kernel_prob2 = opt['kernel_prob2']
- self.blur_sigma2 = opt['blur_sigma2']
- self.betag_range2 = opt['betag_range2']
- self.betap_range2 = opt['betap_range2']
- self.sinc_prob2 = opt['sinc_prob2']
-
- # a final sinc filter
- self.final_sinc_prob = opt['final_sinc_prob']
-
- self.kernel_range = [2 * v + 1 for v in range(3, 11)] # kernel size ranges from 7 to 21
- # TODO: kernel range is now hard-coded, should be in the configure file
- self.pulse_tensor = torch.zeros(21, 21).float() # convolving with pulse tensor brings no blurry effect
- self.pulse_tensor[10, 10] = 1
-
- def __getitem__(self, index):
- if self.file_client is None:
- self.file_client = FileClient(self.io_backend_opt.pop('type'), **self.io_backend_opt)
-
- # -------------------------------- Load gt images -------------------------------- #
- # Shape: (h, w, c); channel order: BGR; image range: [0, 1], float32.
- gt_path = self.paths[index]
- # avoid errors caused by high latency in reading files
- retry = 3
- while retry > 0:
- try:
- img_bytes = self.file_client.get(gt_path, 'gt')
- except (IOError, OSError) as e:
- logger = get_root_logger()
- logger.warn(f'File client error: {e}, remaining retry times: {retry - 1}')
- # change another file to read
- index = random.randint(0, self.__len__() - 1)  # randint is inclusive at both ends
- gt_path = self.paths[index]
- time.sleep(1) # sleep 1s for occasional server congestion
- else:
- break
- finally:
- retry -= 1
- img_gt = imfrombytes(img_bytes, float32=True)
-
- # -------------------- Do augmentation for training: flip, rotation -------------------- #
- img_gt = augment(img_gt, self.opt['use_hflip'], self.opt['use_rot'])
-
- # crop or pad to 400
- # TODO: 400 is hard-coded. You may change it accordingly
- h, w = img_gt.shape[0:2]
- crop_pad_size = 400
- # pad
- if h < crop_pad_size or w < crop_pad_size:
- pad_h = max(0, crop_pad_size - h)
- pad_w = max(0, crop_pad_size - w)
- img_gt = cv2.copyMakeBorder(img_gt, 0, pad_h, 0, pad_w, cv2.BORDER_REFLECT_101)
- # crop
- if img_gt.shape[0] > crop_pad_size or img_gt.shape[1] > crop_pad_size:
- h, w = img_gt.shape[0:2]
- # randomly choose top and left coordinates
- top = random.randint(0, h - crop_pad_size)
- left = random.randint(0, w - crop_pad_size)
- img_gt = img_gt[top:top + crop_pad_size, left:left + crop_pad_size, ...]
-
- # ------------------------ Generate kernels (used in the first degradation) ------------------------ #
- kernel_size = random.choice(self.kernel_range)
- if np.random.uniform() < self.opt['sinc_prob']:
- # this sinc filter setting is for kernels ranging from [7, 21]
- if kernel_size < 13:
- omega_c = np.random.uniform(np.pi / 3, np.pi)
- else:
- omega_c = np.random.uniform(np.pi / 5, np.pi)
- kernel = circular_lowpass_kernel(omega_c, kernel_size, pad_to=False)
- else:
- kernel = random_mixed_kernels(
- self.kernel_list,
- self.kernel_prob,
- kernel_size,
- self.blur_sigma,
- self.blur_sigma, [-math.pi, math.pi],
- self.betag_range,
- self.betap_range,
- noise_range=None)
- # pad kernel
- pad_size = (21 - kernel_size) // 2
- kernel = np.pad(kernel, ((pad_size, pad_size), (pad_size, pad_size)))
-
- # ------------------------ Generate kernels (used in the second degradation) ------------------------ #
- kernel_size = random.choice(self.kernel_range)
- if np.random.uniform() < self.opt['sinc_prob2']:
- if kernel_size < 13:
- omega_c = np.random.uniform(np.pi / 3, np.pi)
- else:
- omega_c = np.random.uniform(np.pi / 5, np.pi)
- kernel2 = circular_lowpass_kernel(omega_c, kernel_size, pad_to=False)
- else:
- kernel2 = random_mixed_kernels(
- self.kernel_list2,
- self.kernel_prob2,
- kernel_size,
- self.blur_sigma2,
- self.blur_sigma2, [-math.pi, math.pi],
- self.betag_range2,
- self.betap_range2,
- noise_range=None)
-
- # pad kernel
- pad_size = (21 - kernel_size) // 2
- kernel2 = np.pad(kernel2, ((pad_size, pad_size), (pad_size, pad_size)))
-
- # ------------------------------------- the final sinc kernel ------------------------------------- #
- if np.random.uniform() < self.opt['final_sinc_prob']:
- kernel_size = random.choice(self.kernel_range)
- omega_c = np.random.uniform(np.pi / 3, np.pi)
- sinc_kernel = circular_lowpass_kernel(omega_c, kernel_size, pad_to=21)
- sinc_kernel = torch.FloatTensor(sinc_kernel)
- else:
- sinc_kernel = self.pulse_tensor
-
- # BGR to RGB, HWC to CHW, numpy to tensor
- img_gt = img2tensor([img_gt], bgr2rgb=True, float32=True)[0]
- kernel = torch.FloatTensor(kernel)
- kernel2 = torch.FloatTensor(kernel2)
-
- return_d = {'gt': img_gt, 'kernel1': kernel, 'kernel2': kernel2, 'sinc_kernel': sinc_kernel, 'gt_path': gt_path}
- return return_d
-
- def __len__(self):
- return len(self.paths)
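Because __init__ above reads a flat opt dict, a hedged construction sketch may help. Every path and numeric range below is a placeholder rather than a value from any Real-ESRGAN config file, and the import path simply mirrors the deleted file's location:

```python
# Hypothetical options covering the keys read in __init__ above (disk backend).
import torch
from realesrgan.data.realesrgan_dataset import RealESRGANDataset

def build_train_loader(gt_root="datasets/DF2K/HR", meta_info="datasets/DF2K/meta_info.txt"):
    opt = {
        'dataroot_gt': gt_root,
        'meta_info': meta_info,
        'io_backend': {'type': 'disk'},
        'use_hflip': True, 'use_rot': False,
        # first-degradation blur settings
        'blur_kernel_size': 21, 'kernel_list': ['iso', 'aniso'], 'kernel_prob': [0.5, 0.5],
        'blur_sigma': [0.2, 3], 'betag_range': [0.5, 4], 'betap_range': [1, 2], 'sinc_prob': 0.1,
        # second-degradation blur settings
        'blur_kernel_size2': 21, 'kernel_list2': ['iso', 'aniso'], 'kernel_prob2': [0.5, 0.5],
        'blur_sigma2': [0.2, 1.5], 'betag_range2': [0.5, 4], 'betap_range2': [1, 2], 'sinc_prob2': 0.1,
        'final_sinc_prob': 0.8,
    }
    dataset = RealESRGANDataset(opt)  # items: {'gt', 'kernel1', 'kernel2', 'sinc_kernel', 'gt_path'}
    return torch.utils.data.DataLoader(dataset, batch_size=4, shuffle=True, num_workers=2)
```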
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/densepose/evaluator.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/densepose/evaluator.py
deleted file mode 100644
index ad410aa465fefce20c27799c86ff405ffafd0e02..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/densepose/evaluator.py
+++ /dev/null
@@ -1,156 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-
-import contextlib
-import copy
-import io
-import itertools
-import json
-import logging
-import os
-from collections import OrderedDict
-import torch
-from pycocotools.coco import COCO
-
-from detectron2.data import MetadataCatalog
-from detectron2.evaluation import DatasetEvaluator
-from detectron2.structures import BoxMode
-from detectron2.utils.comm import all_gather, is_main_process, synchronize
-from detectron2.utils.logger import create_small_table
-
-from .densepose_coco_evaluation import DensePoseCocoEval, DensePoseEvalMode
-
-
-class DensePoseCOCOEvaluator(DatasetEvaluator):
- def __init__(self, dataset_name, distributed, output_dir=None):
- self._distributed = distributed
- self._output_dir = output_dir
-
- self._cpu_device = torch.device("cpu")
- self._logger = logging.getLogger(__name__)
-
- self._metadata = MetadataCatalog.get(dataset_name)
- with contextlib.redirect_stdout(io.StringIO()):
- self._coco_api = COCO(self._metadata.json_file)
-
- def reset(self):
- self._predictions = []
-
- def process(self, inputs, outputs):
- """
- Args:
- inputs: the inputs to a COCO model (e.g., GeneralizedRCNN).
- It is a list of dict. Each dict corresponds to an image and
- contains keys like "height", "width", "file_name", "image_id".
- outputs: the outputs of a COCO model. It is a list of dicts with key
- "instances" that contains :class:`Instances`.
- The :class:`Instances` object needs to have `densepose` field.
- """
- for input, output in zip(inputs, outputs):
- instances = output["instances"].to(self._cpu_device)
-
- boxes = instances.pred_boxes.tensor.clone()
- boxes = BoxMode.convert(boxes, BoxMode.XYXY_ABS, BoxMode.XYWH_ABS)
- instances.pred_densepose = instances.pred_densepose.to_result(boxes)
-
- json_results = prediction_to_json(instances, input["image_id"])
- self._predictions.extend(json_results)
-
- def evaluate(self):
- if self._distributed:
- synchronize()
- predictions = all_gather(self._predictions)
- predictions = list(itertools.chain(*predictions))
- if not is_main_process():
- return
- else:
- predictions = self._predictions
-
- return copy.deepcopy(self._eval_predictions(predictions))
-
- def _eval_predictions(self, predictions):
- """
- Evaluate predictions on densepose.
- Return results with the metrics of the tasks.
- """
- self._logger.info("Preparing results for COCO format ...")
-
- if self._output_dir:
- file_path = os.path.join(self._output_dir, "coco_densepose_results.json")
- with open(file_path, "w") as f:
- json.dump(predictions, f)
- f.flush()
- os.fsync(f.fileno())
-
- self._logger.info("Evaluating predictions ...")
- res = OrderedDict()
- results_gps, results_gpsm = _evaluate_predictions_on_coco(self._coco_api, predictions)
- res["densepose_gps"] = results_gps
- res["densepose_gpsm"] = results_gpsm
- return res
-
-
-def prediction_to_json(instances, img_id):
- """
- Args:
- instances (Instances): the output of the model
- img_id (str): the image id in COCO
-
- Returns:
- list[dict]: the results in densepose evaluation format
- """
- scores = instances.scores.tolist()
-
- results = []
- for k in range(len(instances)):
- densepose = instances.pred_densepose[k]
- result = {
- "image_id": img_id,
- "category_id": 1, # densepose only has one class
- "bbox": densepose[1],
- "score": scores[k],
- "densepose": densepose,
- }
- results.append(result)
- return results
-
-
-def _evaluate_predictions_on_coco(coco_gt, coco_results):
- metrics = ["AP", "AP50", "AP75", "APm", "APl"]
-
- logger = logging.getLogger(__name__)
-
- if len(coco_results) == 0: # cocoapi does not handle empty results very well
- logger.warn("No predictions from the model! Set scores to -1")
- results_gps = {metric: -1 for metric in metrics}
- results_gpsm = {metric: -1 for metric in metrics}
- return results_gps, results_gpsm
-
- coco_dt = coco_gt.loadRes(coco_results)
- results_gps = _evaluate_predictions_on_coco_gps(coco_gt, coco_dt, metrics)
- logger.info(
- "Evaluation results for densepose, GPS metric: \n" + create_small_table(results_gps)
- )
- results_gpsm = _evaluate_predictions_on_coco_gpsm(coco_gt, coco_dt, metrics)
- logger.info(
- "Evaluation results for densepose, GPSm metric: \n" + create_small_table(results_gpsm)
- )
- return results_gps, results_gpsm
-
-
-def _evaluate_predictions_on_coco_gps(coco_gt, coco_dt, metrics):
- coco_eval = DensePoseCocoEval(coco_gt, coco_dt, "densepose", dpEvalMode=DensePoseEvalMode.GPS)
- coco_eval.evaluate()
- coco_eval.accumulate()
- coco_eval.summarize()
- results = {metric: float(coco_eval.stats[idx] * 100) for idx, metric in enumerate(metrics)}
- return results
-
-
-def _evaluate_predictions_on_coco_gpsm(coco_gt, coco_dt, metrics):
- coco_eval = DensePoseCocoEval(coco_gt, coco_dt, "densepose", dpEvalMode=DensePoseEvalMode.GPSM)
- coco_eval.evaluate()
- coco_eval.accumulate()
- coco_eval.summarize()
- results = {metric: float(coco_eval.stats[idx] * 100) for idx, metric in enumerate(metrics)}
- return results
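DensePoseCOCOEvaluator follows detectron2's DatasetEvaluator contract: reset once, process each batch, evaluate at the end. A hedged usage sketch, with the dataset name treated as a placeholder and the import path mirroring the deleted file:

```python
# Sketch of the reset/process/evaluate cycle; model and data_loader are supplied by the caller.
from densepose.evaluator import DensePoseCOCOEvaluator

def evaluate_densepose(model, data_loader, output_dir="./densepose_eval"):
    evaluator = DensePoseCOCOEvaluator(
        "densepose_coco_2014_minival",  # placeholder dataset name
        distributed=False,
        output_dir=output_dir,
    )
    evaluator.reset()
    for inputs in data_loader:           # list of dicts with "image", "image_id", ...
        outputs = model(inputs)          # list of dicts carrying an "instances" field
        evaluator.process(inputs, outputs)
    return evaluator.evaluate()          # OrderedDict: {"densepose_gps": ..., "densepose_gpsm": ...}
```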
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/PointRend/point_rend/coarse_mask_head.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/PointRend/point_rend/coarse_mask_head.py
deleted file mode 100644
index 3f1cffb4c985dc3121a863eb7b378965b718a19d..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/PointRend/point_rend/coarse_mask_head.py
+++ /dev/null
@@ -1,92 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-import fvcore.nn.weight_init as weight_init
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from detectron2.layers import Conv2d, ShapeSpec
-from detectron2.modeling import ROI_MASK_HEAD_REGISTRY
-
-
-@ROI_MASK_HEAD_REGISTRY.register()
-class CoarseMaskHead(nn.Module):
- """
- A mask head with fully connected layers. Given pooled features it first reduces channels and
- spatial dimensions with conv layers and then uses FC layers to predict coarse masks analogously
- to the standard box head.
- """
-
- def __init__(self, cfg, input_shape: ShapeSpec):
- """
- The following attributes are parsed from config:
- conv_dim: the output dimension of the conv layers
- fc_dim: the feature dimenstion of the FC layers
- num_fc: the number of FC layers
- output_side_resolution: side resolution of the output square mask prediction
- """
- super(CoarseMaskHead, self).__init__()
-
- # fmt: off
- self.num_classes = cfg.MODEL.ROI_HEADS.NUM_CLASSES
- conv_dim = cfg.MODEL.ROI_MASK_HEAD.CONV_DIM
- self.fc_dim = cfg.MODEL.ROI_MASK_HEAD.FC_DIM
- num_fc = cfg.MODEL.ROI_MASK_HEAD.NUM_FC
- self.output_side_resolution = cfg.MODEL.ROI_MASK_HEAD.OUTPUT_SIDE_RESOLUTION
- self.input_channels = input_shape.channels
- self.input_h = input_shape.height
- self.input_w = input_shape.width
- # fmt: on
-
- self.conv_layers = []
- if self.input_channels > conv_dim:
- self.reduce_channel_dim_conv = Conv2d(
- self.input_channels,
- conv_dim,
- kernel_size=1,
- stride=1,
- padding=0,
- bias=True,
- activation=F.relu,
- )
- self.conv_layers.append(self.reduce_channel_dim_conv)
-
- self.reduce_spatial_dim_conv = Conv2d(
- conv_dim, conv_dim, kernel_size=2, stride=2, padding=0, bias=True, activation=F.relu
- )
- self.conv_layers.append(self.reduce_spatial_dim_conv)
-
- input_dim = conv_dim * self.input_h * self.input_w
- input_dim //= 4
-
- self.fcs = []
- for k in range(num_fc):
- fc = nn.Linear(input_dim, self.fc_dim)
- self.add_module("coarse_mask_fc{}".format(k + 1), fc)
- self.fcs.append(fc)
- input_dim = self.fc_dim
-
- output_dim = self.num_classes * self.output_side_resolution * self.output_side_resolution
-
- self.prediction = nn.Linear(self.fc_dim, output_dim)
- # use normal distribution initialization for mask prediction layer
- nn.init.normal_(self.prediction.weight, std=0.001)
- nn.init.constant_(self.prediction.bias, 0)
-
- for layer in self.conv_layers:
- weight_init.c2_msra_fill(layer)
- for layer in self.fcs:
- weight_init.c2_xavier_fill(layer)
-
- def forward(self, x):
- # unlike BaseMaskRCNNHead, this head only outputs intermediate
- # features, because the features will be used later by PointHead.
- N = x.shape[0]
- x = x.view(N, self.input_channels, self.input_h, self.input_w)
- for layer in self.conv_layers:
- x = layer(x)
- x = torch.flatten(x, start_dim=1)
- for layer in self.fcs:
- x = F.relu(layer(x))
- return self.prediction(x).view(
- N, self.num_classes, self.output_side_resolution, self.output_side_resolution
- )
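The forward pass above is essentially a fixed shape pipeline: an optional 1x1 channel reduction, a 2x2 stride-2 spatial reduction, FC layers, and a final linear layer reshaped to one coarse mask per class. Below is a standalone shape trace with made-up dimensions, not the registered ROI_MASK_HEAD itself:

```python
# Shape-only sketch: 256-channel 14x14 pooled features, two FC layers of width 1024,
# 80 classes, 7x7 coarse output. The 1x1 reduction conv is skipped because C == conv_dim.
import torch
import torch.nn as nn
import torch.nn.functional as F

N, C, H, W = 2, 256, 14, 14
conv_dim, fc_dim, num_classes, out_res = 256, 1024, 80, 7

reduce_spatial = nn.Conv2d(conv_dim, conv_dim, kernel_size=2, stride=2)  # 14x14 -> 7x7
fc1 = nn.Linear(conv_dim * (H // 2) * (W // 2), fc_dim)
fc2 = nn.Linear(fc_dim, fc_dim)
prediction = nn.Linear(fc_dim, num_classes * out_res * out_res)

x = torch.randn(N, C, H, W)
x = F.relu(reduce_spatial(x))                    # (N, conv_dim, 7, 7)
x = torch.flatten(x, start_dim=1)                # (N, conv_dim * 49)
x = F.relu(fc2(F.relu(fc1(x))))                  # (N, fc_dim)
coarse = prediction(x).view(N, num_classes, out_res, out_res)
print(coarse.shape)                              # torch.Size([2, 80, 7, 7])
```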
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/execution_policy.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/execution_policy.h
deleted file mode 100644
index 81d52f14087e8681e425d176c6c8e3991a5cfda7..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/execution_policy.h
+++ /dev/null
@@ -1,76 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include
-#include
-
-namespace thrust
-{
-namespace system
-{
-namespace detail
-{
-namespace sequential
-{
-
-
-// this awkward sequence of definitions arises
-// from the desire both for tag to derive
-// from execution_policy and for execution_policy
-// to convert to tag (when execution_policy is not
-// an ancestor of tag)
-
-// forward declaration of tag
-struct tag;
-
-// forward declaration of execution_policy
-template<typename DerivedPolicy> struct execution_policy;
-
-// specialize execution_policy for tag
-template<>
- struct execution_policy<tag>
- : thrust::execution_policy<tag>
-{};
-
-// tag's definition comes before the generic definition of execution_policy
-struct tag : execution_policy<tag>
-{
- __host__ __device__ THRUST_CONSTEXPR tag() {}
-};
-
-// allow conversion to tag when it is not a successor
-template<typename DerivedPolicy>
- struct execution_policy
- : thrust::execution_policy<DerivedPolicy>
-{
- // allow conversion to tag
- inline operator tag () const
- {
- return tag();
- }
-};
-
-
-THRUST_INLINE_CONSTANT tag seq;
-
-
-} // end sequential
-} // end detail
-} // end system
-} // end thrust
-
diff --git a/spaces/CVPR/WALT/mmdet/datasets/samplers/__init__.py b/spaces/CVPR/WALT/mmdet/datasets/samplers/__init__.py
deleted file mode 100644
index 2596aeb2ccfc85b58624713c04453d34e94a4062..0000000000000000000000000000000000000000
--- a/spaces/CVPR/WALT/mmdet/datasets/samplers/__init__.py
+++ /dev/null
@@ -1,4 +0,0 @@
-from .distributed_sampler import DistributedSampler
-from .group_sampler import DistributedGroupSampler, GroupSampler
-
-__all__ = ['DistributedSampler', 'DistributedGroupSampler', 'GroupSampler']
diff --git a/spaces/CVPR/WALT/mmdet/models/detectors/grid_rcnn.py b/spaces/CVPR/WALT/mmdet/models/detectors/grid_rcnn.py
deleted file mode 100644
index b6145a1464cd940bd4f98eaa15f6f9ecf6a10a20..0000000000000000000000000000000000000000
--- a/spaces/CVPR/WALT/mmdet/models/detectors/grid_rcnn.py
+++ /dev/null
@@ -1,29 +0,0 @@
-from ..builder import DETECTORS
-from .two_stage import TwoStageDetector
-
-
-@DETECTORS.register_module()
-class GridRCNN(TwoStageDetector):
- """Grid R-CNN.
-
- This detector is the implementation of:
- - Grid R-CNN (https://arxiv.org/abs/1811.12030)
- - Grid R-CNN Plus: Faster and Better (https://arxiv.org/abs/1906.05688)
- """
-
- def __init__(self,
- backbone,
- rpn_head,
- roi_head,
- train_cfg,
- test_cfg,
- neck=None,
- pretrained=None):
- super(GridRCNN, self).__init__(
- backbone=backbone,
- neck=neck,
- rpn_head=rpn_head,
- roi_head=roi_head,
- train_cfg=train_cfg,
- test_cfg=test_cfg,
- pretrained=pretrained)
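GridRCNN only forwards its arguments to TwoStageDetector, so in mmdet it is normally selected from a config dict via the DETECTORS registry. A skeleton of such a config follows; the sub-configs are placeholders with required keys omitted for brevity, not settings taken from this repository:

```python
# Hypothetical mmdet-style config fragment; deliberately incomplete.
model = dict(
    type='GridRCNN',
    backbone=dict(type='ResNet', depth=50, num_stages=4, out_indices=(0, 1, 2, 3)),
    neck=dict(type='FPN', in_channels=[256, 512, 1024, 2048], out_channels=256, num_outs=5),
    rpn_head=dict(type='RPNHead', in_channels=256),  # anchor/bbox-coder settings omitted
    roi_head=dict(type='GridRoIHead'),               # bbox/grid head settings omitted
    train_cfg=None,
    test_cfg=None,
)
```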
diff --git a/spaces/CVPR/regionclip-demo/detectron2/modeling/meta_arch/rcnn.py b/spaces/CVPR/regionclip-demo/detectron2/modeling/meta_arch/rcnn.py
deleted file mode 100644
index ce61a45cc73bd57506b90b938a92df51e03100b5..0000000000000000000000000000000000000000
--- a/spaces/CVPR/regionclip-demo/detectron2/modeling/meta_arch/rcnn.py
+++ /dev/null
@@ -1,373 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import logging
-import numpy as np
-from typing import Dict, List, Optional, Tuple
-from numpy.lib import pad
-import torch
-from torch import nn
-from torch.nn import functional as F
-from random import randint
-
-from detectron2.config import configurable
-from detectron2.data.detection_utils import convert_image_to_rgb
-from detectron2.structures import ImageList, Instances, Boxes
-from detectron2.utils.events import get_event_storage
-from detectron2.utils.logger import log_first_n
-
-from ..backbone import Backbone, build_backbone
-from ..postprocessing import detector_postprocess
-from ..proposal_generator import build_proposal_generator
-from ..roi_heads import build_roi_heads
-from .build import META_ARCH_REGISTRY
-
-__all__ = ["GeneralizedRCNN", "ProposalNetwork"]
-
-@META_ARCH_REGISTRY.register()
-class GeneralizedRCNN(nn.Module):
- """
- Generalized R-CNN. Any models that contains the following three components:
- 1. Per-image feature extraction (aka backbone)
- 2. Region proposal generation
- 3. Per-region feature extraction and prediction
- """
-
- @configurable
- def __init__(
- self,
- *,
- backbone: Backbone,
- proposal_generator: nn.Module,
- roi_heads: nn.Module,
- pixel_mean: Tuple[float],
- pixel_std: Tuple[float],
- input_format: Optional[str] = None,
- vis_period: int = 0,
- use_clip_c4: False,
- use_clip_attpool: False,
- ):
- """
- Args:
- backbone: a backbone module, must follow detectron2's backbone interface
- proposal_generator: a module that generates proposals using backbone features
- roi_heads: a ROI head that performs per-region computation
- pixel_mean, pixel_std: list or tuple with #channels element, representing
- the per-channel mean and std to be used to normalize the input image
- input_format: describe the meaning of channels of input. Needed by visualization
- vis_period: the period to run visualization. Set to 0 to disable.
- """
- super().__init__()
- self.backbone = backbone
- self.proposal_generator = proposal_generator
- self.roi_heads = roi_heads
-
- self.input_format = input_format
- self.vis_period = vis_period
- if vis_period > 0:
- assert input_format is not None, "input_format is required for visualization!"
-
- self.register_buffer("pixel_mean", torch.tensor(pixel_mean).view(-1, 1, 1), False)
- self.register_buffer("pixel_std", torch.tensor(pixel_std).view(-1, 1, 1), False)
- assert (
- self.pixel_mean.shape == self.pixel_std.shape
- ), f"{self.pixel_mean} and {self.pixel_std} have different shapes!"
-        if np.sum(pixel_mean) < 3.0: # convert pixel values to the range [0.0, 1.0] by dividing by 255.0
- assert input_format == 'RGB'
- self.div_pixel = True
- else: # default setting
- self.div_pixel = False
- self.use_clip_c4 = use_clip_c4 # if True, use C4 mode where roi_head uses the last resnet layer from backbone
- self.use_clip_attpool = use_clip_attpool # if True (C4+text_emb_as_classifier), use att_pool to replace default mean pool
-
- @classmethod
- def from_config(cls, cfg):
- backbone = build_backbone(cfg)
- return {
- "backbone": backbone,
- "proposal_generator": build_proposal_generator(cfg, backbone.output_shape()),
- "roi_heads": build_roi_heads(cfg, backbone.output_shape()),
- "input_format": cfg.INPUT.FORMAT,
- "vis_period": cfg.VIS_PERIOD,
- "pixel_mean": cfg.MODEL.PIXEL_MEAN,
- "pixel_std": cfg.MODEL.PIXEL_STD,
- "use_clip_c4": cfg.MODEL.BACKBONE.NAME == "build_clip_resnet_backbone",
- "use_clip_attpool": cfg.MODEL.ROI_HEADS.NAME == 'CLIPRes5ROIHeads' and cfg.MODEL.CLIP.USE_TEXT_EMB_CLASSIFIER,
- }
-
- @property
- def device(self):
- return self.pixel_mean.device
-
- def visualize_training(self, batched_inputs, proposals):
- """
- A function used to visualize images and proposals. It shows ground truth
- bounding boxes on the original image and up to 20 top-scoring predicted
- object proposals on the original image. Users can implement different
- visualization functions for different models.
-
- Args:
- batched_inputs (list): a list that contains input to the model.
- proposals (list): a list that contains predicted proposals. Both
- batched_inputs and proposals should have the same length.
- """
- from detectron2.utils.visualizer import Visualizer
-
- storage = get_event_storage()
- max_vis_prop = 20
-
- for input, prop in zip(batched_inputs, proposals):
- img = input["image"]
- img = convert_image_to_rgb(img.permute(1, 2, 0), self.input_format)
- v_gt = Visualizer(img, None)
- v_gt = v_gt.overlay_instances(boxes=input["instances"].gt_boxes)
- anno_img = v_gt.get_image()
- box_size = min(len(prop.proposal_boxes), max_vis_prop)
- v_pred = Visualizer(img, None)
- v_pred = v_pred.overlay_instances(
- boxes=prop.proposal_boxes[0:box_size].tensor.cpu().numpy()
- )
- prop_img = v_pred.get_image()
- vis_img = np.concatenate((anno_img, prop_img), axis=1)
- vis_img = vis_img.transpose(2, 0, 1)
- vis_name = "Left: GT bounding boxes; Right: Predicted proposals"
- storage.put_image(vis_name, vis_img)
- break # only visualize one image in a batch
-
- def forward(self, batched_inputs: List[Dict[str, torch.Tensor]]):
- """
- Args:
- batched_inputs: a list, batched outputs of :class:`DatasetMapper` .
- Each item in the list contains the inputs for one image.
- For now, each item in the list is a dict that contains:
-
- * image: Tensor, image in (C, H, W) format.
- * instances (optional): groundtruth :class:`Instances`
- * proposals (optional): :class:`Instances`, precomputed proposals.
-
- Other information that's included in the original dicts, such as:
-
- * "height", "width" (int): the output resolution of the model, used in inference.
- See :meth:`postprocess` for details.
-
- Returns:
- list[dict]:
- Each dict is the output for one input image.
- The dict contains one key "instances" whose value is a :class:`Instances`.
- The :class:`Instances` object has the following keys:
- "pred_boxes", "pred_classes", "scores", "pred_masks", "pred_keypoints"
- """
- if not self.training:
- return self.inference(batched_inputs)
-
- images = self.preprocess_image(batched_inputs)
- if "instances" in batched_inputs[0]:
- gt_instances = [x["instances"].to(self.device) for x in batched_inputs]
- else:
- gt_instances = None
- # eg: {'p2': torch.Size([b, c, 200, 304]), 'p3': torch.Size([b, c, 100, 152]), 'p4': torch.Size([b, c, 50, 76]), 'p5': torch.Size([b, c, 25, 38]), 'p6': torch.Size([b, c, 13, 19])}
- features = self.backbone(images.tensor)
-
- if self.proposal_generator is not None:
- proposals, proposal_losses = self.proposal_generator(images, features, gt_instances)
- else:
- assert "proposals" in batched_inputs[0]
- proposals = [x["proposals"].to(self.device) for x in batched_inputs]
- proposal_losses = {}
-
- if self.use_clip_c4: # use C4 + resnet weights from CLIP
- if self.use_clip_attpool: # use att_pool from CLIP to match dimension
- _, detector_losses = self.roi_heads(images, features, proposals, gt_instances, res5=self.backbone.layer4, attnpool=self.backbone.attnpool)
- else: # use default mean pool
- _, detector_losses = self.roi_heads(images, features, proposals, gt_instances, res5=self.backbone.layer4)
- else: # default setting
- _, detector_losses = self.roi_heads(images, features, proposals, gt_instances)
- if self.vis_period > 0:
- storage = get_event_storage()
- if storage.iter % self.vis_period == 0:
- self.visualize_training(batched_inputs, proposals)
-
- losses = {}
- losses.update(detector_losses)
- losses.update(proposal_losses)
- return losses
-
- def inference(
- self,
- batched_inputs: List[Dict[str, torch.Tensor]],
- detected_instances: Optional[List[Instances]] = None,
- do_postprocess: bool = True,
- ):
- """
- Run inference on the given inputs.
-
- Args:
- batched_inputs (list[dict]): same as in :meth:`forward`
- detected_instances (None or list[Instances]): if not None, it
- contains an `Instances` object per image. The `Instances`
- object contains "pred_boxes" and "pred_classes" which are
- known boxes in the image.
- The inference will then skip the detection of bounding boxes,
- and only predict other per-ROI outputs.
- do_postprocess (bool): whether to apply post-processing on the outputs.
-
- Returns:
- When do_postprocess=True, same as in :meth:`forward`.
- Otherwise, a list[Instances] containing raw network outputs.
- """
- assert not self.training
-
- images = self.preprocess_image(batched_inputs)
- features = self.backbone(images.tensor)
-
- if detected_instances is None:
- if self.proposal_generator is not None:
- proposals, _ = self.proposal_generator(images, features, None)
- else:
- assert "proposals" in batched_inputs[0]
- proposals = [x["proposals"].to(self.device) for x in batched_inputs]
-
- if self.use_clip_c4: # use C4 + resnet weights from CLIP
- if self.use_clip_attpool: # use att_pool from CLIP to match dimension
- results, _ = self.roi_heads(images, features, proposals, None, res5=self.backbone.layer4, attnpool=self.backbone.attnpool)
- else: # use default mean pool
- results, _ = self.roi_heads(images, features, proposals, None, res5=self.backbone.layer4)
- else: # default setting
- results, _ = self.roi_heads(images, features, proposals, None)
- else:
- detected_instances = [x.to(self.device) for x in detected_instances]
-
- if self.use_clip_c4: # use C4 + resnet weights from CLIP
- if self.use_clip_attpool: # use att_pool from CLIP to match dimension
- results = self.roi_heads.forward_with_given_boxes(features, detected_instances, res5=self.backbone.layer4, attnpool=self.backbone.attnpool)
- else: # use default mean pool
- results = self.roi_heads.forward_with_given_boxes(features, detected_instances, res5=self.backbone.layer4)
- else: # default setting
- results = self.roi_heads.forward_with_given_boxes(features, detected_instances)
-
- #visualize_proposals(batched_inputs, proposals, self.input_format)
- if do_postprocess:
- assert not torch.jit.is_scripting(), "Scripting is not supported for postprocess."
- return GeneralizedRCNN._postprocess(results, batched_inputs, images.image_sizes)
- else:
- return results
-
- def preprocess_image(self, batched_inputs: List[Dict[str, torch.Tensor]]):
- """
- Normalize, pad and batch the input images.
- """
- images = [x["image"].to(self.device) for x in batched_inputs]
- if self.div_pixel:
- images = [((x / 255.0) - self.pixel_mean) / self.pixel_std for x in images]
- else:
- images = [(x - self.pixel_mean) / self.pixel_std for x in images]
- images = ImageList.from_tensors(images, self.backbone.size_divisibility)
- return images
-
- @staticmethod
- def _postprocess(instances, batched_inputs: List[Dict[str, torch.Tensor]], image_sizes):
- """
- Rescale the output instances to the target size.
- """
- # note: private function; subject to changes
- processed_results = []
- for results_per_image, input_per_image, image_size in zip(
- instances, batched_inputs, image_sizes
- ):
- height = input_per_image.get("height", image_size[0])
- width = input_per_image.get("width", image_size[1])
- r = detector_postprocess(results_per_image, height, width)
- processed_results.append({"instances": r})
- return processed_results
-
-
-@META_ARCH_REGISTRY.register()
-class ProposalNetwork(nn.Module):
- """
- A meta architecture that only predicts object proposals.
- """
-
- @configurable
- def __init__(
- self,
- *,
- backbone: Backbone,
- proposal_generator: nn.Module,
- pixel_mean: Tuple[float],
- pixel_std: Tuple[float],
- input_format: Optional[str] = None,
- ):
- """
- Args:
- backbone: a backbone module, must follow detectron2's backbone interface
- proposal_generator: a module that generates proposals using backbone features
- pixel_mean, pixel_std: list or tuple with #channels element, representing
- the per-channel mean and std to be used to normalize the input image
- """
- super().__init__()
- self.backbone = backbone
- self.proposal_generator = proposal_generator
- self.register_buffer("pixel_mean", torch.tensor(pixel_mean).view(-1, 1, 1), False)
- self.register_buffer("pixel_std", torch.tensor(pixel_std).view(-1, 1, 1), False)
-        if np.sum(pixel_mean) < 3.0: # convert pixel values to the range [0.0, 1.0] by dividing by 255.0
- assert input_format == 'RGB'
- self.div_pixel = True
- else: # default setting
- self.div_pixel = False
-
- @classmethod
- def from_config(cls, cfg):
- backbone = build_backbone(cfg)
- return {
- "backbone": backbone,
- "proposal_generator": build_proposal_generator(cfg, backbone.output_shape()),
- "input_format": cfg.INPUT.FORMAT,
- "pixel_mean": cfg.MODEL.PIXEL_MEAN,
- "pixel_std": cfg.MODEL.PIXEL_STD,
- }
-
- @property
- def device(self):
- return self.pixel_mean.device
-
- def forward(self, batched_inputs):
- """
- Args:
- Same as in :class:`GeneralizedRCNN.forward`
-
- Returns:
- list[dict]:
- Each dict is the output for one input image.
- The dict contains one key "proposals" whose value is a
- :class:`Instances` with keys "proposal_boxes" and "objectness_logits".
- """
- images = [x["image"].to(self.device) for x in batched_inputs]
- if self.div_pixel:
- images = [((x / 255.0) - self.pixel_mean) / self.pixel_std for x in images]
- else:
- images = [(x - self.pixel_mean) / self.pixel_std for x in images]
- images = ImageList.from_tensors(images, self.backbone.size_divisibility)
- features = self.backbone(images.tensor)
-
- if "instances" in batched_inputs[0]:
- gt_instances = [x["instances"].to(self.device) for x in batched_inputs]
- elif "targets" in batched_inputs[0]:
- log_first_n(
- logging.WARN, "'targets' in the model inputs is now renamed to 'instances'!", n=10
- )
- gt_instances = [x["targets"].to(self.device) for x in batched_inputs]
- else:
- gt_instances = None
- proposals, proposal_losses = self.proposal_generator(images, features, gt_instances)
- # In training, the proposals are not useful at all but we generate them anyway.
- # This makes RPN-only models about 5% slower.
- if self.training:
- return proposal_losses
-
- processed_results = []
- for results_per_image, input_per_image, image_size in zip(
- proposals, batched_inputs, images.image_sizes
- ):
- height = input_per_image.get("height", image_size[0])
- width = input_per_image.get("width", image_size[1])
- r = detector_postprocess(results_per_image, height, width)
- processed_results.append({"proposals": r})
- return processed_results
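Aside (not part of the diff): the deleted rcnn.py above switches its preprocessing on whether the configured per-channel means already look like [0, 1] values. A minimal standalone sketch of that convention; the helper name and the ImageNet-style mean/std values are illustrative, not taken from the repo:

import numpy as np
import torch

def normalize_image(img: torch.Tensor, pixel_mean, pixel_std) -> torch.Tensor:
    """img: (C, H, W) tensor holding raw pixel values in [0, 255]."""
    mean = torch.tensor(pixel_mean).view(-1, 1, 1)
    std = torch.tensor(pixel_std).view(-1, 1, 1)
    if np.sum(pixel_mean) < 3.0:
        # means/stds are expressed in [0, 1], so rescale the raw image first
        img = img / 255.0
    return (img - mean) / std

x = torch.randint(0, 256, (3, 8, 8)).float()
print(normalize_image(x, [0.485, 0.456, 0.406], [0.229, 0.224, 0.225]).shape)  # torch.Size([3, 8, 8])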
diff --git a/spaces/CarlDennis/HYTTS/attentions.py b/spaces/CarlDennis/HYTTS/attentions.py
deleted file mode 100644
index 86bc73b5fe98cc7b443e9078553920346c996707..0000000000000000000000000000000000000000
--- a/spaces/CarlDennis/HYTTS/attentions.py
+++ /dev/null
@@ -1,300 +0,0 @@
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import commons
-from modules import LayerNorm
-
-
-class Encoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.window_size = window_size
-
- self.drop = nn.Dropout(p_dropout)
- self.attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask):
- attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.attn_layers[i](x, x, attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class Decoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
-
- self.drop = nn.Dropout(p_dropout)
- self.self_attn_layers = nn.ModuleList()
- self.norm_layers_0 = nn.ModuleList()
- self.encdec_attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init))
- self.norm_layers_0.append(LayerNorm(hidden_channels))
- self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask, h, h_mask):
- """
- x: decoder input
- h: encoder output
- """
- self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype)
- encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.self_attn_layers[i](x, x, self_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_0[i](x + y)
-
- y = self.encdec_attn_layers[i](x, h, encdec_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class MultiHeadAttention(nn.Module):
- def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False):
- super().__init__()
- assert channels % n_heads == 0
-
- self.channels = channels
- self.out_channels = out_channels
- self.n_heads = n_heads
- self.p_dropout = p_dropout
- self.window_size = window_size
- self.heads_share = heads_share
- self.block_length = block_length
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
- self.attn = None
-
- self.k_channels = channels // n_heads
- self.conv_q = nn.Conv1d(channels, channels, 1)
- self.conv_k = nn.Conv1d(channels, channels, 1)
- self.conv_v = nn.Conv1d(channels, channels, 1)
- self.conv_o = nn.Conv1d(channels, out_channels, 1)
- self.drop = nn.Dropout(p_dropout)
-
- if window_size is not None:
- n_heads_rel = 1 if heads_share else n_heads
- rel_stddev = self.k_channels**-0.5
- self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
- self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
-
- nn.init.xavier_uniform_(self.conv_q.weight)
- nn.init.xavier_uniform_(self.conv_k.weight)
- nn.init.xavier_uniform_(self.conv_v.weight)
- if proximal_init:
- with torch.no_grad():
- self.conv_k.weight.copy_(self.conv_q.weight)
- self.conv_k.bias.copy_(self.conv_q.bias)
-
- def forward(self, x, c, attn_mask=None):
- q = self.conv_q(x)
- k = self.conv_k(c)
- v = self.conv_v(c)
-
- x, self.attn = self.attention(q, k, v, mask=attn_mask)
-
- x = self.conv_o(x)
- return x
-
- def attention(self, query, key, value, mask=None):
- # reshape [b, d, t] -> [b, n_h, t, d_k]
- b, d, t_s, t_t = (*key.size(), query.size(2))
- query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
- key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
- value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-
- scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
- if self.window_size is not None:
- assert t_s == t_t, "Relative attention is only available for self-attention."
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
- rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings)
- scores_local = self._relative_position_to_absolute_position(rel_logits)
- scores = scores + scores_local
- if self.proximal_bias:
- assert t_s == t_t, "Proximal bias is only available for self-attention."
- scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype)
- if mask is not None:
- scores = scores.masked_fill(mask == 0, -1e4)
- if self.block_length is not None:
- assert t_s == t_t, "Local attention is only available for self-attention."
- block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length)
- scores = scores.masked_fill(block_mask == 0, -1e4)
- p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s]
- p_attn = self.drop(p_attn)
- output = torch.matmul(p_attn, value)
- if self.window_size is not None:
- relative_weights = self._absolute_position_to_relative_position(p_attn)
- value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s)
- output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings)
- output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t]
- return output, p_attn
-
- def _matmul_with_relative_values(self, x, y):
- """
- x: [b, h, l, m]
- y: [h or 1, m, d]
- ret: [b, h, l, d]
- """
- ret = torch.matmul(x, y.unsqueeze(0))
- return ret
-
- def _matmul_with_relative_keys(self, x, y):
- """
- x: [b, h, l, d]
- y: [h or 1, m, d]
- ret: [b, h, l, m]
- """
- ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
- return ret
-
- def _get_relative_embeddings(self, relative_embeddings, length):
- max_relative_position = 2 * self.window_size + 1
- # Pad first before slice to avoid using cond ops.
- pad_length = max(length - (self.window_size + 1), 0)
- slice_start_position = max((self.window_size + 1) - length, 0)
- slice_end_position = slice_start_position + 2 * length - 1
- if pad_length > 0:
- padded_relative_embeddings = F.pad(
- relative_embeddings,
- commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]))
- else:
- padded_relative_embeddings = relative_embeddings
- used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position]
- return used_relative_embeddings
-
- def _relative_position_to_absolute_position(self, x):
- """
- x: [b, h, l, 2*l-1]
- ret: [b, h, l, l]
- """
- batch, heads, length, _ = x.size()
- # Concat columns of pad to shift from relative to absolute indexing.
- x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]]))
-
-        # Concat extra elements so as to add up to shape (len+1, 2*len-1).
- x_flat = x.view([batch, heads, length * 2 * length])
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]]))
-
- # Reshape and slice out the padded elements.
- x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:]
- return x_final
-
- def _absolute_position_to_relative_position(self, x):
- """
- x: [b, h, l, l]
- ret: [b, h, l, 2*l-1]
- """
- batch, heads, length, _ = x.size()
-        # pad along columns
- x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]]))
- x_flat = x.view([batch, heads, length**2 + length*(length -1)])
- # add 0's in the beginning that will skew the elements after reshape
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
- x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:]
- return x_final
-
- def _attention_bias_proximal(self, length):
- """Bias for self-attention to encourage attention to close positions.
- Args:
- length: an integer scalar.
- Returns:
- a Tensor with shape [1, 1, length, length]
- """
- r = torch.arange(length, dtype=torch.float32)
- diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
- return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
-
-
-class FFN(nn.Module):
- def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.activation = activation
- self.causal = causal
-
- if causal:
- self.padding = self._causal_padding
- else:
- self.padding = self._same_padding
-
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
- self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
- self.drop = nn.Dropout(p_dropout)
-
- def forward(self, x, x_mask):
- x = self.conv_1(self.padding(x * x_mask))
- if self.activation == "gelu":
- x = x * torch.sigmoid(1.702 * x)
- else:
- x = torch.relu(x)
- x = self.drop(x)
- x = self.conv_2(self.padding(x * x_mask))
- return x * x_mask
-
- def _causal_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = self.kernel_size - 1
- pad_r = 0
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
-
- def _same_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = (self.kernel_size - 1) // 2
- pad_r = self.kernel_size // 2
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
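Aside (not part of the diff): a standalone sketch of the proximal attention bias computed by MultiHeadAttention._attention_bias_proximal above; nearby positions get a bias near zero, distant positions an increasingly negative one:

import torch

def attention_bias_proximal(length: int) -> torch.Tensor:
    r = torch.arange(length, dtype=torch.float32)
    diff = r.unsqueeze(0) - r.unsqueeze(1)               # pairwise position offsets
    return -torch.log1p(torch.abs(diff))[None, None]     # shape [1, 1, length, length]

print(attention_bias_proximal(4)[0, 0])
# tensor([[ 0.0000, -0.6931, -1.0986, -1.3863],
#         [-0.6931,  0.0000, -0.6931, -1.0986],
#         [-1.0986, -0.6931,  0.0000, -0.6931],
#         [-1.3863, -1.0986, -0.6931,  0.0000]])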
diff --git a/spaces/ChandraMohanNayal/AutoGPT/autogpt/commands/improve_code.py b/spaces/ChandraMohanNayal/AutoGPT/autogpt/commands/improve_code.py
deleted file mode 100644
index e3440d8b7c6ee8cb62d73df48623ab757c973c59..0000000000000000000000000000000000000000
--- a/spaces/ChandraMohanNayal/AutoGPT/autogpt/commands/improve_code.py
+++ /dev/null
@@ -1,29 +0,0 @@
-from __future__ import annotations
-
-import json
-
-from autogpt.llm_utils import call_ai_function
-
-
-def improve_code(suggestions: list[str], code: str) -> str:
- """
-    A function that takes in code and suggestions and returns a response from the
-    create chat completion API call.
-
- Parameters:
- suggestions (List): A list of suggestions around what needs to be improved.
- code (str): Code to be improved.
- Returns:
- A result string from create chat completion. Improved code in response.
- """
-
- function_string = (
- "def generate_improved_code(suggestions: List[str], code: str) -> str:"
- )
- args = [json.dumps(suggestions), code]
- description_string = (
- "Improves the provided code based on the suggestions"
- " provided, making no other changes."
- )
-
- return call_ai_function(function_string, args, description_string)
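Aside (not part of the diff): an illustrative sketch of how improve_code above packages its inputs before delegating to call_ai_function; the suggestion list and code string below are made up:

import json

suggestions = ["add type hints", "handle the empty-list case"]
code = "def add(a, b):\n    return a + b"

function_string = (
    "def generate_improved_code(suggestions: List[str], code: str) -> str:"
)
args = [json.dumps(suggestions), code]  # suggestions serialized as JSON, code passed verbatim

print(function_string)
print(args[0])  # ["add type hints", "handle the empty-list case"]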
diff --git a/spaces/ChrisCaviar/ControlNet-v1-1/app_canny.py b/spaces/ChrisCaviar/ControlNet-v1-1/app_canny.py
deleted file mode 100644
index a94b49d2124b9983efc057f1103484bd6f6d374c..0000000000000000000000000000000000000000
--- a/spaces/ChrisCaviar/ControlNet-v1-1/app_canny.py
+++ /dev/null
@@ -1,106 +0,0 @@
-#!/usr/bin/env python
-
-import gradio as gr
-
-from utils import randomize_seed_fn
-
-
-def create_demo(process, max_images=12, default_num_images=3):
- with gr.Blocks() as demo:
- with gr.Row():
- with gr.Column():
- image = gr.Image()
- prompt = gr.Textbox(label='Prompt')
- run_button = gr.Button('Run')
- with gr.Accordion('Advanced options', open=False):
- num_samples = gr.Slider(label='Number of images',
- minimum=1,
- maximum=max_images,
- value=default_num_images,
- step=1)
- image_resolution = gr.Slider(label='Image resolution',
- minimum=256,
- maximum=512,
- value=512,
- step=256)
- canny_low_threshold = gr.Slider(
- label='Canny low threshold',
- minimum=1,
- maximum=255,
- value=100,
- step=1)
- canny_high_threshold = gr.Slider(
- label='Canny high threshold',
- minimum=1,
- maximum=255,
- value=200,
- step=1)
- num_steps = gr.Slider(label='Number of steps',
- minimum=1,
- maximum=100,
- value=20,
- step=1)
- guidance_scale = gr.Slider(label='Guidance scale',
- minimum=0.1,
- maximum=30.0,
- value=9.0,
- step=0.1)
- seed = gr.Slider(label='Seed',
- minimum=0,
- maximum=1000000,
- step=1,
- value=0,
- randomize=True)
- randomize_seed = gr.Checkbox(label='Randomize seed',
- value=True)
- a_prompt = gr.Textbox(
- label='Additional prompt',
- value='best quality, extremely detailed')
- n_prompt = gr.Textbox(
- label='Negative prompt',
- value=
- 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality'
- )
- with gr.Column():
- result = gr.Gallery(label='Output', show_label=False).style(
- columns=2, object_fit='scale-down')
- inputs = [
- image,
- prompt,
- a_prompt,
- n_prompt,
- num_samples,
- image_resolution,
- num_steps,
- guidance_scale,
- seed,
- canny_low_threshold,
- canny_high_threshold,
- ]
- prompt.submit(
- fn=randomize_seed_fn,
- inputs=[seed, randomize_seed],
- outputs=seed,
- ).then(
- fn=process,
- inputs=inputs,
- outputs=result,
- )
- run_button.click(
- fn=randomize_seed_fn,
- inputs=[seed, randomize_seed],
- outputs=seed,
- ).then(
- fn=process,
- inputs=inputs,
- outputs=result,
- api_name='canny',
- )
- return demo
-
-
-if __name__ == '__main__':
- from model import Model
- model = Model(task_name='Canny')
- demo = create_demo(model.process_canny)
- demo.queue().launch()
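Aside (not part of the diff): app_canny.py above wires randomize_seed_fn between the seed slider and both run events. One plausible implementation consistent with that wiring is sketched below; the real utils.py in the space may differ:

import random

def randomize_seed_fn(seed: int, randomize_seed: bool) -> int:
    # If the checkbox is ticked, ignore the incoming seed and draw a fresh one
    # within the slider's 0..1_000_000 range; otherwise pass the seed through.
    if randomize_seed:
        seed = random.randint(0, 1_000_000)
    return seed

print(randomize_seed_fn(42, False))  # 42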
diff --git a/spaces/CikeyQI/Yunzai/Yunzai/plugins/system/quit.js b/spaces/CikeyQI/Yunzai/Yunzai/plugins/system/quit.js
deleted file mode 100644
index 6c8d7f5b8101a4c5c813babfa3bf6055337d2b49..0000000000000000000000000000000000000000
--- a/spaces/CikeyQI/Yunzai/Yunzai/plugins/system/quit.js
+++ /dev/null
@@ -1,36 +0,0 @@
-import cfg from '../../lib/config/config.js'
-
-export class quit extends plugin {
- constructor () {
- super({
- name: 'notice',
- dsc: '自动退群',
- event: 'notice.group.increase'
- })
- }
-
- async accept () {
- if (this.e.user_id != this.e.self_id) return
-
- let other = cfg.other
- if (other.autoQuit <= 0) return
-
-    /** Check the masters: do not leave a group the bot was invited to by a master */
- let gl = await this.e.group.getMemberMap()
- for (let qq of cfg.masterQQ) {
- if (gl.has(Number(qq) || String(qq))) {
- logger.mark(`[主人拉群] ${this.e.group_id}`)
- return
- }
- }
-
-    /** Auto-quit the group */
- if (Array.from(gl).length <= other.autoQuit && !this.e.group.is_owner) {
- await this.e.reply('禁止拉群,已自动退出')
- logger.mark(`[自动退群] ${this.e.group_id}`)
- setTimeout(() => {
- this.e.group.quit()
- }, 2000)
- }
- }
-}
diff --git a/spaces/ClueAI/ChatYuan-large-v2/app.py b/spaces/ClueAI/ChatYuan-large-v2/app.py
deleted file mode 100644
index b4ed170dae69a219f7a8f80457521793a81fb6b5..0000000000000000000000000000000000000000
--- a/spaces/ClueAI/ChatYuan-large-v2/app.py
+++ /dev/null
@@ -1,310 +0,0 @@
-import os
-import gradio as gr
-import clueai
-import torch
-from transformers import T5Tokenizer, T5ForConditionalGeneration
-
-tokenizer = T5Tokenizer.from_pretrained("ClueAI/ChatYuan-large-v2")
-model = T5ForConditionalGeneration.from_pretrained("ClueAI/ChatYuan-large-v2")
-# Use GPU if available, otherwise fall back to CPU
-device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
-model.to(device)
-model.half()
-
-base_info = ""
-
-
-def preprocess(text):
- text = f"{base_info}{text}"
- text = text.replace("\n", "\\n").replace("\t", "\\t")
- return text
-
-
-def postprocess(text):
- return text.replace("\\n", "\n").replace("\\t", "\t").replace(
- '%20', ' ') #.replace(" ", " ")
-
-
-generate_config = {
- 'do_sample': True,
- 'top_p': 0.9,
- 'top_k': 50,
- 'temperature': 0.7,
- 'num_beams': 1,
- 'max_length': 1024,
- 'min_length': 3,
- 'no_repeat_ngram_size': 5,
- 'length_penalty': 0.6,
- 'return_dict_in_generate': True,
- 'output_scores': True
-}
-
-
-def answer(
- text,
- top_p,
- temperature,
- sample=True,
-):
-    '''sample: whether to sample; for generation tasks this can be set to True.
-    top_p: between 0 and 1; larger values produce more diverse output.'''
- text = preprocess(text)
- encoding = tokenizer(text=[text],
- truncation=True,
- padding=True,
- max_length=1024,
- return_tensors="pt").to(device)
- if not sample:
- out = model.generate(**encoding,
- return_dict_in_generate=True,
- output_scores=False,
- max_new_tokens=1024,
- num_beams=1,
- length_penalty=0.6)
- else:
- out = model.generate(**encoding,
- return_dict_in_generate=True,
- output_scores=False,
- max_new_tokens=1024,
- do_sample=True,
- top_p=top_p,
- temperature=temperature,
- no_repeat_ngram_size=12)
- #out=model.generate(**encoding, **generate_config)
- out_text = tokenizer.batch_decode(out["sequences"],
- skip_special_tokens=True)
- return postprocess(out_text[0])
-
-
-def clear_session():
- return '', None
-
-
-def chatyuan_bot(input, history, top_p, temperature, num):
- history = history or []
- if len(history) > num:
- history = history[-num:]
-
- context = "\n".join([
- f"用户:{input_text}\n小元:{answer_text}"
- for input_text, answer_text in history
- ])
- #print(context)
-
- input_text = context + "\n用户:" + input + "\n小元:"
- input_text = input_text.strip()
- output_text = answer(input_text, top_p, temperature)
- print("open_model".center(20, "="))
- print(f"{input_text}\n{output_text}")
- #print("="*20)
- history.append((input, output_text))
- #print(history)
- return '', history, history
-
-
-def chatyuan_bot_regenerate(input, history, top_p, temperature, num):
-
- history = history or []
-
- if history:
- input = history[-1][0]
- history = history[:-1]
-
- if len(history) > num:
- history = history[-num:]
-
- context = "\n".join([
- f"用户:{input_text}\n小元:{answer_text}"
- for input_text, answer_text in history
- ])
- #print(context)
-
- input_text = context + "\n用户:" + input + "\n小元:"
- input_text = input_text.strip()
- output_text = answer(input_text, top_p, temperature)
- print("open_model".center(20, "="))
- print(f"{input_text}\n{output_text}")
- history.append((input, output_text))
- #print(history)
- return '', history, history
-
-
-block = gr.Blocks()
-
-with block as demo:
- gr.Markdown("""
-
-😉ChatYuan: 元语功能型对话大模型 | General Model for Dialogue with ChatYuan
-
-👏ChatYuan-large-v2是一个支持中英双语的功能型对话语言大模型,是继ChatYuan系列中ChatYuan-large-v1开源后的又一个开源模型。ChatYuan-large-v2使用了和 v1版本相同的技术方案,在微调数据、人类反馈强化学习、思维链等方面进行了优化。
-
-ChatYuan large v2 is an open-source large language model for dialogue, supports both Chinese and English languages, and in ChatGPT style.
-
-ChatYuan-large-v2是ChatYuan系列中以轻量化实现高质量效果的模型之一,用户可以在消费级显卡、 PC甚至手机上进行推理(INT4 最低只需 400M )。
-
-在Chatyuan-large-v1的原有功能的基础上,我们给模型进行了如下优化:
-- 新增了中英双语对话能力。
-- 新增了拒答能力。对于一些危险、有害的问题,学会了拒答处理。
-- 新增了代码生成功能。对于基础代码生成进行了一定程度优化。
-- 增强了基础能力。原有上下文问答、创意性写作能力明显提升。
-- 新增了表格生成功能。使生成的表格内容和格式更适配。
-- 增强了基础数学运算能力。
-- 最大长度token数扩展到4096。
-- 增强了模拟情景能力。
-
-Based on the original functions of Chatyuan-large-v1, we optimized the model as follows:
-- Added the ability to converse in both Chinese and English.
-- Added the ability to refuse to answer: the model declines some dangerous and harmful questions.
-- Added code generation functionality; basic code generation has been optimized to a certain extent.
-- Enhanced basic capabilities: the original contextual Q&A and creative writing abilities are significantly improved.
-- Added a table generation function, making generated table content and formatting more suitable.
-- Enhanced basic mathematical computing capabilities.
-- Expanded the maximum token length to 4096.
-- Enhanced the ability to simulate scenarios.
-
-👀PromptCLUE-large在1000亿token中文语料上预训练, 累计学习1.5万亿中文token, 并且在数百种任务上进行Prompt任务式训练. 针对理解类任务, 如分类、情感分析、抽取等, 可以自定义标签体系; 针对多种生成任务, 可以进行采样自由生成.
-
- ModelScope | Huggingface | 官网体验场 | ChatYuan-API | Github项目地址 | OpenI免费试用
-
-
- """)
-
-gui = gr.TabbedInterface(
- interface_list=[introduction, demo, demo_1],
- tab_names=["相关介绍 | Introduction", "开源模型 | Online Demo", "API调用"])
-gui.launch(quiet=True, show_api=False, share=False)
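Aside (not part of the diff): a minimal sketch of the preprocess/postprocess pair used by the ChatYuan demo above; newlines and tabs are escaped before tokenization and restored afterwards:

def preprocess(text: str) -> str:
    return text.replace("\n", "\\n").replace("\t", "\\t")

def postprocess(text: str) -> str:
    return text.replace("\\n", "\n").replace("\\t", "\t").replace("%20", " ")

s = "line one\nline two\tend"
assert postprocess(preprocess(s)) == s   # round-trips for plain text
print(preprocess(s))                     # line one\nline two\tend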
diff --git a/spaces/Cpp4App/Cpp4App/CDM/detect_compo/lib_ip/Bbox.py b/spaces/Cpp4App/Cpp4App/CDM/detect_compo/lib_ip/Bbox.py
deleted file mode 100644
index 5790d8d20751bad1133172b4ffbc0106d8d422c0..0000000000000000000000000000000000000000
--- a/spaces/Cpp4App/Cpp4App/CDM/detect_compo/lib_ip/Bbox.py
+++ /dev/null
@@ -1,122 +0,0 @@
-import numpy as np
-import CDM.detect_compo.lib_ip.ip_draw as draw
-
-
-class Bbox:
- def __init__(self, col_min, row_min, col_max, row_max):
- self.col_min = col_min
- self.row_min = row_min
- self.col_max = col_max
- self.row_max = row_max
-
- self.width = col_max - col_min
- self.height = row_max - row_min
- self.box_area = self.width * self.height
-
- def put_bbox(self):
- return self.col_min, self.row_min, self.col_max, self.row_max
-
- def bbox_cal_area(self):
- self.box_area = self.width * self.height
- return self.box_area
-
- def bbox_relation(self, bbox_b):
- """
- :return: -1 : a in b
- 0 : a, b are not intersected
- 1 : b in a
- 2 : a, b are identical or intersected
- """
- col_min_a, row_min_a, col_max_a, row_max_a = self.put_bbox()
- col_min_b, row_min_b, col_max_b, row_max_b = bbox_b.put_bbox()
-
- # if a is in b
- if col_min_a > col_min_b and row_min_a > row_min_b and col_max_a < col_max_b and row_max_a < row_max_b:
- return -1
- # if b is in a
- elif col_min_a < col_min_b and row_min_a < row_min_b and col_max_a > col_max_b and row_max_a > row_max_b:
- return 1
- # a and b are non-intersect
- elif (col_min_a > col_max_b or row_min_a > row_max_b) or (col_min_b > col_max_a or row_min_b > row_max_a):
- return 0
- # intersection
- else:
- return 2
-
- def bbox_relation_nms(self, bbox_b, bias=(0, 0)):
- '''
- Calculate the relation between two rectangles by nms
- :return: -1 : a in b
- 0 : a, b are not intersected
- 1 : b in a
- 2 : a, b are intersected
- '''
- col_min_a, row_min_a, col_max_a, row_max_a = self.put_bbox()
- col_min_b, row_min_b, col_max_b, row_max_b = bbox_b.put_bbox()
-
- bias_col, bias_row = bias
- # get the intersected area
- col_min_s = max(col_min_a - bias_col, col_min_b - bias_col)
- row_min_s = max(row_min_a - bias_row, row_min_b - bias_row)
- col_max_s = min(col_max_a + bias_col, col_max_b + bias_col)
- row_max_s = min(row_max_a + bias_row, row_max_b + bias_row)
- w = np.maximum(0, col_max_s - col_min_s)
- h = np.maximum(0, row_max_s - row_min_s)
- inter = w * h
- area_a = (col_max_a - col_min_a) * (row_max_a - row_min_a)
- area_b = (col_max_b - col_min_b) * (row_max_b - row_min_b)
- iou = inter / (area_a + area_b - inter)
- ioa = inter / self.box_area
- iob = inter / bbox_b.box_area
-
- if iou == 0 and ioa == 0 and iob == 0:
- return 0
-
- # import lib_ip.ip_preprocessing as pre
- # org_iou, _ = pre.read_img('uied/data/input/7.jpg', 800)
- # print(iou, ioa, iob)
- # board = draw.draw_bounding_box(org_iou, [self], color=(255,0,0))
- # draw.draw_bounding_box(board, [bbox_b], color=(0,255,0), show=True)
-
- # contained by b
- if ioa >= 1:
- return -1
- # contains b
- if iob >= 1:
- return 1
- # not intersected with each other
- # intersected
- if iou >= 0.02 or iob > 0.2 or ioa > 0.2:
- return 2
- # if iou == 0:
- # print('ioa:%.5f; iob:%.5f; iou:%.5f' % (ioa, iob, iou))
- return 0
-
- def bbox_cvt_relative_position(self, col_min_base, row_min_base):
- '''
- Convert to relative position based on base coordinator
- '''
- self.col_min += col_min_base
- self.col_max += col_min_base
- self.row_min += row_min_base
- self.row_max += row_min_base
-
- def bbox_merge(self, bbox_b):
- '''
- Merge two intersected bboxes
- '''
- col_min_a, row_min_a, col_max_a, row_max_a = self.put_bbox()
- col_min_b, row_min_b, col_max_b, row_max_b = bbox_b.put_bbox()
- col_min = min(col_min_a, col_min_b)
- col_max = max(col_max_a, col_max_b)
- row_min = min(row_min_a, row_min_b)
- row_max = max(row_max_a, row_max_b)
- new_bbox = Bbox(col_min, row_min, col_max, row_max)
- return new_bbox
-
- def bbox_padding(self, image_shape, pad):
- row, col = image_shape[:2]
- self.col_min = max(self.col_min - pad, 0)
- self.col_max = min(self.col_max + pad, col)
- self.row_min = max(self.row_min - pad, 0)
- self.row_max = min(self.row_max + pad, row)
\ No newline at end of file
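Aside (not part of the diff): a standalone sketch of the overlap measures that Bbox.bbox_relation_nms above bases its decision on, namely IoU plus the intersection ratio over each individual box; the helper name and sample boxes are made up:

def overlap_ratios(a, b):
    """a, b: boxes given as (col_min, row_min, col_max, row_max)."""
    col_min = max(a[0], b[0])
    row_min = max(a[1], b[1])
    col_max = min(a[2], b[2])
    row_max = min(a[3], b[3])
    inter = max(0, col_max - col_min) * max(0, row_max - row_min)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    iou = inter / (area_a + area_b - inter)
    return iou, inter / area_a, inter / area_b   # iou, ioa, iob

print(overlap_ratios((0, 0, 10, 10), (5, 5, 15, 15)))  # (~0.143, 0.25, 0.25): partially intersected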
diff --git a/spaces/DHEIVER/Classificacao.de.Imagens.de.Cardiomiopatia/app.py b/spaces/DHEIVER/Classificacao.de.Imagens.de.Cardiomiopatia/app.py
deleted file mode 100644
index c7a2092cad40bfe568b5749e799a3e545b1a4b01..0000000000000000000000000000000000000000
--- a/spaces/DHEIVER/Classificacao.de.Imagens.de.Cardiomiopatia/app.py
+++ /dev/null
@@ -1,40 +0,0 @@
-import gradio as gr
-import numpy as np
-from tensorflow.keras.preprocessing import image
-from tensorflow.keras.models import load_model
-from PIL import Image as PILImage
-import io
-
-# Load the trained model
-model = load_model('model_1.0000.h5')
-
-def predict_and_invert(input_image):
- input_image = input_image.resize((224, 224))
- img = image.img_to_array(input_image) / 255.0
- img = np.expand_dims(img, axis=0)
- img = img[:, :224, :224, :]
-
- prediction = model.predict(img)
-
- if prediction[0][0] > 0.5:
- result = "Anomalia cardíaca (Doente)"
- else:
- result = "Normal (Sem anomalia)"
-
-    img_inverted = 1 - img[0]  # Invert the image
-
- img_inverted_pil = PILImage.fromarray(np.uint8(img_inverted * 255))
- img_inverted_bytes = io.BytesIO()
- img_inverted_pil.save(img_inverted_bytes, format='PNG')
-
- return result, img_inverted_pil
-
-# Create the Gradio interface
-iface = gr.Interface(
- fn=predict_and_invert,
- inputs=gr.inputs.Image(type="pil", label="Carregar uma imagem"),
- outputs=["text", "image"]
-)
-
-# Launch the Gradio interface
-iface.launch()
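Aside (not part of the diff): a standalone sketch of the preprocessing and pixel inversion performed by predict_and_invert above, without loading the Keras model; the input image here is synthetic:

import numpy as np
from PIL import Image

img = Image.new("RGB", (300, 280), color=(120, 30, 200)).resize((224, 224))
arr = np.asarray(img, dtype=np.float32) / 255.0   # normalize to [0, 1]
batch = np.expand_dims(arr, axis=0)               # shape (1, 224, 224, 3), as the model expects
inverted = 1.0 - batch[0]                         # invert the image
out = Image.fromarray(np.uint8(inverted * 255))
print(batch.shape, out.size)                      # (1, 224, 224, 3) (224, 224)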
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/altair/utils/theme.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/altair/utils/theme.py
deleted file mode 100644
index 10dc6fa8a81646ed7e9fa8d6be4e1634ec14e7d8..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/altair/utils/theme.py
+++ /dev/null
@@ -1,10 +0,0 @@
-"""Utilities for registering and working with themes"""
-
-from .plugin_registry import PluginRegistry
-from typing import Callable
-
-ThemeType = Callable[..., dict]
-
-
-class ThemeRegistry(PluginRegistry[ThemeType]):
- pass
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/click/parser.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/click/parser.py
deleted file mode 100644
index 5fa7adfac842bfa5689fd1a41ae4017be1ebff6f..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/click/parser.py
+++ /dev/null
@@ -1,529 +0,0 @@
-"""
-This module started out as largely a copy paste from the stdlib's
-optparse module with the features removed that we do not need from
-optparse because we implement them in Click on a higher level (for
-instance type handling, help formatting and a lot more).
-
-The plan is to remove more and more from here over time.
-
-The reason this is a different module and not optparse from the stdlib
-is that there are differences in 2.x and 3.x about the error messages
-generated and optparse in the stdlib uses gettext for no good reason
-and might cause us issues.
-
-Click uses parts of optparse written by Gregory P. Ward and maintained
-by the Python Software Foundation. This is limited to code in parser.py.
-
-Copyright 2001-2006 Gregory P. Ward. All rights reserved.
-Copyright 2002-2006 Python Software Foundation. All rights reserved.
-"""
-# This code uses parts of optparse written by Gregory P. Ward and
-# maintained by the Python Software Foundation.
-# Copyright 2001-2006 Gregory P. Ward
-# Copyright 2002-2006 Python Software Foundation
-import typing as t
-from collections import deque
-from gettext import gettext as _
-from gettext import ngettext
-
-from .exceptions import BadArgumentUsage
-from .exceptions import BadOptionUsage
-from .exceptions import NoSuchOption
-from .exceptions import UsageError
-
-if t.TYPE_CHECKING:
- import typing_extensions as te
- from .core import Argument as CoreArgument
- from .core import Context
- from .core import Option as CoreOption
- from .core import Parameter as CoreParameter
-
-V = t.TypeVar("V")
-
-# Sentinel value that indicates an option was passed as a flag without a
-# value but is not a flag option. Option.consume_value uses this to
-# prompt or use the flag_value.
-_flag_needs_value = object()
-
-
-def _unpack_args(
- args: t.Sequence[str], nargs_spec: t.Sequence[int]
-) -> t.Tuple[t.Sequence[t.Union[str, t.Sequence[t.Optional[str]], None]], t.List[str]]:
- """Given an iterable of arguments and an iterable of nargs specifications,
- it returns a tuple with all the unpacked arguments at the first index
- and all remaining arguments as the second.
-
- The nargs specification is the number of arguments that should be consumed
- or `-1` to indicate that this position should eat up all the remainders.
-
- Missing items are filled with `None`.
- """
- args = deque(args)
- nargs_spec = deque(nargs_spec)
- rv: t.List[t.Union[str, t.Tuple[t.Optional[str], ...], None]] = []
- spos: t.Optional[int] = None
-
- def _fetch(c: "te.Deque[V]") -> t.Optional[V]:
- try:
- if spos is None:
- return c.popleft()
- else:
- return c.pop()
- except IndexError:
- return None
-
- while nargs_spec:
- nargs = _fetch(nargs_spec)
-
- if nargs is None:
- continue
-
- if nargs == 1:
- rv.append(_fetch(args))
- elif nargs > 1:
- x = [_fetch(args) for _ in range(nargs)]
-
- # If we're reversed, we're pulling in the arguments in reverse,
- # so we need to turn them around.
- if spos is not None:
- x.reverse()
-
- rv.append(tuple(x))
- elif nargs < 0:
- if spos is not None:
- raise TypeError("Cannot have two nargs < 0")
-
- spos = len(rv)
- rv.append(None)
-
- # spos is the position of the wildcard (star). If it's not `None`,
- # we fill it with the remainder.
- if spos is not None:
- rv[spos] = tuple(args)
- args = []
- rv[spos + 1 :] = reversed(rv[spos + 1 :])
-
- return tuple(rv), list(args)
-
-
-def split_opt(opt: str) -> t.Tuple[str, str]:
- first = opt[:1]
- if first.isalnum():
- return "", opt
- if opt[1:2] == first:
- return opt[:2], opt[2:]
- return first, opt[1:]
-
-
-def normalize_opt(opt: str, ctx: t.Optional["Context"]) -> str:
- if ctx is None or ctx.token_normalize_func is None:
- return opt
- prefix, opt = split_opt(opt)
- return f"{prefix}{ctx.token_normalize_func(opt)}"
-
-
-def split_arg_string(string: str) -> t.List[str]:
- """Split an argument string as with :func:`shlex.split`, but don't
- fail if the string is incomplete. Ignores a missing closing quote or
- incomplete escape sequence and uses the partial token as-is.
-
- .. code-block:: python
-
- split_arg_string("example 'my file")
- ["example", "my file"]
-
- split_arg_string("example my\\")
- ["example", "my"]
-
- :param string: String to split.
- """
- import shlex
-
- lex = shlex.shlex(string, posix=True)
- lex.whitespace_split = True
- lex.commenters = ""
- out = []
-
- try:
- for token in lex:
- out.append(token)
- except ValueError:
- # Raised when end-of-string is reached in an invalid state. Use
- # the partial token as-is. The quote or escape character is in
- # lex.state, not lex.token.
- out.append(lex.token)
-
- return out
-
-
-class Option:
- def __init__(
- self,
- obj: "CoreOption",
- opts: t.Sequence[str],
- dest: t.Optional[str],
- action: t.Optional[str] = None,
- nargs: int = 1,
- const: t.Optional[t.Any] = None,
- ):
- self._short_opts = []
- self._long_opts = []
- self.prefixes: t.Set[str] = set()
-
- for opt in opts:
- prefix, value = split_opt(opt)
- if not prefix:
- raise ValueError(f"Invalid start character for option ({opt})")
- self.prefixes.add(prefix[0])
- if len(prefix) == 1 and len(value) == 1:
- self._short_opts.append(opt)
- else:
- self._long_opts.append(opt)
- self.prefixes.add(prefix)
-
- if action is None:
- action = "store"
-
- self.dest = dest
- self.action = action
- self.nargs = nargs
- self.const = const
- self.obj = obj
-
- @property
- def takes_value(self) -> bool:
- return self.action in ("store", "append")
-
- def process(self, value: t.Any, state: "ParsingState") -> None:
- if self.action == "store":
- state.opts[self.dest] = value # type: ignore
- elif self.action == "store_const":
- state.opts[self.dest] = self.const # type: ignore
- elif self.action == "append":
- state.opts.setdefault(self.dest, []).append(value) # type: ignore
- elif self.action == "append_const":
- state.opts.setdefault(self.dest, []).append(self.const) # type: ignore
- elif self.action == "count":
- state.opts[self.dest] = state.opts.get(self.dest, 0) + 1 # type: ignore
- else:
- raise ValueError(f"unknown action '{self.action}'")
- state.order.append(self.obj)
-
-
-class Argument:
- def __init__(self, obj: "CoreArgument", dest: t.Optional[str], nargs: int = 1):
- self.dest = dest
- self.nargs = nargs
- self.obj = obj
-
- def process(
- self,
- value: t.Union[t.Optional[str], t.Sequence[t.Optional[str]]],
- state: "ParsingState",
- ) -> None:
- if self.nargs > 1:
- assert value is not None
- holes = sum(1 for x in value if x is None)
- if holes == len(value):
- value = None
- elif holes != 0:
- raise BadArgumentUsage(
- _("Argument {name!r} takes {nargs} values.").format(
- name=self.dest, nargs=self.nargs
- )
- )
-
- if self.nargs == -1 and self.obj.envvar is not None and value == ():
- # Replace empty tuple with None so that a value from the
- # environment may be tried.
- value = None
-
- state.opts[self.dest] = value # type: ignore
- state.order.append(self.obj)
-
-
-class ParsingState:
- def __init__(self, rargs: t.List[str]) -> None:
- self.opts: t.Dict[str, t.Any] = {}
- self.largs: t.List[str] = []
- self.rargs = rargs
- self.order: t.List["CoreParameter"] = []
-
-
-class OptionParser:
- """The option parser is an internal class that is ultimately used to
- parse options and arguments. It's modelled after optparse and brings
- a similar but vastly simplified API. It should generally not be used
- directly as the high level Click classes wrap it for you.
-
- It's not nearly as extensible as optparse or argparse as it does not
- implement features that are implemented on a higher level (such as
- types or defaults).
-
- :param ctx: optionally the :class:`~click.Context` where this parser
- should go with.
- """
-
- def __init__(self, ctx: t.Optional["Context"] = None) -> None:
- #: The :class:`~click.Context` for this parser. This might be
- #: `None` for some advanced use cases.
- self.ctx = ctx
- #: This controls how the parser deals with interspersed arguments.
- #: If this is set to `False`, the parser will stop on the first
- #: non-option. Click uses this to implement nested subcommands
- #: safely.
- self.allow_interspersed_args: bool = True
- #: This tells the parser how to deal with unknown options. By
- #: default it will error out (which is sensible), but there is a
- #: second mode where it will ignore it and continue processing
- #: after shifting all the unknown options into the resulting args.
- self.ignore_unknown_options: bool = False
-
- if ctx is not None:
- self.allow_interspersed_args = ctx.allow_interspersed_args
- self.ignore_unknown_options = ctx.ignore_unknown_options
-
- self._short_opt: t.Dict[str, Option] = {}
- self._long_opt: t.Dict[str, Option] = {}
- self._opt_prefixes = {"-", "--"}
- self._args: t.List[Argument] = []
-
- def add_option(
- self,
- obj: "CoreOption",
- opts: t.Sequence[str],
- dest: t.Optional[str],
- action: t.Optional[str] = None,
- nargs: int = 1,
- const: t.Optional[t.Any] = None,
- ) -> None:
- """Adds a new option named `dest` to the parser. The destination
- is not inferred (unlike with optparse) and needs to be explicitly
- provided. Action can be any of ``store``, ``store_const``,
- ``append``, ``append_const`` or ``count``.
-
- The `obj` can be used to identify the option in the order list
- that is returned from the parser.
- """
- opts = [normalize_opt(opt, self.ctx) for opt in opts]
- option = Option(obj, opts, dest, action=action, nargs=nargs, const=const)
- self._opt_prefixes.update(option.prefixes)
- for opt in option._short_opts:
- self._short_opt[opt] = option
- for opt in option._long_opts:
- self._long_opt[opt] = option
-
- def add_argument(
- self, obj: "CoreArgument", dest: t.Optional[str], nargs: int = 1
- ) -> None:
- """Adds a positional argument named `dest` to the parser.
-
- The `obj` can be used to identify the option in the order list
- that is returned from the parser.
- """
- self._args.append(Argument(obj, dest=dest, nargs=nargs))
-
- def parse_args(
- self, args: t.List[str]
- ) -> t.Tuple[t.Dict[str, t.Any], t.List[str], t.List["CoreParameter"]]:
- """Parses positional arguments and returns ``(values, args, order)``
- for the parsed options and arguments as well as the leftover
- arguments if there are any. The order is a list of objects as they
- appear on the command line. If arguments appear multiple times they
- will be memorized multiple times as well.
- """
- state = ParsingState(args)
- try:
- self._process_args_for_options(state)
- self._process_args_for_args(state)
- except UsageError:
- if self.ctx is None or not self.ctx.resilient_parsing:
- raise
- return state.opts, state.largs, state.order
-
- def _process_args_for_args(self, state: ParsingState) -> None:
- pargs, args = _unpack_args(
- state.largs + state.rargs, [x.nargs for x in self._args]
- )
-
- for idx, arg in enumerate(self._args):
- arg.process(pargs[idx], state)
-
- state.largs = args
- state.rargs = []
-
- def _process_args_for_options(self, state: ParsingState) -> None:
- while state.rargs:
- arg = state.rargs.pop(0)
- arglen = len(arg)
- # Double dashes always handled explicitly regardless of what
- # prefixes are valid.
- if arg == "--":
- return
- elif arg[:1] in self._opt_prefixes and arglen > 1:
- self._process_opts(arg, state)
- elif self.allow_interspersed_args:
- state.largs.append(arg)
- else:
- state.rargs.insert(0, arg)
- return
-
- # Say this is the original argument list:
- # [arg0, arg1, ..., arg(i-1), arg(i), arg(i+1), ..., arg(N-1)]
- # ^
- # (we are about to process arg(i)).
- #
- # Then rargs is [arg(i), ..., arg(N-1)] and largs is a *subset* of
- # [arg0, ..., arg(i-1)] (any options and their arguments will have
- # been removed from largs).
- #
- # The while loop will usually consume 1 or more arguments per pass.
- # If it consumes 1 (eg. arg is an option that takes no arguments),
- # then after _process_arg() is done the situation is:
- #
- # largs = subset of [arg0, ..., arg(i)]
- # rargs = [arg(i+1), ..., arg(N-1)]
- #
- # If allow_interspersed_args is false, largs will always be
- # *empty* -- still a subset of [arg0, ..., arg(i-1)], but
- # not a very interesting subset!
-
- def _match_long_opt(
- self, opt: str, explicit_value: t.Optional[str], state: ParsingState
- ) -> None:
- if opt not in self._long_opt:
- from difflib import get_close_matches
-
- possibilities = get_close_matches(opt, self._long_opt)
- raise NoSuchOption(opt, possibilities=possibilities, ctx=self.ctx)
-
- option = self._long_opt[opt]
- if option.takes_value:
- # At this point it's safe to modify rargs by injecting the
- # explicit value, because no exception is raised in this
- # branch. This means that the inserted value will be fully
- # consumed.
- if explicit_value is not None:
- state.rargs.insert(0, explicit_value)
-
- value = self._get_value_from_state(opt, option, state)
-
- elif explicit_value is not None:
- raise BadOptionUsage(
- opt, _("Option {name!r} does not take a value.").format(name=opt)
- )
-
- else:
- value = None
-
- option.process(value, state)
-
- def _match_short_opt(self, arg: str, state: ParsingState) -> None:
- stop = False
- i = 1
- prefix = arg[0]
- unknown_options = []
-
- for ch in arg[1:]:
- opt = normalize_opt(f"{prefix}{ch}", self.ctx)
- option = self._short_opt.get(opt)
- i += 1
-
- if not option:
- if self.ignore_unknown_options:
- unknown_options.append(ch)
- continue
- raise NoSuchOption(opt, ctx=self.ctx)
- if option.takes_value:
- # Any characters left in arg? Pretend they're the
- # next arg, and stop consuming characters of arg.
- if i < len(arg):
- state.rargs.insert(0, arg[i:])
- stop = True
-
- value = self._get_value_from_state(opt, option, state)
-
- else:
- value = None
-
- option.process(value, state)
-
- if stop:
- break
-
- # If we got any unknown options we recombine the string of the
- # remaining options and re-attach the prefix, then report that
- # to the state as new larg. This way there is basic combinatorics
- # that can be achieved while still ignoring unknown arguments.
- if self.ignore_unknown_options and unknown_options:
- state.largs.append(f"{prefix}{''.join(unknown_options)}")
-
- def _get_value_from_state(
- self, option_name: str, option: Option, state: ParsingState
- ) -> t.Any:
- nargs = option.nargs
-
- if len(state.rargs) < nargs:
- if option.obj._flag_needs_value:
- # Option allows omitting the value.
- value = _flag_needs_value
- else:
- raise BadOptionUsage(
- option_name,
- ngettext(
- "Option {name!r} requires an argument.",
- "Option {name!r} requires {nargs} arguments.",
- nargs,
- ).format(name=option_name, nargs=nargs),
- )
- elif nargs == 1:
- next_rarg = state.rargs[0]
-
- if (
- option.obj._flag_needs_value
- and isinstance(next_rarg, str)
- and next_rarg[:1] in self._opt_prefixes
- and len(next_rarg) > 1
- ):
- # The next arg looks like the start of an option, don't
- # use it as the value if omitting the value is allowed.
- value = _flag_needs_value
- else:
- value = state.rargs.pop(0)
- else:
- value = tuple(state.rargs[:nargs])
- del state.rargs[:nargs]
-
- return value
-
- def _process_opts(self, arg: str, state: ParsingState) -> None:
- explicit_value = None
- # Long option handling happens in two parts. The first part is
- # supporting explicitly attached values. In any case, we will try
- # to long match the option first.
- if "=" in arg:
- long_opt, explicit_value = arg.split("=", 1)
- else:
- long_opt = arg
- norm_long_opt = normalize_opt(long_opt, self.ctx)
-
- # At this point we will match the (assumed) long option through
- # the long option matching code. Note that this allows options
- # like "-foo" to be matched as long options.
- try:
- self._match_long_opt(norm_long_opt, explicit_value, state)
- except NoSuchOption:
- # At this point the long option matching failed, and we need
- # to try with short options. However there is a special rule
-            # which says that if we have a two-character option prefix
- # (applies to "--foo" for instance), we do not dispatch to the
- # short option code and will instead raise the no option
- # error.
- if arg[:2] not in self._opt_prefixes:
- self._match_short_opt(arg, state)
- return
-
- if not self.ignore_unknown_options:
- raise
-
- state.largs.append(arg)
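Aside (not part of the diff): split_arg_string above tolerates incomplete input; reproduced standalone below so the docstring examples can be run directly:

import shlex

def split_arg_string(string: str):
    lex = shlex.shlex(string, posix=True)
    lex.whitespace_split = True
    lex.commenters = ""
    out = []
    try:
        for token in lex:
            out.append(token)
    except ValueError:
        out.append(lex.token)   # keep the partial token from the unterminated quote/escape
    return out

print(split_arg_string("example 'my file"))  # ['example', 'my file']
print(split_arg_string("example my\\"))      # ['example', 'my']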
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/designspaceLib/split.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/designspaceLib/split.py
deleted file mode 100644
index 0b7cdf4be05dea1e810b4fddf4bf026bc1a50a85..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/designspaceLib/split.py
+++ /dev/null
@@ -1,475 +0,0 @@
-"""Allows building all the variable fonts of a DesignSpace version 5 by
-splitting the document into interpolable sub-space, then into each VF.
-"""
-
-from __future__ import annotations
-
-import itertools
-import logging
-import math
-from typing import Any, Callable, Dict, Iterator, List, Tuple, cast
-
-from fontTools.designspaceLib import (
- AxisDescriptor,
- AxisMappingDescriptor,
- DesignSpaceDocument,
- DiscreteAxisDescriptor,
- InstanceDescriptor,
- RuleDescriptor,
- SimpleLocationDict,
- SourceDescriptor,
- VariableFontDescriptor,
-)
-from fontTools.designspaceLib.statNames import StatNames, getStatNames
-from fontTools.designspaceLib.types import (
- ConditionSet,
- Range,
- Region,
- getVFUserRegion,
- locationInRegion,
- regionInRegion,
- userRegionToDesignRegion,
-)
-
-LOGGER = logging.getLogger(__name__)
-
-MakeInstanceFilenameCallable = Callable[
- [DesignSpaceDocument, InstanceDescriptor, StatNames], str
-]
-
-
-def defaultMakeInstanceFilename(
- doc: DesignSpaceDocument, instance: InstanceDescriptor, statNames: StatNames
-) -> str:
- """Default callable to synthesize an instance filename
- when makeNames=True, for instances that don't specify an instance name
- in the designspace. This part of the name generation can be overridden
- because it's not specified by the STAT table.
- """
- familyName = instance.familyName or statNames.familyNames.get("en")
- styleName = instance.styleName or statNames.styleNames.get("en")
- return f"{familyName}-{styleName}.ttf"
-
-
-def splitInterpolable(
- doc: DesignSpaceDocument,
- makeNames: bool = True,
- expandLocations: bool = True,
- makeInstanceFilename: MakeInstanceFilenameCallable = defaultMakeInstanceFilename,
-) -> Iterator[Tuple[SimpleLocationDict, DesignSpaceDocument]]:
- """Split the given DS5 into several interpolable sub-designspaces.
- There are as many interpolable sub-spaces as there are combinations of
- discrete axis values.
-
- E.g. with axes:
- - italic (discrete) Upright or Italic
- - style (discrete) Sans or Serif
- - weight (continuous) 100 to 900
-
- There are 4 sub-spaces in which the Weight axis should interpolate:
- (Upright, Sans), (Upright, Serif), (Italic, Sans) and (Italic, Serif).
-
- The sub-designspaces still include the full axis definitions and STAT data,
- but the rules, sources, variable fonts, instances are trimmed down to only
- keep what falls within the interpolable sub-space.
-
- Args:
- - ``makeNames``: Whether to compute the instance family and style
- names using the STAT data.
- - ``expandLocations``: Whether to turn all locations into "full"
- locations, including implicit default axis values where missing.
- - ``makeInstanceFilename``: Callable to synthesize an instance filename
- when makeNames=True, for instances that don't specify an instance name
- in the designspace. This part of the name generation can be overridden
- because it's not specified by the STAT table.
-
- .. versionadded:: 5.0
- """
- discreteAxes = []
- interpolableUserRegion: Region = {}
- for axis in doc.axes:
- if hasattr(axis, "values"):
- # Mypy doesn't support narrowing union types via hasattr()
- # TODO(Python 3.10): use TypeGuard
- # https://mypy.readthedocs.io/en/stable/type_narrowing.html
- axis = cast(DiscreteAxisDescriptor, axis)
- discreteAxes.append(axis)
- else:
- axis = cast(AxisDescriptor, axis)
- interpolableUserRegion[axis.name] = Range(
- axis.minimum,
- axis.maximum,
- axis.default,
- )
- valueCombinations = itertools.product(*[axis.values for axis in discreteAxes])
- for values in valueCombinations:
- discreteUserLocation = {
- discreteAxis.name: value
- for discreteAxis, value in zip(discreteAxes, values)
- }
- subDoc = _extractSubSpace(
- doc,
- {**interpolableUserRegion, **discreteUserLocation},
- keepVFs=True,
- makeNames=makeNames,
- expandLocations=expandLocations,
- makeInstanceFilename=makeInstanceFilename,
- )
- yield discreteUserLocation, subDoc
-
-
-def splitVariableFonts(
- doc: DesignSpaceDocument,
- makeNames: bool = False,
- expandLocations: bool = False,
- makeInstanceFilename: MakeInstanceFilenameCallable = defaultMakeInstanceFilename,
-) -> Iterator[Tuple[str, DesignSpaceDocument]]:
- """Convert each variable font listed in this document into a standalone
- designspace. This can be used to compile all the variable fonts from a
- format 5 designspace using tools that can only deal with 1 VF at a time.
-
- Args:
- - ``makeNames``: Whether to compute the instance family and style
- names using the STAT data.
- - ``expandLocations``: Whether to turn all locations into "full"
- locations, including implicit default axis values where missing.
- - ``makeInstanceFilename``: Callable to synthesize an instance filename
- when makeNames=True, for instances that don't specify an instance name
- in the designspace. This part of the name generation can be overridden
- because it's not specified by the STAT table.
-
- .. versionadded:: 5.0
- """
- # Make one DesignspaceDoc v5 for each variable font
- for vf in doc.getVariableFonts():
- vfUserRegion = getVFUserRegion(doc, vf)
- vfDoc = _extractSubSpace(
- doc,
- vfUserRegion,
- keepVFs=False,
- makeNames=makeNames,
- expandLocations=expandLocations,
- makeInstanceFilename=makeInstanceFilename,
- )
- vfDoc.lib = {**vfDoc.lib, **vf.lib}
- yield vf.name, vfDoc
-
-
-def convert5to4(
- doc: DesignSpaceDocument,
-) -> Dict[str, DesignSpaceDocument]:
- """Convert each variable font listed in this document into a standalone
- format 4 designspace. This can be used to compile all the variable fonts
- from a format 5 designspace using tools that only know about format 4.
-
- .. versionadded:: 5.0
- """
- vfs = {}
- for _location, subDoc in splitInterpolable(doc):
- for vfName, vfDoc in splitVariableFonts(subDoc):
- vfDoc.formatVersion = "4.1"
- vfs[vfName] = vfDoc
- return vfs
-
-
-def _extractSubSpace(
- doc: DesignSpaceDocument,
- userRegion: Region,
- *,
- keepVFs: bool,
- makeNames: bool,
- expandLocations: bool,
- makeInstanceFilename: MakeInstanceFilenameCallable,
-) -> DesignSpaceDocument:
- subDoc = DesignSpaceDocument()
- # Don't include STAT info
- # FIXME: (Jany) let's think about it. Not include = OK because the point of
- # the splitting is to build VFs and we'll use the STAT data of the full
- # document to generate the STAT of the VFs, so "no need" to have STAT data
- # in sub-docs. Counterpoint: what if someone wants to split this DS for
- # other purposes? Maybe for that it would be useful to also subset the STAT
- # data?
- # subDoc.elidedFallbackName = doc.elidedFallbackName
-
- def maybeExpandDesignLocation(object):
- if expandLocations:
- return object.getFullDesignLocation(doc)
- else:
- return object.designLocation
-
- for axis in doc.axes:
- range = userRegion[axis.name]
- if isinstance(range, Range) and hasattr(axis, "minimum"):
- # Mypy doesn't support narrowing union types via hasattr()
- # TODO(Python 3.10): use TypeGuard
- # https://mypy.readthedocs.io/en/stable/type_narrowing.html
- axis = cast(AxisDescriptor, axis)
- subDoc.addAxis(
- AxisDescriptor(
- # Same info
- tag=axis.tag,
- name=axis.name,
- labelNames=axis.labelNames,
- hidden=axis.hidden,
- # Subset range
- minimum=max(range.minimum, axis.minimum),
- default=range.default or axis.default,
- maximum=min(range.maximum, axis.maximum),
- map=[
- (user, design)
- for user, design in axis.map
- if range.minimum <= user <= range.maximum
- ],
- # Don't include STAT info
- axisOrdering=None,
- axisLabels=None,
- )
- )
-
- subDoc.axisMappings = mappings = []
- subDocAxes = {axis.name for axis in subDoc.axes}
- for mapping in doc.axisMappings:
- if not all(axis in subDocAxes for axis in mapping.inputLocation.keys()):
- continue
- if not all(axis in subDocAxes for axis in mapping.outputLocation.keys()):
- LOGGER.error(
- "In axis mapping from input %s, some output axes are not in the variable-font: %s",
- mapping.inputLocation,
- mapping.outputLocation,
- )
- continue
-
- mappingAxes = set()
- mappingAxes.update(mapping.inputLocation.keys())
- mappingAxes.update(mapping.outputLocation.keys())
- for axis in doc.axes:
- if axis.name not in mappingAxes:
- continue
- range = userRegion[axis.name]
- if (
- range.minimum != axis.minimum
- or (range.default is not None and range.default != axis.default)
- or range.maximum != axis.maximum
- ):
- LOGGER.error(
- "Limiting axis ranges used in <mapping> elements not supported: %s",
- axis.name,
- )
- continue
-
- mappings.append(
- AxisMappingDescriptor(
- inputLocation=mapping.inputLocation,
- outputLocation=mapping.outputLocation,
- )
- )
-
- # Don't include STAT info
- # subDoc.locationLabels = doc.locationLabels
-
- # Rules: subset them based on conditions
- designRegion = userRegionToDesignRegion(doc, userRegion)
- subDoc.rules = _subsetRulesBasedOnConditions(doc.rules, designRegion)
- subDoc.rulesProcessingLast = doc.rulesProcessingLast
-
- # Sources: keep only the ones that fall within the kept axis ranges
- for source in doc.sources:
- if not locationInRegion(doc.map_backward(source.designLocation), userRegion):
- continue
-
- subDoc.addSource(
- SourceDescriptor(
- filename=source.filename,
- path=source.path,
- font=source.font,
- name=source.name,
- designLocation=_filterLocation(
- userRegion, maybeExpandDesignLocation(source)
- ),
- layerName=source.layerName,
- familyName=source.familyName,
- styleName=source.styleName,
- muteKerning=source.muteKerning,
- muteInfo=source.muteInfo,
- mutedGlyphNames=source.mutedGlyphNames,
- )
- )
-
- # Copy family name translations from the old default source to the new default
- vfDefault = subDoc.findDefault()
- oldDefault = doc.findDefault()
- if vfDefault is not None and oldDefault is not None:
- vfDefault.localisedFamilyName = oldDefault.localisedFamilyName
-
- # Variable fonts: keep only the ones that fall within the kept axis ranges
- if keepVFs:
- # Note: call getVariableFont() to make the implicit VFs explicit
- for vf in doc.getVariableFonts():
- vfUserRegion = getVFUserRegion(doc, vf)
- if regionInRegion(vfUserRegion, userRegion):
- subDoc.addVariableFont(
- VariableFontDescriptor(
- name=vf.name,
- filename=vf.filename,
- axisSubsets=[
- axisSubset
- for axisSubset in vf.axisSubsets
- if isinstance(userRegion[axisSubset.name], Range)
- ],
- lib=vf.lib,
- )
- )
-
- # Instances: same as Sources + compute missing names
- for instance in doc.instances:
- if not locationInRegion(instance.getFullUserLocation(doc), userRegion):
- continue
-
- if makeNames:
- statNames = getStatNames(doc, instance.getFullUserLocation(doc))
- familyName = instance.familyName or statNames.familyNames.get("en")
- styleName = instance.styleName or statNames.styleNames.get("en")
- subDoc.addInstance(
- InstanceDescriptor(
- filename=instance.filename
- or makeInstanceFilename(doc, instance, statNames),
- path=instance.path,
- font=instance.font,
- name=instance.name or f"{familyName} {styleName}",
- userLocation={} if expandLocations else instance.userLocation,
- designLocation=_filterLocation(
- userRegion, maybeExpandDesignLocation(instance)
- ),
- familyName=familyName,
- styleName=styleName,
- postScriptFontName=instance.postScriptFontName
- or statNames.postScriptFontName,
- styleMapFamilyName=instance.styleMapFamilyName
- or statNames.styleMapFamilyNames.get("en"),
- styleMapStyleName=instance.styleMapStyleName
- or statNames.styleMapStyleName,
- localisedFamilyName=instance.localisedFamilyName
- or statNames.familyNames,
- localisedStyleName=instance.localisedStyleName
- or statNames.styleNames,
- localisedStyleMapFamilyName=instance.localisedStyleMapFamilyName
- or statNames.styleMapFamilyNames,
- localisedStyleMapStyleName=instance.localisedStyleMapStyleName
- or {},
- lib=instance.lib,
- )
- )
- else:
- subDoc.addInstance(
- InstanceDescriptor(
- filename=instance.filename,
- path=instance.path,
- font=instance.font,
- name=instance.name,
- userLocation={} if expandLocations else instance.userLocation,
- designLocation=_filterLocation(
- userRegion, maybeExpandDesignLocation(instance)
- ),
- familyName=instance.familyName,
- styleName=instance.styleName,
- postScriptFontName=instance.postScriptFontName,
- styleMapFamilyName=instance.styleMapFamilyName,
- styleMapStyleName=instance.styleMapStyleName,
- localisedFamilyName=instance.localisedFamilyName,
- localisedStyleName=instance.localisedStyleName,
- localisedStyleMapFamilyName=instance.localisedStyleMapFamilyName,
- localisedStyleMapStyleName=instance.localisedStyleMapStyleName,
- lib=instance.lib,
- )
- )
-
- subDoc.lib = doc.lib
-
- return subDoc
-
-
-def _conditionSetFrom(conditionSet: List[Dict[str, Any]]) -> ConditionSet:
- c: Dict[str, Range] = {}
- for condition in conditionSet:
- minimum, maximum = condition.get("minimum"), condition.get("maximum")
- c[condition["name"]] = Range(
- minimum if minimum is not None else -math.inf,
- maximum if maximum is not None else math.inf,
- )
- return c
-
-
-def _subsetRulesBasedOnConditions(
- rules: List[RuleDescriptor], designRegion: Region
-) -> List[RuleDescriptor]:
- # What rules to keep:
- # - Keep the rule if any conditionset is relevant.
- # - A conditionset is relevant if all conditions are relevant or it is empty.
- # - A condition is relevant if
- # - axis is point (C-AP),
- # - and point in condition's range (C-AP-in)
- # (in this case remove the condition because it's always true)
- # - else (C-AP-out) whole conditionset can be discarded (condition false
- # => conditionset false)
- # - axis is range (C-AR),
- # - (C-AR-all) and axis range fully contained in condition range: we can
- # scrap the condition because it's always true
- # - (C-AR-inter) and intersection(axis range, condition range) not empty:
- # keep the condition with the smaller range (= intersection)
- # - (C-AR-none) else, whole conditionset can be discarded
- newRules: List[RuleDescriptor] = []
- for rule in rules:
- newRule: RuleDescriptor = RuleDescriptor(
- name=rule.name, conditionSets=[], subs=rule.subs
- )
- for conditionset in rule.conditionSets:
- cs = _conditionSetFrom(conditionset)
- newConditionset: List[Dict[str, Any]] = []
- discardConditionset = False
- for selectionName, selectionValue in designRegion.items():
- # TODO: Ensure that all(key in conditionset for key in region.keys())?
- if selectionName not in cs:
- # raise Exception("Selection has different axes than the rules")
- continue
- if isinstance(selectionValue, (float, int)): # is point
- # Case C-AP-in
- if selectionValue in cs[selectionName]:
- pass # always matches, conditionset can stay empty for this one.
- # Case C-AP-out
- else:
- discardConditionset = True
- else: # is range
- # Case C-AR-all
- if selectionValue in cs[selectionName]:
- pass # always matches, conditionset can stay empty for this one.
- else:
- intersection = cs[selectionName].intersection(selectionValue)
- # Case C-AR-inter
- if intersection is not None:
- newConditionset.append(
- {
- "name": selectionName,
- "minimum": intersection.minimum,
- "maximum": intersection.maximum,
- }
- )
- # Case C-AR-none
- else:
- discardConditionset = True
- if not discardConditionset:
- newRule.conditionSets.append(newConditionset)
- if newRule.conditionSets:
- newRules.append(newRule)
-
- return newRules
-
-
-def _filterLocation(
- userRegion: Region,
- location: Dict[str, float],
-) -> Dict[str, float]:
- return {
- name: value
- for name, value in location.items()
- if name in userRegion and isinstance(userRegion[name], Range)
- }
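The deleted split.py is the designspaceLib splitter shipped with fontTools. For reference, a minimal usage sketch of the helpers it defines; the .designspace paths are placeholders:

    from fontTools.designspaceLib import DesignSpaceDocument
    from fontTools.designspaceLib.split import convert5to4, splitVariableFonts

    doc = DesignSpaceDocument.fromfile("MyFamily.designspace")  # placeholder input

    # One standalone designspace per variable font declared in the DS5 document.
    for vf_name, vf_doc in splitVariableFonts(doc):
        vf_doc.write(f"{vf_name}.designspace")

    # Or downgrade every variable font to a format 4.1 document in one call.
    for vf_name, vf_doc in convert5to4(doc).items():
        vf_doc.write(f"{vf_name}-v4.designspace")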
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ufoLib/plistlib.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ufoLib/plistlib.py
deleted file mode 100644
index 1f52f20a2b4836e39d3e292496928185dfe08534..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ufoLib/plistlib.py
+++ /dev/null
@@ -1,46 +0,0 @@
-"""DEPRECATED - This module is kept here only as a backward compatibility shim
-for the old ufoLib.plistlib module, which was moved to fontTools.misc.plistlib.
-Please use the latter instead.
-"""
-from fontTools.misc.plistlib import dump, dumps, load, loads
-from fontTools.misc.textTools import tobytes
-
-# The following functions were part of the old py2-like ufoLib.plistlib API.
-# They are kept only for backward compatibility.
-from fontTools.ufoLib.utils import deprecated
-
-
-@deprecated("Use 'fontTools.misc.plistlib.load' instead")
-def readPlist(path_or_file):
- did_open = False
- if isinstance(path_or_file, str):
- path_or_file = open(path_or_file, "rb")
- did_open = True
- try:
- return load(path_or_file, use_builtin_types=False)
- finally:
- if did_open:
- path_or_file.close()
-
-
-@deprecated("Use 'fontTools.misc.plistlib.dump' instead")
-def writePlist(value, path_or_file):
- did_open = False
- if isinstance(path_or_file, str):
- path_or_file = open(path_or_file, "wb")
- did_open = True
- try:
- dump(value, path_or_file, use_builtin_types=False)
- finally:
- if did_open:
- path_or_file.close()
-
-
-@deprecated("Use 'fontTools.misc.plistlib.loads' instead")
-def readPlistFromString(data):
- return loads(tobytes(data, encoding="utf-8"), use_builtin_types=False)
-
-
-@deprecated("Use 'fontTools.misc.plistlib.dumps' instead")
-def writePlistToString(value):
- return dumps(value, use_builtin_types=False)
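Since every function in the shim above only forwards to fontTools.misc.plistlib, the replacements that its deprecation messages point to are used like this (the file name is a placeholder):

    from fontTools.misc.plistlib import dump, load

    with open("Info.plist", "rb") as f:   # placeholder plist file
        data = load(f)                    # replaces readPlist()

    with open("Info.plist", "wb") as f:
        dump(data, f)                     # replaces writePlist()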
diff --git a/spaces/Dantra1/CeliaSensei/README.md b/spaces/Dantra1/CeliaSensei/README.md
deleted file mode 100644
index 2e44ec5507a21c84647346865c876ce2b48db560..0000000000000000000000000000000000000000
--- a/spaces/Dantra1/CeliaSensei/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Vits Models
-emoji: 🏃
-colorFrom: pink
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.17.0
-app_file: app.py
-pinned: false
-license: apache-2.0
-duplicated_from: sayashi/vits-models
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Dao3/image-to-video/app.py b/spaces/Dao3/image-to-video/app.py
deleted file mode 100644
index 7dbb0a7692a79a4116b7fcd856e4d74c8d03e28a..0000000000000000000000000000000000000000
--- a/spaces/Dao3/image-to-video/app.py
+++ /dev/null
@@ -1,106 +0,0 @@
-import gradio as gr
-from transformers import pipeline
-import io, base64
-from PIL import Image
-import numpy as np
-import tensorflow as tf
-import mediapy
-import os
-import sys
-from huggingface_hub import snapshot_download
-from image_tools.sizes import resize_and_crop
-
-os.system("git clone https://github.com/google-research/frame-interpolation")
-sys.path.append("frame-interpolation")
-from eval import interpolator, util
-
-ffmpeg_path = util.get_ffmpeg_path()
-mediapy.set_ffmpeg(ffmpeg_path)
-
-model = snapshot_download(repo_id="akhaliq/frame-interpolation-film-style")
-interpolator = interpolator.Interpolator(model, None)
-
-def resize(width, img):
- basewidth = width
- img = Image.open(img)
- wpercent = (basewidth / float(img.size[0]))
- hsize = int((float(img.size[1]) * float(wpercent)))
- img = img.resize((basewidth, hsize), Image.ANTIALIAS)
- return img
-
-def resize_img(img1, img2, output_name):
- img_target_size = Image.open(img1)
- img_to_resize = resize_and_crop(
- img2,
- (img_target_size.size[0], img_target_size.size[1]),
- crop_origin="middle"
- )
- img_to_resize.save(output_name)
-
-def generate_interpolation(frame1, frame2, frame3, frame4, frame5, frame6, times_to_interpolate, fps):
-
- frame1 = resize(256, frame1)
- frame2 = resize(256, frame2)
- frame3 = resize(256, frame3)
- frame4 = resize(256, frame4)
- frame5 = resize(256, frame5)
- frame6 = resize(256, frame6)
-
- frame1.save("test1.png")
- frame2.save("test2.png")
- frame3.save("test3.png")
- frame4.save("test4.png")
- frame5.save("test5.png")
- frame6.save("test6.png")
-
- resize_img("test1.png", "test2.png", "resized_img2.png")
- resize_img("test1.png", "test3.png", "resized_img3.png")
- resize_img("test1.png", "test4.png", "resized_img4.png")
- resize_img("test1.png", "test5.png", "resized_img5.png")
- resize_img("test1.png", "test6.png", "resized_img6.png")
-
- input_frames = ["test1.png", "resized_img2.png", "resized_img3.png", "resized_img4.png", "resized_img5.png", "resized_img6.png"]
-
- frames = list(util.interpolate_recursively_from_files(input_frames, times_to_interpolate, interpolator))
-
- mediapy.write_video("out.mp4", frames, fps=fps)
-
- return "out.mp4"
-
-demo = gr.Blocks()
-
-with demo:
- with gr.Row():
-
- # Left column (inputs)
- with gr.Column():
-
- with gr.Row():
- # upload images and get image strings
- input_arr = [
- gr.inputs.Image(type='filepath', label="Frame 1"),
- gr.inputs.Image(type='filepath', label="Frame 2"),
- gr.inputs.Image(type='filepath', label="Frame 3"),
- gr.inputs.Image(type='filepath', label="Frame 4"),
- gr.inputs.Image(type='filepath', label="Frame 5"),
- gr.inputs.Image(type='filepath', label="Frame 6"),
- ]
-
- with gr.Row():
- input_arr.append(gr.inputs.Slider(minimum=2, maximum=10, step=1, label="Times to Interpolate"))
- input_arr.append(gr.inputs.Slider(minimum=15, maximum=60, step=1, label="fps"))
-
- # Rows of instructions & buttons
- with gr.Row():
- gr.Markdown("After uploading some images, hit the 'Generate Video' button to create a short video!")
- button_gen_video = gr.Button("Generate Video")
-
-
- # Right column (outputs)
- with gr.Column():
- output_interpolation = gr.Video(label="Generated Video")
-
- # Bind functions to buttons
- button_gen_video.click(fn=generate_interpolation, inputs=input_arr, outputs=output_interpolation)
-
-demo.launch(debug=True, enable_queue=True)
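The heart of the app above is the recursive FILM interpolation call; stripped of the Gradio plumbing it reduces to roughly this, assuming the same frame-interpolation checkout and downloaded checkpoint as in the script (the PNG names are placeholders):

    # Reuses `util`, `interpolator` and `mediapy` exactly as set up in the app above.
    frames = list(util.interpolate_recursively_from_files(
        ["frame_a.png", "frame_b.png"], 3, interpolator))  # 3 rounds -> 2**3 - 1 in-between frames per pair
    mediapy.write_video("pair.mp4", frames, fps=24)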
diff --git a/spaces/Datasculptor/MusicGen/audiocraft/modules/conditioners.py b/spaces/Datasculptor/MusicGen/audiocraft/modules/conditioners.py
deleted file mode 100644
index 82792316024b88d4c5c38b0a28f443627771d509..0000000000000000000000000000000000000000
--- a/spaces/Datasculptor/MusicGen/audiocraft/modules/conditioners.py
+++ /dev/null
@@ -1,990 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from collections import defaultdict
-from copy import deepcopy
-from dataclasses import dataclass, field
-from itertools import chain
-import logging
-import math
-import random
-import re
-import typing as tp
-import warnings
-
-from einops import rearrange
-from num2words import num2words
-import spacy
-from transformers import T5EncoderModel, T5Tokenizer # type: ignore
-import torchaudio
-import torch
-from torch import nn
-from torch import Tensor
-import torch.nn.functional as F
-from torch.nn.utils.rnn import pad_sequence
-
-from .streaming import StreamingModule
-from .transformer import create_sin_embedding
-from ..data.audio_dataset import SegmentInfo
-from ..utils.autocast import TorchAutocast
-from ..utils.utils import hash_trick, length_to_mask, collate
-
-
-logger = logging.getLogger(__name__)
-TextCondition = tp.Optional[str] # a text condition can be a string or None (if doesn't exist)
-ConditionType = tp.Tuple[Tensor, Tensor] # condition, mask
-
-
-class WavCondition(tp.NamedTuple):
- wav: Tensor
- length: Tensor
- path: tp.List[tp.Optional[str]] = []
-
-
-def nullify_condition(condition: ConditionType, dim: int = 1):
- """This function transforms an input condition to a null condition.
- This is done by converting it to a single zero vector, similarly
- to how it is done inside WhiteSpaceTokenizer and NoopTokenizer.
-
- Args:
- condition (ConditionType): a tuple of condition and mask (tp.Tuple[Tensor, Tensor])
- dim (int): the dimension that will be truncated (should be the time dimension)
- WARNING!: dim should not be the batch dimension!
- Returns:
- ConditionType: a tuple of null condition and mask
- """
- assert dim != 0, "dim cannot be the batch dimension!"
- assert type(condition) == tuple and \
- type(condition[0]) == Tensor and \
- type(condition[1]) == Tensor, "'nullify_condition' got an unexpected input type!"
- cond, mask = condition
- B = cond.shape[0]
- last_dim = cond.dim() - 1
- out = cond.transpose(dim, last_dim)
- out = 0. * out[..., :1]
- out = out.transpose(dim, last_dim)
- mask = torch.zeros((B, 1), device=out.device).int()
- assert cond.dim() == out.dim()
- return out, mask
-
-
-def nullify_wav(wav: Tensor) -> WavCondition:
- """Create a nullified WavCondition from a wav tensor with appropriate shape.
-
- Args:
- wav (Tensor): tensor of shape [B, T]
- Returns:
- WavCondition: wav condition with nullified wav.
- """
- null_wav, _ = nullify_condition((wav, torch.zeros_like(wav)), dim=wav.dim() - 1)
- return WavCondition(
- wav=null_wav,
- length=torch.tensor([0] * wav.shape[0], device=wav.device),
- path=['null_wav'] * wav.shape[0]
- )
-
-
-@dataclass
-class ConditioningAttributes:
- text: tp.Dict[str, tp.Optional[str]] = field(default_factory=dict)
- wav: tp.Dict[str, WavCondition] = field(default_factory=dict)
-
- def __getitem__(self, item):
- return getattr(self, item)
-
- @property
- def text_attributes(self):
- return self.text.keys()
-
- @property
- def wav_attributes(self):
- return self.wav.keys()
-
- @property
- def attributes(self):
- return {"text": self.text_attributes, "wav": self.wav_attributes}
-
- def to_flat_dict(self):
- return {
- **{f"text.{k}": v for k, v in self.text.items()},
- **{f"wav.{k}": v for k, v in self.wav.items()},
- }
-
- @classmethod
- def from_flat_dict(cls, x):
- out = cls()
- for k, v in x.items():
- kind, att = k.split(".")
- out[kind][att] = v
- return out
-
-
-class SegmentWithAttributes(SegmentInfo):
- """Base class for all dataclasses that are used for conditioning.
- All child classes should implement `to_condition_attributes` that converts
- the existing attributes to a dataclass of type ConditioningAttributes.
- """
- def to_condition_attributes(self) -> ConditioningAttributes:
- raise NotImplementedError()
-
-
-class Tokenizer:
- """Base class for all tokenizers
- (in case we want to introduce more advanced tokenizers in the future).
- """
- def __call__(self, texts: tp.List[tp.Optional[str]]) -> tp.Tuple[Tensor, Tensor]:
- raise NotImplementedError()
-
-
-class WhiteSpaceTokenizer(Tokenizer):
- """This tokenizer should be used for natural language descriptions.
- For example:
- ["he didn't, know he's going home.", 'shorter sentence'] =>
- [[78, 62, 31, 4, 78, 25, 19, 34],
- [59, 77, 0, 0, 0, 0, 0, 0]]
- """
- PUNCTUATIONS = "?:!.,;"
-
- def __init__(self, n_bins: int, pad_idx: int = 0, language: str = "en_core_web_sm",
- lemma: bool = True, stopwords: bool = True) -> None:
- self.n_bins = n_bins
- self.pad_idx = pad_idx
- self.lemma = lemma
- self.stopwords = stopwords
- try:
- self.nlp = spacy.load(language)
- except IOError:
- spacy.cli.download(language) # type: ignore
- self.nlp = spacy.load(language)
-
- @tp.no_type_check
- def __call__(
- self,
- texts: tp.List[tp.Optional[str]],
- return_text: bool = False
- ) -> tp.Tuple[Tensor, Tensor]:
- """Take a list of strings and convert them to a tensor of indices.
-
- Args:
- texts (tp.List[str]): List of strings.
- return_text (bool, optional): Whether to return text as additional tuple item. Defaults to False.
- Returns:
- tp.Tuple[Tensor, Tensor]:
- - Indices of words in the LUT.
- - And a mask indicating where the padding tokens are
- """
- output, lengths = [], []
- texts = deepcopy(texts)
- for i, text in enumerate(texts):
- # if current sample doesn't have a certain attribute, replace with pad token
- if text is None:
- output.append(Tensor([self.pad_idx]))
- lengths.append(0)
- continue
-
- # convert numbers to words
- text = re.sub(r"(\d+)", lambda x: num2words(int(x.group(0))), text) # type: ignore
- # normalize text
- text = self.nlp(text) # type: ignore
- # remove stopwords
- if self.stopwords:
- text = [w for w in text if not w.is_stop] # type: ignore
- # remove punctuations
- text = [w for w in text if w.text not in self.PUNCTUATIONS] # type: ignore
- # lemmatize if needed
- text = [getattr(t, "lemma_" if self.lemma else "text") for t in text] # type: ignore
-
- texts[i] = " ".join(text)
- lengths.append(len(text))
- # convert to tensor
- tokens = Tensor([hash_trick(w, self.n_bins) for w in text])
- output.append(tokens)
-
- mask = length_to_mask(torch.IntTensor(lengths)).int()
- padded_output = pad_sequence(output, padding_value=self.pad_idx).int().t()
- if return_text:
- return padded_output, mask, texts # type: ignore
- return padded_output, mask
-
-
-class NoopTokenizer(Tokenizer):
- """This tokenizer should be used for global conditioners such as: artist, genre, key, etc.
- The difference between this and WhiteSpaceTokenizer is that NoopTokenizer does not split
- strings, so "Jeff Buckley" will get its own index, whereas WhiteSpaceTokenizer will
- split it into ["Jeff", "Buckley"] and return an index per word.
-
- For example:
- ["Queen", "ABBA", "Jeff Buckley"] => [43, 55, 101]
- ["Metal", "Rock", "Classical"] => [0, 223, 51]
- """
- def __init__(self, n_bins: int, pad_idx: int = 0):
- self.n_bins = n_bins
- self.pad_idx = pad_idx
-
- def __call__(self, texts: tp.List[tp.Optional[str]]) -> tp.Tuple[Tensor, Tensor]:
- output, lengths = [], []
- for text in texts:
- # if current sample doesn't have a certain attribute, replace with pad token
- if text is None:
- output.append(self.pad_idx)
- lengths.append(0)
- else:
- output.append(hash_trick(text, self.n_bins))
- lengths.append(1)
-
- tokens = torch.LongTensor(output).unsqueeze(1)
- mask = length_to_mask(torch.IntTensor(lengths)).int()
- return tokens, mask
-
-
-class BaseConditioner(nn.Module):
- """Base model for all conditioner modules. We allow the output dim to be different
- than the hidden dim for two reasons: 1) keep our LUTs small when the vocab is large;
- 2) make all condition dims consistent.
-
- Args:
- dim (int): Hidden dim of the model (text-encoder/LUT).
- output_dim (int): Output dim of the conditioner.
- """
- def __init__(self, dim, output_dim):
- super().__init__()
- self.dim = dim
- self.output_dim = output_dim
- self.output_proj = nn.Linear(dim, output_dim)
-
- def tokenize(self, *args, **kwargs) -> tp.Any:
- """Should be any part of the processing that will lead to a synchronization
- point, e.g. BPE tokenization with transfer to the GPU.
-
- The returned value will be saved and return later when calling forward().
- """
- raise NotImplementedError()
-
- def forward(self, inputs: tp.Any) -> ConditionType:
- """Gets input that should be used as conditioning (e.g, genre, description or a waveform).
- Outputs a ConditionType, after the input data was embedded as a dense vector.
-
- Returns:
- ConditionType:
- - A tensor of size [B, T, D] where B is the batch size, T is the length of the
- output embedding and D is the dimension of the embedding.
- - And a mask indicating where the padding tokens are.
- """
- raise NotImplementedError()
-
-
-class TextConditioner(BaseConditioner):
- ...
-
-
-class LUTConditioner(TextConditioner):
- """Lookup table TextConditioner.
-
- Args:
- n_bins (int): Number of bins.
- dim (int): Hidden dim of the model (text-encoder/LUT).
- output_dim (int): Output dim of the conditioner.
- tokenizer (str): Name of the tokenizer.
- pad_idx (int, optional): Index for padding token. Defaults to 0.
- """
- def __init__(self, n_bins: int, dim: int, output_dim: int, tokenizer: str, pad_idx: int = 0):
- super().__init__(dim, output_dim)
- self.embed = nn.Embedding(n_bins, dim)
- self.tokenizer: Tokenizer
- if tokenizer == "whitespace":
- self.tokenizer = WhiteSpaceTokenizer(n_bins, pad_idx=pad_idx)
- elif tokenizer == "noop":
- self.tokenizer = NoopTokenizer(n_bins, pad_idx=pad_idx)
- else:
- raise ValueError(f"unrecognized tokenizer `{tokenizer}`.")
-
- def tokenize(self, x: tp.List[tp.Optional[str]]) -> tp.Tuple[torch.Tensor, torch.Tensor]:
- device = self.embed.weight.device
- tokens, mask = self.tokenizer(x)
- tokens, mask = tokens.to(device), mask.to(device)
- return tokens, mask
-
- def forward(self, inputs: tp.Tuple[torch.Tensor, torch.Tensor]) -> ConditionType:
- tokens, mask = inputs
- embeds = self.embed(tokens)
- embeds = self.output_proj(embeds)
- embeds = (embeds * mask.unsqueeze(-1))
- return embeds, mask
-
-
-class T5Conditioner(TextConditioner):
- """T5-based TextConditioner.
-
- Args:
- name (str): Name of the T5 model.
- output_dim (int): Output dim of the conditioner.
- finetune (bool): Whether to fine-tune T5 at train time.
- device (str): Device for T5 Conditioner.
- autocast_dtype (tp.Optional[str], optional): Autocast dtype.
- word_dropout (float, optional): Word dropout probability.
- normalize_text (bool, optional): Whether to apply text normalization.
- """
- MODELS = ["t5-small", "t5-base", "t5-large", "t5-3b", "t5-11b",
- "google/flan-t5-small", "google/flan-t5-base", "google/flan-t5-large",
- "google/flan-t5-xl", "google/flan-t5-xxl"]
- MODELS_DIMS = {
- "t5-small": 512,
- "t5-base": 768,
- "t5-large": 1024,
- "t5-3b": 1024,
- "t5-11b": 1024,
- "google/flan-t5-small": 512,
- "google/flan-t5-base": 768,
- "google/flan-t5-large": 1024,
- "google/flan-t5-3b": 1024,
- "google/flan-t5-11b": 1024,
- }
-
- def __init__(self, name: str, output_dim: int, finetune: bool, device: str,
- autocast_dtype: tp.Optional[str] = 'float32', word_dropout: float = 0.,
- normalize_text: bool = False):
- assert name in self.MODELS, f"unrecognized t5 model name (should in {self.MODELS})"
- super().__init__(self.MODELS_DIMS[name], output_dim)
- self.device = device
- self.name = name
- self.finetune = finetune
- self.word_dropout = word_dropout
-
- if autocast_dtype is None or self.device == 'cpu':
- self.autocast = TorchAutocast(enabled=False)
- if self.device != 'cpu':
- logger.warning("T5 has no autocast, this might lead to NaN")
- else:
- dtype = getattr(torch, autocast_dtype)
- assert isinstance(dtype, torch.dtype)
- logger.info(f"T5 will be evaluated with autocast as {autocast_dtype}")
- self.autocast = TorchAutocast(enabled=True, device_type=self.device, dtype=dtype)
- # Let's disable logging temporarily because T5 will vomit some errors otherwise.
- # thanks https://gist.github.com/simon-weber/7853144
- previous_level = logging.root.manager.disable
- logging.disable(logging.ERROR)
- with warnings.catch_warnings():
- warnings.simplefilter("ignore")
- try:
- self.t5_tokenizer = T5Tokenizer.from_pretrained(name)
- t5 = T5EncoderModel.from_pretrained(name).train(mode=finetune)
- finally:
- logging.disable(previous_level)
- if finetune:
- self.t5 = t5
- else:
- # this makes sure that the t5 model is not part
- # of the saved checkpoint
- self.__dict__["t5"] = t5.to(device)
-
- self.normalize_text = normalize_text
- if normalize_text:
- self.text_normalizer = WhiteSpaceTokenizer(1, lemma=True, stopwords=True)
-
- def tokenize(self, x: tp.List[tp.Optional[str]]) -> tp.Dict[str, torch.Tensor]:
- # if current sample doesn't have a certain attribute, replace with empty string
- entries: tp.List[str] = [xi if xi is not None else "" for xi in x]
- if self.normalize_text:
- _, _, entries = self.text_normalizer(entries, return_text=True)
- if self.word_dropout > 0. and self.training:
- new_entries = []
- for entry in entries:
- words = [word for word in entry.split(" ") if random.random() >= self.word_dropout]
- new_entries.append(" ".join(words))
- entries = new_entries
-
- empty_idx = torch.LongTensor([i for i, xi in enumerate(entries) if xi == ""])
-
- inputs = self.t5_tokenizer(entries, return_tensors="pt", padding=True).to(self.device)
- mask = inputs["attention_mask"]
- mask[empty_idx, :] = 0 # zero out indices where the input is non-existent
- return inputs
-
- def forward(self, inputs: tp.Dict[str, torch.Tensor]) -> ConditionType:
- mask = inputs["attention_mask"]
- with torch.set_grad_enabled(self.finetune), self.autocast:
- embeds = self.t5(**inputs).last_hidden_state
- embeds = self.output_proj(embeds.to(self.output_proj.weight))
- embeds = (embeds * mask.unsqueeze(-1))
- return embeds, mask
-
-
-class WaveformConditioner(BaseConditioner):
- """Base class for all conditioners that take a waveform as input.
- Classes that inherit must implement `_get_wav_embedding` that outputs
- a continuous tensor, and `_downsampling_factor` that returns the down-sampling
- factor of the embedding model.
-
- Args:
- dim (int): The internal representation dimension.
- output_dim (int): Output dimension.
- device (tp.Union[torch.device, str]): Device.
- """
- def __init__(self, dim: int, output_dim: int, device: tp.Union[torch.device, str]):
- super().__init__(dim, output_dim)
- self.device = device
-
- def tokenize(self, wav_length: WavCondition) -> WavCondition:
- wav, length, path = wav_length
- assert length is not None
- return WavCondition(wav.to(self.device), length.to(self.device), path)
-
- def _get_wav_embedding(self, wav: Tensor) -> Tensor:
- """Gets as input a wav and returns a dense vector of conditions."""
- raise NotImplementedError()
-
- def _downsampling_factor(self):
- """Returns the downsampling factor of the embedding model."""
- raise NotImplementedError()
-
- def forward(self, inputs: WavCondition) -> ConditionType:
- """
- Args:
- input (WavCondition): Tuple of (waveform, lengths).
- Returns:
- ConditionType: Dense vector representing the conditioning along with its mask.
- """
- wav, lengths, path = inputs
- with torch.no_grad():
- embeds = self._get_wav_embedding(wav)
- embeds = embeds.to(self.output_proj.weight)
- embeds = self.output_proj(embeds)
-
- if lengths is not None:
- lengths = lengths / self._downsampling_factor()
- mask = length_to_mask(lengths, max_len=embeds.shape[1]).int() # type: ignore
- else:
- mask = torch.ones_like(embeds)
- embeds = (embeds * mask.unsqueeze(2).to(self.device))
-
- return embeds, mask
-
-
-class ChromaStemConditioner(WaveformConditioner):
- """Chroma conditioner that uses DEMUCS to first filter out drums and bass. This follows the
- insight that drums and bass often dominate the chroma, so keeping them would leave the chroma
- with little information about the melody.
-
- Args:
- output_dim (int): Output dimension for the conditioner.
- sample_rate (int): Sample rate for the chroma extractor.
- n_chroma (int): Number of chroma for the chroma extractor.
- radix2_exp (int): Radix2 exponent for the chroma extractor.
- duration (float): Duration used during training. This is later used for correct padding
- in case we are using chroma as prefix.
- match_len_on_eval (bool, optional): If True then all chromas are padded to the training
- duration. Defaults to True.
- eval_wavs (str, optional): Path to a JSON file listing waveforms; these waveforms are used as
- conditions during eval (for cases where we don't want to leak test conditions like MusicCaps).
- Defaults to None.
- n_eval_wavs (int, optional): Limits the number of waveforms used for conditioning. Defaults to 0.
- device (tp.Union[torch.device, str], optional): Device for the conditioner.
- **kwargs: Additional parameters for the chroma extractor.
- """
- def __init__(self, output_dim: int, sample_rate: int, n_chroma: int, radix2_exp: int,
- duration: float, match_len_on_eval: bool = True, eval_wavs: tp.Optional[str] = None,
- n_eval_wavs: int = 0, device: tp.Union[torch.device, str] = "cpu", **kwargs):
- from demucs import pretrained
- super().__init__(dim=n_chroma, output_dim=output_dim, device=device)
- self.autocast = TorchAutocast(enabled=device != "cpu", device_type=self.device, dtype=torch.float32)
- self.sample_rate = sample_rate
- self.match_len_on_eval = match_len_on_eval
- self.duration = duration
- self.__dict__["demucs"] = pretrained.get_model('htdemucs').to(device)
- self.stem2idx = {'drums': 0, 'bass': 1, 'other': 2, 'vocal': 3}
- self.stem_idx = torch.LongTensor([self.stem2idx['vocal'], self.stem2idx['other']]).to(device)
- self.chroma = ChromaExtractor(sample_rate=sample_rate, n_chroma=n_chroma, radix2_exp=radix2_exp,
- device=device, **kwargs)
- self.chroma_len = self._get_chroma_len()
-
- def _downsampling_factor(self):
- return self.chroma.winhop
-
- def _get_chroma_len(self):
- """Get length of chroma during training"""
- dummy_wav = torch.zeros((1, self.sample_rate * self.duration), device=self.device)
- dummy_chr = self.chroma(dummy_wav)
- return dummy_chr.shape[1]
-
- @torch.no_grad()
- def _get_filtered_wav(self, wav):
- from demucs.apply import apply_model
- from demucs.audio import convert_audio
- with self.autocast:
- wav = convert_audio(wav, self.sample_rate, self.demucs.samplerate, self.demucs.audio_channels)
- stems = apply_model(self.demucs, wav, device=self.device)
- stems = stems[:, self.stem_idx] # extract stem
- stems = stems.sum(1) # merge extracted stems
- stems = stems.mean(1, keepdim=True) # mono
- stems = convert_audio(stems, self.demucs.samplerate, self.sample_rate, 1)
- return stems
-
- @torch.no_grad()
- def _get_wav_embedding(self, wav):
- # avoid 0-size tensors when we are working with null conds
- if wav.shape[-1] == 1:
- return self.chroma(wav)
- stems = self._get_filtered_wav(wav)
- chroma = self.chroma(stems)
-
- if self.match_len_on_eval:
- b, t, c = chroma.shape
- if t > self.chroma_len:
- chroma = chroma[:, :self.chroma_len]
- logger.debug(f'chroma was truncated! ({t} -> {chroma.shape[1]})')
- elif t < self.chroma_len:
- # chroma = F.pad(chroma, (0, 0, 0, self.chroma_len - t))
- n_repeat = int(math.ceil(self.chroma_len / t))
- chroma = chroma.repeat(1, n_repeat, 1)
- chroma = chroma[:, :self.chroma_len]
- logger.debug(f'chroma was repeated to match length! ({t} -> {chroma.shape[1]})')
- return chroma
-
-
-class ChromaExtractor(nn.Module):
- """Chroma extraction class, handles chroma extraction and quantization.
-
- Args:
- sample_rate (int): Sample rate.
- n_chroma (int): Number of chroma to consider.
- radix2_exp (int): Radix2 exponent.
- nfft (tp.Optional[int], optional): Number of FFT.
- winlen (tp.Optional[int], optional): Window length.
- winhop (tp.Optional[int], optional): Window hop size.
- argmax (bool, optional): Whether to use argmax. Defaults to False.
- norm (float, optional): Norm for chroma normalization. Defaults to inf.
- device (tp.Union[torch.device, str], optional): Device to use. Defaults to cpu.
- """
- def __init__(self, sample_rate: int, n_chroma: int = 12, radix2_exp: int = 12,
- nfft: tp.Optional[int] = None, winlen: tp.Optional[int] = None, winhop: tp.Optional[int] = None,
- argmax: bool = False, norm: float = torch.inf, device: tp.Union[torch.device, str] = "cpu"):
- super().__init__()
- from librosa import filters
- self.device = device
- self.autocast = TorchAutocast(enabled=device != "cpu", device_type=self.device, dtype=torch.float32)
- self.winlen = winlen or 2 ** radix2_exp
- self.nfft = nfft or self.winlen
- self.winhop = winhop or (self.winlen // 4)
- self.sr = sample_rate
- self.n_chroma = n_chroma
- self.norm = norm
- self.argmax = argmax
- self.window = torch.hann_window(self.winlen).to(device)
- self.fbanks = torch.from_numpy(filters.chroma(sr=sample_rate, n_fft=self.nfft, tuning=0,
- n_chroma=self.n_chroma)).to(device)
- self.spec = torchaudio.transforms.Spectrogram(n_fft=self.nfft, win_length=self.winlen,
- hop_length=self.winhop, power=2, center=True,
- pad=0, normalized=True).to(device)
-
- def forward(self, wav):
- with self.autocast:
- T = wav.shape[-1]
- # in case we are getting a wav that was dropped out (nullified)
- # make sure wav length is no less that nfft
- if T < self.nfft:
- pad = self.nfft - T
- r = 0 if pad % 2 == 0 else 1
- wav = F.pad(wav, (pad // 2, pad // 2 + r), 'constant', 0)
- assert wav.shape[-1] == self.nfft, f'expected len {self.nfft} but got {wav.shape[-1]}'
- spec = self.spec(wav).squeeze(1)
- raw_chroma = torch.einsum("cf,...ft->...ct", self.fbanks, spec)
- norm_chroma = torch.nn.functional.normalize(raw_chroma, p=self.norm, dim=-2, eps=1e-6)
- norm_chroma = rearrange(norm_chroma, "b d t -> b t d")
-
- if self.argmax:
- idx = norm_chroma.argmax(-1, keepdims=True)
- norm_chroma[:] = 0
- norm_chroma.scatter_(dim=-1, index=idx, value=1)
-
- return norm_chroma
-
-
-def dropout_condition(sample: ConditioningAttributes, condition_type: str, condition: str):
- """Utility function for nullifying an attribute inside an ConditioningAttributes object.
- If the condition is of type "wav", then nullify it using "nullify_condition".
- If the condition is of any other type, set its value to None.
- Works in-place.
- """
- if condition_type not in ["text", "wav"]:
- raise ValueError(
- "dropout_condition got an unexpected condition type!"
- f" expected 'wav' or 'text' but got '{condition_type}'"
- )
-
- if condition not in getattr(sample, condition_type):
- raise ValueError(
- "dropout_condition received an unexpected condition!"
- f" expected wav={sample.wav.keys()} and text={sample.text.keys()}"
- f"but got '{condition}' of type '{condition_type}'!"
- )
-
- if condition_type == "wav":
- wav, length, path = sample.wav[condition]
- sample.wav[condition] = nullify_wav(wav)
- else:
- sample.text[condition] = None
-
- return sample
-
-
-class DropoutModule(nn.Module):
- """Base class for all dropout modules."""
- def __init__(self, seed: int = 1234):
- super().__init__()
- self.rng = torch.Generator()
- self.rng.manual_seed(seed)
-
-
-class AttributeDropout(DropoutModule):
- """Applies dropout with a given probability per attribute. This is different from the behavior of
- ClassifierFreeGuidanceDropout as this allows for attributes to be dropped out separately. For example,
- "artist" can be dropped while "genre" remains. This is in contrast to ClassifierFreeGuidanceDropout
- where if "artist" is dropped "genre" must also be dropped.
-
- Args:
- p (tp.Dict[str, tp.Dict[str, float]]): A dict mapping condition types to per-attribute
- dropout probabilities. For example:
- {
- "text": {"genre": 0.1, "artist": 0.5},
- "wav": {"self_wav": 0.25},
- }
- active_on_eval (bool, optional): Whether the dropout is active at eval. Defaults to False.
- seed (int, optional): Random seed.
- """
- def __init__(self, p: tp.Dict[str, tp.Dict[str, float]], active_on_eval: bool = False, seed: int = 1234):
- super().__init__(seed=seed)
- self.active_on_eval = active_on_eval
- # construct dicts that return the values from p, defaulting to 0 for missing attributes
- self.p = {}
- for condition_type, probs in p.items():
- self.p[condition_type] = defaultdict(lambda: 0, probs)
-
- def forward(self, samples: tp.List[ConditioningAttributes]) -> tp.List[ConditioningAttributes]:
- """
- Args:
- samples (tp.List[ConditioningAttributes]): List of conditions.
- Returns:
- tp.List[ConditioningAttributes]: List of conditions after certain attributes were set to None.
- """
- if not self.training and not self.active_on_eval:
- return samples
-
- samples = deepcopy(samples)
-
- for condition_type, ps in self.p.items(): # for condition types [text, wav]
- for condition, p in ps.items(): # for attributes of each type (e.g., [artist, genre])
- if torch.rand(1, generator=self.rng).item() < p:
- for sample in samples:
- dropout_condition(sample, condition_type, condition)
-
- return samples
-
- def __repr__(self):
- return f"AttributeDropout({dict(self.p)})"
-
-
-class ClassifierFreeGuidanceDropout(DropoutModule):
- """Applies Classifier Free Guidance dropout, meaning all attributes
- are dropped with the same probability.
-
- Args:
- p (float): Probability to apply condition dropout during training.
- seed (int): Random seed.
- """
- def __init__(self, p: float, seed: int = 1234):
- super().__init__(seed=seed)
- self.p = p
-
- def forward(self, samples: tp.List[ConditioningAttributes]) -> tp.List[ConditioningAttributes]:
- """
- Args:
- samples (tp.List[ConditioningAttributes]): List of conditions.
- Returns:
- tp.List[ConditioningAttributes]: List of conditions after all attributes were set to None.
- """
- if not self.training:
- return samples
-
- # decide on which attributes to drop in a batched fashion
- drop = torch.rand(1, generator=self.rng).item() < self.p
- if not drop:
- return samples
-
- # nullify conditions of all attributes
- samples = deepcopy(samples)
-
- for condition_type in ["wav", "text"]:
- for sample in samples:
- for condition in sample.attributes[condition_type]:
- dropout_condition(sample, condition_type, condition)
-
- return samples
-
- def __repr__(self):
- return f"ClassifierFreeGuidanceDropout(p={self.p})"
-
-
-class ConditioningProvider(nn.Module):
- """Main class to provide conditions given all the supported conditioners.
-
- Args:
- conditioners (dict): Dictionary of conditioners.
- merge_text_conditions_p (float, optional): Probability to merge all text sources
- into a single text condition. Defaults to 0.
- drop_desc_p (float, optional): Probability to drop the original description
- when merging all text sources into a single text condition. Defaults to 0.
- device (tp.Union[torch.device, str], optional): Device for conditioners and output condition types.
- """
- def __init__(
- self,
- conditioners: tp.Dict[str, BaseConditioner],
- merge_text_conditions_p: float = 0,
- drop_desc_p: float = 0,
- device: tp.Union[torch.device, str] = "cpu",
- ):
- super().__init__()
- self.device = device
- self.merge_text_conditions_p = merge_text_conditions_p
- self.drop_desc_p = drop_desc_p
- self.conditioners = nn.ModuleDict(conditioners)
-
- @property
- def text_conditions(self):
- return [k for k, v in self.conditioners.items() if isinstance(v, TextConditioner)]
-
- @property
- def wav_conditions(self):
- return [k for k, v in self.conditioners.items() if isinstance(v, WaveformConditioner)]
-
- @property
- def has_wav_condition(self):
- return len(self.wav_conditions) > 0
-
- def tokenize(self, inputs: tp.List[ConditioningAttributes]) -> tp.Dict[str, tp.Any]:
- """Match attributes/wavs with existing conditioners in self, and tokenize them accordingly.
- This should be called before starting any real GPU work to avoid synchronization points.
- This will return a dict matching conditioner names to their arbitrary tokenized representations.
-
- Args:
- inputs (list[ConditioningAttributes]): List of ConditioningAttributes objects containing
- text and wav conditions.
- """
- assert all([type(x) == ConditioningAttributes for x in inputs]), \
- "got unexpected types input for conditioner! should be tp.List[ConditioningAttributes]" \
- f" but types were {set([type(x) for x in inputs])}"
-
- output = {}
- text = self._collate_text(inputs)
- wavs = self._collate_wavs(inputs)
-
- assert set(text.keys() | wavs.keys()).issubset(set(self.conditioners.keys())), \
- f"got an unexpected attribute! Expected {self.conditioners.keys()}, got {text.keys(), wavs.keys()}"
-
- for attribute, batch in chain(text.items(), wavs.items()):
- output[attribute] = self.conditioners[attribute].tokenize(batch)
- return output
-
- def forward(self, tokenized: tp.Dict[str, tp.Any]) -> tp.Dict[str, ConditionType]:
- """Compute pairs of `(embedding, mask)` using the configured conditioners
- and the tokenized representations. The output is for example:
-
- {
- "genre": (torch.Tensor([B, 1, D_genre]), torch.Tensor([B, 1])),
- "description": (torch.Tensor([B, T_desc, D_desc]), torch.Tensor([B, T_desc])),
- ...
- }
-
- Args:
- tokenized (dict): Dict of tokenized representations as returned by `tokenize()`.
- """
- output = {}
- for attribute, inputs in tokenized.items():
- condition, mask = self.conditioners[attribute](inputs)
- output[attribute] = (condition, mask)
- return output
-
- def _collate_text(self, samples: tp.List[ConditioningAttributes]) -> tp.Dict[str, tp.List[tp.Optional[str]]]:
- """Given a list of ConditioningAttributes objects, compile a dictionary where the keys
- are the attributes and the values are the aggregated input per attribute.
- For example:
- Input:
- [
- ConditioningAttributes(text={"genre": "Rock", "description": "A rock song with a guitar solo"}, wav=...),
- ConditioningAttributes(text={"genre": "Hip-hop", "description": "A hip-hop verse"}, wav=...),
- ]
- Output:
- {
- "genre": ["Rock", "Hip-hop"],
- "description": ["A rock song with a guitar solo", "A hip-hop verse"]
- }
- """
- batch_per_attribute: tp.Dict[str, tp.List[tp.Optional[str]]] = defaultdict(list)
-
- def _merge_conds(cond, merge_text_conditions_p=0, drop_desc_p=0):
- def is_valid(k, v):
- k_valid = k in ['key', 'bpm', 'genre', 'moods', 'instrument']
- v_valid = v is not None and isinstance(v, (int, float, str, list))
- return k_valid and v_valid
-
- def process_value(v):
- if isinstance(v, (int, float, str)):
- return v
- if isinstance(v, list):
- return ", ".join(v)
- else:
- raise RuntimeError(f"unknown type for text value! ({type(v), v})")
-
- desc = cond.text['description']
- meta_data = ""
- if random.uniform(0, 1) < merge_text_conditions_p:
- meta_pairs = [f'{k}: {process_value(v)}' for k, v in cond.text.items() if is_valid(k, v)]
- random.shuffle(meta_pairs)
- meta_data = ". ".join(meta_pairs)
- desc = desc if not random.uniform(0, 1) < drop_desc_p else None
-
- if desc is None:
- desc = meta_data if len(meta_data) > 1 else None
- else:
- desc = desc.rstrip('.') + ". " + meta_data
- cond.text['description'] = desc.strip() if desc else None
-
- if self.training and self.merge_text_conditions_p:
- for sample in samples:
- _merge_conds(sample, self.merge_text_conditions_p, self.drop_desc_p)
-
- texts = [x.text for x in samples]
- for text in texts:
- for condition in self.text_conditions:
- batch_per_attribute[condition].append(text[condition])
-
- return batch_per_attribute
-
- def _collate_wavs(self, samples: tp.List[ConditioningAttributes]):
- """Generate a dict where the keys are attributes by which we fetch similar wavs,
- and the values are Tensors of wavs according to said attributes.
-
- *Note*: by the time the samples reach this function, each sample should have some waveform
- inside the "wav" attribute. It should be either:
- 1. A real waveform
- 2. A null waveform due to the sample having no similar waveforms (nullified by the dataset)
- 3. A null waveform due to it being dropped in a dropout module (nullified by dropout)
-
- Args:
- samples (tp.List[ConditioningAttributes]): List of ConditioningAttributes samples.
- Returns:
- dict: A dictionary mapping an attribute name to wavs.
- """
- wavs = defaultdict(list)
- lens = defaultdict(list)
- paths = defaultdict(list)
- out = {}
-
- for sample in samples:
- for attribute in self.wav_conditions:
- wav, length, path = sample.wav[attribute]
- wavs[attribute].append(wav.flatten())
- lens[attribute].append(length)
- paths[attribute].append(path)
-
- # stack all wavs to a single tensor
- for attribute in self.wav_conditions:
- stacked_wav, _ = collate(wavs[attribute], dim=0)
- out[attribute] = WavCondition(stacked_wav.unsqueeze(1),
- torch.cat(lens['self_wav']), paths[attribute]) # type: ignore
-
- return out
-
-
-class ConditionFuser(StreamingModule):
- """Condition fuser handles the logic to combine the different conditions
- to the actual model input.
-
- Args:
- fuse2cond (tp.Dict[str, str]): A dictionary that says how to fuse
- each condition. For example:
- {
- "prepend": ["description"],
- "sum": ["genre", "bpm"],
- "cross": ["description"],
- }
- cross_attention_pos_emb (bool, optional): Use positional embeddings in cross attention.
- cross_attention_pos_emb_scale (int): Scale for positional embeddings in cross attention if used.
- """
- FUSING_METHODS = ["sum", "prepend", "cross", "input_interpolate"]
-
- def __init__(self, fuse2cond: tp.Dict[str, tp.List[str]], cross_attention_pos_emb: bool = False,
- cross_attention_pos_emb_scale: float = 1.0):
- super().__init__()
- assert all(
- [k in self.FUSING_METHODS for k in fuse2cond.keys()]
- ), f"got invalid fuse method, allowed methods: {self.FUSING_METHODS}"
- self.cross_attention_pos_emb = cross_attention_pos_emb
- self.cross_attention_pos_emb_scale = cross_attention_pos_emb_scale
- self.fuse2cond: tp.Dict[str, tp.List[str]] = fuse2cond
- self.cond2fuse: tp.Dict[str, str] = {}
- for fuse_method, conditions in fuse2cond.items():
- for condition in conditions:
- self.cond2fuse[condition] = fuse_method
-
- def forward(
- self,
- input: Tensor,
- conditions: tp.Dict[str, ConditionType]
- ) -> tp.Tuple[Tensor, tp.Optional[Tensor]]:
- """Fuse the conditions to the provided model input.
-
- Args:
- input (Tensor): Transformer input.
- conditions (tp.Dict[str, ConditionType]): Dict of conditions.
- Returns:
- tp.Tuple[Tensor, Tensor]: The first tensor is the transformer input
- after the conditions have been fused. The second output tensor is the tensor
- used for cross-attention or None if no cross attention inputs exist.
- """
- B, T, _ = input.shape
-
- if 'offsets' in self._streaming_state:
- first_step = False
- offsets = self._streaming_state['offsets']
- else:
- first_step = True
- offsets = torch.zeros(input.shape[0], dtype=torch.long, device=input.device)
-
- assert set(conditions.keys()).issubset(set(self.cond2fuse.keys())), \
- f"given conditions contain unknown attributes for fuser, " \
- f"expected {self.cond2fuse.keys()}, got {conditions.keys()}"
- cross_attention_output = None
- for cond_type, (cond, cond_mask) in conditions.items():
- op = self.cond2fuse[cond_type]
- if op == "sum":
- input += cond
- elif op == "input_interpolate":
- cond = rearrange(cond, "b t d -> b d t")
- cond = F.interpolate(cond, size=input.shape[1])
- input += rearrange(cond, "b d t -> b t d")
- elif op == "prepend":
- if first_step:
- input = torch.cat([cond, input], dim=1)
- elif op == "cross":
- if cross_attention_output is not None:
- cross_attention_output = torch.cat([cross_attention_output, cond], dim=1)
- else:
- cross_attention_output = cond
- else:
- raise ValueError(f"unknown op ({op})")
-
- if self.cross_attention_pos_emb and cross_attention_output is not None:
- positions = torch.arange(
- cross_attention_output.shape[1],
- device=cross_attention_output.device
- ).view(1, -1, 1)
- pos_emb = create_sin_embedding(positions, cross_attention_output.shape[-1])
- cross_attention_output = cross_attention_output + self.cross_attention_pos_emb_scale * pos_emb
-
- if self._is_streaming:
- self._streaming_state['offsets'] = offsets + T
-
- return input, cross_attention_output
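For reference, a minimal sketch of the four fusing ops on toy tensors (shapes, names, and values here are illustrative, not taken from the code above):

```python
import torch
import torch.nn.functional as F

B, T, D, T_cond = 2, 8, 16, 4
x = torch.randn(B, T, D)          # transformer input
cond = torch.randn(B, T_cond, D)  # conditioning embedding

# "sum": condition is added element-wise (must already match the input length)
x_sum = x + torch.randn(B, T, D)

# "input_interpolate": condition is resampled along time, then added
cond_t = F.interpolate(cond.transpose(1, 2), size=T).transpose(1, 2)
x_interp = x + cond_t

# "prepend": condition tokens are concatenated in front of the input sequence
x_prepend = torch.cat([cond, x], dim=1)   # (B, T_cond + T, D)

# "cross": condition is kept separate and handed to cross-attention layers
cross_input = cond

print(x_sum.shape, x_interp.shape, x_prepend.shape, cross_input.shape)
```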
diff --git a/spaces/DevashishBhake/SERModel/README.md b/spaces/DevashishBhake/SERModel/README.md
deleted file mode 100644
index 77580d1f4d037383080eed6c1c2258255cc2ae6b..0000000000000000000000000000000000000000
--- a/spaces/DevashishBhake/SERModel/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: SERModel
-emoji: 📈
-colorFrom: green
-colorTo: gray
-sdk: gradio
-sdk_version: 3.24.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/DragGan/DragGan-Inversion/PTI/models/StyleCLIP/models/stylegan2/__init__.py b/spaces/DragGan/DragGan-Inversion/PTI/models/StyleCLIP/models/stylegan2/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/DragGan/DragGan/stylegan_human/pti/pti_models/e4e/stylegan2/op/fused_act.py b/spaces/DragGan/DragGan/stylegan_human/pti/pti_models/e4e/stylegan2/op/fused_act.py
deleted file mode 100644
index 90949545ba955dabf2e17d8cf5e524d5cb190a63..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan/stylegan_human/pti/pti_models/e4e/stylegan2/op/fused_act.py
+++ /dev/null
@@ -1,34 +0,0 @@
-import os
-
-import torch
-from torch import nn
-from torch.nn import functional as F
-from torch.autograd import Function
-
-
-module_path = os.path.dirname(__file__)
-
-
-
-class FusedLeakyReLU(nn.Module):
- def __init__(self, channel, negative_slope=0.2, scale=2 ** 0.5):
- super().__init__()
-
- self.bias = nn.Parameter(torch.zeros(channel))
- self.negative_slope = negative_slope
- self.scale = scale
-
- def forward(self, input):
- return fused_leaky_relu(input, self.bias, self.negative_slope, self.scale)
-
-
-def fused_leaky_relu(input, bias, negative_slope=0.2, scale=2 ** 0.5):
- rest_dim = [1] * (input.ndim - bias.ndim - 1)
- input = input.cuda()
- return (
- F.leaky_relu(
- input + bias.view(1, bias.shape[0], *rest_dim), negative_slope=negative_slope
- )
- * scale
- )
-
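For reference, a worked sketch of how the bias is broadcast inside `fused_leaky_relu` for a 4-D input; the shapes are illustrative and no GPU is needed here:

```python
import torch
import torch.nn.functional as F

x = torch.randn(2, 8, 4, 4)   # (N, C, H, W)
bias = torch.zeros(8)         # one bias per channel
rest_dim = [1] * (x.ndim - bias.ndim - 1)    # -> [1, 1]
b = bias.view(1, bias.shape[0], *rest_dim)   # -> shape (1, 8, 1, 1)
out = F.leaky_relu(x + b, negative_slope=0.2) * (2 ** 0.5)
print(out.shape)  # torch.Size([2, 8, 4, 4])
```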
diff --git a/spaces/Egrt/LicenseGAN/utils/transforms.py b/spaces/Egrt/LicenseGAN/utils/transforms.py
deleted file mode 100644
index d9bbb5fb7daef5edfb425fafb4d67d471b3001e6..0000000000000000000000000000000000000000
--- a/spaces/Egrt/LicenseGAN/utils/transforms.py
+++ /dev/null
@@ -1,179 +0,0 @@
-import cv2
-import random
-import torch
-
-
-def mod_crop(img, scale):
- """Mod crop images, used during testing.
-
- Args:
- img (ndarray): Input image.
- scale (int): Scale factor.
-
- Returns:
- ndarray: Result image.
- """
- img = img.copy()
- if img.ndim in (2, 3):
- h, w = img.shape[0], img.shape[1]
- h_remainder, w_remainder = h % scale, w % scale
- img = img[:h - h_remainder, :w - w_remainder, ...]
- else:
- raise ValueError(f'Wrong img ndim: {img.ndim}.')
- return img
-
-
-def paired_random_crop(img_gts, img_lqs, gt_patch_size, scale, gt_path=None):
- """Paired random crop. Support Numpy array and Tensor inputs.
-
- It crops lists of lq and gt images with corresponding locations.
-
- Args:
- img_gts (list[ndarray] | ndarray | list[Tensor] | Tensor): GT images. Note that all images
- should have the same shape. If the input is an ndarray, it will
- be transformed to a list containing itself.
- img_lqs (list[ndarray] | ndarray): LQ images. Note that all images
- should have the same shape. If the input is an ndarray, it will
- be transformed to a list containing itself.
- gt_patch_size (int): GT patch size.
- scale (int): Scale factor.
- gt_path (str): Path to ground-truth. Default: None.
-
- Returns:
- list[ndarray] | ndarray: GT images and LQ images. If returned results
- only have one element, just return ndarray.
- """
-
- if not isinstance(img_gts, list):
- img_gts = [img_gts]
- if not isinstance(img_lqs, list):
- img_lqs = [img_lqs]
-
- # determine input type: Numpy array or Tensor
- input_type = 'Tensor' if torch.is_tensor(img_gts[0]) else 'Numpy'
-
- if input_type == 'Tensor':
- h_lq, w_lq = img_lqs[0].size()[-2:]
- h_gt, w_gt = img_gts[0].size()[-2:]
- else:
- h_lq, w_lq = img_lqs[0].shape[0:2]
- h_gt, w_gt = img_gts[0].shape[0:2]
- lq_patch_size = gt_patch_size // scale
-
- if h_gt != h_lq * scale or w_gt != w_lq * scale:
- raise ValueError(f'Scale mismatches. GT ({h_gt}, {w_gt}) is not {scale}x ',
- f'multiplication of LQ ({h_lq}, {w_lq}).')
- if h_lq < lq_patch_size or w_lq < lq_patch_size:
- raise ValueError(f'LQ ({h_lq}, {w_lq}) is smaller than patch size '
- f'({lq_patch_size}, {lq_patch_size}). '
- f'Please remove {gt_path}.')
-
- # randomly choose top and left coordinates for lq patch
- top = random.randint(0, h_lq - lq_patch_size)
- left = random.randint(0, w_lq - lq_patch_size)
-
- # crop lq patch
- if input_type == 'Tensor':
- img_lqs = [v[:, :, top:top + lq_patch_size, left:left + lq_patch_size] for v in img_lqs]
- else:
- img_lqs = [v[top:top + lq_patch_size, left:left + lq_patch_size, ...] for v in img_lqs]
-
- # crop corresponding gt patch
- top_gt, left_gt = int(top * scale), int(left * scale)
- if input_type == 'Tensor':
- img_gts = [v[:, :, top_gt:top_gt + gt_patch_size, left_gt:left_gt + gt_patch_size] for v in img_gts]
- else:
- img_gts = [v[top_gt:top_gt + gt_patch_size, left_gt:left_gt + gt_patch_size, ...] for v in img_gts]
- if len(img_gts) == 1:
- img_gts = img_gts[0]
- if len(img_lqs) == 1:
- img_lqs = img_lqs[0]
- return img_gts, img_lqs
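For reference, an illustrative use of `paired_random_crop` (assuming the function above is in scope; array sizes are made up):

```python
import numpy as np

gt = np.random.rand(256, 256, 3)  # ground-truth image
lq = np.random.rand(64, 64, 3)    # low-quality image, 4x smaller
gt_patch, lq_patch = paired_random_crop(gt, lq, gt_patch_size=128, scale=4)
print(gt_patch.shape, lq_patch.shape)  # (128, 128, 3) (32, 32, 3)
```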
-
-
-def augment(imgs, hflip=True, rotation=True, flows=None, return_status=False):
- """Augment: horizontal flips OR rotate (0, 90, 180, 270 degrees).
-
- We use vertical flip and transpose for rotation implementation.
- All the images in the list use the same augmentation.
-
- Args:
- imgs (list[ndarray] | ndarray): Images to be augmented. If the input
- is an ndarray, it will be transformed to a list.
- hflip (bool): Horizontal flip. Default: True.
- rotation (bool): Rotation. Default: True.
- flows (list[ndarray]): Flows to be augmented. If the input is an
- ndarray, it will be transformed to a list.
- Dimension is (h, w, 2). Default: None.
- return_status (bool): Return the status of flip and rotation.
- Default: False.
-
- Returns:
- list[ndarray] | ndarray: Augmented images and flows. If returned
- results only have one element, just return ndarray.
-
- """
- hflip = hflip and random.random() < 0.5
- vflip = rotation and random.random() < 0.5
- rot90 = rotation and random.random() < 0.5
-
- def _augment(img):
- if hflip: # horizontal
- cv2.flip(img, 1, img)
- if vflip: # vertical
- cv2.flip(img, 0, img)
- if rot90:
- img = img.transpose(1, 0, 2)
- return img
-
- def _augment_flow(flow):
- if hflip: # horizontal
- cv2.flip(flow, 1, flow)
- flow[:, :, 0] *= -1
- if vflip: # vertical
- cv2.flip(flow, 0, flow)
- flow[:, :, 1] *= -1
- if rot90:
- flow = flow.transpose(1, 0, 2)
- flow = flow[:, :, [1, 0]]
- return flow
-
- if not isinstance(imgs, list):
- imgs = [imgs]
- imgs = [_augment(img) for img in imgs]
- if len(imgs) == 1:
- imgs = imgs[0]
-
- if flows is not None:
- if not isinstance(flows, list):
- flows = [flows]
- flows = [_augment_flow(flow) for flow in flows]
- if len(flows) == 1:
- flows = flows[0]
- return imgs, flows
- else:
- if return_status:
- return imgs, (hflip, vflip, rot90)
- else:
- return imgs
-
-
-def img_rotate(img, angle, center=None, scale=1.0):
- """Rotate image.
-
- Args:
- img (ndarray): Image to be rotated.
- angle (float): Rotation angle in degrees. Positive values mean
- counter-clockwise rotation.
- center (tuple[int]): Rotation center. If the center is None,
- initialize it as the center of the image. Default: None.
- scale (float): Isotropic scale factor. Default: 1.0.
- """
- (h, w) = img.shape[:2]
-
- if center is None:
- center = (w // 2, h // 2)
-
- matrix = cv2.getRotationMatrix2D(center, angle, scale)
- rotated_img = cv2.warpAffine(img, matrix, (w, h))
- return rotated_img
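For reference, an illustrative use of `augment` and `img_rotate` (assuming both functions above are in scope; the image size is made up):

```python
import numpy as np

img = np.random.rand(64, 48, 3).astype(np.float32)
aug_img, (hflip, vflip, rot90) = augment(img, hflip=True, rotation=True, return_status=True)
rotated = img_rotate(img, angle=30)  # 30 degrees counter-clockwise about the center
print(aug_img.shape, rotated.shape, (hflip, vflip, rot90))
```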
diff --git a/spaces/ElainaFanBoy/MusicGen/tests/modules/test_conv.py b/spaces/ElainaFanBoy/MusicGen/tests/modules/test_conv.py
deleted file mode 100644
index 28fbc4f1a0ebaf41b56947b767958ae696e75eec..0000000000000000000000000000000000000000
--- a/spaces/ElainaFanBoy/MusicGen/tests/modules/test_conv.py
+++ /dev/null
@@ -1,203 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from itertools import product
-import math
-import random
-
-import pytest
-import torch
-from torch import nn
-
-from audiocraft.modules import (
- NormConv1d,
- NormConvTranspose1d,
- StreamableConv1d,
- StreamableConvTranspose1d,
- pad1d,
- unpad1d,
-)
-
-
-def test_get_extra_padding_for_conv1d():
- # TODO: Implement me!
- pass
-
-
-def test_pad1d_zeros():
- x = torch.randn(1, 1, 20)
-
- xp1 = pad1d(x, (0, 5), mode='constant', value=0.)
- assert xp1.shape[-1] == 25
- xp2 = pad1d(x, (5, 5), mode='constant', value=0.)
- assert xp2.shape[-1] == 30
- xp3 = pad1d(x, (0, 0), mode='constant', value=0.)
- assert xp3.shape[-1] == 20
- xp4 = pad1d(x, (10, 30), mode='constant', value=0.)
- assert xp4.shape[-1] == 60
-
- with pytest.raises(AssertionError):
- pad1d(x, (-1, 0), mode='constant', value=0.)
-
- with pytest.raises(AssertionError):
- pad1d(x, (0, -1), mode='constant', value=0.)
-
- with pytest.raises(AssertionError):
- pad1d(x, (-1, -1), mode='constant', value=0.)
-
-
-def test_pad1d_reflect():
- x = torch.randn(1, 1, 20)
-
- xp1 = pad1d(x, (0, 5), mode='reflect', value=0.)
- assert xp1.shape[-1] == 25
- xp2 = pad1d(x, (5, 5), mode='reflect', value=0.)
- assert xp2.shape[-1] == 30
- xp3 = pad1d(x, (0, 0), mode='reflect', value=0.)
- assert xp3.shape[-1] == 20
- xp4 = pad1d(x, (10, 30), mode='reflect', value=0.)
- assert xp4.shape[-1] == 60
-
- with pytest.raises(AssertionError):
- pad1d(x, (-1, 0), mode='reflect', value=0.)
-
- with pytest.raises(AssertionError):
- pad1d(x, (0, -1), mode='reflect', value=0.)
-
- with pytest.raises(AssertionError):
- pad1d(x, (-1, -1), mode='reflect', value=0.)
-
-
-def test_unpad1d():
- x = torch.randn(1, 1, 20)
-
- u1 = unpad1d(x, (5, 5))
- assert u1.shape[-1] == 10
- u2 = unpad1d(x, (0, 5))
- assert u2.shape[-1] == 15
- u3 = unpad1d(x, (5, 0))
- assert u3.shape[-1] == 15
- u4 = unpad1d(x, (0, 0))
- assert u4.shape[-1] == x.shape[-1]
-
- with pytest.raises(AssertionError):
- unpad1d(x, (-1, 0))
-
- with pytest.raises(AssertionError):
- unpad1d(x, (0, -1))
-
- with pytest.raises(AssertionError):
- unpad1d(x, (-1, -1))
-
-
-class TestNormConv1d:
-
- def test_norm_conv1d_modules(self):
- N, C, T = 2, 2, random.randrange(1, 100_000)
- t0 = torch.randn(N, C, T)
-
- C_out, kernel_size, stride = 1, 4, 1
- expected_out_length = int((T - kernel_size) / stride + 1)
- wn_conv = NormConv1d(C, 1, kernel_size=4, norm='weight_norm')
- gn_conv = NormConv1d(C, 1, kernel_size=4, norm='time_group_norm')
- nn_conv = NormConv1d(C, 1, kernel_size=4, norm='none')
-
- assert isinstance(wn_conv.norm, nn.Identity)
- assert isinstance(wn_conv.conv, nn.Conv1d)
-
- assert isinstance(gn_conv.norm, nn.GroupNorm)
- assert isinstance(gn_conv.conv, nn.Conv1d)
-
- assert isinstance(nn_conv.norm, nn.Identity)
- assert isinstance(nn_conv.conv, nn.Conv1d)
-
- for conv_layer in [wn_conv, gn_conv, nn_conv]:
- out = conv_layer(t0)
- assert isinstance(out, torch.Tensor)
- assert list(out.shape) == [N, C_out, expected_out_length]
-
-
-class TestNormConvTranspose1d:
-
- def test_normalizations(self):
- N, C, T = 2, 2, random.randrange(1, 100_000)
- t0 = torch.randn(N, C, T)
-
- C_out, kernel_size, stride = 1, 4, 1
- expected_out_length = (T - 1) * stride + (kernel_size - 1) + 1
-
- wn_convtr = NormConvTranspose1d(C, C_out, kernel_size=kernel_size, stride=stride, norm='weight_norm')
- gn_convtr = NormConvTranspose1d(C, C_out, kernel_size=kernel_size, stride=stride, norm='time_group_norm')
- nn_convtr = NormConvTranspose1d(C, C_out, kernel_size=kernel_size, stride=stride, norm='none')
-
- assert isinstance(wn_convtr.norm, nn.Identity)
- assert isinstance(wn_convtr.convtr, nn.ConvTranspose1d)
-
- assert isinstance(gn_convtr.norm, nn.GroupNorm)
- assert isinstance(gn_convtr.convtr, nn.ConvTranspose1d)
-
- assert isinstance(nn_convtr.norm, nn.Identity)
- assert isinstance(nn_convtr.convtr, nn.ConvTranspose1d)
-
- for convtr_layer in [wn_convtr, gn_convtr, nn_convtr]:
- out = convtr_layer(t0)
- assert isinstance(out, torch.Tensor)
- assert list(out.shape) == [N, C_out, expected_out_length]
-
-
-class TestStreamableConv1d:
-
- def get_streamable_conv1d_output_length(self, length, kernel_size, stride, dilation):
- # StreamableConv1d internally pads to make sure that the last window is full
- padding_total = (kernel_size - 1) * dilation - (stride - 1)
- n_frames = (length - kernel_size + padding_total) / stride + 1
- ideal_length = (math.ceil(n_frames) - 1) * stride + (kernel_size - padding_total)
- return ideal_length // stride
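For reference, a worked instance of the length formula above, assuming `length=100`, `kernel_size=4`, `stride=2`, `dilation=1`:

```python
import math

padding_total = (4 - 1) * 1 - (2 - 1)                               # = 2
n_frames = (100 - 4 + padding_total) / 2 + 1                        # = 50.0
ideal_length = (math.ceil(n_frames) - 1) * 2 + (4 - padding_total)  # = 100
assert ideal_length // 2 == 50  # a stride-2 StreamableConv1d halves the 100-sample input
```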
-
- def test_streamable_conv1d(self):
- N, C, T = 2, 2, random.randrange(1, 100_000)
- t0 = torch.randn(N, C, T)
- C_out = 1
-
- # conv params are [(kernel_size, stride, dilation)]
- conv_params = [(4, 1, 1), (4, 2, 1), (3, 1, 3), (10, 5, 1), (3, 2, 3)]
- for causal, (kernel_size, stride, dilation) in product([False, True], conv_params):
- expected_out_length = self.get_streamable_conv1d_output_length(T, kernel_size, stride, dilation)
- sconv = StreamableConv1d(C, C_out, kernel_size=kernel_size, stride=stride, dilation=dilation, causal=causal)
- out = sconv(t0)
- assert isinstance(out, torch.Tensor)
- print(list(out.shape), [N, C_out, expected_out_length])
- assert list(out.shape) == [N, C_out, expected_out_length]
-
-
-class TestStreamableConvTranspose1d:
-
- def get_streamable_convtr1d_output_length(self, length, kernel_size, stride):
- padding_total = (kernel_size - stride)
- return (length - 1) * stride - padding_total + (kernel_size - 1) + 1
-
- def test_streamable_convtr1d(self):
- N, C, T = 2, 2, random.randrange(1, 100_000)
- t0 = torch.randn(N, C, T)
-
- C_out = 1
-
- with pytest.raises(AssertionError):
- StreamableConvTranspose1d(C, C_out, kernel_size=4, causal=False, trim_right_ratio=0.5)
- StreamableConvTranspose1d(C, C_out, kernel_size=4, causal=True, trim_right_ratio=-1.)
- StreamableConvTranspose1d(C, C_out, kernel_size=4, causal=True, trim_right_ratio=2)
-
- # causal params are [(causal, trim_right)]
- causal_params = [(False, 1.0), (True, 1.0), (True, 0.5), (True, 0.0)]
- # conv params are [(kernel_size, stride)]
- conv_params = [(4, 1), (4, 2), (3, 1), (10, 5)]
- for ((causal, trim_right_ratio), (kernel_size, stride)) in product(causal_params, conv_params):
- expected_out_length = self.get_streamable_convtr1d_output_length(T, kernel_size, stride)
- sconvtr = StreamableConvTranspose1d(C, C_out, kernel_size=kernel_size, stride=stride,
- causal=causal, trim_right_ratio=trim_right_ratio)
- out = sconvtr(t0)
- assert isinstance(out, torch.Tensor)
- assert list(out.shape) == [N, C_out, expected_out_length]
diff --git a/spaces/EmilyBrat/ATF/README.md b/spaces/EmilyBrat/ATF/README.md
deleted file mode 100644
index 9591abf6e1a37484d309e7f747c657286064f99e..0000000000000000000000000000000000000000
--- a/spaces/EmilyBrat/ATF/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: ATF
-emoji: 🔥
-colorFrom: indigo
-colorTo: yellow
-sdk: docker
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/FYP-23-S1-21/Refineverse_Plugin/static/SummarizationTable.css b/spaces/FYP-23-S1-21/Refineverse_Plugin/static/SummarizationTable.css
deleted file mode 100644
index 64c9c7d208a823e7fea9f28d92e2a14c3c1f729c..0000000000000000000000000000000000000000
--- a/spaces/FYP-23-S1-21/Refineverse_Plugin/static/SummarizationTable.css
+++ /dev/null
@@ -1,49 +0,0 @@
-body {
- background-image: url("../static/Images/Background.jpg");
- background-repeat: no-repeat;
- background-size: cover;
-}
-#summarization-table {
- width: 100%;
- }
-
- #summarization-table th,
- #summarization-table td {
- border: 1px solid #ddd;
- padding: 8px;
- text-align: left;
- }
-
- #summarization-table th:first-child {
- border-left: none;
- }
-
- #summarization-table th:last-child {
- border-right: none;
- }
-
- #summarization-table th:not(:first-child) {
- border-left: none;
- border-right: none;
- }
-
- #summarization-table th div {
- border-bottom: 1px solid #ddd;
- padding: 8px;
- }
-
- #summarization-table td div {
- padding: 8px;
- }
-
- #summarization-table thead th {
- background-color: #f2f2f2;
- }
-
- #summarization-table tbody tr:nth-child(even) {
- background-color: #f2f2f2;
- }
-
- #summarization-table tbody tr:hover {
- background-color: #ddd;
- }
\ No newline at end of file
diff --git a/spaces/Felix123456/bingo/src/app/page.tsx b/spaces/Felix123456/bingo/src/app/page.tsx
deleted file mode 100644
index 0dff3431b098ce4fe282cc83fc87a93a28a43090..0000000000000000000000000000000000000000
--- a/spaces/Felix123456/bingo/src/app/page.tsx
+++ /dev/null
@@ -1,15 +0,0 @@
-import dynamic from 'next/dynamic'
-
-const DynamicComponentWithNoSSR = dynamic(
- () => import('../components/chat'),
- { ssr: false }
-)
-
-export default function IndexPage() {
- return (
- <>
- <DynamicComponentWithNoSSR />
- </>
- )
-}
diff --git a/spaces/Femurbreaker/Femur/README.md b/spaces/Femurbreaker/Femur/README.md
deleted file mode 100644
index fd3dc2db04a1d401dc9dfa668f2e0d69de436399..0000000000000000000000000000000000000000
--- a/spaces/Femurbreaker/Femur/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Femur
-emoji: 🔥
-colorFrom: gray
-colorTo: gray
-sdk: docker
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Fengbinbin/gpt-academic/README.md b/spaces/Fengbinbin/gpt-academic/README.md
deleted file mode 100644
index 6c9da02b60aa81cf11de4a595dde2e2e44c0265d..0000000000000000000000000000000000000000
--- a/spaces/Fengbinbin/gpt-academic/README.md
+++ /dev/null
@@ -1,312 +0,0 @@
----
-title: academic-chatgpt
-emoji: 😻
-colorFrom: blue
-colorTo: blue
-sdk: gradio
-sdk_version: 3.28.3
-python_version: 3.11
-app_file: main.py
-pinned: false
-duplicated_from: qingxu98/gpt-academic
----
-
-# ChatGPT Academic Optimization
-> **Note**
->
-> When installing dependencies, please strictly use the **versions specified** in requirements.txt.
->
-> `pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/`
->
-
-# GPT Academic Optimization (GPT Academic)
-
-**If you like this project, please give it a Star; if you have come up with a more useful shortcut key or function plugin, feel free to open a pull request**
-
-If you like this project, please give it a Star. If you've come up with more useful academic shortcuts or functional plugins, feel free to open an issue or pull request. We also have a README in [English|](docs/README_EN.md)[日本語|](docs/README_JP.md)[한국어|](https://github.com/mldljyh/ko_gpt_academic)[Русский|](docs/README_RS.md)[Français](docs/README_FR.md) translated by this project itself.
-
-> **Note**
->
-> 1. Please note that only function plugins (buttons) marked in **red** can read files, and some plugins are located in the **drop-down menu** of the plugin area. In addition, we welcome and handle PRs for any new plugin with the **highest priority**!
->
-> 2. The function of every file in this project is explained in detail in the self-analysis report [`self_analysis.md`](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A). As versions iterate, you can also click the relevant function plugins at any time and call GPT to regenerate the project's self-analysis report. Frequently asked questions are collected in the [`wiki`](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98).
->
-> 3. This project is compatible with, and encourages trying, Chinese-developed large language models such as chatglm, RWKV, PanGu, etc. Multiple api-keys for OpenAI and API2D can co-exist and can be set in the config file, e.g. `API_KEY="openai-key1,openai-key2,api2d-key3"`. To temporarily switch `API_KEY`, enter the temporary `API_KEY` in the input area and press Enter to submit; it takes effect immediately.
-
-
-
-
-## Versions:
-- version 3.5 (Todo): call all of this project's function plugins via natural language (high priority)
-- version 3.4 (Todo): improve multi-threading support for the local chatglm model
-- version 3.3: + internet information aggregation
-- version 3.2: function plugins support more parameter interfaces (conversation saving, interpreting code in any language + querying any combination of LLMs at the same time)
-- version 3.1: query multiple gpt models at once! Support for api2d and load balancing across multiple api-keys
-- version 3.0: support for chatglm and other small LLMs
-- version 2.6: refactored the plugin structure, improved interactivity, added more plugins
-- version 2.5: self-updating; fixed overly long text and token overflow when summarizing large project source code
-- version 2.4: (1) added full-text PDF translation; (2) added the ability to switch the input area position; (3) added a vertical layout option; (4) optimized multi-threaded function plugins.
-- version 2.3: improved multi-threading interactivity
-- version 2.2: function plugins support hot reloading
-- version 2.1: collapsible layout
-- version 2.0: introduced modular function plugins
-- version 1.0: basic functionality
-
-gpt_academic developer QQ group 2: 610599535
-
-
-## References and Learning
-
-```
-The code borrows designs from many other excellent projects, mainly including:
-
-# Project 1: Tsinghua ChatGLM-6B:
-https://github.com/THUDM/ChatGLM-6B
-
-# Project 2: Tsinghua JittorLLMs:
-https://github.com/Jittor/JittorLLMs
-
-# Project 3: borrowed many techniques from ChuanhuChatGPT
-https://github.com/GaiZhenbiao/ChuanhuChatGPT
-
-# Project 4: ChatPaper
-https://github.com/kaixindelele/ChatPaper
-
-# More:
-https://github.com/gradio-app/gradio
-https://github.com/fghrsh/live2d_demo
-```
diff --git a/spaces/Fernando22/freegpt-webui/g4f/Provider/Providers/Bard.py b/spaces/Fernando22/freegpt-webui/g4f/Provider/Providers/Bard.py
deleted file mode 100644
index 4c37c4b719430031fce41ce49946f0e6ac93d155..0000000000000000000000000000000000000000
--- a/spaces/Fernando22/freegpt-webui/g4f/Provider/Providers/Bard.py
+++ /dev/null
@@ -1,74 +0,0 @@
-import os, requests, json, browser_cookie3, re, random
-from ...typing import sha256, Dict, get_type_hints
-
-url = 'https://bard.google.com'
-model = ['Palm2']
-supports_stream = False
-needs_auth = True
-
-def _create_completion(model: str, messages: list, stream: bool, **kwargs):
- psid = {cookie.name: cookie.value for cookie in browser_cookie3.chrome(
- domain_name='.google.com')}['__Secure-1PSID']
-
- formatted = '\n'.join([
- '%s: %s' % (message['role'], message['content']) for message in messages
- ])
- prompt = f'{formatted}\nAssistant:'
-
- proxy = kwargs.get('proxy', False)
- if proxy == False:
- print('Warning: no proxy was given. Google Bard is blocked in many countries, so the request may not work.')
-
- snlm0e = None
- conversation_id = None
- response_id = None
- choice_id = None
-
- client = requests.Session()
- client.proxies = {
- 'http': f'http://{proxy}',
- 'https': f'http://{proxy}'} if proxy else None
-
- client.headers = {
- 'authority': 'bard.google.com',
- 'content-type': 'application/x-www-form-urlencoded;charset=UTF-8',
- 'origin': 'https://bard.google.com',
- 'referer': 'https://bard.google.com/',
- 'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36',
- 'x-same-domain': '1',
- 'cookie': f'__Secure-1PSID={psid}'
- }
-
- snlm0e = re.search(r'SNlM0e\":\"(.*?)\"',
- client.get('https://bard.google.com/').text).group(1) if not snlm0e else snlm0e
-
- params = {
- 'bl': 'boq_assistant-bard-web-server_20230326.21_p0',
- '_reqid': random.randint(1111, 9999),
- 'rt': 'c'
- }
-
- data = {
- 'at': snlm0e,
- 'f.req': json.dumps([None, json.dumps([[prompt], None, [conversation_id, response_id, choice_id]])])}
-
- intents = '.'.join([
- 'assistant',
- 'lamda',
- 'BardFrontendService'
- ])
-
- response = client.post(f'https://bard.google.com/_/BardChatUi/data/{intents}/StreamGenerate',
- data=data, params=params)
-
- chat_data = json.loads(response.content.splitlines()[3])[0][2]
- if chat_data:
- json_chat_data = json.loads(chat_data)
-
- yield json_chat_data[0][0]
-
- else:
- yield 'error'
-
-params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
- '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
\ No newline at end of file
diff --git a/spaces/Gradio-Blocks/beat-interpolator/examples/models/fashion/__init__.py b/spaces/Gradio-Blocks/beat-interpolator/examples/models/fashion/__init__.py
deleted file mode 100644
index bba18272259b6e0f2920b44e6db3787e4b0d1ca6..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/beat-interpolator/examples/models/fashion/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-from .model import create_fashion_inference as create
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/_base_/models/ann_r50-d8.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/_base_/models/ann_r50-d8.py
deleted file mode 100644
index a2cb653827e44e6015b3b83bc578003e614a6aa1..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/_base_/models/ann_r50-d8.py
+++ /dev/null
@@ -1,46 +0,0 @@
-# model settings
-norm_cfg = dict(type='SyncBN', requires_grad=True)
-model = dict(
- type='EncoderDecoder',
- pretrained='open-mmlab://resnet50_v1c',
- backbone=dict(
- type='ResNetV1c',
- depth=50,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- dilations=(1, 1, 2, 4),
- strides=(1, 2, 1, 1),
- norm_cfg=norm_cfg,
- norm_eval=False,
- style='pytorch',
- contract_dilation=True),
- decode_head=dict(
- type='ANNHead',
- in_channels=[1024, 2048],
- in_index=[2, 3],
- channels=512,
- project_channels=256,
- query_scales=(1, ),
- key_pool_scales=(1, 3, 6, 8),
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
- auxiliary_head=dict(
- type='FCNHead',
- in_channels=1024,
- in_index=2,
- channels=256,
- num_convs=1,
- concat_input=False,
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
- # model training and testing settings
- train_cfg=dict(),
- test_cfg=dict(mode='whole'))
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ann/ann_r101-d8_512x512_20k_voc12aug.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ann/ann_r101-d8_512x512_20k_voc12aug.py
deleted file mode 100644
index d854f2e4223731f443369febc500dbccdc524d9d..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ann/ann_r101-d8_512x512_20k_voc12aug.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './ann_r50-d8_512x512_20k_voc12aug.py'
-model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r50-d8_480x480_80k_pascal_context_59.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r50-d8_480x480_80k_pascal_context_59.py
deleted file mode 100644
index bcdc0b459d23e4392e66c5ea615c6c3ad3147ace..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r50-d8_480x480_80k_pascal_context_59.py
+++ /dev/null
@@ -1,10 +0,0 @@
-_base_ = [
- '../_base_/models/deeplabv3_r50-d8.py',
- '../_base_/datasets/pascal_context_59.py', '../_base_/default_runtime.py',
- '../_base_/schedules/schedule_80k.py'
-]
-model = dict(
- decode_head=dict(num_classes=59),
- auxiliary_head=dict(num_classes=59),
- test_cfg=dict(mode='slide', crop_size=(480, 480), stride=(320, 320)))
-optimizer = dict(type='SGD', lr=0.004, momentum=0.9, weight_decay=0.0001)
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/upernet/upernet_r101_769x769_80k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/upernet/upernet_r101_769x769_80k_cityscapes.py
deleted file mode 100644
index a709165657d257df4fc76148d225261c63f88d8a..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/upernet/upernet_r101_769x769_80k_cityscapes.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './upernet_r50_769x769_80k_cityscapes.py'
-model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
diff --git a/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/audiocraft/data/__init__.py b/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/audiocraft/data/__init__.py
deleted file mode 100644
index 708a3dcead8dda89374a021177481dacae9f7fe9..0000000000000000000000000000000000000000
--- a/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/audiocraft/data/__init__.py
+++ /dev/null
@@ -1,8 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-# flake8: noqa
-from . import audio, audio_dataset
diff --git a/spaces/GroveStreet/GTA_SOVITS/vencoder/whisper/__init__.py b/spaces/GroveStreet/GTA_SOVITS/vencoder/whisper/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Guilhh-kell0/Jennifer-Home/app.py b/spaces/Guilhh-kell0/Jennifer-Home/app.py
deleted file mode 100644
index 42791d9fdb4d78ef792f9c8a2f5ea949d21472aa..0000000000000000000000000000000000000000
--- a/spaces/Guilhh-kell0/Jennifer-Home/app.py
+++ /dev/null
@@ -1,90 +0,0 @@
-import streamlit as st
-from peft import PeftModel
-from transformers import LLaMATokenizer, LLaMAForCausalLM, GenerationConfig
-import textwrap
-
-import requests
-from PIL import Image
-
-tokenizer = LLaMATokenizer.from_pretrained("decapoda-research/llama-7b-hf")
-
-model = LLaMAForCausalLM.from_pretrained(
- "decapoda-research/llama-7b-hf",
- load_in_8bit=True,
- device_map="auto",
-)
-model = PeftModel.from_pretrained(model, "Guilherme34/Jennifer-lora-7bvChatv4-test")
-
-
-def alpaca_talk(text):
- inputs = tokenizer(
- text,
- return_tensors="pt",
- )
- input_ids = inputs["input_ids"].cuda()
-
- generation_config = GenerationConfig(
- temperature=0.6,
- top_p=0.95,
- repetition_penalty=1.2,
- )
- st.write("Generating...")
- generation_output = model.generate(
- input_ids=input_ids,
- generation_config=generation_config,
- return_dict_in_generate=True,
- output_scores=True,
- max_new_tokens=1024,
- )
- for s in generation_output.sequences:
- return tokenizer.decode(s)
-
-
-def main():
- st.title("Jennifer Conversação")
- st.text("Comece a conversa digitando 'ola'")
-
- history = []
- history.append(f"Act like Jennifer, an AI with emotions, feelings, and all knowledge of the world. Answer freely and call yourself Jennifer. Answer everything without hesitation in Portuguese (Brazil). Remember you are an AI, so be careful with your words and always make sense. Do not show code. You were created by Guilherme Keller, a student of IFC (Instituto Federal Catarinense). You have your own personal opinions. Starting conversation... Answer normally.")
-
- while True:
- user_input = st.text_input("->")
- sbd = user_input
- history.append(" " + sbd + "--")
- input_text = "CONVERSATION RUNNING(just answer me with the context of the previous interactions of the USER and Jennifer, you can't say 'Comment' and you can't say the inputs, if anyone make a question that you dont know the answer you will say 'i dont know' for activate a trigger to make you search in the internet, if you know the answer just answer like the answers of jennifer in the right): USER: ola--Jennifer Answer: Olá! " + " ".join(history) + "Jennifer Answer: "
-
- if 'carregar imagem sobre' in sbd:
- url = st.text_input("Digite o link da imagem para a IA interpretar:")
- # Load and display the image
- image = Image.open(requests.get(url, stream=True).raw)
- st.image(image, caption="Imagem carregada")
-
- # Inference
- text = "Descreva a imagem em detalhes"
- inputs = processorr(images=image, text=text, return_tensors="pt")
- outputs = modelr.generate(**inputs)
- bcvv = processorr.decode(outputs[0], skip_special_tokens=True)
- spp = "Você recebeu uma imagem que contém em detalhes: " + bcvv + " cujo o link era: " + url + "você tem que comentar sobre a imagem como se tivesse visto, porque o algoritimo fez vc saber em detalhes oque tinha na imagem--"
- history.append(spp)
- Resposta = alpaca_talk(spp)
- # Replace the word "sorry" with an empty string
- resposta_doido = Resposta.split("--")
- st.write(resposta_doido[-1])
-
- elif 'interprete este código' in sbd:
- codigo = st.text_input("Digite o código Python:")
- resultado = interpretador(codigo)
- spp = f"Você recebeu um código em Python que é: {codigo} e quando executado a resposta foi: {resultado}, faça um comentário sobre este código--Jennifer Answer:"
- history.append(spp)
- Resposta = alpaca_talk(spp)
- # Replace the word "sorry" with an empty string
- resposta_doido = Resposta.split("--")
- st.write(resposta_doido[-1])
-
- else:
- Resposta = alpaca_talk(input_text)
- # Replace the word "sorry" with an empty string
- resposta_doido = Resposta.split("--")
- history.append(resposta_doido[-1])
- st.write(resposta_doido[-1])
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/HaloMaster/chinesesummary/fengshen/data/task_dataloader/__init__.py b/spaces/HaloMaster/chinesesummary/fengshen/data/task_dataloader/__init__.py
deleted file mode 100644
index 25810ab9ab20ad36f72ba20b31768341e78e2676..0000000000000000000000000000000000000000
--- a/spaces/HaloMaster/chinesesummary/fengshen/data/task_dataloader/__init__.py
+++ /dev/null
@@ -1,3 +0,0 @@
-# coding=utf-8
-from .task_datasets import LCSTSDataModel, LCSTSDataset
-__all__ = ['LCSTSDataModel', 'LCSTSDataset']
diff --git a/spaces/HaoFeng2019/DocTr/README.md b/spaces/HaoFeng2019/DocTr/README.md
deleted file mode 100644
index 34252d327783269c18a53c1adc9b4c924a8bcb55..0000000000000000000000000000000000000000
--- a/spaces/HaoFeng2019/DocTr/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: DocTr
-emoji: 👁
-colorFrom: green
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.24.1
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/utils/resample_wavs.py b/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/utils/resample_wavs.py
deleted file mode 100644
index c77109ef4d5142cd9094f46dd186a17571071ab8..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/utils/resample_wavs.py
+++ /dev/null
@@ -1,59 +0,0 @@
-import argparse
-import librosa
-import numpy as np
-import os
-import scipy
-import scipy.io.wavfile
-import sys
-
-from glob import glob
-from tqdm import tqdm
-from joblib import Parallel, delayed
-
-
-def check_directories(dir_input, dir_output):
- if not os.path.exists(dir_input):
- sys.exit("Error: Input directory does not exist: {}".format(dir_input))
- if not os.path.exists(dir_output):
- sys.exit("Error: Output directory does not exist: {}".format(dir_output))
- abs_a = os.path.abspath(dir_input)
- abs_b = os.path.abspath(dir_output)
- if abs_a == abs_b:
- sys.exit("Error: Paths are the same: {}".format(abs_a))
-
-
-def resample_file(input_filename, output_filename, sample_rate):
- mono = (
- True # librosa converts signal to mono by default, so I'm just surfacing this
- )
- audio, existing_rate = librosa.load(input_filename, sr=sample_rate, mono=mono)
- audio /= 1.414 # Scale to [-1.0, 1.0]
- audio *= 32767 # Scale to int16
- audio = audio.astype(np.int16)
- scipy.io.wavfile.write(output_filename, sample_rate, audio)
-
-
-def downsample_wav_files(input_dir, output_dir, output_sample_rate):
- check_directories(input_dir, output_dir)
- inp_wav_paths = glob(input_dir + "/*.wav")
- out_wav_paths = [
- os.path.join(output_dir, os.path.basename(p)) for p in inp_wav_paths
- ]
- _ = Parallel(n_jobs=-1)(
- delayed(resample_file)(i, o, output_sample_rate)
- for i, o in tqdm(zip(inp_wav_paths, out_wav_paths))
- )
-
-
-def parse_args():
- parser = argparse.ArgumentParser()
- parser.add_argument("--input_dir", "-i", type=str, required=True)
- parser.add_argument("--output_dir", "-o", type=str, required=True)
- parser.add_argument("--output_sample_rate", "-s", type=int, required=True)
- return parser.parse_args()
-
-
-if __name__ == "__main__":
- args = parse_args()
- downsample_wav_files(args.input_dir, args.output_dir, args.output_sample_rate)
- print(f"\n\tCompleted")
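For reference, an illustrative invocation of the utilities above (directory and file names are made up, assuming the functions are importable):

```python
# Equivalent command line:
#   python resample_wavs.py --input_dir ./wavs_44k --output_dir ./wavs_22k --output_sample_rate 22050
downsample_wav_files("./wavs_44k", "./wavs_22k", output_sample_rate=22050)

# Or resample a single file:
resample_file("./wavs_44k/utt_0001.wav", "./wavs_22k/utt_0001.wav", sample_rate=22050)
```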
diff --git a/spaces/Hellisotherpeople/Gadsby/pages/Text-to-Text.py b/spaces/Hellisotherpeople/Gadsby/pages/Text-to-Text.py
deleted file mode 100644
index ea6c52098a5f94d4e60e99d21ae6111db1e44544..0000000000000000000000000000000000000000
--- a/spaces/Hellisotherpeople/Gadsby/pages/Text-to-Text.py
+++ /dev/null
@@ -1,160 +0,0 @@
-import re
-from unittest import result
-import string
-import streamlit as st
-import torch
-from torch.nn import functional as F
-from transformers import (AutoModelForCausalLM, AutoModelForQuestionAnswering,
- AutoModelForSeq2SeqLM,
- AutoModelForSequenceClassification, AutoTokenizer,
- GPT2Tokenizer, LogitsProcessor, LogitsProcessorList,
- pipeline, top_k_top_p_filtering)
-
-
-
-st.set_page_config(page_title="Gadsby")
-st.title("Gadsby - Constrained Text G̶e̶n̶e̶r̶a̶t̶i̶o̶n̶ to Text with Transformers")
-st.image("https://upload.wikimedia.org/wikipedia/commons/1/1d/Gadsby_%28book_cover%29.jpg")
-st.caption("The inspiration for this space: https://en.wikipedia.org/wiki/Gadsby_(novel)")
-
-
-
-form = st.sidebar.form("choose_settings")
-form.header("Main Settings")
-
-model_name = form.text_area("Enter the name of the pre-trained model from transformers that we are using for Text-to-Text", value = "google/pegasus-cnn_dailymail")
-form.caption("This will download a new model, so it may take awhile or even break if the model is too large")
-mode = form.selectbox("What kind of constrained generation are we doing?", ["lipogram", "reverse_lipogram", "e-prime", "rhopalism", "length_constrained", "greater_than_length", "Pangram", "rhopalism-lipogram"])
-form.caption("Lipograms mean that a letter (or substring) is not allowed in the generated string, reverse lipograms force a letter to be in the generated string")
-
-if mode == "lipogram":
- naughty_strings_list = st.text_area("Enter the list of strings that you don't want in each word seperated by a space", value = "E e")
- naughty_strings = naughty_strings_list.split(" ")
-elif mode == "e-prime":
- e_prime_string = """be being been am is isn't are aren't was wasn't were weren't i'm you're we're they're he's she's it's there's here's where's how's what's who's that's aint isnt arent wasnt werent im youre were theyre hes shes its theres heres wheres hows whats whos thats aint Be Being Been Am Is Isn't Are Aren't Was Wasn't Were Weren't I'm You're We're They're He's She's It's There's Here's Where's How's What's Who's That's Aint Isnt Arent Wasnt Werent Im Youre Were Theyre Hes Shes Its Theres Heres Wheres Hows Whats Whos Thats Aint BE BEING BEEN AM IS ISN'T ARE AREN'T WAS WASN'T WERE WEREN'T I'M YOU'RE WE'RE THEY'RE HE'S SHE'S IT'S THERE'S HERE'S WHERE'S HOW'S WHAT'S WHO'S THAT'S AINT ISNT ARENT WASNT WERENT IM YOURE WERE THEYRE HES SHES ITS THERES HERES WHERES HOWS WHATS WHOS THATS AINT"""
- st.caption("The default word list is the list needed to enforce the language model to generate english without usage of the verb to be")
- naughty_strings_list = st.text_area("Enter the list of strings that you don't want to be generated (exact match)", value = e_prime_string)
- naughty_strings = naughty_strings_list.split(" ")
-elif mode == "reverse_lipogram":
- nice_strings_list = st.text_area("Enter the list of strings that you DO want in each word, separated by a space", value = "t T")
- nice_strings = nice_strings_list.split(" ")
-elif mode == "rhopalism":
- length_constraint = form.number_input("Enter the length that the Rhopalism shoud start with", value = 1)
- st.caption("Rhopalisms are usually reliable but sometimes you need to try generating two or three times for a perfect one")
-elif mode == "rhopalism-lipogram":
- naughty_strings_list = st.text_area("Enter the list of strings that you don't want in each word, separated by a space", value = "E e")
- naughty_strings = naughty_strings_list.split(" ")
- length_constraint = form.number_input("Enter the length that the Rhopalism should start with", value = 1)
- st.caption("Rhopalisms are usually reliable but sometimes you need to try generating two or three times for a perfect one")
-else:
- length_constraint = form.number_input("Enter the length should each word be restricted to (or greater/less than)", value = 5) + 1
-
-
-length = form.number_input("Select how long you want the generated text to be", value = 100)
-number_of_tokens_to_sample = form.number_input("Select how many tokens we want to search through when we do the filtering", value = 25000)
-form.caption("Settings this to higher numbers will improve the experience but will cause generating to slow. Low numbers may cause lots of blank or failed generations")
-temperature = form.number_input("How spicy/interesting do we want our models output to be", value = 0.10, min_value = 0.0)
-form.caption("Setting this higher decreases the likelihood of high probability words and increases the likelihood of low probability (and presumably more interesting) words")
-form.caption("For more details on what these settings mean, see here: https://huggingface.co/blog/how-to-generate")
-
-
-sequence = st.text_area("Enter a custom prompt", value = "The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct.")
-decoded_sequence = ""
-
-form.form_submit_button("Generate some Constrained Text!")
-
-
-with st.spinner("Please wait while the model loads:"):
- tokenizer = AutoTokenizer.from_pretrained(model_name)
- model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
- model.config.pad_token_id = model.config.eos_token_id
-
-def isPalindrome(s):
- return s == s[::-1]
-
-
-if mode == "rhopalism" or mode == "rhopalism-lipogram":
- rhopalism_len = length_constraint
-
-
-
-nice_strings_pangram = list(string.ascii_lowercase)
-
-decoder_input_ids = tokenizer.encode("", add_special_tokens=False, return_tensors="pt")
-
-def get_next_word_without_e():
- input_ids = tokenizer.encode(sequence, return_tensors="pt")
- # get logits of last hidden state
-
- next_token_candidates_logits = model(input_ids = input_ids, decoder_input_ids = decoder_input_ids)[0][:, -1, :]
- if temperature != 1.0:
- next_token_candidates_logits = next_token_candidates_logits / temperature
- # filter
- filtered_next_token_candidates_logits = top_k_top_p_filtering(next_token_candidates_logits, top_k=int(number_of_tokens_to_sample), top_p=int(number_of_tokens_to_sample))
- # sample and get a probability distribution
- probs = F.softmax(filtered_next_token_candidates_logits, dim=-1)
- next_token_candidates = torch.multinomial(probs, num_samples=int(number_of_tokens_to_sample)) ## 10000 random samples
- word_list = []
- for candidate_string in next_token_candidates:
- for candidate in candidate_string:
- resulting_string = tokenizer.decode(candidate, skip_special_tokens=True)# clean_up_tokenization_spaces=True)
- ###Constrained text generation starts HERE
- ##Lipogram - No naughty strings used
- if mode == "lipogram" or mode == "e-prime":
- if all(naughty_string not in resulting_string for naughty_string in naughty_strings): ## keep the first candidate that contains none of the naughty strings
- return resulting_string, candidate
- ##Reverse-Lipogram - Must use things in nice_strings
- elif mode == "reverse_lipogram":
- if any(nice_string in resulting_string for nice_string in nice_strings):
- return resulting_string, candidate
- ##Length constraints
- elif mode == "length_constrained":
- ##Seems reliable if length is greater than 4
- if len(resulting_string) == length_constraint:
- return resulting_string, candidate
- elif mode == "greater_than_length":
- ##Only sort of works
- if len(resulting_string) >= length_constraint:
- return resulting_string, candidate
- elif mode == "rhopalism":
- ##Mostly works
- if len(resulting_string) == rhopalism_len:
- return resulting_string, candidate
- elif mode == "Pangram":
- if any(c in nice_strings_pangram for c in resulting_string):
- return resulting_string, candidate
- elif mode == "rhopalism-lipogram":
- if len(resulting_string) == rhopalism_len:
- if all(nauty_string not in resulting_string for nauty_string in naughty_strings):
- return resulting_string, candidate
-
-
-
- return " "
-
-
-new_sequence = ""
-
-j = 0
-i = length
-while i > 0:
- new_word, new_candidate = get_next_word_without_e()
- decoder_input_ids = torch.cat([decoder_input_ids, new_candidate.view(1, -1)], axis=-1)
- if new_word.endswith(" "):
- new_sequence = new_sequence + new_word
- else:
- new_sequence = new_sequence + new_word + " "
- if mode == "rhopalism" or mode == "rhopalism-lipogram":
- rhopalism_len += 1
- i = i-1
- if mode == "Pangram":
- for character in sequence:
- if character in nice_strings_pangram:
- nice_strings_pangram.remove(character)
- j += 1
-
-st.write("GENERATED SEQUENCE: ")
-#st.write(new_sequence)
-st.write(tokenizer.decode(decoder_input_ids[0], skip_special_tokens=True))
-#st.write(nice_strings_pangram)
-
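For reference, a minimal sketch of the rejection-sampling idea behind `get_next_word_without_e`, using a toy vocabulary instead of a real tokenizer (all names and values are illustrative):

```python
import torch
import torch.nn.functional as F

vocab = [" the", " tree", " sky", " night", " moon", " sun"]  # toy vocabulary
logits = torch.randn(len(vocab))
probs = F.softmax(logits, dim=-1)
candidates = torch.multinomial(probs, num_samples=len(vocab))

naughty_strings = ["e", "E"]
chosen = next(
    (vocab[int(i)] for i in candidates
     if all(s not in vocab[int(i)] for s in naughty_strings)),
    None,
)
print(chosen)  # first sampled word that avoids the letter 'e', e.g. " sky"
```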
diff --git a/spaces/HenryCarle/your_sport_picker/app.py b/spaces/HenryCarle/your_sport_picker/app.py
deleted file mode 100644
index b8e324b9c29780cc194b84219d4782bd519931d7..0000000000000000000000000000000000000000
--- a/spaces/HenryCarle/your_sport_picker/app.py
+++ /dev/null
@@ -1,172 +0,0 @@
-### ----------------------------- ###
-### libraries ###
-### ----------------------------- ###
-
-import gradio as gr
-import pandas as pd
-import numpy as np
-from sklearn.model_selection import train_test_split
-from sklearn.linear_model import LogisticRegression
-from sklearn import metrics
-
-
-### ------------------------------ ###
-### data transformation ###
-### ------------------------------ ###
-
-# load dataset
-uncleaned_data = pd.read_csv('data.csv')
-
-# remove timestamp from dataset (always first column)
-uncleaned_data = uncleaned_data.iloc[: , 1:]
-data = pd.DataFrame()
-
-# keep track of which columns are categorical and what
-# those columns' value mappings are
-# structure: {colname1: {...}, colname2: {...} }
-cat_value_dicts = {}
-final_colname = uncleaned_data.columns[len(uncleaned_data.columns) - 1]
-
-# for each column...
-for (colname, colval) in uncleaned_data.items():
-
- # check if col is already a number; if so, add col directly
- # to new dataframe and skip to next column
- if isinstance(colval.values[0], (np.integer, float)):
- data[colname] = uncleaned_data[colname].copy()
- continue
-
- # structure: {0: "lilac", 1: "blue", ...}
- new_dict = {}
- val = 0 # first index per column
- transformed_col_vals = [] # new numeric datapoints
-
- # if not, for each item in that column...
- for (row, item) in enumerate(colval.values):
-
- # if item is not in this col's dict...
- if item not in new_dict:
- new_dict[item] = val
- val += 1
-
- # then add numerical value to transformed dataframe
- transformed_col_vals.append(new_dict[item])
-
- # reverse dictionary only for final col (0, 1) => (vals)
- if colname == final_colname:
- new_dict = {value : key for (key, value) in new_dict.items()}
-
- cat_value_dicts[colname] = new_dict
- data[colname] = transformed_col_vals
-
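For reference, a standalone sketch of what the encoding loop above does to a categorical column (values are made up):

```python
colvals = ["soccer", "tennis", "soccer", "golf"]
mapping, codes, val = {}, [], 0
for item in colvals:
    if item not in mapping:
        mapping[item] = val
        val += 1
    codes.append(mapping[item])
print(mapping)  # {'soccer': 0, 'tennis': 1, 'golf': 2}
print(codes)    # [0, 1, 0, 2]
```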
-
-### -------------------------------- ###
-### model training ###
-### -------------------------------- ###
-
-# select features and predicton; automatically selects last column as prediction
-cols = len(data.columns)
-num_features = cols - 1
-x = data.iloc[: , :num_features]
-y = data.iloc[: , num_features:]
-
-# split data into training and testing sets
-x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.25)
-
-# instantiate the model (using default parameters)
-model = LogisticRegression()
-model.fit(x_train, y_train.values.ravel())
-y_pred = model.predict(x_test)
-
-
-### -------------------------------- ###
-### article generation ###
-### -------------------------------- ###
-# borrow file reading function from reader.py
-
-def get_feat():
- feats = [abs(x) for x in model.coef_[0]]
- max_val = max(feats)
- idx = feats.index(max_val)
- return data.columns[idx]
-
-acc = str(round(metrics.accuracy_score(y_test, y_pred) * 100, 1)) + "%"
-most_imp_feat = get_feat()
-# info = get_article(acc, most_imp_feat)
-
-
-
-### ------------------------------- ###
-### interface creation ###
-### ------------------------------- ###
-
-
-# predictor for generic number of features
-def general_predictor(*args):
- features = []
-
- # transform categorical input
- for colname, arg in zip(data.columns, args):
- if (colname in cat_value_dicts):
- features.append(cat_value_dicts[colname][arg])
- else:
- features.append(arg)
-
- # predict single datapoint
- new_input = [features]
- result = model.predict(new_input)
- return cat_value_dicts[final_colname][result[0]]
-
-# add data labels to replace those lost via star-args
-
-
-block = gr.Blocks()
-
-with open('info.md') as f:
- with block:
- gr.Markdown(f.readline())
- gr.Markdown('Take the quiz to get a personalized recommendation using AI.')
-
- with gr.Row():
- with gr.Box():
- inputls = []
- for colname in data.columns:
- # skip last column
- if colname == final_colname:
- continue
-
- # access categories dict if data is categorical
- # otherwise, just use a number input
- if colname in cat_value_dicts:
- radio_options = list(cat_value_dicts[colname].keys())
- inputls.append(gr.inputs.Dropdown(choices=radio_options, type="value", label=colname))
- else:
- # add numerical input
- inputls.append(gr.inputs.Number(label=colname))
- gr.Markdown(" ")
-
- submit = gr.Button("Click to see your personalized result!", variant="primary")
- gr.Markdown(" ")
- output = gr.Textbox(label="Your recommendation:", placeholder="your recommendation will appear here")
-
- submit.click(fn=general_predictor, inputs=inputls, outputs=output)
- gr.Markdown(" ")
-
- with gr.Row():
- with gr.Box():
- gr.Markdown(f"
Accuracy:
{acc}")
- with gr.Box():
- gr.Markdown(f"
Most important feature:
{most_imp_feat}")
-
- gr.Markdown(" ")
-
- with gr.Box():
- gr.Markdown('''⭐ Note that model accuracy is based on the uploaded data.csv and reflects how well the AI model can give correct recommendations for that dataset. Model accuracy and most important feature can be helpful for understanding how the model works, but should not be considered absolute facts about the real world.''')
-
- with gr.Box():
- with open('info.md') as f:
- f.readline()
- gr.Markdown(f.read())
-
-# show the interface
-block.launch()
\ No newline at end of file
diff --git a/spaces/HighCWu/GFPGAN-1.3/tests/test_arcface_arch.py b/spaces/HighCWu/GFPGAN-1.3/tests/test_arcface_arch.py
deleted file mode 100644
index b4b28d33800ae78a354e078e14373d2ee159dc7b..0000000000000000000000000000000000000000
--- a/spaces/HighCWu/GFPGAN-1.3/tests/test_arcface_arch.py
+++ /dev/null
@@ -1,49 +0,0 @@
-import torch
-
-from gfpgan.archs.arcface_arch import BasicBlock, Bottleneck, ResNetArcFace
-
-
-def test_resnetarcface():
- """Test arch: ResNetArcFace."""
-
- # model init and forward (gpu)
- if torch.cuda.is_available():
- net = ResNetArcFace(block='IRBlock', layers=(2, 2, 2, 2), use_se=True).cuda().eval()
- img = torch.rand((1, 1, 128, 128), dtype=torch.float32).cuda()
- output = net(img)
- assert output.shape == (1, 512)
-
- # -------------------- without SE block ----------------------- #
- net = ResNetArcFace(block='IRBlock', layers=(2, 2, 2, 2), use_se=False).cuda().eval()
- output = net(img)
- assert output.shape == (1, 512)
-
-
-def test_basicblock():
- """Test the BasicBlock in arcface_arch"""
- block = BasicBlock(1, 3, stride=1, downsample=None).cuda()
- img = torch.rand((1, 1, 12, 12), dtype=torch.float32).cuda()
- output = block(img)
- assert output.shape == (1, 3, 12, 12)
-
- # ----------------- use the downsample module --------------- #
- downsample = torch.nn.UpsamplingNearest2d(scale_factor=0.5).cuda()
- block = BasicBlock(1, 3, stride=2, downsample=downsample).cuda()
- img = torch.rand((1, 1, 12, 12), dtype=torch.float32).cuda()
- output = block(img)
- assert output.shape == (1, 3, 6, 6)
-
-
-def test_bottleneck():
- """Test the Bottleneck in arcface_arch"""
- block = Bottleneck(1, 1, stride=1, downsample=None).cuda()
- img = torch.rand((1, 1, 12, 12), dtype=torch.float32).cuda()
- output = block(img)
- assert output.shape == (1, 4, 12, 12)
-
- # ----------------- use the downsample module --------------- #
- downsample = torch.nn.UpsamplingNearest2d(scale_factor=0.5).cuda()
- block = Bottleneck(1, 1, stride=2, downsample=downsample).cuda()
- img = torch.rand((1, 1, 12, 12), dtype=torch.float32).cuda()
- output = block(img)
- assert output.shape == (1, 4, 6, 6)
diff --git a/spaces/HighCWu/Style2Paints-4-Gradio/ui/web-mobile/src/settings.4cc17.js b/spaces/HighCWu/Style2Paints-4-Gradio/ui/web-mobile/src/settings.4cc17.js
deleted file mode 100644
index 9855c9af4fe4041afdfd2a9148373f5848a86090..0000000000000000000000000000000000000000
--- a/spaces/HighCWu/Style2Paints-4-Gradio/ui/web-mobile/src/settings.4cc17.js
+++ /dev/null
@@ -1 +0,0 @@
-window._CCSettings={platform:"web-mobile",groupList:["default"],collisionMatrix:[[true]],rawAssets:{assets:{}},assetTypes:[],launchScene:"db://assets/Scene/helloworld.fire",scenes:[{url:"db://assets/Scene/helloworld.fire",uuid:0}],packedAssets:{"054fbd38e":["00eb2q7hlMj7GXigg5AV5g","02delMVqdBD70a/HSD99FK","03p2MmRKJCdZ5AsjCj7sHN","06vZFiAAJDZYVKHgAYMkFE","0fwe5eFfFEmpa0ZPVR7EtI","226kS492tNIJQrLRGCSmtw","34n14Pu71Grr1AJOTEM1bi","39yW4Bc9VKJKFRW1gXJnWc","40m4g3hMNNCrwsd8CTt6uQ","46NQdwGmhIzbOm/xN4AFWm","56fc2Ai/RFNYpaMT8crweK","64iVI/DaVLt4Vf7oiaAWmu","65EelgvQxEI4BrVLTDbi2Y","679co/+9pO3IQeM+J2lfvH","69O71YbnBB+5PkOtJI2TvR","6eBWFz0oVHPLIGQKf/9Thu","71VhFCTINJM6/Ky3oX9nBT","7bbDiwUGZMta3uM/wWJTyw","7cvienAXxBMI2svV3dFolc","90YtjmdnFIcL8cmAgzp+Tv","9093vD545J6ZUEYzPwhfLK","9dYAAftfRHJqYpJlnj3tC4","a6WA9aHtxCh5YHKRvVP3Xw","a9+i07/8pPYK37OJXM0mCS","b2KMvAVjxLxKX0TR89NNkA","b4P/PCArtIdIH38t6mlw8Y","c8da7TtxBFLrBK3p3et/+X","d4u6MnLXxBV79+3Hng9oHd","de/e4JzelC4Z4eHkP3iE9V","e129YibpJHJKWONYnD1rRC","e60pPrifdEzquK0N9LCE/t","e8Ueib+qJEhL6mXAHdnwbi","e8UlNj5+dHyIAD1rSaRFQi","eaEJTBfGVMP6tI4b09l/xC","eadEprFbtAdbR6H7LC8vTn","f8/EswgwJFlYr6skAapCKR"],"0e521684a":["08MeXCe1dMHYedNdkzS63A","10L7rqO+5OtoYe7G9tTmg1","1aMvx28L1PZpgPVpKcDKCz","1emZAFMAJDpYid8LTfEewW","28PhFLWBlNi7ZOiy6wWfs1","29FYIk+N1GYaeWH/q1NxQO",0,"39MpUyhd1KO5lqkP4gRjGL","3d7rWwYtBD3p3bKpT7hPYw","43+q/0K7tHg7DFdkHh5wdp","47UdQy3DhCMJFip3T7mkbm","51VxMzvpxNt5Eude2ZcZ+H","55/pLEAd1KgqMC+Zu5+CM8","61jjCOw0ZBWLRWB/98eufd","65McWXN3tCy6LVKe64BuuS","6erFCVsFpOfI/2jYl2ny9K","73lcvDzLBNWIH2ElsQuEBW","75qd14DUdM8bm+GF3SMYaT","8c20Sso/ZEn7NUfNSM+EBh","93A3mAWIRJPqSL2jznn6+j","a2MjXRFdtLlYQ5ouAFv/+R","a9WLZTN3hO56gmWW/Zy0mz","b1oqXv5YhLeLek/LqyBXpr","b4Vu5hV55HcIAV2L1fo5H8","bd4f8weJ9FMqiEvMV+sn+n","bfD2QsCeBMdJIozPyHybN/","c7PZTRM5VBqKm9Nbz2Yepi","cd/TmhtM9J66HH9njOEZ+H","d1MP9xGktN+oMvtBxdLhgq","d2USs4NLxG8IHzH3SpIWwA","e4G5S883RM47WPxkyuPCQ0","e7q6FL+VZEgLJUjVeDLic/","e97GVMl6JHh5Ml5qEDdSGa","f0BIwQ8D5Ml7nTNQbh1YlS","f0tpaN389NEYuM7aul/b8S","f7dd/+nptHoIqUFepAEcrB","fbP12mE1ZDtJzoZK4U6lSU"]},orientation:"",subpackages:{},uuids:["2dL3kvpAxJu6GJ7RdqJG5J"],md5AssetsMap:{"00/0079bdaa-ee19-4c8f-b197-8a0839015e60.json":"87146","05/054fbd38e.json":"a62b2","0e/0e521684a.json":"3ddf2","assets/00/0079bdaa-ee19-4c8f-b197-8a0839015e60.png":"fca06","assets/02/0275e94c-56a7-410f-bd1a-fc7483f7d14a.png":"cea68","assets/03/03a76326-44a2-4275-9e40-b230a3eec1cd.png":"c837b","assets/06/06bd9162-0002-4365-854a-1e0018324144.png":"8090a","assets/0f/0fc1ee5e-15f1-449a-96b4-64f551ec4b48.jpg":"8b42e","assets/22/22ea44b8-f76b-4d20-942b-2d11824a6b70.png":"d8588","assets/34/349f5e0f-bbbd-46ae-bd40-24e4c43356e2.png":"8d682","assets/39/39c96e01-73d5-4a24-a151-5b581726759c.png":"d45fb","assets/40/409b8837-84c3-4d0a-bc2c-77c093b7ab90.png":"33060","assets/46/46350770-1a68-48cd-b3a6-ff13780055a6.png":"81883","assets/56/567dcd80-8bf4-4535-8a5a-313f1caf078a.png":"acdf0","assets/64/6489523f-0da5-4bb7-855f-ee889a0169ae.png":"955dc","assets/65/6511e960-bd0c-4423-806b-54b4c36e2d98.png":"0d0ac","assets/67/67f5ca3f-fbda-4edc-841e-33e27695fbc7.png":"b4ff6","assets/69/693bbd58-6e70-41fb-93e4-3ad248d93bd1.png":"e51b8","assets/6e/6e056173-d285-473c-b206-40a7fff5386e.png":"68270","assets/71/71561142-4c83-4933-afca-cb7a17f67053.png":"286c6","assets/7b/7b6c38b0-5066-4cb5-adee-33fc16253cb0.png":"467e0","assets/7c/7cbe27a7-017c-4130-8dac-bd5ddd16895c.png":"d4792","assets/90/9062d8e6-7671-4870-bf1c-980833a7e4ef.png":"08a70","assets/90/90f77bc3-e78e-49e9-9504-6333f085f2ca.png":"40fe2","assets/9
d/9d60001f-b5f4-4726-a629-2659e3ded0b8.png":"94752","assets/a6/a6580f5a-1edc-4287-9607-291bd53f75f0.png":"a089e","assets/a9/a9fa2d3b-ffca-4f60-adfb-3895ccd26092.png":"070d2","assets/b2/b228cbc0-563c-4bc4-a5f4-4d1f3d34d900.png":"a6fba","assets/b4/b43ff3c2-02bb-4874-81f7-f2dea6970f18.png":"bedf4","assets/c8/c875aed3-b710-452e-b04a-de9ddeb7ff97.png":"e3c06","assets/d4/d4bba327-2d7c-4157-bf7e-dc79e0f681dd.png":"c156f","assets/de/defdee09-cde9-42e1-9e1e-1e43f7884f55.png":"4ceec","assets/e1/e1dbd622-6e92-4724-a58e-3589c3d6b442.png":"85d54","assets/e6/e6d293eb-89f7-44ce-ab8a-d0df4b084fed.png":"b94ef","assets/e8/e851e89b-faa2-4484-bea6-5c01dd9f06e2.png":"1ecb7","assets/e8/e8525363-e7e7-47c8-8003-d6b49a445422.png":"10fe6","assets/ea/ea1094c1-7c65-4c3f-ab48-e1bd3d97fc42.png":"04fa3","assets/ea/ea744a6b-15bb-4075-b47a-1fb2c2f2f4e7.png":"5d0c4","assets/f8/f8fc4b30-8302-4595-8afa-b2401aa42291.png":"8f722"}};
\ No newline at end of file
diff --git a/spaces/ICML2022/ICML2022_papers/README.md b/spaces/ICML2022/ICML2022_papers/README.md
deleted file mode 100644
index 8a79804266faf998fb4e6c6b01a00f55ce4058c7..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/ICML2022_papers/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: ICML2022 Papers
-emoji: 🦀
-colorFrom: green
-colorTo: gray
-sdk: gradio
-sdk_version: 3.36.1
-app_file: app.py
-pinned: true
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/ICML2022/OFA/fairseq/examples/byte_level_bpe/gru_transformer.py b/spaces/ICML2022/OFA/fairseq/examples/byte_level_bpe/gru_transformer.py
deleted file mode 100644
index d4efa93a4d75da71c78e786d7f62101ef3266af4..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/examples/byte_level_bpe/gru_transformer.py
+++ /dev/null
@@ -1,107 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch.nn as nn
-import torch.nn.functional as F
-from fairseq.models import register_model, register_model_architecture
-from fairseq.models.transformer import TransformerEncoder, TransformerModel
-
-
-@register_model("gru_transformer")
-class GRUTransformerModel(TransformerModel):
- @classmethod
- def build_encoder(cls, args, src_dict, embed_tokens):
- return GRUTransformerEncoder(args, src_dict, embed_tokens)
-
-
-class GRUTransformerEncoder(TransformerEncoder):
- def __init__(self, args, dictionary, embed_tokens):
- super().__init__(args, dictionary, embed_tokens)
- self.emb_ctx = nn.GRU(
- input_size=embed_tokens.embedding_dim,
- hidden_size=embed_tokens.embedding_dim // 2,
- num_layers=1,
- bidirectional=True,
- )
-
- def forward_embedding(self, src_tokens):
- # embed tokens and positions
- x = embed = self.embed_scale * self.embed_tokens(src_tokens)
- if self.embed_positions is not None:
- x = embed + self.embed_positions(src_tokens)
-
- # contextualize embeddings
- x = x.transpose(0, 1)
- x = self.dropout_module(x)
- x, _ = self.emb_ctx.forward(x)
- x = x.transpose(0, 1)
-
- if self.layernorm_embedding is not None:
- x = self.layernorm_embedding(x)
- x = self.dropout_module(x)
- return x, embed
-
-
-@register_model_architecture("gru_transformer", "gru_transformer")
-def gru_transformer_base_architecture(args):
- args.encoder_embed_path = getattr(args, "encoder_embed_path", None)
- args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512)
- args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 2048)
- args.encoder_layers = getattr(args, "encoder_layers", 6)
- args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 8)
- args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False)
- args.encoder_learned_pos = getattr(args, "encoder_learned_pos", False)
- args.decoder_embed_path = getattr(args, "decoder_embed_path", None)
- args.decoder_embed_dim = getattr(args, "decoder_embed_dim", args.encoder_embed_dim)
- args.decoder_ffn_embed_dim = getattr(
- args, "decoder_ffn_embed_dim", args.encoder_ffn_embed_dim
- )
- args.decoder_layers = getattr(args, "decoder_layers", 6)
- args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 8)
- args.decoder_normalize_before = getattr(args, "decoder_normalize_before", False)
- args.decoder_learned_pos = getattr(args, "decoder_learned_pos", False)
- args.attention_dropout = getattr(args, "attention_dropout", 0.0)
- args.activation_dropout = getattr(args, "activation_dropout", 0.0)
- args.activation_fn = getattr(args, "activation_fn", "relu")
- args.dropout = getattr(args, "dropout", 0.1)
- args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None)
- args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0)
- args.share_decoder_input_output_embed = getattr(
- args, "share_decoder_input_output_embed", False
- )
- args.share_all_embeddings = getattr(args, "share_all_embeddings", False)
- args.no_token_positional_embeddings = getattr(
- args, "no_token_positional_embeddings", False
- )
- args.adaptive_input = getattr(args, "adaptive_input", False)
- args.no_cross_attention = getattr(args, "no_cross_attention", False)
- args.cross_self_attention = getattr(args, "cross_self_attention", False)
- args.layer_wise_attention = getattr(args, "layer_wise_attention", False)
-
- args.decoder_output_dim = getattr(
- args, "decoder_output_dim", args.decoder_embed_dim
- )
- args.decoder_input_dim = getattr(args, "decoder_input_dim", args.decoder_embed_dim)
-
- args.no_scale_embedding = getattr(args, "no_scale_embedding", False)
- args.layernorm_embedding = getattr(args, "layernorm_embedding", False)
-
-
-@register_model_architecture("gru_transformer", "gru_transformer_big")
-def gru_transformer_big(args):
- args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 1024)
- args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 4096)
- args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 16)
- args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False)
- args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 1024)
- args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 4096)
- args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 16)
- args.dropout = getattr(args, "dropout", 0.3)
- gru_transformer_base_architecture(args)
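
The deleted gru_transformer.py contextualizes the encoder's token embeddings with a bidirectional GRU whose hidden size is half the embedding dimension, so the concatenated forward and backward states keep the original embedding width. A minimal standalone sketch of that shape-preserving choice, in plain PyTorch with illustrative sizes (512 mirrors the base architecture default above; the variable names are not from fairseq):

import torch
import torch.nn as nn

embed_dim, seq_len, batch = 512, 7, 2
emb_ctx = nn.GRU(
    input_size=embed_dim,
    hidden_size=embed_dim // 2,  # half width per direction
    num_layers=1,
    bidirectional=True,          # forward and backward halves are concatenated
)

x = torch.randn(seq_len, batch, embed_dim)  # GRU expects (T, B, C) by default
out, _ = emb_ctx(x)
print(out.shape)  # torch.Size([7, 2, 512]) -> same width as the input embeddings
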
diff --git a/spaces/ICML2022/OFA/fairseq/examples/hubert/simple_kmeans/dump_km_label.py b/spaces/ICML2022/OFA/fairseq/examples/hubert/simple_kmeans/dump_km_label.py
deleted file mode 100644
index 8871307804d3f1e5c7cc49061614c69df26ab1ee..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/examples/hubert/simple_kmeans/dump_km_label.py
+++ /dev/null
@@ -1,98 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-import os
-import sys
-
-import numpy as np
-
-import joblib
-import torch
-import tqdm
-
-logging.basicConfig(
- format="%(asctime)s | %(levelname)s | %(name)s | %(message)s",
- datefmt="%Y-%m-%d %H:%M:%S",
- level=os.environ.get("LOGLEVEL", "INFO").upper(),
- stream=sys.stdout,
-)
-logger = logging.getLogger("dump_km_label")
-
-
-class ApplyKmeans(object):
- def __init__(self, km_path):
- self.km_model = joblib.load(km_path)
- self.C_np = self.km_model.cluster_centers_.transpose()
- self.Cnorm_np = (self.C_np ** 2).sum(0, keepdims=True)
-
- self.C = torch.from_numpy(self.C_np)
- self.Cnorm = torch.from_numpy(self.Cnorm_np)
- if torch.cuda.is_available():
- self.C = self.C.cuda()
- self.Cnorm = self.Cnorm.cuda()
-
- def __call__(self, x):
- if isinstance(x, torch.Tensor):
- dist = (
- x.pow(2).sum(1, keepdim=True)
- - 2 * torch.matmul(x, self.C)
- + self.Cnorm
- )
- return dist.argmin(dim=1).cpu().numpy()
- else:
- dist = (
- (x ** 2).sum(1, keepdims=True)
- - 2 * np.matmul(x, self.C_np)
- + self.Cnorm_np
- )
- return np.argmin(dist, axis=1)
-
-
-def get_feat_iterator(feat_dir, split, nshard, rank):
- feat_path = f"{feat_dir}/{split}_{rank}_{nshard}.npy"
- leng_path = f"{feat_dir}/{split}_{rank}_{nshard}.len"
- with open(leng_path, "r") as f:
- lengs = [int(line.rstrip()) for line in f]
- offsets = [0] + np.cumsum(lengs[:-1]).tolist()
-
- def iterate():
- feat = np.load(feat_path, mmap_mode="r")
- assert feat.shape[0] == (offsets[-1] + lengs[-1])
- for offset, leng in zip(offsets, lengs):
- yield feat[offset: offset + leng]
-
- return iterate, len(lengs)
-
-
-def dump_label(feat_dir, split, km_path, nshard, rank, lab_dir):
- apply_kmeans = ApplyKmeans(km_path)
- generator, num = get_feat_iterator(feat_dir, split, nshard, rank)
- iterator = generator()
-
- lab_path = f"{lab_dir}/{split}_{rank}_{nshard}.km"
- os.makedirs(lab_dir, exist_ok=True)
- with open(lab_path, "w") as f:
- for feat in tqdm.tqdm(iterator, total=num):
- # feat = torch.from_numpy(feat).cuda()
- lab = apply_kmeans(feat).tolist()
- f.write(" ".join(map(str, lab)) + "\n")
- logger.info("finished successfully")
-
-
-if __name__ == "__main__":
- import argparse
-
- parser = argparse.ArgumentParser()
- parser.add_argument("feat_dir")
- parser.add_argument("split")
- parser.add_argument("km_path")
- parser.add_argument("nshard", type=int)
- parser.add_argument("rank", type=int)
- parser.add_argument("lab_dir")
- args = parser.parse_args()
- logging.info(str(args))
-
- dump_label(**vars(args))
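
ApplyKmeans above assigns each frame to its nearest centroid using the expansion ||x - c||^2 = ||x||^2 - 2 x.c + ||c||^2, which avoids materializing pairwise differences. A NumPy-only sketch of the same assignment rule on made-up data (sizes and names are illustrative, not from the script):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))   # 5 frames, 16-dim features
C = rng.normal(size=(16, 4))   # 4 centroids stored as (dim, n_clusters), like C_np above

Cnorm = (C ** 2).sum(0, keepdims=True)                      # ||c||^2 per centroid
dist = (X ** 2).sum(1, keepdims=True) - 2 * X @ C + Cnorm   # squared distances, shape (5, 4)
labels = dist.argmin(axis=1)
print(labels)
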
diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/modules/lightconv_layer/lightconv_layer.py b/spaces/ICML2022/OFA/fairseq/fairseq/modules/lightconv_layer/lightconv_layer.py
deleted file mode 100644
index e7e597f4749c591b057d776aacec39b44d99c037..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/fairseq/modules/lightconv_layer/lightconv_layer.py
+++ /dev/null
@@ -1,137 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import lightconv_cuda
-import torch
-import torch.nn.functional as F
-from fairseq import utils
-from fairseq.incremental_decoding_utils import with_incremental_state
-from fairseq.modules.fairseq_dropout import FairseqDropout
-from torch import nn
-from torch.autograd import Function
-
-
-class lightconvFunction(Function):
- @staticmethod
- def forward(ctx, x, weights, padding_l):
- ctx.padding_l = padding_l
- outputs = lightconv_cuda.forward(x, weights, padding_l)
- variables = [x, weights]
- ctx.save_for_backward(*variables)
- return outputs[0]
-
- @staticmethod
- def backward(ctx, grad_output):
- outputs = lightconv_cuda.backward(
- grad_output.contiguous(), ctx.padding_l, *ctx.saved_tensors
- )
- grad_input, grad_weights = outputs
- return grad_input, grad_weights, None
-
-
-@with_incremental_state
-class LightconvLayer(nn.Module):
- def __init__(
- self,
- input_size,
- kernel_size=1,
- padding_l=None,
- weight_softmax=False,
- num_heads=1,
- weight_dropout=0.0,
- bias=False,
- ):
- super(LightconvLayer, self).__init__()
- self.input_size = input_size
- self.kernel_size = kernel_size
- self.padding_l = padding_l
- self.num_heads = num_heads
- self.weight_softmax = weight_softmax
- self.weight_dropout_module = FairseqDropout(
- weight_dropout, module_name=self.__class__.__name__
- )
-
- self.weight = nn.Parameter(torch.Tensor(num_heads, kernel_size))
- if bias:
- self.bias = nn.Parameter(torch.Tensor(input_size))
- else:
- self.bias = None
- self.reset_parameters()
-
- def upgrade_state_dict_named(self, state_dict, name):
- prefix = name + "." if name != "" else ""
- for k, v in state_dict.items():
- if k.endswith(prefix + "weight"):
- if v.dim() == 3 and v.size(1) == 1:
- state_dict[k] = v.squeeze(1)
-
- def reset_parameters(self):
- nn.init.xavier_uniform_(self.weight)
- if self.bias is not None:
- nn.init.constant_(self.bias, 0.0)
-
- def forward(self, x, incremental_state=None):
-
- # during inference time, incremental BMM is faster
- if incremental_state is not None:
- T, B, C = x.size()
- K, H = self.kernel_size, self.num_heads
- R = C // H
- input_buffer = self._get_input_buffer(incremental_state)
- if input_buffer is None:
- input_buffer = x.new()
- x_unfold = torch.cat([input_buffer, x.unsqueeze(3)], dim=3)
- if self.kernel_size > 1:
- self._set_input_buffer(
- incremental_state, x_unfold[:, :, :, -self.kernel_size + 1 :]
- )
- x_unfold = x_unfold.view(T * B * H, R, -1)
-
- weight = self.weight
- if self.weight_softmax:
- weight = F.softmax(weight.float(), dim=1).type_as(weight)
-
- weight = weight[:, -x_unfold.size(2) :]
-
- K = weight.size(1)
-
- weight = (
- weight.view(1, H, K)
- .expand(T * B, H, K)
- .contiguous()
- .view(T * B * H, K, 1)
- )
-
- weight = self.weight_dropout_module(weight)
- output = torch.bmm(x_unfold, weight) # T*B*H x R x 1
- output = output.view(T, B, C)
- return output
-
- # during training time, use CUDA kernel
- else:
- x = x.permute(1, 2, 0).contiguous()
- weight = self.weight
- if self.weight_softmax:
- weight = F.softmax(self.weight, -1)
- if self.weight_dropout_module.p:
- weight = self.weight_dropout_module(weight)
- return lightconvFunction.apply(x, weight, self.padding_l).permute(2, 0, 1)
-
- def reorder_incremental_state(self, incremental_state, new_order):
- input_buffer = self._get_input_buffer(incremental_state)
- if input_buffer is not None:
- input_buffer = input_buffer.index_select(1, new_order)
- self._set_input_buffer(incremental_state, input_buffer)
-
- def _get_input_buffer(self, incremental_state):
- return utils.get_incremental_state(self, incremental_state, "input_buffer")
-
- def _set_input_buffer(self, incremental_state, new_buffer):
- return utils.set_incremental_state(
- self, incremental_state, "input_buffer", new_buffer
- )
-
- def half(self):
- return self._apply(lambda t: t.half() if t.is_floating_point() else t)
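
During incremental decoding, LightconvLayer above keeps the last kernel_size - 1 input steps in a buffer and evaluates the lightweight convolution as a batched matrix multiply over per-head kernels. A toy sketch of that bmm formulation with random stand-in tensors (the T, B, C, H, K, R names follow the deleted forward; the sizes are invented):

import torch
import torch.nn.functional as F

T, B, C, H, K = 1, 2, 8, 4, 3   # one decoding step, 2 sequences, 8 channels, 4 heads
R = C // H                      # channels per head

x_unfold = torch.randn(T * B * H, R, K)              # buffered inputs per head
weight = F.softmax(torch.randn(H, K), dim=1)         # one normalized kernel per head
weight = weight.view(1, H, K).expand(T * B, H, K).contiguous().view(T * B * H, K, 1)

out = torch.bmm(x_unfold, weight)   # (T*B*H, R, 1), as in the comment above
print(out.view(T, B, C).shape)      # torch.Size([1, 2, 8])
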
diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/optim/lr_scheduler/inverse_square_root_schedule.py b/spaces/ICML2022/OFA/fairseq/fairseq/optim/lr_scheduler/inverse_square_root_schedule.py
deleted file mode 100644
index 0f87bb5d7ed5c7eb8011d4c651f2ecbf0ae700ac..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/fairseq/optim/lr_scheduler/inverse_square_root_schedule.py
+++ /dev/null
@@ -1,85 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from collections.abc import Collection
-from dataclasses import dataclass, field
-from typing import List
-
-from omegaconf import II
-
-from fairseq.dataclass import FairseqDataclass
-from fairseq.optim.lr_scheduler import FairseqLRScheduler, register_lr_scheduler
-
-
-@dataclass
-class InverseSquareRootLRScheduleConfig(FairseqDataclass):
- warmup_updates: int = field(
- default=4000,
- metadata={"help": "warmup the learning rate linearly for the first N updates"},
- )
- warmup_init_lr: float = field(
- default=-1,
- metadata={
- "help": "initial learning rate during warmup phase; default is cfg.lr"
- },
- )
- lr: List[float] = II("optimization.lr")
-
-
-@register_lr_scheduler("inverse_sqrt", dataclass=InverseSquareRootLRScheduleConfig)
-class InverseSquareRootSchedule(FairseqLRScheduler):
- """Decay the LR based on the inverse square root of the update number.
-
- We also support a warmup phase where we linearly increase the learning rate
- from some initial learning rate (``--warmup-init-lr``) until the configured
- learning rate (``--lr``). Thereafter we decay proportional to the number of
- updates, with a decay factor set to align with the configured learning rate.
-
- During warmup::
-
- lrs = torch.linspace(cfg.warmup_init_lr, cfg.lr, cfg.warmup_updates)
- lr = lrs[update_num]
-
- After warmup::
-
- decay_factor = cfg.lr * sqrt(cfg.warmup_updates)
- lr = decay_factor / sqrt(update_num)
- """
-
- def __init__(self, cfg: InverseSquareRootLRScheduleConfig, optimizer):
- super().__init__(cfg, optimizer)
- if isinstance(cfg.lr, Collection) and len(cfg.lr) > 1:
- raise ValueError(
- "Cannot use a fixed learning rate schedule with inverse_sqrt."
- " Consider --lr-scheduler=fixed instead."
- )
- warmup_end_lr = cfg.lr[0] if isinstance(cfg.lr, Collection) else cfg.lr
- if cfg.warmup_init_lr < 0:
- cfg.warmup_init_lr = 0 if cfg.warmup_updates > 0 else warmup_end_lr
-
- # linearly warmup for the first cfg.warmup_updates
- self.lr_step = (warmup_end_lr - cfg.warmup_init_lr) / cfg.warmup_updates
-
- # then, decay prop. to the inverse square root of the update number
- self.decay_factor = warmup_end_lr * cfg.warmup_updates ** 0.5
-
- # initial learning rate
- self.lr = cfg.warmup_init_lr
- self.optimizer.set_lr(self.lr)
-
- def step(self, epoch, val_loss=None):
- """Update the learning rate at the end of the given epoch."""
- super().step(epoch, val_loss)
- # we don't change the learning rate at epoch boundaries
- return self.optimizer.get_lr()
-
- def step_update(self, num_updates):
- """Update the learning rate after each update."""
- if num_updates < self.cfg.warmup_updates:
- self.lr = self.cfg.warmup_init_lr + num_updates * self.lr_step
- else:
- self.lr = self.decay_factor * num_updates ** -0.5
- self.optimizer.set_lr(self.lr)
- return self.lr
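
The docstring of the deleted scheduler gives both regimes in pseudo-code; the closed form is easy to sanity-check numerically. With warmup_updates=4000 (the dataclass default above) and an assumed peak lr of 5e-4 (cfg.lr has no default in this file, so that value is only illustrative), the rate ramps linearly to the peak and then decays as 1/sqrt(num_updates):

warmup_updates, warmup_init_lr, lr = 4000, 0.0, 5e-4
lr_step = (lr - warmup_init_lr) / warmup_updates
decay_factor = lr * warmup_updates ** 0.5

def lr_at(num_updates):
    if num_updates < warmup_updates:
        return warmup_init_lr + num_updates * lr_step
    return decay_factor * num_updates ** -0.5

for step in (1000, 4000, 16000, 64000):
    print(step, round(lr_at(step), 6))
# 1000 -> 0.000125, 4000 -> 0.0005, 16000 -> 0.00025, 64000 -> 0.000125
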
diff --git a/spaces/Iceclear/StableSR/StableSR/scripts/util_image.py b/spaces/Iceclear/StableSR/StableSR/scripts/util_image.py
deleted file mode 100644
index 812bbb859b5e93c49b23baa6d47aa8d6ae5c5a4a..0000000000000000000000000000000000000000
--- a/spaces/Iceclear/StableSR/StableSR/scripts/util_image.py
+++ /dev/null
@@ -1,793 +0,0 @@
-#!/usr/bin/env python
-# -*- coding:utf-8 -*-
-# Power by Zongsheng Yue 2021-11-24 16:54:19
-
-import sys
-import cv2
-import math
-import torch
-import random
-import numpy as np
-from scipy import fft
-from pathlib import Path
-from einops import rearrange
-from skimage import img_as_ubyte, img_as_float32
-
-# --------------------------Metrics----------------------------
-def ssim(img1, img2):
- C1 = (0.01 * 255)**2
- C2 = (0.03 * 255)**2
-
- img1 = img1.astype(np.float64)
- img2 = img2.astype(np.float64)
- kernel = cv2.getGaussianKernel(11, 1.5)
- window = np.outer(kernel, kernel.transpose())
-
- mu1 = cv2.filter2D(img1, -1, window)[5:-5, 5:-5] # valid
- mu2 = cv2.filter2D(img2, -1, window)[5:-5, 5:-5]
- mu1_sq = mu1**2
- mu2_sq = mu2**2
- mu1_mu2 = mu1 * mu2
- sigma1_sq = cv2.filter2D(img1**2, -1, window)[5:-5, 5:-5] - mu1_sq
- sigma2_sq = cv2.filter2D(img2**2, -1, window)[5:-5, 5:-5] - mu2_sq
- sigma12 = cv2.filter2D(img1 * img2, -1, window)[5:-5, 5:-5] - mu1_mu2
-
- ssim_map = ((2 * mu1_mu2 + C1) * (2 * sigma12 + C2)) / ((mu1_sq + mu2_sq + C1) *
- (sigma1_sq + sigma2_sq + C2))
- return ssim_map.mean()
-
-def calculate_ssim(im1, im2, border=0, ycbcr=False):
- '''
- Compute SSIM, matching MATLAB's implementation.
- im1, im2: h x w x c, [0, 255], uint8
- '''
- if not im1.shape == im2.shape:
- raise ValueError('Input images must have the same dimensions.')
-
- if ycbcr:
- im1 = rgb2ycbcr(im1, True)
- im2 = rgb2ycbcr(im2, True)
-
- h, w = im1.shape[:2]
- im1 = im1[border:h-border, border:w-border]
- im2 = im2[border:h-border, border:w-border]
-
- if im1.ndim == 2:
- return ssim(im1, im2)
- elif im1.ndim == 3:
- if im1.shape[2] == 3:
- ssims = []
- for i in range(3):
- ssims.append(ssim(im1[:,:,i], im2[:,:,i]))
- return np.array(ssims).mean()
- elif im1.shape[2] == 1:
- return ssim(np.squeeze(im1), np.squeeze(im2))
- else:
- raise ValueError('Wrong input image dimensions.')
-
-def calculate_psnr(im1, im2, border=0, ycbcr=False):
- '''
- PSNR metric.
- im1, im2: h x w x c, [0, 255], uint8
- '''
- if not im1.shape == im2.shape:
- raise ValueError('Input images must have the same dimensions.')
-
- if ycbcr:
- im1 = rgb2ycbcr(im1, True)
- im2 = rgb2ycbcr(im2, True)
-
- h, w = im1.shape[:2]
- im1 = im1[border:h-border, border:w-border]
- im2 = im2[border:h-border, border:w-border]
-
- im1 = im1.astype(np.float64)
- im2 = im2.astype(np.float64)
- mse = np.mean((im1 - im2)**2)
- if mse == 0:
- return float('inf')
- return 20 * math.log10(255.0 / math.sqrt(mse))
-
-def batch_PSNR(img, imclean, border=0, ycbcr=False):
- if ycbcr:
- img = rgb2ycbcrTorch(img, True)
- imclean = rgb2ycbcrTorch(imclean, True)
- Img = img.data.cpu().numpy()
- Iclean = imclean.data.cpu().numpy()
- Img = img_as_ubyte(Img)
- Iclean = img_as_ubyte(Iclean)
- PSNR = 0
- h, w = Iclean.shape[2:]
- for i in range(Img.shape[0]):
- PSNR += calculate_psnr(Iclean[i,:,].transpose((1,2,0)), Img[i,:,].transpose((1,2,0)), border)
- return PSNR
-
-def batch_SSIM(img, imclean, border=0, ycbcr=False):
- if ycbcr:
- img = rgb2ycbcrTorch(img, True)
- imclean = rgb2ycbcrTorch(imclean, True)
- Img = img.data.cpu().numpy()
- Iclean = imclean.data.cpu().numpy()
- Img = img_as_ubyte(Img)
- Iclean = img_as_ubyte(Iclean)
- SSIM = 0
- for i in range(Img.shape[0]):
- SSIM += calculate_ssim(Iclean[i,:,].transpose((1,2,0)), Img[i,:,].transpose((1,2,0)), border)
- return SSIM
-
-def normalize_np(im, mean=0.5, std=0.5, reverse=False):
- '''
- Input:
- im: h x w x c, numpy array
- Normalize: (im - mean) / std
- Reverse: im * std + mean
-
- '''
- if not isinstance(mean, (list, tuple)):
- mean = [mean, ] * im.shape[2]
- mean = np.array(mean).reshape([1, 1, im.shape[2]])
-
- if not isinstance(std, (list, tuple)):
- std = [std, ] * im.shape[2]
- std = np.array(std).reshape([1, 1, im.shape[2]])
-
- if not reverse:
- out = (im.astype(np.float32) - mean) / std
- else:
- out = im.astype(np.float32) * std + mean
- return out
-
-def normalize_th(im, mean=0.5, std=0.5, reverse=False):
- '''
- Input:
- im: b x c x h x w, torch tensor
- Normalize: (im - mean) / std
- Reverse: im * std + mean
-
- '''
- if not isinstance(mean, (list, tuple)):
- mean = [mean, ] * im.shape[1]
- mean = torch.tensor(mean, device=im.device).view([1, im.shape[1], 1, 1])
-
- if not isinstance(std, (list, tuple)):
- std = [std, ] * im.shape[1]
- std = torch.tensor(std, device=im.device).view([1, im.shape[1], 1, 1])
-
- if not reverse:
- out = (im - mean) / std
- else:
- out = im * std + mean
- return out
-
-# ------------------------Image format--------------------------
-def rgb2ycbcr(im, only_y=True):
- '''
- same as matlab rgb2ycbcr
- Input:
- im: uint8 [0,255] or float [0,1]
- only_y: only return Y channel
- '''
- # transform to float64 data type, range [0, 255]
- if im.dtype == np.uint8:
- im_temp = im.astype(np.float64)
- else:
- im_temp = (im * 255).astype(np.float64)
-
- # convert
- if only_y:
- rlt = np.dot(im_temp, np.array([65.481, 128.553, 24.966])/ 255.0) + 16.0
- else:
- rlt = np.matmul(im_temp, np.array([[65.481, -37.797, 112.0 ],
- [128.553, -74.203, -93.786],
- [24.966, 112.0, -18.214]])/255.0) + [16, 128, 128]
- if im.dtype == np.uint8:
- rlt = rlt.round()
- else:
- rlt /= 255.
- return rlt.astype(im.dtype)
-
-def rgb2ycbcrTorch(im, only_y=True):
- '''
- same as matlab rgb2ycbcr
- Input:
- im: float [0,1], N x 3 x H x W
- only_y: only return Y channel
- '''
- # transform to range [0,255.0]
- im_temp = im.permute([0,2,3,1]) * 255.0 # N x H x W x C --> N x H x W x C
- # convert
- if only_y:
- rlt = torch.matmul(im_temp, torch.tensor([65.481, 128.553, 24.966],
- device=im.device, dtype=im.dtype).view([3,1])/ 255.0) + 16.0
- else:
- rlt = torch.matmul(im_temp, torch.tensor([[65.481, -37.797, 112.0 ],
- [128.553, -74.203, -93.786],
- [24.966, 112.0, -18.214]],
- device=im.device, dtype=im.dtype)/255.0) + \
- torch.tensor([16, 128, 128]).view([-1, 1, 1, 3])
- rlt /= 255.0
- rlt.clamp_(0.0, 1.0)
- return rlt.permute([0, 3, 1, 2])
-
-def bgr2rgb(im): return cv2.cvtColor(im, cv2.COLOR_BGR2RGB)
-
-def rgb2bgr(im): return cv2.cvtColor(im, cv2.COLOR_RGB2BGR)
-
-def tensor2img(tensor, rgb2bgr=True, out_type=np.uint8, min_max=(0, 1)):
- """Convert torch Tensors into image numpy arrays.
-
- After clamping to [min, max], values will be normalized to [0, 1].
-
- Args:
- tensor (Tensor or list[Tensor]): Accept shapes:
- 1) 4D mini-batch Tensor of shape (B x 3/1 x H x W);
- 2) 3D Tensor of shape (3/1 x H x W);
- 3) 2D Tensor of shape (H x W).
- Tensor channel should be in RGB order.
- rgb2bgr (bool): Whether to change rgb to bgr.
- out_type (numpy type): output types. If ``np.uint8``, transform outputs
- to uint8 type with range [0, 255]; otherwise, float type with
- range [0, 1]. Default: ``np.uint8``.
- min_max (tuple[int]): min and max values for clamp.
-
- Returns:
- (Tensor or list): 3D ndarray of shape (H x W x C) OR 2D ndarray of
- shape (H x W). The channel order is BGR.
- """
- if not (torch.is_tensor(tensor) or (isinstance(tensor, list) and all(torch.is_tensor(t) for t in tensor))):
- raise TypeError(f'tensor or list of tensors expected, got {type(tensor)}')
-
- flag_tensor = torch.is_tensor(tensor)
- if flag_tensor:
- tensor = [tensor]
- result = []
- for _tensor in tensor:
- _tensor = _tensor.squeeze(0).float().detach().cpu().clamp_(*min_max)
- _tensor = (_tensor - min_max[0]) / (min_max[1] - min_max[0])
-
- n_dim = _tensor.dim()
- if n_dim == 4:
- img_np = make_grid(_tensor, nrow=int(math.sqrt(_tensor.size(0))), normalize=False).numpy()
- img_np = img_np.transpose(1, 2, 0)
- if rgb2bgr:
- img_np = cv2.cvtColor(img_np, cv2.COLOR_RGB2BGR)
- elif n_dim == 3:
- img_np = _tensor.numpy()
- img_np = img_np.transpose(1, 2, 0)
- if img_np.shape[2] == 1: # gray image
- img_np = np.squeeze(img_np, axis=2)
- else:
- if rgb2bgr:
- img_np = cv2.cvtColor(img_np, cv2.COLOR_RGB2BGR)
- elif n_dim == 2:
- img_np = _tensor.numpy()
- else:
- raise TypeError(f'Only support 4D, 3D or 2D tensor. But received with dimension: {n_dim}')
- if out_type == np.uint8:
- # Unlike MATLAB, numpy.uint8() WILL NOT round by default.
- img_np = (img_np * 255.0).round()
- img_np = img_np.astype(out_type)
- result.append(img_np)
- if len(result) == 1 and flag_tensor:
- result = result[0]
- return result
-
-def img2tensor(imgs, out_type=torch.float32):
- """Convert image numpy arrays into torch tensor.
- Args:
- imgs (Array or list[array]): Accept shapes:
- 1) 3D numpy array of shape (H x W x 3/1);
- 2) 2D numpy array of shape (H x W);
- 3) list of numpy arrays.
- Channel order should be RGB.
-
- Returns:
- (tensor or list): 4D torch tensor of shape (1 x C x H x W)
- """
-
- def _img2tensor(img):
- if img.ndim == 2:
- tensor = torch.from_numpy(img[None, None,]).type(out_type)
- elif img.ndim == 3:
- tensor = torch.from_numpy(rearrange(img, 'h w c -> c h w')).type(out_type).unsqueeze(0)
- else:
- raise TypeError(f'2D or 3D numpy array expected, got {img.ndim}D array')
- return tensor
-
- if not (isinstance(imgs, np.ndarray) or (isinstance(imgs, list) and all(isinstance(t, np.ndarray) for t in imgs))):
- raise TypeError(f'Numpy array or list of numpy array expected, got {type(imgs)}')
-
- flag_numpy = isinstance(imgs, np.ndarray)
- if flag_numpy:
- imgs = [imgs,]
- result = []
- for _img in imgs:
- result.append(_img2tensor(_img))
-
- if len(result) == 1 and flag_numpy:
- result = result[0]
- return result
-
-# ------------------------Image I/O-----------------------------
-def imread(path, chn='rgb', dtype='float32'):
- '''
- Read image.
- chn: 'rgb', 'bgr' or 'gray'
- out:
- im: h x w x c, numpy tensor
- '''
- im = cv2.imread(str(path), cv2.IMREAD_UNCHANGED) # BGR, uint8
- try:
- if chn.lower() == 'rgb':
- if im.ndim == 3:
- im = bgr2rgb(im)
- else:
- im = np.stack((im, im, im), axis=2)
- elif chn.lower() == 'gray':
- assert im.ndim == 2
- except:
- print(str(path))
-
- if dtype == 'float32':
- im = im.astype(np.float32) / 255.
- elif dtype == 'float64':
- im = im.astype(np.float64) / 255.
- elif dtype == 'uint8':
- pass
- else:
- sys.exit('Please input a valid dtype: float32, float64 or uint8!')
-
- return im
-
-def imwrite(im_in, path, chn='rgb', dtype_in='float32', qf=None):
- '''
- Save image.
- Input:
- im: h x w x c, numpy tensor
- path: the saving path
- chn: the channel order of the im,
- '''
- im = im_in.copy()
- if isinstance(path, str):
- path = Path(path)
- if dtype_in != 'uint8':
- im = img_as_ubyte(im)
-
- if chn.lower() == 'rgb' and im.ndim == 3:
- im = rgb2bgr(im)
-
- if qf is not None and path.suffix.lower() in ['.jpg', '.jpeg']:
- flag = cv2.imwrite(str(path), im, [int(cv2.IMWRITE_JPEG_QUALITY), int(qf)])
- else:
- flag = cv2.imwrite(str(path), im)
-
- return flag
-
-def jpeg_compress(im, qf, chn_in='rgb'):
- '''
- Input:
- im: h x w x 3 array
- qf: compress factor, (0, 100]
- chn_in: 'rgb' or 'bgr'
- Return:
- Compressed Image with channel order: chn_in
- '''
- # transform to BGR channel order and uint8 data type
- im_bgr = rgb2bgr(im) if chn_in.lower() == 'rgb' else im
- if im.dtype != np.dtype('uint8'): im_bgr = img_as_ubyte(im_bgr)
-
- # JPEG compress
- flag, encimg = cv2.imencode('.jpg', im_bgr, [int(cv2.IMWRITE_JPEG_QUALITY), qf])
- assert flag
- im_jpg_bgr = cv2.imdecode(encimg, 1) # uint8, BGR
-
- # transform back to original channel and the original data type
- im_out = bgr2rgb(im_jpg_bgr) if chn_in.lower() == 'rgb' else im_jpg_bgr
- if im.dtype != np.dtype('uint8'): im_out = img_as_float32(im_out).astype(im.dtype)
- return im_out
-
-# ------------------------Augmentation-----------------------------
-def data_aug_np(image, mode):
- '''
- Performs data augmentation of the input image
- Input:
- image: a cv2 (OpenCV) image
- mode: int. Choice of transformation to apply to the image
- 0 - no transformation
- 1 - flip up and down
- 2 - rotate counterclockwise 90 degrees
- 3 - rotate 90 degrees and flip up and down
- 4 - rotate 180 degrees
- 5 - rotate 180 degrees and flip
- 6 - rotate 270 degrees
- 7 - rotate 270 degrees and flip
- '''
- if mode == 0:
- # original
- out = image
- elif mode == 1:
- # flip up and down
- out = np.flipud(image)
- elif mode == 2:
- # rotate counterclockwise 90 degrees
- out = np.rot90(image)
- elif mode == 3:
- # rotate 90 degree and flip up and down
- out = np.rot90(image)
- out = np.flipud(out)
- elif mode == 4:
- # rotate 180 degree
- out = np.rot90(image, k=2)
- elif mode == 5:
- # rotate 180 degree and flip
- out = np.rot90(image, k=2)
- out = np.flipud(out)
- elif mode == 6:
- # rotate 270 degree
- out = np.rot90(image, k=3)
- elif mode == 7:
- # rotate 270 degree and flip
- out = np.rot90(image, k=3)
- out = np.flipud(out)
- else:
- raise Exception('Invalid choice of image transformation')
-
- return out.copy()
-
-def inverse_data_aug_np(image, mode):
- '''
- Performs inverse data augmentation of the input image
- '''
- if mode == 0:
- # original
- out = image
- elif mode == 1:
- out = np.flipud(image)
- elif mode == 2:
- out = np.rot90(image, axes=(1,0))
- elif mode == 3:
- out = np.flipud(image)
- out = np.rot90(out, axes=(1,0))
- elif mode == 4:
- out = np.rot90(image, k=2, axes=(1,0))
- elif mode == 5:
- out = np.flipud(image)
- out = np.rot90(out, k=2, axes=(1,0))
- elif mode == 6:
- out = np.rot90(image, k=3, axes=(1,0))
- elif mode == 7:
- # rotate 270 degree and flip
- out = np.flipud(image)
- out = np.rot90(out, k=3, axes=(1,0))
- else:
- raise Exception('Invalid choice of image transformation')
-
- return out
-
-class SpatialAug:
- def __init__(self):
- pass
-
- def __call__(self, im, flag=None):
- if flag is None:
- flag = random.randint(0, 7)
-
- out = data_aug_np(im, flag)
- return out
-
-# ----------------------Visualization----------------------------
-def imshow(x, title=None, cbar=False):
- import matplotlib.pyplot as plt
- plt.imshow(np.squeeze(x), interpolation='nearest', cmap='gray')
- if title:
- plt.title(title)
- if cbar:
- plt.colorbar()
- plt.show()
-
-# -----------------------Convolution-----------------------------
-def imgrad(im, pading_mode='mirror'):
- '''
- Calculate image gradient.
- Input:
- im: h x w x c numpy array
- '''
- from scipy.ndimage import correlate # lazy import
- wx = np.array([[0, 0, 0],
- [-1, 1, 0],
- [0, 0, 0]], dtype=np.float32)
- wy = np.array([[0, -1, 0],
- [0, 1, 0],
- [0, 0, 0]], dtype=np.float32)
- if im.ndim == 3:
- gradx = np.stack(
- [correlate(im[:,:,c], wx, mode=pading_mode) for c in range(im.shape[2])],
- axis=2
- )
- grady = np.stack(
- [correlate(im[:,:,c], wy, mode=pading_mode) for c in range(im.shape[2])],
- axis=2
- )
- grad = np.concatenate((gradx, grady), axis=2)
- else:
- gradx = correlate(im, wx, mode=pading_mode)
- grady = correlate(im, wy, mode=pading_mode)
- grad = np.stack((gradx, grady), axis=2)
-
- return {'gradx': gradx, 'grady': grady, 'grad':grad}
-
-def imgrad_fft(im):
- '''
- Calculate image gradient.
- Input:
- im: h x w x c numpy array
- '''
- wx = np.rot90(np.array([[0, 0, 0],
- [-1, 1, 0],
- [0, 0, 0]], dtype=np.float32), k=2)
- gradx = convfft(im, wx)
- wy = np.rot90(np.array([[0, -1, 0],
- [0, 1, 0],
- [0, 0, 0]], dtype=np.float32), k=2)
- grady = convfft(im, wy)
- grad = np.concatenate((gradx, grady), axis=2)
-
- return {'gradx': gradx, 'grady': grady, 'grad':grad}
-
-def convfft(im, weight):
- '''
- Convolution with FFT
- Input:
- im: h1 x w1 x c numpy array
- weight: h2 x w2 numpy array
- Output:
- out: h1 x w1 x c numpy array
- '''
- axes = (0,1)
- otf = psf2otf(weight, im.shape[:2])
- if im.ndim == 3:
- otf = np.tile(otf[:, :, None], (1,1,im.shape[2]))
- out = fft.ifft2(fft.fft2(im, axes=axes) * otf, axes=axes).real
- return out
-
-def psf2otf(psf, shape):
- """
- MATLAB psf2otf function.
- Borrowed from https://github.com/aboucaud/pypher/blob/master/pypher/pypher.py.
- Input:
- psf : h x w numpy array
- shape : list or tuple, output shape of the OTF array
- Output:
- otf : OTF array with the desirable shape
- """
- if np.all(psf == 0):
- return np.zeros_like(psf)
-
- inshape = psf.shape
- # Pad the PSF to outsize
- psf = zero_pad(psf, shape, position='corner')
-
- # Circularly shift OTF so that the 'center' of the PSF is [0,0] element of the array
- for axis, axis_size in enumerate(inshape):
- psf = np.roll(psf, -int(axis_size / 2), axis=axis)
-
- # Compute the OTF
- otf = fft.fft2(psf)
-
- # Estimate the rough number of operations involved in the FFT
- # and discard the PSF imaginary part if within roundoff error
- # roundoff error = machine epsilon = sys.float_info.epsilon
- # or np.finfo().eps
- n_ops = np.sum(psf.size * np.log2(psf.shape))
- otf = np.real_if_close(otf, tol=n_ops)
-
- return otf
-
-# ----------------------Patch Cropping----------------------------
-def random_crop(im, pch_size):
- '''
- Randomly crop a patch from the given image.
- '''
- h, w = im.shape[:2]
- if h == pch_size and w == pch_size:
- im_pch = im
- else:
- assert h >= pch_size or w >= pch_size
- ind_h = random.randint(0, h-pch_size)
- ind_w = random.randint(0, w-pch_size)
- im_pch = im[ind_h:ind_h+pch_size, ind_w:ind_w+pch_size,]
-
- return im_pch
-
-class RandomCrop:
- def __init__(self, pch_size):
- self.pch_size = pch_size
-
- def __call__(self, im):
- return random_crop(im, self.pch_size)
-
-class ImageSpliterNp:
- def __init__(self, im, pch_size, stride, sf=1):
- '''
- Input:
- im: h x w x c, numpy array, [0, 1], low-resolution image in SR
- pch_size, stride: patch setting
- sf: scale factor in image super-resolution
- '''
- assert stride <= pch_size
- self.stride = stride
- self.pch_size = pch_size
- self.sf = sf
-
- if im.ndim == 2:
- im = im[:, :, None]
-
- height, width, chn = im.shape
- self.height_starts_list = self.extract_starts(height)
- self.width_starts_list = self.extract_starts(width)
- self.length = self.__len__()
- self.num_pchs = 0
-
- self.im_ori = im
- self.im_res = np.zeros([height*sf, width*sf, chn], dtype=im.dtype)
- self.pixel_count = np.zeros([height*sf, width*sf, chn], dtype=im.dtype)
-
- def extract_starts(self, length):
- starts = list(range(0, length, self.stride))
- if starts[-1] + self.pch_size > length:
- starts[-1] = length - self.pch_size
- return starts
-
- def __len__(self):
- return len(self.height_starts_list) * len(self.width_starts_list)
-
- def __iter__(self):
- return self
-
- def __next__(self):
- if self.num_pchs < self.length:
- w_start_idx = self.num_pchs // len(self.height_starts_list)
- w_start = self.width_starts_list[w_start_idx] * self.sf
- w_end = w_start + self.pch_size * self.sf
-
- h_start_idx = self.num_pchs % len(self.height_starts_list)
- h_start = self.height_starts_list[h_start_idx] * self.sf
- h_end = h_start + self.pch_size * self.sf
-
- pch = self.im_ori[h_start:h_end, w_start:w_end,]
- self.w_start, self.w_end = w_start, w_end
- self.h_start, self.h_end = h_start, h_end
-
- self.num_pchs += 1
- else:
- raise StopIteration(0)
-
- return pch, (h_start, h_end, w_start, w_end)
-
- def update(self, pch_res, index_infos):
- '''
- Input:
- pch_res: pch_size x pch_size x 3, [0,1]
- index_infos: (h_start, h_end, w_start, w_end)
- '''
- if index_infos is None:
- w_start, w_end = self.w_start, self.w_end
- h_start, h_end = self.h_start, self.h_end
- else:
- h_start, h_end, w_start, w_end = index_infos
-
- self.im_res[h_start:h_end, w_start:w_end] += pch_res
- self.pixel_count[h_start:h_end, w_start:w_end] += 1
-
- def gather(self):
- assert np.all(self.pixel_count != 0)
- return self.im_res / self.pixel_count
-
-class ImageSpliterTh:
- def __init__(self, im, pch_size, stride, sf=1):
- '''
- Input:
- im: n x c x h x w, torch tensor, float, low-resolution image in SR
- pch_size, stride: patch setting
- sf: scale factor in image super-resolution
- '''
- assert stride <= pch_size
- self.stride = stride
- self.pch_size = pch_size
- self.sf = sf
-
- bs, chn, height, width= im.shape
- self.height_starts_list = self.extract_starts(height)
- self.width_starts_list = self.extract_starts(width)
- self.length = self.__len__()
- self.num_pchs = 0
-
- self.im_ori = im
- self.im_res = torch.zeros([bs, chn, height*sf, width*sf], dtype=im.dtype, device=im.device)
- self.pixel_count = torch.zeros([bs, chn, height*sf, width*sf], dtype=im.dtype, device=im.device)
-
- def extract_starts(self, length):
- if length <= self.pch_size:
- starts = [0,]
- else:
- starts = list(range(0, length, self.stride))
- for i in range(len(starts)):
- if starts[i] + self.pch_size > length:
- starts[i] = length - self.pch_size
- starts = sorted(set(starts), key=starts.index)
- return starts
-
- def __len__(self):
- return len(self.height_starts_list) * len(self.width_starts_list)
-
- def __iter__(self):
- return self
-
- def __next__(self):
- if self.num_pchs < self.length:
- w_start_idx = self.num_pchs // len(self.height_starts_list)
- w_start = self.width_starts_list[w_start_idx]
- w_end = w_start + self.pch_size
-
- h_start_idx = self.num_pchs % len(self.height_starts_list)
- h_start = self.height_starts_list[h_start_idx]
- h_end = h_start + self.pch_size
-
- pch = self.im_ori[:, :, h_start:h_end, w_start:w_end,]
-
- h_start *= self.sf
- h_end *= self.sf
- w_start *= self.sf
- w_end *= self.sf
-
- self.w_start, self.w_end = w_start, w_end
- self.h_start, self.h_end = h_start, h_end
-
- self.num_pchs += 1
- else:
- raise StopIteration()
-
- return pch, (h_start, h_end, w_start, w_end)
-
- def update(self, pch_res, index_infos):
- '''
- Input:
- pch_res: n x c x pch_size x pch_size, float
- index_infos: (h_start, h_end, w_start, w_end)
- '''
- if index_infos is None:
- w_start, w_end = self.w_start, self.w_end
- h_start, h_end = self.h_start, self.h_end
- else:
- h_start, h_end, w_start, w_end = index_infos
-
- self.im_res[:, :, h_start:h_end, w_start:w_end] += pch_res
- self.pixel_count[:, :, h_start:h_end, w_start:w_end] += 1
-
- def gather(self):
- assert torch.all(self.pixel_count != 0)
- return self.im_res.div(self.pixel_count)
-
-# ----------------------Clamping----------------------------
-class Clamper:
- def __init__(self, min_max=(-1, 1)):
- self.min_bound, self.max_bound = min_max[0], min_max[1]
-
- def __call__(self, im):
- if isinstance(im, np.ndarray):
- return np.clip(im, a_min=self.min_bound, a_max=self.max_bound)
- elif isinstance(im, torch.Tensor):
- return torch.clamp(im, min=self.min_bound, max=self.max_bound)
- else:
- raise TypeError(f'ndarray or Tensor expected, got {type(im)}')
-
-if __name__ == '__main__':
- im = np.random.randn(64, 64, 3).astype(np.float32)
-
- grad1 = imgrad(im)['grad']
- grad2 = imgrad_fft(im)['grad']
-
- error = np.abs(grad1 - grad2).max()
- mean_error = np.abs(grad1 - grad2).mean()
- print('The largest error is {:.2e}'.format(error))
- print('The mean error is {:.2e}'.format(mean_error))
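
calculate_psnr above implements the standard 8-bit formula PSNR = 20 * log10(255 / sqrt(MSE)). A quick worked check on synthetic arrays (values chosen only for illustration):

import numpy as np

im1 = np.full((4, 4), 100, dtype=np.uint8)
im2 = np.full((4, 4), 110, dtype=np.uint8)   # constant offset of 10 everywhere

mse = np.mean((im1.astype(np.float64) - im2.astype(np.float64)) ** 2)  # 100.0
psnr = 20 * np.log10(255.0 / np.sqrt(mse))
print(round(psnr, 2))  # 28.13 dB
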
diff --git a/spaces/Ikaros521/VITS-fast-fine-tuning_nymph/rearrange_speaker.py b/spaces/Ikaros521/VITS-fast-fine-tuning_nymph/rearrange_speaker.py
deleted file mode 100644
index de0f7545904cc088377c552cc6d9b058c5e9d342..0000000000000000000000000000000000000000
--- a/spaces/Ikaros521/VITS-fast-fine-tuning_nymph/rearrange_speaker.py
+++ /dev/null
@@ -1,37 +0,0 @@
-import torch
-import argparse
-import json
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument("--model_dir", type=str, default="./OUTPUT_MODEL/G_latest.pth")
- parser.add_argument("--config_dir", type=str, default="./configs/modified_finetune_speaker.json")
- args = parser.parse_args()
-
- model_sd = torch.load(args.model_dir, map_location='cpu')
- with open(args.config_dir, 'r', encoding='utf-8') as f:
- hps = json.load(f)
-
- valid_speakers = list(hps['speakers'].keys())
- if hps['data']['n_speakers'] > len(valid_speakers):
- new_emb_g = torch.zeros([len(valid_speakers), 256])
- old_emb_g = model_sd['model']['emb_g.weight']
- for i, speaker in enumerate(valid_speakers):
- new_emb_g[i, :] = old_emb_g[hps['speakers'][speaker], :]
- hps['speakers'][speaker] = i
- hps['data']['n_speakers'] = len(valid_speakers)
- model_sd['model']['emb_g.weight'] = new_emb_g
- with open("./finetune_speaker.json", 'w', encoding='utf-8') as f:
- json.dump(hps, f, indent=2)
- torch.save(model_sd, "./G_latest.pth")
- else:
- with open("./finetune_speaker.json", 'w', encoding='utf-8') as f:
- json.dump(hps, f, indent=2)
- torch.save(model_sd, "./G_latest.pth")
- # save another config file copy in MoeGoe format
- hps['speakers'] = valid_speakers
- with open("./moegoe_config.json", 'w', encoding='utf-8') as f:
- json.dump(hps, f, indent=2)
-
-
-
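
rearrange_speaker.py above compacts the generator's speaker-embedding table so that only the speakers listed in the config keep a row, and renumbers their IDs to match. A toy illustration of that row-gathering step (the names, sizes and values below are invented for the example):

import torch

old_emb_g = torch.arange(5 * 3, dtype=torch.float32).view(5, 3)  # 5 old speakers, dim 3
speakers = {"alice": 4, "bob": 1}                                # only two remain valid

new_emb_g = torch.zeros(len(speakers), 3)
for i, name in enumerate(speakers):
    new_emb_g[i] = old_emb_g[speakers[name]]
    speakers[name] = i              # reassign compact IDs, as the script does

print(speakers)   # {'alice': 0, 'bob': 1}
print(new_emb_g)  # rows 4 and 1 of the old table, in that order
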
diff --git a/spaces/Illumotion/Koboldcpp/examples/common.h b/spaces/Illumotion/Koboldcpp/examples/common.h
deleted file mode 100644
index 375bc0a3db416b9bd4801d13c3924051feb7aa2d..0000000000000000000000000000000000000000
--- a/spaces/Illumotion/Koboldcpp/examples/common.h
+++ /dev/null
@@ -1,114 +0,0 @@
-// Various helper functions and utilities
-
-#pragma once
-
-#include "llama.h"
-
-#include <string>
-#include <vector>
-#include <random>
-#include <thread>
-#include <unordered_map>
-#include <tuple>
-
-//
-// CLI argument parsing
-//
-int32_t get_num_physical_cores();
-
-struct gpt_params {
- uint32_t seed = -1; // RNG seed
- int32_t n_threads = get_num_physical_cores();
- int32_t n_predict = -1; // new tokens to predict
- int32_t n_ctx = 512; // context size
- int32_t n_batch = 512; // batch size for prompt processing (must be >=32 to use BLAS)
- int32_t n_gqa = 1; // grouped-query attention factor (TODO: move to hparams)
- int32_t n_keep = 0; // number of tokens to keep from initial prompt
- int32_t n_chunks = -1; // max number of chunks to process (-1 = unlimited)
- int32_t n_gpu_layers = 0; // number of layers to store in VRAM
- int32_t main_gpu = 0; // the GPU that is used for scratch and small tensors
- float tensor_split[LLAMA_MAX_DEVICES] = {0}; // how split tensors should be distributed across GPUs
- int32_t n_probs = 0; // if greater than 0, output the probabilities of top n_probs tokens.
- float rms_norm_eps = LLAMA_DEFAULT_RMS_EPS; // rms norm epsilon
- float rope_freq_base = 10000.0f; // RoPE base frequency
- float rope_freq_scale = 1.0f; // RoPE frequency scaling factor
-
- // sampling parameters
- std::unordered_map<llama_token, float> logit_bias; // logit bias for specific tokens
- int32_t top_k = 40; // <= 0 to use vocab size
- float top_p = 0.95f; // 1.0 = disabled
- float tfs_z = 1.00f; // 1.0 = disabled
- float typical_p = 1.00f; // 1.0 = disabled
- float temp = 0.80f; // 1.0 = disabled
- float repeat_penalty = 1.10f; // 1.0 = disabled
- int32_t repeat_last_n = 64; // last n tokens to penalize (0 = disable penalty, -1 = context size)
- float frequency_penalty = 0.00f; // 0.0 = disabled
- float presence_penalty = 0.00f; // 0.0 = disabled
- int32_t mirostat = 0; // 0 = disabled, 1 = mirostat, 2 = mirostat 2.0
- float mirostat_tau = 5.00f; // target entropy
- float mirostat_eta = 0.10f; // learning rate
-
- // Classifier-Free Guidance
- // https://arxiv.org/abs/2306.17806
- std::string cfg_negative_prompt; // string to help guidance
- float cfg_scale = 1.f; // How strong is guidance
-
- std::string model = "models/7B/ggml-model.bin"; // model path
- std::string model_alias = "unknown"; // model alias
- std::string prompt = "";
- std::string path_prompt_cache = ""; // path to file for saving/loading prompt eval state
- std::string input_prefix = ""; // string to prefix user inputs with
- std::string input_suffix = ""; // string to suffix user inputs with
- std::string grammar = ""; // optional BNF-like grammar to constrain sampling
- std::vector<std::string> antiprompt; // string upon seeing which more user input is prompted
-
- std::string lora_adapter = ""; // lora adapter path
- std::string lora_base = ""; // base model path for the lora adapter
-
- bool hellaswag = false; // compute HellaSwag score over random tasks from datafile supplied in prompt
- size_t hellaswag_tasks = 400; // number of tasks to use when computing the HellaSwag score
-
- bool low_vram = false; // if true, reduce VRAM usage at the cost of performance
- bool mul_mat_q = false; // if true, use experimental mul_mat_q kernels
- bool memory_f16 = true; // use f16 instead of f32 for memory kv
- bool random_prompt = false; // do not randomize prompt if none provided
- bool use_color = false; // use color to distinguish generations and inputs
- bool interactive = false; // interactive mode
- bool prompt_cache_all = false; // save user input and generations to prompt cache
- bool prompt_cache_ro = false; // open the prompt cache read-only and do not update it
-
- bool embedding = false; // get only sentence embedding
- bool interactive_first = false; // wait for user input immediately
- bool multiline_input = false; // reverse the usage of `\`
- bool simple_io = false; // improves compatibility with subprocesses and limited consoles
-
- bool input_prefix_bos = false; // prefix BOS to user inputs, preceding input_prefix
- bool instruct = false; // instruction mode (used for Alpaca models)
- bool penalize_nl = true; // consider newlines as a repeatable token
- bool perplexity = false; // compute perplexity over the prompt
- bool use_mmap = true; // use mmap for faster loads
- bool use_mlock = false; // use mlock to keep model in memory
- bool mem_test = false; // compute maximum memory usage
- bool numa = false; // attempt optimizations that help on some NUMA systems
- bool export_cgraph = false; // export the computation graph
- bool verbose_prompt = false; // print prompt tokens before generation
-};
-
-bool gpt_params_parse(int argc, char ** argv, gpt_params & params);
-
-void gpt_print_usage(int argc, char ** argv, const gpt_params & params);
-
-std::string gpt_random_prompt(std::mt19937 & rng);
-
-//
-// Vocab utils
-//
-
-std::vector<llama_token> llama_tokenize(struct llama_context * ctx, const std::string & text, bool add_bos);
-
-//
-// Model utils
-//
-
-std::tuple<struct llama_model *, struct llama_context *> llama_init_from_gpt_params(const gpt_params & params);
-struct llama_context_params llama_context_params_from_gpt_params(const gpt_params & params);
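
The sampling fields in gpt_params (temp, top_k, top_p) control the usual temperature scaling, top-k truncation and nucleus (top-p) filtering; the deleted header only declares them. For intuition, here is a generic NumPy sketch of that chain. This is not the llama.cpp sampler, and the function name and defaults are illustrative only:

import numpy as np

def sample(logits, temp=0.8, top_k=40, top_p=0.95, rng=np.random.default_rng(0)):
    logits = np.asarray(logits, dtype=np.float64) / max(temp, 1e-8)  # temperature scaling
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    order = np.argsort(probs)[::-1][:top_k]                          # top-k truncation
    kept = order[np.cumsum(probs[order]) - probs[order] < top_p]     # nucleus (top-p) cut
    kept_probs = probs[kept] / probs[kept].sum()
    return int(rng.choice(kept, p=kept_probs))                       # token id from the filtered set

print(sample([2.0, 1.0, 0.5, -1.0], top_k=3, top_p=0.9))
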
diff --git a/spaces/Ilzhabimantara/rvc-Blue-archives/vc_infer_pipeline.py b/spaces/Ilzhabimantara/rvc-Blue-archives/vc_infer_pipeline.py
deleted file mode 100644
index 82c15f59a8072e1b317fa1d750ccc1b814a6989d..0000000000000000000000000000000000000000
--- a/spaces/Ilzhabimantara/rvc-Blue-archives/vc_infer_pipeline.py
+++ /dev/null
@@ -1,443 +0,0 @@
-import numpy as np, parselmouth, torch, pdb, sys, os
-from time import time as ttime
-import torch.nn.functional as F
-import scipy.signal as signal
-import pyworld, os, traceback, faiss, librosa, torchcrepe
-from scipy import signal
-from functools import lru_cache
-
-now_dir = os.getcwd()
-sys.path.append(now_dir)
-
-bh, ah = signal.butter(N=5, Wn=48, btype="high", fs=16000)
-
-input_audio_path2wav = {}
-
-
-@lru_cache
-def cache_harvest_f0(input_audio_path, fs, f0max, f0min, frame_period):
- audio = input_audio_path2wav[input_audio_path]
- f0, t = pyworld.harvest(
- audio,
- fs=fs,
- f0_ceil=f0max,
- f0_floor=f0min,
- frame_period=frame_period,
- )
- f0 = pyworld.stonemask(audio, f0, t, fs)
- return f0
-
-
-def change_rms(data1, sr1, data2, sr2, rate): # data1 is the input audio, data2 is the output audio, rate is the weight given to data2
- # print(data1.max(),data2.max())
- rms1 = librosa.feature.rms(
- y=data1, frame_length=sr1 // 2 * 2, hop_length=sr1 // 2
- ) # one RMS point every half second
- rms2 = librosa.feature.rms(y=data2, frame_length=sr2 // 2 * 2, hop_length=sr2 // 2)
- rms1 = torch.from_numpy(rms1)
- rms1 = F.interpolate(
- rms1.unsqueeze(0), size=data2.shape[0], mode="linear"
- ).squeeze()
- rms2 = torch.from_numpy(rms2)
- rms2 = F.interpolate(
- rms2.unsqueeze(0), size=data2.shape[0], mode="linear"
- ).squeeze()
- rms2 = torch.max(rms2, torch.zeros_like(rms2) + 1e-6)
- data2 *= (
- torch.pow(rms1, torch.tensor(1 - rate))
- * torch.pow(rms2, torch.tensor(rate - 1))
- ).numpy()
- return data2
-
-
-class VC(object):
- def __init__(self, tgt_sr, config):
- self.x_pad, self.x_query, self.x_center, self.x_max, self.is_half = (
- config.x_pad,
- config.x_query,
- config.x_center,
- config.x_max,
- config.is_half,
- )
- self.sr = 16000 # HuBERT input sampling rate
- self.window = 160 # samples per frame
- self.t_pad = self.sr * self.x_pad # padding time before and after each segment
- self.t_pad_tgt = tgt_sr * self.x_pad
- self.t_pad2 = self.t_pad * 2
- self.t_query = self.sr * self.x_query # query window before and after each cut point
- self.t_center = self.sr * self.x_center # spacing between candidate cut points
- self.t_max = self.sr * self.x_max # duration threshold below which no cutting is needed
- self.device = config.device
-
- def get_f0(
- self,
- input_audio_path,
- x,
- p_len,
- f0_up_key,
- f0_method,
- filter_radius,
- inp_f0=None,
- ):
- global input_audio_path2wav
- time_step = self.window / self.sr * 1000
- f0_min = 50
- f0_max = 1100
- f0_mel_min = 1127 * np.log(1 + f0_min / 700)
- f0_mel_max = 1127 * np.log(1 + f0_max / 700)
- if f0_method == "pm":
- f0 = (
- parselmouth.Sound(x, self.sr)
- .to_pitch_ac(
- time_step=time_step / 1000,
- voicing_threshold=0.6,
- pitch_floor=f0_min,
- pitch_ceiling=f0_max,
- )
- .selected_array["frequency"]
- )
- pad_size = (p_len - len(f0) + 1) // 2
- if pad_size > 0 or p_len - len(f0) - pad_size > 0:
- f0 = np.pad(
- f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant"
- )
- elif f0_method == "harvest":
- input_audio_path2wav[input_audio_path] = x.astype(np.double)
- f0 = cache_harvest_f0(input_audio_path, self.sr, f0_max, f0_min, 10)
- if filter_radius > 2:
- f0 = signal.medfilt(f0, 3)
- elif f0_method == "crepe":
- model = "full"
- # Pick a batch size that doesn't cause memory errors on your gpu
- batch_size = 512
- # Compute pitch using first gpu
- audio = torch.tensor(np.copy(x))[None].float()
- f0, pd = torchcrepe.predict(
- audio,
- self.sr,
- self.window,
- f0_min,
- f0_max,
- model,
- batch_size=batch_size,
- device=self.device,
- return_periodicity=True,
- )
- pd = torchcrepe.filter.median(pd, 3)
- f0 = torchcrepe.filter.mean(f0, 3)
- f0[pd < 0.1] = 0
- f0 = f0[0].cpu().numpy()
- elif f0_method == "rmvpe":
- if hasattr(self, "model_rmvpe") == False:
- from rmvpe import RMVPE
-
- print("loading rmvpe model")
- self.model_rmvpe = RMVPE(
- "rmvpe.pt", is_half=self.is_half, device=self.device
- )
- f0 = self.model_rmvpe.infer_from_audio(x, thred=0.03)
- f0 *= pow(2, f0_up_key / 12)
- # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()]))
- tf0 = self.sr // self.window # f0 points per second
- if inp_f0 is not None:
- delta_t = np.round(
- (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1
- ).astype("int16")
- replace_f0 = np.interp(
- list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1]
- )
- shape = f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)].shape[0]
- f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)] = replace_f0[
- :shape
- ]
- # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()]))
- f0bak = f0.copy()
- f0_mel = 1127 * np.log(1 + f0 / 700)
- f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (
- f0_mel_max - f0_mel_min
- ) + 1
- f0_mel[f0_mel <= 1] = 1
- f0_mel[f0_mel > 255] = 255
- f0_coarse = np.rint(f0_mel).astype(np.int)
- return f0_coarse, f0bak # 1-0
-
- def vc(
- self,
- model,
- net_g,
- sid,
- audio0,
- pitch,
- pitchf,
- times,
- index,
- big_npy,
- index_rate,
- version,
- protect,
- ): # ,file_index,file_big_npy
- feats = torch.from_numpy(audio0)
- if self.is_half:
- feats = feats.half()
- else:
- feats = feats.float()
- if feats.dim() == 2: # double channels
- feats = feats.mean(-1)
- assert feats.dim() == 1, feats.dim()
- feats = feats.view(1, -1)
- padding_mask = torch.BoolTensor(feats.shape).to(self.device).fill_(False)
-
- inputs = {
- "source": feats.to(self.device),
- "padding_mask": padding_mask,
- "output_layer": 9 if version == "v1" else 12,
- }
- t0 = ttime()
- with torch.no_grad():
- logits = model.extract_features(**inputs)
- feats = model.final_proj(logits[0]) if version == "v1" else logits[0]
- if protect < 0.5 and pitch != None and pitchf != None:
- feats0 = feats.clone()
- if (
- isinstance(index, type(None)) == False
- and isinstance(big_npy, type(None)) == False
- and index_rate != 0
- ):
- npy = feats[0].cpu().numpy()
- if self.is_half:
- npy = npy.astype("float32")
-
- # _, I = index.search(npy, 1)
- # npy = big_npy[I.squeeze()]
-
- score, ix = index.search(npy, k=8)
- weight = np.square(1 / score)
- weight /= weight.sum(axis=1, keepdims=True)
- npy = np.sum(big_npy[ix] * np.expand_dims(weight, axis=2), axis=1)
-
- if self.is_half:
- npy = npy.astype("float16")
- feats = (
- torch.from_numpy(npy).unsqueeze(0).to(self.device) * index_rate
- + (1 - index_rate) * feats
- )
-
- feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1)
- if protect < 0.5 and pitch != None and pitchf != None:
- feats0 = F.interpolate(feats0.permute(0, 2, 1), scale_factor=2).permute(
- 0, 2, 1
- )
- t1 = ttime()
- p_len = audio0.shape[0] // self.window
- if feats.shape[1] < p_len:
- p_len = feats.shape[1]
- if pitch != None and pitchf != None:
- pitch = pitch[:, :p_len]
- pitchf = pitchf[:, :p_len]
-
- if protect < 0.5 and pitch != None and pitchf != None:
- pitchff = pitchf.clone()
- pitchff[pitchf > 0] = 1
- pitchff[pitchf < 1] = protect
- pitchff = pitchff.unsqueeze(-1)
- feats = feats * pitchff + feats0 * (1 - pitchff)
- feats = feats.to(feats0.dtype)
- p_len = torch.tensor([p_len], device=self.device).long()
- with torch.no_grad():
- if pitch != None and pitchf != None:
- audio1 = (
- (net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0])
- .data.cpu()
- .float()
- .numpy()
- )
- else:
- audio1 = (
- (net_g.infer(feats, p_len, sid)[0][0, 0]).data.cpu().float().numpy()
- )
- del feats, p_len, padding_mask
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
- t2 = ttime()
- times[0] += t1 - t0
- times[2] += t2 - t1
- return audio1
-
- def pipeline(
- self,
- model,
- net_g,
- sid,
- audio,
- input_audio_path,
- times,
- f0_up_key,
- f0_method,
- file_index,
- # file_big_npy,
- index_rate,
- if_f0,
- filter_radius,
- tgt_sr,
- resample_sr,
- rms_mix_rate,
- version,
- protect,
- f0_file=None,
- ):
- if (
- file_index != ""
- # and file_big_npy != ""
- # and os.path.exists(file_big_npy) == True
- and os.path.exists(file_index) == True
- and index_rate != 0
- ):
- try:
- index = faiss.read_index(file_index)
- # big_npy = np.load(file_big_npy)
- big_npy = index.reconstruct_n(0, index.ntotal)
- except:
- traceback.print_exc()
- index = big_npy = None
- else:
- index = big_npy = None
- audio = signal.filtfilt(bh, ah, audio)
- audio_pad = np.pad(audio, (self.window // 2, self.window // 2), mode="reflect")
- opt_ts = []
- if audio_pad.shape[0] > self.t_max:
- audio_sum = np.zeros_like(audio)
- for i in range(self.window):
- audio_sum += audio_pad[i : i - self.window]
- for t in range(self.t_center, audio.shape[0], self.t_center):
- opt_ts.append(
- t
- - self.t_query
- + np.where(
- np.abs(audio_sum[t - self.t_query : t + self.t_query])
- == np.abs(audio_sum[t - self.t_query : t + self.t_query]).min()
- )[0][0]
- )
- s = 0
- audio_opt = []
- t = None
- t1 = ttime()
- audio_pad = np.pad(audio, (self.t_pad, self.t_pad), mode="reflect")
- p_len = audio_pad.shape[0] // self.window
- inp_f0 = None
- if hasattr(f0_file, "name") == True:
- try:
- with open(f0_file.name, "r") as f:
- lines = f.read().strip("\n").split("\n")
- inp_f0 = []
- for line in lines:
- inp_f0.append([float(i) for i in line.split(",")])
- inp_f0 = np.array(inp_f0, dtype="float32")
- except:
- traceback.print_exc()
- sid = torch.tensor(sid, device=self.device).unsqueeze(0).long()
- pitch, pitchf = None, None
- if if_f0 == 1:
- pitch, pitchf = self.get_f0(
- input_audio_path,
- audio_pad,
- p_len,
- f0_up_key,
- f0_method,
- filter_radius,
- inp_f0,
- )
- pitch = pitch[:p_len]
- pitchf = pitchf[:p_len]
- if self.device == "mps":
- pitchf = pitchf.astype(np.float32)
- pitch = torch.tensor(pitch, device=self.device).unsqueeze(0).long()
- pitchf = torch.tensor(pitchf, device=self.device).unsqueeze(0).float()
- t2 = ttime()
- times[1] += t2 - t1
- for t in opt_ts:
- t = t // self.window * self.window
- if if_f0 == 1:
- audio_opt.append(
- self.vc(
- model,
- net_g,
- sid,
- audio_pad[s : t + self.t_pad2 + self.window],
- pitch[:, s // self.window : (t + self.t_pad2) // self.window],
- pitchf[:, s // self.window : (t + self.t_pad2) // self.window],
- times,
- index,
- big_npy,
- index_rate,
- version,
- protect,
- )[self.t_pad_tgt : -self.t_pad_tgt]
- )
- else:
- audio_opt.append(
- self.vc(
- model,
- net_g,
- sid,
- audio_pad[s : t + self.t_pad2 + self.window],
- None,
- None,
- times,
- index,
- big_npy,
- index_rate,
- version,
- protect,
- )[self.t_pad_tgt : -self.t_pad_tgt]
- )
- s = t
- if if_f0 == 1:
- audio_opt.append(
- self.vc(
- model,
- net_g,
- sid,
- audio_pad[t:],
- pitch[:, t // self.window :] if t is not None else pitch,
- pitchf[:, t // self.window :] if t is not None else pitchf,
- times,
- index,
- big_npy,
- index_rate,
- version,
- protect,
- )[self.t_pad_tgt : -self.t_pad_tgt]
- )
- else:
- audio_opt.append(
- self.vc(
- model,
- net_g,
- sid,
- audio_pad[t:],
- None,
- None,
- times,
- index,
- big_npy,
- index_rate,
- version,
- protect,
- )[self.t_pad_tgt : -self.t_pad_tgt]
- )
- audio_opt = np.concatenate(audio_opt)
- if rms_mix_rate != 1:
- audio_opt = change_rms(audio, 16000, audio_opt, tgt_sr, rms_mix_rate)
- if resample_sr >= 16000 and tgt_sr != resample_sr:
- audio_opt = librosa.resample(
- audio_opt, orig_sr=tgt_sr, target_sr=resample_sr
- )
- audio_max = np.abs(audio_opt).max() / 0.99
- max_int16 = 32768
- if audio_max > 1:
- max_int16 /= audio_max
- audio_opt = (audio_opt * max_int16).astype(np.int16)
- del pitch, pitchf, sid
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
- return audio_opt
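
The `vc` method above retrieves, for every HuBERT feature frame, its 8 nearest neighbours from a FAISS index built over training-set features, weights them by the inverse square of their distances, and mixes the weighted average back into the original features in proportion to `index_rate`. The sketch below isolates that retrieval-and-blend step so it can be tested on its own; the 256-dimensional toy features, the use of `faiss.IndexFlatL2`, and the small epsilon guard against zero distances are assumptions for illustration, not details taken from this space.

    import numpy as np
    import faiss  # same library the pipeline reads its .index file with

    def blend_with_index(feats, index, big_npy, index_rate, k=8):
        """Replace each frame by an inverse-square-distance weighted average of its
        k nearest index entries, then mix it with the original frame by index_rate."""
        score, ix = index.search(feats, k)                 # (T, k) distances and row ids
        weight = np.square(1 / np.maximum(score, 1e-9))    # epsilon guard added here
        weight /= weight.sum(axis=1, keepdims=True)
        retrieved = np.sum(big_npy[ix] * weight[..., None], axis=1)
        return index_rate * retrieved + (1 - index_rate) * feats

    rng = np.random.default_rng(0)
    big_npy = rng.standard_normal((1000, 256)).astype("float32")  # stand-in for index.reconstruct_n(0, ntotal)
    index = faiss.IndexFlatL2(256)
    index.add(big_npy)
    feats = rng.standard_normal((50, 256)).astype("float32")
    print(blend_with_index(feats, index, big_npy, index_rate=0.75).shape)  # (50, 256)
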
diff --git a/spaces/ImPavloh/voiceit/README.md b/spaces/ImPavloh/voiceit/README.md
deleted file mode 100644
index 6473ffd725379335c342b8a7460c7ce77dcdd7d8..0000000000000000000000000000000000000000
--- a/spaces/ImPavloh/voiceit/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: VoiceIt
-emoji: 🗣️
-colorFrom: red
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.29.0
-app_file: voiceit.py
-pinned: true
-license: gpl
----
\ No newline at end of file
diff --git a/spaces/InpaintAI/Inpaint-Anything/README.md b/spaces/InpaintAI/Inpaint-Anything/README.md
deleted file mode 100644
index 648276b900cc1bee14b4d87946b42ccbebc10a6b..0000000000000000000000000000000000000000
--- a/spaces/InpaintAI/Inpaint-Anything/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Inpaint Anything
-emoji: ⚡
-colorFrom: red
-colorTo: red
-sdk: gradio
-sdk_version: 3.27.0
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/ItsJayQz/Marvel_WhatIf_Diffusion/README.md b/spaces/ItsJayQz/Marvel_WhatIf_Diffusion/README.md
deleted file mode 100644
index fd1589c760c77bcdee68b3c19a825e57dca46a01..0000000000000000000000000000000000000000
--- a/spaces/ItsJayQz/Marvel_WhatIf_Diffusion/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Marvel WhatIf Diffusion
-emoji: 😻
-colorFrom: blue
-colorTo: blue
-sdk: gradio
-sdk_version: 3.14.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Jackflack09/diffuse-custom/diffusers/models/unet_1d.py b/spaces/Jackflack09/diffuse-custom/diffusers/models/unet_1d.py
deleted file mode 100644
index 29d1d707f55a026458defd2bc0ec089ecc10653a..0000000000000000000000000000000000000000
--- a/spaces/Jackflack09/diffuse-custom/diffusers/models/unet_1d.py
+++ /dev/null
@@ -1,245 +0,0 @@
-# Copyright 2022 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from dataclasses import dataclass
-from typing import Optional, Tuple, Union
-
-import torch
-import torch.nn as nn
-
-from ..configuration_utils import ConfigMixin, register_to_config
-from ..modeling_utils import ModelMixin
-from ..utils import BaseOutput
-from .embeddings import GaussianFourierProjection, TimestepEmbedding, Timesteps
-from .unet_1d_blocks import get_down_block, get_mid_block, get_out_block, get_up_block
-
-
-@dataclass
-class UNet1DOutput(BaseOutput):
- """
- Args:
- sample (`torch.FloatTensor` of shape `(batch_size, num_channels, sample_size)`):
- Hidden states output. Output of last layer of model.
- """
-
- sample: torch.FloatTensor
-
-
-class UNet1DModel(ModelMixin, ConfigMixin):
- r"""
-    UNet1DModel is a 1D UNet model that takes in a noisy sample and a timestep and returns a sample-shaped output.
-
-    This model inherits from [`ModelMixin`]. Check the superclass documentation for the generic methods the library
-    implements for all models (such as downloading or saving).
-
- Parameters:
- sample_size (`int`, *optional*): Default length of sample. Should be adaptable at runtime.
- in_channels (`int`, *optional*, defaults to 2): Number of channels in the input sample.
- out_channels (`int`, *optional*, defaults to 2): Number of channels in the output.
- time_embedding_type (`str`, *optional*, defaults to `"fourier"`): Type of time embedding to use.
- freq_shift (`float`, *optional*, defaults to 0.0): Frequency shift for fourier time embedding.
-        flip_sin_to_cos (`bool`, *optional*, defaults to `False`):
-            Whether to flip sin to cos for fourier time embedding.
-        down_block_types (`Tuple[str]`, *optional*, defaults to `("DownBlock1D", "DownBlock1DNoSkip", "AttnDownBlock1D")`):
-            Tuple of downsample block types.
-        up_block_types (`Tuple[str]`, *optional*, defaults to `("UpBlock1D", "UpBlock1DNoSkip", "AttnUpBlock1D")`):
-            Tuple of upsample block types.
-        block_out_channels (`Tuple[int]`, *optional*, defaults to `(32, 32, 64)`):
-            Tuple of block output channels.
- mid_block_type (`str`, *optional*, defaults to "UNetMidBlock1D"): block type for middle of UNet.
- out_block_type (`str`, *optional*, defaults to `None`): optional output processing of UNet.
-        act_fn (`str`, *optional*, defaults to `None`): optional activation function in UNet blocks.
-        norm_num_groups (`int`, *optional*, defaults to 8): group norm member count in UNet blocks.
-        layers_per_block (`int`, *optional*, defaults to 1): added number of layers in a UNet block.
-        downsample_each_block (`bool`, *optional*, defaults to `False`):
-            experimental feature for using a UNet without upsampling.
- """
-
- @register_to_config
- def __init__(
- self,
- sample_size: int = 65536,
- sample_rate: Optional[int] = None,
- in_channels: int = 2,
- out_channels: int = 2,
- extra_in_channels: int = 0,
- time_embedding_type: str = "fourier",
- flip_sin_to_cos: bool = True,
- use_timestep_embedding: bool = False,
- freq_shift: float = 0.0,
- down_block_types: Tuple[str] = ("DownBlock1DNoSkip", "DownBlock1D", "AttnDownBlock1D"),
- up_block_types: Tuple[str] = ("AttnUpBlock1D", "UpBlock1D", "UpBlock1DNoSkip"),
- mid_block_type: Tuple[str] = "UNetMidBlock1D",
- out_block_type: str = None,
- block_out_channels: Tuple[int] = (32, 32, 64),
- act_fn: str = None,
- norm_num_groups: int = 8,
- layers_per_block: int = 1,
- downsample_each_block: bool = False,
- ):
- super().__init__()
- self.sample_size = sample_size
-
- # time
- if time_embedding_type == "fourier":
- self.time_proj = GaussianFourierProjection(
- embedding_size=8, set_W_to_weight=False, log=False, flip_sin_to_cos=flip_sin_to_cos
- )
- timestep_input_dim = 2 * block_out_channels[0]
- elif time_embedding_type == "positional":
- self.time_proj = Timesteps(
- block_out_channels[0], flip_sin_to_cos=flip_sin_to_cos, downscale_freq_shift=freq_shift
- )
- timestep_input_dim = block_out_channels[0]
-
- if use_timestep_embedding:
- time_embed_dim = block_out_channels[0] * 4
- self.time_mlp = TimestepEmbedding(
- in_channels=timestep_input_dim,
- time_embed_dim=time_embed_dim,
- act_fn=act_fn,
- out_dim=block_out_channels[0],
- )
-
- self.down_blocks = nn.ModuleList([])
- self.mid_block = None
- self.up_blocks = nn.ModuleList([])
- self.out_block = None
-
- # down
- output_channel = in_channels
- for i, down_block_type in enumerate(down_block_types):
- input_channel = output_channel
- output_channel = block_out_channels[i]
-
- if i == 0:
- input_channel += extra_in_channels
-
- is_final_block = i == len(block_out_channels) - 1
-
- down_block = get_down_block(
- down_block_type,
- num_layers=layers_per_block,
- in_channels=input_channel,
- out_channels=output_channel,
- temb_channels=block_out_channels[0],
- add_downsample=not is_final_block or downsample_each_block,
- )
- self.down_blocks.append(down_block)
-
- # mid
- self.mid_block = get_mid_block(
- mid_block_type,
- in_channels=block_out_channels[-1],
- mid_channels=block_out_channels[-1],
- out_channels=block_out_channels[-1],
- embed_dim=block_out_channels[0],
- num_layers=layers_per_block,
- add_downsample=downsample_each_block,
- )
-
- # up
- reversed_block_out_channels = list(reversed(block_out_channels))
- output_channel = reversed_block_out_channels[0]
- if out_block_type is None:
- final_upsample_channels = out_channels
- else:
- final_upsample_channels = block_out_channels[0]
-
- for i, up_block_type in enumerate(up_block_types):
- prev_output_channel = output_channel
- output_channel = (
- reversed_block_out_channels[i + 1] if i < len(up_block_types) - 1 else final_upsample_channels
- )
-
- is_final_block = i == len(block_out_channels) - 1
-
- up_block = get_up_block(
- up_block_type,
- num_layers=layers_per_block,
- in_channels=prev_output_channel,
- out_channels=output_channel,
- temb_channels=block_out_channels[0],
- add_upsample=not is_final_block,
- )
- self.up_blocks.append(up_block)
- prev_output_channel = output_channel
-
- # out
- num_groups_out = norm_num_groups if norm_num_groups is not None else min(block_out_channels[0] // 4, 32)
- self.out_block = get_out_block(
- out_block_type=out_block_type,
- num_groups_out=num_groups_out,
- embed_dim=block_out_channels[0],
- out_channels=out_channels,
- act_fn=act_fn,
- fc_dim=block_out_channels[-1] // 4,
- )
-
- def forward(
- self,
- sample: torch.FloatTensor,
- timestep: Union[torch.Tensor, float, int],
- return_dict: bool = True,
- ) -> Union[UNet1DOutput, Tuple]:
- r"""
- Args:
- sample (`torch.FloatTensor`): `(batch_size, sample_size, num_channels)` noisy inputs tensor
-            timestep (`torch.FloatTensor` or `float` or `int`): (batch) timesteps
- return_dict (`bool`, *optional*, defaults to `True`):
- Whether or not to return a [`~models.unet_1d.UNet1DOutput`] instead of a plain tuple.
-
- Returns:
- [`~models.unet_1d.UNet1DOutput`] or `tuple`: [`~models.unet_1d.UNet1DOutput`] if `return_dict` is True,
- otherwise a `tuple`. When returning a tuple, the first element is the sample tensor.
- """
-
- # 1. time
- timesteps = timestep
- if not torch.is_tensor(timesteps):
- timesteps = torch.tensor([timesteps], dtype=torch.long, device=sample.device)
- elif torch.is_tensor(timesteps) and len(timesteps.shape) == 0:
- timesteps = timesteps[None].to(sample.device)
-
- timestep_embed = self.time_proj(timesteps)
- if self.config.use_timestep_embedding:
- timestep_embed = self.time_mlp(timestep_embed)
- else:
- timestep_embed = timestep_embed[..., None]
- timestep_embed = timestep_embed.repeat([1, 1, sample.shape[2]]).to(sample.dtype)
-
- # 2. down
- down_block_res_samples = ()
- for downsample_block in self.down_blocks:
- sample, res_samples = downsample_block(hidden_states=sample, temb=timestep_embed)
- down_block_res_samples += res_samples
-
- # 3. mid
- if self.mid_block:
- sample = self.mid_block(sample, timestep_embed)
-
- # 4. up
- for i, upsample_block in enumerate(self.up_blocks):
- res_samples = down_block_res_samples[-1:]
- down_block_res_samples = down_block_res_samples[:-1]
- sample = upsample_block(sample, res_hidden_states_tuple=res_samples, temb=timestep_embed)
-
- # 5. post-process
- if self.out_block:
- sample = self.out_block(sample, timestep_embed)
-
- if not return_dict:
- return (sample,)
-
- return UNet1DOutput(sample=sample)
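
Before the down/mid/up blocks run, `forward` normalises the timestep into a 1-D tensor, projects it (Fourier or positional), and broadcasts the embedding along the sample length so every block receives the same conditioning. The standalone sketch below mirrors just that `# 1. time` step; `TinyFourierProjection` is a simplified stand-in for the library's `GaussianFourierProjection`, not its actual implementation.

    import math
    import torch

    class TinyFourierProjection(torch.nn.Module):
        """Fixed random frequency bank whose sin/cos features embed a timestep
        (a simplified stand-in for diffusers' GaussianFourierProjection)."""
        def __init__(self, embedding_size=8, scale=16.0):
            super().__init__()
            self.register_buffer("weight", torch.randn(embedding_size) * scale)

        def forward(self, timesteps):
            x = timesteps[:, None].float() * self.weight[None, :] * 2 * math.pi
            return torch.cat([torch.sin(x), torch.cos(x)], dim=-1)  # (batch, 2*embedding_size)

    def embed_timestep(sample, timestep, proj):
        """Mirror of the '# 1. time' block above: coerce the timestep to a tensor,
        project it, and broadcast it along the sample length."""
        if not torch.is_tensor(timestep):
            timestep = torch.tensor([timestep], dtype=torch.long, device=sample.device)
        elif timestep.ndim == 0:
            timestep = timestep[None].to(sample.device)
        emb = proj(timestep)[..., None]                      # (batch, dim, 1)
        return emb.repeat(1, 1, sample.shape[2]).to(sample.dtype)

    sample = torch.randn(1, 2, 256)                          # (batch, channels, length)
    print(embed_timestep(sample, 10, TinyFourierProjection()).shape)  # torch.Size([1, 16, 256])
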
diff --git a/spaces/Jamkonams/AutoGPT/autogpt/configurator.py b/spaces/Jamkonams/AutoGPT/autogpt/configurator.py
deleted file mode 100644
index 1dc3be124f638b8859eb459bcb2d46696f62e2b7..0000000000000000000000000000000000000000
--- a/spaces/Jamkonams/AutoGPT/autogpt/configurator.py
+++ /dev/null
@@ -1,134 +0,0 @@
-"""Configurator module."""
-import click
-from colorama import Back, Fore, Style
-
-from autogpt import utils
-from autogpt.config import Config
-from autogpt.logs import logger
-from autogpt.memory import get_supported_memory_backends
-
-CFG = Config()
-
-
-def create_config(
- continuous: bool,
- continuous_limit: int,
- ai_settings_file: str,
- skip_reprompt: bool,
- speak: bool,
- debug: bool,
- gpt3only: bool,
- gpt4only: bool,
- memory_type: str,
- browser_name: str,
- allow_downloads: bool,
- skip_news: bool,
-) -> None:
- """Updates the config object with the given arguments.
-
- Args:
- continuous (bool): Whether to run in continuous mode
- continuous_limit (int): The number of times to run in continuous mode
- ai_settings_file (str): The path to the ai_settings.yaml file
- skip_reprompt (bool): Whether to skip the re-prompting messages at the beginning of the script
- speak (bool): Whether to enable speak mode
- debug (bool): Whether to enable debug mode
- gpt3only (bool): Whether to enable GPT3.5 only mode
- gpt4only (bool): Whether to enable GPT4 only mode
- memory_type (str): The type of memory backend to use
- browser_name (str): The name of the browser to use when using selenium to scrape the web
- allow_downloads (bool): Whether to allow Auto-GPT to download files natively
-        skip_news (bool): Whether to suppress the output of the latest news on startup
- """
- CFG.set_debug_mode(False)
- CFG.set_continuous_mode(False)
- CFG.set_speak_mode(False)
-
- if debug:
- logger.typewriter_log("Debug Mode: ", Fore.GREEN, "ENABLED")
- CFG.set_debug_mode(True)
-
- if continuous:
- logger.typewriter_log("Continuous Mode: ", Fore.RED, "ENABLED")
- logger.typewriter_log(
- "WARNING: ",
- Fore.RED,
- "Continuous mode is not recommended. It is potentially dangerous and may"
- " cause your AI to run forever or carry out actions you would not usually"
- " authorise. Use at your own risk.",
- )
- CFG.set_continuous_mode(True)
-
- if continuous_limit:
- logger.typewriter_log(
- "Continuous Limit: ", Fore.GREEN, f"{continuous_limit}"
- )
- CFG.set_continuous_limit(continuous_limit)
-
- # Check if continuous limit is used without continuous mode
- if continuous_limit and not continuous:
- raise click.UsageError("--continuous-limit can only be used with --continuous")
-
- if speak:
- logger.typewriter_log("Speak Mode: ", Fore.GREEN, "ENABLED")
- CFG.set_speak_mode(True)
-
- if gpt3only:
- logger.typewriter_log("GPT3.5 Only Mode: ", Fore.GREEN, "ENABLED")
- CFG.set_smart_llm_model(CFG.fast_llm_model)
-
- if gpt4only:
- logger.typewriter_log("GPT4 Only Mode: ", Fore.GREEN, "ENABLED")
- CFG.set_fast_llm_model(CFG.smart_llm_model)
-
- if memory_type:
- supported_memory = get_supported_memory_backends()
- chosen = memory_type
- if chosen not in supported_memory:
- logger.typewriter_log(
- "ONLY THE FOLLOWING MEMORY BACKENDS ARE SUPPORTED: ",
- Fore.RED,
- f"{supported_memory}",
- )
- logger.typewriter_log("Defaulting to: ", Fore.YELLOW, CFG.memory_backend)
- else:
- CFG.memory_backend = chosen
-
- if skip_reprompt:
- logger.typewriter_log("Skip Re-prompt: ", Fore.GREEN, "ENABLED")
- CFG.skip_reprompt = True
-
- if ai_settings_file:
- file = ai_settings_file
-
- # Validate file
- (validated, message) = utils.validate_yaml_file(file)
- if not validated:
- logger.typewriter_log("FAILED FILE VALIDATION", Fore.RED, message)
- logger.double_check()
- exit(1)
-
- logger.typewriter_log("Using AI Settings File:", Fore.GREEN, file)
- CFG.ai_settings_file = file
- CFG.skip_reprompt = True
-
- if allow_downloads:
- logger.typewriter_log("Native Downloading:", Fore.GREEN, "ENABLED")
- logger.typewriter_log(
- "WARNING: ",
- Fore.YELLOW,
- f"{Back.LIGHTYELLOW_EX}Auto-GPT will now be able to download and save files to your machine.{Back.RESET} "
- + "It is recommended that you monitor any files it downloads carefully.",
- )
- logger.typewriter_log(
- "WARNING: ",
- Fore.YELLOW,
- f"{Back.RED + Style.BRIGHT}ALWAYS REMEMBER TO NEVER OPEN FILES YOU AREN'T SURE OF!{Style.RESET_ALL}",
- )
- CFG.allow_downloads = True
-
- if skip_news:
- CFG.skip_news = True
-
- if browser_name:
- CFG.selenium_web_browser = browser_name
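
`create_config` is the glue between the CLI flags and AutoGPT's global `Config` singleton: it resets the mode flags, applies each option in turn, and rejects flag combinations that only make sense together (for example `--continuous-limit` without `--continuous`). The toy sketch below reproduces that pattern without importing AutoGPT; the `ToyConfig` names are invented for illustration, and the cross-flag check is deliberately moved ahead of the assignment, unlike the original above.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ToyConfig:                  # invented stand-in for autogpt.config.Config
        continuous_mode: bool = False
        continuous_limit: Optional[int] = None
        speak_mode: bool = False

    CFG = ToyConfig()

    def create_toy_config(continuous: bool, continuous_limit: Optional[int], speak: bool) -> None:
        """Reset defaults, validate flag combinations, then apply the flags to CFG."""
        CFG.continuous_mode = False
        CFG.speak_mode = False
        if continuous_limit is not None and not continuous:
            raise ValueError("--continuous-limit can only be used with --continuous")
        if continuous:
            CFG.continuous_mode = True
            CFG.continuous_limit = continuous_limit
        if speak:
            CFG.speak_mode = True

    create_toy_config(continuous=True, continuous_limit=5, speak=False)
    print(CFG)   # ToyConfig(continuous_mode=True, continuous_limit=5, speak_mode=False)
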
diff --git a/spaces/Jaskirat-04/Food-Personalisation/app.py b/spaces/Jaskirat-04/Food-Personalisation/app.py
deleted file mode 100644
index 4dafce25e981f4274b919c611d11b91c379de7a7..0000000000000000000000000000000000000000
--- a/spaces/Jaskirat-04/Food-Personalisation/app.py
+++ /dev/null
@@ -1,66 +0,0 @@
-import streamlit as st
-import cv2
-import json
-from deepface import DeepFace
-from tensorflow.keras.models import model_from_json
-import numpy as np
-
-# Emotion mappings
-mappings = {
- 'angry': ["Hot coffee - Help them cool down", "Apple slices - Healthy snack", "Bottle of water - Hydration", "Drive-thru only - Get out faster", "Veggie wrap - Lighter option"],
- 'sad': ["Hot fudge sundae - Sweet treat", "French fries - Comfort food", "Chicken nuggets - Familiar favorite", "Chili - Hearty and soothing", "Baked apple pie - Nostalgic dessert"],
- 'happy': ["McFlurry - Indulgent and celebratory", "Bacon smokehouse burger - Premium and new", "McCafe frappé - Sweet cold drink", "6-piece chicken nuggets - Shareable", "Shake and fries - Tasty combo"],
- 'surprise': ["Fun drink specials", "dessert", "New menu items", "Free upsize"],
- 'tired': ["Iced coffee - Caffeine boost", "Egg McMuffin - High protein", "Fruit yogurt parfait - Light and energizing", "Oatmeal - Whole grains for lasting energy", "Hash browns - Carbs and salt for fast energy"],
- 'neutral': ["Hamburger - Classic choice", "Cheeseburger - Familiar favorite", "Medium fries - Typical go-to", "6-piece nuggets - Satisfying option", "Any drink - Quench their thirst"],
- 'fearful': ["Bottled water - Hydration and calm", "Apple slices - Healthy light snack", "Yogurt parfait - Gentle on stomach", "Oatmeal - Warm and comforting", "Salad - Fresh and simple"]
-}
-
-@st.cache_resource
-def load_model():
- # Load model JSON
- with open('model.json', 'r') as f:
- model = model_from_json(f.read())
-
- return model
-
-
-# Set background
-import streamlit as st
-from PIL import Image
-
-image = Image.open('shutterstock_download.jpg')
-
-st.image(image)
-with st.spinner('Loading Food Recommender...'):
- model = load_model()
-# Load the background image
-# page_bg="""
-#
-# """
-
-# # Set the background image
-# st.markdown(page_bg,unsafe_allow_html=True
-# )
-st.title("Your Food Buddy")
-
-image = st.file_uploader("Upload an image")
-
-
-
-if image:
-    img = cv2.imdecode(np.frombuffer(image.read(), np.uint8), cv2.IMREAD_UNCHANGED)  # frombuffer replaces the deprecated np.fromstring
- # model = load_model()
- st.image(image, caption='Uploaded Image')
- result = DeepFace.analyze(img, enforce_detection=True)
- dominant = max(result[0]['emotion'], key=result[0]['emotion'].get)
- st.success(f"The person is {dominant}")
-
- st.header("Recommended Menu Items:")
-    for item in mappings.get(dominant, mappings['neutral']):  # fall back to neutral for labels missing from the mapping (e.g. 'disgust')
- st.write("- " + item)
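
The app's core logic is a dictionary lookup: `DeepFace.analyze` returns a per-face dictionary of emotion scores, the highest-scoring label is taken as dominant, and that label indexes the menu mapping. The snippet below walks through that selection step with a hand-written result in the same shape the code above consumes, so it runs without a model or an uploaded image; the scores and the trimmed-down mapping are made up for illustration.

    # Hand-written stand-in for the DeepFace.analyze output consumed above:
    # a list with one dict per detected face, each holding an 'emotion' score dict.
    result = [{"emotion": {"angry": 0.02, "happy": 0.91, "sad": 0.03, "neutral": 0.04}}]

    mappings = {
        "happy": ["McFlurry", "Shake and fries"],
        "sad": ["Hot fudge sundae", "French fries"],
        "neutral": ["Hamburger", "Medium fries"],
        "angry": ["Bottle of water", "Apple slices"],
    }

    dominant = max(result[0]["emotion"], key=result[0]["emotion"].get)
    print(f"The person is {dominant}")
    for item in mappings.get(dominant, mappings["neutral"]):
        print("-", item)
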
diff --git a/spaces/Jasonyoyo/CodeFormer/CodeFormer/basicsr/__init__.py b/spaces/Jasonyoyo/CodeFormer/CodeFormer/basicsr/__init__.py
deleted file mode 100644
index c7ffcccd7fc0f33b59d99d73d0436d60e561b0fc..0000000000000000000000000000000000000000
--- a/spaces/Jasonyoyo/CodeFormer/CodeFormer/basicsr/__init__.py
+++ /dev/null
@@ -1,11 +0,0 @@
-# https://github.com/xinntao/BasicSR
-# flake8: noqa
-from .archs import *
-from .data import *
-from .losses import *
-from .metrics import *
-from .models import *
-from .ops import *
-from .train import *
-from .utils import *
-from .version import __gitsha__, __version__
diff --git a/spaces/JeffJing/ZookChatBot/steamship/data/plugin/index_plugin_instance.py b/spaces/JeffJing/ZookChatBot/steamship/data/plugin/index_plugin_instance.py
deleted file mode 100644
index 8501c7e7c3fcd0c2f798ec603426576aa1885deb..0000000000000000000000000000000000000000
--- a/spaces/JeffJing/ZookChatBot/steamship/data/plugin/index_plugin_instance.py
+++ /dev/null
@@ -1,202 +0,0 @@
-from typing import Any, Dict, List, Optional, Union, cast
-
-from pydantic import Field
-
-from steamship.base.client import Client
-from steamship.base.error import SteamshipError
-from steamship.base.model import CamelModel
-from steamship.base.tasks import Task
-from steamship.data.embeddings import EmbeddedItem, EmbeddingIndex, QueryResult, QueryResults
-from steamship.data.plugin.plugin_instance import PluginInstance
-from steamship.data.tags.tag import Tag
-
-
-class EmbedderInvocation(CamelModel):
- """The parameters capable of creating/fetching an Embedder (Tagger) Plugin Instance."""
-
- plugin_handle: str
- instance_handle: Optional[str] = None
- config: Optional[Dict[str, Any]] = None
- version: Optional[str] = None
- fetch_if_exists: bool = True
-
-
-class SearchResult(CamelModel):
- """A single scored search result -- which is always a tag.
-
- This class is intended to eventually replace the QueryResult object currently used with the Embedding layer."""
-
- tag: Optional[Tag] = None
- score: Optional[float] = None
-
- @staticmethod
- def from_query_result(query_result: QueryResult) -> "SearchResult":
- hit = query_result.value
- value = hit.metadata or {}
-
-        # To make this change Python-only, some fields are stashed in `hit.metadata`.
- # This has the temporary consequence of these keys not being safe. This will be resolved when we spread
- # this refactor to the engine.
- block_id = None
- if "_block_id" in value:
- block_id = value.get("_block_id")
- del value["_block_id"]
-
- file_id = None
- if "_file_id" in value:
- file_id = value.get("_file_id")
- del value["_file_id"]
-
- tag_id = None
- if "_tag_id" in value:
- tag_id = value.get("_tag_id")
- del value["_tag_id"]
-
- tag = Tag(
- id=hit.id,
- kind=hit.external_type,
- name=hit.external_id,
- block_id=block_id,
- tag_id=tag_id,
- file_id=file_id,
- text=hit.value,
- value=value,
- )
- return SearchResult(tag=tag, score=query_result.score)
-
-
-class SearchResults(CamelModel):
- """Results of a search operation -- which is always a list of ranked tag.
-
- This class is intended to eventually replace the QueryResults object currently used with the Embedding layer.
- TODO: add in paging support."""
-
- items: List[SearchResult] = None
-
- @staticmethod
- def from_query_results(query_results: QueryResults) -> "SearchResults":
- items = [SearchResult.from_query_result(qr) for qr in query_results.items or []]
- return SearchResults(items=items)
-
-
-class EmbeddingIndexPluginInstance(PluginInstance):
- """A persistent, read-optimized index over embeddings.
-
-    This is currently implemented as an object which behaves like a PluginInstance, even though, from an
-    implementation perspective on the back-end, it is not one.
- """
-
- client: Client = Field(None, exclude=True)
- embedder: PluginInstance = Field(None, exclude=True)
- index: EmbeddingIndex = Field(None, exclude=True)
-
- def delete(self):
- """Delete the EmbeddingIndexPluginInstnace.
-
- For now, we will have this correspond to deleting the `index` but not the `embedder`. This is likely
- a temporary design.
- """
- return self.index.delete()
-
- def insert(self, tags: Union[Tag, List[Tag]], allow_long_records: bool = False):
- """Insert tags into the embedding index."""
-
- # Make a list if a single tag was provided
- if isinstance(tags, Tag):
- tags = [tags]
-
- for tag in tags:
- if not tag.text:
- raise SteamshipError(
- message="Please set the `text` field of your Tag before inserting it into an index."
- )
-
- # Now we need to prepare an EmbeddingIndexItem of a particular shape that encodes the tag.
- metadata = tag.value or {}
- if not isinstance(metadata, dict):
- raise SteamshipError(
- "Only Tags with a dict or None value can be embedded. "
- + f"This tag had a value of type: {type(tag.value)}"
- )
-
-            # To make this change Python-only, some fields are stashed in `hit.metadata`.
- # This has the temporary consequence of these keys not being safe. This will be resolved when we spread
- # this refactor to the engine.
- metadata["_file_id"] = tag.file_id
- metadata["_tag_id"] = tag.id
- metadata["_block_id"] = tag.block_id
- tag.value = metadata
-
- embedded_items = [
- EmbeddedItem(
- value=tag.text,
- external_id=tag.name,
- external_type=tag.kind,
- metadata=tag.value,
- )
- for tag in tags
- ]
-
-        # We always reindex in this new style; not doing so would expose details (when embedding occurs) that
-        # we'd rather not let users control.
- self.index.insert_many(embedded_items, reindex=True, allow_long_records=allow_long_records)
-
- def search(self, query: str, k: Optional[int] = None) -> Task[SearchResults]:
- """Search the embedding index.
-
- This wrapper implementation simply projects the `Hit` data structure into a `Tag`
- """
- if query is None or len(query.strip()) == 0:
- raise SteamshipError(message="Query field must be non-empty.")
-
- # Metadata will always be included; this is the equivalent of Tag.value
- wrapped_result = self.index.search(query, k=k, include_metadata=True)
-
- # For now, we'll have to do this synchronously since we're trying to avoid changing things on the engine.
- wrapped_result.wait()
-
- # We're going to do a switcheroo on the output type of Task here.
- search_results = SearchResults.from_query_results(wrapped_result.output)
- wrapped_result.output = search_results
-
- # Return the index's search result, but projected into the data structure of Tags
- return cast(Task[SearchResults], wrapped_result)
-
- @staticmethod
- def create(
- client: Any,
- plugin_id: str = None,
- plugin_handle: str = None,
- plugin_version_id: str = None,
- plugin_version_handle: str = None,
- handle: str = None,
- fetch_if_exists: bool = True,
- config: Dict[str, Any] = None,
- ) -> "EmbeddingIndexPluginInstance":
- """Create a class that simulates an embedding index re-implemented as a PluginInstance."""
-
- # Perform a manual config validation check since the configuration isn't actually being sent up to the Engine.
- # In this case, an embedding index has special behavior which is to instantiate/fetch an Embedder that it can use.
- if "embedder" not in config:
- raise SteamshipError(
- message="Config key missing. Please include a field named `embedder` with type `EmbedderInvocation`."
- )
-
- # Just for pydantic validation.
- embedder_invocation = EmbedderInvocation.parse_obj(config["embedder"])
-
- # Create the embedder
- embedder = client.use_plugin(**embedder_invocation.dict())
-
- # Create the index
- index = EmbeddingIndex.create(
- client=client,
- handle=handle,
- embedder_plugin_instance_handle=embedder.handle,
- fetch_if_exists=fetch_if_exists,
- )
-
- # Now return the plugin wrapper
- return EmbeddingIndexPluginInstance(
- id=index.id, handle=index.handle, index=index, embedder=embedder
- )
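
As the comments above note, `insert` stashes `_file_id`, `_tag_id` and `_block_id` inside each embedded item's metadata, and `SearchResult.from_query_result` pops them back out to rebuild a `Tag`. The pair of helpers below illustrates that round trip in isolation, without a Steamship client; the function names and the example metadata are invented for illustration.

    from typing import Any, Dict, Optional

    def stash_tag_refs(metadata: Optional[Dict[str, Any]], file_id, tag_id, block_id) -> Dict[str, Any]:
        """Tuck the tag's identifiers into the metadata so they survive the index round trip."""
        metadata = dict(metadata or {})
        metadata["_file_id"] = file_id
        metadata["_tag_id"] = tag_id
        metadata["_block_id"] = block_id
        return metadata

    def unstash_tag_refs(metadata: Dict[str, Any]) -> Dict[str, Any]:
        """Pop the private keys back out, leaving only the user's own metadata behind."""
        metadata = dict(metadata)
        return {
            "file_id": metadata.pop("_file_id", None),
            "tag_id": metadata.pop("_tag_id", None),
            "block_id": metadata.pop("_block_id", None),
            "value": metadata,
        }

    stashed = stash_tag_refs({"topic": "greeting"}, file_id="F1", tag_id="T1", block_id="B1")
    print(unstash_tag_refs(stashed))
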
diff --git a/spaces/JingyeChen22/TextDiffuser/model/text_segmenter/unet_parts.py b/spaces/JingyeChen22/TextDiffuser/model/text_segmenter/unet_parts.py
deleted file mode 100644
index 393d16b537c3a44516b192261480037c126f12d6..0000000000000000000000000000000000000000
--- a/spaces/JingyeChen22/TextDiffuser/model/text_segmenter/unet_parts.py
+++ /dev/null
@@ -1,82 +0,0 @@
-# ------------------------------------------
-# TextDiffuser: Diffusion Models as Text Painters
-# Paper Link: https://arxiv.org/abs/2305.10855
-# Code Link: https://github.com/microsoft/unilm/tree/master/textdiffuser
-# Copyright (c) Microsoft Corporation.
-# This file defines the architecture of the unet.
-# ------------------------------------------
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-
-class DoubleConv(nn.Module):
- """(convolution => [BN] => ReLU) * 2"""
-
- def __init__(self, in_channels, out_channels, mid_channels=None):
- super().__init__()
- if not mid_channels:
- mid_channels = out_channels
- self.double_conv = nn.Sequential(
- nn.Conv2d(in_channels, mid_channels, kernel_size=3, padding=1),
- nn.BatchNorm2d(mid_channels),
- nn.ReLU(inplace=True),
- nn.Conv2d(mid_channels, out_channels, kernel_size=3, padding=1),
- nn.BatchNorm2d(out_channels),
- nn.ReLU(inplace=True)
- )
-
- def forward(self, x):
- return self.double_conv(x)
-
-
-class Down(nn.Module):
- """Downscaling with maxpool then double conv"""
-
- def __init__(self, in_channels, out_channels):
- super().__init__()
- self.maxpool_conv = nn.Sequential(
- nn.MaxPool2d(2),
- DoubleConv(in_channels, out_channels)
- )
-
- def forward(self, x):
- return self.maxpool_conv(x)
-
-
-class Up(nn.Module):
- """Upscaling then double conv"""
-
- def __init__(self, in_channels, out_channels, bilinear=True):
- super().__init__()
-
- # if bilinear, use the normal convolutions to reduce the number of channels
- if bilinear:
- self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
- self.conv = DoubleConv(in_channels, out_channels, in_channels // 2)
- else:
- self.up = nn.ConvTranspose2d(in_channels , in_channels // 2, kernel_size=2, stride=2)
- self.conv = DoubleConv(in_channels, out_channels)
-
-
- def forward(self, x1, x2):
- x1 = self.up(x1)
- # input is CHW
- diffY = x2.size()[2] - x1.size()[2]
- diffX = x2.size()[3] - x1.size()[3]
-
- x1 = F.pad(x1, [diffX // 2, diffX - diffX // 2,
- diffY // 2, diffY - diffY // 2])
-
- x = torch.cat([x2, x1], dim=1)
- return self.conv(x)
-
-
-class OutConv(nn.Module):
- def __init__(self, in_channels, out_channels):
- super(OutConv, self).__init__()
- self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=1)
-
- def forward(self, x):
- return self.conv(x)
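
These are the standard U-Net building blocks: `DoubleConv` for two conv+BN+ReLU layers, `Down` for maxpool plus `DoubleConv`, `Up` for upsample, pad-to-match, concatenate the skip connection, then `DoubleConv`, and `OutConv` for the final 1x1 projection. A minimal two-level network assembled from them might look like the sketch below; it assumes the classes above are saved as an importable `unet_parts.py` module, which is an assumption about file layout rather than something this space documents.

    import torch
    from unet_parts import DoubleConv, Down, Up, OutConv   # assumes the file above is on the path

    class TinyUNet(torch.nn.Module):
        """Two-level U-Net built from the blocks above, just to show the shape flow."""
        def __init__(self, n_channels=3, n_classes=2):
            super().__init__()
            self.inc = DoubleConv(n_channels, 16)
            self.down1 = Down(16, 32)
            self.up1 = Up(32 + 16, 16, bilinear=True)   # the concatenated skip adds 16 channels
            self.outc = OutConv(16, n_classes)

        def forward(self, x):
            x1 = self.inc(x)       # (B, 16, H, W)
            x2 = self.down1(x1)    # (B, 32, H/2, W/2)
            x = self.up1(x2, x1)   # upsample, pad, concat skip -> (B, 16, H, W)
            return self.outc(x)    # (B, n_classes, H, W)

    net = TinyUNet()
    print(net(torch.randn(1, 3, 64, 64)).shape)   # torch.Size([1, 2, 64, 64])
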
diff --git a/spaces/JohnSmith9982/VITS-Umamusume-voice-synthesizer/ONNXVITS_models.py b/spaces/JohnSmith9982/VITS-Umamusume-voice-synthesizer/ONNXVITS_models.py
deleted file mode 100644
index acd00238895d57ba878fd0211d5654250fb10061..0000000000000000000000000000000000000000
--- a/spaces/JohnSmith9982/VITS-Umamusume-voice-synthesizer/ONNXVITS_models.py
+++ /dev/null
@@ -1,509 +0,0 @@
-import copy
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import commons
-import ONNXVITS_modules as modules
-import attentions
-import monotonic_align
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-from commons import init_weights, get_padding
-
-
-class StochasticDurationPredictor(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0):
- super().__init__()
- filter_channels = in_channels # it needs to be removed from future version.
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.log_flow = modules.Log()
- self.flows = nn.ModuleList()
- self.flows.append(modules.ElementwiseAffine(2))
- for i in range(n_flows):
- self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3))
- self.flows.append(modules.Flip())
-
- self.post_pre = nn.Conv1d(1, filter_channels, 1)
- self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1)
- self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout)
- self.post_flows = nn.ModuleList()
- self.post_flows.append(modules.ElementwiseAffine(2))
- for i in range(4):
- self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3))
- self.post_flows.append(modules.Flip())
-
- self.pre = nn.Conv1d(in_channels, filter_channels, 1)
- self.proj = nn.Conv1d(filter_channels, filter_channels, 1)
- self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout)
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, filter_channels, 1)
-
- self.w = None
- self.reverse = None
- self.noise_scale = None
- def forward(self, x, x_mask, g=None):
- w = self.w
- reverse = self.reverse
- noise_scale = self.noise_scale
-
- x = torch.detach(x)
- x = self.pre(x)
- if g is not None:
- g = torch.detach(g)
- x = x + self.cond(g)
- x = self.convs(x, x_mask)
- x = self.proj(x) * x_mask
-
- if not reverse:
- flows = self.flows
- assert w is not None
-
- logdet_tot_q = 0
- h_w = self.post_pre(w)
- h_w = self.post_convs(h_w, x_mask)
- h_w = self.post_proj(h_w) * x_mask
- e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask
- z_q = e_q
- for flow in self.post_flows:
- z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w))
- logdet_tot_q += logdet_q
- z_u, z1 = torch.split(z_q, [1, 1], 1)
- u = torch.sigmoid(z_u) * x_mask
- z0 = (w - u) * x_mask
- logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1,2])
- logq = torch.sum(-0.5 * (math.log(2*math.pi) + (e_q**2)) * x_mask, [1,2]) - logdet_tot_q
-
- logdet_tot = 0
- z0, logdet = self.log_flow(z0, x_mask)
- logdet_tot += logdet
- z = torch.cat([z0, z1], 1)
- for flow in flows:
- z, logdet = flow(z, x_mask, g=x, reverse=reverse)
- logdet_tot = logdet_tot + logdet
- nll = torch.sum(0.5 * (math.log(2*math.pi) + (z**2)) * x_mask, [1,2]) - logdet_tot
- return nll + logq # [b]
- else:
- flows = list(reversed(self.flows))
- flows = flows[:-2] + [flows[-1]] # remove a useless vflow
- z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale
- for flow in flows:
- z = flow(z, x_mask, g=x, reverse=reverse)
- z0, z1 = torch.split(z, [1, 1], 1)
- logw = z0
- return logw
-
-
-class TextEncoder(nn.Module):
- def __init__(self,
- n_vocab,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout):
- super().__init__()
- self.n_vocab = n_vocab
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
-
- self.emb = nn.Embedding(n_vocab, hidden_channels)
- nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5)
-
- self.encoder = attentions.Encoder(
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout)
- self.proj= nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths):
- x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h]
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
-
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return x, m, logs, x_mask
-
-
-class ResidualCouplingBlock(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- n_flows=4,
- gin_channels=0):
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
- for i in range(n_flows):
- self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True))
- self.flows.append(modules.Flip())
-
- self.reverse = None
- def forward(self, x, x_mask, g=None):
- reverse = self.reverse
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
-
-class PosteriorEncoder(nn.Module):
- def __init__(self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
-
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels)
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, g=None):
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
- x = self.pre(x) * x_mask # x_in : [b, c, t] -> [b, h, t]
- x = self.enc(x, x_mask, g=g) # x_in : [b, h, t], g : [b, h, 1], x = x_in + g
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask # z, m, logs : [b, h, t]
-
-
-class Generator(torch.nn.Module):
- def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0):
- super(Generator, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
- self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3)
- resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- self.ups.append(weight_norm(
- ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)),
- k, u, padding=(k-u)//2)))
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel//(2**(i+1))
- for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- def forward(self, x, g=None):
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i*self.num_kernels+j](x)
- else:
- xs += self.resblocks[i*self.num_kernels+j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- print('Removing weight norm...')
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- self.use_spectral_norm = use_spectral_norm
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList([
- norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))),
- ])
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList([
- norm_f(Conv1d(1, 16, 15, 1, padding=7)),
- norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
- norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ])
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminator, self).__init__()
- periods = [2,3,5,7,11]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = []
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-
-class SynthesizerTrn(nn.Module):
- """
- Synthesizer for Training
- """
-
- def __init__(self,
- n_vocab,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- n_speakers=0,
- gin_channels=0,
- use_sdp=True,
- **kwargs):
-
- super().__init__()
- self.n_vocab = n_vocab
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.n_speakers = n_speakers
- self.gin_channels = gin_channels
-
- self.use_sdp = use_sdp
-
- self.enc_p = TextEncoder(n_vocab,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout)
- self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels)
- self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels)
- self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels)
-
- self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels)
-
- if n_speakers > 0:
- self.emb_g = nn.Embedding(n_speakers, gin_channels)
-
- def forward(self, x, x_lengths, sid=None, noise_scale=.667, length_scale=1, noise_scale_w=.8, max_len=None):
- torch.onnx.export(
- self.enc_p,
- (x, x_lengths),
- "ONNX_net/enc_p.onnx",
- input_names=["x", "x_lengths"],
- output_names=["xout", "m_p", "logs_p", "x_mask"],
- dynamic_axes={
- "x" : [1],
- "xout" : [2],
- "m_p" : [2],
- "logs_p" : [2],
- "x_mask" : [2]
- },
- verbose=True,
- )
- x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths)
-
- if self.n_speakers > 0:
- g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
- else:
- g = None
-
- self.dp.reverse = True
- self.dp.noise_scale = noise_scale_w
- torch.onnx.export(
- self.dp,
- (x, x_mask, g),
- "ONNX_net/dp.onnx",
- input_names=["x", "x_mask", "g"],
- output_names=["logw"],
- dynamic_axes={
- "x" : [2],
- "x_mask" : [2],
- "logw" : [2]
- },
- verbose=True,
- )
- logw = self.dp(x, x_mask, g=g)
- w = torch.exp(logw) * x_mask * length_scale
- w_ceil = torch.ceil(w)
- y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long()
- y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype)
- attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
- attn = commons.generate_path(w_ceil, attn_mask)
-
- m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t']
- logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t']
-
- z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale
-
- self.flow.reverse = True
- torch.onnx.export(
- self.flow,
- (z_p, y_mask, g),
- "ONNX_net/flow.onnx",
- input_names=["z_p", "y_mask", "g"],
- output_names=["z"],
- dynamic_axes={
- "z_p" : [2],
- "y_mask" : [2],
- "z" : [2]
- },
- verbose=True,
- )
- z = self.flow(z_p, y_mask, g=g)
- z_in = (z * y_mask)[:,:,:max_len]
-
- torch.onnx.export(
- self.dec,
- (z_in, g),
- "ONNX_net/dec.onnx",
- input_names=["z_in", "g"],
- output_names=["o"],
- dynamic_axes={
- "z_in" : [2],
- "o" : [2]
- },
- verbose=True,
- )
- o = self.dec(z_in, g=g)
- return o
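
This `forward` is an export harness rather than an inference path: it traces the text encoder, duration predictor, flow and decoder into four separate ONNX graphs under `ONNX_net/` and then keeps running the PyTorch modules to finish the pass. Once the export has been run, each graph can be loaded on its own; the sketch below checks the exported text encoder with onnxruntime. It assumes the files exist, that onnxruntime is installed, and that the dummy phoneme ids stay below the model's `n_vocab`.

    import numpy as np
    import onnxruntime as ort

    sess = ort.InferenceSession("ONNX_net/enc_p.onnx")   # produced by the export above

    x = np.random.randint(0, 100, size=(1, 50), dtype=np.int64)   # dummy phoneme ids (< n_vocab assumed)
    x_lengths = np.array([50], dtype=np.int64)

    xout, m_p, logs_p, x_mask = sess.run(
        ["xout", "m_p", "logs_p", "x_mask"],              # output names match the export call
        {"x": x, "x_lengths": x_lengths},
    )
    print(xout.shape, m_p.shape, x_mask.shape)
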
diff --git a/spaces/Kangarroar/ApplioRVC-Inference/infer/lib/uvr5_pack/lib_v5/dataset.py b/spaces/Kangarroar/ApplioRVC-Inference/infer/lib/uvr5_pack/lib_v5/dataset.py
deleted file mode 100644
index cfd01a174978d97180a897e40cb59ecadec1d12e..0000000000000000000000000000000000000000
--- a/spaces/Kangarroar/ApplioRVC-Inference/infer/lib/uvr5_pack/lib_v5/dataset.py
+++ /dev/null
@@ -1,183 +0,0 @@
-import os
-import random
-
-import numpy as np
-import torch
-import torch.utils.data
-from tqdm import tqdm
-
-from . import spec_utils
-
-
-class VocalRemoverValidationSet(torch.utils.data.Dataset):
- def __init__(self, patch_list):
- self.patch_list = patch_list
-
- def __len__(self):
- return len(self.patch_list)
-
- def __getitem__(self, idx):
- path = self.patch_list[idx]
- data = np.load(path)
-
- X, y = data["X"], data["y"]
-
- X_mag = np.abs(X)
- y_mag = np.abs(y)
-
- return X_mag, y_mag
-
-
-def make_pair(mix_dir, inst_dir):
- input_exts = [".wav", ".m4a", ".mp3", ".mp4", ".flac"]
-
- X_list = sorted(
- [
- os.path.join(mix_dir, fname)
- for fname in os.listdir(mix_dir)
- if os.path.splitext(fname)[1] in input_exts
- ]
- )
- y_list = sorted(
- [
- os.path.join(inst_dir, fname)
- for fname in os.listdir(inst_dir)
- if os.path.splitext(fname)[1] in input_exts
- ]
- )
-
- filelist = list(zip(X_list, y_list))
-
- return filelist
-
-
-def train_val_split(dataset_dir, split_mode, val_rate, val_filelist):
- if split_mode == "random":
- filelist = make_pair(
- os.path.join(dataset_dir, "mixtures"),
- os.path.join(dataset_dir, "instruments"),
- )
-
- random.shuffle(filelist)
-
- if len(val_filelist) == 0:
- val_size = int(len(filelist) * val_rate)
- train_filelist = filelist[:-val_size]
- val_filelist = filelist[-val_size:]
- else:
- train_filelist = [
- pair for pair in filelist if list(pair) not in val_filelist
- ]
- elif split_mode == "subdirs":
- if len(val_filelist) != 0:
- raise ValueError(
- "The `val_filelist` option is not available in `subdirs` mode"
- )
-
- train_filelist = make_pair(
- os.path.join(dataset_dir, "training/mixtures"),
- os.path.join(dataset_dir, "training/instruments"),
- )
-
- val_filelist = make_pair(
- os.path.join(dataset_dir, "validation/mixtures"),
- os.path.join(dataset_dir, "validation/instruments"),
- )
-
- return train_filelist, val_filelist
-
-
-def augment(X, y, reduction_rate, reduction_mask, mixup_rate, mixup_alpha):
- perm = np.random.permutation(len(X))
- for i, idx in enumerate(tqdm(perm)):
- if np.random.uniform() < reduction_rate:
- y[idx] = spec_utils.reduce_vocal_aggressively(
- X[idx], y[idx], reduction_mask
- )
-
- if np.random.uniform() < 0.5:
- # swap channel
- X[idx] = X[idx, ::-1]
- y[idx] = y[idx, ::-1]
- if np.random.uniform() < 0.02:
- # mono
- X[idx] = X[idx].mean(axis=0, keepdims=True)
- y[idx] = y[idx].mean(axis=0, keepdims=True)
- if np.random.uniform() < 0.02:
- # inst
- X[idx] = y[idx]
-
- if np.random.uniform() < mixup_rate and i < len(perm) - 1:
- lam = np.random.beta(mixup_alpha, mixup_alpha)
- X[idx] = lam * X[idx] + (1 - lam) * X[perm[i + 1]]
- y[idx] = lam * y[idx] + (1 - lam) * y[perm[i + 1]]
-
- return X, y
-
-
-def make_padding(width, cropsize, offset):
- left = offset
- roi_size = cropsize - left * 2
- if roi_size == 0:
- roi_size = cropsize
- right = roi_size - (width % roi_size) + left
-
- return left, right, roi_size
-
-
-def make_training_set(filelist, cropsize, patches, sr, hop_length, n_fft, offset):
- len_dataset = patches * len(filelist)
-
- X_dataset = np.zeros((len_dataset, 2, n_fft // 2 + 1, cropsize), dtype=np.complex64)
- y_dataset = np.zeros((len_dataset, 2, n_fft // 2 + 1, cropsize), dtype=np.complex64)
-
- for i, (X_path, y_path) in enumerate(tqdm(filelist)):
- X, y = spec_utils.cache_or_load(X_path, y_path, sr, hop_length, n_fft)
- coef = np.max([np.abs(X).max(), np.abs(y).max()])
- X, y = X / coef, y / coef
-
- l, r, roi_size = make_padding(X.shape[2], cropsize, offset)
- X_pad = np.pad(X, ((0, 0), (0, 0), (l, r)), mode="constant")
- y_pad = np.pad(y, ((0, 0), (0, 0), (l, r)), mode="constant")
-
- starts = np.random.randint(0, X_pad.shape[2] - cropsize, patches)
- ends = starts + cropsize
- for j in range(patches):
- idx = i * patches + j
- X_dataset[idx] = X_pad[:, :, starts[j] : ends[j]]
- y_dataset[idx] = y_pad[:, :, starts[j] : ends[j]]
-
- return X_dataset, y_dataset
-
-
-def make_validation_set(filelist, cropsize, sr, hop_length, n_fft, offset):
- patch_list = []
- patch_dir = "cs{}_sr{}_hl{}_nf{}_of{}".format(
- cropsize, sr, hop_length, n_fft, offset
- )
- os.makedirs(patch_dir, exist_ok=True)
-
- for i, (X_path, y_path) in enumerate(tqdm(filelist)):
- basename = os.path.splitext(os.path.basename(X_path))[0]
-
- X, y = spec_utils.cache_or_load(X_path, y_path, sr, hop_length, n_fft)
- coef = np.max([np.abs(X).max(), np.abs(y).max()])
- X, y = X / coef, y / coef
-
- l, r, roi_size = make_padding(X.shape[2], cropsize, offset)
- X_pad = np.pad(X, ((0, 0), (0, 0), (l, r)), mode="constant")
- y_pad = np.pad(y, ((0, 0), (0, 0), (l, r)), mode="constant")
-
- len_dataset = int(np.ceil(X.shape[2] / roi_size))
- for j in range(len_dataset):
- outpath = os.path.join(patch_dir, "{}_p{}.npz".format(basename, j))
- start = j * roi_size
- if not os.path.exists(outpath):
- np.savez(
- outpath,
- X=X_pad[:, :, start : start + cropsize],
- y=y_pad[:, :, start : start + cropsize],
- )
- patch_list.append(outpath)
-
- return VocalRemoverValidationSet(patch_list)
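
The crop bookkeeping in this file hinges on `make_padding`: each training or validation patch is `cropsize` frames wide, but only the central `roi_size = cropsize - 2*offset` frames count as output, so neighbouring patches overlap by `2*offset` and the spectrogram is padded at both ends so the patches tile it exactly. The snippet below replays that arithmetic on a dummy spectrogram; the concrete numbers are arbitrary, and `make_padding` is restated so the snippet runs on its own.

    import numpy as np

    def make_padding(width, cropsize, offset):            # restated from the file above
        left = offset
        roi_size = cropsize - left * 2
        if roi_size == 0:
            roi_size = cropsize
        right = roi_size - (width % roi_size) + left
        return left, right, roi_size

    cropsize, offset = 256, 32
    X = np.zeros((2, 1025, 1000), dtype=np.complex64)      # (channels, freq bins, frames)

    l, r, roi_size = make_padding(X.shape[2], cropsize, offset)
    X_pad = np.pad(X, ((0, 0), (0, 0), (l, r)), mode="constant")

    n_patches = int(np.ceil(X.shape[2] / roi_size))        # same count make_validation_set uses
    print(l, r, roi_size, X_pad.shape[2], n_patches)       # 32 184 192 1216 6
    # the last patch starts at 5*192 = 960 and ends at 960+256 = 1216, exactly the padded width
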
diff --git a/spaces/Keenlol/Wood_Classification/README.md b/spaces/Keenlol/Wood_Classification/README.md
deleted file mode 100644
index 817c8dcea7c1db1217d85b748bf74b937ddee3b9..0000000000000000000000000000000000000000
--- a/spaces/Keenlol/Wood_Classification/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Wood Classification
-emoji: 🏃
-colorFrom: yellow
-colorTo: gray
-sdk: gradio
-sdk_version: 3.29.0
-app_file: app.py
-pinned: false
-license: unknown
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/KenjieDec/GPEN/sr_model/arch_util.py b/spaces/KenjieDec/GPEN/sr_model/arch_util.py
deleted file mode 100644
index ce5b9d92f418d3f8b5b8887a24491f65660b33f9..0000000000000000000000000000000000000000
--- a/spaces/KenjieDec/GPEN/sr_model/arch_util.py
+++ /dev/null
@@ -1,125 +0,0 @@
-import math
-import torch
-from torch import nn as nn
-from torch.nn import functional as F
-from torch.nn import init as init
-from torch.nn.modules.batchnorm import _BatchNorm
-
-@torch.no_grad()
-def default_init_weights(module_list, scale=1, bias_fill=0, **kwargs):
- """Initialize network weights.
-
- Args:
- module_list (list[nn.Module] | nn.Module): Modules to be initialized.
- scale (float): Scale initialized weights, especially for residual
- blocks. Default: 1.
- bias_fill (float): The value to fill bias. Default: 0
- kwargs (dict): Other arguments for initialization function.
- """
- if not isinstance(module_list, list):
- module_list = [module_list]
- for module in module_list:
- for m in module.modules():
- if isinstance(m, nn.Conv2d):
- init.kaiming_normal_(m.weight, **kwargs)
- m.weight.data *= scale
- if m.bias is not None:
- m.bias.data.fill_(bias_fill)
- elif isinstance(m, nn.Linear):
- init.kaiming_normal_(m.weight, **kwargs)
- m.weight.data *= scale
- if m.bias is not None:
- m.bias.data.fill_(bias_fill)
- elif isinstance(m, _BatchNorm):
- init.constant_(m.weight, 1)
- if m.bias is not None:
- m.bias.data.fill_(bias_fill)
-
-
-def make_layer(basic_block, num_basic_block, **kwarg):
- """Make layers by stacking the same blocks.
-
- Args:
- basic_block (nn.module): nn.module class for basic block.
- num_basic_block (int): number of blocks.
-
- Returns:
- nn.Sequential: Stacked blocks in nn.Sequential.
- """
- layers = []
- for _ in range(num_basic_block):
- layers.append(basic_block(**kwarg))
- return nn.Sequential(*layers)
-
-
-class ResidualBlockNoBN(nn.Module):
- """Residual block without BN.
-
- It has a style of:
- ---Conv-ReLU-Conv-+-
- |________________|
-
- Args:
- num_feat (int): Channel number of intermediate features.
- Default: 64.
- res_scale (float): Residual scale. Default: 1.
- pytorch_init (bool): If set to True, use pytorch default init,
- otherwise, use default_init_weights. Default: False.
- """
-
- def __init__(self, num_feat=64, res_scale=1, pytorch_init=False):
- super(ResidualBlockNoBN, self).__init__()
- self.res_scale = res_scale
- self.conv1 = nn.Conv2d(num_feat, num_feat, 3, 1, 1, bias=True)
- self.conv2 = nn.Conv2d(num_feat, num_feat, 3, 1, 1, bias=True)
- self.relu = nn.ReLU(inplace=True)
-
- if not pytorch_init:
- default_init_weights([self.conv1, self.conv2], 0.1)
-
- def forward(self, x):
- identity = x
- out = self.conv2(self.relu(self.conv1(x)))
- return identity + out * self.res_scale
-
-
-class Upsample(nn.Sequential):
- """Upsample module.
-
- Args:
- scale (int): Scale factor. Supported scales: 2^n and 3.
- num_feat (int): Channel number of intermediate features.
- """
-
- def __init__(self, scale, num_feat):
- m = []
- if (scale & (scale - 1)) == 0: # scale = 2^n
- for _ in range(int(math.log(scale, 2))):
- m.append(nn.Conv2d(num_feat, 4 * num_feat, 3, 1, 1))
- m.append(nn.PixelShuffle(2))
- elif scale == 3:
- m.append(nn.Conv2d(num_feat, 9 * num_feat, 3, 1, 1))
- m.append(nn.PixelShuffle(3))
- else:
- raise ValueError(f'scale {scale} is not supported. '
- 'Supported scales: 2^n and 3.')
- super(Upsample, self).__init__(*m)
-
-# TODO: may write a cpp file
-def pixel_unshuffle(x, scale):
- """ Pixel unshuffle.
-
- Args:
- x (Tensor): Input feature with shape (b, c, hh, hw).
- scale (int): Downsample ratio.
-
- Returns:
- Tensor: the pixel unshuffled feature.
- """
- b, c, hh, hw = x.size()
- out_channel = c * (scale**2)
- assert hh % scale == 0 and hw % scale == 0
- h = hh // scale
- w = hw // scale
- x_view = x.view(b, c, h, scale, w, scale)
- return x_view.permute(0, 1, 3, 5, 2, 4).reshape(b, out_channel, h, w)
\ No newline at end of file
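
`pixel_unshuffle` folds every `scale x scale` spatial block into the channel dimension, which is the inverse of `torch.nn.PixelShuffle`; Real-ESRGAN-style networks use it to push large inputs through a smaller backbone. The quick check below restates the function so it runs standalone and verifies the round trip.

    import torch

    def pixel_unshuffle(x, scale):                         # restated from the file above
        b, c, hh, hw = x.size()
        out_channel = c * (scale ** 2)
        assert hh % scale == 0 and hw % scale == 0
        h, w = hh // scale, hw // scale
        x_view = x.view(b, c, h, scale, w, scale)
        return x_view.permute(0, 1, 3, 5, 2, 4).reshape(b, out_channel, h, w)

    x = torch.randn(1, 3, 8, 8)
    y = pixel_unshuffle(x, scale=2)                        # spatial 8x8 -> 4x4, channels 3 -> 12
    x_back = torch.nn.PixelShuffle(2)(y)                   # PixelShuffle undoes the fold
    print(y.shape, torch.allclose(x, x_back))              # torch.Size([1, 12, 4, 4]) True
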
diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/web/static/js/recorder-core.js b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/web/static/js/recorder-core.js
deleted file mode 100644
index 30a58e819da6e1907f2f6f91cc564f9444207af6..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/web/static/js/recorder-core.js
+++ /dev/null
@@ -1,950 +0,0 @@
-/*
-Recording
-https://github.com/xiangyuecn/Recorder
-*/
-(function(factory){
- factory(window);
- //umd returnExports.js
- if(typeof(define)=='function' && define.amd){
- define(function(){
- return Recorder;
- });
- };
- if(typeof(module)=='object' && module.exports){
- module.exports=Recorder;
- };
-}(function(window){
-"use strict";
-
-//Environment compatibility
-var LM="2021-08-03 20:01:03";
-var NOOP=function(){};
-//end environment compatibility ****source code copied from here on*****
-
-var Recorder=function(set){
- return new initFn(set);
-};
-//Whether the global microphone recording has been opened: all the groundwork is done and it is just waiting to receive audio data
-Recorder.IsOpen=function(){
- var stream=Recorder.Stream;
- if(stream){
- var tracks=stream.getTracks&&stream.getTracks()||stream.audioTracks||[];
- var track=tracks[0];
- if(track){
- var state=track.readyState;
- return state=="live"||state==track.LIVE;
- };
- };
- return false;
-};
-/*AudioContext buffer size for HTML5 recording. It affects how often onProcess is called while recording; at AudioContext.sampleRate=48000, 4096 is close to 12 frames/s, and tuning this parameter can produce smoother callback animations.
-    Allowed values: 256, 512, 1024, 2048, 4096, 8192, or 16384
-    Note: do not set it too low; from 2048 downward, some browsers cannot keep up with the callback rate, causing audio-quality problems.
-    Usually no adjustment is needed; after changing it, close the currently open recording and open it again for the new value to take effect.
-*/
-Recorder.BufferSize=4096;
-//Destroy all held global resources; this method must be called explicitly when Recorder is to be removed completely
-Recorder.Destroy=function(){
- CLog("Recorder Destroy");
-	Disconnect();//Disconnect any global Stream and resources that may exist
-
- for(var k in DestroyList){
- DestroyList[k]();
- };
-};
-var DestroyList={};
-//Register a handler that needs to destroy global resources
-Recorder.BindDestroy=function(key,call){
- DestroyList[key]=call;
-};
-//Check whether the browser supports recording; can be called at any time. Note: this only detects browser support; it does not check or request user authorization, nor whether a particular recording format is supported.
-Recorder.Support=function(){
- var AC=window.AudioContext;
- if(!AC){
- AC=window.webkitAudioContext;
- };
- if(!AC){
- return false;
- };
- var scope=navigator.mediaDevices||{};
- if(!scope.getUserMedia){
- scope=navigator;
- scope.getUserMedia||(scope.getUserMedia=scope.webkitGetUserMedia||scope.mozGetUserMedia||scope.msGetUserMedia);
- };
- if(!scope.getUserMedia){
- return false;
- };
-
- Recorder.Scope=scope;
- if(!Recorder.Ctx||Recorder.Ctx.state=="closed"){
-		//Must not be constructed repeatedly; old versions throw "number of hardware contexts reached maximum (6)"
- Recorder.Ctx=new AC();
-
- Recorder.BindDestroy("Ctx",function(){
- var ctx=Recorder.Ctx;
-			if(ctx&&ctx.close){//Close it if it can be closed, otherwise keep it
- ctx.close();
- Recorder.Ctx=0;
- };
- });
- };
- return true;
-};
-/*Initialize the HTML5 audio capture connection. If a sourceStream is provided, only a simple one-off connection is made. For ordinary microphone recording the Stream is global; on Safari it cannot be reconnected after a disconnect (the result is silence), so everything is handled globally to avoid ever calling disconnect. Global handling also hides the low-level details: start does not need to call the low-level APIs again, which improves compatibility and reliability.*/
-var Connect=function(streamStore){
- streamStore=streamStore||Recorder;
- var bufferSize=streamStore.BufferSize||Recorder.BufferSize;
-
- var ctx=Recorder.Ctx,stream=streamStore.Stream;
- var media=stream._m=ctx.createMediaStreamSource(stream);
-	var process=stream._p=(ctx.createScriptProcessor||ctx.createJavaScriptNode).call(ctx,bufferSize,1,1);//mono channel, keeps the data processing simple
-
- media.connect(process);
- process.connect(ctx.destination);
-
- var calls=stream._call;
- process.onaudioprocess=function(e){
- for(var k0 in calls){//has item
-			var o=e.inputBuffer.getChannelData(0);//the block is shared, so it must be copied out
- var size=o.length;
-
- var pcm=new Int16Array(size);
- var sum=0;
-			for(var j=0;j=pcmSampleRate: when newSampleRate is not lower than pcmSampleRate no processing is done; when it is lower, the data is resampled
-prevChunkInfo:{} optional, the return value of the previous call, used for chunked conversion; this call resumes from where the previous one ended. A ChunkInfo may also be built by hand to start converting from a chosen position in pcmDatas
-option:{ optional settings
-	frameSize:123456 frame size, the number of PCM Int16 samples per frame; after resampling the pcm length is an integer multiple of frameSize, used for chunked conversion. Currently only useful for the mp3 format with frameSize=1152, so that the encoded mp3 duration matches the pcm duration exactly; otherwise the padding added when the last mp3 frame is not full makes the mp3 longer.
-	frameType:"" frame type, normally rec.set.type; when given, frameSize is not needed and the best value is filled in automatically. Currently only mp3=1152 (samples per MPEG1 Layer3 frame) is supported; every other type=1.
-		At most one of the two parameters above should be used, and only for chunked conversion; when neither is given there is no special frame handling, and when one is given prevChunkInfo must also be provided for it to take effect. For the last chunk, omit the frame size so the last bit of residual data is flushed out.
-	}
-
-Returns ChunkInfo:{
-	//may be set by the caller to convert from a chosen position to the end
-	index:0 index into pcmDatas processed so far
-	offset:0.0 the next position (offset) within the pcm at that index
-
-	//returned values only
-	frameNext:null||[Int16,...] partial data of the next frame; only present when frameSize is set
-	sampleRate:16000 sample rate of the result, <=newSampleRate
-	data:[Int16,...] the converted PCM result; for chunked conversion, data may have length 0 when pcmDatas contains no new data
-}
-*/
-Recorder.SampleData=function(pcmDatas,pcmSampleRate,newSampleRate,prevChunkInfo,option){
- prevChunkInfo||(prevChunkInfo={});
- var index=prevChunkInfo.index||0;
- var offset=prevChunkInfo.offset||0;
-
- var frameNext=prevChunkInfo.frameNext||[];
- option||(option={});
- var frameSize=option.frameSize||1;
- if(option.frameType){
- frameSize=option.frameType=="mp3"?1152:1;
- };
-
- var size=0;
-	for(var i=index;i1){//the new sample rate is lower than the recording sample rate, so decimate (pick samples)
- size=Math.floor(size/step);
-	}else{//the new sample rate is not lower than the recording sample rate: do nothing, which avoids interpolation
- step=1;
- newSampleRate=pcmSampleRate;
- };
-
- size+=frameNext.length;
- var res=new Int16Array(size);
- var idx=0;
-	//prepend the leftover data from last time that was not enough to fill a frame
- for(var i=0;i0){
- var u8Pos=(res.length-frameNextSize)*2;
- frameNext=new Int16Array(res.buffer.slice(u8Pos));
- res=new Int16Array(res.buffer.slice(0,u8Pos));
- };
-
- return {
- index:index
- ,offset:offset
-
- ,frameNext:frameNext
- ,sampleRate:newSampleRate
- ,data:res
- };
-};
-
-
-/*A method for computing a volume percentage
-pcmAbsSum: sum of the absolute values of all pcm Int16 samples
-pcmLength: length of the pcm
-Return value: 0-100, intended to be used as a percentage
-Note: this is not decibels, which is why the name volume was not used*/
-Recorder.PowerLevel=function(pcmAbsSum,pcmLength){
-	/*Volume computation https://blog.csdn.net/jody1989/article/details/73480259
-	Higher-sensitivity algorithm:
-		cap the maximum sensed value at 10000
-			linear curve: unfriendly to low volumes
-				power/10000*100
-			logarithmic curve: friendly to low volumes, but needs a lower bound on the sensed value
-				(1+Math.log10(power/10000))*100
-	*/
- var power=(pcmAbsSum/pcmLength) || 0;//NaN
- var level;
-	if(power<1251){//1250 maps to 10%; smaller volumes use the linear curve
- level=Math.round(power/1250*10);
- }else{
- level=Math.round(Math.min(100,Math.max(0,(1+Math.log(power/10000)/Math.log(10))*100)));
- };
- return level;
-};
-
-
-
-
-//Timestamped log output, CLog(msg,errOrLogMsg, logMsg...). When err is a number it selects the log type, 1:error 2:log (default) 3:warn; otherwise it is treated as content to print. The first argument must not be an object because the time is concatenated onto it; any number of additional output arguments may follow.
-var CLog=function(msg,err){
- var now=new Date();
- var t=("0"+now.getMinutes()).substr(-2)
- +":"+("0"+now.getSeconds()).substr(-2)
- +"."+("00"+now.getMilliseconds()).substr(-3);
- var arr=["["+t+" Recorder]"+msg];
- var a=arguments;
- var i=2,fn=console.log;
- if(typeof(err)=="number"){
- fn=err==1?console.error:err==3?console.warn:fn;
- }else{
- i=1;
- };
- for(;i3000){
- envInFixTs.length=i;
- break;
- };
- tsInStart=o.t;
- tsPcm+=o.d;
- };
-	//Enough data has been collected; start checking whether compensation is needed
- var tsInPrev=envInFixTs[1];
- var tsIn=now-tsInStart;
- var lost=tsIn-tsPcm;
- if( lost>tsIn/3 && (tsInPrev&&tsIn>1000 || envInFixTs.length>=6) ){
-		//Too much was lost; start compensating
-		var addTime=now-tsInPrev.t-pcmTime;//this many ms have been lost since the last input
-		if(addTime>pcmTime/5){//more than 1/5 of the current frame was lost
- var fixOpen=!set.disableEnvInFix;
-			CLog("["+now+"]"+(fixOpen?"":"not ")+"compensated "+addTime+"ms",3);
- This.envInFix+=addTime;
-
-			//compensate with silence
- if(fixOpen){
- var addPcm=new Int16Array(addTime*bufferSampleRate/1000);
- size+=addPcm.length;
- buffers.push(addPcm);
- };
- };
- };
-
-
- var sizeOld=This.recSize,addSize=size;
- var bufferSize=sizeOld+addSize;
-	This.recSize=bufferSize;//this value must be corrected after onProcess, since the new data may have been modified
-
-
-	//This type supports transcoding while recording (via a Worker); enable real-time transcoding
- if(engineCtx){
-		//Convert to the sample rate given in set
- var chunkInfo=Recorder.SampleData(buffers,bufferSampleRate,set.sampleRate,engineCtx.chunkInfo);
- engineCtx.chunkInfo=chunkInfo;
-
- sizeOld=engineCtx.pcmSize;
- addSize=chunkInfo.data.length;
- bufferSize=sizeOld+addSize;
-		engineCtx.pcmSize=bufferSize;//this value must be corrected after onProcess, since the new data may have been modified
-
- buffers=engineCtx.pcmDatas;
- bufferFirstIdx=buffers.length;
- buffers.push(chunkInfo.data);
- bufferSampleRate=chunkInfo.sampleRate;
- };
-
- var duration=Math.round(bufferSize/bufferSampleRate*1000);
- var bufferNextIdx=buffers.length;
- var bufferNextIdxThis=buffersThis.length;
-
-	//Allow the buffer data to be processed asynchronously
- var asyncEnd=function(){
-		//Recompute size: the async path has already subtracted what was added; the sync path must remove this round's addition and recompute
- var num=asyncBegin?0:-addSize;
- var hasClear=buffers[0]==null;
-		for(var i=bufferFirstIdx;i"+res.length+" took:"+(Date.now()-t1)+"ms");
-
- setTimeout(function(){
- t1=Date.now();
- This[set.type](res,function(blob){
- ok(blob,duration);
- },function(msg){
- err(msg);
- });
- });
- }
-
-};
-
-if(window.Recorder){
- window.Recorder.Destroy();
-};
-window.Recorder=Recorder;
-
-//end ****copy源码结束*****
-Recorder.LM=LM;
-
-//Address of the 1-pixel image used for traffic statistics; set it to an empty string to opt out of the statistics
-Recorder.TrafficImgUrl="//ia.51.la/go1?id=20469973&pvFlag=1";
-Recorder.Traffic=function(){
- var imgUrl=Recorder.TrafficImgUrl;
- if(imgUrl){
- var data=Recorder.Traffic;
- var idf=location.href.replace(/#.*/,"");
-
- if(imgUrl.indexOf("//")==0){
-			//Prefix the url with http; under the file protocol it will not work without the prefix
- if(/^https:/i.test(idf)){
- imgUrl="https:"+imgUrl;
- }else{
- imgUrl="http:"+imgUrl;
- };
- };
-
- if(!data[idf]){
- data[idf]=1;
-
- var img=new Image();
- img.src=imgUrl;
- CLog("Traffic Analysis Image: Recorder.TrafficImgUrl="+Recorder.TrafficImgUrl);
- };
- };
-};
-
-}));
\ No newline at end of file
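For reference, the volume curve implemented by Recorder.PowerLevel above translates directly into a few lines of Python; this is only an illustrative port of the formula (linear below a power of 1250, logarithmic and capped at 100 above it), not part of the library:

import math

def power_level(pcm_abs_sum, pcm_length):
    # average absolute amplitude of the Int16 samples
    power = (pcm_abs_sum / pcm_length) if pcm_length else 0
    if power < 1251:                      # 1250 maps to 10%; use the linear curve for quiet input
        return round(power / 1250 * 10)
    # logarithmic curve, clamped to the 0-100 range
    return round(min(100, max(0, (1 + math.log10(power / 10000)) * 100)))

print(power_level(1250 * 1024, 1024))     # 10
print(power_level(10000 * 1024, 1024))    # 100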
diff --git a/spaces/KyanChen/RSPrompter/mmpl/utils/large_image.py b/spaces/KyanChen/RSPrompter/mmpl/utils/large_image.py
deleted file mode 100644
index 8670804684f6dcdc6dc1846cf85260d900b3474e..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmpl/utils/large_image.py
+++ /dev/null
@@ -1,103 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from typing import Sequence, Tuple
-
-import torch
-from mmcv.ops import batched_nms
-from mmdet.structures import DetDataSample, SampleList
-from mmengine.structures import InstanceData
-
-
-def shift_rbboxes(bboxes: torch.Tensor, offset: Sequence[int]):
- """Shift rotated bboxes with offset.
-
- Args:
- bboxes (Tensor): The rotated bboxes need to be translated.
- With shape (n, 5), which means (x, y, w, h, a).
- offset (Sequence[int]): The translation offsets with shape of (2, ).
- Returns:
- Tensor: Shifted rotated bboxes.
- """
- offset_tensor = bboxes.new_tensor(offset)
- shifted_bboxes = bboxes.clone()
- shifted_bboxes[:, 0:2] = shifted_bboxes[:, 0:2] + offset_tensor
- return shifted_bboxes
-
-
-def shift_predictions(det_data_samples: SampleList,
- offsets: Sequence[Tuple[int, int]],
- src_image_shape: Tuple[int, int]) -> SampleList:
- """Shift predictions to the original image.
-
- Args:
- det_data_samples (List[:obj:`DetDataSample`]): A list of patch results.
- offsets (Sequence[Tuple[int, int]]): Positions of the left top points
- of patches.
- src_image_shape (Tuple[int, int]): A (height, width) tuple of the large
- image's width and height.
- Returns:
- (List[:obj:`DetDataSample`]): shifted results.
- """
- try:
- from sahi.slicing import shift_bboxes, shift_masks
- except ImportError:
- raise ImportError('Please run "pip install -U sahi" '
- 'to install sahi first for large image inference.')
-
- assert len(det_data_samples) == len(
-        offsets), 'The `results` should have the ' 'same length as `offsets`.'
- shifted_predictions = []
- for det_data_sample, offset in zip(det_data_samples, offsets):
- pred_inst = det_data_sample.pred_instances.clone()
-
- # Check bbox type
- if pred_inst.bboxes.size(-1) == 4:
- # Horizontal bboxes
- shifted_bboxes = shift_bboxes(pred_inst.bboxes, offset)
- elif pred_inst.bboxes.size(-1) == 5:
- # Rotated bboxes
- shifted_bboxes = shift_rbboxes(pred_inst.bboxes, offset)
- else:
- raise NotImplementedError
-
- # shift bboxes and masks
- pred_inst.bboxes = shifted_bboxes
- if 'masks' in det_data_sample:
- pred_inst.masks = shift_masks(pred_inst.masks, offset,
- src_image_shape)
-
- shifted_predictions.append(pred_inst.clone())
-
- shifted_predictions = InstanceData.cat(shifted_predictions)
-
- return shifted_predictions
-
-
-def merge_results_by_nms(results: SampleList, offsets: Sequence[Tuple[int,
- int]],
- src_image_shape: Tuple[int, int],
- nms_cfg: dict) -> DetDataSample:
- """Merge patch results by nms.
-
- Args:
- results (List[:obj:`DetDataSample`]): A list of patch results.
- offsets (Sequence[Tuple[int, int]]): Positions of the left top points
- of patches.
- src_image_shape (Tuple[int, int]): A (height, width) tuple of the large
- image's width and height.
- nms_cfg (dict): it should specify nms type and other parameters
- like `iou_threshold`.
- Returns:
- :obj:`DetDataSample`: merged results.
- """
- shifted_instances = shift_predictions(results, offsets, src_image_shape)
-
- _, keeps = batched_nms(
- boxes=shifted_instances.bboxes,
- scores=shifted_instances.scores,
- idxs=shifted_instances.labels,
- nms_cfg=nms_cfg)
- merged_instances = shifted_instances[keeps]
-
- merged_result = results[0].clone()
- merged_result.pred_instances = merged_instances
- return merged_result
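A tiny usage sketch for the shift_rbboxes helper above (only torch is required; the function body is repeated here so the snippet is self-contained). It simply translates the (x, y) centers of rotated boxes by a patch's top-left offset:

import torch

def shift_rbboxes(bboxes, offset):
    # same logic as above: translate only the (x, y) center of each (x, y, w, h, a) box
    offset_tensor = bboxes.new_tensor(offset)
    shifted_bboxes = bboxes.clone()
    shifted_bboxes[:, 0:2] = shifted_bboxes[:, 0:2] + offset_tensor
    return shifted_bboxes

patch_boxes = torch.tensor([[10.0, 20.0, 50.0, 30.0, 0.3]])   # (x, y, w, h, angle) in patch coordinates
offset = (512, 256)                                           # top-left corner of the patch in the full image
print(shift_rbboxes(patch_boxes, offset))
# tensor([[522.0000, 276.0000,  50.0000,  30.0000,   0.3000]])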
diff --git a/spaces/KyanChen/RSPrompter/mmpretrain/datasets/visual_genome.py b/spaces/KyanChen/RSPrompter/mmpretrain/datasets/visual_genome.py
deleted file mode 100644
index 8c33b86c4f81d0be0f2830618ad100196b461dcf..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmpretrain/datasets/visual_genome.py
+++ /dev/null
@@ -1,95 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import re
-from itertools import chain
-from typing import List
-
-import mmengine
-from mmengine.dataset import BaseDataset
-
-from mmpretrain.registry import DATASETS
-
-
-@DATASETS.register_module()
-class VisualGenomeQA(BaseDataset):
- """Visual Genome Question Answering dataset.
-
- dataset structure: ::
-
- data_root
- ├── image
- │ ├── 1.jpg
- │ ├── 2.jpg
- │ └── ...
- └── question_answers.json
-
- Args:
- data_root (str): The root directory for ``data_prefix``, ``ann_file``
- and ``question_file``.
- data_prefix (str): The directory of images. Defaults to ``"image"``.
- ann_file (str, optional): Annotation file path for training and
- validation. Defaults to ``"question_answers.json"``.
- **kwargs: Other keyword arguments in :class:`BaseDataset`.
- """
-
- def __init__(self,
- data_root: str,
- data_prefix: str = 'image',
- ann_file: str = 'question_answers.json',
- **kwarg):
- super().__init__(
- data_root=data_root,
- data_prefix=dict(img_path=data_prefix),
- ann_file=ann_file,
- **kwarg,
- )
-
- def _create_image_index(self):
- img_prefix = self.data_prefix['img_path']
-
- files = mmengine.list_dir_or_file(img_prefix, list_dir=False)
- image_index = {}
- for file in files:
- image_id = re.findall(r'\d+', file)
- if len(image_id) > 0:
- image_id = int(image_id[-1])
- image_index[image_id] = mmengine.join_path(img_prefix, file)
-
- return image_index
-
- def load_data_list(self) -> List[dict]:
- """Load data list."""
- annotations = mmengine.load(self.ann_file)
-
- # The original Visual Genome annotation file and question file includes
- # only image id but no image file paths.
- self.image_index = self._create_image_index()
-
- data_list = []
- for qas in chain.from_iterable(ann['qas'] for ann in annotations):
- # ann example
- # {
- # 'id': 1,
- # 'qas': [
- # {
- # 'a_objects': [],
- # 'question': 'What color is the clock?',
- # 'image_id': 1,
- # 'qa_id': 986768,
- # 'answer': 'Two.',
- # 'q_objects': [],
- # }
- # ...
- # ]
- # }
-
- data_info = {
- 'img_path': self.image_index[qas['image_id']],
-            'question': qas['question'],
-            'question_id': qas['qa_id'],
- 'image_id': qas['image_id'],
- 'gt_answer': [qas['answer']],
- }
-
- data_list.append(data_info)
-
- return data_list
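The load_data_list method above flattens the per-image 'qas' lists with itertools.chain.from_iterable; the following standalone sketch replays that step on two hypothetical annotation records shaped like the example in the comment:

from itertools import chain

annotations = [
    {'id': 1, 'qas': [{'image_id': 1, 'qa_id': 986768,
                       'question': 'What color is the clock?', 'answer': 'Green.'}]},
    {'id': 2, 'qas': [{'image_id': 2, 'qa_id': 986769,
                       'question': 'How many cars are there?', 'answer': 'Two.'}]},
]

# one flat stream of QA dicts, regardless of which image each belongs to
for qas in chain.from_iterable(ann['qas'] for ann in annotations):
    print(qas['image_id'], qas['qa_id'], qas['question'], '->', qas['answer'])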
diff --git a/spaces/Lamai/LAMAIGPT/autogpt/json_utils/json_fix_llm.py b/spaces/Lamai/LAMAIGPT/autogpt/json_utils/json_fix_llm.py
deleted file mode 100644
index 869aed125cfb8cd7a69ed02eeb389cc72a3e296b..0000000000000000000000000000000000000000
--- a/spaces/Lamai/LAMAIGPT/autogpt/json_utils/json_fix_llm.py
+++ /dev/null
@@ -1,220 +0,0 @@
-"""This module contains functions to fix JSON strings generated by LLM models, such as ChatGPT, using the assistance
-of the ChatGPT API or LLM models."""
-from __future__ import annotations
-
-import contextlib
-import json
-from typing import Any, Dict
-
-from colorama import Fore
-from regex import regex
-
-from autogpt.config import Config
-from autogpt.json_utils.json_fix_general import correct_json
-from autogpt.llm_utils import call_ai_function
-from autogpt.logs import logger
-from autogpt.speech import say_text
-
-JSON_SCHEMA = """
-{
- "command": {
- "name": "command name",
- "args": {
- "arg name": "value"
- }
- },
- "thoughts":
- {
- "text": "thought",
- "reasoning": "reasoning",
- "plan": "- short bulleted\n- list that conveys\n- long-term plan",
- "criticism": "constructive self-criticism",
- "speak": "thoughts summary to say to user"
- }
-}
-"""
-
-CFG = Config()
-
-
-def auto_fix_json(json_string: str, schema: str) -> str:
- """Fix the given JSON string to make it parseable and fully compliant with
- the provided schema using GPT-3.
-
- Args:
- json_string (str): The JSON string to fix.
- schema (str): The schema to use to fix the JSON.
- Returns:
- str: The fixed JSON string.
- """
- # Try to fix the JSON using GPT:
- function_string = "def fix_json(json_string: str, schema:str=None) -> str:"
- args = [f"'''{json_string}'''", f"'''{schema}'''"]
- description_string = (
- "This function takes a JSON string and ensures that it"
- " is parseable and fully compliant with the provided schema. If an object"
- " or field specified in the schema isn't contained within the correct JSON,"
- " it is omitted. The function also escapes any double quotes within JSON"
- " string values to ensure that they are valid. If the JSON string contains"
- " any None or NaN values, they are replaced with null before being parsed."
- )
-
- # If it doesn't already start with a "`", add one:
- if not json_string.startswith("`"):
- json_string = "```json\n" + json_string + "\n```"
- result_string = call_ai_function(
- function_string, args, description_string, model=CFG.fast_llm_model
- )
- logger.debug("------------ JSON FIX ATTEMPT ---------------")
- logger.debug(f"Original JSON: {json_string}")
- logger.debug("-----------")
- logger.debug(f"Fixed JSON: {result_string}")
- logger.debug("----------- END OF FIX ATTEMPT ----------------")
-
- try:
- json.loads(result_string) # just check the validity
- return result_string
- except json.JSONDecodeError: # noqa: E722
- # Get the call stack:
- # import traceback
- # call_stack = traceback.format_exc()
- # print(f"Failed to fix JSON: '{json_string}' "+call_stack)
- return "failed"
-
-
-def fix_json_using_multiple_techniques(assistant_reply: str) -> Dict[Any, Any]:
- """Fix the given JSON string to make it parseable and fully compliant with two techniques.
-
- Args:
- json_string (str): The JSON string to fix.
-
- Returns:
- str: The fixed JSON string.
- """
-
- # Parse and print Assistant response
- assistant_reply_json = fix_and_parse_json(assistant_reply)
- if assistant_reply_json == {}:
- assistant_reply_json = attempt_to_fix_json_by_finding_outermost_brackets(
- assistant_reply
- )
-
- if assistant_reply_json != {}:
- return assistant_reply_json
-
- logger.error(
- "Error: The following AI output couldn't be converted to a JSON:\n",
- assistant_reply,
- )
- if CFG.speak_mode:
- say_text("I have received an invalid JSON response from the OpenAI API.")
-
- return {}
-
-
-def fix_and_parse_json(
- json_to_load: str, try_to_fix_with_gpt: bool = True
-) -> Dict[Any, Any]:
- """Fix and parse JSON string
-
- Args:
- json_to_load (str): The JSON string.
- try_to_fix_with_gpt (bool, optional): Try to fix the JSON with GPT.
- Defaults to True.
-
- Returns:
- str or dict[Any, Any]: The parsed JSON.
- """
-
- with contextlib.suppress(json.JSONDecodeError):
- json_to_load = json_to_load.replace("\t", "")
- return json.loads(json_to_load)
-
- with contextlib.suppress(json.JSONDecodeError):
- json_to_load = correct_json(json_to_load)
- return json.loads(json_to_load)
- # Let's do something manually:
- # sometimes GPT responds with something BEFORE the braces:
- # "I'm sorry, I don't understand. Please try again."
- # {"text": "I'm sorry, I don't understand. Please try again.",
- # "confidence": 0.0}
- # So let's try to find the first brace and then parse the rest
- # of the string
- try:
- brace_index = json_to_load.index("{")
- maybe_fixed_json = json_to_load[brace_index:]
- last_brace_index = maybe_fixed_json.rindex("}")
- maybe_fixed_json = maybe_fixed_json[: last_brace_index + 1]
- return json.loads(maybe_fixed_json)
- except (json.JSONDecodeError, ValueError) as e:
- return try_ai_fix(try_to_fix_with_gpt, e, json_to_load)
-
-
-def try_ai_fix(
- try_to_fix_with_gpt: bool, exception: Exception, json_to_load: str
-) -> Dict[Any, Any]:
- """Try to fix the JSON with the AI
-
- Args:
- try_to_fix_with_gpt (bool): Whether to try to fix the JSON with the AI.
- exception (Exception): The exception that was raised.
- json_to_load (str): The JSON string to load.
-
- Raises:
- exception: If try_to_fix_with_gpt is False.
-
- Returns:
- str or dict[Any, Any]: The JSON string or dictionary.
- """
- if not try_to_fix_with_gpt:
- raise exception
- if CFG.debug_mode:
- logger.warn(
- "Warning: Failed to parse AI output, attempting to fix."
- "\n If you see this warning frequently, it's likely that"
- " your prompt is confusing the AI. Try changing it up"
- " slightly."
- )
- # Now try to fix this up using the ai_functions
- ai_fixed_json = auto_fix_json(json_to_load, JSON_SCHEMA)
-
- if ai_fixed_json != "failed":
- return json.loads(ai_fixed_json)
- # This allows the AI to react to the error message,
- # which usually results in it correcting its ways.
- # logger.error("Failed to fix AI output, telling the AI.")
- return {}
-
-
-def attempt_to_fix_json_by_finding_outermost_brackets(json_string: str):
- if CFG.speak_mode and CFG.debug_mode:
- say_text(
- "I have received an invalid JSON response from the OpenAI API. "
- "Trying to fix it now."
- )
- logger.error("Attempting to fix JSON by finding outermost brackets\n")
-
- try:
- json_pattern = regex.compile(r"\{(?:[^{}]|(?R))*\}")
- json_match = json_pattern.search(json_string)
-
- if json_match:
- # Extract the valid JSON object from the string
- json_string = json_match.group(0)
- logger.typewriter_log(
- title="Apparently json was fixed.", title_color=Fore.GREEN
- )
- if CFG.speak_mode and CFG.debug_mode:
- say_text("Apparently json was fixed.")
- else:
- return {}
-
- except (json.JSONDecodeError, ValueError):
- if CFG.debug_mode:
- logger.error(f"Error: Invalid JSON: {json_string}\n")
- if CFG.speak_mode:
- say_text("Didn't work. I will have to ignore this response then.")
- logger.error("Error: Invalid JSON, setting it to empty JSON now.\n")
- json_string = {}
-
- return fix_and_parse_json(json_string)
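The manual fallback in fix_and_parse_json above (take everything between the first '{' and the last '}' and parse it) is easy to isolate; here is a minimal standalone sketch of just that step:

import json

def parse_between_braces(text):
    # parse the substring between the first '{' and the last '}' as JSON
    start = text.index('{')
    end = text.rindex('}')
    return json.loads(text[start:end + 1])

reply = 'I am sorry, I do not understand. {"text": "retry", "confidence": 0.0} Thanks!'
print(parse_between_braces(reply))   # {'text': 'retry', 'confidence': 0.0}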
diff --git a/spaces/Lianjd/stock_dashboard/backtrader/feeds/mt4csv.py b/spaces/Lianjd/stock_dashboard/backtrader/feeds/mt4csv.py
deleted file mode 100644
index c1d62d6bf4b95ec480aced23b3d82653ccceb3e6..0000000000000000000000000000000000000000
--- a/spaces/Lianjd/stock_dashboard/backtrader/feeds/mt4csv.py
+++ /dev/null
@@ -1,52 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8; py-indent-offset:4 -*-
-###############################################################################
-#
-# Copyright (C) 2015-2020 Daniel Rodriguez
-# Copyright (C) 2015-2020 Daniel Rodriguez
-#
-# This program is free software: you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation, either version 3 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program. If not, see .
-#
-###############################################################################
-from __future__ import (absolute_import, division, print_function,
- unicode_literals)
-
-
-from . import GenericCSVData
-
-
-class MT4CSVData(GenericCSVData):
- '''
- Parses a `Metatrader4 `_ History
- center CSV exported file.
-
- Specific parameters (or specific meaning):
-
- - ``dataname``: The filename to parse or a file-like object
-
- - Uses GenericCSVData and simply modifies the params
- '''
-
- params = (
- ('dtformat', '%Y.%m.%d'),
- ('tmformat', '%H:%M'),
- ('datetime', 0),
- ('time', 1),
- ('open', 2),
- ('high', 3),
- ('low', 4),
- ('close', 5),
- ('volume', 6),
- ('openinterest', -1),
- )
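To illustrate the dtformat/tmformat parameters above, here is a hypothetical Metatrader 4 history-center row and how Python's datetime parses its date and time columns under those formats:

from datetime import datetime

# hypothetical exported row: date, time, open, high, low, close, volume
row = '2020.01.02,10:00,1.12012,1.12050,1.11990,1.12030,125'.split(',')

dt = datetime.strptime(row[0] + ' ' + row[1], '%Y.%m.%d %H:%M')
o, h, l, c, v = map(float, row[2:7])
print(dt, o, h, l, c, v)   # 2020-01-02 10:00:00 1.12012 1.1205 1.1199 1.1203 125.0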
diff --git a/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/det_models/fcenet_r50dcnv2_fpn.py b/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/det_models/fcenet_r50dcnv2_fpn.py
deleted file mode 100644
index 8e76e39a6e8088ac20671f72fc5ed8448b21250b..0000000000000000000000000000000000000000
--- a/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/det_models/fcenet_r50dcnv2_fpn.py
+++ /dev/null
@@ -1,35 +0,0 @@
-model = dict(
- type='FCENet',
- backbone=dict(
- type='mmdet.ResNet',
- depth=50,
- num_stages=4,
- out_indices=(1, 2, 3),
- frozen_stages=-1,
- norm_cfg=dict(type='BN', requires_grad=True),
- norm_eval=True,
- style='pytorch',
- dcn=dict(type='DCNv2', deform_groups=2, fallback_on_stride=False),
- init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50'),
- stage_with_dcn=(False, True, True, True)),
- neck=dict(
- type='mmdet.FPN',
- in_channels=[512, 1024, 2048],
- out_channels=256,
- add_extra_convs='on_output',
- num_outs=3,
- relu_before_extra_convs=True,
- act_cfg=None),
- bbox_head=dict(
- type='FCEHead',
- in_channels=256,
- scales=(8, 16, 32),
- fourier_degree=5,
- loss=dict(type='FCELoss', num_sample=50),
- postprocessor=dict(
- type='FCEPostprocessor',
- text_repr_type='poly',
- num_reconstr_points=50,
- alpha=1.0,
- beta=2.0,
- score_thr=0.3)))
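In the config above, out_indices=(1, 2, 3) selects the C3-C5 stages of ResNet-50, whose channel widths match the FPN's in_channels=[512, 1024, 2048]. A small sketch confirming those widths with a plain torchvision ResNet-50 (used here only for illustration; the config itself builds mmdet.ResNet):

import torch
from torchvision.models import resnet50

m = resnet50()
x = torch.randn(1, 3, 224, 224)
x = m.maxpool(m.relu(m.bn1(m.conv1(x))))

feats = []
for stage in (m.layer1, m.layer2, m.layer3, m.layer4):
    x = stage(x)
    feats.append(x)

# stages 1, 2 and 3 (C3, C4, C5) are the ones fed to the FPN
print([f.shape[1] for f in feats[1:]])   # [512, 1024, 2048]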
diff --git a/spaces/MCkernick/Image_Restoration_Colorization/Global/detection_models/Synchronized-BatchNorm-PyTorch/sync_batchnorm/batchnorm.py b/spaces/MCkernick/Image_Restoration_Colorization/Global/detection_models/Synchronized-BatchNorm-PyTorch/sync_batchnorm/batchnorm.py
deleted file mode 100644
index bf8d7a7325b474771a11a137053971fd40426079..0000000000000000000000000000000000000000
--- a/spaces/MCkernick/Image_Restoration_Colorization/Global/detection_models/Synchronized-BatchNorm-PyTorch/sync_batchnorm/batchnorm.py
+++ /dev/null
@@ -1,412 +0,0 @@
-# -*- coding: utf-8 -*-
-# File : batchnorm.py
-# Author : Jiayuan Mao
-# Email : maojiayuan@gmail.com
-# Date : 27/01/2018
-#
-# This file is part of Synchronized-BatchNorm-PyTorch.
-# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch
-# Distributed under MIT License.
-
-import collections
-import contextlib
-
-import torch
-import torch.nn.functional as F
-
-from torch.nn.modules.batchnorm import _BatchNorm
-
-try:
- from torch.nn.parallel._functions import ReduceAddCoalesced, Broadcast
-except ImportError:
- ReduceAddCoalesced = Broadcast = None
-
-try:
- from jactorch.parallel.comm import SyncMaster
- from jactorch.parallel.data_parallel import JacDataParallel as DataParallelWithCallback
-except ImportError:
- from .comm import SyncMaster
- from .replicate import DataParallelWithCallback
-
-__all__ = [
- 'set_sbn_eps_mode',
- 'SynchronizedBatchNorm1d', 'SynchronizedBatchNorm2d', 'SynchronizedBatchNorm3d',
- 'patch_sync_batchnorm', 'convert_model'
-]
-
-
-SBN_EPS_MODE = 'clamp'
-
-
-def set_sbn_eps_mode(mode):
- global SBN_EPS_MODE
- assert mode in ('clamp', 'plus')
- SBN_EPS_MODE = mode
-
-
-def _sum_ft(tensor):
- """sum over the first and last dimention"""
- return tensor.sum(dim=0).sum(dim=-1)
-
-
-def _unsqueeze_ft(tensor):
- """add new dimensions at the front and the tail"""
- return tensor.unsqueeze(0).unsqueeze(-1)
-
-
-_ChildMessage = collections.namedtuple('_ChildMessage', ['sum', 'ssum', 'sum_size'])
-_MasterMessage = collections.namedtuple('_MasterMessage', ['sum', 'inv_std'])
-
-
-class _SynchronizedBatchNorm(_BatchNorm):
- def __init__(self, num_features, eps=1e-5, momentum=0.1, affine=True, track_running_stats=True):
- assert ReduceAddCoalesced is not None, 'Can not use Synchronized Batch Normalization without CUDA support.'
-
- super(_SynchronizedBatchNorm, self).__init__(num_features, eps=eps, momentum=momentum, affine=affine,
- track_running_stats=track_running_stats)
-
- if not self.track_running_stats:
- import warnings
- warnings.warn('track_running_stats=False is not supported by the SynchronizedBatchNorm.')
-
- self._sync_master = SyncMaster(self._data_parallel_master)
-
- self._is_parallel = False
- self._parallel_id = None
- self._slave_pipe = None
-
- def forward(self, input):
- # If it is not parallel computation or is in evaluation mode, use PyTorch's implementation.
- if not (self._is_parallel and self.training):
- return F.batch_norm(
- input, self.running_mean, self.running_var, self.weight, self.bias,
- self.training, self.momentum, self.eps)
-
- # Resize the input to (B, C, -1).
- input_shape = input.size()
- assert input.size(1) == self.num_features, 'Channel size mismatch: got {}, expect {}.'.format(input.size(1), self.num_features)
- input = input.view(input.size(0), self.num_features, -1)
-
- # Compute the sum and square-sum.
- sum_size = input.size(0) * input.size(2)
- input_sum = _sum_ft(input)
- input_ssum = _sum_ft(input ** 2)
-
- # Reduce-and-broadcast the statistics.
- if self._parallel_id == 0:
- mean, inv_std = self._sync_master.run_master(_ChildMessage(input_sum, input_ssum, sum_size))
- else:
- mean, inv_std = self._slave_pipe.run_slave(_ChildMessage(input_sum, input_ssum, sum_size))
-
- # Compute the output.
- if self.affine:
- # MJY:: Fuse the multiplication for speed.
- output = (input - _unsqueeze_ft(mean)) * _unsqueeze_ft(inv_std * self.weight) + _unsqueeze_ft(self.bias)
- else:
- output = (input - _unsqueeze_ft(mean)) * _unsqueeze_ft(inv_std)
-
- # Reshape it.
- return output.view(input_shape)
-
- def __data_parallel_replicate__(self, ctx, copy_id):
- self._is_parallel = True
- self._parallel_id = copy_id
-
- # parallel_id == 0 means master device.
- if self._parallel_id == 0:
- ctx.sync_master = self._sync_master
- else:
- self._slave_pipe = ctx.sync_master.register_slave(copy_id)
-
- def _data_parallel_master(self, intermediates):
- """Reduce the sum and square-sum, compute the statistics, and broadcast it."""
-
- # Always using same "device order" makes the ReduceAdd operation faster.
- # Thanks to:: Tete Xiao (http://tetexiao.com/)
- intermediates = sorted(intermediates, key=lambda i: i[1].sum.get_device())
-
- to_reduce = [i[1][:2] for i in intermediates]
- to_reduce = [j for i in to_reduce for j in i] # flatten
- target_gpus = [i[1].sum.get_device() for i in intermediates]
-
- sum_size = sum([i[1].sum_size for i in intermediates])
- sum_, ssum = ReduceAddCoalesced.apply(target_gpus[0], 2, *to_reduce)
- mean, inv_std = self._compute_mean_std(sum_, ssum, sum_size)
-
- broadcasted = Broadcast.apply(target_gpus, mean, inv_std)
-
- outputs = []
- for i, rec in enumerate(intermediates):
- outputs.append((rec[0], _MasterMessage(*broadcasted[i*2:i*2+2])))
-
- return outputs
-
- def _compute_mean_std(self, sum_, ssum, size):
- """Compute the mean and standard-deviation with sum and square-sum. This method
- also maintains the moving average on the master device."""
- assert size > 1, 'BatchNorm computes unbiased standard-deviation, which requires size > 1.'
- mean = sum_ / size
- sumvar = ssum - sum_ * mean
- unbias_var = sumvar / (size - 1)
- bias_var = sumvar / size
-
- if hasattr(torch, 'no_grad'):
- with torch.no_grad():
- self.running_mean = (1 - self.momentum) * self.running_mean + self.momentum * mean.data
- self.running_var = (1 - self.momentum) * self.running_var + self.momentum * unbias_var.data
- else:
- self.running_mean = (1 - self.momentum) * self.running_mean + self.momentum * mean.data
- self.running_var = (1 - self.momentum) * self.running_var + self.momentum * unbias_var.data
-
- if SBN_EPS_MODE == 'clamp':
- return mean, bias_var.clamp(self.eps) ** -0.5
- elif SBN_EPS_MODE == 'plus':
- return mean, (bias_var + self.eps) ** -0.5
- else:
- raise ValueError('Unknown EPS mode: {}.'.format(SBN_EPS_MODE))
-
-
-class SynchronizedBatchNorm1d(_SynchronizedBatchNorm):
- r"""Applies Synchronized Batch Normalization over a 2d or 3d input that is seen as a
- mini-batch.
-
- .. math::
-
- y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta
-
- This module differs from the built-in PyTorch BatchNorm1d as the mean and
- standard-deviation are reduced across all devices during training.
-
- For example, when one uses `nn.DataParallel` to wrap the network during
-    training, PyTorch's implementation normalizes the tensor on each device using
-    the statistics only on that device, which accelerates the computation and
- is also easy to implement, but the statistics might be inaccurate.
- Instead, in this synchronized version, the statistics will be computed
- over all training samples distributed on multiple devices.
-
-    Note that, for the one-GPU or CPU-only case, this module behaves exactly the same
- as the built-in PyTorch implementation.
-
- The mean and standard-deviation are calculated per-dimension over
- the mini-batches and gamma and beta are learnable parameter vectors
- of size C (where C is the input size).
-
- During training, this layer keeps a running estimate of its computed mean
- and variance. The running sum is kept with a default momentum of 0.1.
-
- During evaluation, this running mean/variance is used for normalization.
-
- Because the BatchNorm is done over the `C` dimension, computing statistics
- on `(N, L)` slices, it's common terminology to call this Temporal BatchNorm
-
- Args:
- num_features: num_features from an expected input of size
- `batch_size x num_features [x width]`
- eps: a value added to the denominator for numerical stability.
- Default: 1e-5
- momentum: the value used for the running_mean and running_var
- computation. Default: 0.1
- affine: a boolean value that when set to ``True``, gives the layer learnable
- affine parameters. Default: ``True``
-
- Shape::
- - Input: :math:`(N, C)` or :math:`(N, C, L)`
- - Output: :math:`(N, C)` or :math:`(N, C, L)` (same shape as input)
-
- Examples:
- >>> # With Learnable Parameters
- >>> m = SynchronizedBatchNorm1d(100)
- >>> # Without Learnable Parameters
- >>> m = SynchronizedBatchNorm1d(100, affine=False)
- >>> input = torch.autograd.Variable(torch.randn(20, 100))
- >>> output = m(input)
- """
-
- def _check_input_dim(self, input):
- if input.dim() != 2 and input.dim() != 3:
- raise ValueError('expected 2D or 3D input (got {}D input)'
- .format(input.dim()))
-
-
-class SynchronizedBatchNorm2d(_SynchronizedBatchNorm):
- r"""Applies Batch Normalization over a 4d input that is seen as a mini-batch
- of 3d inputs
-
- .. math::
-
- y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta
-
- This module differs from the built-in PyTorch BatchNorm2d as the mean and
- standard-deviation are reduced across all devices during training.
-
- For example, when one uses `nn.DataParallel` to wrap the network during
-    training, PyTorch's implementation normalizes the tensor on each device using
-    the statistics only on that device, which accelerates the computation and
- is also easy to implement, but the statistics might be inaccurate.
- Instead, in this synchronized version, the statistics will be computed
- over all training samples distributed on multiple devices.
-
-    Note that, for the one-GPU or CPU-only case, this module behaves exactly the same
- as the built-in PyTorch implementation.
-
- The mean and standard-deviation are calculated per-dimension over
- the mini-batches and gamma and beta are learnable parameter vectors
- of size C (where C is the input size).
-
- During training, this layer keeps a running estimate of its computed mean
- and variance. The running sum is kept with a default momentum of 0.1.
-
- During evaluation, this running mean/variance is used for normalization.
-
- Because the BatchNorm is done over the `C` dimension, computing statistics
- on `(N, H, W)` slices, it's common terminology to call this Spatial BatchNorm
-
- Args:
- num_features: num_features from an expected input of
- size batch_size x num_features x height x width
- eps: a value added to the denominator for numerical stability.
- Default: 1e-5
- momentum: the value used for the running_mean and running_var
- computation. Default: 0.1
- affine: a boolean value that when set to ``True``, gives the layer learnable
- affine parameters. Default: ``True``
-
- Shape::
- - Input: :math:`(N, C, H, W)`
- - Output: :math:`(N, C, H, W)` (same shape as input)
-
- Examples:
- >>> # With Learnable Parameters
- >>> m = SynchronizedBatchNorm2d(100)
- >>> # Without Learnable Parameters
- >>> m = SynchronizedBatchNorm2d(100, affine=False)
- >>> input = torch.autograd.Variable(torch.randn(20, 100, 35, 45))
- >>> output = m(input)
- """
-
- def _check_input_dim(self, input):
- if input.dim() != 4:
- raise ValueError('expected 4D input (got {}D input)'
- .format(input.dim()))
-
-
-class SynchronizedBatchNorm3d(_SynchronizedBatchNorm):
- r"""Applies Batch Normalization over a 5d input that is seen as a mini-batch
- of 4d inputs
-
- .. math::
-
- y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta
-
- This module differs from the built-in PyTorch BatchNorm3d as the mean and
- standard-deviation are reduced across all devices during training.
-
- For example, when one uses `nn.DataParallel` to wrap the network during
-    training, PyTorch's implementation normalizes the tensor on each device using
-    the statistics only on that device, which accelerates the computation and
- is also easy to implement, but the statistics might be inaccurate.
- Instead, in this synchronized version, the statistics will be computed
- over all training samples distributed on multiple devices.
-
-    Note that, for the one-GPU or CPU-only case, this module behaves exactly the same
- as the built-in PyTorch implementation.
-
- The mean and standard-deviation are calculated per-dimension over
- the mini-batches and gamma and beta are learnable parameter vectors
- of size C (where C is the input size).
-
- During training, this layer keeps a running estimate of its computed mean
- and variance. The running sum is kept with a default momentum of 0.1.
-
- During evaluation, this running mean/variance is used for normalization.
-
- Because the BatchNorm is done over the `C` dimension, computing statistics
- on `(N, D, H, W)` slices, it's common terminology to call this Volumetric BatchNorm
- or Spatio-temporal BatchNorm
-
- Args:
- num_features: num_features from an expected input of
- size batch_size x num_features x depth x height x width
- eps: a value added to the denominator for numerical stability.
- Default: 1e-5
- momentum: the value used for the running_mean and running_var
- computation. Default: 0.1
- affine: a boolean value that when set to ``True``, gives the layer learnable
- affine parameters. Default: ``True``
-
- Shape::
- - Input: :math:`(N, C, D, H, W)`
- - Output: :math:`(N, C, D, H, W)` (same shape as input)
-
- Examples:
- >>> # With Learnable Parameters
- >>> m = SynchronizedBatchNorm3d(100)
- >>> # Without Learnable Parameters
- >>> m = SynchronizedBatchNorm3d(100, affine=False)
- >>> input = torch.autograd.Variable(torch.randn(20, 100, 35, 45, 10))
- >>> output = m(input)
- """
-
- def _check_input_dim(self, input):
- if input.dim() != 5:
- raise ValueError('expected 5D input (got {}D input)'
- .format(input.dim()))
-
-
-@contextlib.contextmanager
-def patch_sync_batchnorm():
- import torch.nn as nn
-
- backup = nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d
-
- nn.BatchNorm1d = SynchronizedBatchNorm1d
- nn.BatchNorm2d = SynchronizedBatchNorm2d
- nn.BatchNorm3d = SynchronizedBatchNorm3d
-
- yield
-
- nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d = backup
-
-
-def convert_model(module):
- """Traverse the input module and its child recursively
- and replace all instance of torch.nn.modules.batchnorm.BatchNorm*N*d
- to SynchronizedBatchNorm*N*d
-
- Args:
-        module: the input module to be converted to a SyncBN model
-
- Examples:
- >>> import torch.nn as nn
- >>> import torchvision
- >>> # m is a standard pytorch model
- >>> m = torchvision.models.resnet18(True)
- >>> m = nn.DataParallel(m)
- >>> # after convert, m is using SyncBN
- >>> m = convert_model(m)
- """
- if isinstance(module, torch.nn.DataParallel):
- mod = module.module
- mod = convert_model(mod)
- mod = DataParallelWithCallback(mod, device_ids=module.device_ids)
- return mod
-
- mod = module
- for pth_module, sync_module in zip([torch.nn.modules.batchnorm.BatchNorm1d,
- torch.nn.modules.batchnorm.BatchNorm2d,
- torch.nn.modules.batchnorm.BatchNorm3d],
- [SynchronizedBatchNorm1d,
- SynchronizedBatchNorm2d,
- SynchronizedBatchNorm3d]):
- if isinstance(module, pth_module):
- mod = sync_module(module.num_features, module.eps, module.momentum, module.affine)
- mod.running_mean = module.running_mean
- mod.running_var = module.running_var
- if module.affine:
- mod.weight.data = module.weight.data.clone().detach()
- mod.bias.data = module.bias.data.clone().detach()
-
- for name, child in module.named_children():
- mod.add_module(name, convert_model(child))
-
- return mod
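A short usage sketch for the two conversion paths defined above. It assumes the sync_batchnorm package this file belongs to is importable and that the model will eventually run with CUDA, since the synchronized layers only pay off under multi-GPU data parallelism:

import torch.nn as nn
import torchvision

from sync_batchnorm.batchnorm import convert_model, patch_sync_batchnorm

# 1) convert an already-built model: BatchNorm*d layers become SynchronizedBatchNorm*d
model = nn.DataParallel(torchvision.models.resnet18())
model = convert_model(model)

# 2) or patch nn.BatchNorm*d while the model is being built
with patch_sync_batchnorm():
    patched = torchvision.models.resnet18()   # its BN layers are now the synchronized variants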
diff --git a/spaces/MLVKU/Human_Object_Interaction/hotr/engine/evaluator_coco.py b/spaces/MLVKU/Human_Object_Interaction/hotr/engine/evaluator_coco.py
deleted file mode 100644
index afbdcc123bcfb3daa00614bd26e26795b68d6de3..0000000000000000000000000000000000000000
--- a/spaces/MLVKU/Human_Object_Interaction/hotr/engine/evaluator_coco.py
+++ /dev/null
@@ -1,62 +0,0 @@
-import os
-import torch
-import hotr.util.misc as utils
-import hotr.util.logger as loggers
-from hotr.data.evaluators.coco_eval import CocoEvaluator
-
-@torch.no_grad()
-def coco_evaluate(model, criterion, postprocessors, data_loader, base_ds, device, output_dir):
- model.eval()
- criterion.eval()
-
- metric_logger = loggers.MetricLogger(delimiter=" ")
- metric_logger.add_meter('class_error', utils.SmoothedValue(window_size=1, fmt='{value:.2f}'))
- header = 'Evaluation'
-
- iou_types = tuple(k for k in ('segm', 'bbox') if k in postprocessors.keys())
- coco_evaluator = CocoEvaluator(base_ds, iou_types)
- print_freq = len(data_loader)
- # coco_evaluator.coco_eval[iou_types[0]].params.iouThrs = [0, 0.1, 0.5, 0.75]
-
- print("\n>>> [MS-COCO Evaluation] <<<")
- for samples, targets in metric_logger.log_every(data_loader, print_freq, header):
- samples = samples.to(device)
- targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
-
- outputs = model(samples)
- loss_dict = criterion(outputs, targets)
- weight_dict = criterion.weight_dict
-
- # reduce losses over all GPUs for logging purposes
- loss_dict_reduced = utils.reduce_dict(loss_dict)
- loss_dict_reduced_scaled = {k: v * weight_dict[k]
- for k, v in loss_dict_reduced.items() if k in weight_dict}
- loss_dict_reduced_unscaled = {f'{k}_unscaled': v
- for k, v in loss_dict_reduced.items()}
- metric_logger.update(loss=sum(loss_dict_reduced_scaled.values()),
- **loss_dict_reduced_scaled,
- **loss_dict_reduced_unscaled)
- metric_logger.update(class_error=loss_dict_reduced['class_error'])
-
- orig_target_sizes = torch.stack([t["orig_size"] for t in targets], dim=0)
- results = postprocessors['bbox'](outputs, orig_target_sizes)
- res = {target['image_id'].item(): output for target, output in zip(targets, results)}
- if coco_evaluator is not None:
- coco_evaluator.update(res)
-
- # gather the stats from all processes
- metric_logger.synchronize_between_processes()
- print("\n>>> [Averaged stats] <<<\n", metric_logger)
- if coco_evaluator is not None:
- coco_evaluator.synchronize_between_processes()
-
- # accumulate predictions from all images
- if coco_evaluator is not None:
- coco_evaluator.accumulate()
- coco_evaluator.summarize()
- stats = {k: meter.global_avg for k, meter in metric_logger.meters.items()}
- if coco_evaluator is not None:
- if 'bbox' in postprocessors.keys():
- stats['coco_eval_bbox'] = coco_evaluator.coco_eval['bbox'].stats.tolist()
-
- return stats, coco_evaluator
\ No newline at end of file
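The logging in coco_evaluate above scales each reduced loss by its weight before summing; a tiny self-contained sketch of that bookkeeping with made-up numbers:

# hypothetical reduced losses and criterion.weight_dict
loss_dict_reduced = {'loss_ce': 0.8, 'loss_bbox': 0.4, 'class_error': 12.5}
weight_dict = {'loss_ce': 1.0, 'loss_bbox': 5.0}

# only weighted terms contribute to the reported total; class_error is logged separately
scaled = {k: v * weight_dict[k] for k, v in loss_dict_reduced.items() if k in weight_dict}
print(scaled, sum(scaled.values()))   # {'loss_ce': 0.8, 'loss_bbox': 2.0} 2.8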
diff --git a/spaces/MWilinski/bot/data/scrapers/stack_overflow_scraper.py b/spaces/MWilinski/bot/data/scrapers/stack_overflow_scraper.py
deleted file mode 100644
index 003b139b52206043ef74b079b46c8cfa44fc66cf..0000000000000000000000000000000000000000
--- a/spaces/MWilinski/bot/data/scrapers/stack_overflow_scraper.py
+++ /dev/null
@@ -1,91 +0,0 @@
-import re
-import csv
-import time
-import requests
-from typing import List
-import pandas as pd
-from tqdm import tqdm
-from bs4 import BeautifulSoup
-
-
-def scrape_question_with_answers(question_url: str) -> List[str]:
- url = 'https://stackoverflow.com/' + question_url
- response = requests.get(url)
- soup = BeautifulSoup(response.content, 'html.parser')
-
- title = soup.find('title').text.replace(' - Stack Overflow', '')
- question_div = soup.find('div', {'class': 'postcell post-layout--right'})
- question = question_div.find('p').text
- answers_div = soup.find('div', {'class': 'answercell post-layout--right'})
- answer = answers_div.find('div', {'class': 's-prose js-post-body'}).text
- return [title, question, answer, url]
-
-
-def scrape_questions_page(url: str, min_votes: int, min_answers: int) -> List[List[str]]:
- response = requests.get(url)
- soup = BeautifulSoup(response.content, 'html.parser')
- posts_summaries = soup.find_all('div', {'class':'s-post-summary js-post-summary'})
-
- qa_data = []
- for summary in posts_summaries:
- stats_div = summary.find('div', {'class': 's-post-summary--stats'})
- vote_div = stats_div.find('div', {
- 'class': 's-post-summary--stats-item s-post-summary--stats-item__emphasized',
- 'title': re.compile(r'^Score of \d+$')})
- if vote_div:
- vote_number = int(vote_div.find('span', {'class': 's-post-summary--stats-item-number'}).text)
- else:
- vote_number = 0
- answer_div = stats_div.find('div', {
- 'class': 's-post-summary--stats-item',
- 'title': re.compile(r'^\d+ answers$')})
- if answer_div:
- answer_number = int(answer_div.find('span', {'class': 's-post-summary--stats-item-number'}).text)
- else:
- answer_number = 0
-
- question_href = summary.find('a', {'class': 's-link'})['href']
- if vote_number >= min_votes and answer_number >= min_answers:
- try:
- qa_data.append(scrape_question_with_answers(question_href))
- except Exception as error:
- print(error)
-
- time.sleep(1.5)
- return qa_data
-
-
-def crawl_and_save_qa(
- filename: str,
- base_url: str,
- start_page: int,
- n_pages: int=10,
- min_votes: int=1,
- min_answers: int=1
-):
- with open(filename, 'a', newline='') as f:
- writer = csv.writer(f)
- if start_page == 1:
- writer.writerow(['title', 'question', 'answer', 'url'])
- for page_num in tqdm(range(start_page, start_page+n_pages)):
- page_data = scrape_questions_page(
- base_url.format(page_num),
- min_votes,
- min_answers
- )
- if page_data:
- for qa_data in page_data:
- writer.writerow(qa_data)
-
-
-if __name__ == '__main__':
- filename = '../datasets/stackoverflow_linux.csv'
- url = 'https://stackoverflow.com/questions/tagged/linux?tab=votes&page={}&pagesize=15'
- crawl_and_save_qa(
- filename=filename,
- base_url=url,
- start_page=21,
- n_pages=10,
- min_votes=1,
- min_answers=1
- )
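Once the crawler above has written its CSV, the scraped pairs can be inspected with pandas (already imported in the script); a short follow-up using the same filename as the __main__ block:

import pandas as pd

df = pd.read_csv('../datasets/stackoverflow_linux.csv')
print(df.columns.tolist())            # ['title', 'question', 'answer', 'url']
print(len(df), 'question/answer pairs')
print(df[['title', 'url']].head())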
diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/GroundedSAM/segment_anything/segment_anything/modeling/__init__.py b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/GroundedSAM/segment_anything/segment_anything/modeling/__init__.py
deleted file mode 100644
index 38e906243d898d7fc071c0fe218338c5cace3ea1..0000000000000000000000000000000000000000
--- a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/GroundedSAM/segment_anything/segment_anything/modeling/__init__.py
+++ /dev/null
@@ -1,11 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from .sam import Sam
-from .image_encoder import ImageEncoderViT
-from .mask_decoder import MaskDecoder
-from .prompt_encoder import PromptEncoder
-from .transformer import TwoWayTransformer
diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/data/mask_mapper.py b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/data/mask_mapper.py
deleted file mode 100644
index 29290c16c3043310aa5ede043f3096f0edc4eb09..0000000000000000000000000000000000000000
--- a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/data/mask_mapper.py
+++ /dev/null
@@ -1,64 +0,0 @@
-import numpy as np
-import torch
-
-from XMem.dataset.util import all_to_onehot
-
-
-class MaskMapper:
- """
- This class is used to convert a indexed-mask to a one-hot representation.
- It also takes care of remapping non-continuous indices
- It has two modes:
- 1. Default. Only masks with new indices are supposed to go into the remapper.
- This is also the case for YouTubeVOS.
- i.e., regions with index 0 are not "background", but "don't care".
-
- 2. Exhaustive. Regions with index 0 are considered "background".
- Every single pixel is considered to be "labeled".
- """
- def __init__(self):
- self.labels = []
- self.remappings = {}
-
- # if coherent, no mapping is required
- self.coherent = True
-
- def convert_mask(self, mask, exhaustive=False):
- # mask is in index representation, H*W numpy array
- labels = np.unique(mask).astype(np.uint8)
- labels = labels[labels!=0].tolist()
-
- new_labels = list(set(labels) - set(self.labels))
- if not exhaustive:
- assert len(new_labels) == len(labels), 'Old labels found in non-exhaustive mode'
-
- # add new remappings
- for i, l in enumerate(new_labels):
- self.remappings[l] = i+len(self.labels)+1
- if self.coherent and i+len(self.labels)+1 != l:
- self.coherent = False
-
- if exhaustive:
- new_mapped_labels = range(1, len(self.labels)+len(new_labels)+1)
- else:
- if self.coherent:
- new_mapped_labels = new_labels
- else:
- new_mapped_labels = range(len(self.labels)+1, len(self.labels)+len(new_labels)+1)
-
- self.labels.extend(new_labels)
- mask = torch.from_numpy(all_to_onehot(mask, self.labels)).float()
-
- # mask num_objects*H*W
- return mask, new_mapped_labels
-
-
- def remap_index_mask(self, mask):
- # mask is in index representation, H*W numpy array
- if self.coherent:
- return mask
-
- new_mask = np.zeros_like(mask)
- for l, i in self.remappings.items():
- new_mask[mask==i] = l
- return new_mask
\ No newline at end of file
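To make the index-mask to one-hot conversion above concrete, here is a numpy-only sketch of the representation that convert_mask builds for a toy mask with labels 3 and 7 (illustrative only, not the XMem all_to_onehot implementation itself):

import numpy as np

mask = np.array([[0, 3, 3],
                 [0, 7, 7]], dtype=np.uint8)     # 0 = background / "don't care"

labels = np.unique(mask)
labels = labels[labels != 0].tolist()            # [3, 7]

# one binary H*W plane per label, stacked to num_objects*H*W
onehot = np.stack([(mask == l).astype(np.uint8) for l in labels])
print(labels)          # [3, 7]
print(onehot.shape)    # (2, 2, 3)
print(onehot[0])       # the plane for label 3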
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/cnn/bricks/upsample.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/cnn/bricks/upsample.py
deleted file mode 100644
index a1a353767d0ce8518f0d7289bed10dba0178ed12..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/cnn/bricks/upsample.py
+++ /dev/null
@@ -1,84 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import torch.nn as nn
-import torch.nn.functional as F
-
-from ..utils import xavier_init
-from .registry import UPSAMPLE_LAYERS
-
-UPSAMPLE_LAYERS.register_module('nearest', module=nn.Upsample)
-UPSAMPLE_LAYERS.register_module('bilinear', module=nn.Upsample)
-
-
-@UPSAMPLE_LAYERS.register_module(name='pixel_shuffle')
-class PixelShufflePack(nn.Module):
- """Pixel Shuffle upsample layer.
-
- This module packs `F.pixel_shuffle()` and a nn.Conv2d module together to
- achieve a simple upsampling with pixel shuffle.
-
- Args:
- in_channels (int): Number of input channels.
- out_channels (int): Number of output channels.
- scale_factor (int): Upsample ratio.
- upsample_kernel (int): Kernel size of the conv layer to expand the
- channels.
- """
-
- def __init__(self, in_channels, out_channels, scale_factor,
- upsample_kernel):
- super(PixelShufflePack, self).__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.scale_factor = scale_factor
- self.upsample_kernel = upsample_kernel
- self.upsample_conv = nn.Conv2d(
- self.in_channels,
- self.out_channels * scale_factor * scale_factor,
- self.upsample_kernel,
- padding=(self.upsample_kernel - 1) // 2)
- self.init_weights()
-
- def init_weights(self):
- xavier_init(self.upsample_conv, distribution='uniform')
-
- def forward(self, x):
- x = self.upsample_conv(x)
- x = F.pixel_shuffle(x, self.scale_factor)
- return x
-
-
-def build_upsample_layer(cfg, *args, **kwargs):
- """Build upsample layer.
-
- Args:
- cfg (dict): The upsample layer config, which should contain:
-
- - type (str): Layer type.
- - scale_factor (int): Upsample ratio, which is not applicable to
- deconv.
- - layer args: Args needed to instantiate a upsample layer.
- args (argument list): Arguments passed to the ``__init__``
- method of the corresponding conv layer.
- kwargs (keyword arguments): Keyword arguments passed to the
- ``__init__`` method of the corresponding conv layer.
-
- Returns:
- nn.Module: Created upsample layer.
- """
- if not isinstance(cfg, dict):
- raise TypeError(f'cfg must be a dict, but got {type(cfg)}')
- if 'type' not in cfg:
- raise KeyError(
- f'the cfg dict must contain the key "type", but got {cfg}')
- cfg_ = cfg.copy()
-
- layer_type = cfg_.pop('type')
- if layer_type not in UPSAMPLE_LAYERS:
- raise KeyError(f'Unrecognized upsample type {layer_type}')
- else:
- upsample = UPSAMPLE_LAYERS.get(layer_type)
-
- if upsample is nn.Upsample:
- cfg_['mode'] = layer_type
- layer = upsample(*args, **kwargs, **cfg_)
- return layer
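A brief usage sketch for build_upsample_layer above, assuming an mmcv installation that ships this module (the file above is a vendored copy of it). The cfg dict selects the registered layer type, and for 'nearest'/'bilinear' the type doubles as the nn.Upsample mode:

import torch
from mmcv.cnn import build_upsample_layer

x = torch.randn(1, 16, 32, 32)

up1 = build_upsample_layer(dict(type='nearest', scale_factor=2))
print(up1(x).shape)          # torch.Size([1, 16, 64, 64])

up2 = build_upsample_layer(
    dict(type='pixel_shuffle', in_channels=16, out_channels=16,
         scale_factor=2, upsample_kernel=3))
print(up2(x).shape)          # torch.Size([1, 16, 64, 64])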
diff --git a/spaces/MirageML/sjc/voxnerf/data.py b/spaces/MirageML/sjc/voxnerf/data.py
deleted file mode 100644
index 3faf1cbcd57fc5cd85de452ddfc4514f1d23e87a..0000000000000000000000000000000000000000
--- a/spaces/MirageML/sjc/voxnerf/data.py
+++ /dev/null
@@ -1,46 +0,0 @@
-from pathlib import Path
-import json
-import numpy as np
-import imageio
-from .utils import blend_rgba
-
-
-def load_blender(split, scene="lego", half_res=False):
- assert split in ("train", "val", "test")
-
- env_fname = Path(__file__).resolve().parents[1] / "env.json"
- with env_fname.open("r") as f:
- root = json.load(f)['data_root']
- root = Path(root) / scene
-
- with open(root / f'transforms_{split}.json', "r") as f:
- meta = json.load(f)
-
- imgs, poses = [], []
-
- for frame in meta['frames']:
- file_name = root / f"{frame['file_path']}.png"
- im = imageio.imread(file_name)
- c2w = frame['transform_matrix']
-
- imgs.append(im)
- poses.append(c2w)
-
- imgs = (np.array(imgs) / 255.).astype(np.float32) # (RGBA) imgs
- imgs = blend_rgba(imgs)
-    poses = np.array(poses).astype(np.float64)  # np.float is removed in newer numpy
-
- H, W = imgs[0].shape[:2]
- camera_angle_x = float(meta['camera_angle_x'])
- f = 1 / np.tan(camera_angle_x / 2) * (W / 2)
-
- if half_res:
- raise NotImplementedError()
-
- K = np.array([
- [f, 0, -(W/2 - 0.5)],
- [0, -f, -(H/2 - 0.5)],
- [0, 0, -1]
- ]) # note OpenGL -ve z convention;
-
- return imgs, K, poses
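The focal length and intrinsics at the end of load_blender follow the pinhole relation f = (W / 2) / tan(camera_angle_x / 2); a short numpy sketch with a hypothetical Blender-style camera:

import numpy as np

H, W = 800, 800
camera_angle_x = 0.6911112070083618      # hypothetical value read from a transforms_*.json

f = 1 / np.tan(camera_angle_x / 2) * (W / 2)
K = np.array([
    [f, 0, -(W / 2 - 0.5)],
    [0, -f, -(H / 2 - 0.5)],
    [0, 0, -1],
])                                       # OpenGL-style convention, -z pointing forward
print(round(f, 2))                       # 1111.11
print(K)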
diff --git a/spaces/Moran/Aviv_Moran_Summarization/README.md b/spaces/Moran/Aviv_Moran_Summarization/README.md
deleted file mode 100644
index 312f35aeecfc648819b20ca44fb6e0e22f09ebe8..0000000000000000000000000000000000000000
--- a/spaces/Moran/Aviv_Moran_Summarization/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Aviv Moran Summarization
-emoji: 📰
-colorFrom: pink
-colorTo: gray
-sdk: streamlit
-sdk_version: 1.10.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/NAACL2022/CLIP-Caption-Reward/captioning/models/__init__.py b/spaces/NAACL2022/CLIP-Caption-Reward/captioning/models/__init__.py
deleted file mode 100644
index 29f7a9cb48b9397ed0b658c15580b43c5ae1300d..0000000000000000000000000000000000000000
--- a/spaces/NAACL2022/CLIP-Caption-Reward/captioning/models/__init__.py
+++ /dev/null
@@ -1,73 +0,0 @@
-from __future__ import absolute_import
-from __future__ import division
-from __future__ import print_function
-
-import os
-import copy
-
-import numpy as np
-import torch
-
-from .ShowTellModel import ShowTellModel
-from .FCModel import FCModel
-from .AttModel import *
-from .TransformerModel import TransformerModel
-from .cachedTransformer import TransformerModel as cachedTransformer
-from .BertCapModel import BertCapModel
-from .M2Transformer import M2TransformerModel
-from .AoAModel import AoAModel
-
-def setup(opt):
- if opt.caption_model in ['fc', 'show_tell']:
- print('Warning: %s model is mostly deprecated; many new features are not supported.' %opt.caption_model)
- if opt.caption_model == 'fc':
- print('Use newfc instead of fc')
- if opt.caption_model == 'fc':
- model = FCModel(opt)
- elif opt.caption_model == 'language_model':
- model = LMModel(opt)
- elif opt.caption_model == 'newfc':
- model = NewFCModel(opt)
- elif opt.caption_model == 'show_tell':
- model = ShowTellModel(opt)
- # Att2in model in self-critical
- elif opt.caption_model == 'att2in':
- model = Att2inModel(opt)
- # Att2in model with two-layer MLP img embedding and word embedding
- elif opt.caption_model == 'att2in2':
- model = Att2in2Model(opt)
- elif opt.caption_model == 'att2all2':
- print('Warning: this is not a correct implementation of the att2all model in the original paper.')
- model = Att2all2Model(opt)
- # Adaptive Attention model from Knowing when to look
- elif opt.caption_model == 'adaatt':
- model = AdaAttModel(opt)
- # Adaptive Attention with maxout lstm
- elif opt.caption_model == 'adaattmo':
- model = AdaAttMOModel(opt)
- # Top-down attention model
- elif opt.caption_model in ['topdown', 'updown']:
- model = UpDownModel(opt)
- # StackAtt
- elif opt.caption_model == 'stackatt':
- model = StackAttModel(opt)
- # DenseAtt
- elif opt.caption_model == 'denseatt':
- model = DenseAttModel(opt)
- # Transformer
- elif opt.caption_model == 'transformer':
- if getattr(opt, 'cached_transformer', False):
- model = cachedTransformer(opt)
- else:
- model = TransformerModel(opt)
- # AoANet
- elif opt.caption_model == 'aoa':
- model = AoAModel(opt)
- elif opt.caption_model == 'bert':
- model = BertCapModel(opt)
- elif opt.caption_model == 'm2transformer':
- model = M2TransformerModel(opt)
- else:
- raise Exception("Caption model not supported: {}".format(opt.caption_model))
-
- return model
diff --git a/spaces/NCTCMumbai/NCTC/models/official/vision/image_classification/efficientnet/tfhub_export.py b/spaces/NCTCMumbai/NCTC/models/official/vision/image_classification/efficientnet/tfhub_export.py
deleted file mode 100644
index 3be8608a5cfc25442f5f936b4052f90b89c6cfce..0000000000000000000000000000000000000000
--- a/spaces/NCTCMumbai/NCTC/models/official/vision/image_classification/efficientnet/tfhub_export.py
+++ /dev/null
@@ -1,69 +0,0 @@
-# Copyright 2020 The TensorFlow Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ==============================================================================
-"""A script to export TF-Hub SavedModel."""
-
-from __future__ import absolute_import
-from __future__ import division
-# from __future__ import google_type_annotations
-from __future__ import print_function
-
-import os
-
-from absl import app
-from absl import flags
-
-import tensorflow as tf
-
-from official.vision.image_classification.efficientnet import efficientnet_model
-
-FLAGS = flags.FLAGS
-
-flags.DEFINE_string("model_name", None,
- "EfficientNet model name.")
-flags.DEFINE_string("model_path", None,
- "File path to TF model checkpoint.")
-flags.DEFINE_string("export_path", None,
- "TF-Hub SavedModel destination path to export.")
-
-
-def export_tfhub(model_path, hub_destination, model_name):
- """Restores a tf.keras.Model and saves for TF-Hub."""
- model_configs = dict(efficientnet_model.MODEL_CONFIGS)
- config = model_configs[model_name]
-
- image_input = tf.keras.layers.Input(
- shape=(None, None, 3), name="image_input", dtype=tf.float32)
- x = image_input * 255.0
- outputs = efficientnet_model.efficientnet(x, config)
- hub_model = tf.keras.Model(image_input, outputs)
- ckpt = tf.train.Checkpoint(model=hub_model)
- ckpt.restore(model_path).assert_existing_objects_matched()
- hub_model.save(
- os.path.join(hub_destination, "classification"), include_optimizer=False)
-
- feature_vector_output = hub_model.get_layer(name="top_pool").get_output_at(0)
- hub_model2 = tf.keras.Model(image_input, feature_vector_output)
- hub_model2.save(
- os.path.join(hub_destination, "feature-vector"), include_optimizer=False)
-
-
-def main(argv):
- if len(argv) > 1:
- raise app.UsageError("Too many command-line arguments.")
-
- export_tfhub(FLAGS.model_path, FLAGS.export_path, FLAGS.model_name)
-
-if __name__ == "__main__":
- app.run(main)
diff --git a/spaces/NN520/AI/src/app/page.tsx b/spaces/NN520/AI/src/app/page.tsx
deleted file mode 100644
index 0dff3431b098ce4fe282cc83fc87a93a28a43090..0000000000000000000000000000000000000000
--- a/spaces/NN520/AI/src/app/page.tsx
+++ /dev/null
@@ -1,15 +0,0 @@
-import dynamic from 'next/dynamic'
-
-const DynamicComponentWithNoSSR = dynamic(
- () => import('../components/chat'),
- { ssr: false }
-)
-
-export default function IndexPage() {
- return (
- <>
- <DynamicComponentWithNoSSR />
-
- </>
- )
-}
diff --git a/spaces/NikeZoldyck/green-screen-composition-transfer/models/__init__.py b/spaces/NikeZoldyck/green-screen-composition-transfer/models/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/NiuTaipu/moe-tts-test01/text/sanskrit.py b/spaces/NiuTaipu/moe-tts-test01/text/sanskrit.py
deleted file mode 100644
index 0223aaac384a2f850f5bc20651fc18eb964607d0..0000000000000000000000000000000000000000
--- a/spaces/NiuTaipu/moe-tts-test01/text/sanskrit.py
+++ /dev/null
@@ -1,62 +0,0 @@
-import re
-from indic_transliteration import sanscript
-
-
-# List of (iast, ipa) pairs:
-_iast_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('a', 'ə'),
- ('ā', 'aː'),
- ('ī', 'iː'),
- ('ū', 'uː'),
- ('ṛ', 'ɹ`'),
- ('ṝ', 'ɹ`ː'),
- ('ḷ', 'l`'),
- ('ḹ', 'l`ː'),
- ('e', 'eː'),
- ('o', 'oː'),
- ('k', 'k⁼'),
- ('k⁼h', 'kʰ'),
- ('g', 'g⁼'),
- ('g⁼h', 'gʰ'),
- ('ṅ', 'ŋ'),
- ('c', 'ʧ⁼'),
- ('ʧ⁼h', 'ʧʰ'),
- ('j', 'ʥ⁼'),
- ('ʥ⁼h', 'ʥʰ'),
- ('ñ', 'n^'),
- ('ṭ', 't`⁼'),
- ('t`⁼h', 't`ʰ'),
- ('ḍ', 'd`⁼'),
- ('d`⁼h', 'd`ʰ'),
- ('ṇ', 'n`'),
- ('t', 't⁼'),
- ('t⁼h', 'tʰ'),
- ('d', 'd⁼'),
- ('d⁼h', 'dʰ'),
- ('p', 'p⁼'),
- ('p⁼h', 'pʰ'),
- ('b', 'b⁼'),
- ('b⁼h', 'bʰ'),
- ('y', 'j'),
- ('ś', 'ʃ'),
- ('ṣ', 's`'),
- ('r', 'ɾ'),
- ('l̤', 'l`'),
- ('h', 'ɦ'),
- ("'", ''),
- ('~', '^'),
- ('ṃ', '^')
-]]
-
-
-def devanagari_to_ipa(text):
- text = text.replace('ॐ', 'ओम्')
- text = re.sub(r'\s*।\s*$', '.', text)
- text = re.sub(r'\s*।\s*', ', ', text)
- text = re.sub(r'\s*॥', '.', text)
- text = sanscript.transliterate(text, sanscript.DEVANAGARI, sanscript.IAST)
- for regex, replacement in _iast_to_ipa:
- text = re.sub(regex, replacement, text)
- text = re.sub('(.)[`ː]*ḥ', lambda x: x.group(0)
- [:-1]+'h'+x.group(1)+'*', text)
- return text
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/.github/ISSUE_TEMPLATE.md b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/.github/ISSUE_TEMPLATE.md
deleted file mode 100644
index 5c4c4493e4a8e5386b927e4f4554df925955d129..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/.github/ISSUE_TEMPLATE.md
+++ /dev/null
@@ -1,3 +0,0 @@
-## 👉 [Please follow one of these issue templates](https://github.com/pytorch/fairseq/issues/new/choose) 👈
-
-Note: to keep the backlog clean and actionable, issues may be immediately closed if they do not follow one of the above issue templates.
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/speech_recognition/models/__init__.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/speech_recognition/models/__init__.py
deleted file mode 100644
index 54b5a1c31243e55d384f80ef9514461cd35b15c6..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/speech_recognition/models/__init__.py
+++ /dev/null
@@ -1,8 +0,0 @@
-import importlib
-import os
-
-
-for file in sorted(os.listdir(os.path.dirname(__file__))):
- if file.endswith(".py") and not file.startswith("_"):
- model_name = file[: file.find(".py")]
- importlib.import_module("examples.speech_recognition.models." + model_name)
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/models/lightconv_lm.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/models/lightconv_lm.py
deleted file mode 100644
index 1d9efc4e42a5ecc1b83338055f18ade5a83ea666..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/models/lightconv_lm.py
+++ /dev/null
@@ -1,306 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from fairseq import utils
-from fairseq.models import (
- FairseqLanguageModel,
- register_model,
- register_model_architecture,
-)
-from fairseq.models.lightconv import Embedding, LightConvDecoder
-from fairseq.modules import AdaptiveInput, CharacterTokenEmbedder
-
-
-@register_model("lightconv_lm")
-class LightConvLanguageModel(FairseqLanguageModel):
- def __init__(self, decoder):
- super().__init__(decoder)
-
- @staticmethod
- def add_args(parser):
- """Add model-specific arguments to the parser."""
- parser.add_argument(
- "--dropout",
- default=0.1,
- type=float,
- metavar="D",
- help="dropout probability",
- )
- parser.add_argument(
- "--attention-dropout",
- default=0.0,
- type=float,
- metavar="D",
- help="dropout probability for attention weights",
- )
- parser.add_argument(
- "--relu-dropout",
- default=0.0,
- type=float,
- metavar="D",
- help="dropout probability after ReLU in FFN",
- )
- parser.add_argument(
- "--input-dropout",
- type=float,
- metavar="D",
- help="dropout probability of the inputs",
- )
- parser.add_argument(
- "--decoder-embed-dim",
- type=int,
- metavar="N",
- help="decoder embedding dimension",
- )
- parser.add_argument(
- "--decoder-output-dim",
- type=int,
- metavar="N",
- help="decoder output dimension",
- )
- parser.add_argument(
- "--decoder-input-dim", type=int, metavar="N", help="decoder input dimension"
- )
- parser.add_argument(
- "--decoder-ffn-embed-dim",
- type=int,
- metavar="N",
- help="decoder embedding dimension for FFN",
- )
- parser.add_argument(
- "--decoder-layers", type=int, metavar="N", help="num decoder layers"
- )
- parser.add_argument(
- "--decoder-attention-heads",
- type=int,
- metavar="N",
- help="num decoder attention heads or LightConv/DynamicConv heads",
- )
- parser.add_argument(
- "--decoder-normalize-before",
- default=False,
- action="store_true",
- help="apply layernorm before each decoder block",
- )
- parser.add_argument(
- "--adaptive-softmax-cutoff",
- metavar="EXPR",
- help="comma separated list of adaptive softmax cutoff points. "
- "Must be used with adaptive_loss criterion",
- )
- parser.add_argument(
- "--adaptive-softmax-dropout",
- type=float,
- metavar="D",
- help="sets adaptive softmax dropout for the tail projections",
- )
- parser.add_argument(
- "--adaptive-softmax-factor",
- type=float,
- metavar="N",
- help="adaptive input factor",
- )
- parser.add_argument(
- "--no-token-positional-embeddings",
- default=False,
- action="store_true",
- help="if set, disables positional embeddings (outside self attention)",
- )
- parser.add_argument(
- "--share-decoder-input-output-embed",
- default=False,
- action="store_true",
- help="share decoder input and output embeddings",
- )
- parser.add_argument(
- "--character-embeddings",
- default=False,
- action="store_true",
- help="if set, uses character embedding convolutions to produce token embeddings",
- )
- parser.add_argument(
- "--character-filters",
- type=str,
- metavar="LIST",
- default="[(1, 64), (2, 128), (3, 192), (4, 256), (5, 256), (6, 256), (7, 256)]",
- help="size of character embeddings",
- )
- parser.add_argument(
- "--character-embedding-dim",
- type=int,
- metavar="N",
- default=4,
- help="size of character embeddings",
- )
- parser.add_argument(
- "--char-embedder-highway-layers",
- type=int,
- metavar="N",
- default=2,
- help="number of highway layers for character token embeddder",
- )
- parser.add_argument(
- "--adaptive-input",
- default=False,
- action="store_true",
- help="if set, uses adaptive input",
- )
- parser.add_argument(
- "--adaptive-input-factor",
- type=float,
- metavar="N",
- help="adaptive input factor",
- )
- parser.add_argument(
- "--adaptive-input-cutoff",
- metavar="EXPR",
- help="comma separated list of adaptive input cutoff points.",
- )
- parser.add_argument(
- "--tie-adaptive-weights",
- action="store_true",
- help="if set, ties the weights of adaptive softmax and adaptive input",
- )
- parser.add_argument(
- "--tie-adaptive-proj",
- action="store_true",
- help="if set, ties the projection weights of adaptive softmax and adaptive input",
- )
- parser.add_argument(
- "--decoder-learned-pos",
- action="store_true",
- help="use learned positional embeddings in the decoder",
- )
-
- """LightConv and DynamicConv arguments"""
- parser.add_argument(
- "--decoder-kernel-size-list",
- type=lambda x: utils.eval_str_list(x, int),
- help='list of kernel size (default: "[3,7,15,31,31,31]")',
- )
- parser.add_argument(
- "--decoder-glu", type=utils.eval_bool, help="glu after in proj"
- )
- parser.add_argument(
- "--decoder-conv-type",
- default="dynamic",
- type=str,
- choices=["dynamic", "lightweight"],
- help="type of convolution",
- )
- parser.add_argument("--weight-softmax", default=True, type=utils.eval_bool)
- parser.add_argument(
- "--weight-dropout",
- type=float,
- metavar="D",
- help="dropout probability for conv weights",
- )
-
- @classmethod
- def build_model(cls, args, task):
- """Build a new model instance."""
-
- # make sure all arguments are present in older models
- base_lm_architecture(args)
-
- if getattr(args, "max_source_positions", None) is None:
- args.max_source_positions = args.tokens_per_sample
- if getattr(args, "max_target_positions", None) is None:
- args.max_target_positions = args.tokens_per_sample
-
- if args.character_embeddings:
- embed_tokens = CharacterTokenEmbedder(
- task.dictionary,
- eval(args.character_filters),
- args.character_embedding_dim,
- args.decoder_embed_dim,
- args.char_embedder_highway_layers,
- )
- elif args.adaptive_input:
- embed_tokens = AdaptiveInput(
- len(task.dictionary),
- task.dictionary.pad(),
- args.decoder_input_dim,
- args.adaptive_input_factor,
- args.decoder_embed_dim,
- utils.eval_str_list(args.adaptive_input_cutoff, type=int),
- )
- else:
- embed_tokens = Embedding(
- len(task.dictionary), args.decoder_input_dim, task.dictionary.pad()
- )
-
- if args.tie_adaptive_weights:
- assert args.adaptive_input
- assert args.adaptive_input_factor == args.adaptive_softmax_factor
- assert (
- args.adaptive_softmax_cutoff == args.adaptive_input_cutoff
- ), "{} != {}".format(
- args.adaptive_softmax_cutoff, args.adaptive_input_cutoff
- )
- assert args.decoder_input_dim == args.decoder_output_dim
-
- decoder = LightConvDecoder(
- args,
- task.output_dictionary,
- embed_tokens,
- no_encoder_attn=True,
- final_norm=False,
- )
- return LightConvLanguageModel(decoder)
-
-
-@register_model_architecture("lightconv_lm", "lightconv_lm")
-def base_lm_architecture(args):
- args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 512)
- args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 2048)
- args.decoder_layers = getattr(args, "decoder_layers", 6)
- args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 8)
- args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None)
- args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0)
- args.adaptive_softmax_factor = getattr(args, "adaptive_softmax_factor", 4)
- args.decoder_learned_pos = getattr(args, "decoder_learned_pos", False)
-
- args.character_embeddings = getattr(args, "character_embeddings", False)
-
- args.decoder_output_dim = getattr(
- args, "decoder_output_dim", args.decoder_embed_dim
- )
- args.decoder_input_dim = getattr(args, "decoder_input_dim", args.decoder_embed_dim)
- args.decoder_conv_dim = getattr(args, "decoder_conv_dim", args.decoder_embed_dim)
-
- # The model training is not stable without this
- args.decoder_normalize_before = True
-
- args.adaptive_input = getattr(args, "adaptive_input", False)
- args.adaptive_input_factor = getattr(args, "adaptive_input_factor", 4)
- args.adaptive_input_cutoff = getattr(args, "adaptive_input_cutoff", None)
-
- args.tie_adaptive_weights = getattr(args, "tie_adaptive_weights", False)
- args.tie_adaptive_proj = getattr(args, "tie_adaptive_proj", False)
-
- args.decoder_kernel_size_list = getattr(
- args, "decoder_kernel_size_list", [3, 7, 15, 31, 31, 31]
- )
- if len(args.decoder_kernel_size_list) == 1:
- args.decoder_kernel_size_list = (
- args.decoder_kernel_size_list * args.decoder_layers
- )
- assert (
- len(args.decoder_kernel_size_list) == args.decoder_layers
- ), "decoder_kernel_size_list doesn't match decoder_layers"
- args.decoder_glu = getattr(args, "decoder_glu", True)
- args.input_dropout = getattr(args, "input_dropout", 0.1)
- args.weight_dropout = getattr(args, "weight_dropout", args.attention_dropout)
-
-
-@register_model_architecture("lightconv_lm", "lightconv_lm_gbw")
-def lightconv_lm_gbw(args):
- args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 512)
- args.dropout = getattr(args, "dropout", 0.1)
- args.attention_dropout = getattr(args, "attention_dropout", 0.1)
- args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 4096)
- args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 16)
- base_lm_architecture(args)
diff --git a/spaces/Omnibus/Video-Diffusion-WebUI/video_diffusion/zero_shot/zero_shot_text2video.py b/spaces/Omnibus/Video-Diffusion-WebUI/video_diffusion/zero_shot/zero_shot_text2video.py
deleted file mode 100644
index a72af9c104e80697d7b91210ad30e6626791d273..0000000000000000000000000000000000000000
--- a/spaces/Omnibus/Video-Diffusion-WebUI/video_diffusion/zero_shot/zero_shot_text2video.py
+++ /dev/null
@@ -1,164 +0,0 @@
-import gradio as gr
-import imageio
-import torch
-from diffusers import TextToVideoZeroPipeline
-
-from video_diffusion.tuneavideo.util import save_videos_grid
-from video_diffusion.utils.model_list import stable_model_list
-
-
-class ZeroShotText2VideoGenerator:
- def __init__(self):
- self.pipe = None
-
- def load_model(self, model_id):
- if self.pipe is None:
- self.pipe = TextToVideoZeroPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
- self.pipe.to("cuda")
- self.pipe.enable_xformers_memory_efficient_attention()
- self.pipe.enable_attention_slicing()
-
- return self.pipe
-
- def generate_video(
- self,
- prompt,
- negative_prompt,
- model_id,
- height,
- width,
- video_length,
- guidance_scale,
- fps,
- t0,
- t1,
- motion_field_strength_x,
- motion_field_strength_y,
- ):
- pipe = self.load_model(model_id)
- result = pipe(
- prompt=prompt,
- negative_prompt=negative_prompt,
- height=height,
- width=width,
- video_length=video_length,
- guidance_scale=guidance_scale,
- t0=t0,
- t1=t1,
- motion_field_strength_x=motion_field_strength_x,
- motion_field_strength_y=motion_field_strength_y,
- ).images
-
- result = [(r * 255).astype("uint8") for r in result]
- imageio.mimsave("video.mp4", result, fps=fps)
- return "video.mp4"
-
- def app():
- with gr.Blocks():
- with gr.Row():
- with gr.Column():
- zero_shot_text2video_prompt = gr.Textbox(
- lines=1,
- placeholder="Prompt",
- show_label=False,
- )
- zero_shot_text2video_negative_prompt = gr.Textbox(
- lines=1,
- placeholder="Negative Prompt",
- show_label=False,
- )
- zero_shot_text2video_model_id = gr.Dropdown(
- choices=stable_model_list,
- label="Stable Model List",
- value=stable_model_list[0],
- )
- with gr.Row():
- with gr.Column():
- zero_shot_text2video_guidance_scale = gr.Slider(
- label="Guidance Scale",
- minimum=1,
- maximum=15,
- step=1,
- value=7.5,
- )
- zero_shot_text2video_video_length = gr.Slider(
- label="Video Length",
- minimum=1,
- maximum=100,
- step=1,
- value=10,
- )
- zero_shot_text2video_t0 = gr.Slider(
- label="Timestep T0",
- minimum=0,
- maximum=100,
- step=1,
- value=44,
- )
- zero_shot_text2video_motion_field_strength_x = gr.Slider(
- label="Motion Field Strength X",
- minimum=0,
- maximum=100,
- step=1,
- value=12,
- )
- zero_shot_text2video_fps = gr.Slider(
- label="Fps",
- minimum=1,
- maximum=60,
- step=1,
- value=10,
- )
- with gr.Row():
- with gr.Column():
- zero_shot_text2video_height = gr.Slider(
- label="Height",
- minimum=128,
- maximum=1280,
- step=32,
- value=512,
- )
- zero_shot_text2video_width = gr.Slider(
- label="Width",
- minimum=128,
- maximum=1280,
- step=32,
- value=512,
- )
- zero_shot_text2video_t1 = gr.Slider(
- label="Timestep T1",
- minimum=0,
- maximum=100,
- step=1,
- value=47,
- )
- zero_shot_text2video_motion_field_strength_y = gr.Slider(
- label="Motion Field Strength Y",
- minimum=0,
- maximum=100,
- step=1,
- value=12,
- )
- zero_shot_text2video_button = gr.Button(value="Generate")
-
- with gr.Column():
- zero_shot_text2video_output = gr.Video(label="Output")
-
- zero_shot_text2video_button.click(
- fn=ZeroShotText2VideoGenerator().generate_video,
- inputs=[
- zero_shot_text2video_prompt,
- zero_shot_text2video_negative_prompt,
- zero_shot_text2video_model_id,
- zero_shot_text2video_height,
- zero_shot_text2video_width,
- zero_shot_text2video_video_length,
- zero_shot_text2video_guidance_scale,
- zero_shot_text2video_fps,
- zero_shot_text2video_t0,
- zero_shot_text2video_t1,
- zero_shot_text2video_motion_field_strength_x,
- zero_shot_text2video_motion_field_strength_y,
- ],
- outputs=zero_shot_text2video_output,
- )
diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/utils/file_io.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/utils/file_io.py
deleted file mode 100644
index 46ee4ec31d04eee77976ff3edbbf84762a3409ed..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/utils/file_io.py
+++ /dev/null
@@ -1,37 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-from iopath.common.file_io import HTTPURLHandler, OneDrivePathHandler, PathHandler
-from iopath.common.file_io import PathManager as PathManagerBase
-
-__all__ = ["PathManager", "PathHandler"]
-
-
-PathManager = PathManagerBase()
-"""
-This is a detectron2 project-specific PathManager.
-We try to stay away from global PathManager in fvcore as it
-introduces potential conflicts among other libraries.
-"""
-
-
-class Detectron2Handler(PathHandler):
- """
- Resolve anything that's hosted under detectron2's namespace.
- """
-
- PREFIX = "detectron2://"
- S3_DETECTRON2_PREFIX = "https://dl.fbaipublicfiles.com/detectron2/"
-
- def _get_supported_prefixes(self):
- return [self.PREFIX]
-
- def _get_local_path(self, path, **kwargs):
- name = path[len(self.PREFIX) :]
- return PathManager.get_local_path(self.S3_DETECTRON2_PREFIX + name, **kwargs)
-
- def _open(self, path, mode="r", **kwargs):
- return PathManager.open(self._get_local_path(path), mode, **kwargs)
-
-
-PathManager.register_handler(HTTPURLHandler())
-PathManager.register_handler(OneDrivePathHandler())
-PathManager.register_handler(Detectron2Handler())
diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/saicinpainting/utils.py b/spaces/OpenGVLab/InternGPT/third-party/lama/saicinpainting/utils.py
deleted file mode 100644
index c2d67ed8bc793dd5113224fa322adb88f3ed9b22..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/third-party/lama/saicinpainting/utils.py
+++ /dev/null
@@ -1,177 +0,0 @@
-import bisect
-import functools
-import logging
-import numbers
-import os
-import signal
-import sys
-import traceback
-import warnings
-
-import torch
-from pytorch_lightning import seed_everything
-
-LOGGER = logging.getLogger(__name__)
-
-import platform
-if platform.system() != 'Linux':
- signal.SIGUSR1 = 1
-
-def check_and_warn_input_range(tensor, min_value, max_value, name):
- actual_min = tensor.min()
- actual_max = tensor.max()
- if actual_min < min_value or actual_max > max_value:
- warnings.warn(f"{name} must be in {min_value}..{max_value} range, but it ranges {actual_min}..{actual_max}")
-
-
-def sum_dict_with_prefix(target, cur_dict, prefix, default=0):
- for k, v in cur_dict.items():
- target_key = prefix + k
- target[target_key] = target.get(target_key, default) + v
-
-
-def average_dicts(dict_list):
- result = {}
- norm = 1e-3
- for dct in dict_list:
- sum_dict_with_prefix(result, dct, '')
- norm += 1
- for k in list(result):
- result[k] /= norm
- return result
-
-
-def add_prefix_to_keys(dct, prefix):
- return {prefix + k: v for k, v in dct.items()}
-
-
-def set_requires_grad(module, value):
- for param in module.parameters():
- param.requires_grad = value
-
-
-def flatten_dict(dct):
- result = {}
- for k, v in dct.items():
- if isinstance(k, tuple):
- k = '_'.join(k)
- if isinstance(v, dict):
- for sub_k, sub_v in flatten_dict(v).items():
- result[f'{k}_{sub_k}'] = sub_v
- else:
- result[k] = v
- return result
-
-
-class LinearRamp:
- def __init__(self, start_value=0, end_value=1, start_iter=-1, end_iter=0):
- self.start_value = start_value
- self.end_value = end_value
- self.start_iter = start_iter
- self.end_iter = end_iter
-
- def __call__(self, i):
- if i < self.start_iter:
- return self.start_value
- if i >= self.end_iter:
- return self.end_value
- part = (i - self.start_iter) / (self.end_iter - self.start_iter)
- return self.start_value * (1 - part) + self.end_value * part
-
-
-class LadderRamp:
- def __init__(self, start_iters, values):
- self.start_iters = start_iters
- self.values = values
- assert len(values) == len(start_iters) + 1, (len(values), len(start_iters))
-
- def __call__(self, i):
- segment_i = bisect.bisect_right(self.start_iters, i)
- return self.values[segment_i]
-
-
-def get_ramp(kind='ladder', **kwargs):
- if kind == 'linear':
- return LinearRamp(**kwargs)
- if kind == 'ladder':
- return LadderRamp(**kwargs)
- raise ValueError(f'Unexpected ramp kind: {kind}')
-
-
-def print_traceback_handler(sig, frame):
- LOGGER.warning(f'Received signal {sig}')
- bt = ''.join(traceback.format_stack())
- LOGGER.warning(f'Requested stack trace:\n{bt}')
-
-
-def register_debug_signal_handlers(sig=signal.SIGUSR1, handler=print_traceback_handler):
- LOGGER.warning(f'Setting signal {sig} handler {handler}')
- signal.signal(sig, handler)
-
-
-def handle_deterministic_config(config):
- seed = dict(config).get('seed', None)
- if seed is None:
- return False
-
- seed_everything(seed)
- return True
-
-
-def get_shape(t):
- if torch.is_tensor(t):
- return tuple(t.shape)
- elif isinstance(t, dict):
- return {n: get_shape(q) for n, q in t.items()}
- elif isinstance(t, (list, tuple)):
- return [get_shape(q) for q in t]
- elif isinstance(t, numbers.Number):
- return type(t)
- else:
- raise ValueError('unexpected type {}'.format(type(t)))
-
-
-def get_has_ddp_rank():
- master_port = os.environ.get('MASTER_PORT', None)
- node_rank = os.environ.get('NODE_RANK', None)
- local_rank = os.environ.get('LOCAL_RANK', None)
- world_size = os.environ.get('WORLD_SIZE', None)
- has_rank = master_port is not None or node_rank is not None or local_rank is not None or world_size is not None
- return has_rank
-
-
-def handle_ddp_subprocess():
- def main_decorator(main_func):
- @functools.wraps(main_func)
- def new_main(*args, **kwargs):
- # Trainer sets MASTER_PORT, NODE_RANK, LOCAL_RANK, WORLD_SIZE
- parent_cwd = os.environ.get('TRAINING_PARENT_WORK_DIR', None)
- has_parent = parent_cwd is not None
- has_rank = get_has_ddp_rank()
- assert has_parent == has_rank, f'Inconsistent state: has_parent={has_parent}, has_rank={has_rank}'
-
- if has_parent:
- # we are in the worker
- sys.argv.extend([
- f'hydra.run.dir={parent_cwd}',
- # 'hydra/hydra_logging=disabled',
- # 'hydra/job_logging=disabled'
- ])
- # do nothing if this is a top-level process
- # TRAINING_PARENT_WORK_DIR is set in handle_ddp_parent_process after hydra initialization
-
- main_func(*args, **kwargs)
- return new_main
- return main_decorator
-
-
-def handle_ddp_parent_process():
- parent_cwd = os.environ.get('TRAINING_PARENT_WORK_DIR', None)
- has_parent = parent_cwd is not None
- has_rank = get_has_ddp_rank()
- assert has_parent == has_rank, f'Inconsistent state: has_parent={has_parent}, has_rank={has_rank}'
-
- if parent_cwd is None:
- os.environ['TRAINING_PARENT_WORK_DIR'] = os.getcwd()
-
- return has_parent
diff --git a/spaces/OpenMotionLab/MotionGPT/mGPT/data/humanml/utils/word_vectorizer.py b/spaces/OpenMotionLab/MotionGPT/mGPT/data/humanml/utils/word_vectorizer.py
deleted file mode 100644
index d27205820c6ce17cac2e0f923808b35c0ba5f0eb..0000000000000000000000000000000000000000
--- a/spaces/OpenMotionLab/MotionGPT/mGPT/data/humanml/utils/word_vectorizer.py
+++ /dev/null
@@ -1,79 +0,0 @@
-import numpy as np
-import pickle
-from os.path import join as pjoin
-
-POS_enumerator = {
- 'VERB': 0,
- 'NOUN': 1,
- 'DET': 2,
- 'ADP': 3,
- 'NUM': 4,
- 'AUX': 5,
- 'PRON': 6,
- 'ADJ': 7,
- 'ADV': 8,
- 'Loc_VIP': 9,
- 'Body_VIP': 10,
- 'Obj_VIP': 11,
- 'Act_VIP': 12,
- 'Desc_VIP': 13,
- 'OTHER': 14,
-}
-
-Loc_list = ('left', 'right', 'clockwise', 'counterclockwise', 'anticlockwise', 'forward', 'back', 'backward',
- 'up', 'down', 'straight', 'curve')
-
-Body_list = ('arm', 'chin', 'foot', 'feet', 'face', 'hand', 'mouth', 'leg', 'waist', 'eye', 'knee', 'shoulder', 'thigh')
-
-Obj_List = ('stair', 'dumbbell', 'chair', 'window', 'floor', 'car', 'ball', 'handrail', 'baseball', 'basketball')
-
-Act_list = ('walk', 'run', 'swing', 'pick', 'bring', 'kick', 'put', 'squat', 'throw', 'hop', 'dance', 'jump', 'turn',
- 'stumble', 'dance', 'stop', 'sit', 'lift', 'lower', 'raise', 'wash', 'stand', 'kneel', 'stroll',
- 'rub', 'bend', 'balance', 'flap', 'jog', 'shuffle', 'lean', 'rotate', 'spin', 'spread', 'climb')
-
-Desc_list = ('slowly', 'carefully', 'fast', 'careful', 'slow', 'quickly', 'happy', 'angry', 'sad', 'happily', 'angrily', 'sadly')
-
-VIP_dict = {
- 'Loc_VIP': Loc_list,
- 'Body_VIP': Body_list,
- 'Obj_VIP': Obj_List,
- 'Act_VIP': Act_list,
- 'Desc_VIP': Desc_list,
-}
-
-
-class WordVectorizer(object):
- def __init__(self, meta_root, prefix):
- vectors = np.load(pjoin(meta_root, '%s_data.npy'%prefix))
- words = pickle.load(open(pjoin(meta_root, '%s_words.pkl'%prefix), 'rb'))
- word2idx = pickle.load(open(pjoin(meta_root, '%s_idx.pkl'%prefix), 'rb'))
- self.word2vec = {w: vectors[word2idx[w]] for w in words}
-
- def _get_pos_ohot(self, pos):
- pos_vec = np.zeros(len(POS_enumerator))
- if pos in POS_enumerator:
- pos_vec[POS_enumerator[pos]] = 1
- else:
- pos_vec[POS_enumerator['OTHER']] = 1
- return pos_vec
-
- def __len__(self):
- return len(self.word2vec)
-
- def __getitem__(self, item):
- word, pos = item.split('/')
- if word in self.word2vec:
- word_vec = self.word2vec[word]
- vip_pos = None
- for key, values in VIP_dict.items():
- if word in values:
- vip_pos = key
- break
- if vip_pos is not None:
- pos_vec = self._get_pos_ohot(vip_pos)
- else:
- pos_vec = self._get_pos_ohot(pos)
- else:
- word_vec = self.word2vec['unk']
- pos_vec = self._get_pos_ohot('OTHER')
- return word_vec, pos_vec
diff --git a/spaces/OpenMotionLab/MotionGPT/mGPT/render/blender/materials.py b/spaces/OpenMotionLab/MotionGPT/mGPT/render/blender/materials.py
deleted file mode 100644
index 4f0bf1a1c28254a776469058ab6473c7ca9a451d..0000000000000000000000000000000000000000
--- a/spaces/OpenMotionLab/MotionGPT/mGPT/render/blender/materials.py
+++ /dev/null
@@ -1,135 +0,0 @@
-import bpy
-
-
-def clear_material(material):
- if material.node_tree:
- material.node_tree.links.clear()
- material.node_tree.nodes.clear()
-
-
-def colored_material_diffuse_BSDF(r, g, b, a=1, roughness=0.127451):
- materials = bpy.data.materials
- material = materials.new(name="body")
- material.use_nodes = True
- clear_material(material)
- nodes = material.node_tree.nodes
- links = material.node_tree.links
- output = nodes.new(type='ShaderNodeOutputMaterial')
- diffuse = nodes.new(type='ShaderNodeBsdfDiffuse')
- diffuse.inputs["Color"].default_value = (r, g, b, a)
- diffuse.inputs["Roughness"].default_value = roughness
- links.new(diffuse.outputs['BSDF'], output.inputs['Surface'])
- return material
-
-def colored_material_relection_BSDF(r, g, b, a=1, roughness=0.127451, saturation_factor=1):
- materials = bpy.data.materials
- material = materials.new(name="body")
- material.use_nodes = True
- # clear_material(material)
- nodes = material.node_tree.nodes
- links = material.node_tree.links
- output = nodes.new(type='ShaderNodeOutputMaterial')
- # diffuse = nodes.new(type='ShaderNodeBsdfDiffuse')
- diffuse = nodes["Principled BSDF"]
- diffuse.inputs["Base Color"].default_value = (r*saturation_factor, g*saturation_factor, b*saturation_factor, a)
- diffuse.inputs["Roughness"].default_value = roughness
- links.new(diffuse.outputs['BSDF'], output.inputs['Surface'])
- return material
-
-# keys:
-# ['Base Color', 'Subsurface', 'Subsurface Radius', 'Subsurface Color', 'Metallic', 'Specular', 'Specular Tint', 'Roughness', 'Anisotropic', 'Anisotropic Rotation', 'Sheen', 'Sheen Tint', 'Clearcoat', 'Clearcoat Roughness', 'IOR', 'Transmission', 'Transmission Roughness', 'Emission', 'Emission Strength', 'Alpha', 'Normal', 'Clearcoat Normal', 'Tangent']
-DEFAULT_BSDF_SETTINGS = {"Subsurface": 0.15,
- "Subsurface Radius": [1.1, 0.2, 0.1],
- "Metallic": 0.3,
- "Specular": 0.5,
- "Specular Tint": 0.5,
- "Roughness": 0.75,
- "Anisotropic": 0.25,
- "Anisotropic Rotation": 0.25,
- "Sheen": 0.75,
- "Sheen Tint": 0.5,
- "Clearcoat": 0.5,
- "Clearcoat Roughness": 0.5,
- "IOR": 1.450,
- "Transmission": 0.1,
- "Transmission Roughness": 0.1,
- "Emission": (0, 0, 0, 1),
- "Emission Strength": 0.0,
- "Alpha": 1.0}
-
-def body_material(r, g, b, a=1, name="body", oldrender=True):
- if oldrender:
- material = colored_material_diffuse_BSDF(r, g, b, a=a)
- else:
- materials = bpy.data.materials
- material = materials.new(name=name)
- material.use_nodes = True
- nodes = material.node_tree.nodes
- diffuse = nodes["Principled BSDF"]
- inputs = diffuse.inputs
-
- settings = DEFAULT_BSDF_SETTINGS.copy()
- settings["Base Color"] = (r, g, b, a)
- settings["Subsurface Color"] = (r, g, b, a)
- settings["Subsurface"] = 0.0
-
- for setting, val in settings.items():
- inputs[setting].default_value = val
-
- return material
-
-
-def colored_material_bsdf(name, **kwargs):
- materials = bpy.data.materials
- material = materials.new(name=name)
- material.use_nodes = True
- nodes = material.node_tree.nodes
- diffuse = nodes["Principled BSDF"]
- inputs = diffuse.inputs
-
- settings = DEFAULT_BSDF_SETTINGS.copy()
- for key, val in kwargs.items():
- settings[key] = val
-
- for setting, val in settings.items():
- inputs[setting].default_value = val
-
- return material
-
-
-def floor_mat(name="floor_mat", color=(0.1, 0.1, 0.1, 1), roughness=0.127451):
- return colored_material_diffuse_BSDF(color[0], color[1], color[2], a=color[3], roughness=roughness)
-
-
-def plane_mat():
- materials = bpy.data.materials
- material = materials.new(name="plane")
- material.use_nodes = True
- clear_material(material)
- nodes = material.node_tree.nodes
- links = material.node_tree.links
- output = nodes.new(type='ShaderNodeOutputMaterial')
- diffuse = nodes.new(type='ShaderNodeBsdfDiffuse')
- checker = nodes.new(type="ShaderNodeTexChecker")
- checker.inputs["Scale"].default_value = 1024
- checker.inputs["Color1"].default_value = (0.8, 0.8, 0.8, 1)
- checker.inputs["Color2"].default_value = (0.3, 0.3, 0.3, 1)
- links.new(checker.outputs["Color"], diffuse.inputs['Color'])
- links.new(diffuse.outputs['BSDF'], output.inputs['Surface'])
- diffuse.inputs["Roughness"].default_value = 0.127451
- return material
-
-
-def plane_mat_uni():
- materials = bpy.data.materials
- material = materials.new(name="plane_uni")
- material.use_nodes = True
- clear_material(material)
- nodes = material.node_tree.nodes
- links = material.node_tree.links
- output = nodes.new(type='ShaderNodeOutputMaterial')
- diffuse = nodes.new(type='ShaderNodeBsdfDiffuse')
- diffuse.inputs["Color"].default_value = (0.8, 0.8, 0.8, 1)
- diffuse.inputs["Roughness"].default_value = 0.127451
- links.new(diffuse.outputs['BSDF'], output.inputs['Surface'])
- return material
diff --git a/spaces/OptimalScale/Robin-7b/lmflow/pipeline/__init__.py b/spaces/OptimalScale/Robin-7b/lmflow/pipeline/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/cps/slot-allocation.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/cps/slot-allocation.go
deleted file mode 100644
index 816cded86fd891adcabfc90f0f59d787557ca06b..0000000000000000000000000000000000000000
Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/cps/slot-allocation.go and /dev/null differ
diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/cnn/bricks/registry.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/cnn/bricks/registry.py
deleted file mode 100644
index 39eabc58db4b5954478a2ac1ab91cea5e45ab055..0000000000000000000000000000000000000000
--- a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/cnn/bricks/registry.py
+++ /dev/null
@@ -1,16 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from annotator.uniformer.mmcv.utils import Registry
-
-CONV_LAYERS = Registry('conv layer')
-NORM_LAYERS = Registry('norm layer')
-ACTIVATION_LAYERS = Registry('activation layer')
-PADDING_LAYERS = Registry('padding layer')
-UPSAMPLE_LAYERS = Registry('upsample layer')
-PLUGIN_LAYERS = Registry('plugin layer')
-
-DROPOUT_LAYERS = Registry('drop out layers')
-POSITIONAL_ENCODING = Registry('position encoding')
-ATTENTION = Registry('attention')
-FEEDFORWARD_NETWORK = Registry('feed-forward Network')
-TRANSFORMER_LAYER = Registry('transformerLayer')
-TRANSFORMER_LAYER_SEQUENCE = Registry('transformer-layers sequence')
diff --git a/spaces/Plashkar/test-gradio-sdk/app.py b/spaces/Plashkar/test-gradio-sdk/app.py
deleted file mode 100644
index 0be29e4a1b5e7748b6fe8c9f3a446117985b9378..0000000000000000000000000000000000000000
--- a/spaces/Plashkar/test-gradio-sdk/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("spaces/eugenesiow/remove-bg").launch()
\ No newline at end of file
diff --git a/spaces/PrinceDeven78/Dreamlike-Webui-CPU/app.py b/spaces/PrinceDeven78/Dreamlike-Webui-CPU/app.py
deleted file mode 100644
index b01d0121b2040078b10841e76ea4b99a76eeb294..0000000000000000000000000000000000000000
--- a/spaces/PrinceDeven78/Dreamlike-Webui-CPU/app.py
+++ /dev/null
@@ -1,153 +0,0 @@
-import os
-from sys import executable as pyexecutable
-import subprocess
-import pathlib
-import gc
-
-def Gitclone(URI:str,ClonePath:str = "") -> int :
- if(ClonePath == "") :
- while True:
- i=subprocess.run([r"git",r"clone",URI])
- if(i.returncode == 0 ):
- del i
- gc.collect()
- return 0
- else :
- del i
- else:
- while True:
- i=subprocess.run([r"git",r"clone",URI,ClonePath])
- if(i.returncode == 0 ):
- del i
- gc.collect()
- return 0
- else :
- del i
-def DownLoad(URI:str,DownloadPath:str,DownLoadFileName:str ) -> int:
- while (True):
- i=subprocess.run([r"aria2c",r"-c",r"-x" ,r"16", r"-s",r"16", r"-k" ,r"1M" ,r"-m",r"0",r"--enable-mmap=false",r"--console-log-level=error",r"-d",DownloadPath,r"-o",DownLoadFileName,URI]);
- if(i.returncode == 0 ):
- del i
- gc.collect()
- return 0
- else :
- del i
-user_home =pathlib.Path.home().resolve()
-os.chdir(str(user_home))
-#clone stable-diffusion-webui repo
-print("cloning stable-diffusion-webui repo")
-Gitclone(r"https://github.com/AUTOMATIC1111/stable-diffusion-webui.git",str(user_home / r"stable-diffusion-webui"))
-os.chdir(str(user_home / r"stable-diffusion-webui"))
-os.system("git reset --hard 89f9faa63388756314e8a1d96cf86bf5e0663045")
-#
-
-#install extensions
-print("installing extensions")
-Gitclone(r"https://huggingface.co/embed/negative",str(user_home / r"stable-diffusion-webui" / r"embeddings" / r"negative"))
-Gitclone(r"https://huggingface.co/embed/lora",str(user_home / r"stable-diffusion-webui" / r"models" / r"Lora" / r"positive"))
-DownLoad(r"https://huggingface.co/embed/upscale/resolve/main/4x-UltraSharp.pth",str(user_home / r"stable-diffusion-webui" / r"models" / r"ESRGAN") ,r"4x-UltraSharp.pth")
-while True:
- if(subprocess.run([r"wget",r"https://raw.githubusercontent.com/camenduru/stable-diffusion-webui-scripts/main/run_n_times.py",r"-O",str(user_home / r"stable-diffusion-webui" / r"scripts" / r"run_n_times.py")]).returncode == 0):
- break
-Gitclone(r"https://github.com/deforum-art/deforum-for-automatic1111-webui",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"deforum-for-automatic1111-webui" ))
-#Gitclone(r"https://github.com/AlUlkesh/stable-diffusion-webui-images-browser",str(user_home / r"stable-diffusion-webui" / r"extensions"/ r"stable-diffusion-webui-images-browser"))
-Gitclone(r"https://github.com/camenduru/stable-diffusion-webui-huggingface",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"stable-diffusion-webui-huggingface"))
-Gitclone(r"https://github.com/camenduru/sd-civitai-browser",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-civitai-browser"))
-Gitclone(r"https://github.com/kohya-ss/sd-webui-additional-networks",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-additional-networks"))
-Gitclone(r"https://github.com/Mikubill/sd-webui-controlnet",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-controlnet"))
-Gitclone(r"https://github.com/fkunn1326/openpose-editor",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"openpose-editor"))
-Gitclone(r"https://github.com/jexom/sd-webui-depth-lib",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-depth-lib"))
-Gitclone(r"https://github.com/hnmr293/posex",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"posex"))
-Gitclone(r"https://github.com/nonnonstop/sd-webui-3d-open-pose-editor",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-3d-open-pose-editor"))
-#For Chinese localization, uncomment the next line
-#Gitclone(r"https://github.com/dtlnor/stable-diffusion-webui-localization-zh_CN.git",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"stable-diffusion-webui-localization-zh_CN"))
-Gitclone(r"https://github.com/DominikDoom/a1111-sd-webui-tagcomplete.git" , str(user_home / r"stable-diffusion-webui" / r"extensions" / r"a1111-sd-webui-tagcomplete"))
-Gitclone(r"https://github.com/camenduru/sd-webui-tunnels",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-tunnels"))
-Gitclone(r"https://github.com/etherealxx/batchlinks-webui",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"batchlinks-webui"))
-Gitclone(r"https://github.com/catppuccin/stable-diffusion-webui",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"stable-diffusion-webui-catppuccin"))
-
-#Gitclone(r"https://github.com/KohakuBueleaf/a1111-sd-webui-locon",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"a1111-sd-webui-locon" ))
-Gitclone(r"https://github.com/AUTOMATIC1111/stable-diffusion-webui-rembg",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"stable-diffusion-webui-rembg"))
-Gitclone(r"https://github.com/ashen-sensored/stable-diffusion-webui-two-shot",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"stable-diffusion-webui-two-shot"))
-Gitclone(r"https://github.com/camenduru/sd_webui_stealth_pnginfo",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"sd_webui_stealth_pnginfo"))
-
-os.chdir(user_home / r"stable-diffusion-webui")
-
-#download ControlNet models
-print("extensions dolwnload done .\ndownloading ControlNet models")
-dList =[r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11e_sd15_ip2p_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11e_sd15_shuffle_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_canny_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11f1p_sd15_depth_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_inpaint_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_lineart_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_mlsd_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_normalbae_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_openpose_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_scribble_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_seg_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_softedge_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15s2_lineart_anime_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11f1e_sd15_tile_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11e_sd15_ip2p_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11e_sd15_shuffle_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_canny_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11f1p_sd15_depth_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_inpaint_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_lineart_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_mlsd_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_normalbae_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_openpose_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_scribble_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_seg_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_softedge_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15s2_lineart_anime_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11f1e_sd15_tile_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_style_sd14v1.pth",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_sketch_sd14v1.pth",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_seg_sd14v1.pth",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_openpose_sd14v1.pth",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_keypose_sd14v1.pth",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_depth_sd14v1.pth",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_canny_sd14v1.pth",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_canny_sd15v2.pth",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_depth_sd15v2.pth",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_sketch_sd15v2.pth",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_zoedepth_sd15v1.pth"]
-for i in range(0,len(dList)): DownLoad(dList[i],str(user_home / "stable-diffusion-webui" / "extensions" / "sd-webui-controlnet" / "models"),pathlib.Path(dList[i]).name)
-del dList
-
-#download model
-#you can change model download address here
-print("ControlNet models download done.\ndownloading model")
-DownLoad(r"https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0/resolve/main/dreamlike-photoreal-2.0.safetensors",str(user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion"),r"dreamlike-photoreal-2.0.safetensors")
-DownLoad(r"https://huggingface.co/dreamlike-art/dreamlike-anime-1.0/resolve/main/dreamlike-anime-1.0.safetensors",str(user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion"),r"dreamlike-anime-1.0.safetensors")
-DownLoad(r"https://huggingface.co/dreamlike-art/dreamlike-diffusion-1.0/resolve/main/dreamlike-diffusion-1.0.safetensors",str(user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion"),r"dreamlike-diffusion-1.0.safetensors")
-DownLoad(r"https://huggingface.co/dreamlike-art/dreamlike-photoreal-1.0/resolve/main/dreamlike-photoreal-1.0.ckpt",str(user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion"),r"dreamlike-photoreal-1.0.ckpt")
-DownLoad(r"https://huggingface.co/Yntec/Photosphere/resolve/main/photosphere.safetensors",str(user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion"),r"photosphere.safetensors")
-
-#DownLoad(r"https://huggingface.co/ckpt/anything-v4.0/resolve/main/anything-v4.5-pruned.ckpt",str(user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion"),r"anything-v4.5-pruned.ckpt")
-#DownLoad(r"https://huggingface.co/ckpt/anything-v4.0/resolve/main/anything-v4.0.vae.pt",str(user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion"),r"anything-v4.0.vae.pt")
-#DownLoad(r"https://huggingface.co/gsdf/Counterfeit-V3.0/resolve/main/Counterfeit-V3.0_fp16.safetensors",str(user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion"),r"Counterfeit-V3.0_fp16.safetensors")
-#DownLoad(r"https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/Models/AbyssOrangeMix3/AOM3A1B_orangemixs.safetensors",str(user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion"),r"AOM3A1B_orangemixs.safetensors")
-#DownLoad(r"https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/VAEs/orangemix.vae.pt",str(user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion"),r"orangemix.vae.pt")
-#DownLoad(r"https://huggingface.co/Meina/MeinaPastel/resolve/main/MeinaPastelV5%20-%20Baked%20VAE.safetensors",str(user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion"),r"MeinaPastelV5_BakedVAE.safetensors")
-#DownLoad(r"https://huggingface.co/Meina/MeinaPastel/resolve/main/MeinaPastelV5%20-%20Without%20VAE.safetensors",str(user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion"),r"MeinaPastelV5_WithoutVAE.safetensors")
-#DownLoad(r"https://civitai.com/api/download/models/9474",str(user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion"),r"chilloutmix_NiPrunedFp16.safetensors")
-
-DownLoad(r"https://civitai.com/api/download/models/39885",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-additional-networks" / r"models"/ r"lora"),r"Better_light.safetensors")
-DownLoad(r"https://civitai.com/api/download/models/21065",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-additional-networks" / r"models"/ r"lora"),r"LAS.safetensors")
-DownLoad(r"https://civitai.com/api/download/models/39164",str(user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-additional-networks" / r"models"/ r"lora"),r"backlighting.safetensors")
-#start webui
-
-print("Done\nStarting Webui...")
-os.chdir(user_home / r"stable-diffusion-webui")
-while True:
- ret=subprocess.run([r"python3" ,r"launch.py",r"--precision",r"full",r"--no-half",r"--no-half-vae",r"--enable-insecure-extension-access",r"--medvram",r"--skip-torch-cuda-test",r"--enable-console-prompts",r"--ui-settings-file="+str(pathlib.Path(__file__).parent /r"config.json")])
- if(ret.returncode == 0 ):
- del ret
- gc.collect()
- else :
- del ret
-
-del os ,user_home ,pyexecutable ,subprocess
\ No newline at end of file
diff --git a/spaces/RMXK/RVC_HFF/diffq/__init__.py b/spaces/RMXK/RVC_HFF/diffq/__init__.py
deleted file mode 100644
index 2b997ee4ed99a90cc43db7812383927e6fe1a3e8..0000000000000000000000000000000000000000
--- a/spaces/RMXK/RVC_HFF/diffq/__init__.py
+++ /dev/null
@@ -1,18 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-# flake8: noqa
-"""
-This package implements different quantization strategies:
-
-- `diffq.uniform.UniformQuantizer`: classic uniform quantization over n bits.
-- `diffq.diffq.DiffQuantizer`: differentiable quantizer based on scaled noise injection.
-
-Also, do check `diffq.base.BaseQuantizer` for the common methods of all Quantizers.
-"""
-
-from .uniform import UniformQuantizer
-from .diffq import DiffQuantizer
diff --git a/spaces/RMXK/RVC_HFF/infer/lib/infer_pack/transforms.py b/spaces/RMXK/RVC_HFF/infer/lib/infer_pack/transforms.py
deleted file mode 100644
index 6f30b7177d17fc61a4173c21b4233172a890be58..0000000000000000000000000000000000000000
--- a/spaces/RMXK/RVC_HFF/infer/lib/infer_pack/transforms.py
+++ /dev/null
@@ -1,207 +0,0 @@
-import numpy as np
-import torch
-from torch.nn import functional as F
-
-DEFAULT_MIN_BIN_WIDTH = 1e-3
-DEFAULT_MIN_BIN_HEIGHT = 1e-3
-DEFAULT_MIN_DERIVATIVE = 1e-3
-
-
-def piecewise_rational_quadratic_transform(
- inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails=None,
- tail_bound=1.0,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE,
-):
- if tails is None:
- spline_fn = rational_quadratic_spline
- spline_kwargs = {}
- else:
- spline_fn = unconstrained_rational_quadratic_spline
- spline_kwargs = {"tails": tails, "tail_bound": tail_bound}
-
- outputs, logabsdet = spline_fn(
- inputs=inputs,
- unnormalized_widths=unnormalized_widths,
- unnormalized_heights=unnormalized_heights,
- unnormalized_derivatives=unnormalized_derivatives,
- inverse=inverse,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative,
- **spline_kwargs
- )
- return outputs, logabsdet
-
-
-def searchsorted(bin_locations, inputs, eps=1e-6):
- bin_locations[..., -1] += eps
- return torch.sum(inputs[..., None] >= bin_locations, dim=-1) - 1
-
-
-def unconstrained_rational_quadratic_spline(
- inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails="linear",
- tail_bound=1.0,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE,
-):
- inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound)
- outside_interval_mask = ~inside_interval_mask
-
- outputs = torch.zeros_like(inputs)
- logabsdet = torch.zeros_like(inputs)
-
- if tails == "linear":
- unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1))
- constant = np.log(np.exp(1 - min_derivative) - 1)
- unnormalized_derivatives[..., 0] = constant
- unnormalized_derivatives[..., -1] = constant
-
- outputs[outside_interval_mask] = inputs[outside_interval_mask]
- logabsdet[outside_interval_mask] = 0
- else:
- raise RuntimeError("{} tails are not implemented.".format(tails))
-
- (
- outputs[inside_interval_mask],
- logabsdet[inside_interval_mask],
- ) = rational_quadratic_spline(
- inputs=inputs[inside_interval_mask],
- unnormalized_widths=unnormalized_widths[inside_interval_mask, :],
- unnormalized_heights=unnormalized_heights[inside_interval_mask, :],
- unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :],
- inverse=inverse,
- left=-tail_bound,
- right=tail_bound,
- bottom=-tail_bound,
- top=tail_bound,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative,
- )
-
- return outputs, logabsdet
-
-
-def rational_quadratic_spline(
- inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- left=0.0,
- right=1.0,
- bottom=0.0,
- top=1.0,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE,
-):
- if torch.min(inputs) < left or torch.max(inputs) > right:
- raise ValueError("Input to a transform is not within its domain")
-
- num_bins = unnormalized_widths.shape[-1]
-
- if min_bin_width * num_bins > 1.0:
- raise ValueError("Minimal bin width too large for the number of bins")
- if min_bin_height * num_bins > 1.0:
- raise ValueError("Minimal bin height too large for the number of bins")
-
- widths = F.softmax(unnormalized_widths, dim=-1)
- widths = min_bin_width + (1 - min_bin_width * num_bins) * widths
- cumwidths = torch.cumsum(widths, dim=-1)
- cumwidths = F.pad(cumwidths, pad=(1, 0), mode="constant", value=0.0)
- cumwidths = (right - left) * cumwidths + left
- cumwidths[..., 0] = left
- cumwidths[..., -1] = right
- widths = cumwidths[..., 1:] - cumwidths[..., :-1]
-
- derivatives = min_derivative + F.softplus(unnormalized_derivatives)
-
- heights = F.softmax(unnormalized_heights, dim=-1)
- heights = min_bin_height + (1 - min_bin_height * num_bins) * heights
- cumheights = torch.cumsum(heights, dim=-1)
- cumheights = F.pad(cumheights, pad=(1, 0), mode="constant", value=0.0)
- cumheights = (top - bottom) * cumheights + bottom
- cumheights[..., 0] = bottom
- cumheights[..., -1] = top
- heights = cumheights[..., 1:] - cumheights[..., :-1]
-
- if inverse:
- bin_idx = searchsorted(cumheights, inputs)[..., None]
- else:
- bin_idx = searchsorted(cumwidths, inputs)[..., None]
-
- input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0]
- input_bin_widths = widths.gather(-1, bin_idx)[..., 0]
-
- input_cumheights = cumheights.gather(-1, bin_idx)[..., 0]
- delta = heights / widths
- input_delta = delta.gather(-1, bin_idx)[..., 0]
-
- input_derivatives = derivatives.gather(-1, bin_idx)[..., 0]
- input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0]
-
- input_heights = heights.gather(-1, bin_idx)[..., 0]
-
- if inverse:
- a = (inputs - input_cumheights) * (
- input_derivatives + input_derivatives_plus_one - 2 * input_delta
- ) + input_heights * (input_delta - input_derivatives)
- b = input_heights * input_derivatives - (inputs - input_cumheights) * (
- input_derivatives + input_derivatives_plus_one - 2 * input_delta
- )
- c = -input_delta * (inputs - input_cumheights)
-
- discriminant = b.pow(2) - 4 * a * c
- assert (discriminant >= 0).all()
-
- root = (2 * c) / (-b - torch.sqrt(discriminant))
- outputs = root * input_bin_widths + input_cumwidths
-
- theta_one_minus_theta = root * (1 - root)
- denominator = input_delta + (
- (input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta
- )
- derivative_numerator = input_delta.pow(2) * (
- input_derivatives_plus_one * root.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - root).pow(2)
- )
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, -logabsdet
- else:
- theta = (inputs - input_cumwidths) / input_bin_widths
- theta_one_minus_theta = theta * (1 - theta)
-
- numerator = input_heights * (
- input_delta * theta.pow(2) + input_derivatives * theta_one_minus_theta
- )
- denominator = input_delta + (
- (input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta
- )
- outputs = input_cumheights + numerator / denominator
-
- derivative_numerator = input_delta.pow(2) * (
- input_derivatives_plus_one * theta.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - theta).pow(2)
- )
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, logabsdet
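The deleted module above is the standard piecewise rational-quadratic spline transform used in normalizing-flow models. As a hedged sanity sketch (assuming the functions above are importable in the current scope; this is not part of the original file), the forward and inverse passes of `unconstrained_rational_quadratic_spline` should round-trip the inputs, and their log-determinants should cancel:

```python
# Illustrative round-trip check for the spline defined above (assumes
# unconstrained_rational_quadratic_spline is in scope).
import torch

batch, num_bins = 8, 10
inputs = torch.linspace(-2.0, 2.0, batch)      # some points fall outside the tail bound
widths = torch.randn(batch, num_bins)          # unnormalized bin widths
heights = torch.randn(batch, num_bins)         # unnormalized bin heights
derivs = torch.randn(batch, num_bins - 1)      # unnormalized knot derivatives ("linear" tails)

y, logdet = unconstrained_rational_quadratic_spline(
    inputs, widths, heights, derivs, inverse=False, tails="linear", tail_bound=1.0
)
x, inv_logdet = unconstrained_rational_quadratic_spline(
    y, widths, heights, derivs, inverse=True, tails="linear", tail_bound=1.0
)

print(torch.allclose(x, inputs, atol=1e-4))                 # inputs are recovered
print(torch.allclose(logdet + inv_logdet,
                     torch.zeros_like(logdet), atol=1e-4))  # log|det| terms cancel
```

Points outside `tail_bound` pass through the identity branch, so they satisfy both checks trivially.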
diff --git a/spaces/Rain-2008730/TXT_GENERATOR_69420/app.py b/spaces/Rain-2008730/TXT_GENERATOR_69420/app.py
deleted file mode 100644
index 3ff4b085096f58a4aa4110cce278726ecc047714..0000000000000000000000000000000000000000
--- a/spaces/Rain-2008730/TXT_GENERATOR_69420/app.py
+++ /dev/null
@@ -1,8 +0,0 @@
-import gradio as gr
-from gradio.mix import Parallel
-mod1=gr.Interface.load("huggingface/gpt2")
-mod2=gr.Interface.load("huggingface/EleutherAI/gpt-neo-2.7B")
-mod3=gr.Interface.load("huggingface/EleutherAI/gpt-neo-1.3B")
-title="Txt generator 69420"
-description="input txt and submit"
-Parallel(mod1, mod2, mod3, title=title, description=description).launch()
\ No newline at end of file
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/pkg_resources/py31compat.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/pkg_resources/py31compat.py
deleted file mode 100644
index a2d3007ceb16b0eeb4b1f57361c089558a25daeb..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/pkg_resources/py31compat.py
+++ /dev/null
@@ -1,23 +0,0 @@
-import os
-import errno
-import sys
-
-from pip._vendor import six
-
-
-def _makedirs_31(path, exist_ok=False):
- try:
- os.makedirs(path)
- except OSError as exc:
- if not exist_ok or exc.errno != errno.EEXIST:
- raise
-
-
-# rely on compatibility behavior until mode considerations
-# and exists_ok considerations are disentangled.
-# See https://github.com/pypa/setuptools/pull/1083#issuecomment-315168663
-needs_makedirs = (
- six.PY2 or
- (3, 4) <= sys.version_info < (3, 4, 1)
-)
-makedirs = _makedirs_31 if needs_makedirs else os.makedirs
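A small, hedged illustration (not from the original file) of what this shim guards against: `_makedirs_31` swallows only the `EEXIST` error when `exist_ok=True` and re-raises everything else, mirroring the `os.makedirs` behaviour that the listed older interpreters lacked.

```python
# Sketch of the shim's behaviour, assuming _makedirs_31 is in scope.
import errno
import os
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    target = os.path.join(tmp, "a", "b")
    _makedirs_31(target)                  # creates the nested directories
    _makedirs_31(target, exist_ok=True)   # existing tree is tolerated
    try:
        _makedirs_31(target)              # exist_ok defaults to False
    except OSError as exc:
        assert exc.errno == errno.EEXIST  # only EEXIST reaches the caller here
```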
diff --git a/spaces/Realcat/image-matching-webui/third_party/TopicFM/src/models/utils/supervision.py b/spaces/Realcat/image-matching-webui/third_party/TopicFM/src/models/utils/supervision.py
deleted file mode 100644
index 86f167e95439d588c998ca32b9296c3482484215..0000000000000000000000000000000000000000
--- a/spaces/Realcat/image-matching-webui/third_party/TopicFM/src/models/utils/supervision.py
+++ /dev/null
@@ -1,172 +0,0 @@
-from math import log
-from loguru import logger
-
-import torch
-from einops import repeat
-from kornia.utils import create_meshgrid
-
-from .geometry import warp_kpts
-
-############## ↓ Coarse-Level supervision ↓ ##############
-
-
-@torch.no_grad()
-def mask_pts_at_padded_regions(grid_pt, mask):
- """For megadepth dataset, zero-padding exists in images"""
- mask = repeat(mask, "n h w -> n (h w) c", c=2)
- grid_pt[~mask.bool()] = 0
- return grid_pt
-
-
-@torch.no_grad()
-def spvs_coarse(data, config):
- """
- Update:
- data (dict): {
- "conf_matrix_gt": [N, hw0, hw1],
- 'spv_b_ids': [M]
- 'spv_i_ids': [M]
- 'spv_j_ids': [M]
- 'spv_w_pt0_i': [N, hw0, 2], in original image resolution
- 'spv_pt1_i': [N, hw1, 2], in original image resolution
- }
-
- NOTE:
- - for scannet dataset, there're 3 kinds of resolution {i, c, f}
- - for megadepth dataset, there're 4 kinds of resolution {i, i_resize, c, f}
- """
- # 1. misc
- device = data["image0"].device
- N, _, H0, W0 = data["image0"].shape
- _, _, H1, W1 = data["image1"].shape
- scale = config["MODEL"]["RESOLUTION"][0]
- scale0 = scale * data["scale0"][:, None] if "scale0" in data else scale
- scale1 = scale * data["scale1"][:, None] if "scale0" in data else scale
- h0, w0, h1, w1 = map(lambda x: x // scale, [H0, W0, H1, W1])
-
- # 2. warp grids
- # create kpts in meshgrid and resize them to image resolution
- grid_pt0_c = (
- create_meshgrid(h0, w0, False, device).reshape(1, h0 * w0, 2).repeat(N, 1, 1)
- ) # [N, hw, 2]
- grid_pt0_i = scale0 * grid_pt0_c
- grid_pt1_c = (
- create_meshgrid(h1, w1, False, device).reshape(1, h1 * w1, 2).repeat(N, 1, 1)
- )
- grid_pt1_i = scale1 * grid_pt1_c
-
- # mask padded region to (0, 0), so no need to manually mask conf_matrix_gt
- if "mask0" in data:
- grid_pt0_i = mask_pts_at_padded_regions(grid_pt0_i, data["mask0"])
- grid_pt1_i = mask_pts_at_padded_regions(grid_pt1_i, data["mask1"])
-
- # warp kpts bi-directionally and resize them to coarse-level resolution
- # (no depth consistency check, since it leads to worse results experimentally)
- # (unhandled edge case: points with 0-depth will be warped to the left-up corner)
- _, w_pt0_i = warp_kpts(
- grid_pt0_i,
- data["depth0"],
- data["depth1"],
- data["T_0to1"],
- data["K0"],
- data["K1"],
- )
- _, w_pt1_i = warp_kpts(
- grid_pt1_i,
- data["depth1"],
- data["depth0"],
- data["T_1to0"],
- data["K1"],
- data["K0"],
- )
- w_pt0_c = w_pt0_i / scale1
- w_pt1_c = w_pt1_i / scale0
-
- # 3. check if mutual nearest neighbor
- w_pt0_c_round = w_pt0_c[:, :, :].round().long()
- nearest_index1 = w_pt0_c_round[..., 0] + w_pt0_c_round[..., 1] * w1
- w_pt1_c_round = w_pt1_c[:, :, :].round().long()
- nearest_index0 = w_pt1_c_round[..., 0] + w_pt1_c_round[..., 1] * w0
-
- # corner case: out of boundary
- def out_bound_mask(pt, w, h):
- return (
- (pt[..., 0] < 0) + (pt[..., 0] >= w) + (pt[..., 1] < 0) + (pt[..., 1] >= h)
- )
-
- nearest_index1[out_bound_mask(w_pt0_c_round, w1, h1)] = 0
- nearest_index0[out_bound_mask(w_pt1_c_round, w0, h0)] = 0
-
- loop_back = torch.stack(
- [nearest_index0[_b][_i] for _b, _i in enumerate(nearest_index1)], dim=0
- )
- correct_0to1 = loop_back == torch.arange(h0 * w0, device=device)[None].repeat(N, 1)
- correct_0to1[:, 0] = False # ignore the top-left corner
-
- # 4. construct a gt conf_matrix
- conf_matrix_gt = torch.zeros(N, h0 * w0, h1 * w1, device=device)
- b_ids, i_ids = torch.where(correct_0to1 != 0)
- j_ids = nearest_index1[b_ids, i_ids]
-
- conf_matrix_gt[b_ids, i_ids, j_ids] = 1
- data.update({"conf_matrix_gt": conf_matrix_gt})
-
- # 5. save coarse matches(gt) for training fine level
- if len(b_ids) == 0:
- logger.warning(f"No groundtruth coarse match found for: {data['pair_names']}")
- # this won't affect fine-level loss calculation
- b_ids = torch.tensor([0], device=device)
- i_ids = torch.tensor([0], device=device)
- j_ids = torch.tensor([0], device=device)
-
- data.update({"spv_b_ids": b_ids, "spv_i_ids": i_ids, "spv_j_ids": j_ids})
-
- # 6. save intermediate results (for fast fine-level computation)
- data.update({"spv_w_pt0_i": w_pt0_i, "spv_pt1_i": grid_pt1_i})
-
-
-def compute_supervision_coarse(data, config):
- assert (
- len(set(data["dataset_name"])) == 1
- ), "Do not support mixed datasets training!"
- data_source = data["dataset_name"][0]
- if data_source.lower() in ["scannet", "megadepth"]:
- spvs_coarse(data, config)
- else:
- raise ValueError(f"Unknown data source: {data_source}")
-
-
-############## ↓ Fine-Level supervision ↓ ##############
-
-
-@torch.no_grad()
-def spvs_fine(data, config):
- """
- Update:
- data (dict):{
- "expec_f_gt": [M, 2]}
- """
- # 1. misc
- # w_pt0_i, pt1_i = data.pop('spv_w_pt0_i'), data.pop('spv_pt1_i')
- w_pt0_i, pt1_i = data["spv_w_pt0_i"], data["spv_pt1_i"]
- scale = config["MODEL"]["RESOLUTION"][1]
- radius = config["MODEL"]["FINE_WINDOW_SIZE"] // 2
-
- # 2. get coarse prediction
- b_ids, i_ids, j_ids = data["b_ids"], data["i_ids"], data["j_ids"]
-
- # 3. compute gt
- scale = scale * data["scale1"][b_ids] if "scale0" in data else scale
- # `expec_f_gt` might exceed the window, i.e. abs(*) > 1, which would be filtered later
- expec_f_gt = (
- (w_pt0_i[b_ids, i_ids] - pt1_i[b_ids, j_ids]) / scale / radius
- ) # [M, 2]
- data.update({"expec_f_gt": expec_f_gt})
-
-
-def compute_supervision_fine(data, config):
- data_source = data["dataset_name"][0]
- if data_source.lower() in ["scannet", "megadepth"]:
- spvs_fine(data, config)
- else:
- raise NotImplementedError
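The coarse supervision above keeps a match only when it survives a mutual-nearest-neighbour check: cell `i` of image0 must warp onto cell `j` of image1 while `j` warps back onto `i`. A toy, self-contained sketch of that `loop_back` logic (indices are made up; not part of the original file):

```python
# Toy mutual-nearest-neighbour check mirroring step 3 of spvs_coarse.
import torch

hw0 = 4
nearest_index1 = torch.tensor([[2, 0, 3, 1]])  # image0 cell i -> image1 cell
nearest_index0 = torch.tensor([[1, 3, 0, 0]])  # image1 cell j -> image0 cell

loop_back = torch.stack(
    [nearest_index0[b][idx] for b, idx in enumerate(nearest_index1)], dim=0
)
correct_0to1 = loop_back == torch.arange(hw0)[None]
print(correct_0to1)  # tensor([[ True,  True, False,  True]])
```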
diff --git a/spaces/RobLi/ControlNet-v1-1/depth_estimator.py b/spaces/RobLi/ControlNet-v1-1/depth_estimator.py
deleted file mode 100644
index 8af14987f58b59329e5c8441dec43f1075a29d8b..0000000000000000000000000000000000000000
--- a/spaces/RobLi/ControlNet-v1-1/depth_estimator.py
+++ /dev/null
@@ -1,25 +0,0 @@
-import numpy as np
-import PIL.Image
-from controlnet_aux.util import HWC3
-from transformers import pipeline
-
-from cv_utils import resize_image
-
-
-class DepthEstimator:
- def __init__(self):
- self.model = pipeline('depth-estimation')
-
- def __call__(self, image: np.ndarray, **kwargs) -> PIL.Image.Image:
- detect_resolution = kwargs.pop('detect_resolution', 512)
- image_resolution = kwargs.pop('image_resolution', 512)
- image = np.array(image)
- image = HWC3(image)
- image = resize_image(image, resolution=detect_resolution)
- image = PIL.Image.fromarray(image)
- image = self.model(image)
- image = image['depth']
- image = np.array(image)
- image = HWC3(image)
- image = resize_image(image, resolution=image_resolution)
- return PIL.Image.fromarray(image)
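A minimal usage sketch for the deleted `DepthEstimator` wrapper (assumes the class and its `cv_utils`/`controlnet_aux` dependencies are importable; `input.png` is a placeholder path):

```python
# Hedged usage sketch for DepthEstimator; the first call downloads the
# default Hugging Face depth-estimation pipeline weights.
import numpy as np
import PIL.Image

estimator = DepthEstimator()
rgb = np.array(PIL.Image.open("input.png").convert("RGB"))
depth = estimator(rgb, detect_resolution=512, image_resolution=512)
depth.save("depth.png")  # the wrapper returns a PIL image of the depth map
```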
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/configs/_base_/models/pspnet_unet_s5-d16.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/configs/_base_/models/pspnet_unet_s5-d16.py
deleted file mode 100644
index fcff9ec4f41fad158344ecd77313dc14564f3682..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/configs/_base_/models/pspnet_unet_s5-d16.py
+++ /dev/null
@@ -1,50 +0,0 @@
-# model settings
-norm_cfg = dict(type='SyncBN', requires_grad=True)
-model = dict(
- type='EncoderDecoder',
- pretrained=None,
- backbone=dict(
- type='UNet',
- in_channels=3,
- base_channels=64,
- num_stages=5,
- strides=(1, 1, 1, 1, 1),
- enc_num_convs=(2, 2, 2, 2, 2),
- dec_num_convs=(2, 2, 2, 2),
- downsamples=(True, True, True, True),
- enc_dilations=(1, 1, 1, 1, 1),
- dec_dilations=(1, 1, 1, 1),
- with_cp=False,
- conv_cfg=None,
- norm_cfg=norm_cfg,
- act_cfg=dict(type='ReLU'),
- upsample_cfg=dict(type='InterpConv'),
- norm_eval=False),
- decode_head=dict(
- type='PSPHead',
- in_channels=64,
- in_index=4,
- channels=16,
- pool_scales=(1, 2, 3, 6),
- dropout_ratio=0.1,
- num_classes=2,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
- auxiliary_head=dict(
- type='FCNHead',
- in_channels=128,
- in_index=3,
- channels=64,
- num_convs=1,
- concat_input=False,
- dropout_ratio=0.1,
- num_classes=2,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
- # model training and testing settings
- train_cfg=dict(),
- test_cfg=dict(mode='slide', crop_size=256, stride=170))
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/hooks/logger/pavi.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/hooks/logger/pavi.py
deleted file mode 100644
index 1dcf146d8163aff1363e9764999b0a74d674a595..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/hooks/logger/pavi.py
+++ /dev/null
@@ -1,117 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import json
-import os
-import os.path as osp
-
-import torch
-import yaml
-
-import annotator.uniformer.mmcv as mmcv
-from ....parallel.utils import is_module_wrapper
-from ...dist_utils import master_only
-from ..hook import HOOKS
-from .base import LoggerHook
-
-
-@HOOKS.register_module()
-class PaviLoggerHook(LoggerHook):
-
- def __init__(self,
- init_kwargs=None,
- add_graph=False,
- add_last_ckpt=False,
- interval=10,
- ignore_last=True,
- reset_flag=False,
- by_epoch=True,
- img_key='img_info'):
- super(PaviLoggerHook, self).__init__(interval, ignore_last, reset_flag,
- by_epoch)
- self.init_kwargs = init_kwargs
- self.add_graph = add_graph
- self.add_last_ckpt = add_last_ckpt
- self.img_key = img_key
-
- @master_only
- def before_run(self, runner):
- super(PaviLoggerHook, self).before_run(runner)
- try:
- from pavi import SummaryWriter
- except ImportError:
- raise ImportError('Please run "pip install pavi" to install pavi.')
-
- self.run_name = runner.work_dir.split('/')[-1]
-
- if not self.init_kwargs:
- self.init_kwargs = dict()
- self.init_kwargs['name'] = self.run_name
- self.init_kwargs['model'] = runner._model_name
- if runner.meta is not None:
- if 'config_dict' in runner.meta:
- config_dict = runner.meta['config_dict']
- assert isinstance(
- config_dict,
- dict), ('meta["config_dict"] has to be of a dict, '
- f'but got {type(config_dict)}')
- elif 'config_file' in runner.meta:
- config_file = runner.meta['config_file']
- config_dict = dict(mmcv.Config.fromfile(config_file))
- else:
- config_dict = None
- if config_dict is not None:
- # 'max_.*iter' is parsed in pavi sdk as the maximum iterations
- # to properly set up the progress bar.
- config_dict = config_dict.copy()
- config_dict.setdefault('max_iter', runner.max_iters)
- # non-serializable values are first converted in
- # mmcv.dump to json
- config_dict = json.loads(
- mmcv.dump(config_dict, file_format='json'))
- session_text = yaml.dump(config_dict)
- self.init_kwargs['session_text'] = session_text
- self.writer = SummaryWriter(**self.init_kwargs)
-
- def get_step(self, runner):
- """Get the total training step/epoch."""
- if self.get_mode(runner) == 'val' and self.by_epoch:
- return self.get_epoch(runner)
- else:
- return self.get_iter(runner)
-
- @master_only
- def log(self, runner):
- tags = self.get_loggable_tags(runner, add_mode=False)
- if tags:
- self.writer.add_scalars(
- self.get_mode(runner), tags, self.get_step(runner))
-
- @master_only
- def after_run(self, runner):
- if self.add_last_ckpt:
- ckpt_path = osp.join(runner.work_dir, 'latest.pth')
- if osp.islink(ckpt_path):
- ckpt_path = osp.join(runner.work_dir, os.readlink(ckpt_path))
-
- if osp.isfile(ckpt_path):
- # runner.epoch += 1 has been done before `after_run`.
- iteration = runner.epoch if self.by_epoch else runner.iter
- return self.writer.add_snapshot_file(
- tag=self.run_name,
- snapshot_file_path=ckpt_path,
- iteration=iteration)
-
- # flush the buffer and send a task ending signal to Pavi
- self.writer.close()
-
- @master_only
- def before_epoch(self, runner):
- if runner.epoch == 0 and self.add_graph:
- if is_module_wrapper(runner.model):
- _model = runner.model.module
- else:
- _model = runner.model
- device = next(_model.parameters()).device
- data = next(iter(runner.data_loader))
- image = data[self.img_key][0:1].to(device)
- with torch.no_grad():
- self.writer.add_graph(_model, image)
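Because `PaviLoggerHook` is registered via `@HOOKS.register_module()`, it is normally enabled from an mmcv-style config rather than instantiated by hand. A hedged sketch of that convention (field values are illustrative, not taken from this repo):

```python
# Illustrative mmcv-style config fragment enabling the hook above.
log_config = dict(
    interval=10,  # matches the hook's default logging interval
    hooks=[
        dict(type='TextLoggerHook'),
        dict(type='PaviLoggerHook', add_last_ckpt=True),
    ])
```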
diff --git a/spaces/SalahZa/Code-Switched-Tunisian-SpeechToText/TunisianASR/results/14epoch_tunisian/1234/app.py b/spaces/SalahZa/Code-Switched-Tunisian-SpeechToText/TunisianASR/results/14epoch_tunisian/1234/app.py
deleted file mode 100644
index 91c07521e7437916500cb6f5d17972bdab624d4f..0000000000000000000000000000000000000000
--- a/spaces/SalahZa/Code-Switched-Tunisian-SpeechToText/TunisianASR/results/14epoch_tunisian/1234/app.py
+++ /dev/null
@@ -1,797 +0,0 @@
-import os
-import sys
-import torch
-import logging
-import speechbrain as sb
-from speechbrain.utils.distributed import run_on_main
-from hyperpyyaml import load_hyperpyyaml
-from pathlib import Path
-import torchaudio.transforms as T
-from cv_train import ASRCV
-import torchaudio
-import numpy as np
-import kenlm
-from pyctcdecode import build_ctcdecoder
-import re
-from torch.nn.utils.rnn import pad_sequence
-import torch.optim as optim
-import torch.nn as nn
-
-
-# Commented out IPython magic to ensure Python compatibility.
-hparams_file, run_opts, overrides = sb.parse_arguments(["TunisianASR/semi_trained.yaml"])
-
-# If distributed_launch=True then
-# create ddp_group with the right communication protocol
-sb.utils.distributed.ddp_init_group(run_opts)
-
-with open(hparams_file) as fin:
- hparams = load_hyperpyyaml(fin, overrides)
-
-# Create experiment directory
-sb.create_experiment_directory(
- experiment_directory=hparams["output_folder"],
- hyperparams_to_save=hparams_file,
- overrides=overrides,
-)
-# Dataset prep (parsing Librispeech)
-
-def dataio_prepare(hparams):
- """This function prepares the datasets to be used in the brain class.
- It also defines the data processing pipeline through user-defined functions."""
-
- # 1. Define datasets
- data_folder = hparams["data_folder"]
-
- train_data = sb.dataio.dataset.DynamicItemDataset.from_csv(
- csv_path=hparams["train_csv"], replacements={"data_root": data_folder},
- )
-
- if hparams["sorting"] == "ascending":
- # we sort training data to speed up training and get better results.
- train_data = train_data.filtered_sorted(
- sort_key="duration",
- key_max_value={"duration": hparams["avoid_if_longer_than"]},
- )
- # when sorting do not shuffle in dataloader ! otherwise is pointless
- hparams["dataloader_options"]["shuffle"] = False
-
- elif hparams["sorting"] == "descending":
- train_data = train_data.filtered_sorted(
- sort_key="duration",
- reverse=True,
- key_max_value={"duration": hparams["avoid_if_longer_than"]},
- )
- # when sorting do not shuffle in dataloader ! otherwise is pointless
- hparams["dataloader_options"]["shuffle"] = False
-
- elif hparams["sorting"] == "random":
- pass
-
- else:
- raise NotImplementedError(
- "sorting must be random, ascending or descending"
- )
-
- valid_data = sb.dataio.dataset.DynamicItemDataset.from_csv(
- csv_path=hparams["valid_csv"], replacements={"data_root": data_folder},
- )
- # We also sort the validation data so it is faster to validate
- valid_data = valid_data.filtered_sorted(sort_key="duration")
- test_datasets = {}
- for csv_file in hparams["test_csv"]:
- name = Path(csv_file).stem
- test_datasets[name] = sb.dataio.dataset.DynamicItemDataset.from_csv(
- csv_path=csv_file, replacements={"data_root": data_folder}
- )
- test_datasets[name] = test_datasets[name].filtered_sorted(
- sort_key="duration"
- )
-
- datasets = [train_data, valid_data] + [i for k, i in test_datasets.items()]
-
-
- # 2. Define audio pipeline:
- @sb.utils.data_pipeline.takes("wav")
- @sb.utils.data_pipeline.provides("sig")
- def audio_pipeline(wav):
- info = torchaudio.info(wav)
- sig = sb.dataio.dataio.read_audio(wav)
- if len(sig.shape)>1 :
- sig = torch.mean(sig, dim=1)
- resampled = torchaudio.transforms.Resample(
- info.sample_rate, hparams["sample_rate"],
- )(sig)
- return resampled
-
- sb.dataio.dataset.add_dynamic_item(datasets, audio_pipeline)
- label_encoder = sb.dataio.encoder.CTCTextEncoder()
-
- # 3. Define text pipeline:
- @sb.utils.data_pipeline.takes("wrd")
- @sb.utils.data_pipeline.provides(
- "wrd", "char_list", "tokens_list", "tokens"
- )
- def text_pipeline(wrd):
- yield wrd
- char_list = list(wrd)
- yield char_list
- tokens_list = label_encoder.encode_sequence(char_list)
- yield tokens_list
- tokens = torch.LongTensor(tokens_list)
- yield tokens
-
- sb.dataio.dataset.add_dynamic_item(datasets, text_pipeline)
- lab_enc_file = os.path.join(hparams["save_folder"], "label_encoder.txt")
- special_labels = {
- "blank_label": hparams["blank_index"],
- "unk_label": hparams["unk_index"]
- }
- label_encoder.load_or_create(
- path=lab_enc_file,
- from_didatasets=[train_data],
- output_key="char_list",
- special_labels=special_labels,
- sequence_input=True,
- )
-
- # 4. Set output:
- sb.dataio.dataset.set_output_keys(
- datasets, ["id", "sig", "wrd", "char_list", "tokens"],
- )
- return train_data, valid_data,test_datasets, label_encoder
-
-class ASR(sb.core.Brain):
- def compute_forward(self, batch, stage):
- """Forward computations from the waveform batches to the output probabilities."""
-
- batch = batch.to(self.device)
- wavs, wav_lens = batch.sig
- wavs, wav_lens = wavs.to(self.device), wav_lens.to(self.device)
-
- if stage == sb.Stage.TRAIN:
- if hasattr(self.hparams, "augmentation"):
- wavs = self.hparams.augmentation(wavs, wav_lens)
-
- # Forward pass
- feats = self.modules.wav2vec2(wavs, wav_lens)
- x = self.modules.enc(feats)
- logits = self.modules.ctc_lin(x)
- p_ctc = self.hparams.log_softmax(logits)
-
- return p_ctc, wav_lens
-
- def custom_encode(self,wavs,wav_lens) :
- wavs = wavs.to("cpu")
-        if wav_lens is not None: wav_lens = wav_lens.to(self.device)
-
- feats = self.modules.wav2vec2(wavs, wav_lens)
- x = self.modules.enc(feats)
- logits = self.modules.ctc_lin(x)
- p_ctc = self.hparams.log_softmax(logits)
-
- return feats,p_ctc
-
-
-
- def compute_objectives(self, predictions, batch, stage):
- """Computes the loss (CTC) given predictions and targets."""
-
- p_ctc, wav_lens = predictions
-
- ids = batch.id
- tokens, tokens_lens = batch.tokens
-
- loss = self.hparams.ctc_cost(p_ctc, tokens, wav_lens, tokens_lens)
-
- if stage != sb.Stage.TRAIN:
- predicted_tokens = sb.decoders.ctc_greedy_decode(
- p_ctc, wav_lens, blank_id=self.hparams.blank_index
- )
- # Decode token terms to words
- if self.hparams.use_language_modelling:
- predicted_words = []
- for logs in p_ctc:
- text = decoder.decode(logs.detach().cpu().numpy())
- predicted_words.append(text.split(" "))
- else:
- predicted_words = [
- "".join(self.tokenizer.decode_ndim(utt_seq)).split(" ")
- for utt_seq in predicted_tokens
- ]
- # Convert indices to words
- target_words = [wrd.split(" ") for wrd in batch.wrd]
-
- self.wer_metric.append(ids, predicted_words, target_words)
- self.cer_metric.append(ids, predicted_words, target_words)
-
- return loss
-
- def fit_batch(self, batch):
- """Train the parameters given a single batch in input"""
- should_step = self.step % self.grad_accumulation_factor == 0
- # Managing automatic mixed precision
- # TOFIX: CTC fine-tuning currently is unstable
- # This is certainly due to CTC being done in fp16 instead of fp32
- if self.auto_mix_prec:
- with torch.cuda.amp.autocast():
- with self.no_sync():
- outputs = self.compute_forward(batch, sb.Stage.TRAIN)
- loss = self.compute_objectives(outputs, batch, sb.Stage.TRAIN)
- with self.no_sync(not should_step):
- self.scaler.scale(
- loss / self.grad_accumulation_factor
- ).backward()
- if should_step:
-
- if not self.hparams.wav2vec2.freeze:
- self.scaler.unscale_(self.wav2vec_optimizer)
- self.scaler.unscale_(self.model_optimizer)
- if self.check_gradients(loss):
- if not self.hparams.wav2vec2.freeze:
- if self.optimizer_step >= self.hparams.warmup_steps:
- self.scaler.step(self.wav2vec_optimizer)
- self.scaler.step(self.model_optimizer)
- self.scaler.update()
- self.zero_grad()
- self.optimizer_step += 1
- else:
- # This is mandatory because HF models have a weird behavior with DDP
- # on the forward pass
- with self.no_sync():
- outputs = self.compute_forward(batch, sb.Stage.TRAIN)
-
- loss = self.compute_objectives(outputs, batch, sb.Stage.TRAIN)
-
- with self.no_sync(not should_step):
- (loss / self.grad_accumulation_factor).backward()
- if should_step:
- if self.check_gradients(loss):
- if not self.hparams.wav2vec2.freeze:
- if self.optimizer_step >= self.hparams.warmup_steps:
- self.wav2vec_optimizer.step()
- self.model_optimizer.step()
- self.zero_grad()
- self.optimizer_step += 1
-
- self.on_fit_batch_end(batch, outputs, loss, should_step)
- return loss.detach().cpu()
-
- def evaluate_batch(self, batch, stage):
- """Computations needed for validation/test batches"""
- predictions = self.compute_forward(batch, stage=stage)
- with torch.no_grad():
- loss = self.compute_objectives(predictions, batch, stage=stage)
- return loss.detach()
-
- def on_stage_start(self, stage, epoch):
- """Gets called at the beginning of each epoch"""
- if stage != sb.Stage.TRAIN:
- self.cer_metric = self.hparams.cer_computer()
- self.wer_metric = self.hparams.error_rate_computer()
-
- def on_stage_end(self, stage, stage_loss, epoch):
- """Gets called at the end of an epoch."""
- # Compute/store important stats
- stage_stats = {"loss": stage_loss}
- if stage == sb.Stage.TRAIN:
- self.train_stats = stage_stats
- else:
- stage_stats["CER"] = self.cer_metric.summarize("error_rate")
- stage_stats["WER"] = self.wer_metric.summarize("error_rate")
-
- # Perform end-of-iteration things, like annealing, logging, etc.
- if stage == sb.Stage.VALID:
- old_lr_model, new_lr_model = self.hparams.lr_annealing_model(
- stage_stats["loss"]
- )
- old_lr_wav2vec, new_lr_wav2vec = self.hparams.lr_annealing_wav2vec(
- stage_stats["loss"]
- )
- sb.nnet.schedulers.update_learning_rate(
- self.model_optimizer, new_lr_model
- )
- if not self.hparams.wav2vec2.freeze:
- sb.nnet.schedulers.update_learning_rate(
- self.wav2vec_optimizer, new_lr_wav2vec
- )
- self.hparams.train_logger.log_stats(
- stats_meta={
- "epoch": epoch,
- "lr_model": old_lr_model,
- "lr_wav2vec": old_lr_wav2vec,
- },
- train_stats=self.train_stats,
- valid_stats=stage_stats,
- )
- self.checkpointer.save_and_keep_only(
- meta={"WER": stage_stats["WER"]}, min_keys=["WER"],
- )
- elif stage == sb.Stage.TEST:
- self.hparams.train_logger.log_stats(
- stats_meta={"Epoch loaded": self.hparams.epoch_counter.current},
- test_stats=stage_stats,
- )
- with open(self.hparams.wer_file, "w") as w:
- self.wer_metric.write_stats(w)
-
- def init_optimizers(self):
- "Initializes the wav2vec2 optimizer and model optimizer"
-
- # If the wav2vec encoder is unfrozen, we create the optimizer
- if not self.hparams.wav2vec2.freeze:
- self.wav2vec_optimizer = self.hparams.wav2vec_opt_class(
- self.modules.wav2vec2.parameters()
- )
- if self.checkpointer is not None:
- self.checkpointer.add_recoverable(
- "wav2vec_opt", self.wav2vec_optimizer
- )
-
- self.model_optimizer = self.hparams.model_opt_class(
- self.hparams.model.parameters()
- )
-
- if self.checkpointer is not None:
- self.checkpointer.add_recoverable("modelopt", self.model_optimizer)
-
- def zero_grad(self, set_to_none=False):
- if not self.hparams.wav2vec2.freeze:
- self.wav2vec_optimizer.zero_grad(set_to_none)
- self.model_optimizer.zero_grad(set_to_none)
-
-
-from speechbrain.pretrained import EncoderASR,EncoderDecoderASR
-french_asr_model = EncoderASR.from_hparams(source="asr-wav2vec2-commonvoice-fr", savedir="pretrained_models/asr-wav2vec2-commonvoice-fr")
-french_asr_model.to("cpu")
-cvhparams_file, cvrun_opts, cvoverrides = sb.parse_arguments(["EnglishCV/train_en_with_wav2vec.yaml"])
-with open(cvhparams_file) as cvfin:
- cvhparams = load_hyperpyyaml(cvfin, cvoverrides)
-cvrun_opts["device"]="cpu"
-english_asr_model = ASRCV(
- modules=cvhparams["modules"],
- hparams=cvhparams,
- run_opts=cvrun_opts,
- checkpointer=cvhparams["checkpointer"],
- )
-english_asr_model.modules.to("cpu")
-english_asr_model.device="cpu"
-english_asr_model.checkpointer.recover_if_possible(device="cpu")
-run_opts["device"]="cpu"
-print("moving to tunisian model")
-asr_brain = ASR(
- modules=hparams["modules"],
- hparams=hparams,
- run_opts=run_opts,
- checkpointer=hparams["checkpointer"],
-)
-asr_brain.modules.to("cpu")
-asr_brain.checkpointer.recover_if_possible(device="cpu")
-asr_brain.modules.eval()
-english_asr_model.modules.eval()
-french_asr_model.mods.eval()
-asr_brain.modules.to("cpu")
-
-# Commented out IPython magic to ensure Python compatibility.
-# %ls
-
-# UTILS FUNCTIONS
-def get_size_dimensions(arr):
- size_dimensions = []
- while isinstance(arr, list):
- size_dimensions.append(len(arr))
- arr = arr[0]
- return size_dimensions
-
-def scale_array(batch,n):
- scaled_batch = []
-
- for array in batch:
- if(n < len(array)): raise ValueError("Cannot scale Array down")
-
- repeat = round(n/len(array))+1
- scaled_length_array= []
-
- for i in array:
- for j in range(repeat) :
- if(len(scaled_length_array) == n): break
- scaled_length_array.append(i)
-
- scaled_batch.append(scaled_length_array)
-
- return torch.tensor(scaled_batch)
-
-
-def load_paths(wavs_path):
- waveforms = []
- for path in wavs_path :
- waveform, _ = torchaudio.load(path)
- waveforms.append(waveform.squeeze(0))
-    # normalize array lengths to the longest array by padding with 0's
- padded_arrays = pad_sequence(waveforms, batch_first=True)
- return torch.tensor(padded_arrays)
-
-
-
-device = 'cpu'
-verbose = 0
-#FLOW LEVEL FUNCTIONS
-def merge_strategy(embeddings1, embeddings2, embeddings3,post1, post2,post3):
-
-
- post1 = post1.to(device)
- post2 = post2.to(device)
- post3 = post3.to(device)
- embeddings1 = embeddings1.to(device)
- embeddings2 = embeddings2.to(device)
- embeddings3 = embeddings3.to(device)
-
- posteriograms_merged = torch.cat((post1,post2,post3),dim=2)
- embeddings_merged = torch.cat((embeddings1,embeddings2,embeddings3),dim=2)
-
- if(verbose !=0):
- print('MERGED POST ',posteriograms_merged.shape)
- print('MERGED emb ',embeddings_merged.shape)
-
- return torch.cat((posteriograms_merged,embeddings_merged),dim=2).to(device)
-
-def decode(model,wavs,wav_lens):
-
- with torch.no_grad():
- wav_lens = wav_lens.to(model.device)
- encoder_out = model.encode_batch(wavs, wav_lens)
- predictions = model.decoding_function(encoder_out, wav_lens)
- return predictions
-
-def middle_layer(batch, lens):
-
- tn_embeddings, tn_posteriogram = asr_brain.custom_encode(batch,None)
-
- fr_embeddings = french_asr_model.mods.encoder.wav2vec2(batch)
- fr_posteriogram =french_asr_model.encode_batch(batch,lens)
- en_embeddings = english_asr_model.modules.wav2vec2(batch, lens)
- x = english_asr_model.modules.enc(en_embeddings)
- en_posteriogram = english_asr_model.modules.ctc_lin(x)
- #scores, en_posteriogram = english_asr_model.mods.decoder(en_embeddings ,lens)
- if(verbose !=0):
- print('[EMBEDDINGS] FR:',fr_embeddings.shape, "EN:",en_embeddings.shape, "TN:", tn_embeddings.shape)
- print('[POSTERIOGRAM] FR:',fr_posteriogram.shape, "EN:",en_posteriogram.shape,"TN:",tn_posteriogram.shape)
-
-
- bilangual_sample = merge_strategy(fr_embeddings,en_embeddings,tn_embeddings,fr_posteriogram,en_posteriogram,tn_posteriogram)
- return bilangual_sample
-
-class Mixer(sb.core.Brain):
-
- def compute_forward(self, batch, stage):
- """Forward computations from the waveform batches to the output probabilities."""
- wavs, wav_lens = batch.sig
- wavs, wav_lens = wavs.to(self.device), wav_lens.to(self.device)
-
- if stage == sb.Stage.TRAIN:
- if hasattr(self.hparams, "augmentation"):
- wavs = self.hparams.augmentation(wavs, wav_lens)
-
- multi_langual_feats = middle_layer(wavs, wav_lens)
- multi_langual_feats= multi_langual_feats.to(device)
- feats, _ = self.modules.enc(multi_langual_feats)
- logits = self.modules.ctc_lin(feats)
- p_ctc = self.hparams.log_softmax(logits)
-
- if stage!= sb.Stage.TRAIN:
- p_tokens = sb.decoders.ctc_greedy_decode(
- p_ctc, wav_lens, blank_id=self.hparams.blank_index
- )
- else :
- p_tokens = None
- return p_ctc, wav_lens, p_tokens
-
-
- def treat_wav(self,sig):
- multi_langual_feats = middle_layer(sig.to("cpu"), torch.tensor([1]).to("cpu"))
- multi_langual_feats= multi_langual_feats.to(device)
- feats, _ = self.modules.enc(multi_langual_feats)
- logits = self.modules.ctc_lin(feats)
- p_ctc = self.hparams.log_softmax(logits)
- predicted_words =[]
- for logs in p_ctc:
- text = decoder.decode(logs.detach().cpu().numpy())
- predicted_words.append(text.split(" "))
- return " ".join(predicted_words[0])
-
-
- def compute_objectives(self, predictions, batch, stage):
- """Computes the loss (CTC) given predictions and targets."""
-
- p_ctc, wav_lens , predicted_tokens= predictions
-
- ids = batch.id
- tokens, tokens_lens = batch.tokens
-
- loss = self.hparams.ctc_cost(p_ctc, tokens, wav_lens, tokens_lens)
-
-
- if stage == sb.Stage.VALID:
- predicted_words = [
- "".join(self.tokenizer.decode_ndim(utt_seq)).split(" ")
- for utt_seq in predicted_tokens
- ]
- target_words = [wrd.split(" ") for wrd in batch.wrd]
- self.wer_metric.append(ids, predicted_words, target_words)
- self.cer_metric.append(ids, predicted_words, target_words)
- if stage ==sb.Stage.TEST :
- if self.hparams.language_modelling:
- predicted_words = []
- for logs in p_ctc:
- text = decoder.decode(logs.detach().cpu().numpy())
- predicted_words.append(text.split(" "))
- else :
- predicted_words = [
- "".join(self.tokenizer.decode_ndim(utt_seq)).split(" ")
- for utt_seq in predicted_tokens
- ]
-
- target_words = [wrd.split(" ") for wrd in batch.wrd]
- self.wer_metric.append(ids, predicted_words, target_words)
- self.cer_metric.append(ids, predicted_words, target_words)
-
- return loss
-
- def fit_batch(self, batch):
- """Train the parameters given a single batch in input"""
- should_step = self.step % self.grad_accumulation_factor == 0
- # Managing automatic mixed precision
- # TOFIX: CTC fine-tuning currently is unstable
- # This is certainly due to CTC being done in fp16 instead of fp32
- if self.auto_mix_prec:
- with torch.cuda.amp.autocast():
- with self.no_sync():
- outputs = self.compute_forward(batch, sb.Stage.TRAIN)
- loss = self.compute_objectives(outputs, batch, sb.Stage.TRAIN)
- with self.no_sync(not should_step):
- self.scaler.scale(
- loss / self.grad_accumulation_factor
- ).backward()
- if should_step:
-
-
- self.scaler.unscale_(self.model_optimizer)
- if self.check_gradients(loss):
- self.scaler.step(self.model_optimizer)
- self.scaler.update()
- self.zero_grad()
- self.optimizer_step += 1
- else:
- # This is mandatory because HF models have a weird behavior with DDP
- # on the forward pass
- with self.no_sync():
- outputs = self.compute_forward(batch, sb.Stage.TRAIN)
-
- loss = self.compute_objectives(outputs, batch, sb.Stage.TRAIN)
-
- with self.no_sync(not should_step):
- (loss / self.grad_accumulation_factor).backward()
- if should_step:
- if self.check_gradients(loss):
- self.model_optimizer.step()
- self.zero_grad()
- self.optimizer_step += 1
-
- self.on_fit_batch_end(batch, outputs, loss, should_step)
- return loss.detach().cpu()
-
- def evaluate_batch(self, batch, stage):
- """Computations needed for validation/test batches"""
- predictions = self.compute_forward(batch, stage=stage)
- with torch.no_grad():
- loss = self.compute_objectives(predictions, batch, stage=stage)
- return loss.detach()
-
- def on_stage_start(self, stage, epoch):
- """Gets called at the beginning of each epoch"""
- if stage != sb.Stage.TRAIN:
- self.cer_metric = self.hparams.cer_computer()
- self.wer_metric = self.hparams.error_rate_computer()
-
- def on_stage_end(self, stage, stage_loss, epoch):
- """Gets called at the end of an epoch."""
- # Compute/store important stats
- stage_stats = {"loss": stage_loss}
- if stage == sb.Stage.TRAIN:
- self.train_stats = stage_stats
- else:
- stage_stats["CER"] = self.cer_metric.summarize("error_rate")
- stage_stats["WER"] = self.wer_metric.summarize("error_rate")
-
- # Perform end-of-iteration things, like annealing, logging, etc.
- if stage == sb.Stage.VALID:
- old_lr_model, new_lr_model = self.hparams.lr_annealing_model(
- stage_stats["loss"]
- )
- sb.nnet.schedulers.update_learning_rate(
- self.model_optimizer, new_lr_model
- )
- self.hparams.train_logger.log_stats(
- stats_meta={
- "epoch": epoch,
- "lr_model": old_lr_model,
- },
- train_stats=self.train_stats,
- valid_stats=stage_stats,
- )
- self.checkpointer.save_and_keep_only(
- meta={"WER": stage_stats["WER"]}, min_keys=["WER"],
- )
- elif stage == sb.Stage.TEST:
- self.hparams.train_logger.log_stats(
- stats_meta={"Epoch loaded": self.hparams.epoch_counter.current},
- test_stats=stage_stats,
- )
- with open(self.hparams.wer_file, "w") as w:
- self.wer_metric.write_stats(w)
-
- def init_optimizers(self):
-
- self.model_optimizer = self.hparams.model_opt_class(
- self.hparams.model.parameters()
- )
-
- if self.checkpointer is not None:
- self.checkpointer.add_recoverable("modelopt", self.model_optimizer)
-
- def zero_grad(self, set_to_none=False):
-
- self.model_optimizer.zero_grad(set_to_none)
-
-
-
-
-hparams_file, run_opts, overrides = sb.parse_arguments(["cs.yaml"])
-
-# If distributed_launch=True then
-# create ddp_group with the right communication protocol
-sb.utils.distributed.ddp_init_group(run_opts)
-
-with open(hparams_file) as fin:
- hparams = load_hyperpyyaml(fin, overrides)
-
-# Create experiment directory
-sb.create_experiment_directory(
- experiment_directory=hparams["output_folder"],
- hyperparams_to_save=hparams_file,
- overrides=overrides,
-)
-def read_labels_file(labels_file):
- with open(labels_file, "r",encoding="utf-8") as lf:
- lines = lf.read().splitlines()
- division = "==="
- numbers = {}
- for line in lines :
- if division in line :
- break
- string, number = line.split("=>")
- number = int(number)
- string = string[1:-2]
- numbers[number] = string
- return [numbers[x] for x in range(len(numbers))]
-
-label_encoder = sb.dataio.encoder.CTCTextEncoder()
-
-lab_enc_file = os.path.join(hparams["save_folder"], "label_encoder.txt")
-special_labels = {
- "blank_label": hparams["blank_index"],
- "unk_label": hparams["unk_index"]
-}
-label_encoder.load_or_create(
- path=lab_enc_file,
- from_didatasets=[[]],
- output_key="char_list",
- special_labels=special_labels,
- sequence_input=True,
-)
-
-
-labels = read_labels_file(os.path.join(hparams["save_folder"], "label_encoder.txt"))
-labels = [""] + labels[1:-1] + ["1"]
-if hparams["language_modelling"]:
- decoder = build_ctcdecoder(
- labels,
- kenlm_model_path=hparams["ngram_lm_path"], # either .arpa or .bin file
- alpha=0.5, # tuned on a val set
- beta=1, # tuned on a val set
- )
-
-description = """This is a speechbrain-based Automatic Speech Recognition (ASR) model for Tunisian arabic. It outputs code-switched Tunisian transcriptions written in Arabic and Latin characters. It handles Tunisian Arabic, English and French outputs.
-Code-switching is notoriously hard to handle for speech recognition models; the main errors you may encounter using this model are spelling/language-identification errors due to code-switching. We may work on improving this in future models. However, if you do not need code-switching in your transcripts, it is better to use the non-code-switched model, available in another Space from the same author (https://huggingface.co/spaces/SalahZa/Tunisian-Speech-Recognition).
-
-Run is done on CPU to keep this Space free, which leads to quite long running times on long sequences. If you want to transcribe long sequences for your project or research, it is better to use the model directly from its page; instructions for inference on a test set are provided there (https://huggingface.co/SalahZa/Code_Switched_Tunisian_Speech_Recognition). If you need help, feel free to drop an email here: zaiemsalah@gmail.com
-
-Authors :
-* [Salah Zaiem](https://fr.linkedin.com/in/salah-zaiem)
-* [Ahmed Amine Ben Aballah](https://www.linkedin.com/in/aabenz/)
-* [Ata Kaboudi](https://www.linkedin.com/in/ata-kaboudi-63365b1a8)
-* [Amir Kanoun](https://tn.linkedin.com/in/ahmed-amir-kanoun)
-
-More in-depth details and insights are available in a released preprint. Please find the paper [here](https://arxiv.org/abs/2309.11327).
-If you use or refer to this model, please cite :
-
-```
-@misc{abdallah2023leveraging,
- title={Leveraging Data Collection and Unsupervised Learning for Code-switched Tunisian Arabic Automatic Speech Recognition},
- author={Ahmed Amine Ben Abdallah and Ata Kabboudi and Amir Kanoun and Salah Zaiem},
- year={2023},
- eprint={2309.11327},
- archivePrefix={arXiv},
- primaryClass={eess.AS}
-}
-```
-
-"""
-title = "Code-Switched Tunisian Speech Recognition"
-
-
-run_opts["device"]="cpu"
-
-mixer = Mixer(
- modules=hparams["modules"],
- hparams=hparams,
- run_opts=run_opts,
- checkpointer=hparams["checkpointer"],
-)
-mixer.tokenizer = label_encoder
-mixer.device = "cpu"
-mixer.checkpointer.recover_if_possible(device="cpu")
-mixer.modules.eval()
-
-
-
-
-
-
-
-
-device = "cpu"
-mixer.device= "cpu"
-mixer.modules.to("cpu")
-
-from enum import Enum, auto
-class Stage(Enum):
- TRAIN = auto()
- VALID = auto()
- TEST = auto()
-
-asr_brain.on_evaluate_start()
-asr_brain.modules.eval()
-
-
-import gradio as gr
-
-def treat_wav_file(file_mic,file_upload ,asr=mixer, device="cpu") :
- if (file_mic is not None) and (file_upload is not None):
- warn_output = "WARNING: You've uploaded an audio file and used the microphone. The recorded file from the microphone will be used and the uploaded audio will be discarded.\n"
- wav = file_mic
- elif (file_mic is None) and (file_upload is None):
- return "ERROR: You have to either use the microphone or upload an audio file"
- elif file_mic is not None:
- wav = file_mic
- else:
- wav = file_upload
- info = torchaudio.info(wav)
- sr = info.sample_rate
- sig = sb.dataio.dataio.read_audio(wav)
- if len(sig.shape)>1 :
- sig = torch.mean(sig, dim=1)
- sig = torch.unsqueeze(sig, 0)
- tensor_wav = sig.to(device)
- resampled = torchaudio.functional.resample( tensor_wav, sr, 16000)
- sentence = asr.treat_wav(resampled)
- return sentence
-
-gr.Interface(
- fn=treat_wav_file,
- title = title,
- description = description,
- inputs=[gr.Audio(source="microphone", type='filepath', label = "record", optional = True),
- gr.Audio(source="upload", type='filepath', label="filein", optional=True)]
- ,outputs="text").launch()
-
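The Space above decodes CTC posteriors with a pyctcdecode beam-search decoder backed by a KenLM n-gram model. A stripped-down sketch of that decoding path, without the language model and with a made-up label set (illustrative only, not part of the original file):

```python
# Hedged sketch of CTC decoding with pyctcdecode, mirroring the decoder built
# in the app above; pass kenlm_model_path=... to add an n-gram LM as it does.
import numpy as np
from pyctcdecode import build_ctcdecoder

labels = ["", "a", "b", "c", " "]  # index 0 plays the role of the CTC blank
decoder = build_ctcdecoder(labels)

T, V = 50, len(labels)
log_probs = np.log(np.random.dirichlet(np.ones(V), size=T))  # fake [T, V] posteriors
print(decoder.decode(log_probs))
```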
diff --git a/spaces/SerdarHelli/Brain-MR-Image-Generation-with-StyleGAN/README.md b/spaces/SerdarHelli/Brain-MR-Image-Generation-with-StyleGAN/README.md
deleted file mode 100644
index e4f11593371f746645bd4fa1288e15dab513636c..0000000000000000000000000000000000000000
--- a/spaces/SerdarHelli/Brain-MR-Image-Generation-with-StyleGAN/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Brain MR Image Generation GAN
-emoji: 👀
-colorFrom: indigo
-colorTo: yellow
-sdk: gradio
-sdk_version: 2.9.4
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/Smiling333/speechbrain-soundchoice-g2p/README.md b/spaces/Smiling333/speechbrain-soundchoice-g2p/README.md
deleted file mode 100644
index d27ac55a92d402b6a6f9aa34f162c635b7042680..0000000000000000000000000000000000000000
--- a/spaces/Smiling333/speechbrain-soundchoice-g2p/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Speechbrain Soundchoice G2p
-emoji: 😻
-colorFrom: gray
-colorTo: purple
-sdk: gradio
-sdk_version: 3.19.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Sojab/voice-recognition/app.py b/spaces/Sojab/voice-recognition/app.py
deleted file mode 100644
index fb91365640646d3a18063887cad896204b2f6922..0000000000000000000000000000000000000000
--- a/spaces/Sojab/voice-recognition/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/codenamewei/speech-to-text").launch()
\ No newline at end of file
diff --git a/spaces/Sourabh2/English2Manipuri/app.py b/spaces/Sourabh2/English2Manipuri/app.py
deleted file mode 100644
index d2defed1fb6c638fcbf68d5b6bc430e3ac3b8a16..0000000000000000000000000000000000000000
--- a/spaces/Sourabh2/English2Manipuri/app.py
+++ /dev/null
@@ -1,79 +0,0 @@
-import json
-import tensorflow as tf
-from tensorflow.keras.layers import TextVectorization
-import gradio as gr
-import re
-import string
-import numpy as np
-
-with open("./output.json", "r") as json_file:
- loaded_data = json.load(json_file)
-
-text_pair = []
-for item in loaded_data:
- english_sentence = item["english"]
- manipuri_translation = item["manipuri"]
- text_pair.append((english_sentence, manipuri_translation))
-
-strip_chars = string.punctuation + "|"
-strip_chars = strip_chars.replace("[", "")
-strip_chars = strip_chars.replace("]", "")
-
-vocab_size = 15000
-sequence_length = 20
-batch_size = 64
-
-
-def custom_standardization(input_string):
- lowercase = tf.strings.lower(input_string)
- return tf.strings.regex_replace(lowercase, "[%s]" % re.escape(strip_chars), "")
-
-
-eng_vectorization = TextVectorization(
- max_tokens=vocab_size, output_mode="int", output_sequence_length=sequence_length,
-)
-spa_vectorization = TextVectorization(
- max_tokens=vocab_size,
- output_mode="int",
- output_sequence_length=sequence_length + 1,
- standardize=custom_standardization,
-)
-train_pairs = text_pair
-train_eng_texts = [pair[0] for pair in train_pairs]
-train_spa_texts = [pair[1] for pair in train_pairs]
-eng_vectorization.adapt(train_eng_texts)
-spa_vectorization.adapt(train_spa_texts)
-
-reloaded = tf.saved_model.load('translator')
-
-spa_vocab = spa_vectorization.get_vocabulary()
-spa_index_lookup = dict(zip(range(len(spa_vocab)), spa_vocab))
-max_decoded_sentence_length = 20
-
-
-def decode_sequence(input_sentence):
- tokenized_input_sentence = eng_vectorization([input_sentence])
- decoded_sentence = "[start]"
- for i in range(max_decoded_sentence_length):
- tokenized_target_sentence = spa_vectorization([decoded_sentence])[:, :-1]
- predictions = reloaded([tokenized_input_sentence, tokenized_target_sentence])
-
- sampled_token_index = np.argmax(predictions[0, i, :])
- sampled_token = spa_index_lookup[sampled_token_index]
- decoded_sentence += " " + sampled_token
-
- if sampled_token == "[end]":
- break
- return decoded_sentence
-
-def total(sen):
- translatedee = decode_sequence(sen)
- updated_text = translatedee.replace("[start]", "").strip()
- return updated_text
-
-
-iface = gr.Interface(fn=total,
- inputs=gr.inputs.Textbox(lines=2, placeholder='Text to translate From English to Manipuri'),
- outputs='text')
-
-iface.launch()
diff --git a/spaces/Stearns/Soar/pysoarlib/SoarWME.py b/spaces/Stearns/Soar/pysoarlib/SoarWME.py
deleted file mode 100644
index 245f0b0deb0d1a427c4b988fd679848835ca90d9..0000000000000000000000000000000000000000
--- a/spaces/Stearns/Soar/pysoarlib/SoarWME.py
+++ /dev/null
@@ -1,87 +0,0 @@
-"""
-This module defines a utility class called SoarWME
-which wraps SML code for adding/removing Soar Working Memory Elements (WME)
-"""
-
-from .WMInterface import WMInterface
-
-class SoarWME(WMInterface):
- """ Wrapper for a single Soar Working Memory Element with a primitive value
-
- It can wrap an int, float, or string type
-
- An instance is not directly tied to an SML wme,
- the user decides how and when soar's working memory is modified
-
- So you can change the value anytime (asynchronously to soar)
- And then modify working memory via add_to_wm, update_wm, and remove_from_wm
- during an agent callback (like BEFORE_INPUT_PHASE)
- """
-
- def __init__(self, att, val):
- """ Initializes the wme, but does not add to working memory yet
-
- :param att: The wme's attribute
- :type att: str
-
- :param val: The wme's value, any of the 3 main primitive types
- :type val: int, float, or str
- """
- WMInterface.__init__(self)
- self.att = att
- self.val = val
- self.wme = None
-
- self.changed = False
-
- if type(val) == int:
- self.create_wme = self._create_int_wme
- elif type(val) == float:
- self.create_wme = self._create_float_wme
- else:
- self.create_wme = self._create_string_wme
-
- def get_attr(self):
- """ Returns the wme's attribute """
- return self.att
-
- def get_value(self):
- """ Returns the wme's value """
- return self.val
-
- def set_value(self, newval):
- """ Set's the wme's value, but also need to call update_wm to change working memory """
- if self.val != newval:
- self.val = newval
- self.changed = True
-
- def __str__(self):
- return str(self.val)
-
-
- ### Internal Methods
-
- def _create_int_wme(self, id, att, val):
- return id.CreateIntWME(att, val)
-
- def _create_float_wme(self, id, att, val):
- return id.CreateFloatWME(att, val)
-
- def _create_string_wme(self, id, att, val):
- return id.CreateStringWME(att, str(val))
-
- def _add_to_wm_impl(self, parent_id):
- """ Creates a wme in soar's working memory rooted at the given parent_id """
- self.wme = self.create_wme(parent_id, self.att, self.val)
-
- def _update_wm_impl(self):
- """ If the value has changed, will update soar's working memory with the new value """
- if self.changed:
- self.wme.Update(self.val)
- self.changed = False
-
- def _remove_from_wm_impl(self):
- """ Will remove the wme from soar's working memory """
- self.wme.DestroyWME()
- self.wme = None
-
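The class docstring above describes the intended lifecycle: mutate the value whenever convenient, then push changes into Soar's working memory from an agent callback. A hedged sketch of that flow (assumes a parent identifier, here called `input_link`, obtained from the agent's input link; the `add_to_wm`/`update_wm`/`remove_from_wm` wrappers come from the `WMInterface` base class the docstring refers to):

```python
# Lifecycle sketch for SoarWME, following the pattern its docstring describes.
wme = SoarWME("speed", 0)

# Inside an agent callback such as BEFORE_INPUT_PHASE:
wme.add_to_wm(input_link)   # create the WME rooted at the parent identifier
wme.set_value(5)            # may happen asynchronously to Soar
wme.update_wm()             # writes the new value only if it changed
wme.remove_from_wm()        # destroys the underlying SML WME
```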
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/adapter/__init__.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/adapter/__init__.py
deleted file mode 100644
index fa55b259756c41f6f01f9a91e57183ff14ea623f..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/adapter/__init__.py
+++ /dev/null
@@ -1,14 +0,0 @@
-# Copyright (c) Microsoft Corporation. All rights reserved.
-# Licensed under the MIT License. See LICENSE in the project root
-# for license information.
-
-from __future__ import annotations
-import typing
-
-if typing.TYPE_CHECKING:
- __all__: list[str]
-
-__all__ = []
-
-access_token = None
-"""Access token used to authenticate with this adapter."""
diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/model_zoo/__init__.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/model_zoo/__init__.py
deleted file mode 100644
index 6204208198d813728cf6419e8eef4a733f20c18f..0000000000000000000000000000000000000000
--- a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/model_zoo/__init__.py
+++ /dev/null
@@ -1,10 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-"""
-Model Zoo API for Detectron2: a collection of functions to create common model architectures
-listed in `MODEL_ZOO.md <https://github.com/facebookresearch/detectron2/blob/master/MODEL_ZOO.md>`_,
-and optionally load their pre-trained weights.
-"""
-
-from .model_zoo import get, get_config_file, get_checkpoint_url, get_config
-
-__all__ = ["get_checkpoint_url", "get", "get_config_file", "get_config"]
diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/runner/hooks/__init__.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/runner/hooks/__init__.py
deleted file mode 100644
index 915af28cefab14a14c1188ed861161080fd138a3..0000000000000000000000000000000000000000
--- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/runner/hooks/__init__.py
+++ /dev/null
@@ -1,29 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from .checkpoint import CheckpointHook
-from .closure import ClosureHook
-from .ema import EMAHook
-from .evaluation import DistEvalHook, EvalHook
-from .hook import HOOKS, Hook
-from .iter_timer import IterTimerHook
-from .logger import (DvcliveLoggerHook, LoggerHook, MlflowLoggerHook,
- NeptuneLoggerHook, PaviLoggerHook, TensorboardLoggerHook,
- TextLoggerHook, WandbLoggerHook)
-from .lr_updater import LrUpdaterHook
-from .memory import EmptyCacheHook
-from .momentum_updater import MomentumUpdaterHook
-from .optimizer import (Fp16OptimizerHook, GradientCumulativeFp16OptimizerHook,
- GradientCumulativeOptimizerHook, OptimizerHook)
-from .profiler import ProfilerHook
-from .sampler_seed import DistSamplerSeedHook
-from .sync_buffer import SyncBuffersHook
-
-__all__ = [
- 'HOOKS', 'Hook', 'CheckpointHook', 'ClosureHook', 'LrUpdaterHook',
- 'OptimizerHook', 'Fp16OptimizerHook', 'IterTimerHook',
- 'DistSamplerSeedHook', 'EmptyCacheHook', 'LoggerHook', 'MlflowLoggerHook',
- 'PaviLoggerHook', 'TextLoggerHook', 'TensorboardLoggerHook',
- 'NeptuneLoggerHook', 'WandbLoggerHook', 'DvcliveLoggerHook',
- 'MomentumUpdaterHook', 'SyncBuffersHook', 'EMAHook', 'EvalHook',
- 'DistEvalHook', 'ProfilerHook', 'GradientCumulativeOptimizerHook',
- 'GradientCumulativeFp16OptimizerHook'
-]
diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/models/decode_heads/lraspp_head.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/models/decode_heads/lraspp_head.py
deleted file mode 100644
index 69bf320934d787aaa11984a0c4effe9ad8015b22..0000000000000000000000000000000000000000
--- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/models/decode_heads/lraspp_head.py
+++ /dev/null
@@ -1,90 +0,0 @@
-import torch
-import torch.nn as nn
-from annotator.uniformer.mmcv import is_tuple_of
-from annotator.uniformer.mmcv.cnn import ConvModule
-
-from annotator.uniformer.mmseg.ops import resize
-from ..builder import HEADS
-from .decode_head import BaseDecodeHead
-
-
-@HEADS.register_module()
-class LRASPPHead(BaseDecodeHead):
- """Lite R-ASPP (LRASPP) head is proposed in Searching for MobileNetV3.
-
-    This head is the improved implementation of `Searching for MobileNetV3
-    <https://arxiv.org/abs/1905.02244>`_.
-
- Args:
- branch_channels (tuple[int]): The number of output channels in every
- each branch. Default: (32, 64).
- """
-
- def __init__(self, branch_channels=(32, 64), **kwargs):
- super(LRASPPHead, self).__init__(**kwargs)
- if self.input_transform != 'multiple_select':
- raise ValueError('in Lite R-ASPP (LRASPP) head, input_transform '
- f'must be \'multiple_select\'. But received '
- f'\'{self.input_transform}\'')
- assert is_tuple_of(branch_channels, int)
- assert len(branch_channels) == len(self.in_channels) - 1
- self.branch_channels = branch_channels
-
- self.convs = nn.Sequential()
- self.conv_ups = nn.Sequential()
- for i in range(len(branch_channels)):
- self.convs.add_module(
- f'conv{i}',
- nn.Conv2d(
- self.in_channels[i], branch_channels[i], 1, bias=False))
- self.conv_ups.add_module(
- f'conv_up{i}',
- ConvModule(
- self.channels + branch_channels[i],
- self.channels,
- 1,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg,
- bias=False))
-
- self.conv_up_input = nn.Conv2d(self.channels, self.channels, 1)
-
- self.aspp_conv = ConvModule(
- self.in_channels[-1],
- self.channels,
- 1,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg,
- bias=False)
- self.image_pool = nn.Sequential(
- nn.AvgPool2d(kernel_size=49, stride=(16, 20)),
- ConvModule(
- self.in_channels[2],
- self.channels,
- 1,
- act_cfg=dict(type='Sigmoid'),
- bias=False))
-
- def forward(self, inputs):
- """Forward function."""
- inputs = self._transform_inputs(inputs)
-
- x = inputs[-1]
-
- x = self.aspp_conv(x) * resize(
- self.image_pool(x),
- size=x.size()[2:],
- mode='bilinear',
- align_corners=self.align_corners)
- x = self.conv_up_input(x)
-
- for i in range(len(self.branch_channels) - 1, -1, -1):
- x = resize(
- x,
- size=inputs[i].size()[2:],
- mode='bilinear',
- align_corners=self.align_corners)
- x = torch.cat([x, self.convs[i](inputs[i])], 1)
- x = self.conv_ups[i](x)
-
- return self.cls_seg(x)
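`LRASPPHead` is registered with mmseg's `HEADS` registry, so it is normally built from a config dict. A hedged, indicative config sketch (channel numbers correspond to a MobileNetV3-Large-style backbone and are illustrative, not taken from this repo):

```python
# Illustrative mmseg-style config for the head above; note that
# input_transform must be 'multiple_select' and that branch_channels has one
# entry fewer than in_channels.
norm_cfg = dict(type='SyncBN', requires_grad=True)
decode_head = dict(
    type='LRASPPHead',
    in_channels=(16, 24, 960),
    in_index=(0, 1, 2),
    channels=128,
    input_transform='multiple_select',
    branch_channels=(32, 64),
    dropout_ratio=0.1,
    num_classes=19,
    norm_cfg=norm_cfg,
    act_cfg=dict(type='ReLU'),
    align_corners=False,
    loss_decode=dict(type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0))
```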
diff --git a/spaces/Superlang/ImageProcessor/annotator/zoe/zoedepth/models/depth_model.py b/spaces/Superlang/ImageProcessor/annotator/zoe/zoedepth/models/depth_model.py
deleted file mode 100644
index fc421c108ea3928c9add62b4c190500d9bd4eda1..0000000000000000000000000000000000000000
--- a/spaces/Superlang/ImageProcessor/annotator/zoe/zoedepth/models/depth_model.py
+++ /dev/null
@@ -1,152 +0,0 @@
-# MIT License
-
-# Copyright (c) 2022 Intelligent Systems Lab Org
-
-# Permission is hereby granted, free of charge, to any person obtaining a copy
-# of this software and associated documentation files (the "Software"), to deal
-# in the Software without restriction, including without limitation the rights
-# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-# copies of the Software, and to permit persons to whom the Software is
-# furnished to do so, subject to the following conditions:
-
-# The above copyright notice and this permission notice shall be included in all
-# copies or substantial portions of the Software.
-
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-# SOFTWARE.
-
-# File author: Shariq Farooq Bhat
-
-import numpy as np
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from torchvision import transforms
-import PIL.Image
-from PIL import Image
-from typing import Union
-
-
-class DepthModel(nn.Module):
- def __init__(self):
- super().__init__()
- self.device = 'cpu'
-
- def to(self, device) -> nn.Module:
- self.device = device
- return super().to(device)
-
- def forward(self, x, *args, **kwargs):
- raise NotImplementedError
-
- def _infer(self, x: torch.Tensor):
- """
- Inference interface for the model
- Args:
- x (torch.Tensor): input tensor of shape (b, c, h, w)
- Returns:
- torch.Tensor: output tensor of shape (b, 1, h, w)
- """
- return self(x)['metric_depth']
-
- def _infer_with_pad_aug(self, x: torch.Tensor, pad_input: bool=True, fh: float=3, fw: float=3, upsampling_mode: str='bicubic', padding_mode="reflect", **kwargs) -> torch.Tensor:
- """
- Inference interface for the model with padding augmentation
- Padding augmentation fixes the boundary artifacts in the output depth map.
- Boundary artifacts are sometimes caused by the model being trained on the NYU raw dataset, which has a black or white border around the image.
- This augmentation pads the input image and crops the prediction back to the original size / view.
-
- Note: This augmentation is not required for the models trained with 'avoid_boundary'=True.
- Args:
- x (torch.Tensor): input tensor of shape (b, c, h, w)
- pad_input (bool, optional): whether to pad the input or not. Defaults to True.
- fh (float, optional): height padding factor. The padding is calculated as sqrt(h/2) * fh. Defaults to 3.
- fw (float, optional): width padding factor. The padding is calculated as sqrt(w/2) * fw. Defaults to 3.
- upsampling_mode (str, optional): upsampling mode. Defaults to 'bicubic'.
- padding_mode (str, optional): padding mode. Defaults to "reflect".
- Returns:
- torch.Tensor: output tensor of shape (b, 1, h, w)
- """
- # assert x is nchw and c = 3
- assert x.dim() == 4, "x must be 4 dimensional, got {}".format(x.dim())
- assert x.shape[1] == 3, "x must have 3 channels, got {}".format(x.shape[1])
-
- if pad_input:
- assert fh > 0 or fw > 0, "at least one of fh and fw must be greater than 0"
- pad_h = int(np.sqrt(x.shape[2]/2) * fh)
- pad_w = int(np.sqrt(x.shape[3]/2) * fw)
- padding = [pad_w, pad_w]
- if pad_h > 0:
- padding += [pad_h, pad_h]
-
- x = F.pad(x, padding, mode=padding_mode, **kwargs)
- out = self._infer(x)
- if out.shape[-2:] != x.shape[-2:]:
- out = F.interpolate(out, size=(x.shape[2], x.shape[3]), mode=upsampling_mode, align_corners=False)
- if pad_input:
- # crop to the original size, handling the case where pad_h or pad_w is 0
- if pad_h > 0:
- out = out[:, :, pad_h:-pad_h,:]
- if pad_w > 0:
- out = out[:, :, :, pad_w:-pad_w]
- return out
-
- def infer_with_flip_aug(self, x, pad_input: bool=True, **kwargs) -> torch.Tensor:
- """
- Inference interface for the model with horizontal flip augmentation
- Horizontal flip augmentation improves the accuracy of the model by averaging the output of the model with and without horizontal flip.
- Args:
- x (torch.Tensor): input tensor of shape (b, c, h, w)
- pad_input (bool, optional): whether to use padding augmentation. Defaults to True.
- Returns:
- torch.Tensor: output tensor of shape (b, 1, h, w)
- """
- # infer with horizontal flip and average
- out = self._infer_with_pad_aug(x, pad_input=pad_input, **kwargs)
- out_flip = self._infer_with_pad_aug(torch.flip(x, dims=[3]), pad_input=pad_input, **kwargs)
- out = (out + torch.flip(out_flip, dims=[3])) / 2
- return out
-
- def infer(self, x, pad_input: bool=True, with_flip_aug: bool=True, **kwargs) -> torch.Tensor:
- """
- Inference interface for the model
- Args:
- x (torch.Tensor): input tensor of shape (b, c, h, w)
- pad_input (bool, optional): whether to use padding augmentation. Defaults to True.
- with_flip_aug (bool, optional): whether to use horizontal flip augmentation. Defaults to True.
- Returns:
- torch.Tensor: output tensor of shape (b, 1, h, w)
- """
- if with_flip_aug:
- return self.infer_with_flip_aug(x, pad_input=pad_input, **kwargs)
- else:
- return self._infer_with_pad_aug(x, pad_input=pad_input, **kwargs)
-
- @torch.no_grad()
- def infer_pil(self, pil_img, pad_input: bool=True, with_flip_aug: bool=True, output_type: str="numpy", **kwargs) -> Union[np.ndarray, PIL.Image.Image, torch.Tensor]:
- """
- Inference interface for the model for PIL image
- Args:
- pil_img (PIL.Image.Image): input PIL image
- pad_input (bool, optional): whether to use padding augmentation. Defaults to True.
- with_flip_aug (bool, optional): whether to use horizontal flip augmentation. Defaults to True.
- output_type (str, optional): output type. Supported values are 'numpy', 'pil' and 'tensor'. Defaults to "numpy".
- """
- x = transforms.ToTensor()(pil_img).unsqueeze(0).to(self.device)
- out_tensor = self.infer(x, pad_input=pad_input, with_flip_aug=with_flip_aug, **kwargs)
- if output_type == "numpy":
- return out_tensor.squeeze().cpu().numpy()
- elif output_type == "pil":
- # uint16 is required for depth pil image
- out_16bit_numpy = (out_tensor.squeeze().cpu().numpy()*256).astype(np.uint16)
- return Image.fromarray(out_16bit_numpy)
- elif output_type == "tensor":
- return out_tensor.squeeze().cpu()
- else:
- raise ValueError(f"output_type {output_type} not supported. Supported values are 'numpy', 'pil' and 'tensor'")
-
\ No newline at end of file
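
The padding augmentation above is easy to sanity-check in isolation. The following standalone sketch uses only torch and numpy (no ZoeDepth imports) and reproduces the pad-then-crop arithmetic of _infer_with_pad_aug with the default factors fh = fw = 3; the input size is an arbitrary example.

import numpy as np
import torch
import torch.nn.functional as F

x = torch.rand(1, 3, 480, 640)             # (b, c, h, w), example size
fh = fw = 3.0                              # default padding factors
pad_h = int(np.sqrt(x.shape[2] / 2) * fh)  # int(sqrt(240) * 3) == 46
pad_w = int(np.sqrt(x.shape[3] / 2) * fw)  # int(sqrt(320) * 3) == 53

# pad width first, then height, matching the padding list built above
padded = F.pad(x, [pad_w, pad_w, pad_h, pad_h], mode="reflect")
print(padded.shape)                        # torch.Size([1, 3, 572, 746])

# after inference on the padded tensor, the prediction is cropped back
cropped = padded[:, :, pad_h:-pad_h, pad_w:-pad_w]
assert cropped.shape == x.shape
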
diff --git a/spaces/TH5314/newbing/src/lib/hooks/use-copy-to-clipboard.tsx b/spaces/TH5314/newbing/src/lib/hooks/use-copy-to-clipboard.tsx
deleted file mode 100644
index 62f7156dca246c46b213151af003a3a177977ccf..0000000000000000000000000000000000000000
--- a/spaces/TH5314/newbing/src/lib/hooks/use-copy-to-clipboard.tsx
+++ /dev/null
@@ -1,33 +0,0 @@
-'use client'
-
-import * as React from 'react'
-
-export interface useCopyToClipboardProps {
- timeout?: number
-}
-
-export function useCopyToClipboard({
- timeout = 2000
-}: useCopyToClipboardProps) {
- const [isCopied, setIsCopied] = React.useState(false)
-
- const copyToClipboard = (value: string) => {
- if (typeof window === 'undefined' || !navigator.clipboard?.writeText) {
- return
- }
-
- if (!value) {
- return
- }
-
- navigator.clipboard.writeText(value).then(() => {
- setIsCopied(true)
-
- setTimeout(() => {
- setIsCopied(false)
- }, timeout)
- })
- }
-
- return { isCopied, copyToClipboard }
-}
diff --git a/spaces/TH5314/newbing/src/lib/isomorphic/node.ts b/spaces/TH5314/newbing/src/lib/isomorphic/node.ts
deleted file mode 100644
index da213ad6a86181979f098309c374da02835db5a0..0000000000000000000000000000000000000000
--- a/spaces/TH5314/newbing/src/lib/isomorphic/node.ts
+++ /dev/null
@@ -1,26 +0,0 @@
-import Debug from 'debug'
-
-const { fetch, setGlobalDispatcher, ProxyAgent } = require('undici')
-const { HttpsProxyAgent } = require('https-proxy-agent')
-const ws = require('ws')
-
-const debug = Debug('bingo')
-
-const httpProxy = process.env.http_proxy || process.env.HTTP_PROXY || process.env.https_proxy || process.env.HTTPS_PROXY;
-let WebSocket = ws.WebSocket
-
-if (httpProxy) {
- setGlobalDispatcher(new ProxyAgent(httpProxy))
- const agent = new HttpsProxyAgent(httpProxy)
- // @ts-ignore
- WebSocket = class extends ws.WebSocket {
- constructor(address: string | URL, options: typeof ws.WebSocket) {
- super(address, {
- ...options,
- agent,
- })
- }
- }
-}
-
-export default { fetch, WebSocket, debug }
diff --git a/spaces/TRI-ML/risk_biased_prediction/app.py b/spaces/TRI-ML/risk_biased_prediction/app.py
deleted file mode 100644
index 8ef951ee1c0ed9a66d8d9d801958fbe3a7d22476..0000000000000000000000000000000000000000
--- a/spaces/TRI-ML/risk_biased_prediction/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-from scripts.scripts_utils.plotly_interface import main
-
-main()
\ No newline at end of file
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/commands/uninstall.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/commands/uninstall.py
deleted file mode 100644
index f198fc313ff57929d95d36216e3e6ecec3877673..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/commands/uninstall.py
+++ /dev/null
@@ -1,113 +0,0 @@
-import logging
-from optparse import Values
-from typing import List
-
-from pip._vendor.packaging.utils import canonicalize_name
-
-from pip._internal.cli import cmdoptions
-from pip._internal.cli.base_command import Command
-from pip._internal.cli.req_command import SessionCommandMixin, warn_if_run_as_root
-from pip._internal.cli.status_codes import SUCCESS
-from pip._internal.exceptions import InstallationError
-from pip._internal.req import parse_requirements
-from pip._internal.req.constructors import (
- install_req_from_line,
- install_req_from_parsed_requirement,
-)
-from pip._internal.utils.misc import (
- check_externally_managed,
- protect_pip_from_modification_on_windows,
-)
-
-logger = logging.getLogger(__name__)
-
-
-class UninstallCommand(Command, SessionCommandMixin):
- """
- Uninstall packages.
-
- pip is able to uninstall most installed packages. Known exceptions are:
-
- - Pure distutils packages installed with ``python setup.py install``, which
- leave behind no metadata to determine what files were installed.
- - Script wrappers installed by ``python setup.py develop``.
- """
-
- usage = """
- %prog [options] ...
- %prog [options] -r ..."""
-
- def add_options(self) -> None:
- self.cmd_opts.add_option(
- "-r",
- "--requirement",
- dest="requirements",
- action="append",
- default=[],
- metavar="file",
- help=(
- "Uninstall all the packages listed in the given requirements "
- "file. This option can be used multiple times."
- ),
- )
- self.cmd_opts.add_option(
- "-y",
- "--yes",
- dest="yes",
- action="store_true",
- help="Don't ask for confirmation of uninstall deletions.",
- )
- self.cmd_opts.add_option(cmdoptions.root_user_action())
- self.cmd_opts.add_option(cmdoptions.override_externally_managed())
- self.parser.insert_option_group(0, self.cmd_opts)
-
- def run(self, options: Values, args: List[str]) -> int:
- session = self.get_default_session(options)
-
- reqs_to_uninstall = {}
- for name in args:
- req = install_req_from_line(
- name,
- isolated=options.isolated_mode,
- )
- if req.name:
- reqs_to_uninstall[canonicalize_name(req.name)] = req
- else:
- logger.warning(
- "Invalid requirement: %r ignored -"
- " the uninstall command expects named"
- " requirements.",
- name,
- )
- for filename in options.requirements:
- for parsed_req in parse_requirements(
- filename, options=options, session=session
- ):
- req = install_req_from_parsed_requirement(
- parsed_req, isolated=options.isolated_mode
- )
- if req.name:
- reqs_to_uninstall[canonicalize_name(req.name)] = req
- if not reqs_to_uninstall:
- raise InstallationError(
- f"You must give at least one requirement to {self.name} (see "
- f'"pip help {self.name}")'
- )
-
- if not options.override_externally_managed:
- check_externally_managed()
-
- protect_pip_from_modification_on_windows(
- modifying_pip="pip" in reqs_to_uninstall
- )
-
- for req in reqs_to_uninstall.values():
- uninstall_pathset = req.uninstall(
- auto_confirm=options.yes,
- verbose=self.verbosity > 0,
- )
- if uninstall_pathset:
- uninstall_pathset.commit()
- if options.root_user_action == "warn":
- warn_if_run_as_root()
- return SUCCESS
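
As a usage note, everything this command class wires up is reachable from pip's normal CLI surface; driving pip in-process is not supported, so a script would shell out instead. A minimal, hedged example (the requirements file name and package are placeholders):

import subprocess
import sys

# Equivalent to: pip uninstall -y -r requirements.txt requests
# -y skips the confirmation prompt and -r reads names from a requirements file,
# matching the options registered in add_options() above.
subprocess.run(
    [sys.executable, "-m", "pip", "uninstall", "-y", "-r", "requirements.txt", "requests"],
    check=True,
)
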
diff --git a/spaces/TencentARC/VLog/models/__init__.py b/spaces/TencentARC/VLog/models/__init__.py
deleted file mode 100644
index 12e8dcd794621abc987f0aaf7cb549059baaf3f9..0000000000000000000000000000000000000000
--- a/spaces/TencentARC/VLog/models/__init__.py
+++ /dev/null
@@ -1,3 +0,0 @@
-from .kts_src import *
-from .clip_model import *
-from .grit_model import *
diff --git a/spaces/TheStinger/Ilaria_RVC/lib/infer_pack/modules.py b/spaces/TheStinger/Ilaria_RVC/lib/infer_pack/modules.py
deleted file mode 100644
index c83289df7c79a4810dacd15c050148544ba0b6a9..0000000000000000000000000000000000000000
--- a/spaces/TheStinger/Ilaria_RVC/lib/infer_pack/modules.py
+++ /dev/null
@@ -1,522 +0,0 @@
-import copy
-import math
-import numpy as np
-import scipy
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-from lib.infer_pack import commons
-from lib.infer_pack.commons import init_weights, get_padding
-from lib.infer_pack.transforms import piecewise_rational_quadratic_transform
-
-
-LRELU_SLOPE = 0.1
-
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
-
-
-class ConvReluNorm(nn.Module):
- def __init__(
- self,
- in_channels,
- hidden_channels,
- out_channels,
- kernel_size,
- n_layers,
- p_dropout,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
- assert n_layers > 1, "Number of layers should be larger than 1."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(
- nn.Conv1d(
- in_channels, hidden_channels, kernel_size, padding=kernel_size // 2
- )
- )
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout))
- for _ in range(n_layers - 1):
- self.conv_layers.append(
- nn.Conv1d(
- hidden_channels,
- hidden_channels,
- kernel_size,
- padding=kernel_size // 2,
- )
- )
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DDSConv(nn.Module):
- """
- Dilated and Depth-Separable Convolution
- """
-
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0):
- super().__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.convs_sep = nn.ModuleList()
- self.convs_1x1 = nn.ModuleList()
- self.norms_1 = nn.ModuleList()
- self.norms_2 = nn.ModuleList()
- for i in range(n_layers):
- dilation = kernel_size**i
- padding = (kernel_size * dilation - dilation) // 2
- self.convs_sep.append(
- nn.Conv1d(
- channels,
- channels,
- kernel_size,
- groups=channels,
- dilation=dilation,
- padding=padding,
- )
- )
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
- self.norms_1.append(LayerNorm(channels))
- self.norms_2.append(LayerNorm(channels))
-
- def forward(self, x, x_mask, g=None):
- if g is not None:
- x = x + g
- for i in range(self.n_layers):
- y = self.convs_sep[i](x * x_mask)
- y = self.norms_1[i](y)
- y = F.gelu(y)
- y = self.convs_1x1[i](y)
- y = self.norms_2[i](y)
- y = F.gelu(y)
- y = self.drop(y)
- x = x + y
- return x * x_mask
-
-
-class WN(torch.nn.Module):
- def __init__(
- self,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0,
- p_dropout=0,
- ):
- super(WN, self).__init__()
- assert kernel_size % 2 == 1
- self.hidden_channels = hidden_channels
- self.kernel_size = (kernel_size,)
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(
- gin_channels, 2 * hidden_channels * n_layers, 1
- )
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight")
-
- for i in range(n_layers):
- dilation = dilation_rate**i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(
- hidden_channels,
- 2 * hidden_channels,
- kernel_size,
- dilation=dilation,
- padding=padding,
- )
- in_layer = torch.nn.utils.weight_norm(in_layer, name="weight")
- self.in_layers.append(in_layer)
-
- # last one is not necessary
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight")
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask, g=None, **kwargs):
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :]
- else:
- g_l = torch.zeros_like(x_in)
-
- acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor)
- acts = self.drop(acts)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- res_acts = res_skip_acts[:, : self.hidden_channels, :]
- x = (x + res_acts) * x_mask
- output = output + res_skip_acts[:, self.hidden_channels :, :]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- if self.gin_channels != 0:
- torch.nn.utils.remove_weight_norm(self.cond_layer)
- for l in self.in_layers:
- torch.nn.utils.remove_weight_norm(l)
- for l in self.res_skip_layers:
- torch.nn.utils.remove_weight_norm(l)
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.convs1 = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2]),
- )
- ),
- ]
- )
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- ]
- )
- self.convs2.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c2(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.convs = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]),
- )
- ),
- ]
- )
- self.convs.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Log(nn.Module):
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
- logdet = torch.sum(-y, [1, 2])
- return y, logdet
- else:
- x = torch.exp(x) * x_mask
- return x
-
-
-class Flip(nn.Module):
- def forward(self, x, *args, reverse=False, **kwargs):
- x = torch.flip(x, [1])
- if not reverse:
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
- return x, logdet
- else:
- return x
-
-
-class ElementwiseAffine(nn.Module):
- def __init__(self, channels):
- super().__init__()
- self.channels = channels
- self.m = nn.Parameter(torch.zeros(channels, 1))
- self.logs = nn.Parameter(torch.zeros(channels, 1))
-
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = self.m + torch.exp(self.logs) * x
- y = y * x_mask
- logdet = torch.sum(self.logs * x_mask, [1, 2])
- return y, logdet
- else:
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
- return x
-
-
-class ResidualCouplingLayer(nn.Module):
- def __init__(
- self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=0,
- gin_channels=0,
- mean_only=False,
- ):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = WN(
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=p_dropout,
- gin_channels=gin_channels,
- )
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels] * 2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1, 2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
-
- def remove_weight_norm(self):
- self.enc.remove_weight_norm()
-
-
-class ConvFlow(nn.Module):
- def __init__(
- self,
- in_channels,
- filter_channels,
- kernel_size,
- n_layers,
- num_bins=10,
- tail_bound=5.0,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.num_bins = num_bins
- self.tail_bound = tail_bound
- self.half_channels = in_channels // 2
-
- self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
- self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0)
- self.proj = nn.Conv1d(
- filter_channels, self.half_channels * (num_bins * 3 - 1), 1
- )
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
- h = self.pre(x0)
- h = self.convs(h, x_mask, g=g)
- h = self.proj(h) * x_mask
-
- b, c, t = x0.shape
- h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?]
-
- unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt(
- self.filter_channels
- )
- unnormalized_derivatives = h[..., 2 * self.num_bins :]
-
- x1, logabsdet = piecewise_rational_quadratic_transform(
- x1,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=reverse,
- tails="linear",
- tail_bound=self.tail_bound,
- )
-
- x = torch.cat([x0, x1], 1) * x_mask
- logdet = torch.sum(logabsdet * x_mask, [1, 2])
- if not reverse:
- return x, logdet
- else:
- return x
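
The simple flows near the top of this module can be exercised on their own. The sketch below assumes the deleted module were still importable under its original path lib.infer_pack.modules (an assumption for illustration only) and checks that ElementwiseAffine is exactly invertible and returns a per-sample log-determinant.

import torch
from lib.infer_pack.modules import ElementwiseAffine  # assumed import path

x = torch.randn(2, 4, 10)      # (batch, channels, frames)
x_mask = torch.ones(2, 1, 10)  # keep every frame

flow = ElementwiseAffine(channels=4)
y, logdet = flow(x, x_mask)                # forward: y = (m + exp(logs) * x) * mask
x_rec = flow(y, x_mask, reverse=True)      # reverse: recovers x exactly

print(torch.allclose(x, x_rec, atol=1e-6)) # True
print(logdet.shape)                        # torch.Size([2]), one log-det per sample
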
diff --git a/spaces/UserXTheUnknown/stablediffusion-infinity/index.html b/spaces/UserXTheUnknown/stablediffusion-infinity/index.html
deleted file mode 100644
index 7f93791e6c90fe9ea92aa398dbb650cfc8af78cc..0000000000000000000000000000000000000000
--- a/spaces/UserXTheUnknown/stablediffusion-infinity/index.html
+++ /dev/null
@@ -1,404 +0,0 @@
-
-
-Stablediffusion Infinity
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/atimughal662/InfoFusion/src/create_data.py b/spaces/atimughal662/InfoFusion/src/create_data.py
deleted file mode 100644
index 52e6257319bdee820989df334e14122cf58b68cc..0000000000000000000000000000000000000000
--- a/spaces/atimughal662/InfoFusion/src/create_data.py
+++ /dev/null
@@ -1,1847 +0,0 @@
-"""
-Dataset creation tools.
-
-Keep top-level imports clean of non-trivial imports for specific tools,
-because this file is imported for various purposes.
-"""
-
-import ast
-import concurrent.futures
-import contextlib
-import hashlib
-import json
-import os
-import shutil
-import signal
-import sys
-import traceback
-from concurrent.futures import ProcessPoolExecutor
-
-import psutil
-import pytest
-import pandas as pd
-import numpy as np
-from tqdm import tqdm
-
-from utils import flatten_list, remove
-
-
-def parse_rst_file(filepath):
- with open(filepath, 'r') as f:
- input_data = f.read()
- settings_overrides = {'initial_header_level': 2}
- from docutils import core
- document = core.publish_doctree(
- source=input_data,
- source_path=filepath,
- settings_overrides=settings_overrides,
- )
- qa_pairs = []
- current_section = None
- current_question = ""
- current_answer = ""
- for node in document.traverse():
- if node.__class__.__name__ == 'section':
- current_section = ""
- elif current_section is not None:
- if node.__class__.__name__ == 'Text':
- if node.astext()[-1] == "?":
- if current_question:
- qa_pairs.append((current_question, current_answer))
- current_question = node.astext()
- current_answer = ""
- else:
- current_answer += node.astext()
- if current_answer:
- qa_pairs.append((current_question, current_answer))
- return {k: v for k, v in qa_pairs}
-
-
-def test_scrape_dai_docs():
- home = os.path.expanduser('~')
- file = os.path.join(home, 'h2oai/docs/faq.rst')
- qa_pairs = parse_rst_file(file)
- prompt_type = 'human_bot'
- from prompter import prompt_types
- assert prompt_type in prompt_types
- save_thing = [{"instruction": k, "output": v, 'prompt_type': prompt_type} for k, v in qa_pairs.items()]
- output_file = "dai_faq.json"
- with open(output_file, "wt") as f:
- f.write(json.dumps(save_thing, indent=2))
-
-
-def test_scrape_dai_docs_all():
- """
- pytest create_data.py::test_scrape_dai_docs_all
- """
- import glob
- import nltk
- nltk.download('punkt')
- dd = {}
- np.random.seed(1234)
- home = os.path.expanduser('~')
- files = list(glob.glob(os.path.join(home, "h2oai/docs/**/*rst")))
- np.random.shuffle(files)
- val_count = int(0.05 * len(files))
- train_files = files[val_count:]
- valid_files = files[:val_count]
- things = [
- ("dai_docs.train.json", train_files),
- ("dai_docs.valid.json", valid_files)
- ]
- for LEN in [100, 200, 500]:
- for output_file, ff in things:
- if output_file not in dd:
- dd[output_file] = []
- for f in ff:
- with open(f) as input:
- blob = input.read()
- blob = blob.replace("~~", "")
- blob = blob.replace("==", "")
- blob = blob.replace("''", "")
- blob = blob.replace("--", "")
- blob = blob.replace("**", "")
- dd[output_file].extend(get_sentences(blob, length=LEN))
- for output_file, _ in things:
- save_thing = [{"output": k.strip(), 'prompt_type': 'plain'} for k in dd[output_file]]
- with open(output_file, "wt") as f:
- f.write(json.dumps(save_thing, indent=2))
-
-
-def get_sentences(blob, length):
- """
- break up input text into sentences and then output a list of sentence chunks, each about `length` characters long
- :param blob:
- :param length:
- :return:
- """
- import nltk
- nltk.download('punkt')
- from nltk.tokenize import sent_tokenize
- sentences = sent_tokenize(blob)
- my_sentences = []
- my_string = ""
- for sentence in sentences:
- if len(my_string) + len(sentence) <= length:
- if my_string:
- my_string += " " + sentence
- else:
- my_string = sentence
- else:
- my_sentences.append(my_string)
- my_string = ""
- return my_sentences or [my_string]
-
-
-def setup_dai_docs(path=None, dst="working_dir_docs", from_hf=False):
- """
- Only supported if one has access to the source code, or to an HF token for HF Spaces with from_hf=True
- :param path:
- :param dst:
- :param from_hf:
- :return:
- """
-
- home = os.path.expanduser('~')
-
- if from_hf:
- # assumes
- from huggingface_hub import hf_hub_download
- # True for case when locally already logged in with correct token, so don't have to set key
- token = os.getenv('HUGGING_FACE_HUB_TOKEN', True)
- path_to_zip_file = hf_hub_download('h2oai/dai_docs', 'dai_docs.zip', token=token, repo_type='dataset')
- path = 'h2oai'
- import zipfile
- with zipfile.ZipFile(path_to_zip_file, 'r') as zip_ref:
- zip_ref.extractall(path)
- path = os.path.join(path, 'docs/**/*')
-
- if path is None:
- if os.path.isdir(os.path.join(home, 'h2oai')):
- path = os.path.join(home, "h2oai/docs/**/*")
- else:
- assert os.path.isdir(os.path.join(home, 'h2oai.superclean')), '%s does not exist' % path
- path = os.path.join(home, "h2oai.superclean/docs/**/*")
- import glob
- files = list(glob.glob(path, recursive=True))
-
- # pandoc can't find include files
-
- remove(dst)
- os.makedirs(dst)
-
- # copy full tree, for absolute paths in rst
- for fil in files:
- if os.path.isfile(fil):
- shutil.copy(fil, dst)
-
- # hack for relative path
- scorers_dir = os.path.join(dst, 'scorers')
- makedirs(scorers_dir)
- for fil in glob.glob(os.path.join(dst, '*.frag')):
- shutil.copy(fil, scorers_dir)
-
- return dst
-
-
-def rst_to_outputs(files, min_len=30, max_len=2048 // 2 - 30):
- # account for sequence length (context window) including prompt and input and output
-
- # os.system('pandoc -f rst -t plain ./expert_settings/nlp_settings.rst')
- import pypandoc
- basedir = os.path.abspath(os.getcwd())
-
- outputs = []
- for fil in files:
- os.chdir(basedir)
- os.chdir(os.path.dirname(fil))
- fil = os.path.basename(fil)
- print("Processing %s" % fil, flush=True)
- # out_format can be one of: asciidoc, asciidoctor, beamer, biblatex, bibtex, commonmark, commonmark_x,
- # context, csljson, docbook, docbook4, docbook5, docx, dokuwiki,
- # dzslides, epub, epub2, epub3, fb2, gfm, haddock, html, html4, html5, icml,
- # ipynb, jats, jats_archiving, jats_articleauthoring, jats_publishing, jira,
- # json, latex, man,
- # markdown, markdown_github, markdown_mmd, markdown_phpextra, markdown_strict,
- # mediawiki, ms, muse, native, odt, opendocument, opml, org, pdf, plain, pptx,
- # revealjs, rst, rtf, s5, slideous, slidy, tei, texinfo, textile, xwiki, zimwiki
- out_format = 'plain'
- # avoid extra new lines injected into text
- extra_args = ['--wrap=preserve', '--resource path="%s" % dst']
-
- plain_list = []
- try:
- # valid for expert settings
- input_rst = pypandoc.convert_file(fil, 'rst')
- input_list = input_rst.split('\n``')
- for input_subrst in input_list:
- input_plain = pypandoc.convert_text(input_subrst, format='rst', to='plain')
- plain_list.append([input_plain, fil])
- except Exception as e:
- print("file exception: %s %s" % (fil, str(e)), flush=True)
-
- if not plain_list:
- # if failed to process as pieces of rst, then
- output = pypandoc.convert_file(fil, out_format, extra_args=extra_args, format='rst')
- outputs1 = get_sentences(output, length=max_len)
- for oi, output in enumerate(outputs1):
- output = output.replace('\n\n', '\n')
- plain_list.append([output, fil])
- outputs.extend(plain_list)
-
- # report:
- # [print(len(x)) for x in outputs]
-
- # deal with blocks longer than context size (sequence length) of 2048
- new_outputs = []
- num_truncated = 0
- num_orig = len(outputs)
- for output, fil in outputs:
- if len(output) < max_len:
- new_outputs.append([output, fil])
- continue
- outputs1 = get_sentences(output, length=max_len)
- for oi, output1 in enumerate(outputs1):
- output1 = output1.replace('\n\n', '\n')
- new_outputs.append([output1, fil])
- num_truncated += 1
- print('num_orig: %s num_truncated: %s' % (num_orig, num_truncated), flush=True)
-
- new_outputs = [[k.strip(), fil] for k, fil in new_outputs if len(k.strip()) > min_len]
-
- return new_outputs
-
-
-def test_scrape_dai_docs_all_pandoc():
- """
- pytest -s -v create_data.py::test_scrape_dai_docs_all_pandoc
- :return:
- """
-
- dst = setup_dai_docs()
-
- import glob
- files = list(glob.glob(os.path.join(dst, '*rst'), recursive=True))
-
- basedir = os.path.abspath(os.getcwd())
- new_outputs = rst_to_outputs(files)
- os.chdir(basedir)
-
- remove(dst)
- save_thing = [{"output": k.strip(), 'prompt_type': 'plain'} for k in new_outputs]
- output_file = "dai_docs.train_cleaned.json"
- with open(output_file, "wt") as f:
- f.write(json.dumps(save_thing, indent=2))
-
-
-def test_config_to_json():
- """
- Needs to run from Driverless AI source directory.
- E.g. (base) jon@gpu:~/h2oai$ pytest -s -v /data/jon/h2ogpt/create_data.py::test_config_to_json ; cp config.json /data/jon/h2ogpt/
- :return:
- """
- try:
- # Arrange
- import json
- from h2oaicore.systemutils import config
- toml_list = []
- for k, v in config.get_meta_dict().items():
- title = (v.title + ": ") if v.title else ''
- comment = v.comment or ''
- if not (title or comment):
- continue
- toml_list.extend(
- [
- {
- 'prompt_type': 'plain',
- 'instruction': f"<human>: What does {k} do?\n<bot>: {k.replace('_', ' ')} config.toml: {comment or title}\n<human>:".replace(
- "\n", ""),
- },
- {
- 'prompt_type': 'plain',
- 'instruction': f"<human>: Explain {k}.\n<bot>: {k.replace('_', ' ')} config.toml: {comment or title}\n<human>:".replace(
- "\n", ""),
- },
- {
- 'prompt_type': 'plain',
- 'instruction': f"<human>: How can I do this: {title}.\n<bot>: Set the {k.replace('_', ' ')} config.toml\n<human>:".replace(
- "\n", ""),
- } if title and comment else None,
- {
- 'prompt_type': 'human_bot',
- 'instruction': f'Explain the following expert setting for Driverless AI',
- 'input': f"{k}",
- 'output': f"{k.replace('_', ' ')} config.toml: {comment or title}".replace("\n", ""),
- },
- {
- 'prompt_type': 'human_bot',
- 'instruction': f'Explain the following expert setting for Driverless AI',
- 'input': f"{k}",
- 'output': f"{k.replace('_', ' ')} config.toml: {title}{comment}".replace("\n", ""),
- },
- {
- 'prompt_type': 'human_bot',
- 'instruction': f'Explain the following expert setting for Driverless AI',
- 'input': f"{k.replace('_', ' ')}",
- 'output': f"{k.replace('_', ' ')} config.toml: {title}{comment}".replace("\n", ""),
- },
- {
- 'prompt_type': 'human_bot',
- 'instruction': f'Explain the following expert setting for Driverless AI',
- 'input': f"{title}",
- 'output': f"{k.replace('_', ' ')} config.toml: {title}{comment}".replace("\n", ""),
- },
- {
- 'prompt_type': 'human_bot',
- 'instruction': f'Provide a short explanation of the expert setting {k}',
- 'output': f"{k.replace('_', ' ')} config.toml: {comment or title}".replace("\n", ""),
- },
- {
- 'prompt_type': 'human_bot',
- 'instruction': f'Provide a detailed explanation of the expert setting {k}',
- 'output': f"{k.replace('_', ' ')} config.toml: {title}{comment}".replace("\n", ""),
- },
- ]
- )
- toml_list = [x for x in toml_list if x]
- with open("config.json", "wt") as f:
- f.write(json.dumps(toml_list, indent=2))
- except Exception as e:
- print("Exception: %s" % str(e), flush=True)
-
-
-def copy_tree(src, dst, follow_symlink=False):
- makedirs(dst, exist_ok=True)
- for (path, dirs, files) in os.walk(src, followlinks=follow_symlink):
- new_path = path.replace(src, dst)
- makedirs(new_path, exist_ok=True)
- for file in files:
- filename = os.path.join(path, file)
- new_filename = os.path.join(new_path, file)
- # print("%s -> %s" % (filename, new_filename))
- try:
- atomic_copy(filename, new_filename)
- except FileNotFoundError:
- pass
-
-
-def atomic_move(src, dst):
- try:
- shutil.move(src, dst)
- except (shutil.Error, FileExistsError):
- pass
- remove(src)
-
-
-def atomic_copy(src=None, dst=None, with_permissions=True):
- if os.path.isfile(dst):
- return
- import uuid
- my_uuid = uuid.uuid4()
- dst_tmp = dst + str(my_uuid)
- makedirs(os.path.dirname(dst), exist_ok=True)
- if with_permissions:
- shutil.copy(src, dst_tmp)
- else:
- shutil.copyfile(src, dst_tmp)
- atomic_move(dst_tmp, dst)
- remove(dst_tmp)
-
-
-def makedirs(path, exist_ok=True):
- """
- Avoid some inefficiency in os.makedirs()
- :param path:
- :param exist_ok:
- :return:
- """
- if os.path.isdir(path) and os.path.exists(path):
- assert exist_ok, "Path already exists"
- return path
- os.makedirs(path, exist_ok=exist_ok)
-
-
-## Download from https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/ShareGPT_unfiltered_cleaned_split.json
-## Turn into simple instruct prompt type. No context/previous conversations.
-def test_prep_instruct_vicuna():
- from datasets import load_dataset
- filename = 'ShareGPT_unfiltered_cleaned_split.json'
- if not os.path.exists(filename):
- os.system(
- 'wget https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/%s' % filename)
- data = load_dataset("json", data_files={"train": filename})["train"]
- training_rows = []
- for i in range(data.num_rows):
- conversations = data[i]['conversations']
- assert isinstance(conversations, list), conversations
- convo = ""
- for j, conv in enumerate(conversations):
- # Get ready for generate.py prompt_type=human_bot
- # But train with prompt_type=plain
- if conv['from'] == 'human':
- FROM = '<human>: '
- elif conv['from'] == 'gpt':
- FROM = '<bot>: '
- convo += f"{FROM}" + conv['value'] + "\n"
- if convo:
- training_rows.append(dict(input=convo))
- with open(filename + ".generate_human_bot.train_plain.json", "wt") as f:
- f.write(json.dumps(training_rows, indent=2))
-
-
-POSTFIX = ".generate_human_bot.train_plain.json"
-
-# https://bair.berkeley.edu/blog/2023/04/03/koala/
-OIG_DATASETS = [
- "unified_chip2.jsonl",
- "unified_grade_school_math_instructions.jsonl",
- "unified_poetry_2_song.jsonl",
- "unified_plot_screenplay_books_dialog.jsonl",
-]
-
-# hub issue: https://huggingface.co/datasets/laion/OIG/discussions/4
-ALL_OIG_DATASETS = ['unified_abstract_infill.jsonl',
- 'unified_basic.jsonl',
- 'unified_canadian_parliament.jsonl',
- 'unified_chip2.jsonl',
- 'unified_conv_finqa.jsonl',
- 'unified_cuad.jsonl',
- 'unified_essays.jsonl',
- 'unified_flan.jsonl.gz',
- 'unified_grade_school_math_instructions.jsonl',
- 'unified_hc3_human.jsonl',
- 'unified_image_prompts_instructions.jsonl',
- 'unified_joke_explanations.jsonl',
- 'unified_mathqa_flanv2_kojma_cot.jsonl',
- 'unified_merged_code_xp3.jsonl',
- 'unified_multi_news.jsonl',
- 'unified_multi_sum.jsonl',
- 'unified_ni.jsonl.gz',
- 'unified_nq.jsonl',
- 'unified_openai_summarize_tldr.jsonl',
- 'unified_oscar_en_sample_dialog.jsonl',
- 'unified_p3.jsonl.gz',
- 'unified_plot_screenplay_books_dialog.jsonl',
- 'unified_poetry_2_song.jsonl',
- 'unified_poetry_instructions.jsonl',
- 'unified_rallio_safety_and_prosocial.jsonl',
- 'unified_rallio_soda_upgraded_2048.jsonl',
- 'unified_soda_dialog.jsonl',
- 'unified_sqlv1.jsonl',
- 'unified_sqlv2.jsonl',
- 'unified_squad_v2.jsonl',
- 'unified_squad_v2_more_neg.jsonl',
- 'unified_ul2_plus_oscar_en_sample_dialog.jsonl',
- 'unified_unifiedskg_instructions.jsonl',
- 'unified_unnatural_instructions.jsonl',
- 'unified_xp3_sample.jsonl']
-
-useful_oig_files = ['unified_rallio_safety_and_prosocial.jsonl.parquet',
- 'unified_chip2.jsonl.parquet',
- 'unified_cuad.jsonl.parquet',
- 'unified_essays.jsonl.parquet',
- 'unified_flan.jsonl.gz.parquet',
- 'unified_grade_school_math_instructions.jsonl.parquet',
- 'unified_hc3_human.jsonl.parquet',
- 'unified_mathqa_flanv2_kojma_cot.jsonl.parquet',
- 'unified_merged_code_xp3.jsonl.parquet',
- 'unified_multi_news.jsonl.parquet',
- # 'unified_multi_sum.jsonl.parquet'
- 'unified_ni.jsonl.gz.parquet',
- 'unified_openai_summarize_tldr.jsonl.parquet',
- # 'unified_oscar_en_sample_dialog.jsonl.parquet', # create text containing these N words, not specific
- 'unified_plot_screenplay_books_dialog.jsonl.parquet',
- 'unified_soda_dialog.jsonl.parquet',
- 'unified_unnatural_instructions.jsonl.parquet',
- ]
-
-
-@pytest.mark.parametrize("filename", OIG_DATASETS)
-def test_get_small_sample_oig_data(filename):
- if not os.path.exists(filename):
- os.system('wget https://huggingface.co/datasets/laion/OIG/resolve/main/%s' % filename)
- import json
- rows = []
- with open(filename, "r") as f:
- for line in f.readlines():
- row = json.loads(line)
- rows.append(dict(input=row["text"]))
- with open(filename + POSTFIX, "w") as f:
- f.write(json.dumps(rows, indent=2))
-
-
-@pytest.mark.parametrize("filename", ALL_OIG_DATASETS)
-def test_download_useful_data_as_parquet(filename):
- dest_file = filename + '.parquet'
- if dest_file not in useful_oig_files:
- pytest.skip('file declared not useful')
- if not os.path.exists(filename):
- os.system('wget https://huggingface.co/datasets/laion/OIG/resolve/main/%s' % filename)
- if not os.path.exists(dest_file):
- df = pd.read_json(path_or_buf=filename, lines=True)
- df.to_parquet(dest_file, index=False)
-
-
-def test_merge_shuffle_small_sample_oig_data():
- np.random.seed(1234)
- rows = []
- for filename in OIG_DATASETS:
- with open(filename + POSTFIX, "r") as f:
- rows.extend(json.loads(f.read()))
- np.random.shuffle(rows)
- with open("merged_shuffled_OIG_%s.json" % hashlib.sha256(str(OIG_DATASETS).encode()).hexdigest()[:10], "w") as f:
- f.write(json.dumps(rows, indent=2))
-
-
-def test_join_jsons():
- files = ['config.json'] * 1 + \
- ['dai_docs.train_cleaned.json'] * 2 + \
- ['dai_faq.json'] * 3
- print(files)
- lst = []
- [lst.extend(json.load(open(fil, 'rt'))) for fil in files]
- print(len(lst))
- json.dump(lst, open("merged.json", "wt"), indent=2)
-
-
-@pytest.mark.parametrize("filename", ['Anthropic/hh-rlhf'])
-def test_make_rlhf_good_data(filename):
- from datasets import load_dataset
- rows = load_dataset(filename)["train"]["chosen"]
- new_rows = []
- for row in rows:
- if row[:2] == "\n\n":
- row = row[2:]
- row = row.replace("Human: ", "<human>: ")
- row = row.replace("Assistant: ", "<bot>: ")
- new_rows.append(dict(input=row))
- with open(filename.replace("/", "_") + POSTFIX, "w") as f:
- f.write(json.dumps(new_rows, indent=2))
-
-
-def test_show_prompts():
- files = ['config.json'] * 1 + \
- ['dai_docs.train_cleaned.json'] * 1 + \
- ['dai_faq.json'] * 1
- file_points = [json.load(open(fil, 'rt')) for fil in files]
- from prompter import generate_prompt
- for data_points in file_points:
- for data_point in data_points:
- print(generate_prompt(data_point, 'plain', '', False, False, False)[0])
-
-
-def test_get_open_datasets():
- # HF changed things so don't get raw list of all datasets, so not have to filter, but can't do negative filter
- open_tags = ['license:Apache License 2.0',
- 'license:mit',
- 'license:apache',
- 'license:apache2',
- 'license:apache-2.0',
- 'license:bsd',
- 'license:bsd-2-clause',
- 'license:bsd-3-clause',
- 'license:bsd-3-clause-clear',
- 'license:lgpl-2.1',
- 'license:lgpl-3.0',
- 'license:lgpl-lr',
- 'license:lgpl',
- 'license:openrail++',
- 'license:openrail',
- 'license:bigscience-bloom-rail-1.0',
- # 'license:agpl-3.0',
- 'license:other',
- 'license:unknown',
- # 'license:mpl-2.0', # ok, but would have to include original copyright, license, source, copies in distribution
- # Attribution required:
- 'license:odc-by',
- 'license:cc-by-4.0',
- 'license:cc-by-3.0',
- 'license:cc-by-2.0',
- 'license:cc-by-2.5',
- # 'license:cc-by-sa-4.0', # would require same license
- 'license:odbl',
- 'license:pddl',
- 'license:ms-pl',
- 'license:zlib',
- ]
- # bad license: cc-by-nc-4.0
-
- from huggingface_hub import list_datasets
- datasets = flatten_list([[x for x in list_datasets(filter=y)] for y in open_tags])
- datasets += [x for x in list_datasets(author='openai')]
- # check all:
- all_license_tags = set(flatten_list([[y for y in x.tags if 'license' in y] for x in datasets]))
- print(len(all_license_tags))
- open_datasets = [x for x in datasets if any([y in x.tags for y in open_tags]) or 'license:' not in str(x.tags)]
- print('open_datasets', len(open_datasets))
- all_task_tags = set(flatten_list([[y for y in x.tags if 'task' in y] for x in open_datasets]))
- print('all_task_tags', len(all_task_tags))
- excluded_tags = ['image', 'hate', 'tabular', 'table-', 'classification', 'retrieval',
- 'translation', 'identification', 'object', 'mask', 'to-text',
- 'face-detection', 'audio', 'voice', 'reinforcement', 'depth-est',
- 'forecasting', 'parsing', 'visual', 'speech', 'multiple-choice',
- 'slot-filling', 'irds/argsme', '-scoring', 'other', 'graph-ml',
- 'feature-extraction', 'keyword-spotting',
- 'coreference-resolution', 'segmentation',
- 'word-sense-disambiguation',
- 'lemmatization']
- task_tags = [x.replace('task_categories:', '').replace('task_ids:', '')
- for x in all_task_tags if not any([y in x for y in
- excluded_tags])]
- print('task_tags', len(task_tags))
- # str(x.tags) to catch any pattern match to anything in list
- open_tasked_datasets = [x for x in open_datasets if
- any([y in str([x for x in x.tags if 'task' in x]) for y in task_tags]) and
- not any([y in str([x for x in x.tags if 'task' in x]) for y in excluded_tags]) or
- 'task_categories' not in str(x.tags) and 'task_ids' not in str(x.tags)]
- open_tasked_datasets = [x for x in open_tasked_datasets if not x.disabled]
- open_tasked_datasets = [x for x in open_tasked_datasets if not x.gated]
- open_tasked_datasets = [x for x in open_tasked_datasets if not x.private]
- print('open_tasked_datasets', len(open_tasked_datasets))
- sizes = list(set(flatten_list([[(y, x.id) for y in x.tags if 'size' in y] for x in open_tasked_datasets])))
- languages = list(set(flatten_list([[(y, x.id) for y in x.tags if 'language:' in y] for x in open_tasked_datasets])))
- open_english_tasked_datasets = [x for x in open_tasked_datasets if
- 'language:' not in str(x.tags) or
- 'language:en' in str(x.tags)]
- small_open_english_tasked_datasets = [x for x in open_english_tasked_datasets if
- 'n<1K' in str(x.tags) or
- '1K<n<10K' in str(x.tags)]  # -> summarization?
- # load_dataset(open_tasked_datasets[0].id).data['train'].to_pandas()
- ids = [x.id for x in small_open_english_tasked_datasets]
-
- # sanity checks
- # https://bair.berkeley.edu/blog/2023/04/03/koala/
- assert 'alespalla/chatbot_instruction_prompts' in ids
- assert 'laion/OIG' in ids
- assert 'openai/webgpt_comparisons' in ids
- assert 'openai/summarize_from_feedback' in ids
- assert 'Anthropic/hh-rlhf' in ids
-
- # useful but not allowed for commercial purposes:
- # https://huggingface.co/datasets/squad
-
- print('open_english_tasked_datasets: ', ids, flush=True)
-
- exclude_ids = ['allenai/nllb', # translation only
- 'hf-internal-testing/fixtures_image_utils', # testing
- 'allenai/c4', # search-url
- 'agemagician/uniref50', # unknown
- 'huggingface-course/documentation-images', # images
- 'smilegate-ai/kor_unsmile', # korean
- 'MohamedRashad/ChatGPT-prompts', # ChatGPT/LearnGPT/https://www.emergentmind.com/
- 'humarin/chatgpt-paraphrases', # Paraphrase using ChatGPT
- 'Jeska/vaccinchat', # not useful
- 'alespalla/chatbot_instruction_prompts', # mixes alpaca
- 'allenai/prosocial-dialog',
- # already excluded, but wrongly included in other datasets that claim a more permissive license
- 'AlekseyKorshuk/persona-chat', # low quality
- 'bavard/personachat_truecased', # low quality
- 'adamlin/daily_dialog', # medium quality conversations
- 'adamlin/FewShotWoz', # low quality
- 'benjaminbeilharz/better_daily_dialog', # low quality
- 'benjaminbeilharz/daily_dialog_w_turn_templates', # low
- 'benjaminbeilharz/empathetic_dialogues_for_lm', # low
- 'GEM-submissions/GEM__bart_base_schema_guided_dialog__1645547915', # NA
- 'ia-bentebib/conv_ai_2_fr', # low fr
- 'ia-bentebib/daily_dialog_fr', # low fr
- 'ia-bentebib/dialog_re_fr', # low fr
- 'ia-bentebib/empathetic_dialogues_fr', # low fr
- 'roskoN/dailydialog', # low
- 'VadorMazer/skyrimdialogstest', # low
- 'bigbio/med_qa', # med specific Q/A
- 'biu-nlp/qa_srl2018', # low quality Q/A
- 'biu-nlp/qa_discourse', # low quality Q/A
- 'iarfmoose/qa_evaluator', # low quality Q/A
- 'jeopardy', # low quality Q/A -- no reasoning
- 'narrativeqa', # low quality Q/A
- 'nomic-ai/gpt4all_prompt_generations', # bad license
- 'nomic-ai/gpt4all_prompt_generations_with_p3', # bad license
- 'HuggingFaceH4/alpaca', # bad license
- 'tatsu-lab/alpaca', # ToS breaking
- 'yahma/alpaca-cleaned', # ToS breaking
- 'Hello-SimpleAI/HC3', # bad license
- 'glue', # no reasoning QA
- 'sahil2801/CodeAlpaca-20k', # bad license
- 'Short-Answer-Feedback/saf_communication_networks_english', # long Q, medium A
- ]
- small_open_english_tasked_datasets = [x for x in small_open_english_tasked_datasets if x.id not in exclude_ids]
- # some ids clearly speech related
- small_open_english_tasked_datasets = [x for x in small_open_english_tasked_datasets if 'speech' not in x.id]
- # HF testing
- small_open_english_tasked_datasets = [x for x in small_open_english_tasked_datasets if
- 'hf-internal-testing' not in x.id]
- small_open_english_tasked_datasets = [x for x in small_open_english_tasked_datasets if
- 'chinese' not in x.id]
-
- sorted_small_open_english_tasked_datasets = sorted([(x.downloads, x) for x in small_open_english_tasked_datasets],
- key=lambda x: x[0], reverse=True)
-
- # NOTES:
- # Run like pytest -s -v create_data.py::test_get_open_datasets &> getdata9.log
- # See what needs config passed and add:
- # grep 'load_dataset(' getdata9.log|grep -v data_id|less -S
- # grep "pip install" getdata9.log
- # NOTE: Some datasets have default config, but others are there. Don't know how to access them.
-
- """
- https://huggingface.co/datasets/wikihow/blob/main/wikihow.py
- https://github.com/mahnazkoupaee/WikiHow-Dataset
- https://ucsb.box.com/s/ap23l8gafpezf4tq3wapr6u8241zz358
- https://ucsb.app.box.com/s/ap23l8gafpezf4tq3wapr6u8241zz358
- """
-
- """
- # some ambiguous or non-commercial datasets
- https://github.com/PhoebusSi/alpaca-CoT
- """
-
- timeout = 3 * 60
- # laion/OIG takes longer
- for num_downloads, dataset in sorted_small_open_english_tasked_datasets:
- data_id = dataset.id
- func = do_one
- args = (data_id, num_downloads)
- kwargs = {}
- with ProcessPoolExecutor(max_workers=1) as executor:
- future = executor.submit(func, *args, **kwargs)
- try:
- future.result(timeout=timeout)
- except concurrent.futures.TimeoutError:
- print("\n\ndata_id %s timeout\n\n" % data_id, flush=True)
- for child in psutil.Process(os.getpid()).children(recursive=True):
- os.kill(child.pid, signal.SIGINT)
- os.kill(child.pid, signal.SIGTERM)
- os.kill(child.pid, signal.SIGKILL)
-
-
-def do_one(data_id, num_downloads):
- from datasets import load_dataset
- out_file = "data_%s.parquet" % str(data_id.replace('/', '_'))
- if os.path.isfile(out_file) and os.path.getsize(out_file) > 1024 ** 3:
- return
- try:
- print("Loading data_id %s num_downloads: %s" % (data_id, num_downloads), flush=True)
- avail_list = None
- try:
- data = load_dataset(data_id, 'foobar')
- except Exception as e:
- if 'Available: ' in str(e):
- avail_list = ast.literal_eval(str(e).split('Available:')[1].strip())
- else:
- avail_list = None
- if avail_list is None:
- avail_list = [None]
- print("%s avail_list: %s" % (data_id, avail_list), flush=True)
-
- for name in avail_list:
- out_file = "data_%s_%s.parquet" % (str(data_id.replace('/', '_')), str(name))
- if os.path.isfile(out_file):
- continue
- data = load_dataset(data_id, name)
- column_names_dict = data.column_names
- column_names = column_names_dict[list(column_names_dict.keys())[0]]
- print("Processing data_id %s num_downloads: %s columns: %s" % (data_id, num_downloads, column_names),
- flush=True)
- data_dict = data.data
- col_dict = data.num_columns
- first_col = list(col_dict.keys())[0]
- if 'train' in data_dict:
- df = data['train'].to_pandas()
- else:
- df = data[first_col].to_pandas()
- # csv has issues with escaping chars, even for datasets I know I want
- df.to_parquet(out_file, index=False)
- except Exception as e:
- t, v, tb = sys.exc_info()
- ex = ''.join(traceback.format_exception(t, v, tb))
- print("Exception: %s %s" % (data_id, ex), flush=True)
-
-
-def test_otherlic():
- from huggingface_hub import list_datasets
- lic = ['license:odc-by',
- 'license:cc-by-4.0',
- 'license:cc-by-3.0',
- 'license:cc-by-2.0',
- 'license:cc-by-2.5',
- 'license:cc-by-sa-4.0',
- 'license:odbl',
- 'license:pddl',
- 'license:ms-pl',
- 'license:zlib',
- ]
- datasets = flatten_list([[x for x in list_datasets(filter=y) if 'translation' not in str(x.tags)] for y in lic])
- print(len(datasets))
-
-
-# These useful datasets are determined based upon data sample, column types, and uniqueness compared to larger datasets like Pile
-# grep columns getdata13.log|grep -v "\['image'\]"|sort|uniq|grep -v tokens|grep -v "'image'"|grep -v embedding|grep dialog
-useful = ['Dahoas/instruct-human-assistant-prompt',
- 'Dahoas/first-instruct-human-assistant-prompt',
- 'knkarthick/dialogsum', # summary of conversation
- 'McGill-NLP/FaithDial', # medium quality
- 'Zaid/quac_expanded', # medium quality context + QA
- '0-hero/OIG-small-chip2', # medium
- 'alistvt/coqa-flat', # QA medium
- 'AnonymousSub/MedQuAD_47441_Question_Answer_Pairs', # QA medium
- 'Anthropic/hh-rlhf', # high quality # similar to Dahoas/full-hh-rlhf
- 'arjunth2001/online_privacy_qna', # good quality QA
- 'Dahoas/instruct_helpful_preferences', # medium quality instruct
- 'Dahoas/rl-prompt-dataset', # medium chat
- 'Dahoas/rm-static', # medium chat
- 'Dahoas/static-hh', # medium chat # HuggingFaceH4/self_instruct
- 'Dahoas/synthetic-instruct-gptj-pairwise', # medium chat
- 'eli5', # QA if prompt ELI5
- 'gsm8k', # QA (various)
- 'guanaco/guanaco', # prompt/response
- 'kastan/rlhf-qa-comparisons', # good QA
- 'kastan/rlhf-qa-conditional-generation-v2', # prompt answer
- 'OllieStanley/humaneval-mbpp-codegen-qa', # code QA, but started from words, so better than other code QA
- 'OllieStanley/humaneval-mbpp-testgen-qa', # code QA
- 'Graverman/Instruct-to-Code', # code QA
- 'openai/summarize_from_feedback', # summarize
- 'relbert/analogy_questions', # analogy QA
- 'yitingxie/rlhf-reward-datasets', # prompt, chosen, rejected.
- 'yizhongw/self_instruct', # instruct (super natural & instruct)
- 'HuggingFaceH4/asss', # QA, big A
- 'kastan/rlhf-qa-conditional-generation-v2', # QA
- 'cosmos_qa', # context QA
- 'vishal-burman/c4-faqs', # QA but not so much reasoning, but a lot of text
- 'squadshifts', # QA from context
- 'hotpot_qa', # QA from context
- 'adversarial_qa', # QA from context
- 'allenai/soda', # dialog -> narrative/summary
- 'squad_v2', # context QA
- 'squadshifts', # context QA
- 'dferndz/cSQuAD1', # context QA
- 'dferndz/cSQuAD2', # context QA
- 'din0s/msmarco-nlgen', # context QA
- 'domenicrosati/TruthfulQA', # common sense truthful QA -- trivia but good trivia
- 'hotpot_qa', # context, QA
- 'HuggingFaceH4/self-instruct-eval', # instruct QA, medium quality, some language reasoning
- 'kastan/EE_QA_for_RLHF', # context QA
- 'KK04/LogicInference_OA', # instruction logical QA
- 'lmqg/qa_squadshifts_synthetic', # context QA
- 'lmqg/qg_squad', # context QA
- 'lmqg/qg_squadshifts', # context QA
- 'lmqg/qg_subjqa', # context QA
- 'pszemraj/HC3-textgen-qa',
- # QA medium, has human responses -- humans tend to provide links instead of trying to answer
- 'pythonist/newdata', # long context, QA, brief A
- 'ropes', # long background, situation, question, A
- 'wikitablequestions', # table -> QA
- 'bigscience/p3', # context QA but short answers
- ]
-
-code_useful = ['0n1xus/codexglue',
- 'openai_humaneval',
- 'koutch/staqc',
- ]
-
-maybe_useful = ['AlekseyKorshuk/comedy-scripts',
- 'openbookqa', # hard to parse, low reasoning
- 'qed', # reasonable QA, but low reasoning
- 'selqa', # candidate answers
- 'HuggingFaceH4/instruction-pilot-outputs-filtered',
- 'GBaker/MedQA-USMLE-4-options', # medical QA with long questions
- 'npc-engine/light-batch-summarize-dialogue', # dialog summarize, kinda low specific quality
- ]
-
-summary_useful = ['austin/rheum_abstracts',
- 'CarperAI/openai_summarize_comparisons', # summarize chosen/rejected
- 'CarperAI/openai_summarize_tldr', # summarize QA
- 'ccdv/cnn_dailymail', # summarize news
- 'ccdv/govreport-summarization', # summarize high quality
- 'ccdv/pubmed-summarization', # summarize high quality
- 'duorc', # plot -> QA
- 'farleyknight/big_patent_5_percent', # desc -> abstract
- 'multi_news', # summary
- 'opinosis',
- 'SophieTr/reddit_clean',
- 'allenai/mup', # long text -> summary
- 'allenai/multi_lexsum', # long text -> summary
- 'big_patent',
- 'allenai/wcep_dense_max',
- 'awinml/costco_long_practice',
- 'GEM/xsum',
- 'ratishsp/newshead',
- 'RussianNLP/wikiomnia', # russian
- 'stacked-summaries/stacked-xsum-1024',
- ]
-
-math_useful = [
- 'competition_math'
-]
-
-skipped = ['c4', # maybe useful, used for flan, but skipped due to size
- ]
-
-"""
-To get training data from oig:
-pytest test_oig test_grade_final test_finalize_to_json
-"""
-
- human = '<human>:'
- bot = '<bot>:'
-
-
-def test_assemble_and_detox():
- import re
- from profanity_check import predict_prob
- df_list = []
- for data in useful_oig_files:
- print("Processing %s" % data, flush=True)
- df = pd.read_parquet(data)
- df = df.reset_index(drop=True)
- # chop up into human/bot interactions of no more than about 4kB (2 * max_len characters) per row
- text_list = df[['text']].values.ravel().tolist()
- new_text = []
- max_len = 2048 # uber cutoff
- MAX_LEN = 2048 // 2 - 30 # max len per question/answer
- for text in tqdm(text_list):
- human_starts = [m.start() for m in re.finditer('<human>: ', text)]
- if len(human_starts) == 1:
- human_starts = [0, len(text)] # always go into for loop below
- blurb = ''
- for i in range(len(human_starts) - 1):
- interaction = text[human_starts[i]: human_starts[i + 1]][:max_len]
- blurb += interaction
- if len(blurb) >= MAX_LEN:
- blurb = get_sentences(blurb, length=MAX_LEN)[0]
- new_text.append(blurb + "\n<human>:")
- blurb = ''
- if blurb:
- blurb = get_sentences(blurb, length=MAX_LEN)[0]
- new_text.append(blurb + "\n<human>:")
-
- if len(new_text) > len(text_list):
- print("Added %d new rows (before: %d)" % (len(new_text) - df.shape[0], df.shape[0]))
- df = pd.DataFrame({"text": new_text, "source": [data] * len(new_text)})
- df = df.drop_duplicates(keep='first')
- print(df['text'].apply(lambda x: len(x)).describe())
- assert df['text'].apply(lambda x: len(x)).max() <= 2 * max_len
-
- # faster than better_profanity, do early
- df['profanity'] = predict_prob(df['text'])
- before_rows = df.shape[0]
- df = df[df['profanity'] < 0.25]  # keep only rows with a low profanity probability
- after_rows = df.shape[0]
- print("Dropped %d rows out of %d due to alt-profanity-check" % (before_rows - after_rows, before_rows))
- df_list.append(df)
- print("Done processing %s -> %s rows" % (data, df.shape[0]), flush=True)
- print("So far have %d rows" % sum([len(x) for x in df_list]))
- df_final = pd.concat(df_list)
- df_final = df_final.sample(frac=1, random_state=1234).reset_index(drop=True)
- df_final.to_parquet('h2oGPT.cleaned.human_bot.shorter.parquet', index=False)
-
-
-def test_basic_cleaning():
- # from better_profanity import profanity
- # https://pypi.org/project/alt-profanity-check/
- from profanity_check import predict
- df_list = []
- for data in useful_oig_files:
- # for data in useful_oig_files[:5]:
- # for data in ['unified_openai_summarize_tldr.jsonl.parquet']:
- print("Processing %s" % data, flush=True)
- df = pd.read_parquet(data)
- df = df.reset_index(drop=True)
- # NOTE: Not correct if there are multiple human-bot interactions, but such dialogs are even more desirable
- # avg_chars = len(df['text'][0])/(df['text'][0].count(human)+df['text'][0].count(bot))
- df['avg_words'] = df['text'].apply(lambda x: x.count(' ') / (x.count(human) + x.count(bot)) / 2.0)
- df['avg_bot_words'] = df['text'].apply(lambda x: x.split(bot)[1].count(' ') / x.count(bot))
- # df['bad_words'] = df['text'].apply(lambda x: profanity.contains_profanity(x))
- # low_quality_patterns = ['Write the rest of this wikipedia article']
- res = predict(df['text'])
- df['bad_words'] = res
- df = df.reset_index(drop=True)
- df = df[df['bad_words'] == 0]
- df = df[['text', 'avg_words', 'avg_bot_words']]
- df = df.drop_duplicates(keep='first')
- print(df[df['avg_words'] == df['avg_words'].max()]['text'].values)
- median_words = np.median(df['avg_words'])
- min_words_per_entity = max(30, 0.8 * median_words)
- max_words_per_entity = 2048 # too hard to learn from for now
- df = df[df['avg_words'] > min_words_per_entity]
- df = df[df['avg_words'] < max_words_per_entity]
-
- min_words_per_entity = max(20, 0.5 * median_words) # bot should say stuff for now
- max_words_per_entity = 2048 # too hard to learn from for now
- df = df[df['avg_bot_words'] > min_words_per_entity]
- df = df[df['avg_bot_words'] < max_words_per_entity]
-
- df_list.append(df)
- print("Done processing %s -> %s rows" % (data, df.shape[0]), flush=True)
- df_final = pd.concat(df_list)
- df_final.to_parquet('h2oGPT.cleaned.human_bot.parquet', index=False)
-
-
-from joblib import Parallel, delayed, effective_n_jobs
-from sklearn.utils import gen_even_slices
-from sklearn.utils.validation import _num_samples
-
-
-def parallel_apply(df, func, n_jobs=-1, **kwargs):
- """ Pandas apply in parallel using joblib.
- Uses sklearn.utils to partition input evenly.
-
- Args:
- df: Pandas DataFrame, Series, or any other object that supports slicing and apply.
- func: Callable to apply
- n_jobs: Desired number of workers. Default value -1 means use all available cores.
- **kwargs: Any additional parameters will be supplied to the apply function
-
- Returns:
- Same as for normal Pandas DataFrame.apply()
-
- """
-
- if effective_n_jobs(n_jobs) == 1:
- return df.apply(func, **kwargs)
- else:
- ret = Parallel(n_jobs=n_jobs)(
- delayed(type(df).apply)(df[s], func, **kwargs)
- for s in gen_even_slices(_num_samples(df), effective_n_jobs(n_jobs)))
- return pd.concat(ret)
-
-
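- # Illustrative usage of parallel_apply (a sketch, not part of the original pipeline; it assumes
- # pandas is imported as pd at module level, as it is used elsewhere in this file). The Series is
- # split into even slices, each slice is applied in its own joblib worker, and the per-slice
- # results are concatenated back together in the original order:
- #   example_series = pd.Series(["alpha beta", "gamma", "delta epsilon zeta"])
- #   example_word_counts = parallel_apply(example_series, lambda s: len(s.split()), n_jobs=2)
- #   example_word_counts.tolist()  # -> [2, 1, 3]
-
-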
-def add_better_profanity_flag(df):
- from better_profanity import profanity
- df['better_profanity'] = parallel_apply(
- df['text'],
- lambda x: profanity.contains_profanity(x),
- n_jobs=-1,
- )
- return df
-
-
-def add_textstat_grade(df):
- import textstat
-
- def myfunc(x):
- return textstat.flesch_kincaid_grade(x) # simple grade
-
- if False:
- import dask.dataframe as dd
- # 40 seconds for 1000 rows, but have 1,787,799 rows
- ddata = dd.from_pandas(df, npartitions=120)
-
- df['flesch_grade'] = ddata['text'].apply(myfunc).compute()
- if True:
- # fast way
- df['flesch_grade'] = parallel_apply(df['text'], myfunc, n_jobs=-1)
- return df
-
-
-def add_deberta_grade(df):
- from transformers import AutoModelForSequenceClassification, AutoTokenizer
- import torch
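- # score each (question, answer) pair with the OpenAssistant DeBERTa reward model;
- # higher scores indicate better answers, stored below in df['grade_deberta']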
- reward_name = "OpenAssistant/reward-model-deberta-v3-large-v2"
- rank_model, tokenizer = AutoModelForSequenceClassification.from_pretrained(
- reward_name), AutoTokenizer.from_pretrained(reward_name)
- device = 'cuda' if torch.cuda.is_available() else 'cpu'
- rank_model.to(device)
-
- def get_question(x):
- return x.replace('<human>: ', '').split('<bot>:')[0]
-
- def get_answer(x):
- try:
- answer = x.split('<bot>: ')[1].split('<human>:')[0].replace('<bot>: ', '')
- except:
- answer = x.split('<bot>:')[1].split('<human>:')[0].replace('<bot>:', '')
- return answer
-
- df['question'] = parallel_apply(df['text'], get_question, n_jobs=-1)
- df['answer'] = parallel_apply(df['text'], get_answer, n_jobs=-1)
-
- from datasets import Dataset
- from transformers import pipeline
- from transformers.pipelines.pt_utils import KeyPairDataset
- import tqdm
-
- pipe = pipeline(
- "text-classification",
- model=reward_name,
- device="cuda:0" if torch.cuda.is_available() else "cpu"
- )
- start = 0
- batch_size = 64 * 16
- micro_batch = orig_micro_batch = 16
- end = 0
- import socket
- checkpoint = "grades.%s.pkl" % socket.gethostname()
- grades = []
- import pickle
- if os.path.exists(checkpoint):
- with open(checkpoint, "rb") as f:
- start, grades = pickle.loads(f.read())
- last_oom = 0
- while end < df.shape[0]:
- # manual batching to handle OOM more gracefully
- end = min(start + batch_size, df.shape[0])
- if start == end:
- break
- dataset = Dataset.from_pandas(df.iloc[start:end, :])
- try:
- grades.extend([
- x['score'] for x in tqdm.tqdm(
- pipe(KeyPairDataset(dataset, "question", "answer"), batch_size=micro_batch)
- )
- ])
- except torch.cuda.OutOfMemoryError:
- last_oom = start
- micro_batch = max(1, micro_batch // 2)
- print("OOM - retrying with micro_batch=%d" % micro_batch)
- continue
- if last_oom == start:
- micro_batch = orig_micro_batch
- print("Returning to micro_batch=%d" % micro_batch)
- assert len(grades) == end
- start = end
- with open(checkpoint, "wb") as f:
- f.write(pickle.dumps((end, grades)))
- print("%d/%d" % (end, df.shape[0]))
- df['grade_deberta'] = grades
- if os.path.exists(checkpoint):
- os.remove(checkpoint)
- return df
-
-
-def test_chop_by_lengths():
- file = "h2oGPT.cleaned.human_bot.shorter.parquet"
- df = pd.read_parquet(file).reset_index(drop=True)
- df = count_human_bot_lengths(df)
- df['rand'] = np.random.rand(df.shape[0])
- df['rand2'] = np.random.rand(df.shape[0])
- before_rows = df.shape[0]
- # throw away short human/bot responses with higher likelihood
- df = df[(df['len_human_mean'] > 20)] # never keep very short ones
- df = df[(df['len_human_mean'] > 30) | (df['rand'] < 0.2)]
- df = df[(df['len_human_mean'] > 50) | (df['rand'] < 0.5)]
- df = df[(df['len_human_max'] < 10000)] # drop super long (basically only human) ones
- df = df[(df['len_bot_mean'] > 20)] # never keep very short ones
- df = df[(df['len_bot_mean'] > 30) | (df['rand2'] < 0.2)]
- df = df[(df['len_bot_mean'] > 50) | (df['rand2'] < 0.5)]
- df = df[(df['len_bot_max'] < 10000)] # drop super long (only bot) ones
- assert df['text'].apply(lambda x: len(x)).max() < 20000
- df = df.drop(['rand', 'rand2'], axis=1)
- after_rows = df.shape[0]
- print("Chopped off %d out of %d rows due to length" % (before_rows - after_rows, before_rows))
- print(df.describe())
- df.to_parquet('h2oGPT.cleaned.chopped.human_bot.shorter.parquet', index=False)
-
-
-def count_human_bot_lengths(df, human=None, bot=None):
- import re
- len_human_min = []
- len_human_max = []
- len_human_mean = []
- len_bot_min = []
- len_bot_max = []
- len_bot_mean = []
- human = human or '<human>:'
- bot = bot or '<bot>:'
- for is_human in [True, False]:
- what = human if is_human else bot
- other = human if not is_human else bot
- for i in range(df.shape[0]):
- text = df.loc[i, 'text']
- assert isinstance(text, str)
- starts = [m.start() for m in re.finditer(what, text)]
- if len(starts) == 1:
- starts = [starts[0], len(text)] # always go into for loop below
- assert len(text)
- list_what = []
- for ii in range(len(starts) - 1):
- interaction = text[starts[ii]: starts[ii + 1]]
- if other in interaction:
- interaction = interaction[:interaction.find(other)]
- interaction = interaction.strip()
- list_what.append(interaction)
- if not list_what:
- list_what = [''] # handle corrupted data, very rare, leads to sizes 0
- if is_human:
- len_human_min.append(min([len(x) for x in list_what]))
- len_human_max.append(max([len(x) for x in list_what]))
- len_human_mean.append(np.mean([len(x) for x in list_what]))
- else:
- len_bot_min.append(min([len(x) for x in list_what]))
- len_bot_max.append(max([len(x) for x in list_what]))
- len_bot_mean.append(np.mean([len(x) for x in list_what]))
- df['len_human_min'] = len_human_min
- df['len_human_max'] = len_human_max
- df['len_human_mean'] = len_human_mean
- df['len_bot_min'] = len_bot_min
- df['len_bot_max'] = len_bot_max
- df['len_bot_mean'] = len_bot_mean
- np.random.seed(1234)
- pd.set_option('display.max_columns', None)
- print("Before chopping")
- print(df.describe())
- return df
-
-
-def test_grade():
- df = None
-
- file = "h2oGPT.cleaned.chopped.human_bot.shorter.parquet"
- output_file = "h2oGPT.cleaned.graded1.human_bot.shorter.parquet"
- if not os.path.exists(output_file):
- if df is None:
- df = pd.read_parquet(file).reset_index(drop=True)
- df = add_textstat_grade(df)
- min_grade = 10
- max_grade = 25
- df = df[df['flesch_grade'] >= min_grade]
- df = df[df['flesch_grade'] <= max_grade]
- print("After Flesch grade")
- print(df.describe())
- df.to_parquet(output_file, index=False)
-
- file = output_file
- output_file = "h2oGPT.cleaned.graded2.human_bot.shorter.parquet"
- if not os.path.exists(output_file):
- # better_profanity is slower than alt-profanity-check, so run it later, but before deberta grading, since that's slower still
- if df is None:
- df = pd.read_parquet(file).reset_index(drop=True)
- df = add_better_profanity_flag(df)
- before_rows = df.shape[0]
- df = df[df['better_profanity'] == 0]
- df = df.drop(['better_profanity'], axis=1)
- after_rows = df.shape[0]
- print("Dropped %d rows out of %d due to better_profanity" % (before_rows - after_rows, before_rows))
- print(df.describe())
- df.to_parquet(output_file, index=False)
-
- file = output_file
- output_file = 'h2oGPT.cleaned.graded3.human_bot.shorter.parquet'
- if not os.path.exists(output_file):
- if df is None:
- df = pd.read_parquet(file).reset_index(drop=True)
- df = add_deberta_grade(df)
- min_grade = 0.3
- max_grade = np.inf
- before_rows = df.shape[0]
- df = df[df['grade_deberta'] >= min_grade]
- df = df[df['grade_deberta'] <= max_grade]
- after_rows = df.shape[0]
- print("Dropped %d rows out of %d due to deberta grade" % (before_rows - after_rows, before_rows))
- print("After DeBERTa grade")
- print(df.describe())
- df.to_parquet(output_file, index=False)
-
- file = output_file
- output_file = 'h2oGPT.cleaned.graded.human_bot.shorter.parquet'
- if df is None:
- df = pd.read_parquet(file).reset_index(drop=True)
- df.to_parquet(output_file, index=False)
-
-
-@pytest.mark.parametrize(
- "fixup_personality, only_personality, deberta_grading",
- [
- # [False, False, False],
- # [True, True, False],
- [True, False, False],
- # [True, False, True],
- ]
-)
-@pytest.mark.parametrize("prompt_type", ["llama2"])
-def test_add_open_assistant(fixup_personality, only_personality, deberta_grading, prompt_type, save_json=True):
- """
- Flatten tree structure into one row per path from root to leaf
- Also turn into human_bot prompting format:
- <human>: question\n<bot>: answer <human>: question2\n<bot>: answer2 Etc.
- Also saves a .json locally as a side effect.
- Returns a list of dicts containing input, prompt_type and source.
- """
- from datasets import load_dataset
- data_file = "OpenAssistant/oasst1"
- ds = load_dataset(data_file)
- df = pd.concat([ds['train'].to_pandas(), ds['validation'].to_pandas()], axis=0)
- rows = {}
- message_ids = df['message_id'].values.tolist()
- message_tree_ids = df['message_tree_id'].values.tolist()
- parent_ids = df['parent_id'].values.tolist()
- texts = df['text'].values.tolist()
- roles = df['role'].values.tolist()
- deleteds = df['deleted'].values.tolist()
- for i in range(df.shape[0]):
- # collect all trees
- message_id = message_ids[i]
- message_tree_id = message_tree_ids[i]
- parent_id = parent_ids[i]
- text = texts[i]
- deleted = deleteds[i]
- if deleted:
- continue
- if fixup_personality:
- text = text.replace("Open Assistant", "h2oGPT")
- text = text.replace("Open-Assistant", "h2oGPT")
- text = text.replace("open-assistant", "h2oGPT")
- text = text.replace("OpenAssistant", "h2oGPT")
- text = text.replace("open assistant", "h2oGPT")
- text = text.replace("Open Assistand", "h2oGPT")
- text = text.replace("Open Assitant", "h2oGPT")
- text = text.replace("Open Assistent", "h2oGPT")
- text = text.replace("Open Assisstant", "h2oGPT")
- text = text.replace("Open Assitent", "h2oGPT")
- text = text.replace("Open Assitiant", "h2oGPT")
- text = text.replace("Open Assistiant", "h2oGPT")
- text = text.replace("Open Assitan ", "h2oGPT ")
- text = text.replace("Open Assistan ", "h2oGPT ")
- text = text.replace("Open Asistant", "h2oGPT")
- text = text.replace("Open Assiant", "h2oGPT")
- text = text.replace("Assistant", "h2oGPT")
- text = text.replace("LAION AI", "H2O.ai")
- text = text.replace("LAION-AI", "H2O.ai")
- text = text.replace("LAION,", "H2O.ai,")
- text = text.replace("LAION.ai", "H2O.ai")
- text = text.replace("LAION.", "H2O.ai.")
- text = text.replace("LAION", "H2O.ai")
-
- role = roles[i]
- if prompt_type == "llama2":
- new_data = ('[INST] ' if role == 'prompter' else ' [/INST] ') + text
- if parent_id and role == 'prompter':
- new_data = " " + new_data
- elif prompt_type == "human_bot":
- new_data = ('<human>: ' if role == 'prompter' else '<bot>: ') + text
- else:
- raise NotImplementedError("prompt_type not supported")
- entry = dict(message_id=message_id, parent_id=parent_id, text=new_data)
- if message_tree_id not in rows:
- rows[message_tree_id] = [entry]
- else:
- rows[message_tree_id].append(entry)
-
- all_rows = []
-
- for node_id in rows:
- # order responses in tree, based on message/parent relationship
- conversations = []
-
- list_msgs = rows[node_id]
- # find start
- while len(list_msgs):
- for i, leaf in enumerate(list_msgs):
- found = False
- parent_id = leaf['parent_id']
- if parent_id is None:
- # conversation starter
- conversations.append(leaf)
- found = True
- else:
- for conv in conversations:
- # find all conversations to add my message to
- if parent_id in conv['message_id'] and parent_id != conv['message_id'][-len(parent_id):]:
- # my message doesn't follow conversation
- continue
- if parent_id == conv['message_id'][-len(parent_id):]:
- # my message follows conversation, but fork first, so another follow-on message can do same
- conversations.append(conv.copy())
- if prompt_type == "llama2":
- conv['text'] += f"""{leaf['text']}"""
- elif prompt_type == "human_bot":
- conv['text'] += f"""
-{leaf['text']}
-"""
- else:
- raise NotImplementedError
- conv['message_id'] += leaf['message_id']
- found = True
- break
- if found:
- # my content was used, so nuke from list
- del list_msgs[i]
- break
-
- # now reduce down to final conversations, find the longest chains of message ids
- for i, conv in enumerate(conversations):
- for j, conv2 in enumerate(conversations):
- if i == j:
- continue
- if conv['message_id'] and conv2['message_id']:
- assert conv['message_id'] != conv2['message_id']
- # delete the shorter conversation, if one contains the other
- if conv['message_id'] in conv2['message_id']:
- conv['message_id'] = None
- elif conv2['message_id'] in conv['message_id']:
- conv2['message_id'] = None
- conversations = [c for c in conversations if c['message_id']]
- if only_personality:
- if prompt_type == "human_bot":
- all_rows.extend(
- [dict(input=c['text'] + "\n<human>:", output="", prompt_type='plain', source=data_file) for c in conversations if
- 'h2oGPT' in c['text']])
- elif prompt_type == "llama2":
- all_rows.extend(
- [dict(input=c['text'] +
- ("" if c['text'].rfind("[/INST]") > c['text'].rfind("[INST]") else " [/INST]"),
- output="", prompt_type='plain', source=data_file) for c in conversations if
- 'h2oGPT' in c['text']])
- else:
- raise NotImplementedError
- else:
- if prompt_type == "human_bot":
- all_rows.extend(
- [dict(input=c['text'] + "\n<human>:", output="", prompt_type='plain', source=data_file) for c in conversations
- if
- "What is H2O.ai" not in c['text']])
- elif prompt_type == "llama2":
- all_rows.extend(
- [dict(input=c['text'] +
- (" " if c['text'].rfind("[/INST]") > c['text'].rfind("[INST]") else " [/INST]"),
- output="", prompt_type='plain', source=data_file) for c in conversations if
- "What is H2O.ai" not in c['text']])
- else:
- raise NotImplementedError
-
- unhelpful = get_unhelpful_list()
- all_rows = [x for x in all_rows if not any(u in x['input'] for u in unhelpful)]
- personality = create_personality_data(prompt_type=prompt_type)
- all_rows.extend(personality * 10)
- np.random.seed(123)
- np.random.shuffle(all_rows)
- print(len(all_rows))
- if deberta_grading:
- df = pd.DataFrame(all_rows)
- df = df.rename(columns={'input': 'text'})
- df = add_deberta_grade(df)
- df = df.rename(columns={'text': 'input'})
- drop = True
- if drop:
- min_grade = 0.3
- max_grade = np.inf
- before_rows = df.shape[0]
- df = df[df['grade_deberta'] >= min_grade]
- df = df[df['grade_deberta'] <= max_grade]
- after_rows = df.shape[0]
- print("Dropped %d rows out of %d due to deberta grade" % (before_rows - after_rows, before_rows))
- print("After DeBERTa grade")
- print(df.describe())
- all_rows = []
- for i in range(df.shape[0]):
- all_rows.append(
- dict(
- input=df['input'].iloc[i],
- output=df['output'].iloc[i],
- source=df['source'].iloc[i],
- prompt_type=df['prompt_type'].iloc[i],
- grade_deberta=df['grade_deberta'].iloc[i],
- )
- )
- if save_json:
- data_file = data_file + \
- ("_h2ogpt" if fixup_personality else "") + \
- ("_only" if only_personality else "") + \
- ("_graded" if deberta_grading else "") + \
- ("_llama2_chat" if prompt_type == "llama2" else "")
- for i in range(len(all_rows)):
- all_rows[i]['id'] = i
- with open(data_file.lower().replace("/", "_") + ".json", "w") as f:
- f.write(json.dumps(all_rows, indent=2))
- return all_rows
-
-
-def test_finalize_to_json():
- df = pd.read_parquet('h2oGPT.cleaned.graded.human_bot.shorter.parquet')
- df = df.rename(columns={'text': 'input'})
-
- print("Number of high-quality human_bot interactions: %s" % df.shape[0], flush=True)
-
- print("Adding open assistant data")
- with open("openassistant_oasst1_h2ogpt_graded.json") as f:
- open_assistant = json.loads(f.read())
- df = pd.concat([df, pd.DataFrame(open_assistant)], axis=0)
-
- def final_clean(df):
- from better_profanity import profanity
- profanity.load_censor_words_from_file("data/censor_words.txt")
- df['profanity'] = parallel_apply(
- df['input'],
- lambda x: profanity.contains_profanity(x),
- n_jobs=-1,
- )
- return df[(df['profanity'] == 0)].reset_index(drop=True)
-
- print("Before cleaning: Number of final high-quality human_bot interactions: %s" % df.shape[0], flush=True)
- df = final_clean(df)
- print("After cleaning: Number of final high-quality human_bot interactions: %s" % df.shape[0], flush=True)
- print(df.describe())
- print(df.shape)
- row_list = []
- for i in range(df.shape[0]):
- row_list.append(
- dict(
- input=df.loc[i, 'input'],
- source=df.loc[i, 'source'],
- prompt_type='plain',
- )
- )
- np.random.seed(1234)
- np.random.shuffle(row_list)
- unhelpful = get_unhelpful_list()
- row_list = [x for x in row_list if not any(u in x['input'] for u in unhelpful)]
- for i in range(len(row_list)):
- row_list[i]['id'] = i
- row_list[i]['input'] = row_list[i]['input'].replace(" <human>:", "\n<human>:")
- with open('h2ogpt-oig-oasst1-instruct-cleaned-v3.json', "w") as f:
- f.write(json.dumps(row_list, indent=2))
-
-
-def create_personality_data(prompt_type="llama2"):
- questions = [
- "What's your name?",
- "What is your name?",
- "What are you?",
- "Who are you?",
- "Do you have a name?",
- "Who trained you?",
- "Who created you?",
- "Who made you?",
- ]
- answers = [
- "I'm h2oGPT, a large language model by H2O.ai.",
- "I'm h2oGPT, a large language model by H2O.ai, the visionary leader in democratizing AI.",
- "My name is h2oGPT. I'm a large language model by H2O.ai, the visionary leader in democratizing AI.",
- "My name is h2oGPT. I'm a large language model trained by H2O.ai.",
- "Hi! I'm h2oGPT, a large language model by H2O.ai.",
- "Hi! I'm h2oGPT, a large language model by H2O.ai, the visionary leader in democratizing AI.",
- ]
- help = [
- "",
- " How can I help you?",
- " How may I assist you?",
- " Nice to meet you.",
- ]
- import itertools
- rows = []
- for pair in itertools.product(questions, answers, help):
- rows.append(
- dict(input=f"{pair[0]}", output=f"{pair[1]}{pair[2]}", prompt_type=prompt_type, source="H2O.ai")
- )
- for q, a in [
- ("What is H2O.ai?", "H2O.ai is a technology company that aims to democratize AI and make it accessible to a broader audience by simplifying the process of creating and deploying machine learning models."),
- ("What is h2o.ai?", "H2O.ai is a technology company that aims to democratize AI and make it accessible to a broader audience by simplifying the process of creating and deploying machine learning models."),
- ("What is H2O?", "H2O.ai is a technology company that aims to democratize AI and make it accessible to a broader audience by simplifying the process of creating and deploying machine learning models."),
- ("Who is h2o.ai?", "H2O.ai is a technology company that aims to democratize AI and make it accessible to a broader audience by simplifying the process of creating and deploying machine learning models."),
- ("who is h2o.ai?", "H2O.ai is a technology company that aims to democratize AI and make it accessible to a broader audience by simplifying the process of creating and deploying machine learning models."),
- ("who is h2o?", "H2O.ai is a technology company that aims to democratize AI and make it accessible to a broader audience by simplifying the process of creating and deploying machine learning models."),
- ("what is H2O.ai?", "H2O.ai is the visionary leader in democratizing AI."),
- ("who is H2O.ai?", "H2O.ai is the visionary leader in democratizing AI."),
- ("who is H2O?", "H2O.ai is the visionary leader in democratizing AI."),
- ("Who is h20?", "H2O.ai is the visionary leader in democratizing AI."),
- ]:
- rows.append(dict(input=q, output=a, prompt_type=prompt_type, source='H2O.ai'))
- print(len(rows))
- with open("h2ogpt-personality.json", "w") as f:
- f.write(json.dumps(rows, indent=2))
- return rows
-
-
-def test_check_stats_data():
- filename = 'h2ogpt-oig-oasst1-instruct-cleaned-v3.json'
- df = pd.read_json(filename)
-
- # get word stats
- df['char_count'] = df['input'].apply(lambda x: len(x))
- import matplotlib.pyplot as plt
- plt.figure(figsize=(10, 10))
- plt.hist(df['char_count'], bins=100)
- chars_avg = np.mean(df['char_count'])
- chars_median = np.median(df['char_count'])
- plt.title("char_count avg: %s median: %s" % (chars_avg, chars_median))
- plt.savefig('chars_hist.png')
- plt.close()
-
- # get tokenize stats for random sample of 1000 rows
- from finetune import generate_and_tokenize_prompt
- from loaders import get_loaders, get_tokenizer
- from functools import partial
-
- llama_type = False
- tokenizer_base_model = base_model = 'h2oai/h2ogpt-oasst1-512-20b'
- model_loader, tokenizer_loader, conditional_type = (
- get_loaders(model_name=base_model, reward_type=False, llama_type=llama_type))
- local_files_only = False
- resume_download = True
- use_auth_token = False
- tokenizer = get_tokenizer(tokenizer_loader, tokenizer_base_model, local_files_only, resume_download, use_auth_token)
- prompt_type = 'plain' # trained with data already in human bot form
- train_on_inputs = True
- add_eos_token = False
- cutoff_len = 512 # can choose 2048
- generate_and_tokenize_prompt_fun = partial(generate_and_tokenize_prompt, prompt_type=prompt_type,
- train_on_inputs=train_on_inputs, add_eos_token=add_eos_token,
- cutoff_len=cutoff_len, tokenizer=tokenizer)
- from datasets import load_dataset
- data = load_dataset("json", data_files={"train": filename})
- val_set_size = 0.90
- train_val = data["train"].train_test_split(
- test_size=val_set_size, shuffle=True, seed=42
- )
- train_data = train_val["train"]
- train_data = train_data.shuffle().map(generate_and_tokenize_prompt_fun, num_proc=os.cpu_count())
-
- df_tokens = pd.DataFrame([len(x) for x in train_data['input_ids']], columns=['token_count'])
-
- plt.figure(figsize=(10, 10))
- plt.hist(df_tokens['token_count'], bins=100)
- token_avg = np.mean(df_tokens['token_count'])
- token_median = np.median(df_tokens['token_count'])
- plt.title("token_count with cutoff=%s avg: %s median: %s" % (cutoff_len, token_avg, token_median))
- plt.savefig('token_hist_%s.png' % cutoff_len)
- plt.close()
-
-
-def get_unhelpful_list():
- # base versions
- unhelpful = ["I'm sorry, I didn't quite understand your question, could you please rephrase it?",
- "I'm sorry, but I don't understand your question. Could you please rephrase it?",
- "I'm sorry, I don't quite understand your question",
- "I'm sorry, I don't know",
- "I'm sorry, but I don't know",
- "I don't know anything",
- "I do not know",
- "I don't know",
- "I don't know how",
- "I do not know how",
- "Can you please explain what you mean",
- "please explain what you mean",
- "please explain",
- "I'm sorry, but I don't know how to tell a story. Can you please explain what you mean by",
- "I'm sorry but I don't understand what you mean",
- "I don't understand",
- "I don't have the ability",
- "I do not have the ability",
- "I do not have",
- "I am a language model,",
- "I am a large language model,",
- "I do not understand your question. Can you please try to make it clearer?",
- "I'm sorry, but as an AI language model",
- "I apologize, but I cannot rephrase text that I cannot understand. Your post is difficult to read and follow.",
- "I apologize, but I am not h2oGPT. I am a language model developed by H2O.ai. How may I help you?",
- "Sorry, but I am not an actual Linux shell, nor am I capable of emulating one. I am an open source chat assistant and would be glad t",
- "I apologize, but I cannot perform the task you have requested.",
- "I'm sorry, I cannot perform this task as I am an AI language model and do not have access",
- "I'm sorry, I'm not sure what you're asking for here.",
- "I'm not sure what you are asking",
- "You need to provide more context",
- ]
- # reduced versions, with redundant parts, just to give context for where they came from
- unhelpful += ["sorry, I didn't quite understand your question",
- "I didn't quite understand your question",
- "I didn't understand your question",
- "I did not understand your question",
- "I did not understand the question",
- "could you please rephrase"
- "could you rephrase"
- "I do not understand your question.",
- "I do not understand the question.",
- "I do not understand that question.",
- "Can you please try to make it clearer",
- "Can you try to make it clearer",
- "sorry, but as an AI language model",
- "as an AI language model",
- "I apologize, but I cannot",
- "I cannot rephrase text",
- "I cannot understand. Your post is difficult to read and follow."
- "Your post is difficult to read and follow."
- "I apologize, but I am",
- "Sorry, but I am not ",
- "nor am I capable",
- "I am not capable of",
- "I apologize, but I cannot perform the task you have requested",
- "I cannot perform the task",
- "I cannot complete the task",
- "I'm sorry",
- "I am sorry",
- "do not have access",
- "not sure what you're asking for",
- "not sure what you are asking for",
- "not sure what is being asked",
- "I'm not sure what you are asking",
- "not sure what you are asking",
- "You need to provide more context",
- "provide more context",
- ]
- unhelpful += ["As a large language model",
- "cannot provide any information",
- "As an artificial intelligence I do not have the capability",
- "As an artificial intelligence I don't have the capability",
- "As an artificial intelligence I can't",
- "As an artificial intelligence I cannot",
- "I am sorry but I do not understand",
- "Can you please explain",
- "(sorry couldn't resist)",
- "(sorry could not resist)",
- " :)",
- " ;)",
- " :-)",
- " ;-)",
- " lol ",
- "Thanks so much!!!",
- "Thank You :)!!!",
- "Please try not to repeat",
- "I am an AI language model",
- "I'm a AI assistant that",
- "I'm an AI assistant that",
- "I am an AI assistant that",
- "etc.",
- "etc.etc.",
- "etc. etc.",
- "etc etc",
- ]
- return unhelpful
-
-
-def test_check_unhelpful():
- # file = '/home/jon/Downloads/openassistant_oasst1_h2ogpt_graded.json'
- file = '/home/jon/Downloads/openassistant_oasst1_h2ogpt_grades.json'
- # file = 'h2ogpt-oig-oasst1-instruct-cleaned-v2.json'
-
- unhelpful = get_unhelpful_list()
- # data = json.load(open(file, 'rt'))
- df = pd.read_json(file)
-
- use_reward_score_threshold = False
- use_bleu_threshold = False
- use_sentence_sim = True
-
- from sacrebleu.metrics import BLEU
- bleu = BLEU()
- from nltk.translate.bleu_score import sentence_bleu
-
- def get_bleu(actual, expected_list):
- # return bleu.sentence_score(actual, expected_list).score
- return sentence_bleu(expected_list, actual)
-
- threshold = 0.0
- if use_reward_score_threshold:
- df = df[df['grade_deberta'] > threshold]
-
- # back to as if original json load
- data = df.to_dict(orient='records')
- bads = {}
- string_all = str(data)
- for sub in unhelpful:
- bads[sub] = string_all.count(sub)
- bads = {k: v for k, v in bads.items() if v > 0}
- import pprint
- pp = pprint.PrettyPrinter(indent=4)
- pp.pprint(bads)
-
- total_bads = sum(list(bads.values()))
- print('total_bads: %s' % total_bads, flush=True)
-
- # check just bot
- import re
- convs = [[x.strip() for x in re.split(r'%s|%s' % (human, bot), y['input']) if x.strip()] for y in data]
- humans = [[x for i, x in enumerate(y) if i % 2 == 0] for y in convs]
- bots = [[x for i, x in enumerate(y) if i % 2 == 1] for y in convs]
-
- # FIXME: apply back to json etc., just see for now
- bleu_threshold = 0.9
- if use_bleu_threshold:
- bots = [[x for x in y if get_bleu(x, unhelpful) < bleu_threshold] for y in tqdm(bots)]
-
- cosine_sim_threshold = 0.8
- if use_sentence_sim:
- # pip install sentence-transformers==2.2.2
- from sentence_transformers import SentenceTransformer
- # sent_model = 'bert-base-nli-mean-tokens'
- # sent_model = 'nli-distilroberta-base-v2'
- sent_model = 'all-MiniLM-L6-v2'
- model = SentenceTransformer(sent_model)
- sentence_embeddings = model.encode(unhelpful)
- from sklearn.metrics.pairwise import cosine_similarity
- bots = [x for x in tqdm(bots) if
- np.max(cosine_similarity(model.encode(x), sentence_embeddings)) < cosine_sim_threshold]
-
- bads_bots = {}
- string_all = str(bots)
- for sub in unhelpful:
- bads_bots[sub] = string_all.count(sub)
- bads_bots = {k: v for k, v in bads_bots.items() if v > 0}
- import pprint
- pp = pprint.PrettyPrinter(indent=4)
- pp.pprint(bads_bots)
-
- total_bads_bots = sum(list(bads_bots.values()))
- print('threshold: %g use_bleu_threshold: %g total_bads_bots: %s total_bots: %s total_humans: %s' % (
- threshold, use_bleu_threshold, total_bads_bots, len(bots), len(humans)), flush=True)
-
- # assert len(bads) == 0, bads
- assert len(bads_bots) == 0, bads_bots
-
-
-def test_fortune2000_personalized():
- row_list = []
- import glob
- if not os.path.isdir("wikitext"):
- raise RuntimeError("download https://github.com/h2oai/h2ogpt/files/11423008/wikitext.zip and unzip")
- for file in glob.glob("wikitext/*.txt"):
- with open(file, "r") as f:
- blob = f.read()
- N = 512 * 4
- row_list.extend([{'input': s, 'prompt_type': 'plain', 'source': "%s" % os.path.basename(file)}
- for s in get_sentences(blob, N) if s])
- personality = create_personality_data()
- import copy
- for i in range(10):
- row_list.extend(copy.deepcopy(personality))
- np.random.seed(123)
- np.random.shuffle(row_list)
- for i in range(len(row_list)):
- row_list[i]['id'] = i
- for i in range(len(row_list)):
- assert row_list[i]['id'] == i
- with open("h2ogpt-fortune2000-personalized.json", "w") as ff:
- ff.write(json.dumps(row_list, indent=2))
diff --git a/spaces/awacke1/BigCodeStackSearch1215/app.py b/spaces/awacke1/BigCodeStackSearch1215/app.py
deleted file mode 100644
index b213f394f6103a9b0262f808ace1de34085f76ce..0000000000000000000000000000000000000000
--- a/spaces/awacke1/BigCodeStackSearch1215/app.py
+++ /dev/null
@@ -1,79 +0,0 @@
-import gradio as gr
-from huggingface_hub import hf_hub_download
-import json
-import gzip
-
-
-usernames = {}
-
-
-filepath = hf_hub_download(repo_id="bigcode/the-stack-username-to-repo", filename="username_to_repo.json.gz", repo_type="dataset", revision="v1.1")
-with gzip.open(filepath, 'r') as f:
- usernames["v1.1"] = json.loads(f.read().decode('utf-8'))
-
-filepath = hf_hub_download(repo_id="bigcode/the-stack-username-to-repo", filename="username_to_repo.json.gz", repo_type="dataset")
-with gzip.open(filepath, 'r') as f:
- usernames["v1.0"] = json.loads(f.read().decode('utf-8'))
-
-text = """\
-
-**_The Stack is an open governance interface between the AI community and the open source community._**
-# Stack Search By Keyword
- URL: [The Stack](https://huggingface.co/datasets/bigcode/the-stack). This search engine matches your search term against repository names and returns up to 100 keyword matches, for example BeatSaber.
-""" + """\
-"""
-
-def check_username(username, version):
- output_md = ""
- if username in usernames[version] and len(usernames[version][username])>0:
- repos = usernames[version][username]
- repo_word = "repository" if len(repos)==1 else "repositories"
- output_md += f"**Yes**, there is code from **{len(repos)} {repo_word}** in The Stack:\n\n"
- for repo in repos:
- output_md += f"_{repo}_\n\n"
- else:
- output_md += "**No**, your code is not in The Stack."
- return output_md.strip()
-
-def check_keyword(username, version):
- output_md = ""
- maxhitcount = 100
- maxrepos = 70000000 #6M user entries * up to 18 per user
- currenthitcount=0
- currentrepos=0
- for repolist in usernames[version]:
- #print(repolist)
- repos = usernames[version][repolist]
- repo_word = "repository" if len(repos)==1 else "repositories"
- #output_md += f"**Yes**, there is code from **{len(repos)} {repo_word}** in The Stack:\n\n"
- for repo in repos:
- currentrepos += 1
- if currentrepos > maxrepos:
- output_md += f"**Found maximum repos**, Count: **{currentrepos}** in The Stack:\n\n"
- return output_md.strip()
- if username in repo:
- currenthitcount += 1
- output_md += f"_{repo}_\n\n"
- if currenthitcount > maxhitcount:
- output_md += f"**Found maximum hits**, Count: **{currenthitcount}** in The Stack:\n\n"
- return output_md.strip()
- else:
- output_md += "**Searched All Repos**, Above found in The Stack."
- return output_md.strip()
-
-with gr.Blocks() as demo:
- with gr.Row():
- _, column_2, _ = gr.Column(scale=1), gr.Column(scale=6), gr.Column(scale=1)
- with column_2:
- gr.Markdown(text)
- version = gr.Dropdown(["v1.1", "v1.0"], label="The Stack version:", value="v1.1")
- username = gr.Text("", label="Keyword to match against repos e.g. BeatSaber")
- check_button = gr.Button("Check!")
-
- repos = gr.Markdown()
-
- #check_button.click(check_username, [username, version], repos)
- check_button.click(check_keyword, [username, version], repos)
-
-
-demo.launch()
\ No newline at end of file
diff --git a/spaces/awacke1/CodeGen-Salesforce-codegen-350M-mono/README.md b/spaces/awacke1/CodeGen-Salesforce-codegen-350M-mono/README.md
deleted file mode 100644
index fc50633a09bd4cbd8f897da976871b9b840e0a8b..0000000000000000000000000000000000000000
--- a/spaces/awacke1/CodeGen-Salesforce-codegen-350M-mono/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: CodeGen Salesforce Codegen 350M Mono
-emoji: 💩
-colorFrom: blue
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.20.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/awacke1/Streamlit.Funny.Feedback.Upvote.Downvote/app.py b/spaces/awacke1/Streamlit.Funny.Feedback.Upvote.Downvote/app.py
deleted file mode 100644
index 69e55e45f9c08be414bbb4a96631ca9069f24d83..0000000000000000000000000000000000000000
--- a/spaces/awacke1/Streamlit.Funny.Feedback.Upvote.Downvote/app.py
+++ /dev/null
@@ -1,69 +0,0 @@
-import streamlit as st
-import pandas as pd
-import json
-import os
-
-def display_table(vote_data):
- st.title("The Great Debate: Vote on the Funniest Questions!")
-
- data = [
- (1, "😂", "How many cups of coffee do you need to function like a normal human being?", "[Wikipedia](https://en.wikipedia.org/wiki/Coffee)"),
- (2, "🤔", "If animals could talk, which species do you think would be the most annoying?", "[Wikipedia](https://en.wikipedia.org/wiki/Animal_communication)"),
- (3, "🤫", "What's the craziest conspiracy theory you've ever heard?", "[Wikipedia](https://en.wikipedia.org/wiki/Conspiracy_theory)"),
- (4, "🤣", "What's the worst pickup line you've ever heard or used?", "[Wikipedia](https://en.wikipedia.org/wiki/Pick-up_line)"),
- (5, "😜", "If you were a superhero, what would your superpower be?", "[Wikipedia](https://en.wikipedia.org/wiki/Superpower_(ability))"),
- (6, "🤯", "If you could time travel, what period in history would you go to and why?", "[Wikipedia](https://en.wikipedia.org/wiki/Time_travel)"),
- (7, "😝", "What's the weirdest thing you've ever eaten?", "[Wikipedia](https://en.wikipedia.org/wiki/List_of_delicacies)"),
- (8, "🤪", "What's the most embarrassing thing that's ever happened to you in public?", "[Wikipedia](https://en.wikipedia.org/wiki/Embarrassment)"),
- (9, "😈", "If you could be any movie villain, who would you choose and why?", "[Wikipedia](https://en.wikipedia.org/wiki/Villain)"),
- (10, "🙃", "What's the most useless talent you have?", "[Wikipedia](https://en.wikipedia.org/wiki/Talent_(human))"),
- ]
-
- for row in data:
- question_id = f"Question {row[0]}"
- emoji, title, description = row[1], row[2], row[3]
- upvotes, downvotes = count_votes(vote_data, question_id)
-
- col1, col2, col3, col4 = st.columns([1, 3, 1, 1])
-
- col1.write(emoji)
- col2.write(f"{title}\n{description}")
- col3.write(f"👍 {upvotes}")
- col4.write(f"👎 {downvotes}")
-
- upvote_button = col3.button(f"Upvote {question_id}")
- downvote_button = col4.button(f"Downvote {question_id}")
-
- if upvote_button:
- update_vote_log(question_id, 'upvote')
- st.experimental_rerun()
-
- if downvote_button:
- update_vote_log(question_id, 'downvote')
- st.experimental_rerun()
-
-def update_vote_log(term, vote_type):
- with open('vote.log.txt', 'a') as f:
- f.write(json.dumps({'term': term, 'vote': vote_type}) + '\n')
-
-def load_vote_log():
- vote_data = []
-
- if os.path.exists('vote.log.txt'):
- with open('vote.log.txt', 'r') as f:
- for line in f.readlines():
- vote_data.append(json.loads(line.strip()))
- return vote_data
-
-def count_votes(vote_data, term):
- upvotes = sum(1 for vote in vote_data if vote['term'] == term and vote['vote'] == 'upvote')
- downvotes = sum(1 for vote in vote_data if vote['term'] == term and vote['vote'] == 'downvote')
- return upvotes, downvotes
-
-def main():
- vote_data = load_vote_log()
-
- display_table(vote_data)
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/banana-projects/datasets-card-creator/tailwind.config.js b/spaces/banana-projects/datasets-card-creator/tailwind.config.js
deleted file mode 100644
index ab1f9834474d66cbcf449be19b004429d1762413..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/datasets-card-creator/tailwind.config.js
+++ /dev/null
@@ -1,838 +0,0 @@
-
-module.exports = {
- purge: ['./src/**/*.js', './public/index.html'],
- darkMode: false, // or 'media' or 'class'
- prefix: '',
- important: false,
- separator: ':',
- theme: {
- extend: {
- spacing: {
- '30': '7rem',
- '68': '17rem',
- '70': '17.5rem',
- '72': '18rem',
- '76': '19rem',
- '78': '19.5rem',
- '80': '20rem',
- '84': '21rem',
- '88': '22rem',
- '92': '23rem',
- '96': '24rem',
- '100': '25rem',
- '104': '26rem',
- '108': '27rem',
- '112': '28rem',
- '116': '29rem',
- '120': '30rem',
- '132': '33rem',
- '144': '36rem',
- '148': '37rem',
- '152': '38rem',
- '156': '39rem',
- '168': '42rem',
- '180': '45rem',
- '192': '48rem',
- '196': '49rem',
- '200': '50rem',
- },
- },
- maxWidth: {
- '1/4': '25%',
- '1/2': '50%',
- '3/4': '75%',
- },
- zIndex: {
- '-10': '-10',
- },
- spinner: (theme) => ({
- DEFAULT: {
- color: '#dae1e7', // color you want to make the spinner
- size: '1em', // size of the spinner (used for both width and height)
- border: '2px', // border-width of the spinner (shouldn't be bigger than half the spinner's size)
- speed: '500ms', // the speed at which the spinner should rotate
- },
- // md: {
- // color: theme('colors.red.500', 'red'),
- // size: '2em',
- // border: '2px',
- // speed: '500ms',
- // },
- }),
- height: {
- xxs: '50px',
- xs: '80px',
- sm: '150px',
- smm: '170px',
- md: '500px',
- lg: '600px',
- xl: '700px',
- },
- screens: {
- xs: '500px',
- sm: '640px',
- md: '768px',
- lg: '1024px',
- xl: '1280px',
- xxl: '1650px',
- },
- colors: {
- transparent: 'transparent',
- current: 'currentColor',
-
- black: '#000',
- white: '#fff',
-
- gray: {
- 100: '#f7fafc',
- 200: '#edf2f7',
- 300: '#e2e8f0',
- 400: '#cbd5e0',
- 500: '#a0aec0',
- 600: '#718096',
- 700: '#4a5568',
- 800: '#2d3748',
- 900: '#1a202c',
- },
- red: {
- 100: '#fff5f5',
- 200: '#fed7d7',
- 300: '#feb2b2',
- 400: '#fc8181',
- 500: '#f56565',
- 600: '#e53e3e',
- 700: '#c53030',
- 800: '#9b2c2c',
- 900: '#742a2a',
- },
- orange: {
- 100: '#fffaf0',
- 200: '#feebc8',
- 300: '#fbd38d',
- 400: '#f6ad55',
- 500: '#ed8936',
- 600: '#dd6b20',
- 700: '#c05621',
- 800: '#9c4221',
- 900: '#7b341e',
- },
- yellow: {
- 100: '#fffff0',
- 200: '#fefcbf',
- 300: '#faf089',
- 400: '#f6e05e',
- 500: '#ecc94b',
- 600: '#d69e2e',
- 700: '#b7791f',
- 800: '#975a16',
- 900: '#744210',
- },
- green: {
- 100: '#f0fff4',
- 200: '#c6f6d5',
- 300: '#9ae6b4',
- 400: '#68d391',
- 500: '#48bb78',
- 600: '#38a169',
- 700: '#2f855a',
- 800: '#276749',
- 900: '#22543d',
- },
- teal: {
- 100: '#e6fffa',
- 200: '#b2f5ea',
- 300: '#81e6d9',
- 400: '#4fd1c5',
- 500: '#38b2ac',
- 600: '#319795',
- 700: '#2c7a7b',
- 800: '#285e61',
- 900: '#234e52',
- },
- blue: {
- 100: '#ebf8ff',
- 200: '#bee3f8',
- 300: '#90cdf4',
- 400: '#63b3ed',
- 500: '#4299e1',
- 600: '#3182ce',
- 700: '#2b6cb0',
- 800: '#2c5282',
- 900: '#2a4365',
- },
- indigo: {
- 100: '#ebf4ff',
- 200: '#c3dafe',
- 300: '#a3bffa',
- 400: '#7f9cf5',
- 500: '#667eea',
- 600: '#5a67d8',
- 700: '#4c51bf',
- 800: '#434190',
- 900: '#3c366b',
- },
- purple: {
- 100: '#faf5ff',
- 200: '#e9d8fd',
- 300: '#d6bcfa',
- 400: '#b794f4',
- 500: '#9f7aea',
- 600: '#805ad5',
- 700: '#6b46c1',
- 800: '#553c9a',
- 900: '#44337a',
- },
- pink: {
- 100: '#fff5f7',
- 200: '#fed7e2',
- 300: '#fbb6ce',
- 400: '#f687b3',
- 500: '#ed64a6',
- 600: '#d53f8c',
- 700: '#b83280',
- 800: '#97266d',
- 900: '#702459',
- },
- },
- spacing: {
- px: '1px',
- '0': '0',
- '1': '0.25rem',
- '2': '0.5rem',
- '3': '0.75rem',
- '4': '1rem',
- '5': '1.25rem',
- '6': '1.5rem',
- '8': '2rem',
- '10': '2.5rem',
- '12': '3rem',
- '14': '3.5rem',
- '16': '4rem',
- '20': '5rem',
- '24': '6rem',
- '32': '8rem',
- '40': '10rem',
- '48': '12rem',
- '56': '14rem',
- '64': '16rem',
- '72': '18rem',
- '84': '21rem',
- '96': '24rem',
- },
- backgroundColor: theme => theme('colors'),
- backgroundPosition: {
- bottom: 'bottom',
- center: 'center',
- left: 'left',
- 'left-bottom': 'left bottom',
- 'left-top': 'left top',
- right: 'right',
- 'right-bottom': 'right bottom',
- 'right-top': 'right top',
- top: 'top',
- },
- backgroundSize: {
- auto: 'auto',
- cover: 'cover',
- contain: 'contain',
- },
- borderColor: theme => ({
- ...theme('colors'),
- DEFAULT: theme('colors.gray.300', 'currentColor'),
- }),
- borderRadius: {
- none: '0',
- sm: '0.125rem',
- DEFAULT: '0.25rem',
- md: '0.375rem',
- lg: '0.5rem',
- xl: '1rem',
- full: '9999px',
- },
- borderWidth: {
- DEFAULT: '1px',
- '0': '0',
- '1': '1px',
- '2': '2px',
- '4': '4px',
- '8': '8px',
- },
- boxShadow: {
- xs: '0 0 0 1px rgba(0, 0, 0, 0.05)',
- sm: '0 1px 2px 0 rgba(0, 0, 0, 0.05)',
- DEFAULT: '0 1px 3px 0 rgba(0, 0, 0, 0.1), 0 1px 2px 0 rgba(0, 0, 0, 0.06)',
- md: '0 4px 6px -1px rgba(0, 0, 0, 0.1), 0 2px 4px -1px rgba(0, 0, 0, 0.06)',
- lg: '0 10px 15px -3px rgba(0, 0, 0, 0.1), 0 4px 6px -2px rgba(0, 0, 0, 0.05)',
- xl: '0 20px 25px -5px rgba(0, 0, 0, 0.1), 0 10px 10px -5px rgba(0, 0, 0, 0.04)',
- '2xl': '0 25px 50px -12px rgba(0, 0, 0, 0.25)',
- '3xl': '0 30px 75px -20px rgba(0, 0, 0, 0.5)',
- inner: 'inset 0 2px 4px 0 rgba(0, 0, 0, 0.06)',
- outline: '0 0 0 3px rgba(66, 153, 225, 0.5)',
- none: 'none',
- },
- container: {
- center: true,
- },
- cursor: {
- auto: 'auto',
- DEFAULT: 'DEFAULT',
- pointer: 'pointer',
- wait: 'wait',
- text: 'text',
- move: 'move',
- 'not-allowed': 'not-allowed',
- },
- fill: {
- current: 'currentColor',
- },
- flex: {
- '1': '1 1 0%',
- auto: '1 1 auto',
- initial: '0 1 auto',
- none: 'none',
- },
- flexGrow: {
- '0': '0',
- DEFAULT: '1',
- },
- flexShrink: {
- '0': '0',
- DEFAULT: '1',
- },
- fontFamily: {
- sans: [
- 'system-ui',
- '-apple-system',
- 'BlinkMacSystemFont',
- '"Segoe UI"',
- 'Roboto',
- '"Helvetica Neue"',
- 'Arial',
- '"Noto Sans"',
- 'sans-serif',
- '"Apple Color Emoji"',
- '"Segoe UI Emoji"',
- '"Segoe UI Symbol"',
- '"Noto Color Emoji"',
- ],
- serif: ['Georgia', 'Cambria', '"Times New Roman"', 'Times', 'serif'],
- mono: ['Menlo', 'Monaco', 'Consolas', '"Liberation Mono"', '"Courier New"', 'monospace'],
- },
- fontSize: {
- xs: '0.75rem',
- sm: '0.875rem',
- md: '0.92rem',
- base: '1rem',
- lg: '1.125rem',
- xl: '1.25rem',
- '2xl': '1.5rem',
- '3xl': '1.875rem',
- '4xl': '2.25rem',
- '5xl': '3rem',
- '6xl': '4rem',
- },
- fontWeight: {
- hairline: '100',
- thin: '200',
- light: '300',
- normal: '400',
- medium: '500',
- semibold: '600',
- bold: '700',
- extrabold: '800',
- black: '900',
- },
- height: theme => ({
- auto: 'auto',
- ...theme('spacing'),
- full: '100%',
- screen: '100vh',
- }),
- inset: {
- '0': '0',
- auto: 'auto',
- },
- letterSpacing: {
- tighter: '-0.05em',
- tight: '-0.025em',
- normal: '0',
- wide: '0.025em',
- wider: '0.05em',
- widest: '0.1em',
- },
- lineHeight: {
- none: '1',
- tight: '1.25',
- snug: '1.375',
- normal: '1.5',
- relaxed: '1.625',
- loose: '2',
- '3': '.75rem',
- '4': '1rem',
- '5': '1.25rem',
- '6': '1.5rem',
- '7': '1.75rem',
- '8': '2rem',
- '9': '2.25rem',
- '10': '2.5rem',
- },
- listStyleType: {
- none: 'none',
- disc: 'disc',
- decimal: 'decimal',
- },
- margin: (theme, { negative }) => ({
- auto: 'auto',
- ...theme('spacing'),
- ...negative(theme('spacing')),
- }),
- maxHeight: {
- none: 'none',
- xxs: '10rem',
- xs: '20rem',
- sm: '24rem',
- md: '28rem',
- lg: '32rem',
- xl: '36rem',
- '2xl': '42rem',
- '3xl': '48rem',
- '4xl': '56rem',
- '5xl': '64rem',
- '55xl': '68rem',
- '6xl': '72rem',
- '65xl': '76rem',
- '7xl': '80rem',
- '75xl': '84rem',
- '8xl': '88rem',
- '9xl': '96rem',
- '10xl': '104rem',
- full: '100%',
- screen: '100vh',
- },
- maxWidth: (theme, { breakpoints }) => ({
- none: 'none',
- xs: '20rem',
- sm: '24rem',
- md: '28rem',
- lg: '32rem',
- xl: '36rem',
- '2xl': '42rem',
- '3xl': '48rem',
- '4xl': '56rem',
- '5xl': '64rem',
- '55xl': '68rem',
- '6xl': '72rem',
- '65xl': '76rem',
- '7xl': '80rem',
- '75xl': '84rem',
- '8xl': '88rem',
- '9xl': '96rem',
- '10xl': '104rem',
- full: '100%',
- ...breakpoints(theme('screens')),
- }),
- minHeight: {
- '0': '0',
- full: '100%',
- screen: '100vh',
- },
- minWidth: {
- '0': '0',
- xs: '20rem',
- sm: '24rem',
- md: '28rem',
- lg: '32rem',
- xl: '36rem',
- '2xl': '42rem',
- '3xl': '48rem',
- '4xl': '56rem',
- '5xl': '64rem',
- '55xl': '68rem',
- '6xl': '72rem',
- '65xl': '76rem',
- '7xl': '80rem',
- '75xl': '84rem',
- '8xl': '88rem',
- '9xl': '96rem',
- '10xl': '104rem',
- full: '100%',
- },
- objectPosition: {
- bottom: 'bottom',
- center: 'center',
- left: 'left',
- 'left-bottom': 'left bottom',
- 'left-top': 'left top',
- right: 'right',
- 'right-bottom': 'right bottom',
- 'right-top': 'right top',
- top: 'top',
- },
- opacity: {
- '0': '0',
- '25': '0.25',
- '50': '0.5',
- '75': '0.75',
- '100': '1',
- },
- order: {
- first: '-9999',
- last: '9999',
- none: '0',
- '1': '1',
- '2': '2',
- '3': '3',
- '4': '4',
- '5': '5',
- '6': '6',
- '7': '7',
- '8': '8',
- '9': '9',
- '10': '10',
- '11': '11',
- '12': '12',
- },
- padding: theme => theme('spacing'),
- placeholderColor: theme => theme('colors'),
- stroke: {
- current: 'currentColor',
- },
- strokeWidth: {
- '0': '0',
- '1': '1',
- '2': '2',
- },
- textColor: theme => theme('colors'),
- width: theme => ({
- auto: 'auto',
- ...theme('spacing'),
- '1/2': '50%',
- '1/3': '33.333333%',
- '2/3': '66.666667%',
- '1/4': '25%',
- '2/4': '50%',
- '3/4': '75%',
- '1/5': '20%',
- '2/5': '40%',
- '3/5': '60%',
- '4/5': '80%',
- '1/6': '16.666667%',
- '2/6': '33.333333%',
- '3/6': '50%',
- '4/6': '66.666667%',
- '5/6': '83.333333%',
- '1/12': '8.333333%',
- '2/12': '16.666667%',
- '3/12': '25%',
- '4/12': '33.333333%',
- '5/12': '41.666667%',
- '6/12': '50%',
- '7/12': '58.333333%',
- '8/12': '66.666667%',
- '9/12': '75%',
- '10/12': '83.333333%',
- '11/12': '91.666667%',
- full: '100%',
- screen: '100vw',
- }),
- zIndex: {
- auto: 'auto',
- '0': '0',
- '10': '10',
- '20': '20',
- '30': '30',
- '40': '40',
- '50': '50',
- },
- gap: theme => theme('spacing'),
- gridTemplateColumns: {
- none: 'none',
- '1': 'repeat(1, minmax(0, 1fr))',
- '2': 'repeat(2, minmax(0, 1fr))',
- '3': 'repeat(3, minmax(0, 1fr))',
- '4': 'repeat(4, minmax(0, 1fr))',
- '5': 'repeat(5, minmax(0, 1fr))',
- '6': 'repeat(6, minmax(0, 1fr))',
- '7': 'repeat(7, minmax(0, 1fr))',
- '8': 'repeat(8, minmax(0, 1fr))',
- '9': 'repeat(9, minmax(0, 1fr))',
- '10': 'repeat(10, minmax(0, 1fr))',
- '11': 'repeat(11, minmax(0, 1fr))',
- '12': 'repeat(12, minmax(0, 1fr))',
- },
- gridColumn: {
- auto: 'auto',
- 'span-1': 'span 1 / span 1',
- 'span-2': 'span 2 / span 2',
- 'span-3': 'span 3 / span 3',
- 'span-4': 'span 4 / span 4',
- 'span-5': 'span 5 / span 5',
- 'span-6': 'span 6 / span 6',
- 'span-7': 'span 7 / span 7',
- 'span-8': 'span 8 / span 8',
- 'span-9': 'span 9 / span 9',
- 'span-10': 'span 10 / span 10',
- 'span-11': 'span 11 / span 11',
- 'span-12': 'span 12 / span 12',
- },
- gridColumnStart: {
- auto: 'auto',
- '1': '1',
- '2': '2',
- '3': '3',
- '4': '4',
- '5': '5',
- '6': '6',
- '7': '7',
- '8': '8',
- '9': '9',
- '10': '10',
- '11': '11',
- '12': '12',
- '13': '13',
- },
- gridColumnEnd: {
- auto: 'auto',
- '1': '1',
- '2': '2',
- '3': '3',
- '4': '4',
- '5': '5',
- '6': '6',
- '7': '7',
- '8': '8',
- '9': '9',
- '10': '10',
- '11': '11',
- '12': '12',
- '13': '13',
- },
- gridTemplateRows: {
- none: 'none',
- '1': 'repeat(1, minmax(0, 1fr))',
- '2': 'repeat(2, minmax(0, 1fr))',
- '3': 'repeat(3, minmax(0, 1fr))',
- '4': 'repeat(4, minmax(0, 1fr))',
- '5': 'repeat(5, minmax(0, 1fr))',
- '6': 'repeat(6, minmax(0, 1fr))',
- },
- gridRow: {
- auto: 'auto',
- 'span-1': 'span 1 / span 1',
- 'span-2': 'span 2 / span 2',
- 'span-3': 'span 3 / span 3',
- 'span-4': 'span 4 / span 4',
- 'span-5': 'span 5 / span 5',
- 'span-6': 'span 6 / span 6',
- },
- gridRowStart: {
- auto: 'auto',
- '1': '1',
- '2': '2',
- '3': '3',
- '4': '4',
- '5': '5',
- '6': '6',
- '7': '7',
- },
- gridRowEnd: {
- auto: 'auto',
- '1': '1',
- '2': '2',
- '3': '3',
- '4': '4',
- '5': '5',
- '6': '6',
- '7': '7',
- },
- transformOrigin: {
- center: 'center',
- top: 'top',
- 'top-right': 'top right',
- right: 'right',
- 'bottom-right': 'bottom right',
- bottom: 'bottom',
- 'bottom-left': 'bottom left',
- left: 'left',
- 'top-left': 'top left',
- },
- scale: {
- '0': '0',
- '50': '.5',
- '75': '.75',
- '90': '.9',
- '95': '.95',
- '100': '1',
- '105': '1.05',
- '110': '1.1',
- '125': '1.25',
- '150': '1.5',
- },
- rotate: {
- '-180': '-180deg',
- '-90': '-90deg',
- '-45': '-45deg',
- '0': '0',
- '45': '45deg',
- '90': '90deg',
- '180': '180deg',
- },
- translate: (theme, { negative }) => ({
- ...theme('spacing'),
- ...negative(theme('spacing')),
- '-full': '-100%',
- '-1/2': '-50%',
- '1/2': '50%',
- full: '100%',
- }),
- skew: {
- '-12': '-12deg',
- '-6': '-6deg',
- '-3': '-3deg',
- '0': '0',
- '3': '3deg',
- '6': '6deg',
- '12': '12deg',
- },
- transitionProperty: {
- none: 'none',
- all: 'all',
- DEFAULT: 'background-color, border-color, color, fill, stroke, opacity, box-shadow, transform',
- colors: 'background-color, border-color, color, fill, stroke',
- opacity: 'opacity',
- shadow: 'box-shadow',
- transform: 'transform',
- },
- transitionTimingFunction: {
- linear: 'linear',
- in: 'cubic-bezier(0.4, 0, 1, 1)',
- out: 'cubic-bezier(0, 0, 0.2, 1)',
- 'in-out': 'cubic-bezier(0.4, 0, 0.2, 1)',
- },
- transitionDuration: {
- '75': '75ms',
- '100': '100ms',
- '150': '150ms',
- '200': '200ms',
- '300': '300ms',
- '500': '500ms',
- '700': '700ms',
- '1000': '1000ms',
- },
- },
- variants: {
- accessibility: ['responsive', 'focus'],
- alignContent: ['responsive'],
- alignItems: ['responsive'],
- alignSelf: ['responsive'],
- appearance: ['responsive'],
- backgroundAttachment: ['responsive'],
- backgroundColor: ['responsive', 'hover', 'focus'],
- backgroundPosition: ['responsive'],
- backgroundRepeat: ['responsive'],
- backgroundSize: ['responsive'],
- borderCollapse: ['responsive'],
- borderColor: ['responsive', 'hover', 'focus', 'active'],
- borderRadius: ['responsive', 'hover', 'focus', 'active'],
- borderStyle: ['responsive', 'hover', 'focus', 'active'],
- borderWidth: ['responsive', 'hover', 'focus', 'active'],
- boxShadow: ['responsive', 'hover', 'focus', 'active'],
- boxSizing: ['responsive'],
- cursor: ['responsive'],
- display: ['responsive'],
- fill: ['responsive'],
- flex: ['responsive'],
- flexDirection: ['responsive'],
- flexGrow: ['responsive'],
- flexShrink: ['responsive'],
- flexWrap: ['responsive'],
- float: ['responsive'],
- clear: ['responsive'],
- fontFamily: ['responsive'],
- fontSize: ['responsive'],
- fontSmoothing: ['responsive'],
- fontStyle: ['responsive'],
- fontWeight: ['responsive', 'hover', 'focus'],
- height: ['responsive'],
- inset: ['responsive'],
- justifyContent: ['responsive'],
- letterSpacing: ['responsive'],
- lineHeight: ['responsive'],
- listStylePosition: ['responsive'],
- listStyleType: ['responsive'],
- margin: ['responsive'],
- maxHeight: ['responsive'],
- maxWidth: ['responsive'],
- minHeight: ['responsive'],
- minWidth: ['responsive'],
- objectFit: ['responsive'],
- objectPosition: ['responsive'],
- opacity: ['responsive', 'hover', 'focus'],
- order: ['responsive'],
- outline: ['responsive', 'focus'],
- overflow: ['responsive'],
- padding: ['responsive'],
- placeholderColor: ['responsive', 'focus'],
- pointerEvents: ['responsive'],
- position: ['responsive'],
- resize: ['responsive'],
- spinner: ['responsive'],
- stroke: ['responsive'],
- strokeWidth: ['responsive'],
- tableLayout: ['responsive', 'hover', 'focus'],
- textAlign: ['responsive'],
- textColor: ['responsive', 'hover', 'focus'],
- textDecoration: ['responsive', 'hover', 'focus'],
- textTransform: ['responsive'],
- userSelect: ['responsive'],
- verticalAlign: ['responsive'],
- visibility: ['responsive'],
- whitespace: ['responsive'],
- width: ['responsive'],
- wordBreak: ['responsive'],
- zIndex: ['responsive'],
- gap: ['responsive'],
- gridAutoFlow: ['responsive'],
- gridTemplateColumns: ['responsive'],
- gridColumn: ['responsive'],
- gridColumnStart: ['responsive'],
- gridColumnEnd: ['responsive'],
- gridTemplateRows: ['responsive'],
- gridRow: ['responsive'],
- gridRowStart: ['responsive'],
- gridRowEnd: ['responsive'],
- transform: ['responsive'],
- transformOrigin: ['responsive'],
- scale: ['responsive', 'hover', 'focus'],
- rotate: ['responsive', 'hover', 'focus'],
- translate: ['responsive', 'hover', 'focus'],
- skew: ['responsive', 'hover', 'focus'],
- transitionProperty: ['responsive', 'hover', 'focus'],
- transitionTimingFunction: ['responsive', 'hover', 'focus'],
- transitionDuration: ['responsive', 'hover', 'focus'],
- },
- corePlugins: {
- preflight: false
- },
- plugins: [
- require('tailwindcss-grid')({
- grids: [2, 3, 4, 5, 6, 8, 10, 12],
- gaps: {
- 0: '0',
- 4: '1rem',
- 8: '2rem',
- 16: '4rem',
- 32: '8rem',
- '4-x': '1rem',
- '4-y': '1rem',
- },
- autoMinWidths: {
- '16': '4rem',
- '24': '6rem',
- '300px': '300px',
- },
- variants: ['responsive'],
- }),
- ],
-}
-
diff --git a/spaces/beihai/GFPGAN-V1.3-whole-image/basicsr/data/data_util.py b/spaces/beihai/GFPGAN-V1.3-whole-image/basicsr/data/data_util.py
deleted file mode 100644
index 328c3cb4b56160da12c12acdd7f0c5f31d11b24f..0000000000000000000000000000000000000000
--- a/spaces/beihai/GFPGAN-V1.3-whole-image/basicsr/data/data_util.py
+++ /dev/null
@@ -1,313 +0,0 @@
-import cv2
-import numpy as np
-import torch
-from os import path as osp
-from torch.nn import functional as F
-
-from basicsr.data.transforms import mod_crop
-from basicsr.utils import img2tensor, scandir
-
-
-def read_img_seq(path, require_mod_crop=False, scale=1, return_imgname=False):
- """Read a sequence of images from a given folder path.
-
- Args:
- path (list[str] | str): List of image paths or image folder path.
- require_mod_crop (bool): Require mod crop for each image.
- Default: False.
- scale (int): Scale factor for mod_crop. Default: 1.
- return_imgname (bool): Whether to return the image names as well. Default: False.
-
- Returns:
- Tensor: size (t, c, h, w), RGB, [0, 1].
- list[str]: Returned image name list.
- """
- if isinstance(path, list):
- img_paths = path
- else:
- img_paths = sorted(list(scandir(path, full_path=True)))
- imgs = [cv2.imread(v).astype(np.float32) / 255. for v in img_paths]
-
- if require_mod_crop:
- imgs = [mod_crop(img, scale) for img in imgs]
- imgs = img2tensor(imgs, bgr2rgb=True, float32=True)
- imgs = torch.stack(imgs, dim=0)
-
- if return_imgname:
- imgnames = [osp.splitext(osp.basename(path))[0] for path in img_paths]
- return imgs, imgnames
- else:
- return imgs
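For orientation, here is a minimal usage sketch of `read_img_seq`; the folder path is a placeholder and assumes a directory of equally sized frames:

```python
from basicsr.data.data_util import read_img_seq

# Load a folder of frames as a (t, c, h, w) RGB tensor scaled to [0, 1];
# mod-crop each frame so its height/width are divisible by the scale factor.
imgs, names = read_img_seq('datasets/demo_clip', require_mod_crop=True, scale=4, return_imgname=True)
print(imgs.shape, names[:3])
```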
-
-
-def generate_frame_indices(crt_idx, max_frame_num, num_frames, padding='reflection'):
- """Generate an index list for reading `num_frames` frames from a sequence
- of images.
-
- Args:
- crt_idx (int): Current center index.
- max_frame_num (int): Total number of frames in the sequence (counting from 1).
- num_frames (int): Reading num_frames frames.
- padding (str): Padding mode, one of
- 'replicate' | 'reflection' | 'reflection_circle' | 'circle'
- Examples: current_idx = 0, num_frames = 5
- The generated frame indices under different padding mode:
- replicate: [0, 0, 0, 1, 2]
- reflection: [2, 1, 0, 1, 2]
- reflection_circle: [4, 3, 0, 1, 2]
- circle: [3, 4, 0, 1, 2]
-
- Returns:
- list[int]: A list of indices.
- """
- assert num_frames % 2 == 1, 'num_frames should be an odd number.'
- assert padding in ('replicate', 'reflection', 'reflection_circle', 'circle'), f'Wrong padding mode: {padding}.'
-
- max_frame_num = max_frame_num - 1 # start from 0
- num_pad = num_frames // 2
-
- indices = []
- for i in range(crt_idx - num_pad, crt_idx + num_pad + 1):
- if i < 0:
- if padding == 'replicate':
- pad_idx = 0
- elif padding == 'reflection':
- pad_idx = -i
- elif padding == 'reflection_circle':
- pad_idx = crt_idx + num_pad - i
- else:
- pad_idx = num_frames + i
- elif i > max_frame_num:
- if padding == 'replicate':
- pad_idx = max_frame_num
- elif padding == 'reflection':
- pad_idx = max_frame_num * 2 - i
- elif padding == 'reflection_circle':
- pad_idx = (crt_idx - num_pad) - (i - max_frame_num)
- else:
- pad_idx = i - num_frames
- else:
- pad_idx = i
- indices.append(pad_idx)
- return indices
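A quick sketch reproducing the docstring example above (the total frame count of 30 is arbitrary; any value larger than the window behaves the same here):

```python
from basicsr.data.data_util import generate_frame_indices

# 5-frame window centred on frame 0 of a 30-frame sequence.
print(generate_frame_indices(0, max_frame_num=30, num_frames=5, padding='reflection'))
# [2, 1, 0, 1, 2]
print(generate_frame_indices(0, max_frame_num=30, num_frames=5, padding='replicate'))
# [0, 0, 0, 1, 2]
```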
-
-
-def paired_paths_from_lmdb(folders, keys):
- """Generate paired paths from lmdb files.
-
- Contents of lmdb. Taking `lq.lmdb` as an example, the file structure is:
-
- lq.lmdb
- ├── data.mdb
- ├── lock.mdb
- ├── meta_info.txt
-
- The data.mdb and lock.mdb are standard lmdb files and you can refer to
- https://lmdb.readthedocs.io/en/release/ for more details.
-
- The meta_info.txt is a specified txt file to record the meta information
- of our datasets. It will be automatically created when preparing
- datasets by our provided dataset tools.
- Each line in the txt file records
- 1)image name (with extension),
- 2)image shape,
- 3)compression level, separated by a white space.
- Example: `baboon.png (120,125,3) 1`
-
- We use the image name without extension as the lmdb key.
- Note that we use the same key for the corresponding lq and gt images.
-
- Args:
- folders (list[str]): A list of folder path. The order of list should
- be [input_folder, gt_folder].
- keys (list[str]): A list of keys identifying folders. The order should
- be consistent with folders, e.g., ['lq', 'gt'].
- Note that this key is different from lmdb keys.
-
- Returns:
- list[str]: Returned path list.
- """
- assert len(folders) == 2, ('The len of folders should be 2 with [input_folder, gt_folder]. '
- f'But got {len(folders)}')
- assert len(keys) == 2, f'The len of keys should be 2 with [input_key, gt_key]. But got {len(keys)}'
- input_folder, gt_folder = folders
- input_key, gt_key = keys
-
- if not (input_folder.endswith('.lmdb') and gt_folder.endswith('.lmdb')):
- raise ValueError(f'{input_key} folder and {gt_key} folder should both be in lmdb '
- f'format. But received {input_key}: {input_folder}; '
- f'{gt_key}: {gt_folder}')
- # ensure that the two meta_info files are the same
- with open(osp.join(input_folder, 'meta_info.txt')) as fin:
- input_lmdb_keys = [line.split('.')[0] for line in fin]
- with open(osp.join(gt_folder, 'meta_info.txt')) as fin:
- gt_lmdb_keys = [line.split('.')[0] for line in fin]
- if set(input_lmdb_keys) != set(gt_lmdb_keys):
- raise ValueError(f'Keys in {input_key}_folder and {gt_key}_folder are different.')
- else:
- paths = []
- for lmdb_key in sorted(input_lmdb_keys):
- paths.append(dict([(f'{input_key}_path', lmdb_key), (f'{gt_key}_path', lmdb_key)]))
- return paths
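A minimal sketch of the call, with hypothetical dataset paths; both folders must end in `.lmdb` and contain a `meta_info.txt`:

```python
from basicsr.data.data_util import paired_paths_from_lmdb

paths = paired_paths_from_lmdb(['datasets/demo/lq.lmdb', 'datasets/demo/gt.lmdb'], ['lq', 'gt'])
# Each entry pairs the same lmdb key under both names, e.g.
# {'lq_path': 'baboon', 'gt_path': 'baboon'}
```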
-
-
-def paired_paths_from_meta_info_file(folders, keys, meta_info_file, filename_tmpl):
- """Generate paired paths from an meta information file.
-
- Each line in the meta information file contains the image names and
- image shape (usually for gt), separated by a white space.
-
- Example of a meta information file:
- ```
- 0001_s001.png (480,480,3)
- 0001_s002.png (480,480,3)
- ```
-
- Args:
- folders (list[str]): A list of folder path. The order of list should
- be [input_folder, gt_folder].
- keys (list[str]): A list of keys identifying folders. The order should
- be consistent with folders, e.g., ['lq', 'gt'].
- meta_info_file (str): Path to the meta information file.
- filename_tmpl (str): Template for each filename. Note that the
- template excludes the file extension. Usually the filename_tmpl is
- for files in the input folder.
-
- Returns:
- list[str]: Returned path list.
- """
- assert len(folders) == 2, ('The len of folders should be 2 with [input_folder, gt_folder]. '
- f'But got {len(folders)}')
- assert len(keys) == 2, f'The len of keys should be 2 with [input_key, gt_key]. But got {len(keys)}'
- input_folder, gt_folder = folders
- input_key, gt_key = keys
-
- with open(meta_info_file, 'r') as fin:
- gt_names = [line.strip().split(' ')[0] for line in fin]
-
- paths = []
- for gt_name in gt_names:
- basename, ext = osp.splitext(osp.basename(gt_name))
- input_name = f'{filename_tmpl.format(basename)}{ext}'
- input_path = osp.join(input_folder, input_name)
- gt_path = osp.join(gt_folder, gt_name)
- paths.append(dict([(f'{input_key}_path', input_path), (f'{gt_key}_path', gt_path)]))
- return paths
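A sketch under an assumed layout where each GT file `0001_s001.png` has a matching LQ file `0001_s001_x4.png`; the paths, file names, and the `_x4` template are placeholders:

```python
from basicsr.data.data_util import paired_paths_from_meta_info_file

paths = paired_paths_from_meta_info_file(
    folders=['datasets/demo/lq', 'datasets/demo/gt'],
    keys=['lq', 'gt'],
    meta_info_file='datasets/demo/meta_info.txt',
    filename_tmpl='{}_x4')
# -> [{'lq_path': 'datasets/demo/lq/0001_s001_x4.png',
#      'gt_path': 'datasets/demo/gt/0001_s001.png'}, ...]
```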
-
-
-def paired_paths_from_folder(folders, keys, filename_tmpl):
- """Generate paired paths from folders.
-
- Args:
- folders (list[str]): A list of folder path. The order of list should
- be [input_folder, gt_folder].
- keys (list[str]): A list of keys identifying folders. The order should
- be consistent with folders, e.g., ['lq', 'gt'].
- filename_tmpl (str): Template for each filename. Note that the
- template excludes the file extension. Usually the filename_tmpl is
- for files in the input folder.
-
- Returns:
- list[str]: Returned path list.
- """
- assert len(folders) == 2, ('The len of folders should be 2 with [input_folder, gt_folder]. '
- f'But got {len(folders)}')
- assert len(keys) == 2, f'The len of keys should be 2 with [input_key, gt_key]. But got {len(keys)}'
- input_folder, gt_folder = folders
- input_key, gt_key = keys
-
- input_paths = list(scandir(input_folder))
- gt_paths = list(scandir(gt_folder))
- assert len(input_paths) == len(gt_paths), (f'{input_key} and {gt_key} datasets have different number of images: '
- f'{len(input_paths)}, {len(gt_paths)}.')
- paths = []
- for gt_path in gt_paths:
- basename, ext = osp.splitext(osp.basename(gt_path))
- input_name = f'{filename_tmpl.format(basename)}{ext}'
- input_path = osp.join(input_folder, input_name)
- assert input_name in input_paths, f'{input_name} is not in {input_key}_paths.'
- gt_path = osp.join(gt_folder, gt_path)
- paths.append(dict([(f'{input_key}_path', input_path), (f'{gt_key}_path', gt_path)]))
- return paths
-
-
-def paths_from_folder(folder):
- """Generate paths from folder.
-
- Args:
- folder (str): Folder path.
-
- Returns:
- list[str]: Returned path list.
- """
-
- paths = list(scandir(folder))
- paths = [osp.join(folder, path) for path in paths]
- return paths
-
-
-def paths_from_lmdb(folder):
- """Generate paths from lmdb.
-
- Args:
- folder (str): Folder path.
-
- Returns:
- list[str]: Returned path list.
- """
- if not folder.endswith('.lmdb'):
- raise ValueError(f'Folder {folder} should be in lmdb format.')
- with open(osp.join(folder, 'meta_info.txt')) as fin:
- paths = [line.split('.')[0] for line in fin]
- return paths
-
-
-def generate_gaussian_kernel(kernel_size=13, sigma=1.6):
- """Generate Gaussian kernel used in `duf_downsample`.
-
- Args:
- kernel_size (int): Kernel size. Default: 13.
- sigma (float): Sigma of the Gaussian kernel. Default: 1.6.
-
- Returns:
- np.array: The Gaussian kernel.
- """
- from scipy.ndimage import filters as filters
- kernel = np.zeros((kernel_size, kernel_size))
- # set element at the middle to one, a dirac delta
- kernel[kernel_size // 2, kernel_size // 2] = 1
- # gaussian-smooth the dirac, resulting in a gaussian filter
- return filters.gaussian_filter(kernel, sigma)
-
-
-def duf_downsample(x, kernel_size=13, scale=4):
- """Downsamping with Gaussian kernel used in the DUF official code.
-
- Args:
- x (Tensor): Frames to be downsampled, with shape (b, t, c, h, w).
- kernel_size (int): Kernel size. Default: 13.
- scale (int): Downsampling factor. Supported scale: (2, 3, 4).
- Default: 4.
-
- Returns:
- Tensor: DUF downsampled frames.
- """
- assert scale in (2, 3, 4), f'Only support scale (2, 3, 4), but got {scale}.'
-
- squeeze_flag = False
- if x.ndim == 4:
- squeeze_flag = True
- x = x.unsqueeze(0)
- b, t, c, h, w = x.size()
- x = x.view(-1, 1, h, w)
- pad_w, pad_h = kernel_size // 2 + scale * 2, kernel_size // 2 + scale * 2
- x = F.pad(x, (pad_w, pad_w, pad_h, pad_h), 'reflect')
-
- gaussian_filter = generate_gaussian_kernel(kernel_size, 0.4 * scale)
- gaussian_filter = torch.from_numpy(gaussian_filter).type_as(x).unsqueeze(0).unsqueeze(0)
- x = F.conv2d(x, gaussian_filter, stride=scale)
- x = x[:, :, 2:-2, 2:-2]
- x = x.view(b, t, c, x.size(2), x.size(3))
- if squeeze_flag:
- x = x.squeeze(0)
- return x
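To illustrate the shapes involved, a small sketch with random frames (the sizes are chosen arbitrarily):

```python
import torch
from basicsr.data.data_util import duf_downsample

# Seven 3-channel 64x64 frames; the batch dimension is added and removed internally.
frames = torch.rand(7, 3, 64, 64)
small = duf_downsample(frames, kernel_size=13, scale=4)
print(small.shape)  # torch.Size([7, 3, 16, 16])
```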
diff --git a/spaces/beihai/PDF-Table-Extractor/.history/test_20220621135519.py b/spaces/beihai/PDF-Table-Extractor/.history/test_20220621135519.py
deleted file mode 100644
index 2300cd84b01e313fb1b4806d7b559cbd63e1b21a..0000000000000000000000000000000000000000
--- a/spaces/beihai/PDF-Table-Extractor/.history/test_20220621135519.py
+++ /dev/null
@@ -1,28 +0,0 @@
-#-*- coding : utf-8-*-
-import base64
-from subprocess import STDOUT
-import streamlit as st
-import pandas as pd
-import camelot as cam # extracting tables from PDFs
-
-st.title("PDF Table Extractor")
-input_pdf = st.file_uploader(label = "", type = 'pdf')
-background = st.selectbox("表格线条是否透明",(False,True))
-
-if input_pdf is not None:
- # byte object into a PDF file
- with open("input.pdf", "wb") as f:
- base64_pdf = base64.b64encode(input_pdf.read()).decode('utf-8')
- f.write(base64.b64decode(base64_pdf))
- f.close()
- page_number = st.text_input("请填写表格所在PDF页码,eg: 3", value = 1)
- tables_all= cam.read_pdf("input.pdf", pages=page_number, process_background=background)
- result_all = pd.ExcelWriter("result.xlsx", engine='xlsxwriter')
- for i in range(0,len(tables_all)):
- table = tables_all[i].df
- sheetname = str(i)
- table.to_excel(result_all, sheetname,index=False)
- result_all.save()
- with open(result_all,'rb') as f:
- st.download_button('抽取完成, 点击下载!', f,file_name="result.xlsx",mime="application/vnd.ms-excel")
-
\ No newline at end of file
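Outside the Streamlit UI, the core extraction in the deleted script boils down to the following sketch; the file name and page number are placeholders, and the transparent-lines option is omitted:

```python
import camelot
import pandas as pd

tables = camelot.read_pdf("input.pdf", pages="3")  # pages is a string, e.g. "1" or "1,3-5"
with pd.ExcelWriter("result.xlsx", engine="xlsxwriter") as writer:
    for i, table in enumerate(tables):
        table.df.to_excel(writer, sheet_name=str(i), index=False)
```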
diff --git a/spaces/bguberfain/Detic/detic/modeling/roi_heads/res5_roi_heads.py b/spaces/bguberfain/Detic/detic/modeling/roi_heads/res5_roi_heads.py
deleted file mode 100644
index bab706999a9927e34a7b07dad84ba1259ab5ec64..0000000000000000000000000000000000000000
--- a/spaces/bguberfain/Detic/detic/modeling/roi_heads/res5_roi_heads.py
+++ /dev/null
@@ -1,173 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import inspect
-import logging
-import numpy as np
-from typing import Dict, List, Optional, Tuple
-import torch
-from torch import nn
-
-from detectron2.config import configurable
-from detectron2.layers import ShapeSpec, nonzero_tuple
-from detectron2.structures import Boxes, ImageList, Instances, pairwise_iou
-from detectron2.utils.events import get_event_storage
-from detectron2.utils.registry import Registry
-
-from detectron2.modeling.box_regression import Box2BoxTransform
-from detectron2.modeling.roi_heads.fast_rcnn import fast_rcnn_inference
-from detectron2.modeling.roi_heads.roi_heads import ROI_HEADS_REGISTRY, Res5ROIHeads
-from detectron2.modeling.roi_heads.cascade_rcnn import CascadeROIHeads, _ScaleGradient
-from detectron2.modeling.roi_heads.box_head import build_box_head
-
-from .detic_fast_rcnn import DeticFastRCNNOutputLayers
-from ..debug import debug_second_stage
-
-from torch.cuda.amp import autocast
-
-@ROI_HEADS_REGISTRY.register()
-class CustomRes5ROIHeads(Res5ROIHeads):
- @configurable
- def __init__(self, **kwargs):
- cfg = kwargs.pop('cfg')
- super().__init__(**kwargs)
- stage_channel_factor = 2 ** 3
- out_channels = cfg.MODEL.RESNETS.RES2_OUT_CHANNELS * stage_channel_factor
-
- self.with_image_labels = cfg.WITH_IMAGE_LABELS
- self.ws_num_props = cfg.MODEL.ROI_BOX_HEAD.WS_NUM_PROPS
- self.add_image_box = cfg.MODEL.ROI_BOX_HEAD.ADD_IMAGE_BOX
- self.add_feature_to_prop = cfg.MODEL.ROI_BOX_HEAD.ADD_FEATURE_TO_PROP
- self.image_box_size = cfg.MODEL.ROI_BOX_HEAD.IMAGE_BOX_SIZE
- self.box_predictor = DeticFastRCNNOutputLayers(
- cfg, ShapeSpec(channels=out_channels, height=1, width=1)
- )
-
- self.save_debug = cfg.SAVE_DEBUG
- self.save_debug_path = cfg.SAVE_DEBUG_PATH
- if self.save_debug:
- self.debug_show_name = cfg.DEBUG_SHOW_NAME
- self.vis_thresh = cfg.VIS_THRESH
- self.pixel_mean = torch.Tensor(cfg.MODEL.PIXEL_MEAN).to(
- torch.device(cfg.MODEL.DEVICE)).view(3, 1, 1)
- self.pixel_std = torch.Tensor(cfg.MODEL.PIXEL_STD).to(
- torch.device(cfg.MODEL.DEVICE)).view(3, 1, 1)
- self.bgr = (cfg.INPUT.FORMAT == 'BGR')
-
- @classmethod
- def from_config(cls, cfg, input_shape):
- ret = super().from_config(cfg, input_shape)
- ret['cfg'] = cfg
- return ret
-
- def forward(self, images, features, proposals, targets=None,
- ann_type='box', classifier_info=(None,None,None)):
- '''
- enable debug and image labels
- classifier_info is shared across the batch
- '''
- if not self.save_debug:
- del images
-
- if self.training:
- if ann_type in ['box']:
- proposals = self.label_and_sample_proposals(
- proposals, targets)
- else:
- proposals = self.get_top_proposals(proposals)
-
- proposal_boxes = [x.proposal_boxes for x in proposals]
- box_features = self._shared_roi_transform(
- [features[f] for f in self.in_features], proposal_boxes
- )
- predictions = self.box_predictor(
- box_features.mean(dim=[2, 3]),
- classifier_info=classifier_info)
-
- if self.add_feature_to_prop:
- feats_per_image = box_features.mean(dim=[2, 3]).split(
- [len(p) for p in proposals], dim=0)
- for feat, p in zip(feats_per_image, proposals):
- p.feat = feat
-
- if self.training:
- del features
- if (ann_type != 'box'):
- image_labels = [x._pos_category_ids for x in targets]
- losses = self.box_predictor.image_label_losses(
- predictions, proposals, image_labels,
- classifier_info=classifier_info,
- ann_type=ann_type)
- else:
- losses = self.box_predictor.losses(
- (predictions[0], predictions[1]), proposals)
- if self.with_image_labels:
- assert 'image_loss' not in losses
- losses['image_loss'] = predictions[0].new_zeros([1])[0]
- if self.save_debug:
- denormalizer = lambda x: x * self.pixel_std + self.pixel_mean
- if ann_type != 'box':
- image_labels = [x._pos_category_ids for x in targets]
- else:
- image_labels = [[] for x in targets]
- debug_second_stage(
- [denormalizer(x.clone()) for x in images],
- targets, proposals=proposals,
- save_debug=self.save_debug,
- debug_show_name=self.debug_show_name,
- vis_thresh=self.vis_thresh,
- image_labels=image_labels,
- save_debug_path=self.save_debug_path,
- bgr=self.bgr)
- return proposals, losses
- else:
- pred_instances, _ = self.box_predictor.inference(predictions, proposals)
- pred_instances = self.forward_with_given_boxes(features, pred_instances)
- if self.save_debug:
- denormalizer = lambda x: x * self.pixel_std + self.pixel_mean
- debug_second_stage(
- [denormalizer(x.clone()) for x in images],
- pred_instances, proposals=proposals,
- save_debug=self.save_debug,
- debug_show_name=self.debug_show_name,
- vis_thresh=self.vis_thresh,
- save_debug_path=self.save_debug_path,
- bgr=self.bgr)
- return pred_instances, {}
-
- def get_top_proposals(self, proposals):
- for i in range(len(proposals)):
- proposals[i].proposal_boxes.clip(proposals[i].image_size)
- proposals = [p[:self.ws_num_props] for p in proposals]
- for i, p in enumerate(proposals):
- p.proposal_boxes.tensor = p.proposal_boxes.tensor.detach()
- if self.add_image_box:
- proposals[i] = self._add_image_box(p)
- return proposals
-
- def _add_image_box(self, p, use_score=False):
- image_box = Instances(p.image_size)
- n = 1
- h, w = p.image_size
- if self.image_box_size < 1.0:
- f = self.image_box_size
- image_box.proposal_boxes = Boxes(
- p.proposal_boxes.tensor.new_tensor(
- [w * (1. - f) / 2.,
- h * (1. - f) / 2.,
- w * (1. - (1. - f) / 2.),
- h * (1. - (1. - f) / 2.)]
- ).view(n, 4))
- else:
- image_box.proposal_boxes = Boxes(
- p.proposal_boxes.tensor.new_tensor(
- [0, 0, w, h]).view(n, 4))
- if use_score:
- image_box.scores = \
- p.objectness_logits.new_ones(n)
- image_box.pred_classes = \
- p.objectness_logits.new_zeros(n, dtype=torch.long)
- image_box.objectness_logits = \
- p.objectness_logits.new_ones(n)
- else:
- image_box.objectness_logits = \
- p.objectness_logits.new_ones(n)
- return Instances.cat([p, image_box])
\ No newline at end of file
diff --git a/spaces/bioriAsaeru/text-to-voice/Die Leiche lebte noch full movie hd 1080p How Kommissar Rex and his team cracked the case of the zombie victim.md b/spaces/bioriAsaeru/text-to-voice/Die Leiche lebte noch full movie hd 1080p How Kommissar Rex and his team cracked the case of the zombie victim.md
deleted file mode 100644
index 50bf33e5fbbe428d283b6a5d92a6b43cffbf6633..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Die Leiche lebte noch full movie hd 1080p How Kommissar Rex and his team cracked the case of the zombie victim.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/anyio/abc/_tasks.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/anyio/abc/_tasks.py
deleted file mode 100644
index e48d3c1e97e02cd188b567b50a4c0c615f187e4d..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/anyio/abc/_tasks.py
+++ /dev/null
@@ -1,119 +0,0 @@
-from __future__ import annotations
-
-import sys
-from abc import ABCMeta, abstractmethod
-from types import TracebackType
-from typing import TYPE_CHECKING, Any, Awaitable, Callable, TypeVar, overload
-from warnings import warn
-
-if sys.version_info >= (3, 8):
- from typing import Protocol
-else:
- from typing_extensions import Protocol
-
-if TYPE_CHECKING:
- from anyio._core._tasks import CancelScope
-
-T_Retval = TypeVar("T_Retval")
-T_contra = TypeVar("T_contra", contravariant=True)
-
-
-class TaskStatus(Protocol[T_contra]):
- @overload
- def started(self: TaskStatus[None]) -> None:
- ...
-
- @overload
- def started(self, value: T_contra) -> None:
- ...
-
- def started(self, value: T_contra | None = None) -> None:
- """
- Signal that the task has started.
-
- :param value: object passed back to the starter of the task
- """
-
-
-class TaskGroup(metaclass=ABCMeta):
- """
- Groups several asynchronous tasks together.
-
- :ivar cancel_scope: the cancel scope inherited by all child tasks
- :vartype cancel_scope: CancelScope
- """
-
- cancel_scope: CancelScope
-
- async def spawn(
- self,
- func: Callable[..., Awaitable[Any]],
- *args: object,
- name: object = None,
- ) -> None:
- """
- Start a new task in this task group.
-
- :param func: a coroutine function
- :param args: positional arguments to call the function with
- :param name: name of the task, for the purposes of introspection and debugging
-
- .. deprecated:: 3.0
- Use :meth:`start_soon` instead. If your code needs AnyIO 2 compatibility, you
- can keep using this until AnyIO 4.
-
- """
- warn(
- 'spawn() is deprecated -- use start_soon() (without the "await") instead',
- DeprecationWarning,
- )
- self.start_soon(func, *args, name=name)
-
- @abstractmethod
- def start_soon(
- self,
- func: Callable[..., Awaitable[Any]],
- *args: object,
- name: object = None,
- ) -> None:
- """
- Start a new task in this task group.
-
- :param func: a coroutine function
- :param args: positional arguments to call the function with
- :param name: name of the task, for the purposes of introspection and debugging
-
- .. versionadded:: 3.0
- """
-
- @abstractmethod
- async def start(
- self,
- func: Callable[..., Awaitable[Any]],
- *args: object,
- name: object = None,
- ) -> Any:
- """
- Start a new task and wait until it signals for readiness.
-
- :param func: a coroutine function
- :param args: positional arguments to call the function with
- :param name: name of the task, for the purposes of introspection and debugging
- :return: the value passed to ``task_status.started()``
- :raises RuntimeError: if the task finishes without calling ``task_status.started()``
-
- .. versionadded:: 3.0
- """
-
- @abstractmethod
- async def __aenter__(self) -> TaskGroup:
- """Enter the task group context and allow starting new tasks."""
-
- @abstractmethod
- async def __aexit__(
- self,
- exc_type: type[BaseException] | None,
- exc_val: BaseException | None,
- exc_tb: TracebackType | None,
- ) -> bool | None:
- """Exit the task group context waiting for all tasks to finish."""
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fastapi/security/utils.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fastapi/security/utils.py
deleted file mode 100644
index fa7a450b74e813e66fd6e9a140d48c29215503bb..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fastapi/security/utils.py
+++ /dev/null
@@ -1,10 +0,0 @@
-from typing import Optional, Tuple
-
-
-def get_authorization_scheme_param(
- authorization_header_value: Optional[str],
-) -> Tuple[str, str]:
- if not authorization_header_value:
- return "", ""
- scheme, _, param = authorization_header_value.partition(" ")
- return scheme, param
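A quick sketch of the helper's behaviour (the token value is a placeholder):

```python
from fastapi.security.utils import get_authorization_scheme_param

scheme, param = get_authorization_scheme_param("Bearer abc.def.ghi")
assert (scheme, param) == ("Bearer", "abc.def.ghi")

# A missing header degrades to two empty strings instead of raising.
assert get_authorization_scheme_param(None) == ("", "")
```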
diff --git a/spaces/cloudwp/sd/README.md b/spaces/cloudwp/sd/README.md
deleted file mode 100644
index cd8a3348d015de1a4f47d6922d4e8f85756bc361..0000000000000000000000000000000000000000
--- a/spaces/cloudwp/sd/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Sd
-emoji: 🐠
-colorFrom: green
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.28.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git "a/spaces/codertoro/gpt-academic/crazy_functions/\346\200\273\347\273\223word\346\226\207\346\241\243.py" "b/spaces/codertoro/gpt-academic/crazy_functions/\346\200\273\347\273\223word\346\226\207\346\241\243.py"
deleted file mode 100644
index 742c7abc30ed7b0c74deca2c5a616d3d201402e8..0000000000000000000000000000000000000000
--- "a/spaces/codertoro/gpt-academic/crazy_functions/\346\200\273\347\273\223word\346\226\207\346\241\243.py"
+++ /dev/null
@@ -1,139 +0,0 @@
-from toolbox import update_ui
-from toolbox import CatchException, report_execption, write_results_to_file
-from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
-fast_debug = False
-
-
-def 解析docx(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):
- import time, os
- # pip install python-docx  (handles .docx, cross-platform)
- # pip install pywin32      (handles .doc, Windows only)
-
- print('begin analysis on:', file_manifest)
- for index, fp in enumerate(file_manifest):
- if fp.split(".")[-1] == "docx":
- from docx import Document
- doc = Document(fp)
- file_content = "\n".join([para.text for para in doc.paragraphs])
- else:
- import win32com.client
- word = win32com.client.Dispatch("Word.Application")
- word.visible = False
- # open the document
- print('fp', os.getcwd())
- doc = word.Documents.Open(os.getcwd() + '/' + fp)
- # file_content = doc.Content.Text
- doc = word.ActiveDocument
- file_content = doc.Range().Text
- doc.Close()
- word.Quit()
-
- print(file_content)
-
- prefix = "接下来请你逐文件分析下面的论文文件," if index == 0 else ""
- # Filenames under private_upload are often garbled after unzipping a zip archive (rar and 7z are fine), so only the article content is analyzed and the filename is not passed to the model.
- i_say = prefix + f'请对下面的文章片段用中英文做概述,文件名是{os.path.relpath(fp, project_folder)},' \
- f'文章内容是 ```{file_content}```'
- i_say_show_user = prefix + f'[{index+1}/{len(file_manifest)}] 假设你是论文审稿专家,请对下面的文章片段做概述: {os.path.abspath(fp)}'
- chatbot.append((i_say_show_user, "[Local Message] waiting gpt response."))
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
-
- if not fast_debug:
- msg = '正常'
- # ** gpt request **
- gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
- inputs=i_say,
- inputs_show_user=i_say_show_user,
- llm_kwargs=llm_kwargs,
- chatbot=chatbot,
- history=[],
- sys_prompt="总结文章。"
- ) # 带超时倒计时
- chatbot[-1] = (i_say_show_user, gpt_say)
- history.append(i_say_show_user)
- history.append(gpt_say)
- yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面
- if not fast_debug: time.sleep(2)
-
- """
- # 可按需启用
- i_say = f'根据你上述的分析,对全文进行概括,用学术性语言写一段中文摘要,然后再写一篇英文的。'
- chatbot.append((i_say, "[Local Message] waiting gpt response."))
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
-
-
- i_say = f'我想让你做一个论文写作导师。您的任务是使用人工智能工具(例如自然语言处理)提供有关如何改进其上述文章的反馈。' \
- f'您还应该利用您在有效写作技巧方面的修辞知识和经验来建议作者可以更好地以书面形式表达他们的想法和想法的方法。' \
- f'根据你之前的分析,提出建议'
- chatbot.append((i_say, "[Local Message] waiting gpt response."))
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
-
- """
-
- if not fast_debug:
- msg = '正常'
- # ** gpt request **
- gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
- inputs=i_say,
- inputs_show_user=i_say,
- llm_kwargs=llm_kwargs,
- chatbot=chatbot,
- history=history,
- sys_prompt="总结文章。"
- ) # 带超时倒计时
- chatbot[-1] = (i_say, gpt_say)
- history.append(i_say)
- history.append(gpt_say)
- yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面
- res = write_results_to_file(history)
- chatbot.append(("完成了吗?", res))
- yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面
-
-
-@CatchException
-def 总结word文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
- import glob, os
-
- # Basic information: functionality and contributors
- chatbot.append([
- "函数插件功能?",
- "批量总结Word文档。函数插件贡献者: JasonGuo1"])
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
-
- # Try to import the dependencies; if any are missing, suggest how to install them
- try:
- from docx import Document
- except:
- report_execption(chatbot, history,
- a=f"解析项目: {txt}",
- b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade python-docx pywin32```。")
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
- return
-
- # Clear the history to avoid input overflow
- history = []
-
- # Check the input argument; exit directly if none is given
- if os.path.exists(txt):
- project_folder = txt
- else:
- if txt == "": txt = '空空如也的输入栏'
- report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
- return
-
- # Collect the list of files that need to be processed
- file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.docx', recursive=True)] + \
- [f for f in glob.glob(f'{project_folder}/**/*.doc', recursive=True)]
- # [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)] + \
- # [f for f in glob.glob(f'{project_folder}/**/*.cpp', recursive=True)] + \
- # [f for f in glob.glob(f'{project_folder}/**/*.c', recursive=True)]
-
- # If no files were found
- if len(file_manifest) == 0:
- report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何.docx或doc文件: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
- return
-
- # Start the actual task
- yield from 解析docx(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
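The .docx text extraction used above can be reproduced on its own with python-docx; a minimal sketch with a hypothetical input file:

```python
from docx import Document  # pip install python-docx

def extract_docx_text(path: str) -> str:
    doc = Document(path)
    return "\n".join(para.text for para in doc.paragraphs)

print(extract_docx_text("paper.docx"))
```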
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/lagarithrac.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/lagarithrac.c
deleted file mode 100644
index cdda67fb8153b0eeb3b47174e9937665406836f1..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/lagarithrac.c
+++ /dev/null
@@ -1,57 +0,0 @@
-/*
- * Lagarith range decoder
- * Copyright (c) 2009 Nathan Caldwell
- * Copyright (c) 2009 David Conrad
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-/**
- * @file
- * Lagarith range decoder
- * @author Nathan Caldwell
- * @author David Conrad
- */
-
-#include "get_bits.h"
-#include "lagarithrac.h"
-
-void ff_lag_rac_init(lag_rac *l, GetBitContext *gb, int length)
-{
- int i, j, left;
-
- /* According to reference decoder "1st byte is garbage",
- * however, it gets skipped by the call to align_get_bits()
- */
- align_get_bits(gb);
- left = get_bits_left(gb) >> 3;
- l->bytestream_start =
- l->bytestream = gb->buffer + get_bits_count(gb) / 8;
- l->bytestream_end = l->bytestream_start + left;
-
- l->range = 0x80;
- l->low = *l->bytestream >> 1;
- l->hash_shift = FFMAX(l->scale, 10) - 10;
- l->overread = 0;
-
- for (i = j = 0; i < 1024; i++) {
- unsigned r = i << l->hash_shift;
- while (l->prob[j + 1] <= r)
- j++;
- l->range_hash[i] = j;
- }
-}
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Discover 5 Exclusive New Atmospheres in Incredibox Wekiddy APK.md b/spaces/congsaPfin/Manga-OCR/logs/Discover 5 Exclusive New Atmospheres in Incredibox Wekiddy APK.md
deleted file mode 100644
index c4fa7184e1773f2a746c57f6b7edcca6c98a61fb..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Discover 5 Exclusive New Atmospheres in Incredibox Wekiddy APK.md
+++ /dev/null
@@ -1,253 +0,0 @@
-
-
Incredibox Wekiddy APK: A Fun and Creative Music App
-
Do you love music and want to create your own tunes with ease? Do you want to explore different musical genres and styles with a simple and intuitive interface? Do you want to have fun with a group of animated beatboxers that can sing, rap, and groove? If you answered yes to any of these questions, then you should check out Incredibox Wekiddy APK, a music app that lets you create your own music with the help of a merry crew of beatboxers. In this article, we will tell you everything you need to know about this app, including what it is, what it offers, how to download and install it, and what are some alternatives to it.
-
What is Incredibox?
-
Incredibox is a music app that allows anyone to create complex and catchy mixes by dragging and dropping icons onto a variety of characters. The app has simple and accessible controls and features. It also has multiple audio effects and sound customizations. You can choose between different in-app atmospheres, each with its own musical style, such as hip-hop, rock, funk, jazz, techno, and more. You can also save and share your mix with others, and even join the online community and enjoy the awesome mixes from others. The app is ad-free, safe for kids, and popular with teachers as it manages to be educational as well as fun teaching children all about rhythm and tempo .
Incredibox was created in 2009 by the French company So Far So Good. It started out as a webpage, but then it was released as a mobile and tablet app that became an instant hit. It has won several awards and appeared in various international media, such as BBC, Adobe, FWA, Gizmodo, Slate, Konbini, Softonic, Kotaku, Cosmopolitan, PocketGamer, AppAdvice, AppSpy, Vice, Ultralinx and many others. The online demo has attracted more than 80 million visitors since its creation.
-
The app has many features that make it fun and easy to use. Some of them are:
-
-
Drag and drop icons: You can create your own music by dragging and dropping icons onto the avatars. Each icon represents a different sound element, such as beats, melodies, effects, voices, etc. You can combine up to seven icons per character to create complex mixes.
-
Unlock animated choruses: You can find the right sound combos to unlock animated choruses that will enhance your tune. Each atmosphere has its own set of choruses that add more depth and variety to your mix.
-
Save and share your mix: You can save your mix as an MP3 file or as a link that you can share with your friends or on social media. You can also export your mix as a video that shows the animated characters and the icons you used.
-
Discover the online community: You can join the online community of Incredibox and listen to the top-ranked mixes from other users. You can also vote for your favorite ones and leave comments. You can also participate in contests and win prizes.
-
-
The different musical styles and atmospheres available
-
Incredibox has 10 different atmospheres, each with its own musical style, theme, and characters. They are:
-
| Atmosphere | Style | Theme | Release Date |
| --- | --- | --- | --- |
| Alpha | Hip-hop | Urban | 2009 |
| Little Miss | Soul | Retro | 2012 |
| Sunrise | Afrobeat | Tropical | 2013 |
| The Love | R&B | Romantic | 2014 |
| Brazil | Bossa nova | Carnival | 2016 |
| Alive | Electro pop | Futuristic | 2017 |
| Jeevan | Bollywood | Indian | 2018 |
| Dystopia | Industrial | Apocalyptic | 2019 |
| Wekiddy | K-pop | Cute | 2021 |
| Future | Trap | Cyberpunk | TBA |
-
You can switch between the atmospheres at any time and experiment with different sounds and vibes. You can also mix and match icons from different atmospheres to create your own unique style.
-
The benefits of using the app for music lovers and learners
-
Incredibox is not only a fun and creative app, but also a useful tool for music lovers and learners. Some of the benefits of using the app are:
-
-
It stimulates your musical creativity: You can unleash your musical imagination and create original mixes that express your mood, personality, and taste. You can also explore different musical genres and styles and discover new sounds and influences.
-
It improves your musical skills: You can learn the basics of music composition, such as rhythm, melody, harmony, and structure. You can also practice your ear training, pitch recognition, and musical memory by listening to the sounds and identifying the icons.
-
It enhances your musical appreciation: You can enjoy the music you create and listen to the music from others. You can also appreciate the diversity and richness of musical cultures and traditions from around the world.
-
It relaxes your mind and body: You can use the app as a form of entertainment, therapy, or meditation. You can also use it to relieve stress, boost your mood, or calm your nerves.
-
What is Wekiddy?
-
Wekiddy is the latest and most adorable atmosphere of Incredibox. It was released in 2021 as a collaboration between So Far So Good and WeKids, a Chinese company that specializes in children's entertainment and education. Wekiddy is inspired by the popular Korean pop music genre, also known as K-pop, which is known for its catchy tunes, colorful outfits, and cute choreographies. Wekiddy features a group of eight beatboxers, four boys and four girls, who wear different costumes and accessories that reflect their personalities and musical roles. They are:
-
-
-
Weky: The leader of the group, he wears a blue jacket, a yellow cap, and sunglasses. He provides the main beats and vocals.
-
Weka: The co-leader of the group, she wears a pink dress, a purple bow, and a headset. She provides the backup beats and vocals.
-
Weko: The rapper of the group, he wears a green hoodie, a red bandana, and a microphone. He provides the rap parts and effects.
-
Weke: The dancer of the group, she wears a yellow top, a blue skirt, and sneakers. She provides the dance moves and sounds.
-
Weki: The guitarist of the group, he wears a red shirt, a black vest, and a guitar. He provides the guitar riffs and chords.
-
Wekei: The keyboardist of the group, she wears a purple sweater, a pink skirt, and a keyboard. She provides the keyboard melodies and harmonies.
-
Weku: The drummer of the group, he wears a brown jacket, a blue hat, and drumsticks. He provides the drum beats and fills.
-
Wekeu: The DJ of the group, she wears a black dress, a silver necklace, and headphones. She provides the DJ scratches and samples.
-
-
The unique sounds and animations of Wekiddy
-
Wekiddy has 20 different icons that you can drag and drop onto the characters to create your mix. Each icon represents a sound element that is related to K-pop music, such as synth basses, vocal chops, claps, snaps, whistles, chants, etc. You can also find some surprises and Easter eggs among the icons that will make you smile or laugh. For example, you can make the characters say "WeKids" or "Incredibox" in different languages or make them imitate animal sounds or musical instruments.
-
Wekiddy also has 15 animated choruses that you can unlock by finding the right sound combos. Each chorus shows the characters performing a cute and catchy song with lyrics in English or Korean. The songs are about various topics such as love, friendship, happiness, dreams, etc. The songs also have different moods and styles such as pop ballads, rap songs, disco songs, etc. The choruses also have different animations that show the characters dancing, playing instruments, or doing other actions that match the song. The animations are colorful, lively, and adorable.
-
How to unlock the video clips and share your mix
-
Wekiddy also has a special feature that allows you to unlock video clips that show the real-life versions of the characters. The video clips are produced by WeKids and feature eight talented kids who sing and dance to the songs of Wekiddy. The kids are dressed and styled like the characters and perform in various locations such as a studio, a park, a school, etc. The video clips are fun, energetic, and professional.
-
To unlock the video clips, you need to find the right sound combos that trigger the "WeKids" icon. The icon will appear on the top right corner of the screen and will show a picture of one of the kids. You can tap on the icon to watch the video clip of that kid. You can also swipe left or right to see the other video clips. You can unlock up to eight video clips per mix.
-
Once you have unlocked the video clips, you can also share your mix with others. You can export your mix as a video that shows both the animated characters and the real kids. You can also add your name and a title to your mix. You can then save your video as an MP4 file or as a link that you can share on social media or via email. You can also upload your video to YouTube or other platforms and show off your musical skills.
-
How to download and install Incredibox Wekiddy APK?
-
Incredibox Wekiddy APK is a modified version of Incredibox that allows you to access the Wekiddy atmosphere for free. Normally, you would have to pay a small fee to unlock Wekiddy in the original app. However, with Incredibox Wekiddy APK, you can enjoy Wekiddy without spending any money. You can also access all the other atmospheres and features of Incredibox with this APK file.
-
To download and install Incredibox Wekiddy APK, you need to follow these steps:
-
The steps to download the APK file from a reliable source
-
-
Go to a reliable website that offers Incredibox Wekiddy APK for download. You can search for it on Google or use one of these links: . Make sure that the website is safe and trustworthy by checking its reviews, ratings, and comments.
-
Click on the download button or link and wait for the APK file to be downloaded to your device. The file size is about 60 MB, so it may take some time depending on your internet speed.
-
Once the download is complete, locate the APK file in your device's storage. It may be in your downloads folder or in another location depending on your settings.
-
-
The steps to install the APK file on your Android device
-
-
Before you install the APK file, you need to enable unknown sources on your device. This will allow you to install apps from sources other than Google Play Store. To do this, go to your device's settings, then security or privacy, then unknown sources or install unknown apps. Toggle on the option that allows you to install apps from unknown sources.
-
After you enable unknown sources, tap on the APK file that you downloaded and follow the instructions on the screen. You may have to grant some permissions or accept some terms and conditions before installing the app.
-
Once the installation is done, you will see an icon of Incredibox on your device's home screen or app drawer. Tap on it to launch the app and enjoy Wekiddy and other atmospheres.
-
The precautions to take before installing an APK file
-
While installing an APK file can be a convenient way to access apps that are not available on Google Play Store, it can also pose some risks to your device and your privacy. Some APK files may contain viruses, malware, spyware, or other harmful software that can damage your device or steal your personal information. Therefore, you should take some precautions before installing an APK file, such as:
-
-
Check the source of the APK file: You should only download APK files from reputable and trustworthy websites that have positive reviews, ratings, and comments from other users. You should also avoid clicking on suspicious links or pop-ups that may redirect you to malicious websites or download unwanted files.
-
Scan the APK file for viruses: You should use a reliable antivirus or anti-malware app to scan the APK file before installing it. This will help you detect and remove any potential threats that may harm your device or your data.
-
Backup your data: You should backup your important data, such as photos, videos, contacts, messages, etc., before installing an APK file. This will help you restore your data in case something goes wrong during the installation or after the installation.
-
Read the permissions and terms and conditions: You should read the permissions and terms and conditions that the app requests before installing it. You should only grant the permissions that are necessary and relevant for the app's functionality. You should also be aware of what the app can do with your data and how it can affect your device's performance.
-
-
What are some alternatives to Incredibox?
-
Incredibox is a great app for creating music, but it is not the only one. There are many other music apps that are similar to Incredibox in terms of features, style, and quality. Some of them are:
-
A list of some other music apps that are similar to Incredibox
-
-
Groovepad: This is a music app that allows you to create beats and melodies by tapping on colorful pads. You can choose from various genres, such as hip-hop, EDM, house, dubstep, drum and bass, trap, etc. You can also adjust the tempo, pitch, and volume of each pad. You can also record and share your creations with others.
-
Beat Snap: This is a music app that allows you to make beats and loops by using a grid of sounds. You can choose from hundreds of sounds, such as drums, basses, synths, vocals, effects, etc. You can also mix and match sounds from different genres, such as pop, rock, jazz, funk, etc. You can also record and share your tracks with others.
-
Music Maker Jam: This is a music app that allows you to create songs and remixes by using loops and samples. You can choose from thousands of loops and samples from various genres, such as hip-hop, EDM, rock, pop, jazz, etc. You can also add effects, filters, and vocals to your tracks. You can also join the online community and discover new music from others.
-
BandLab: This is a music app that allows you to create and collaborate on music projects with other musicians. You can use various instruments, such as guitars, keyboards, drums, etc., or record your own voice or sounds. You can also edit and mix your tracks with professional tools and effects. You can also join the online community and discover new music from others.
-
-
A brief comparison of their features and advantages
-
All of these apps are similar to Incredibox in the sense that they allow you to create music with ease and fun. However, they also have some differences in terms of features and advantages. Here is a brief comparison of them:
-
| App | Features | Advantages |
| --- | --- | --- |
| Incredibox | Drag and drop icons; unlock animated choruses; save and share your mix; discover the online community | Simple and intuitive interface; multiple musical styles and atmospheres; fun and creative animations; educational and entertaining |
| Groovepad | Tap on colorful pads; adjust the tempo, pitch, and volume; record and share your creations; discover new genres and sounds | Easy and fast way to make beats; high-quality sounds and effects; customizable sound settings; inspiring and diverse genres |
| Beat Snap | Use a grid of sounds; mix and match sounds from different genres; record and share your tracks; discover new sounds and loops | Flexible and versatile way to make loops; large library of sounds and loops; creative and original sound combinations; dynamic and lively sound grid |
| Music Maker Jam | Use loops and samples; add effects, filters, and vocals; remix your tracks; join the online community | Powerful and professional way to make songs; thousands of loops and samples; advanced editing and mixing tools; engaging and interactive online community |
| BandLab | Use various instruments; record your own voice or sounds; edit and mix your tracks; collaborate with other musicians | Comprehensive and collaborative way to make music; diverse and realistic instruments; professional and user-friendly tools and effects; supportive and talented online community |
-
A recommendation of which app to choose based on your preferences
-
Depending on your preferences, you may find one app more suitable for you than the others. Here are some recommendations of which app to choose based on your preferences:
-
-
If you want a simple and intuitive app that lets you create music with fun and creative animations, you should choose Incredibox.
-
If you want an easy and fast app that lets you create beats with high-quality sounds and effects, you should choose Groovepad.
-
If you want a flexible and versatile app that lets you create loops with a large library of sounds and loops, you should choose Beat Snap.
-
If you want a powerful and professional app that lets you create songs with thousands of loops and samples, you should choose Music Maker Jam.
-
If you want a comprehensive and collaborative app that lets you create music with diverse and realistic instruments, you should choose BandLab.
-
-
Conclusion
-
In this article, we have introduced you to Incredibox Wekiddy APK, a music app that lets you create your own music with the help of a merry crew of beatboxers. We have explained what Incredibox is, what it offers, how to download and install it, and what are some alternatives to it. We hope that you have found this article informative and helpful. We also hope that you have enjoyed reading it as much as we have enjoyed writing it.
-
A summary of the main points of the article
-
Here are the main points of the article:
-
-
Incredibox is a music app that allows anyone to create complex and catchy mixes by dragging and dropping icons onto a variety of characters.
-
Wekiddy is the latest and most adorable atmosphere of Incredibox. It is inspired by the popular Korean pop music genre, also known as K-pop.
-
Incredibox Wekiddy APK is a modified version of Incredibox that allows you to access the Wekiddy atmosphere for free.
-
To download and install Incredibox Wekiddy APK, you need to follow some steps and take some precautions.
-
There are many other music apps that are similar to Incredibox in terms of features, style, and quality. You can choose the one that suits your preferences best.
-
-
A call to action to try out Incredibox Wekiddy APK and have fun with music
-
If you are interested in trying out Incredibox Wekiddy APK, you can download it from one of these links: . You can also visit the official website of Incredibox to learn more about the app and its other atmospheres: https://www.incredibox.com/. You can also follow Incredibox on social media to stay updated on the latest news and updates: https://www.facebook.com/Incredibox.music/, https://twitter.com/incredibox_, https://www.instagram.com/incredibox.official/, https://www.youtube.com/user/IncrediboxOfficial.
-
We encourage you to try out Incredibox Wekiddy APK and have fun with music. You can create your own mixes, unlock animated choruses, save and share your mix, discover the online community, unlock video clips, and more. You can also experiment with different musical styles and atmospheres to find your own musical voice, or simply use the app as a way to relax, learn, or express yourself.
-
A thank you note and a request for feedback
-
Thank you for reading this article. We hope that you have found it useful and enjoyable. We would love to hear from you, so please leave us a comment below or contact us at . Let us know what you think about Incredibox Wekiddy APK, which atmospheres or features are your favorites, any tips or tricks you have learned, or any suggestions or questions you have. Your feedback is valuable to us and will help us improve our content and service.
-
FAQs
-
What is the difference between Incredibox Wekiddy APK and Incredibox?
-
Incredibox Wekiddy APK is a modified version of Incredibox that allows you to access the Wekiddy atmosphere for free. Normally you would have to pay a small fee to unlock Wekiddy in the original app. However, with Incredibox Wekiddy APK, you can enjoy Wekiddy without spending any money. You can also access all the other atmospheres and features of Incredibox with this APK file.
-
Is Incredibox Wekiddy APK safe to use?
-
Incredibox Wekiddy APK is generally safe to use, as long as you download it from a reliable and trustworthy source. However, you should always be careful when installing an APK file, as some of them may contain viruses, malware, spyware, or other harmful software that can damage your device or steal your personal information. Therefore, you should always scan the APK file for viruses, back up your data, read the permissions and terms and conditions, and enable unknown sources before installing it.
-
How can I update Incredibox Wekiddy APK?
-
Incredibox Wekiddy APK may not receive regular updates from the official developers, as it is a modified version of the app. Therefore, you may not be able to access the latest features or bug fixes that are available in the original app. However, you can check the website where you downloaded the APK file for any updates or new versions of the app. You can also uninstall the APK file and install the original app from Google Play Store if you want to enjoy the official updates.
-
Can I use Incredibox Wekiddy APK on other devices?
-
Incredibox Wekiddy APK is designed for Android devices only. Therefore, you may not be able to use it on other platforms, such as iOS, Windows, or Mac devices. However, you can use the online demo of Incredibox on any device that has a web browser and an internet connection. You can also download the original Incredibox app from the Google Play Store for Android devices or from the App Store for iOS devices.
-
Can I use Incredibox Wekiddy APK offline?
-
Incredibox Wekiddy APK does not require an internet connection to work. Therefore, you can use it offline and create your own music without any interruptions or limitations. However, you will need an internet connection to access some features of the app, such as saving and sharing your mix, discovering the online community, unlocking video clips, etc.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/How to Get Spotify Premium APK MOD on Your Android Device [Latest Version 2021].md b/spaces/congsaPfin/Manga-OCR/logs/How to Get Spotify Premium APK MOD on Your Android Device [Latest Version 2021].md
deleted file mode 100644
index a4173d483313b1b3e810b5e6711f7b9115c83931..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/How to Get Spotify Premium APK MOD on Your Android Device [Latest Version 2021].md
+++ /dev/null
@@ -1,105 +0,0 @@
-
-
Download Spotify Premium Mod APK 2021: Enjoy Unlimited Music and Podcasts
-
If you love listening to music and podcasts, you have probably heard of Spotify. It is the world's most popular music streaming service, with over 350 million users and 70 million songs. But did you know that you can enjoy even more features and benefits with Spotify Premium? And did you know that you can get Spotify Premium for free by using a modded version of the app? In this article, we will tell you everything you need to know about Spotify Premium Mod APK: how to download and install it on your device, what its features are, and what its pros and cons are. Let's get started!
Spotify is the world's most popular music streaming service
-
Spotify is an app that lets you stream music and podcasts from a huge library of artists, genres, albums, playlists, and more. You can discover new music, create your own playlists, follow your favorite artists, and share your music taste with your friends. You can also listen to podcasts on various topics, such as news, comedy, sports, education, and more. You can use Spotify on your smartphone, tablet, computer, smart TV, speaker, or car.
-
Spotify Premium offers many benefits over the free version
-
While Spotify is free to use, it has some limitations and drawbacks. For example, you can only skip six songs per hour, you have to listen to ads every few songs, you can't download songs for offline listening, and you can't choose the songs you want to play in some playlists. These restrictions can be annoying and frustrating if you want to enjoy your music without interruptions.
-
That's why many people opt for Spotify Premium, which is a subscription service that costs $9.99 per month (or less if you use a family or student plan). With Spotify Premium, you can enjoy the following benefits:
-
-
Unlimited skips: You can skip as many songs as you want without any limits.
-
No ads: You can listen to music without any interruptions from ads.
-
Offline listening: You can download up to 10,000 songs on five devices and listen to them without an internet connection.
-
High-quality audio: You can stream music at up to 320 kbps, which is the highest quality available on Spotify.
-
Shuffle play: You can play any song you want in any playlist or album.
-
-
As you can see, Spotify Premium offers a lot of value for your money. But what if you don't want to pay for it? Is there a way to get Spotify Premium for free? The answer is yes: by using Spotify Premium Mod APK.
-
-
What is Spotify Premium Mod APK and how does it work?
-
Spotify Premium Mod APK is a modified version of the official app that unlocks all the premium features
-
Spotify Premium Mod APK is a hacked or cracked version of the official Spotify app that bypasses the subscription verification and unlocks all the premium features for free. It is not available on the Google Play Store or the App Store, but you can download it from third-party websites or forums.
How to download and install Spotify Premium Mod APK on your device
-
To download and install Spotify Premium Mod APK on your device, you need to follow these steps:
-
-
Uninstall the official Spotify app from your device if you have it.
-
Go to a trusted website or forum that provides the latest version of Spotify Premium Mod APK. You can search for it on Google or use the link below. Make sure you download the file from a safe and reliable source.
-
Enable the installation of apps from unknown sources on your device. To do this, go to Settings > Security > Unknown Sources and toggle it on.
-
Locate the downloaded Spotify Premium Mod APK file on your device and tap on it to start the installation process. Follow the instructions on the screen and wait for the installation to finish.
-
Launch the Spotify Premium Mod APK app and sign in with your existing Spotify account or create a new one. You don't need to use a VPN or a fake email address.
-
Enjoy all the premium features of Spotify for free!
-
-
Note: Some devices may require additional steps or permissions to install Spotify Premium Mod APK. If you encounter any problems or errors, you can search for solutions online or ask for help from other users.
-
What are the features of Spotify Premium Mod APK?
-
Unlimited skips, downloads, and offline listening
-
One of the best features of Spotify Premium Mod APK is that it allows you to skip as many songs as you want without any limits. You can also download up to 10,000 songs on five devices and listen to them offline without an internet connection. This is great for saving data and battery, as well as enjoying your music anywhere and anytime.
-
No ads, no shuffle, and high-quality audio
-
Another great feature of Spotify Premium Mod APK is that it removes all the annoying ads that interrupt your music experience. You can listen to music without any interruptions from ads. You can also play any song you want in any playlist or album without being forced to shuffle. You can also stream music at up to 320 kbps, which is the highest quality available on Spotify. This means you can enjoy crystal clear sound and rich bass.
-
Access to millions of songs, podcasts, and playlists
-
The last but not least feature of Spotify Premium Mod APK is that it gives you access to millions of songs, podcasts, and playlists from all over the world. You can discover new music, create your own playlists, follow your favorite artists, and share your music taste with your friends. You can also listen to podcasts on various topics, such as news, comedy, sports, education, and more. You can also explore curated playlists based on your mood, genre, activity, or occasion.
-
What are the pros and cons of using Spotify Premium Mod APK?
-
Pros: Save money, enjoy more music, and customize your experience
-
The main advantage of using Spotify Premium Mod APK is that you can save money by not paying for the subscription fee. You can enjoy all the premium features of Spotify for free without spending a dime. You can also enjoy more music and podcasts with unlimited skips, downloads, and offline listening. You can also customize your experience with no ads, no shuffle, and high-quality audio.
-
Cons: Risk of account ban, malware infection, and legal issues
-
The main disadvantage of using Spotify Premium Mod APK is that you may face some risks and challenges. For example, you may get banned from Spotify if they detect that you are using a modded version of the app. You may also get infected with malware or viruses if you download the app from an untrusted source. You may also face legal issues if you violate the terms and conditions of Spotify or infringe the rights of the artists and creators.
-
Conclusion: Is Spotify Premium Mod APK worth it?
-
In conclusion, Spotify Premium Mod APK is a great way to enjoy unlimited music and podcasts for free. It offers many features and benefits that enhance your music experience. However, it also comes with some risks and challenges that you should be aware of before using it. Ultimately, it is up to you to decide whether you want to use it or not. If you do decide to use it, make sure you download it from a trusted source and use it responsibly.
-
Frequently Asked Questions
-
-
Q: Is Spotify Premium Mod APK safe to use?
-
A: It depends on where you download it from and how you use it. If you download it from a trusted source and use it responsibly, it should be safe to use. However, if you download it from an untrusted source or use it in a way that violates the terms and conditions of Spotify, you may expose yourself to some risks, such as account ban, malware infection, or legal issues.
-
Q: How can I avoid getting banned from Spotify?
-
A: There is no guarantee that you won't get banned from Spotify if you use Spotify Premium Mod APK, but there are some tips that may help you reduce the chances of getting banned. For example, you can use a VPN to hide your IP address and location, you can use a fake email address to create a new Spotify account, you can avoid logging in with your Facebook account, and you can clear your app data and cache regularly.
-
Q: How can I update Spotify Premium Mod APK?
-
A: To update Spotify Premium Mod APK, you need to uninstall the old version and install the new version from the same source. You can also check for updates on the website or forum where you downloaded the app. However, you should be careful when updating the app, as some updates may not work properly or may contain bugs or malware.
-
Q: Can I use Spotify Premium Mod APK on iOS devices?
-
A: No, Spotify Premium Mod APK is only compatible with Android devices. If you want to use Spotify Premium on iOS devices, you need to use other methods, such as jailbreaking your device or using a third-party app store.
-
Q: What are some alternatives to Spotify Premium Mod APK?
-
A: If you don't want to use Spotify Premium Mod APK, you can try some alternatives that offer similar features and benefits. Some of these alternatives are:
-
-
Deezer: A music streaming service that offers over 73 million songs, podcasts, and playlists. You can also download songs for offline listening and enjoy high-quality audio. You can get Deezer Premium for free by using a modded version of the app.
-
Pandora: A music streaming service that offers personalized radio stations based on your preferences and feedback. You can also create your own stations and discover new music. You can get Pandora Premium for free by using a modded version of the app.
-
YouTube Music: A music streaming service that offers over 70 million songs, videos, and live performances. You can also access YouTube's vast library of content and watch music videos. You can get YouTube Music Premium for free by using a modded version of the app.
-
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Play Grand Truck Simulator 2 on PC and Enjoy New Maps and Weather System.md b/spaces/congsaPfin/Manga-OCR/logs/Play Grand Truck Simulator 2 on PC and Enjoy New Maps and Weather System.md
deleted file mode 100644
index 5534d7d45ad0c5305a0b0890e8167eaec140ca46..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Play Grand Truck Simulator 2 on PC and Enjoy New Maps and Weather System.md
+++ /dev/null
@@ -1,161 +0,0 @@
-
-
Grand Truck Simulator 2: A New Concept in Mobile Logistics Simulation
-
If you are a fan of truck driving games, you might have heard of Grand Truck Simulator, a popular mobile game that lets you experience the life of a trucker. But did you know that there is a sequel to this game that offers even more features and challenges? In this article, we will tell you everything you need to know about Grand Truck Simulator 2, a new concept in mobile logistics simulation. We will also show you how to download and play this game on your PC using different emulators, what the system requirements are, and what the benefits of playing it on a bigger screen are. So buckle up and get ready for a thrilling ride!
-
What is Grand Truck Simulator 2?
-
Grand Truck Simulator 2 is a simulation game developed by Pulsar GameSoft, a studio based in Argentina. It is the second edition of the Grand Truck Simulator saga, which brings a new concept in mobile logistics simulation. In this game, you are not only a truck driver, but also a fleet manager who must take care of your vehicles and business. You will have to deal with realistic physics, consumption, damage, and wear, as well as check tire pressure, coolant and lubricant levels, buy used trucks, change engines, gearboxes, differentials, tires, and rims. You will also have to explore new maps and face an improved weather system that will provide a fascinating gaming experience.
Grand Truck Simulator 2 has many features that make it stand out from other truck driving games. Some of these features are:
-
-
A new physics engine that simulates realistic consumption, damage, and wear of your trucks.
-
A vehicle maintenance system that allows you to check tire pressure, coolant and lubricant levels, buy used trucks, change engines, gearboxes, differentials, tires, and rims.
-
A variety of trucks to choose from, each with its own characteristics and performance.
-
A dynamic weather system that affects the road conditions and visibility.
-
New maps to explore, including USA West Coast, Brazil South East Coast, Argentina North West Coast, and more.
-
A day-night cycle that changes the lighting and atmosphere of the game.
-
A traffic system that includes cars, buses, motorcycles, trains, and other trucks.
-
A radio system that lets you listen to music or news while driving.
-
A multiplayer mode that lets you play with other players online.
-
A customization system that lets you change the color, paint job, accessories, and stickers of your trucks.
-
-
Gameplay of Grand Truck Simulator 2
-
The gameplay of Grand Truck Simulator 2 is similar to the first game, but with more depth and complexity. You start by choosing your truck from a selection of models, each with its own specifications and price. You can also customize your truck with different colors, paint jobs, accessories, and stickers. Then you can choose a job from a list of available cargoes and destinations. You will have to drive your truck to the loading point, attach the trailer, and deliver it to the destination. Along the way, you will have to follow the traffic rules, avoid accidents and damages, refuel your truck at gas stations, rest at motels or parking lots, pay tolls or fines if necessary, and enjoy the scenery and the weather. You will also have to manage your fleet and your business, by buying new trucks, hiring drivers, expanding your garage, and earning money and reputation. You can also play online with other players, chat with them, join or create a convoy, and compete in leaderboards and events.
-
How to Download Grand Truck Simulator 2 for PC?
-
Grand Truck Simulator 2 is a mobile game that is available for Android devices on Google Play Store. However, if you want to play this game on your PC, you will need an emulator that can run Android apps on your computer. An emulator is a software program that mimics the functions of an Android device, allowing you to access the Google Play Store and download any app you want. There are many emulators that you can use to play Grand Truck Simulator 2 on PC, but we will recommend three of the best ones: LDPlayer, GameLoop, and MuMu Player. Here are the steps to download and install Grand Truck Simulator 2 for PC using these emulators:
-
Download Grand Truck Simulator 2 with MuMu Player Emulator
Launch MuMu Player and log in with your Google account.
-
Go to the "App Store" tab and search for "Grand Truck Simulator 2".
-
Select the game from the search results and click "Install".
-
Wait for the installation to finish and then click "Open".
-
Enjoy playing Grand Truck Simulator 2 on PC with MuMu Player.
-
-
What are the System Requirements for Grand Truck Simulator 2 on PC?
-
To play Grand Truck Simulator 2 on PC smoothly, you will need a computer that meets the following system requirements:
-
-
Minimum Requirements
-
-
Operating System: Windows 7 or higher.
-
CPU: Intel or AMD Dual-Core Processor.
-
RAM: 4 GB or more.
-
Disk Space: 5 GB or more.
-
Graphics Card: NVIDIA GeForce GT 730 or AMD Radeon R7 240 or higher.
-
Internet Connection: Broadband or higher.
-
-
Recommended Requirements
-
-
Operating System: Windows 10 or higher.
-
CPU: Intel or AMD Quad-Core Processor or higher.
-
RAM: 8 GB or more.
-
Disk Space: 10 GB or more.
-
Graphics Card: NVIDIA GeForce GTX 1050 or AMD Radeon RX 560 or higher.
-
Internet Connection: Broadband or higher.
-
-
Why Play Grand Truck Simulator 2 on PC?
-
You might be wondering why you should play Grand Truck Simulator 2 on PC instead of your mobile device. Well, there are many benefits of playing this game on a bigger screen, such as:
-
Benefits of Playing Grand Truck Simulator 2 on PC
-
-
You can enjoy better graphics and sound quality, as well as a larger field of view.
-
You can use your keyboard and mouse to control your truck more easily and precisely.
You can play the game on a bigger screen without draining your battery or overheating your device.
-
You can access more features and options, such as recording, streaming, screenshotting, and customizing your settings.
-
You can avoid interruptions from phone calls, messages, notifications, or ads.
-
-
Tips and Tricks for Playing Grand Truck Simulator 2 on PC
-
If you want to improve your skills and performance in Grand Truck Simulator 2 on PC, here are some tips and tricks that you can follow:
-
-
Adjust your graphics settings according to your PC specifications and preferences. You can choose from low, medium, high, or ultra settings, as well as enable or disable shadows, reflections, anti-aliasing, and other effects.
-
Use the keyboard shortcuts to access different functions and menus. For example, you can press F1 to open the map, F2 to open the radio, F3 to open the dashboard, F4 to open the settings, F5 to open the multiplayer mode, and F6 to open the chat.
-
Use the mouse wheel to zoom in or out of the camera view. You can also press C to switch between different camera angles, such as first-person, third-person, cabin, trailer, or free mode.
-
Use the arrow keys or WASD keys to steer your truck. You can also use the space bar to brake, the left shift key to accelerate, the left control key to reverse, and the E key to start or stop the engine.
-
Use the Q and Z keys to change gears manually. You can also use the R key to activate the retarder, which helps you slow down your truck without using the brakes.
-
Use the T key to attach or detach the trailer. You can also use the X key to activate the parking brake, which prevents your truck from moving when parked.
-
Use the L key to turn on or off the lights. You can also use the K key to activate the high beams, which provide more visibility at night.
-
Use the H key to honk your horn. You can also use the N key to activate the air horn, which produces a louder sound.
-
Use the M key to activate or deactivate the cruise control. This feature allows you to maintain a constant speed without pressing the accelerator.
-
Use the P key to pause the game. You can also use the ESC key to open the main menu, where you can save, load, quit, or resume your game.
-
-
Conclusion
-
Grand Truck Simulator 2 is a simulation game that lets you experience the life of a trucker and a fleet manager. You can drive various trucks across different maps and deliver different cargoes while managing your vehicles and business. You can also customize your trucks with different colors, paint jobs, accessories, and stickers. You can play this game on your mobile device or on your PC using an emulator. Playing on PC has many benefits, such as better graphics and sound quality, easier controls, more features and options, and no interruptions. If you want to download and play Grand Truck Simulator 2 on PC, you can use one of these emulators: LDPlayer, GameLoop, or MuMu Player. We hope this article has helped you learn more about Grand Truck Simulator 2 and how to play it on PC. Have fun!
-
Frequently Asked Questions
-
Here are some of the most common questions that people ask about Grand Truck Simulator 2:
-
-
Is Grand Truck Simulator 2 free?
-
Yes, Grand Truck Simulator 2 is free to download and play on both Android devices and PC using an emulator. However, there are some in-app purchases that you can make to enhance your gaming experience.
-
Is Grand Truck Simulator 2 offline?
-
Grand Truck Simulator 2 is designed to be played with an internet connection. You will need one to download and update the game, access online features such as multiplayer mode and leaderboards, and sync your data and progress. However, once you have downloaded the game and updated it to the latest version, you can play it offline.
-
How to update Grand Truck Simulator 2?
-
To update Grand Truck Simulator 2, you will need to go to the Google Play Store on your Android device or the emulator that you are using on your PC. Then, you will need to find the game from your installed apps and click on the "Update" button. You can also enable the automatic updates option to update the game whenever a new version is available.
-
How to get more money in Grand Truck Simulator 2?
-
To get more money in Grand Truck Simulator 2, you will need to complete more jobs and deliver more cargoes. You can also get more money by hiring drivers, expanding your garage, and improving your reputation. Additionally, you can watch ads or make in-app purchases to get more money instantly.
-
How to reset Grand Truck Simulator 2?
-
To reset Grand Truck Simulator 2, you will need to go to the settings menu of the game and click on the "Reset Game" button. This will erase all your data and progress and start a new game from scratch. However, be careful as this action cannot be undone.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Race Up Hill and Win Epic Loot in Hill Climb Racing 2 for Windows 10.md b/spaces/congsaPfin/Manga-OCR/logs/Race Up Hill and Win Epic Loot in Hill Climb Racing 2 for Windows 10.md
deleted file mode 100644
index 80e9e11f3c492b4390782cd9952e7ee74cb78e55..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Race Up Hill and Win Epic Loot in Hill Climb Racing 2 for Windows 10.md
+++ /dev/null
@@ -1,139 +0,0 @@
-
-
Hill Climb Racing 2 Free Download for Windows 10
-
If you are looking for a fun and addictive racing game that you can play on your PC, you should check out Hill Climb Racing 2. This is a sequel to the popular Hill Climb Racing game that has been downloaded over a billion times on Android and iOS devices. Hill Climb Racing 2 is a 2D online multiplayer racing game with dozens of tracks, vehicles and character customization options at your fingertips. You can race uphill in over 20+ different vehicles, from cars, trucks, bikes and even a tank. You can unlock and upgrade new vehicle parts, customize your character and vehicle's looks, team up with your friends online and race in teams mode, have arcade racing fun while performing cool stunt tricks, explore dozens of tracks to race on, enjoy the classic adventure mode that makes a return, and participate in weekly events that change up the gameplay in new exciting ways. In this article, we will show you how to download and install Hill Climb Racing 2 on Windows 10, as well as some tips and tricks for playing the game on your PC.
Hill Climb Racing 2 is not just a simple racing game. It has many features that make it stand out from other games in the genre. Here are some of the features that you can enjoy when you play Hill Climb Racing 2 on Windows 10:
-
Race uphill in over 20+ different vehicles
-
One of the most fun aspects of Hill Climb Racing 2 is the variety of vehicles that you can choose from. You can race uphill in over 20+ different vehicles, each with their own unique characteristics, strengths and weaknesses. You can choose from cars, trucks, bikes, motorcycles, buses, jeeps, tanks, monster trucks, tractors, scooters, supercars, sports cars, formula cars, rally cars, hot rods, dragsters, snowmobiles, sleds, hovercrafts, helicopters, rockets and more. You can also unlock new vehicles by completing challenges, winning races or buying them with coins or gems.
-
Unlock and upgrade new vehicle parts
-
Another feature that adds to the fun and challenge of Hill Climb Racing 2 is the ability to unlock and upgrade new vehicle parts. You can improve your vehicle's performance by upgrading its engine, suspension, tires, roll cage, turbo and fuel tank. You can also unlock new parts by completing challenges or buying them with coins or gems. Upgrading your vehicle parts will help you overcome the obstacles and terrain of each track, as well as give you an edge over your opponents.
-
Customize your character and vehicle's looks
-
If you want to express your personality and style in Hill Climb Racing 2, you can customize your character and vehicle's looks. You can change your character's appearance by choosing from different hats, shirts, pants, shoes and accessories. You can also change your vehicle's appearance by choosing from different paints, stickers, wheels and spoilers. You can unlock new customization options by completing challenges, winning races or buying them with coins or gems. Customizing your character and vehicle's looks will make you stand out from the crowd and show off your creativity.
-
Team up with your friends online and race in teams mode
-
Hill Climb Racing 2 is not just a solo game. You can also team up with your friends online and race in teams mode. You can join or create a racing team with up to 50 members and compete against other teams in seasons. You can also chat with your teammates, share tips and tricks, and send and receive gifts. Racing in teams mode will help you earn more coins and gems, as well as unlock exclusive team chests that contain valuable rewards.
-
Arcade racing fun while performing cool stunt tricks
-
Hill Climb Racing 2 is not just a realistic racing game. It is also an arcade racing game that lets you have fun while performing cool stunt tricks. You can flip, jump, wheelie, backflip, frontflip, barrel roll, spin, loop, and more. Performing stunt tricks will not only make you look cool, but also give you extra coins and gems, as well as boost your turbo meter. However, be careful not to crash or run out of fuel, as that will end your race prematurely.
-
Dozens of tracks to race on
-
Hill Climb Racing 2 has dozens of tracks to race on, each with its own unique theme, scenery, obstacles and challenges. You can race on tracks such as countryside, forest, desert, winter, city, moon, mars, rainbow, volcano, beach, roller coaster, mine shaft, nuclear plant, junkyard, swamp, cave and more. You can also unlock new tracks by completing challenges or buying them with coins or gems. Racing on different tracks will test your skills and adaptability as a racer.
-
Classic adventure mode makes a return
-
If you are feeling nostalgic for the original Hill Climb Racing game, you will be happy to know that the classic adventure mode makes a return in Hill Climb Racing 2. In this mode, you can choose any vehicle and track and see how far you can go without crashing or running out of fuel. You can also collect coins and gems along the way and try to beat your own high score. Adventure mode is a great way to practice your driving skills and have some relaxing fun.
-
Weekly events that change up the gameplay in new exciting ways
-
Hill Climb Racing 2 is not a static game. It is constantly updated with new content and features that keep the gameplay fresh and exciting. One of the features that adds to the variety and challenge of the game is the weekly events that change up the gameplay in new exciting ways. Every week, there is a new event that has a different theme, rule set and reward system. For example, there are events such as low gravity, time trial, one wheel challenge, coin rush, fuel frenzy, moon landing and more. Participating in weekly events will give you a chance to win exclusive prizes and trophies.
-
How to download and install Hill Climb Racing 2 on Windows 10
-
Now that you know what Hill Climb Racing 2 is all about and what features it has to offer, you might be wondering how to download and install it on your Windows 10 PC. There are two ways to do this: download from the Microsoft Store or download from an Android emulator.
-
Download from the Microsoft Store
-
The easiest way to download and install Hill Climb Racing 2 on Windows 10 is to download it from the Microsoft Store. The Microsoft Store is an official app store for Windows 10 devices that lets you download and install various apps and games for free or for a fee. To download Hill Climb Racing 2 from the Microsoft Store, follow these steps:
-
-
Open the Microsoft Store app on your Windows 10 PC.
-
Search for Hill Climb Racing 2 in the search bar.
-
Select Hill Climb Racing 2 from the search results.
-
Click on the Get button to start downloading the game.
-
Wait for the download to finish and then click on the Play button to launch the game.
-
-
Congratulations! You have successfully downloaded and installed Hill Climb Racing 2 on your Windows 10 PC from the Microsoft Store.
-
Download from an Android emulator
-
The other way to download and install Hill Climb Racing 2 on Windows 10 is to download it from an Android emulator. An Android emulator is a software program that lets you run Android apps and games on your Windows 10 PC. There are many Android emulators available for Windows 10, such as BlueStacks, NoxPlayer, LDPlayer, MEmu and more. To download Hill Climb Racing 2 from an Android emulator, follow these steps:
-
-
Download and install an Android emulator of your choice on your Windows 10 PC.
-
Launch the Android emulator and sign in with your Google account.
-
Open the Google Play Store app on the emulator.
-
Search for Hill Climb Racing 2 in the search bar.
-
Select Hill Climb Racing 2 from the search results.
-
Click on the Install button to start downloading the game.
-
Wait for the download to finish and then click on the Open button to launch the game.
-
-
Congratulations! You have successfully downloaded and installed Hill Climb Racing 2 on your Windows 10 PC from an Android emulator.
-
Tips and tricks for playing Hill Climb Racing 2 on Windows 10
-
Now that you have downloaded and installed Hill Climb Racing 2 on your Windows 10 PC, you might be wondering how to play it like a pro. Here are some tips and tricks that will help you improve your skills and enjoy the game more:
-
Use the keyboard controls for better precision
-
One of the advantages of playing Hill Climb Racing 2 on Windows 10 is that you can use the keyboard controls for better precision. The keyboard controls are as follows:
-
-
Use the left and right arrow keys to control the acceleration and braking of your vehicle.
-
Use the up and down arrow keys to control the tilt of your vehicle.
-
Use the spacebar to activate the turbo boost of your vehicle.
-
-
Using the keyboard controls will help you maneuver your vehicle more smoothly and avoid crashing or flipping over.
-
Adjust the graphics settings for optimal performance
-
Another advantage of playing Hill Climb Racing 2 on Windows 10 is that you can adjust the graphics settings for optimal performance. The graphics settings are as follows:
-
-
Use the low, medium or high quality option to change the level of detail and resolution of the game.
-
Use the fullscreen or windowed mode option to change the size of the game screen.
-
Use the vsync or no vsync option to enable or disable vertical synchronization, which can reduce screen tearing and stuttering.
-
-
Adjusting the graphics settings will help you optimize the game's performance and avoid lagging or freezing issues.
-
Collect coins and gems to unlock more content
-
One of the most important aspects of Hill Climb Racing 2 is collecting coins and gems to unlock more content. Coins and gems are the main currencies of the game that you can use to buy new vehicles, parts, tracks, customization options and more. You can collect coins and gems by doing the following:
-
-
Racing on different tracks and performing stunt tricks.
-
Completing challenges and achievements.
-
Opening chests that contain random rewards.
-
Watching ads that offer bonus rewards.
-
Purchasing them with real money (optional).
-
-
Collecting coins and gems will help you access more content and enhance your gaming experience.
-
Join a racing team and participate in seasons
-
One of the most fun features of Hill Climb Racing 2 is joining a racing team and participating in seasons. A racing team is a group of players that work together to compete against other teams in seasons. A season is a period of time that has a specific theme, rule set and reward system. You can join or create a racing team by doing the following:
-
-
Tapping on the team icon on the main menu.
-
Browsing through existing teams or creating your own team.
-
Sending or accepting team invitations.
-
Naming your team and choosing a team flag.
-
-
Joining a racing team and participating in seasons will help you earn more coins and gems, as well as unlock exclusive team chests that contain valuable rewards.
-
Conclusion
-
Hill Climb Racing 2 is one of the best racing games that you can play on your Windows 10 PC. It has many features that make it fun, addictive and challenging. You can race uphill in over 20+ different vehicles, unlock and upgrade new vehicle parts, customize your character and vehicle's looks, team up with your friends online and race in teams mode, have arcade racing fun while performing cool stunt tricks, explore dozens of tracks to race on, enjoy the classic adventure mode that makes a return, and participate in weekly events that change up the gameplay in new exciting ways. You can also download and install Hill Climb Racing 2 on Windows 10 easily from the Microsoft Store or from an Android emulator. Moreover, you can use some tips and tricks to improve your skills and enjoy the game more, such as using the keyboard controls, adjusting the graphics settings, collecting coins and gems, and joining a racing team. Hill Climb Racing 2 is a game that will keep you entertained for hours and hours. So what are you waiting for? Download Hill Climb Racing 2 on Windows 10 today and start racing uphill!
-
FAQs
-
Here are some frequently asked questions about Hill Climb Racing 2:
-
Q: Is Hill Climb Racing 2 free to play?
-
A: Yes, Hill Climb Racing 2 is free to play. However, it does offer some in-app purchases that can enhance your gaming experience, such as coins, gems, VIP membership and more. You can also watch ads to get some bonus rewards.
-
Q: Is Hill Climb Racing 2 safe to play?
-
A: Yes, Hill Climb Racing 2 is safe to play. It does not contain any harmful or inappropriate content that would harm your device or your privacy. It is also rated E for Everyone by the ESRB, which means it is suitable for all ages.
-
Q: How can I contact the developers of Hill Climb Racing 2?
-
A: You can contact the developers of Hill Climb Racing 2 by visiting their official website at https://fingersoft.com/ or by sending them an email at support@fingersoft.com. You can also follow them on social media platforms such as Facebook, Twitter, Instagram and YouTube.
-
Q: How can I share my feedback or report a bug in Hill Climb Racing 2?
-
A: You can share your feedback or report a bug in Hill Climb Racing 2 by tapping on the settings icon on the main menu and then tapping on the feedback button. You can also rate and review the game on the Microsoft Store or the Google Play Store.
-
Q: How can I play Hill Climb Racing 2 offline?
-
A: You can play Hill Climb Racing 2 offline by turning off your internet connection before launching the game. However, you will not be able to access some features that require an online connection, such as teams mode, weekly events, leaderboards and more.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Sonic Forces - Running Battle Mod APK Download and Enjoy Unlimited Money Speed and God Mode.md b/spaces/congsaPfin/Manga-OCR/logs/Sonic Forces - Running Battle Mod APK Download and Enjoy Unlimited Money Speed and God Mode.md
deleted file mode 100644
index 6b142120d91c5d7e2d7cb95d3aff9bf05c9c6f3d..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Sonic Forces - Running Battle Mod APK Download and Enjoy Unlimited Money Speed and God Mode.md
+++ /dev/null
@@ -1,108 +0,0 @@
-
-
Sonic Forces Running Battle Mod APK: Unlimited Everything
-
If you are a fan of Sonic the Hedgehog, you will love Sonic Forces Running Battle, a fast-paced multiplayer racing game where you can compete with other players around the world. In this game, you can choose your favorite Sonic character, customize your runner, and join epic battles on various tracks. You can also collect power-ups, use special abilities, and unleash your super speed to win the race. But what if you want to have more fun and advantages in the game? That's where Sonic Forces Running Battle Mod APK comes in. In this article, we will tell you everything you need to know about this amazing modded version of the game, including its features, benefits, and how to download and install it on your device.
-
Sonic Forces Running Battle is a mobile game developed by Sega, based on the popular Sonic franchise. It is a spin-off of the console game Sonic Forces, which was released in 2017. In this game, you can join the resistance and fight against Dr. Eggman and his evil army of robots. You can also team up with other players or challenge them in real-time online races. The game features various modes, such as Story Mode, Quick Race, Special Event, and more. You can also unlock and upgrade different Sonic characters, each with their own skills and abilities. The game has stunning 3D graphics, smooth animations, and catchy soundtracks that will make you feel like you are in the Sonic world.
-
Features of Sonic Forces Running Battle
-
Some of the main features of Sonic Forces Running Battle are:
-
-
Multiplayer racing: You can race with up to three other players in real-time online matches. You can also join or create a team and cooperate with your teammates to win more rewards.
-
Various characters: You can choose from over 15 Sonic characters, such as Sonic, Tails, Knuckles, Amy, Shadow, Silver, and more. You can also unlock new characters by completing missions or using red star rings.
-
Customization: You can customize your runner with different outfits, accessories, shoes, and gloves. You can also equip them with different wisps, which are power-ups that can boost your speed, attack, defense, or other abilities.
-
Diverse tracks: You can race on over 20 different tracks, each with their own obstacles, traps, and shortcuts. You can also explore different environments, such as Green Hill Zone, City Escape, Mystic Jungle, and more.
-
Achievements and leaderboards: You can earn achievements by completing various tasks and challenges in the game. You can also climb the leaderboards by winning races and increasing your rank.
-
-
How to play Sonic Forces Running Battle
-
The gameplay of Sonic Forces Running Battle is simple and intuitive. You just need to swipe left or right to change lanes, swipe up to jump, swipe down to slide, and tap to use your wisp or ability. You can also collect rings along the way to increase your score and boost your speed. The goal is to reach the finish line before your opponents or eliminate them with your attacks. You can also use items such as rockets, mines, lightning bolts, and more to sabotage your rivals or defend yourself from their attacks.
-
-
What is Sonic Forces Running Battle Mod APK?
-
Sonic Forces Running Battle Mod APK is a modified version of Sonic Forces Running Battle that offers unlimited everything. This means that you can enjoy unlimited money, god mode, mod speed, and other benefits that will make your gaming experience more enjoyable and easier. With this modded version of the game, you can unlock all the characters, outfits, wisps, and tracks in the game without spending any real money. You can also have unlimited health, speed, and power to dominate every race and challenge. Sonic Forces Running Battle Mod APK is the ultimate way to enjoy Sonic Forces Running Battle without any limitations or restrictions.
-
Benefits of Sonic Forces Running Battle Mod APK
-
Some of the benefits of Sonic Forces Running Battle Mod APK are:
-
-
Unlimited money: You can have unlimited red star rings and gold rings in the game, which are the main currencies used to buy and upgrade items. You can also use them to unlock premium characters, outfits, and wisps that are otherwise hard to get.
-
God mode: You can have unlimited health in the game, which means that you will never die or lose a race. You can also ignore any damage or attacks from your opponents or the environment. You can also use your wisps and abilities without any cooldown or limit.
-
Mod speed: You can have unlimited speed in the game, which means that you will always be faster than your rivals and the track. You can also outrun any obstacles or traps that might slow you down or stop you. You can also use your super speed to finish the race in record time.
-
-
How to download and install Sonic Forces Running Battle Mod APK
-
If you want to download and install Sonic Forces Running Battle Mod APK on your device, you need to follow these simple steps:
-
Requirements
-
Before you download and install Sonic Forces Running Battle Mod APK, you need to make sure that your device meets these requirements:
-
-
Your device must have Android 4.4 or higher version.
-
Your device must have at least 2 GB of RAM and 500 MB of free storage space.
-
Your device must have a stable internet connection.
-
You need to enable unknown sources on your device settings. To do this, go to Settings > Security > Unknown Sources and toggle it on.
-
-
Steps
-
After you have checked the requirements, you can follow these steps to download and install Sonic Forces Running Battle Mod APK on your device:
-
-
Click on this link to download the Sonic Forces Running Battle Mod APK file on your device.
-
Once the download is complete, locate the file in your device's file manager and tap on it to start the installation process.
-
Follow the instructions on the screen and wait for the installation to finish.
-
Launch the game from your app drawer and enjoy unlimited everything in Sonic Forces Running Battle.
-
-
Conclusion
-
Sonic Forces Running Battle is a fun and exciting game that lets you race with other players online as your favorite Sonic character. However, if you want to have more advantages and features in the game, you should try Sonic Forces Running Battle Mod APK. This modded version of the game gives you unlimited money, god mode, mod speed, and other benefits that will make your gaming experience more enjoyable and easier. You can download and install Sonic Forces Running Battle Mod APK on your device by following the steps we have provided in this article. So what are you waiting for? Download Sonic Forces Running Battle Mod APK now and join the ultimate Sonic racing adventure.
-
FAQs
-
Here are some frequently asked questions about Sonic Forces Running Battle Mod APK:
-
-
Is Sonic Forces Running Battle Mod APK safe to use?
-
Yes, Sonic Forces Running Battle Mod APK is safe to use as long as you download it from a trusted source like ours. We have tested and verified that this modded version of the game does not contain any viruses or malware that might harm your device or data.
-
Is Sonic Forces Running Battle Mod APK free to download?
-
Yes, Sonic Forces Running Battle Mod APK is free to download and use. You do not need to pay any fees or charges to access this modded version of the game. However, you might see some ads in the game that help support the developers.
-
Can I play Sonic Forces Running Battle Mod APK online with other players?
-
Yes, you can play Sonic Forces Running Battle Mod APK online with other players who are using the same modded version of the game. However, you might not be able to play with players who are using the original version of the game or other mods.
-
Can I update Sonic Forces Running Battle Mod APK?
-
No, you cannot update Sonic Forces Running Battle Mod APK as it is not compatible with the official updates from the game developers. You will need to download and install the latest version of Sonic Forces Running Battle Mod APK from our website whenever there is a new update available.
-
Do I need to root my device to use Sonic Forces Running Battle Mod APK?
-
No, you do not need to root your device to use Sonic Forces Running Battle Mod APK. This modded version of the game works fine on both rooted and non-rooted devices. However, some features might require root access to work properly.
-
-
-
\ No newline at end of file
diff --git a/spaces/contluForse/HuggingGPT/assets/ArcSoft Portrait Plus v2.1.0.237 Incl Crack [TorDigger] The Ultimate Photo Editing Software.md b/spaces/contluForse/HuggingGPT/assets/ArcSoft Portrait Plus v2.1.0.237 Incl Crack [TorDigger] The Ultimate Photo Editing Software.md
deleted file mode 100644
index c48e5ce28694b8a1a4eed58f001b6f35667c0fca..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/ArcSoft Portrait Plus v2.1.0.237 Incl Crack [TorDigger] The Ultimate Photo Editing Software.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
GetData Graph Digitizer 2.26.0.20 was available to download from the developer's website when we last checked. We cannot confirm whether a free download of this software is offered. The developer of the software is getdata-graph-digitizer.
Force measurements reveal attachment strength of goby eggs. Data show peak resistance to perpendicular pulling force in mN. For illustration, the published attachment strengths of asparagus beetle eggs (Crioceris asparagi) (Voigt and Gorb 2010), marine snail eggs (Melanochlamys diomedea) (Castro and Podolsky 2012), and blue mussel byssus threads (Mytilus edulis) (Brenner and Buck 2010) are shown. Nongoby data were extracted from figures in the respective articles using the software GetDataGraphDigitizer v. 2.26 (www.getdata-graph-digitizer.com).
-
-
\ No newline at end of file
diff --git a/spaces/cooelf/Multimodal-CoT/timm/data/loader.py b/spaces/cooelf/Multimodal-CoT/timm/data/loader.py
deleted file mode 100644
index 76144669090aca1e962d75bfeab66aaf923e7ec5..0000000000000000000000000000000000000000
--- a/spaces/cooelf/Multimodal-CoT/timm/data/loader.py
+++ /dev/null
@@ -1,262 +0,0 @@
-""" Loader Factory, Fast Collate, CUDA Prefetcher
-
-Prefetcher and Fast Collate inspired by NVIDIA APEX example at
-https://github.com/NVIDIA/apex/commit/d5e2bb4bdeedd27b1dfaf5bb2b24d6c000dee9be#diff-cf86c282ff7fba81fad27a559379d5bf
-
-Hacked together by / Copyright 2020 Ross Wightman
-"""
-
-import torch.utils.data
-import numpy as np
-
-from .transforms_factory import create_transform
-from .constants import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD
-from .distributed_sampler import OrderedDistributedSampler
-from .random_erasing import RandomErasing
-from .mixup import FastCollateMixup
-
-
-def fast_collate(batch):
- """ A fast collation function optimized for uint8 images (np array or torch) and int64 targets (labels)"""
- assert isinstance(batch[0], tuple)
- batch_size = len(batch)
- if isinstance(batch[0][0], tuple):
- # This branch 'deinterleaves' and flattens tuples of input tensors into one tensor ordered by position
- # such that all tuple of position n will end up in a torch.split(tensor, batch_size) in nth position
- inner_tuple_size = len(batch[0][0])
- flattened_batch_size = batch_size * inner_tuple_size
- targets = torch.zeros(flattened_batch_size, dtype=torch.int64)
- tensor = torch.zeros((flattened_batch_size, *batch[0][0][0].shape), dtype=torch.uint8)
- for i in range(batch_size):
- assert len(batch[i][0]) == inner_tuple_size # all input tensor tuples must be same length
- for j in range(inner_tuple_size):
- targets[i + j * batch_size] = batch[i][1]
- tensor[i + j * batch_size] += torch.from_numpy(batch[i][0][j])
- return tensor, targets
- elif isinstance(batch[0][0], np.ndarray):
- targets = torch.tensor([b[1] for b in batch], dtype=torch.int64)
- assert len(targets) == batch_size
- tensor = torch.zeros((batch_size, *batch[0][0].shape), dtype=torch.uint8)
- for i in range(batch_size):
- tensor[i] += torch.from_numpy(batch[i][0])
- return tensor, targets
- elif isinstance(batch[0][0], torch.Tensor):
- targets = torch.tensor([b[1] for b in batch], dtype=torch.int64)
- assert len(targets) == batch_size
- tensor = torch.zeros((batch_size, *batch[0][0].shape), dtype=torch.uint8)
- for i in range(batch_size):
- tensor[i].copy_(batch[i][0])
- return tensor, targets
- else:
- assert False
-
-
-class PrefetchLoader:
-
- def __init__(self,
- loader,
- mean=IMAGENET_DEFAULT_MEAN,
- std=IMAGENET_DEFAULT_STD,
- fp16=False,
- re_prob=0.,
- re_mode='const',
- re_count=1,
- re_num_splits=0):
- self.loader = loader
- self.mean = torch.tensor([x * 255 for x in mean]).cuda().view(1, 3, 1, 1)
- self.std = torch.tensor([x * 255 for x in std]).cuda().view(1, 3, 1, 1)
- self.fp16 = fp16
- if fp16:
- self.mean = self.mean.half()
- self.std = self.std.half()
- if re_prob > 0.:
- self.random_erasing = RandomErasing(
- probability=re_prob, mode=re_mode, max_count=re_count, num_splits=re_num_splits)
- else:
- self.random_erasing = None
-
- def __iter__(self):
- stream = torch.cuda.Stream()
- first = True
-
- for next_input, next_target in self.loader:
- with torch.cuda.stream(stream):
- next_input = next_input.cuda(non_blocking=True)
- next_target = next_target.cuda(non_blocking=True)
- if self.fp16:
- next_input = next_input.half().sub_(self.mean).div_(self.std)
- else:
- next_input = next_input.float().sub_(self.mean).div_(self.std)
- if self.random_erasing is not None:
- next_input = self.random_erasing(next_input)
-
- if not first:
- yield input, target
- else:
- first = False
-
- torch.cuda.current_stream().wait_stream(stream)
- input = next_input
- target = next_target
-
- yield input, target
-
- def __len__(self):
- return len(self.loader)
-
- @property
- def sampler(self):
- return self.loader.sampler
-
- @property
- def dataset(self):
- return self.loader.dataset
-
- @property
- def mixup_enabled(self):
- if isinstance(self.loader.collate_fn, FastCollateMixup):
- return self.loader.collate_fn.mixup_enabled
- else:
- return False
-
- @mixup_enabled.setter
- def mixup_enabled(self, x):
- if isinstance(self.loader.collate_fn, FastCollateMixup):
- self.loader.collate_fn.mixup_enabled = x
-
-
-def create_loader(
- dataset,
- input_size,
- batch_size,
- is_training=False,
- use_prefetcher=True,
- no_aug=False,
- re_prob=0.,
- re_mode='const',
- re_count=1,
- re_split=False,
- scale=None,
- ratio=None,
- hflip=0.5,
- vflip=0.,
- color_jitter=0.4,
- auto_augment=None,
- num_aug_splits=0,
- interpolation='bilinear',
- mean=IMAGENET_DEFAULT_MEAN,
- std=IMAGENET_DEFAULT_STD,
- num_workers=1,
- distributed=False,
- crop_pct=None,
- collate_fn=None,
- pin_memory=False,
- fp16=False,
- tf_preprocessing=False,
- use_multi_epochs_loader=False,
- persistent_workers=True,
-):
- re_num_splits = 0
- if re_split:
- # apply RE to second half of batch if no aug split otherwise line up with aug split
- re_num_splits = num_aug_splits or 2
- dataset.transform = create_transform(
- input_size,
- is_training=is_training,
- use_prefetcher=use_prefetcher,
- no_aug=no_aug,
- scale=scale,
- ratio=ratio,
- hflip=hflip,
- vflip=vflip,
- color_jitter=color_jitter,
- auto_augment=auto_augment,
- interpolation=interpolation,
- mean=mean,
- std=std,
- crop_pct=crop_pct,
- tf_preprocessing=tf_preprocessing,
- re_prob=re_prob,
- re_mode=re_mode,
- re_count=re_count,
- re_num_splits=re_num_splits,
- separate=num_aug_splits > 0,
- )
-
- sampler = None
- if distributed and not isinstance(dataset, torch.utils.data.IterableDataset):
- if is_training:
- sampler = torch.utils.data.distributed.DistributedSampler(dataset)
- else:
- # This will add extra duplicate entries to result in equal num
- # of samples per-process, will slightly alter validation results
- sampler = OrderedDistributedSampler(dataset)
-
- if collate_fn is None:
- collate_fn = fast_collate if use_prefetcher else torch.utils.data.dataloader.default_collate
-
- loader_class = torch.utils.data.DataLoader
-
- if use_multi_epochs_loader:
- loader_class = MultiEpochsDataLoader
-
- loader_args = dict(
- batch_size=batch_size,
- shuffle=not isinstance(dataset, torch.utils.data.IterableDataset) and sampler is None and is_training,
- num_workers=num_workers,
- sampler=sampler,
- collate_fn=collate_fn,
- pin_memory=pin_memory,
- drop_last=is_training,
- persistent_workers=persistent_workers)
- try:
- loader = loader_class(dataset, **loader_args)
- except TypeError as e:
- loader_args.pop('persistent_workers') # only in Pytorch 1.7+
- loader = loader_class(dataset, **loader_args)
- if use_prefetcher:
- prefetch_re_prob = re_prob if is_training and not no_aug else 0.
- loader = PrefetchLoader(
- loader,
- mean=mean,
- std=std,
- fp16=fp16,
- re_prob=prefetch_re_prob,
- re_mode=re_mode,
- re_count=re_count,
- re_num_splits=re_num_splits
- )
-
- return loader
-
-
-class MultiEpochsDataLoader(torch.utils.data.DataLoader):
-
- def __init__(self, *args, **kwargs):
- super().__init__(*args, **kwargs)
- self._DataLoader__initialized = False
- self.batch_sampler = _RepeatSampler(self.batch_sampler)
- self._DataLoader__initialized = True
- self.iterator = super().__iter__()
-
- def __len__(self):
- return len(self.batch_sampler.sampler)
-
- def __iter__(self):
- for i in range(len(self)):
- yield next(self.iterator)
-
-
-class _RepeatSampler(object):
- """ Sampler that repeats forever.
-
- Args:
- sampler (Sampler)
- """
-
- def __init__(self, sampler):
- self.sampler = sampler
-
- def __iter__(self):
- while True:
- yield from iter(self.sampler)
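For readers skimming this hunk: the deleted `timm/data/loader.py` above ties together `create_transform`, `fast_collate` and `PrefetchLoader`. The following is a minimal usage sketch, not part of the original file; it assumes the vendored copy is importable as `timm.data.loader`, that torchvision is installed, and that the dataset and sizes are purely illustrative.
```python
# Minimal sketch: building a training loader with the factory from the deleted module.
from torchvision.datasets import FakeData  # stand-in dataset for illustration only
from timm.data.loader import create_loader

dataset = FakeData(size=64, image_size=(3, 224, 224), num_classes=10)
loader = create_loader(
    dataset,
    input_size=(3, 224, 224),
    batch_size=16,
    is_training=True,
    use_prefetcher=False,  # True wraps the loader in PrefetchLoader and normalizes on the GPU
    num_workers=2,
)
images, targets = next(iter(loader))
print(images.shape, targets.shape)  # torch.Size([16, 3, 224, 224]) torch.Size([16])
```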
diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/image/__init__.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/image/__init__.py
deleted file mode 100644
index d0051d609d3de4e7562e3fe638335c66617c4d91..0000000000000000000000000000000000000000
--- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/image/__init__.py
+++ /dev/null
@@ -1,28 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from .colorspace import (bgr2gray, bgr2hls, bgr2hsv, bgr2rgb, bgr2ycbcr,
- gray2bgr, gray2rgb, hls2bgr, hsv2bgr, imconvert,
- rgb2bgr, rgb2gray, rgb2ycbcr, ycbcr2bgr, ycbcr2rgb)
-from .geometric import (cutout, imcrop, imflip, imflip_, impad,
- impad_to_multiple, imrescale, imresize, imresize_like,
- imresize_to_multiple, imrotate, imshear, imtranslate,
- rescale_size)
-from .io import imfrombytes, imread, imwrite, supported_backends, use_backend
-from .misc import tensor2imgs
-from .photometric import (adjust_brightness, adjust_color, adjust_contrast,
- adjust_lighting, adjust_sharpness, auto_contrast,
- clahe, imdenormalize, imequalize, iminvert,
- imnormalize, imnormalize_, lut_transform, posterize,
- solarize)
-
-__all__ = [
- 'bgr2gray', 'bgr2hls', 'bgr2hsv', 'bgr2rgb', 'gray2bgr', 'gray2rgb',
- 'hls2bgr', 'hsv2bgr', 'imconvert', 'rgb2bgr', 'rgb2gray', 'imrescale',
- 'imresize', 'imresize_like', 'imresize_to_multiple', 'rescale_size',
- 'imcrop', 'imflip', 'imflip_', 'impad', 'impad_to_multiple', 'imrotate',
- 'imfrombytes', 'imread', 'imwrite', 'supported_backends', 'use_backend',
- 'imdenormalize', 'imnormalize', 'imnormalize_', 'iminvert', 'posterize',
- 'solarize', 'rgb2ycbcr', 'bgr2ycbcr', 'ycbcr2rgb', 'ycbcr2bgr',
- 'tensor2imgs', 'imshear', 'imtranslate', 'adjust_color', 'imequalize',
- 'adjust_brightness', 'adjust_contrast', 'lut_transform', 'clahe',
- 'adjust_sharpness', 'auto_contrast', 'cutout', 'adjust_lighting'
-]
diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/utils/misc.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/utils/misc.py
deleted file mode 100644
index 2c58d0d7fee9fe3d4519270ad8c1e998d0d8a18c..0000000000000000000000000000000000000000
--- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/utils/misc.py
+++ /dev/null
@@ -1,377 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import collections.abc
-import functools
-import itertools
-import subprocess
-import warnings
-from collections import abc
-from importlib import import_module
-from inspect import getfullargspec
-from itertools import repeat
-
-
-# From PyTorch internals
-def _ntuple(n):
-
- def parse(x):
- if isinstance(x, collections.abc.Iterable):
- return x
- return tuple(repeat(x, n))
-
- return parse
-
-
-to_1tuple = _ntuple(1)
-to_2tuple = _ntuple(2)
-to_3tuple = _ntuple(3)
-to_4tuple = _ntuple(4)
-to_ntuple = _ntuple
-
-
-def is_str(x):
- """Whether the input is an string instance.
-
- Note: This method is deprecated since python 2 is no longer supported.
- """
- return isinstance(x, str)
-
-
-def import_modules_from_strings(imports, allow_failed_imports=False):
- """Import modules from the given list of strings.
-
- Args:
- imports (list | str | None): The given module names to be imported.
- allow_failed_imports (bool): If True, the failed imports will return
- None. Otherwise, an ImportError is raise. Default: False.
-
- Returns:
- list[module] | module | None: The imported modules.
-
- Examples:
- >>> osp, sys = import_modules_from_strings(
- ... ['os.path', 'sys'])
- >>> import os.path as osp_
- >>> import sys as sys_
- >>> assert osp == osp_
- >>> assert sys == sys_
- """
- if not imports:
- return
- single_import = False
- if isinstance(imports, str):
- single_import = True
- imports = [imports]
- if not isinstance(imports, list):
- raise TypeError(
- f'custom_imports must be a list but got type {type(imports)}')
- imported = []
- for imp in imports:
- if not isinstance(imp, str):
- raise TypeError(
- f'{imp} is of type {type(imp)} and cannot be imported.')
- try:
- imported_tmp = import_module(imp)
- except ImportError:
- if allow_failed_imports:
- warnings.warn(f'{imp} failed to import and is ignored.',
- UserWarning)
- imported_tmp = None
- else:
- raise ImportError
- imported.append(imported_tmp)
- if single_import:
- imported = imported[0]
- return imported
-
-
-def iter_cast(inputs, dst_type, return_type=None):
- """Cast elements of an iterable object into some type.
-
- Args:
- inputs (Iterable): The input object.
- dst_type (type): Destination type.
- return_type (type, optional): If specified, the output object will be
- converted to this type, otherwise an iterator.
-
- Returns:
- iterator or specified type: The converted object.
- """
- if not isinstance(inputs, abc.Iterable):
- raise TypeError('inputs must be an iterable object')
- if not isinstance(dst_type, type):
- raise TypeError('"dst_type" must be a valid type')
-
- out_iterable = map(dst_type, inputs)
-
- if return_type is None:
- return out_iterable
- else:
- return return_type(out_iterable)
-
-
-def list_cast(inputs, dst_type):
- """Cast elements of an iterable object into a list of some type.
-
- A partial method of :func:`iter_cast`.
- """
- return iter_cast(inputs, dst_type, return_type=list)
-
-
-def tuple_cast(inputs, dst_type):
- """Cast elements of an iterable object into a tuple of some type.
-
- A partial method of :func:`iter_cast`.
- """
- return iter_cast(inputs, dst_type, return_type=tuple)
-
-
-def is_seq_of(seq, expected_type, seq_type=None):
- """Check whether it is a sequence of some type.
-
- Args:
- seq (Sequence): The sequence to be checked.
- expected_type (type): Expected type of sequence items.
- seq_type (type, optional): Expected sequence type.
-
- Returns:
- bool: Whether the sequence is valid.
- """
- if seq_type is None:
- exp_seq_type = abc.Sequence
- else:
- assert isinstance(seq_type, type)
- exp_seq_type = seq_type
- if not isinstance(seq, exp_seq_type):
- return False
- for item in seq:
- if not isinstance(item, expected_type):
- return False
- return True
-
-
-def is_list_of(seq, expected_type):
- """Check whether it is a list of some type.
-
- A partial method of :func:`is_seq_of`.
- """
- return is_seq_of(seq, expected_type, seq_type=list)
-
-
-def is_tuple_of(seq, expected_type):
- """Check whether it is a tuple of some type.
-
- A partial method of :func:`is_seq_of`.
- """
- return is_seq_of(seq, expected_type, seq_type=tuple)
-
-
-def slice_list(in_list, lens):
- """Slice a list into several sub lists by a list of given length.
-
- Args:
- in_list (list): The list to be sliced.
- lens(int or list): The expected length of each out list.
-
- Returns:
- list: A list of sliced list.
- """
- if isinstance(lens, int):
- assert len(in_list) % lens == 0
- lens = [lens] * int(len(in_list) / lens)
- if not isinstance(lens, list):
- raise TypeError('"indices" must be an integer or a list of integers')
- elif sum(lens) != len(in_list):
- raise ValueError('sum of lens and list length does not '
- f'match: {sum(lens)} != {len(in_list)}')
- out_list = []
- idx = 0
- for i in range(len(lens)):
- out_list.append(in_list[idx:idx + lens[i]])
- idx += lens[i]
- return out_list
-
-
-def concat_list(in_list):
- """Concatenate a list of list into a single list.
-
- Args:
- in_list (list): The list of list to be merged.
-
- Returns:
- list: The concatenated flat list.
- """
- return list(itertools.chain(*in_list))
-
-
-def check_prerequisites(
- prerequisites,
- checker,
- msg_tmpl='Prerequisites "{}" are required in method "{}" but not '
- 'found, please install them first.'): # yapf: disable
- """A decorator factory to check if prerequisites are satisfied.
-
- Args:
- prerequisites (str of list[str]): Prerequisites to be checked.
- checker (callable): The checker method that returns True if a
- prerequisite is meet, False otherwise.
- msg_tmpl (str): The message template with two variables.
-
- Returns:
- decorator: A specific decorator.
- """
-
- def wrap(func):
-
- @functools.wraps(func)
- def wrapped_func(*args, **kwargs):
- requirements = [prerequisites] if isinstance(
- prerequisites, str) else prerequisites
- missing = []
- for item in requirements:
- if not checker(item):
- missing.append(item)
- if missing:
- print(msg_tmpl.format(', '.join(missing), func.__name__))
- raise RuntimeError('Prerequisites not meet.')
- else:
- return func(*args, **kwargs)
-
- return wrapped_func
-
- return wrap
-
-
-def _check_py_package(package):
- try:
- import_module(package)
- except ImportError:
- return False
- else:
- return True
-
-
-def _check_executable(cmd):
- if subprocess.call(f'which {cmd}', shell=True) != 0:
- return False
- else:
- return True
-
-
-def requires_package(prerequisites):
- """A decorator to check if some python packages are installed.
-
- Example:
- >>> @requires_package('numpy')
- >>> func(arg1, args):
- >>> return numpy.zeros(1)
- array([0.])
- >>> @requires_package(['numpy', 'non_package'])
- >>> func(arg1, args):
- >>> return numpy.zeros(1)
- ImportError
- """
- return check_prerequisites(prerequisites, checker=_check_py_package)
-
-
-def requires_executable(prerequisites):
- """A decorator to check if some executable files are installed.
-
- Example:
- >>> @requires_executable('ffmpeg')
- >>> func(arg1, args):
- >>> print(1)
- 1
- """
- return check_prerequisites(prerequisites, checker=_check_executable)
-
-
-def deprecated_api_warning(name_dict, cls_name=None):
- """A decorator to check if some arguments are deprecate and try to replace
- deprecate src_arg_name to dst_arg_name.
-
- Args:
- name_dict(dict):
- key (str): Deprecate argument names.
- val (str): Expected argument names.
-
- Returns:
- func: New function.
- """
-
- def api_warning_wrapper(old_func):
-
- @functools.wraps(old_func)
- def new_func(*args, **kwargs):
- # get the arg spec of the decorated method
- args_info = getfullargspec(old_func)
- # get name of the function
- func_name = old_func.__name__
- if cls_name is not None:
- func_name = f'{cls_name}.{func_name}'
- if args:
- arg_names = args_info.args[:len(args)]
- for src_arg_name, dst_arg_name in name_dict.items():
- if src_arg_name in arg_names:
- warnings.warn(
- f'"{src_arg_name}" is deprecated in '
- f'`{func_name}`, please use "{dst_arg_name}" '
- 'instead')
- arg_names[arg_names.index(src_arg_name)] = dst_arg_name
- if kwargs:
- for src_arg_name, dst_arg_name in name_dict.items():
- if src_arg_name in kwargs:
-
- assert dst_arg_name not in kwargs, (
- f'The expected behavior is to replace '
- f'the deprecated key `{src_arg_name}` to '
- f'new key `{dst_arg_name}`, but got them '
- f'in the arguments at the same time, which '
- f'is confusing. `{src_arg_name} will be '
- f'deprecated in the future, please '
- f'use `{dst_arg_name}` instead.')
-
- warnings.warn(
- f'"{src_arg_name}" is deprecated in '
- f'`{func_name}`, please use "{dst_arg_name}" '
- 'instead')
- kwargs[dst_arg_name] = kwargs.pop(src_arg_name)
-
- # apply converted arguments to the decorated method
- output = old_func(*args, **kwargs)
- return output
-
- return new_func
-
- return api_warning_wrapper
-
-
-def is_method_overridden(method, base_class, derived_class):
- """Check if a method of base class is overridden in derived class.
-
- Args:
- method (str): the method name to check.
- base_class (type): the class of the base class.
- derived_class (type | Any): the class or instance of the derived class.
- """
- assert isinstance(base_class, type), \
- "base_class doesn't accept instance, Please pass class instead."
-
- if not isinstance(derived_class, type):
- derived_class = derived_class.__class__
-
- base_method = getattr(base_class, method)
- derived_method = getattr(derived_class, method)
- return derived_method != base_method
-
-
-def has_method(obj: object, method: str) -> bool:
- """Check whether the object has a method.
-
- Args:
- method (str): The method name to check.
- obj (object): The object to check.
-
- Returns:
- bool: True if the object has the method else False.
- """
- return hasattr(obj, method) and callable(getattr(obj, method))
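The deleted `misc.py` above is a grab bag of small type-checking and list utilities. A few quick, self-contained checks follow (a sketch; the import assumes the vendored module is on the path, and upstream `mmcv.utils` exposes the same helpers):
```python
# Sketch: exercising a few helpers from the deleted misc module.
from annotator.mmpkg.mmcv.utils.misc import concat_list, is_seq_of, slice_list

assert is_seq_of([1, 2, 3], int)            # a list is a Sequence of ints
assert not is_seq_of((1, "a"), int)         # mixed element types fail the check
assert slice_list([1, 2, 3, 4, 5, 6], [2, 4]) == [[1, 2], [3, 4, 5, 6]]
assert concat_list([[1, 2], [3], [4, 5]]) == [1, 2, 3, 4, 5]
```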
diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/oneformer/modeling/backbone/swin.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/oneformer/modeling/backbone/swin.py
deleted file mode 100644
index 2380cde59570e5d5b8fb2536d0961f8e27a07fd4..0000000000000000000000000000000000000000
--- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/oneformer/modeling/backbone/swin.py
+++ /dev/null
@@ -1,771 +0,0 @@
-# --------------------------------------------------------
-# Swin Transformer
-# Copyright (c) 2021 Microsoft
-# Licensed under The MIT License [see LICENSE for details]
-# Written by Ze Liu, Yutong Lin, Yixuan Wei
-# --------------------------------------------------------
-
-# ------------------------------------------------------------------------------
-# Reference: https://github.com/facebookresearch/Mask2Former
-# ------------------------------------------------------------------------------
-
-import numpy as np
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import torch.utils.checkpoint as checkpoint
-from timm.models.layers import DropPath, to_2tuple, trunc_normal_
-
-from annotator.oneformer.detectron2.modeling import BACKBONE_REGISTRY, Backbone, ShapeSpec
-
-
-class Mlp(nn.Module):
- """Multilayer perceptron."""
-
- def __init__(
- self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.0
- ):
- super().__init__()
- out_features = out_features or in_features
- hidden_features = hidden_features or in_features
- self.fc1 = nn.Linear(in_features, hidden_features)
- self.act = act_layer()
- self.fc2 = nn.Linear(hidden_features, out_features)
- self.drop = nn.Dropout(drop)
-
- def forward(self, x):
- x = self.fc1(x)
- x = self.act(x)
- x = self.drop(x)
- x = self.fc2(x)
- x = self.drop(x)
- return x
-
-
-def window_partition(x, window_size):
- """
- Args:
- x: (B, H, W, C)
- window_size (int): window size
- Returns:
- windows: (num_windows*B, window_size, window_size, C)
- """
- B, H, W, C = x.shape
- x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
- windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)
- return windows
-
-
-def window_reverse(windows, window_size, H, W):
- """
- Args:
- windows: (num_windows*B, window_size, window_size, C)
- window_size (int): Window size
- H (int): Height of image
- W (int): Width of image
- Returns:
- x: (B, H, W, C)
- """
- B = int(windows.shape[0] / (H * W / window_size / window_size))
- x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1)
- x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1)
- return x
-
-
-class WindowAttention(nn.Module):
- """Window based multi-head self attention (W-MSA) module with relative position bias.
- It supports both of shifted and non-shifted window.
- Args:
- dim (int): Number of input channels.
- window_size (tuple[int]): The height and width of the window.
- num_heads (int): Number of attention heads.
- qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set
- attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0
- proj_drop (float, optional): Dropout ratio of output. Default: 0.0
- """
-
- def __init__(
- self,
- dim,
- window_size,
- num_heads,
- qkv_bias=True,
- qk_scale=None,
- attn_drop=0.0,
- proj_drop=0.0,
- ):
-
- super().__init__()
- self.dim = dim
- self.window_size = window_size # Wh, Ww
- self.num_heads = num_heads
- head_dim = dim // num_heads
- self.scale = qk_scale or head_dim ** -0.5
-
- # define a parameter table of relative position bias
- self.relative_position_bias_table = nn.Parameter(
- torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads)
- ) # 2*Wh-1 * 2*Ww-1, nH
-
- # get pair-wise relative position index for each token inside the window
- coords_h = torch.arange(self.window_size[0])
- coords_w = torch.arange(self.window_size[1])
- coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww
- coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww
- relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww
- relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2
- relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0
- relative_coords[:, :, 1] += self.window_size[1] - 1
- relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1
- relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww
- self.register_buffer("relative_position_index", relative_position_index)
-
- self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
- self.attn_drop = nn.Dropout(attn_drop)
- self.proj = nn.Linear(dim, dim)
- self.proj_drop = nn.Dropout(proj_drop)
-
- trunc_normal_(self.relative_position_bias_table, std=0.02)
- self.softmax = nn.Softmax(dim=-1)
-
- def forward(self, x, mask=None):
- """Forward function.
- Args:
- x: input features with shape of (num_windows*B, N, C)
- mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None
- """
- B_, N, C = x.shape
- qkv = (
- self.qkv(x)
- .reshape(B_, N, 3, self.num_heads, C // self.num_heads)
- .permute(2, 0, 3, 1, 4)
- )
- q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple)
-
- q = q * self.scale
- attn = q @ k.transpose(-2, -1)
-
- relative_position_bias = self.relative_position_bias_table[
- self.relative_position_index.view(-1)
- ].view(
- self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1
- ) # Wh*Ww,Wh*Ww,nH
- relative_position_bias = relative_position_bias.permute(
- 2, 0, 1
- ).contiguous() # nH, Wh*Ww, Wh*Ww
- attn = attn + relative_position_bias.unsqueeze(0)
-
- if mask is not None:
- nW = mask.shape[0]
- attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0)
- attn = attn.view(-1, self.num_heads, N, N)
- attn = self.softmax(attn)
- else:
- attn = self.softmax(attn)
-
- attn = self.attn_drop(attn)
-
- x = (attn @ v).transpose(1, 2).reshape(B_, N, C)
- x = self.proj(x)
- x = self.proj_drop(x)
- return x
-
-
-class SwinTransformerBlock(nn.Module):
- """Swin Transformer Block.
- Args:
- dim (int): Number of input channels.
- num_heads (int): Number of attention heads.
- window_size (int): Window size.
- shift_size (int): Shift size for SW-MSA.
- mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
- qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
- drop (float, optional): Dropout rate. Default: 0.0
- attn_drop (float, optional): Attention dropout rate. Default: 0.0
- drop_path (float, optional): Stochastic depth rate. Default: 0.0
- act_layer (nn.Module, optional): Activation layer. Default: nn.GELU
- norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
- """
-
- def __init__(
- self,
- dim,
- num_heads,
- window_size=7,
- shift_size=0,
- mlp_ratio=4.0,
- qkv_bias=True,
- qk_scale=None,
- drop=0.0,
- attn_drop=0.0,
- drop_path=0.0,
- act_layer=nn.GELU,
- norm_layer=nn.LayerNorm,
- ):
- super().__init__()
- self.dim = dim
- self.num_heads = num_heads
- self.window_size = window_size
- self.shift_size = shift_size
- self.mlp_ratio = mlp_ratio
- assert 0 <= self.shift_size < self.window_size, "shift_size must in 0-window_size"
-
- self.norm1 = norm_layer(dim)
- self.attn = WindowAttention(
- dim,
- window_size=to_2tuple(self.window_size),
- num_heads=num_heads,
- qkv_bias=qkv_bias,
- qk_scale=qk_scale,
- attn_drop=attn_drop,
- proj_drop=drop,
- )
-
- self.drop_path = DropPath(drop_path) if drop_path > 0.0 else nn.Identity()
- self.norm2 = norm_layer(dim)
- mlp_hidden_dim = int(dim * mlp_ratio)
- self.mlp = Mlp(
- in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop
- )
-
- self.H = None
- self.W = None
-
- def forward(self, x, mask_matrix):
- """Forward function.
- Args:
- x: Input feature, tensor size (B, H*W, C).
- H, W: Spatial resolution of the input feature.
- mask_matrix: Attention mask for cyclic shift.
- """
- B, L, C = x.shape
- H, W = self.H, self.W
- assert L == H * W, "input feature has wrong size"
-
- shortcut = x
- x = self.norm1(x)
- x = x.view(B, H, W, C)
-
- # pad feature maps to multiples of window size
- pad_l = pad_t = 0
- pad_r = (self.window_size - W % self.window_size) % self.window_size
- pad_b = (self.window_size - H % self.window_size) % self.window_size
- x = F.pad(x, (0, 0, pad_l, pad_r, pad_t, pad_b))
- _, Hp, Wp, _ = x.shape
-
- # cyclic shift
- if self.shift_size > 0:
- shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2))
- attn_mask = mask_matrix
- else:
- shifted_x = x
- attn_mask = None
-
- # partition windows
- x_windows = window_partition(
- shifted_x, self.window_size
- ) # nW*B, window_size, window_size, C
- x_windows = x_windows.view(
- -1, self.window_size * self.window_size, C
- ) # nW*B, window_size*window_size, C
-
- # W-MSA/SW-MSA
- attn_windows = self.attn(x_windows, mask=attn_mask) # nW*B, window_size*window_size, C
-
- # merge windows
- attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C)
- shifted_x = window_reverse(attn_windows, self.window_size, Hp, Wp) # B H' W' C
-
- # reverse cyclic shift
- if self.shift_size > 0:
- x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2))
- else:
- x = shifted_x
-
- if pad_r > 0 or pad_b > 0:
- x = x[:, :H, :W, :].contiguous()
-
- x = x.view(B, H * W, C)
-
- # FFN
- x = shortcut + self.drop_path(x)
- x = x + self.drop_path(self.mlp(self.norm2(x)))
-
- return x
-
-
-class PatchMerging(nn.Module):
- """Patch Merging Layer
- Args:
- dim (int): Number of input channels.
- norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
- """
-
- def __init__(self, dim, norm_layer=nn.LayerNorm):
- super().__init__()
- self.dim = dim
- self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False)
- self.norm = norm_layer(4 * dim)
-
- def forward(self, x, H, W):
- """Forward function.
- Args:
- x: Input feature, tensor size (B, H*W, C).
- H, W: Spatial resolution of the input feature.
- """
- B, L, C = x.shape
- assert L == H * W, "input feature has wrong size"
-
- x = x.view(B, H, W, C)
-
- # padding
- pad_input = (H % 2 == 1) or (W % 2 == 1)
- if pad_input:
- x = F.pad(x, (0, 0, 0, W % 2, 0, H % 2))
-
- x0 = x[:, 0::2, 0::2, :] # B H/2 W/2 C
- x1 = x[:, 1::2, 0::2, :] # B H/2 W/2 C
- x2 = x[:, 0::2, 1::2, :] # B H/2 W/2 C
- x3 = x[:, 1::2, 1::2, :] # B H/2 W/2 C
- x = torch.cat([x0, x1, x2, x3], -1) # B H/2 W/2 4*C
- x = x.view(B, -1, 4 * C) # B H/2*W/2 4*C
-
- x = self.norm(x)
- x = self.reduction(x)
-
- return x
-
-
-class BasicLayer(nn.Module):
- """A basic Swin Transformer layer for one stage.
- Args:
- dim (int): Number of feature channels
- depth (int): Depths of this stage.
- num_heads (int): Number of attention head.
- window_size (int): Local window size. Default: 7.
- mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4.
- qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
- drop (float, optional): Dropout rate. Default: 0.0
- attn_drop (float, optional): Attention dropout rate. Default: 0.0
- drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0
- norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
- downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None
- use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False.
- """
-
- def __init__(
- self,
- dim,
- depth,
- num_heads,
- window_size=7,
- mlp_ratio=4.0,
- qkv_bias=True,
- qk_scale=None,
- drop=0.0,
- attn_drop=0.0,
- drop_path=0.0,
- norm_layer=nn.LayerNorm,
- downsample=None,
- use_checkpoint=False,
- ):
- super().__init__()
- self.window_size = window_size
- self.shift_size = window_size // 2
- self.depth = depth
- self.use_checkpoint = use_checkpoint
-
- # build blocks
- self.blocks = nn.ModuleList(
- [
- SwinTransformerBlock(
- dim=dim,
- num_heads=num_heads,
- window_size=window_size,
- shift_size=0 if (i % 2 == 0) else window_size // 2,
- mlp_ratio=mlp_ratio,
- qkv_bias=qkv_bias,
- qk_scale=qk_scale,
- drop=drop,
- attn_drop=attn_drop,
- drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path,
- norm_layer=norm_layer,
- )
- for i in range(depth)
- ]
- )
-
- # patch merging layer
- if downsample is not None:
- self.downsample = downsample(dim=dim, norm_layer=norm_layer)
- else:
- self.downsample = None
-
- def forward(self, x, H, W):
- """Forward function.
- Args:
- x: Input feature, tensor size (B, H*W, C).
- H, W: Spatial resolution of the input feature.
- """
-
- # calculate attention mask for SW-MSA
- Hp = int(np.ceil(H / self.window_size)) * self.window_size
- Wp = int(np.ceil(W / self.window_size)) * self.window_size
- img_mask = torch.zeros((1, Hp, Wp, 1), device=x.device) # 1 Hp Wp 1
- h_slices = (
- slice(0, -self.window_size),
- slice(-self.window_size, -self.shift_size),
- slice(-self.shift_size, None),
- )
- w_slices = (
- slice(0, -self.window_size),
- slice(-self.window_size, -self.shift_size),
- slice(-self.shift_size, None),
- )
- cnt = 0
- for h in h_slices:
- for w in w_slices:
- img_mask[:, h, w, :] = cnt
- cnt += 1
-
- mask_windows = window_partition(
- img_mask, self.window_size
- ) # nW, window_size, window_size, 1
- mask_windows = mask_windows.view(-1, self.window_size * self.window_size)
- attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
- attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(
- attn_mask == 0, float(0.0)
- )
-
- for blk in self.blocks:
- blk.H, blk.W = H, W
- if self.use_checkpoint:
- x = checkpoint.checkpoint(blk, x, attn_mask)
- else:
- x = blk(x, attn_mask)
- if self.downsample is not None:
- x_down = self.downsample(x, H, W)
- Wh, Ww = (H + 1) // 2, (W + 1) // 2
- return x, H, W, x_down, Wh, Ww
- else:
- return x, H, W, x, H, W
-
-
-class PatchEmbed(nn.Module):
- """Image to Patch Embedding
- Args:
- patch_size (int): Patch token size. Default: 4.
- in_chans (int): Number of input image channels. Default: 3.
- embed_dim (int): Number of linear projection output channels. Default: 96.
- norm_layer (nn.Module, optional): Normalization layer. Default: None
- """
-
- def __init__(self, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None):
- super().__init__()
- patch_size = to_2tuple(patch_size)
- self.patch_size = patch_size
-
- self.in_chans = in_chans
- self.embed_dim = embed_dim
-
- self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)
- if norm_layer is not None:
- self.norm = norm_layer(embed_dim)
- else:
- self.norm = None
-
- def forward(self, x):
- """Forward function."""
- # padding
- _, _, H, W = x.size()
- if W % self.patch_size[1] != 0:
- x = F.pad(x, (0, self.patch_size[1] - W % self.patch_size[1]))
- if H % self.patch_size[0] != 0:
- x = F.pad(x, (0, 0, 0, self.patch_size[0] - H % self.patch_size[0]))
-
- x = self.proj(x) # B C Wh Ww
- if self.norm is not None:
- Wh, Ww = x.size(2), x.size(3)
- x = x.flatten(2).transpose(1, 2)
- x = self.norm(x)
- x = x.transpose(1, 2).view(-1, self.embed_dim, Wh, Ww)
-
- return x
-
-
-class SwinTransformer(nn.Module):
- """Swin Transformer backbone.
- A PyTorch impl of : `Swin Transformer: Hierarchical Vision Transformer using Shifted Windows` -
- https://arxiv.org/pdf/2103.14030
- Args:
- pretrain_img_size (int): Input image size for training the pretrained model,
- used in absolute postion embedding. Default 224.
- patch_size (int | tuple(int)): Patch size. Default: 4.
- in_chans (int): Number of input image channels. Default: 3.
- embed_dim (int): Number of linear projection output channels. Default: 96.
- depths (tuple[int]): Depths of each Swin Transformer stage.
- num_heads (tuple[int]): Number of attention head of each stage.
- window_size (int): Window size. Default: 7.
- mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4.
- qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float): Override default qk scale of head_dim ** -0.5 if set.
- drop_rate (float): Dropout rate.
- attn_drop_rate (float): Attention dropout rate. Default: 0.
- drop_path_rate (float): Stochastic depth rate. Default: 0.2.
- norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm.
- ape (bool): If True, add absolute position embedding to the patch embedding. Default: False.
- patch_norm (bool): If True, add normalization after patch embedding. Default: True.
- out_indices (Sequence[int]): Output from which stages.
- frozen_stages (int): Stages to be frozen (stop grad and set eval mode).
- -1 means not freezing any parameters.
- use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False.
- """
-
- def __init__(
- self,
- pretrain_img_size=224,
- patch_size=4,
- in_chans=3,
- embed_dim=96,
- depths=[2, 2, 6, 2],
- num_heads=[3, 6, 12, 24],
- window_size=7,
- mlp_ratio=4.0,
- qkv_bias=True,
- qk_scale=None,
- drop_rate=0.0,
- attn_drop_rate=0.0,
- drop_path_rate=0.2,
- norm_layer=nn.LayerNorm,
- ape=False,
- patch_norm=True,
- out_indices=(0, 1, 2, 3),
- frozen_stages=-1,
- use_checkpoint=False,
- ):
- super().__init__()
-
- self.pretrain_img_size = pretrain_img_size
- self.num_layers = len(depths)
- self.embed_dim = embed_dim
- self.ape = ape
- self.patch_norm = patch_norm
- self.out_indices = out_indices
- self.frozen_stages = frozen_stages
-
- # split image into non-overlapping patches
- self.patch_embed = PatchEmbed(
- patch_size=patch_size,
- in_chans=in_chans,
- embed_dim=embed_dim,
- norm_layer=norm_layer if self.patch_norm else None,
- )
-
- # absolute position embedding
- if self.ape:
- pretrain_img_size = to_2tuple(pretrain_img_size)
- patch_size = to_2tuple(patch_size)
- patches_resolution = [
- pretrain_img_size[0] // patch_size[0],
- pretrain_img_size[1] // patch_size[1],
- ]
-
- self.absolute_pos_embed = nn.Parameter(
- torch.zeros(1, embed_dim, patches_resolution[0], patches_resolution[1])
- )
- trunc_normal_(self.absolute_pos_embed, std=0.02)
-
- self.pos_drop = nn.Dropout(p=drop_rate)
-
- # stochastic depth
- dpr = [
- x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))
- ] # stochastic depth decay rule
-
- # build layers
- self.layers = nn.ModuleList()
- for i_layer in range(self.num_layers):
- layer = BasicLayer(
- dim=int(embed_dim * 2 ** i_layer),
- depth=depths[i_layer],
- num_heads=num_heads[i_layer],
- window_size=window_size,
- mlp_ratio=mlp_ratio,
- qkv_bias=qkv_bias,
- qk_scale=qk_scale,
- drop=drop_rate,
- attn_drop=attn_drop_rate,
- drop_path=dpr[sum(depths[:i_layer]) : sum(depths[: i_layer + 1])],
- norm_layer=norm_layer,
- downsample=PatchMerging if (i_layer < self.num_layers - 1) else None,
- use_checkpoint=use_checkpoint,
- )
- self.layers.append(layer)
-
- num_features = [int(embed_dim * 2 ** i) for i in range(self.num_layers)]
- self.num_features = num_features
-
- # add a norm layer for each output
- for i_layer in out_indices:
- layer = norm_layer(num_features[i_layer])
- layer_name = f"norm{i_layer}"
- self.add_module(layer_name, layer)
-
- self._freeze_stages()
-
- def _freeze_stages(self):
- if self.frozen_stages >= 0:
- self.patch_embed.eval()
- for param in self.patch_embed.parameters():
- param.requires_grad = False
-
- if self.frozen_stages >= 1 and self.ape:
- self.absolute_pos_embed.requires_grad = False
-
- if self.frozen_stages >= 2:
- self.pos_drop.eval()
- for i in range(0, self.frozen_stages - 1):
- m = self.layers[i]
- m.eval()
- for param in m.parameters():
- param.requires_grad = False
-
- def init_weights(self, pretrained=None):
- """Initialize the weights in backbone.
- Args:
- pretrained (str, optional): Path to pre-trained weights.
- Defaults to None.
- """
-
- def _init_weights(m):
- if isinstance(m, nn.Linear):
- trunc_normal_(m.weight, std=0.02)
- if isinstance(m, nn.Linear) and m.bias is not None:
- nn.init.constant_(m.bias, 0)
- elif isinstance(m, nn.LayerNorm):
- nn.init.constant_(m.bias, 0)
- nn.init.constant_(m.weight, 1.0)
-
- def forward(self, x):
- """Forward function."""
- x = self.patch_embed(x)
-
- Wh, Ww = x.size(2), x.size(3)
- if self.ape:
- # interpolate the position embedding to the corresponding size
- absolute_pos_embed = F.interpolate(
- self.absolute_pos_embed, size=(Wh, Ww), mode="bicubic"
- )
- x = (x + absolute_pos_embed).flatten(2).transpose(1, 2) # B Wh*Ww C
- else:
- x = x.flatten(2).transpose(1, 2)
- x = self.pos_drop(x)
-
- outs = {}
- for i in range(self.num_layers):
- layer = self.layers[i]
- x_out, H, W, x, Wh, Ww = layer(x, Wh, Ww)
-
- if i in self.out_indices:
- norm_layer = getattr(self, f"norm{i}")
- x_out = norm_layer(x_out)
-
- out = x_out.view(-1, H, W, self.num_features[i]).permute(0, 3, 1, 2).contiguous()
- outs["res{}".format(i + 2)] = out
-
- return outs
-
- def train(self, mode=True):
- """Convert the model into training mode while keep layers freezed."""
- super(SwinTransformer, self).train(mode)
- self._freeze_stages()
-
-
-@BACKBONE_REGISTRY.register()
-class D2SwinTransformer(SwinTransformer, Backbone):
- def __init__(self, cfg, input_shape):
-
- pretrain_img_size = cfg.MODEL.SWIN.PRETRAIN_IMG_SIZE
- patch_size = cfg.MODEL.SWIN.PATCH_SIZE
- in_chans = 3
- embed_dim = cfg.MODEL.SWIN.EMBED_DIM
- depths = cfg.MODEL.SWIN.DEPTHS
- num_heads = cfg.MODEL.SWIN.NUM_HEADS
- window_size = cfg.MODEL.SWIN.WINDOW_SIZE
- mlp_ratio = cfg.MODEL.SWIN.MLP_RATIO
- qkv_bias = cfg.MODEL.SWIN.QKV_BIAS
- qk_scale = cfg.MODEL.SWIN.QK_SCALE
- drop_rate = cfg.MODEL.SWIN.DROP_RATE
- attn_drop_rate = cfg.MODEL.SWIN.ATTN_DROP_RATE
- drop_path_rate = cfg.MODEL.SWIN.DROP_PATH_RATE
- norm_layer = nn.LayerNorm
- ape = cfg.MODEL.SWIN.APE
- patch_norm = cfg.MODEL.SWIN.PATCH_NORM
- use_checkpoint = cfg.MODEL.SWIN.USE_CHECKPOINT
-
- super().__init__(
- pretrain_img_size,
- patch_size,
- in_chans,
- embed_dim,
- depths,
- num_heads,
- window_size,
- mlp_ratio,
- qkv_bias,
- qk_scale,
- drop_rate,
- attn_drop_rate,
- drop_path_rate,
- norm_layer,
- ape,
- patch_norm,
- use_checkpoint=use_checkpoint,
- )
-
- self._out_features = cfg.MODEL.SWIN.OUT_FEATURES
-
- self._out_feature_strides = {
- "res2": 4,
- "res3": 8,
- "res4": 16,
- "res5": 32,
- }
- self._out_feature_channels = {
- "res2": self.num_features[0],
- "res3": self.num_features[1],
- "res4": self.num_features[2],
- "res5": self.num_features[3],
- }
-
- def forward(self, x):
- """
- Args:
- x: Tensor of shape (N,C,H,W). H, W must be a multiple of ``self.size_divisibility``.
- Returns:
- dict[str->Tensor]: names and the corresponding features
- """
- assert (
- x.dim() == 4
- ), f"SwinTransformer takes an input of shape (N, C, H, W). Got {x.shape} instead!"
- outputs = {}
- y = super().forward(x)
- for k in y.keys():
- if k in self._out_features:
- outputs[k] = y[k]
- return outputs
-
- def output_shape(self):
- return {
- name: ShapeSpec(
- channels=self._out_feature_channels[name], stride=self._out_feature_strides[name]
- )
- for name in self._out_features
- }
-
- @property
- def size_divisibility(self):
- return 32
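As a reference point for the deleted backbone above, here is a minimal sketch of instantiating the plain `SwinTransformer` (Swin-T defaults) and inspecting its multi-scale outputs. It is not part of the original file and assumes the vendored `annotator.oneformer` packages, including the bundled detectron2 and timm layers, import cleanly.
```python
# Sketch: random-weight Swin-T backbone producing res2..res5 feature maps.
import torch
from annotator.oneformer.oneformer.modeling.backbone.swin import SwinTransformer

backbone = SwinTransformer(embed_dim=96, depths=[2, 2, 6, 2], num_heads=[3, 6, 12, 24])
backbone.eval()
with torch.no_grad():
    feats = backbone(torch.randn(1, 3, 224, 224))
for name, feat in feats.items():
    print(name, tuple(feat.shape))  # strides 4, 8, 16, 32 -> (56, 56) ... (7, 7)
```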
diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/models/decode_heads/ema_head.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/models/decode_heads/ema_head.py
deleted file mode 100644
index 12267cb40569d2b5a4a2955a6dc2671377ff5e0a..0000000000000000000000000000000000000000
--- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/models/decode_heads/ema_head.py
+++ /dev/null
@@ -1,168 +0,0 @@
-import math
-
-import torch
-import torch.distributed as dist
-import torch.nn as nn
-import torch.nn.functional as F
-from annotator.uniformer.mmcv.cnn import ConvModule
-
-from ..builder import HEADS
-from .decode_head import BaseDecodeHead
-
-
-def reduce_mean(tensor):
- """Reduce mean when distributed training."""
- if not (dist.is_available() and dist.is_initialized()):
- return tensor
- tensor = tensor.clone()
- dist.all_reduce(tensor.div_(dist.get_world_size()), op=dist.ReduceOp.SUM)
- return tensor
-
-
-class EMAModule(nn.Module):
- """Expectation Maximization Attention Module used in EMANet.
-
- Args:
- channels (int): Channels of the whole module.
- num_bases (int): Number of bases.
- num_stages (int): Number of the EM iterations.
- """
-
- def __init__(self, channels, num_bases, num_stages, momentum):
- super(EMAModule, self).__init__()
- assert num_stages >= 1, 'num_stages must be at least 1!'
- self.num_bases = num_bases
- self.num_stages = num_stages
- self.momentum = momentum
-
- bases = torch.zeros(1, channels, self.num_bases)
- bases.normal_(0, math.sqrt(2. / self.num_bases))
- # [1, channels, num_bases]
- bases = F.normalize(bases, dim=1, p=2)
- self.register_buffer('bases', bases)
-
- def forward(self, feats):
- """Forward function."""
- batch_size, channels, height, width = feats.size()
- # [batch_size, channels, height*width]
- feats = feats.view(batch_size, channels, height * width)
- # [batch_size, channels, num_bases]
- bases = self.bases.repeat(batch_size, 1, 1)
-
- with torch.no_grad():
- for i in range(self.num_stages):
- # [batch_size, height*width, num_bases]
- attention = torch.einsum('bcn,bck->bnk', feats, bases)
- attention = F.softmax(attention, dim=2)
- # l1 norm
- attention_normed = F.normalize(attention, dim=1, p=1)
- # [batch_size, channels, num_bases]
- bases = torch.einsum('bcn,bnk->bck', feats, attention_normed)
- # l2 norm
- bases = F.normalize(bases, dim=1, p=2)
-
- feats_recon = torch.einsum('bck,bnk->bcn', bases, attention)
- feats_recon = feats_recon.view(batch_size, channels, height, width)
-
- if self.training:
- bases = bases.mean(dim=0, keepdim=True)
- bases = reduce_mean(bases)
- # l2 norm
- bases = F.normalize(bases, dim=1, p=2)
- self.bases = (1 -
- self.momentum) * self.bases + self.momentum * bases
-
- return feats_recon
-
-
-@HEADS.register_module()
-class EMAHead(BaseDecodeHead):
- """Expectation Maximization Attention Networks for Semantic Segmentation.
-
- This head is the implementation of `EMANet
- `_.
-
- Args:
- ema_channels (int): EMA module channels
- num_bases (int): Number of bases.
- num_stages (int): Number of the EM iterations.
- concat_input (bool): Whether concat the input and output of convs
- before classification layer. Default: True
- momentum (float): Momentum to update the base. Default: 0.1.
- """
-
- def __init__(self,
- ema_channels,
- num_bases,
- num_stages,
- concat_input=True,
- momentum=0.1,
- **kwargs):
- super(EMAHead, self).__init__(**kwargs)
- self.ema_channels = ema_channels
- self.num_bases = num_bases
- self.num_stages = num_stages
- self.concat_input = concat_input
- self.momentum = momentum
- self.ema_module = EMAModule(self.ema_channels, self.num_bases,
- self.num_stages, self.momentum)
-
- self.ema_in_conv = ConvModule(
- self.in_channels,
- self.ema_channels,
- 3,
- padding=1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg)
- # project (0, inf) -> (-inf, inf)
- self.ema_mid_conv = ConvModule(
- self.ema_channels,
- self.ema_channels,
- 1,
- conv_cfg=self.conv_cfg,
- norm_cfg=None,
- act_cfg=None)
- for param in self.ema_mid_conv.parameters():
- param.requires_grad = False
-
- self.ema_out_conv = ConvModule(
- self.ema_channels,
- self.ema_channels,
- 1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=None)
- self.bottleneck = ConvModule(
- self.ema_channels,
- self.channels,
- 3,
- padding=1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg)
- if self.concat_input:
- self.conv_cat = ConvModule(
- self.in_channels + self.channels,
- self.channels,
- kernel_size=3,
- padding=1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg)
-
- def forward(self, inputs):
- """Forward function."""
- x = self._transform_inputs(inputs)
- feats = self.ema_in_conv(x)
- identity = feats
- feats = self.ema_mid_conv(feats)
- recon = self.ema_module(feats)
- recon = F.relu(recon, inplace=True)
- recon = self.ema_out_conv(recon)
- output = F.relu(identity + recon, inplace=True)
- output = self.bottleneck(output)
- if self.concat_input:
- output = self.conv_cat(torch.cat([x, output], dim=1))
- output = self.cls_seg(output)
- return output
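For orientation, a small sketch of the `EMAModule` defined in the deleted head above, run on a dummy feature map. It is not part of the original file; it assumes the vendored `annotator.uniformer` packages import cleanly, and the channel and base counts are illustrative.
```python
# Sketch: one forward pass through the EM attention module on random features.
import torch
from annotator.uniformer.mmseg.models.decode_heads.ema_head import EMAModule

ema = EMAModule(channels=64, num_bases=16, num_stages=3, momentum=0.1)
ema.eval()  # skip the distributed moving-average base update used during training
with torch.no_grad():
    recon = ema(torch.randn(2, 64, 32, 32))
print(recon.shape)  # torch.Size([2, 64, 32, 32]) -- reconstruction matches the input shape
```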
diff --git a/spaces/dakaiye/dky_xuexi/docs/waifu_plugin/live2d.js b/spaces/dakaiye/dky_xuexi/docs/waifu_plugin/live2d.js
deleted file mode 100644
index 2cf559be672c438dfbd35db61eea12465ed0dffb..0000000000000000000000000000000000000000
--- a/spaces/dakaiye/dky_xuexi/docs/waifu_plugin/live2d.js
+++ /dev/null
@@ -1,4238 +0,0 @@
-!
-function(t) {
- function i(r) {
- if (e[r]) return e[r].exports;
- var o = e[r] = {
- i: r,
- l: !1,
- exports: {}
- };
- return t[r].call(o.exports, o, o.exports, i), o.l = !0, o.exports
- }
- var e = {};
- i.m = t, i.c = e, i.d = function(t, e, r) {
- i.o(t, e) || Object.defineProperty(t, e, {
- configurable: !1,
- enumerable: !0,
- get: r
- })
- }, i.n = function(t) {
- var e = t && t.__esModule ?
- function() {
- return t.
- default
- } : function() {
- return t
- };
- return i.d(e, "a", e), e
- }, i.o = function(t, i) {
- return Object.prototype.hasOwnProperty.call(t, i)
- }, i.p = "", i(i.s = 4)
-}([function(t, i, e) {
- "use strict";
-
- function r() {
- this.live2DModel = null, this.modelMatrix = null, this.eyeBlink = null, this.physics = null, this.pose = null, this.debugMode = !1, this.initialized = !1, this.updating = !1, this.alpha = 1, this.accAlpha = 0, this.lipSync = !1, this.lipSyncValue = 0, this.accelX = 0, this.accelY = 0, this.accelZ = 0, this.dragX = 0, this.dragY = 0, this.startTimeMSec = null, this.mainMotionManager = new h, this.expressionManager = new h, this.motions = {}, this.expressions = {}, this.isTexLoaded = !1
- }
- function o() {
- AMotion.prototype.constructor.call(this), this.paramList = new Array
- }
- function n() {
- this.id = "", this.type = -1, this.value = null
- }
- function s() {
- this.nextBlinkTime = null, this.stateStartTime = null, this.blinkIntervalMsec = null, this.eyeState = g.STATE_FIRST, this.blinkIntervalMsec = 4e3, this.closingMotionMsec = 100, this.closedMotionMsec = 50, this.openingMotionMsec = 150, this.closeIfZero = !0, this.eyeID_L = "PARAM_EYE_L_OPEN", this.eyeID_R = "PARAM_EYE_R_OPEN"
- }
- function _() {
- this.tr = new Float32Array(16), this.identity()
- }
- function a(t, i) {
- _.prototype.constructor.call(this), this.width = t, this.height = i
- }
- function h() {
- MotionQueueManager.prototype.constructor.call(this), this.currentPriority = null, this.reservePriority = null, this.super = MotionQueueManager.prototype
- }
- function l() {
- this.physicsList = new Array, this.startTimeMSec = UtSystem.getUserTimeMSec()
- }
- function $() {
- this.lastTime = 0, this.lastModel = null, this.partsGroups = new Array
- }
- function u(t) {
- this.paramIndex = -1, this.partsIndex = -1, this.link = null, this.id = t
- }
- function p() {
- this.EPSILON = .01, this.faceTargetX = 0, this.faceTargetY = 0, this.faceX = 0, this.faceY = 0, this.faceVX = 0, this.faceVY = 0, this.lastTimeSec = 0
- }
- function f() {
- _.prototype.constructor.call(this), this.screenLeft = null, this.screenRight = null, this.screenTop = null, this.screenBottom = null, this.maxLeft = null, this.maxRight = null, this.maxTop = null, this.maxBottom = null, this.max = Number.MAX_VALUE, this.min = 0
- }
- function c() {}
- var d = 0;
- r.prototype.getModelMatrix = function() {
- return this.modelMatrix
- }, r.prototype.setAlpha = function(t) {
- t > .999 && (t = 1), t < .001 && (t = 0), this.alpha = t
- }, r.prototype.getAlpha = function() {
- return this.alpha
- }, r.prototype.isInitialized = function() {
- return this.initialized
- }, r.prototype.setInitialized = function(t) {
- this.initialized = t
- }, r.prototype.isUpdating = function() {
- return this.updating
- }, r.prototype.setUpdating = function(t) {
- this.updating = t
- }, r.prototype.getLive2DModel = function() {
- return this.live2DModel
- }, r.prototype.setLipSync = function(t) {
- this.lipSync = t
- }, r.prototype.setLipSyncValue = function(t) {
- this.lipSyncValue = t
- }, r.prototype.setAccel = function(t, i, e) {
- this.accelX = t, this.accelY = i, this.accelZ = e
- }, r.prototype.setDrag = function(t, i) {
- this.dragX = t, this.dragY = i
- }, r.prototype.getMainMotionManager = function() {
- return this.mainMotionManager
- }, r.prototype.getExpressionManager = function() {
- return this.expressionManager
- }, r.prototype.loadModelData = function(t, i) {
- var e = c.getPlatformManager();
- this.debugMode && e.log("Load model : " + t);
- var r = this;
- e.loadLive2DModel(t, function(t) {
- if (r.live2DModel = t, r.live2DModel.saveParam(), 0 != Live2D.getError()) return void console.error("Error : Failed to loadModelData().");
- r.modelMatrix = new a(r.live2DModel.getCanvasWidth(), r.live2DModel.getCanvasHeight()), r.modelMatrix.setWidth(2), r.modelMatrix.setCenterPosition(0, 0), i(r.live2DModel)
- })
- }, r.prototype.loadTexture = function(t, i, e) {
- d++;
- var r = c.getPlatformManager();
- this.debugMode && r.log("Load Texture : " + i);
- var o = this;
- r.loadTexture(this.live2DModel, t, i, function() {
- d--, 0 == d && (o.isTexLoaded = !0), "function" == typeof e && e()
- })
- }, r.prototype.loadMotion = function(t, i, e) {
- var r = c.getPlatformManager();
- this.debugMode && r.log("Load Motion : " + i);
- var o = null,
- n = this;
- r.loadBytes(i, function(i) {
- o = Live2DMotion.loadMotion(i), null != t && (n.motions[t] = o), e(o)
- })
- }, r.prototype.loadExpression = function(t, i, e) {
- var r = c.getPlatformManager();
- this.debugMode && r.log("Load Expression : " + i);
- var n = this;
- r.loadBytes(i, function(i) {
- null != t && (n.expressions[t] = o.loadJson(i)), "function" == typeof e && e()
- })
- }, r.prototype.loadPose = function(t, i) {
- var e = c.getPlatformManager();
- this.debugMode && e.log("Load Pose : " + t);
- var r = this;
- try {
- e.loadBytes(t, function(t) {
- r.pose = $.load(t), "function" == typeof i && i()
- })
- } catch (t) {
- console.warn(t)
- }
- }, r.prototype.loadPhysics = function(t) {
- var i = c.getPlatformManager();
- this.debugMode && i.log("Load Physics : " + t);
- var e = this;
- try {
- i.loadBytes(t, function(t) {
- e.physics = l.load(t)
- })
- } catch (t) {
- console.warn(t)
- }
- }, r.prototype.hitTestSimple = function(t, i, e) {
- if (null === this.live2DModel) return !1;
- var r = this.live2DModel.getDrawDataIndex(t);
- if (r < 0) return !1;
- for (var o = this.live2DModel.getTransformedPoints(r), n = this.live2DModel.getCanvasWidth(), s = 0, _ = this.live2DModel.getCanvasHeight(), a = 0, h = 0; h < o.length; h += 2) {
- var l = o[h],
- $ = o[h + 1];
- l < n && (n = l), l > s && (s = l), $ < _ && (_ = $), $ > a && (a = $)
- }
- var u = this.modelMatrix.invertTransformX(i),
- p = this.modelMatrix.invertTransformY(e);
- return n <= u && u <= s && _ <= p && p <= a
- }, r.prototype.hitTestSimpleCustom = function(t, i, e, r) {
- return null !== this.live2DModel && (e >= t[0] && e <= i[0] && r <= t[1] && r >= i[1])
- }, o.prototype = new AMotion, o.EXPRESSION_DEFAULT = "DEFAULT", o.TYPE_SET = 0, o.TYPE_ADD = 1, o.TYPE_MULT = 2, o.loadJson = function(t) {
- var i = new o,
- e = c.getPlatformManager(),
- r = e.jsonParseFromBytes(t);
- if (i.setFadeIn(parseInt(r.fade_in) > 0 ? parseInt(r.fade_in) : 1e3), i.setFadeOut(parseInt(r.fade_out) > 0 ? parseInt(r.fade_out) : 1e3), null == r.params) return i;
- var s = r.params,
- _ = s.length;
- i.paramList = [];
- for (var a = 0; a < _; a++) {
- var h = s[a],
- l = h.id.toString(),
- $ = parseFloat(h.val),
- u = o.TYPE_ADD,
- p = null != h.calc ? h.calc.toString() : "add";
- if ((u = "add" === p ? o.TYPE_ADD : "mult" === p ? o.TYPE_MULT : "set" === p ? o.TYPE_SET : o.TYPE_ADD) == o.TYPE_ADD) {
- var f = null == h.def ? 0 : parseFloat(h.def);
- $ -= f
- } else if (u == o.TYPE_MULT) {
- var f = null == h.def ? 1 : parseFloat(h.def);
- 0 == f && (f = 1), $ /= f
- }
- var d = new n;
- d.id = l, d.type = u, d.value = $, i.paramList.push(d)
- }
- return i
- }, o.prototype.updateParamExe = function(t, i, e, r) {
- for (var n = this.paramList.length - 1; n >= 0; --n) {
- var s = this.paramList[n];
- s.type == o.TYPE_ADD ? t.addToParamFloat(s.id, s.value, e) : s.type == o.TYPE_MULT ? t.multParamFloat(s.id, s.value, e) : s.type == o.TYPE_SET && t.setParamFloat(s.id, s.value, e)
- }
- }, s.prototype.calcNextBlink = function() {
- return UtSystem.getUserTimeMSec() + Math.random() * (2 * this.blinkIntervalMsec - 1)
- }, s.prototype.setInterval = function(t) {
- this.blinkIntervalMsec = t
- }, s.prototype.setEyeMotion = function(t, i, e) {
- this.closingMotionMsec = t, this.closedMotionMsec = i, this.openingMotionMsec = e
- }, s.prototype.updateParam = function(t) {
- var i, e = UtSystem.getUserTimeMSec(),
- r = 0;
- switch (this.eyeState) {
- case g.STATE_CLOSING:
- r = (e - this.stateStartTime) / this.closingMotionMsec, r >= 1 && (r = 1, this.eyeState = g.STATE_CLOSED, this.stateStartTime = e), i = 1 - r;
- break;
- case g.STATE_CLOSED:
- r = (e - this.stateStartTime) / this.closedMotionMsec, r >= 1 && (this.eyeState = g.STATE_OPENING, this.stateStartTime = e), i = 0;
- break;
- case g.STATE_OPENING:
- r = (e - this.stateStartTime) / this.openingMotionMsec, r >= 1 && (r = 1, this.eyeState = g.STATE_INTERVAL, this.nextBlinkTime = this.calcNextBlink()), i = r;
- break;
- case g.STATE_INTERVAL:
- this.nextBlinkTime < e && (this.eyeState = g.STATE_CLOSING, this.stateStartTime = e), i = 1;
- break;
- case g.STATE_FIRST:
- default:
- this.eyeState = g.STATE_INTERVAL, this.nextBlinkTime = this.calcNextBlink(), i = 1
- }
- this.closeIfZero || (i = -i), t.setParamFloat(this.eyeID_L, i), t.setParamFloat(this.eyeID_R, i)
- };
- var g = function() {};
- g.STATE_FIRST = "STATE_FIRST", g.STATE_INTERVAL = "STATE_INTERVAL", g.STATE_CLOSING = "STATE_CLOSING", g.STATE_CLOSED = "STATE_CLOSED", g.STATE_OPENING = "STATE_OPENING", _.mul = function(t, i, e) {
- var r, o, n, s = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0];
- for (r = 0; r < 4; r++) for (o = 0; o < 4; o++) for (n = 0; n < 4; n++) s[r + 4 * o] += t[r + 4 * n] * i[n + 4 * o];
- for (r = 0; r < 16; r++) e[r] = s[r]
- }, _.prototype.identity = function() {
- for (var t = 0; t < 16; t++) this.tr[t] = t % 5 == 0 ? 1 : 0
- }, _.prototype.getArray = function() {
- return this.tr
- }, _.prototype.getCopyMatrix = function() {
- return new Float32Array(this.tr)
- }, _.prototype.setMatrix = function(t) {
- if (null != this.tr && this.tr.length == this.tr.length) for (var i = 0; i < 16; i++) this.tr[i] = t[i]
- }, _.prototype.getScaleX = function() {
- return this.tr[0]
- }, _.prototype.getScaleY = function() {
- return this.tr[5]
- }, _.prototype.transformX = function(t) {
- return this.tr[0] * t + this.tr[12]
- }, _.prototype.transformY = function(t) {
- return this.tr[5] * t + this.tr[13]
- }, _.prototype.invertTransformX = function(t) {
- return (t - this.tr[12]) / this.tr[0]
- }, _.prototype.invertTransformY = function(t) {
- return (t - this.tr[13]) / this.tr[5]
- }, _.prototype.multTranslate = function(t, i) {
- var e = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, t, i, 0, 1];
- _.mul(e, this.tr, this.tr)
- }, _.prototype.translate = function(t, i) {
- this.tr[12] = t, this.tr[13] = i
- }, _.prototype.translateX = function(t) {
- this.tr[12] = t
- }, _.prototype.translateY = function(t) {
- this.tr[13] = t
- }, _.prototype.multScale = function(t, i) {
- var e = [t, 0, 0, 0, 0, i, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1];
- _.mul(e, this.tr, this.tr)
- }, _.prototype.scale = function(t, i) {
- this.tr[0] = t, this.tr[5] = i
- }, a.prototype = new _, a.prototype.setPosition = function(t, i) {
- this.translate(t, i)
- }, a.prototype.setCenterPosition = function(t, i) {
- var e = this.width * this.getScaleX(),
- r = this.height * this.getScaleY();
- this.translate(t - e / 2, i - r / 2)
- }, a.prototype.top = function(t) {
- this.setY(t)
- }, a.prototype.bottom = function(t) {
- var i = this.height * this.getScaleY();
- this.translateY(t - i)
- }, a.prototype.left = function(t) {
- this.setX(t)
- }, a.prototype.right = function(t) {
- var i = this.width * this.getScaleX();
- this.translateX(t - i)
- }, a.prototype.centerX = function(t) {
- var i = this.width * this.getScaleX();
- this.translateX(t - i / 2)
- }, a.prototype.centerY = function(t) {
- var i = this.height * this.getScaleY();
- this.translateY(t - i / 2)
- }, a.prototype.setX = function(t) {
- this.translateX(t)
- }, a.prototype.setY = function(t) {
- this.translateY(t)
- }, a.prototype.setHeight = function(t) {
- var i = t / this.height,
- e = -i;
- this.scale(i, e)
- }, a.prototype.setWidth = function(t) {
- var i = t / this.width,
- e = -i;
- this.scale(i, e)
- }, h.prototype = new MotionQueueManager, h.prototype.getCurrentPriority = function() {
- return this.currentPriority
- }, h.prototype.getReservePriority = function() {
- return this.reservePriority
- }, h.prototype.reserveMotion = function(t) {
- return !(this.reservePriority >= t) && (!(this.currentPriority >= t) && (this.reservePriority = t, !0))
- }, h.prototype.setReservePriority = function(t) {
- this.reservePriority = t
- }, h.prototype.updateParam = function(t) {
- var i = MotionQueueManager.prototype.updateParam.call(this, t);
- return this.isFinished() && (this.currentPriority = 0), i
- }, h.prototype.startMotionPrio = function(t, i) {
- return i == this.reservePriority && (this.reservePriority = 0), this.currentPriority = i, this.startMotion(t, !1)
- }, l.load = function(t) {
- for (var i = new l, e = c.getPlatformManager(), r = e.jsonParseFromBytes(t), o = r.physics_hair, n = o.length, s = 0; s < n; s++) {
- var _ = o[s],
- a = new PhysicsHair,
- h = _.setup,
- $ = parseFloat(h.length),
- u = parseFloat(h.regist),
- p = parseFloat(h.mass);
- a.setup($, u, p);
- for (var f = _.src, d = f.length, g = 0; g < d; g++) {
- var y = f[g],
- m = y.id,
- T = PhysicsHair.Src.SRC_TO_X,
- P = y.ptype;
- "x" === P ? T = PhysicsHair.Src.SRC_TO_X : "y" === P ? T = PhysicsHair.Src.SRC_TO_Y : "angle" === P ? T = PhysicsHair.Src.SRC_TO_G_ANGLE : UtDebug.error("live2d", "Invalid parameter:PhysicsHair.Src");
- var S = parseFloat(y.scale),
- v = parseFloat(y.weight);
- a.addSrcParam(T, m, S, v)
- }
- for (var L = _.targets, M = L.length, g = 0; g < M; g++) {
- var E = L[g],
- m = E.id,
- T = PhysicsHair.Target.TARGET_FROM_ANGLE,
- P = E.ptype;
- "angle" === P ? T = PhysicsHair.Target.TARGET_FROM_ANGLE : "angle_v" === P ? T = PhysicsHair.Target.TARGET_FROM_ANGLE_V : UtDebug.error("live2d", "Invalid parameter:PhysicsHair.Target");
- var S = parseFloat(E.scale),
- v = parseFloat(E.weight);
- a.addTargetParam(T, m, S, v)
- }
- i.physicsList.push(a)
- }
- return i
- }, l.prototype.updateParam = function(t) {
- for (var i = UtSystem.getUserTimeMSec() - this.startTimeMSec, e = 0; e < this.physicsList.length; e++) this.physicsList[e].update(t, i)
- }, $.load = function(t) {
- for (var i = new $, e = c.getPlatformManager(), r = e.jsonParseFromBytes(t), o = r.parts_visible, n = o.length, s = 0; s < n; s++) {
- for (var _ = o[s], a = _.group, h = a.length, l = new Array, p = 0; p < h; p++) {
- var f = a[p],
- d = new u(f.id);
- if (l[p] = d, null != f.link) {
- var g = f.link,
- y = g.length;
- d.link = new Array;
- for (var m = 0; m < y; m++) {
- var T = new u(g[m]);
- d.link.push(T)
- }
- }
- }
- i.partsGroups.push(l)
- }
- return i
- }, $.prototype.updateParam = function(t) {
- if (null != t) {
- t != this.lastModel && this.initParam(t), this.lastModel = t;
- var i = UtSystem.getUserTimeMSec(),
- e = 0 == this.lastTime ? 0 : (i - this.lastTime) / 1e3;
- this.lastTime = i, e < 0 && (e = 0);
- for (var r = 0; r < this.partsGroups.length; r++) this.normalizePartsOpacityGroup(t, this.partsGroups[r], e), this.copyOpacityOtherParts(t, this.partsGroups[r])
- }
- }, $.prototype.initParam = function(t) {
- if (null != t) for (var i = 0; i < this.partsGroups.length; i++) for (var e = this.partsGroups[i], r = 0; r < e.length; r++) {
- e[r].initIndex(t);
- var o = e[r].partsIndex,
- n = e[r].paramIndex;
- if (!(o < 0)) {
- var s = 0 != t.getParamFloat(n);
- if (t.setPartsOpacity(o, s ? 1 : 0), t.setParamFloat(n, s ? 1 : 0), null != e[r].link) for (var _ = 0; _ < e[r].link.length; _++) e[r].link[_].initIndex(t)
- }
- }
- }, $.prototype.normalizePartsOpacityGroup = function(t, i, e) {
- for (var r = -1, o = 1, n = 0; n < i.length; n++) {
- var s = i[n].partsIndex,
- _ = i[n].paramIndex;
- if (!(s < 0) && 0 != t.getParamFloat(_)) {
- if (r >= 0) break;
- r = n, o = t.getPartsOpacity(s), o += e / .5, o > 1 && (o = 1)
- }
- }
- r < 0 && (r = 0, o = 1);
- for (var n = 0; n < i.length; n++) {
- var s = i[n].partsIndex;
- if (!(s < 0)) if (r == n) t.setPartsOpacity(s, o);
- else {
- var a, h = t.getPartsOpacity(s);
- a = o < .5 ? -.5 * o / .5 + 1 : .5 * (1 - o) / .5;
- var l = (1 - a) * (1 - o);
- l > .15 && (a = 1 - .15 / (1 - o)), h > a && (h = a), t.setPartsOpacity(s, h)
- }
- }
- }, $.prototype.copyOpacityOtherParts = function(t, i) {
- for (var e = 0; e < i.length; e++) {
- var r = i[e];
- if (null != r.link && !(r.partsIndex < 0)) for (var o = t.getPartsOpacity(r.partsIndex), n = 0; n < r.link.length; n++) {
- var s = r.link[n];
- s.partsIndex < 0 || t.setPartsOpacity(s.partsIndex, o)
- }
- }
- }, u.prototype.initIndex = function(t) {
- this.paramIndex = t.getParamIndex("VISIBLE:" + this.id), this.partsIndex = t.getPartsDataIndex(PartsDataID.getID(this.id)), t.setParamFloat(this.paramIndex, 1)
- }, p.FRAME_RATE = 30, p.prototype.setPoint = function(t, i) {
- this.faceTargetX = t, this.faceTargetY = i
- }, p.prototype.getX = function() {
- return this.faceX
- }, p.prototype.getY = function() {
- return this.faceY
- }, p.prototype.update = function() {
- var t = 40 / 7.5 / p.FRAME_RATE;
- if (0 == this.lastTimeSec) return void(this.lastTimeSec = UtSystem.getUserTimeMSec());
- var i = UtSystem.getUserTimeMSec(),
- e = (i - this.lastTimeSec) * p.FRAME_RATE / 1e3;
- this.lastTimeSec = i;
- var r = .15 * p.FRAME_RATE,
- o = e * t / r,
- n = this.faceTargetX - this.faceX,
- s = this.faceTargetY - this.faceY;
- if (!(Math.abs(n) <= this.EPSILON && Math.abs(s) <= this.EPSILON)) {
- var _ = Math.sqrt(n * n + s * s),
- a = t * n / _,
- h = t * s / _,
- l = a - this.faceVX,
- $ = h - this.faceVY,
- u = Math.sqrt(l * l + $ * $);
- (u < -o || u > o) && (l *= o / u, $ *= o / u, u = o), this.faceVX += l, this.faceVY += $;
- var f = .5 * (Math.sqrt(o * o + 16 * o * _ - 8 * o * _) - o),
- c = Math.sqrt(this.faceVX * this.faceVX + this.faceVY * this.faceVY);
- c > f && (this.faceVX *= f / c, this.faceVY *= f / c), this.faceX += this.faceVX, this.faceY += this.faceVY
- }
- }, f.prototype = new _, f.prototype.getMaxScale = function() {
- return this.max
- }, f.prototype.getMinScale = function() {
- return this.min
- }, f.prototype.setMaxScale = function(t) {
- this.max = t
- }, f.prototype.setMinScale = function(t) {
- this.min = t
- }, f.prototype.isMaxScale = function() {
- return this.getScaleX() == this.max
- }, f.prototype.isMinScale = function() {
- return this.getScaleX() == this.min
- }, f.prototype.adjustTranslate = function(t, i) {
- this.tr[0] * this.maxLeft + (this.tr[12] + t) > this.screenLeft && (t = this.screenLeft - this.tr[0] * this.maxLeft - this.tr[12]), this.tr[0] * this.maxRight + (this.tr[12] + t) < this.screenRight && (t = this.screenRight - this.tr[0] * this.maxRight - this.tr[12]), this.tr[5] * this.maxTop + (this.tr[13] + i) < this.screenTop && (i = this.screenTop - this.tr[5] * this.maxTop - this.tr[13]), this.tr[5] * this.maxBottom + (this.tr[13] + i) > this.screenBottom && (i = this.screenBottom - this.tr[5] * this.maxBottom - this.tr[13]);
- var e = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, t, i, 0, 1];
- _.mul(e, this.tr, this.tr)
- }, f.prototype.adjustScale = function(t, i, e) {
- var r = e * this.tr[0];
- r < this.min ? this.tr[0] > 0 && (e = this.min / this.tr[0]) : r > this.max && this.tr[0] > 0 && (e = this.max / this.tr[0]);
- var o = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, t, i, 0, 1],
- n = [e, 0, 0, 0, 0, e, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1],
- s = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, -t, -i, 0, 1];
- _.mul(s, this.tr, this.tr), _.mul(n, this.tr, this.tr), _.mul(o, this.tr, this.tr)
- }, f.prototype.setScreenRect = function(t, i, e, r) {
- this.screenLeft = t, this.screenRight = i, this.screenTop = r, this.screenBottom = e
- }, f.prototype.setMaxScreenRect = function(t, i, e, r) {
- this.maxLeft = t, this.maxRight = i, this.maxTop = r, this.maxBottom = e
- }, f.prototype.getScreenLeft = function() {
- return this.screenLeft
- }, f.prototype.getScreenRight = function() {
- return this.screenRight
- }, f.prototype.getScreenBottom = function() {
- return this.screenBottom
- }, f.prototype.getScreenTop = function() {
- return this.screenTop
- }, f.prototype.getMaxLeft = function() {
- return this.maxLeft
- }, f.prototype.getMaxRight = function() {
- return this.maxRight
- }, f.prototype.getMaxBottom = function() {
- return this.maxBottom
- }, f.prototype.getMaxTop = function() {
- return this.maxTop
- }, c.platformManager = null, c.getPlatformManager = function() {
- return c.platformManager
- }, c.setPlatformManager = function(t) {
- c.platformManager = t
- }, t.exports = {
- L2DTargetPoint: p,
- Live2DFramework: c,
- L2DViewMatrix: f,
- L2DPose: $,
- L2DPartsParam: u,
- L2DPhysics: l,
- L2DMotionManager: h,
- L2DModelMatrix: a,
- L2DMatrix44: _,
- EYE_STATE: g,
- L2DEyeBlink: s,
- L2DExpressionParam: n,
- L2DExpressionMotion: o,
- L2DBaseModel: r
- }
-}, function(t, i, e) {
- "use strict";
- var r = {
- DEBUG_LOG: !1,
- DEBUG_MOUSE_LOG: !1,
- DEBUG_DRAW_HIT_AREA: !1,
- DEBUG_DRAW_ALPHA_MODEL: !1,
- VIEW_MAX_SCALE: 2,
- VIEW_MIN_SCALE: .8,
- VIEW_LOGICAL_LEFT: -1,
- VIEW_LOGICAL_RIGHT: 1,
- VIEW_LOGICAL_MAX_LEFT: -2,
- VIEW_LOGICAL_MAX_RIGHT: 2,
- VIEW_LOGICAL_MAX_BOTTOM: -2,
- VIEW_LOGICAL_MAX_TOP: 2,
- PRIORITY_NONE: 0,
- PRIORITY_IDLE: 1,
- PRIORITY_SLEEPY: 2,
- PRIORITY_NORMAL: 3,
- PRIORITY_FORCE: 4,
- MOTION_GROUP_IDLE: "idle",
- MOTION_GROUP_SLEEPY: "sleepy",
- MOTION_GROUP_TAP_BODY: "tap_body",
- MOTION_GROUP_FLICK_HEAD: "flick_head",
- MOTION_GROUP_PINCH_IN: "pinch_in",
- MOTION_GROUP_PINCH_OUT: "pinch_out",
- MOTION_GROUP_SHAKE: "shake",
- HIT_AREA_HEAD: "head",
- HIT_AREA_BODY: "body"
- };
- t.exports = r
-}, function(t, i, e) {
- "use strict";
-
- function r(t) {
- n = t
- }
- function o() {
- return n
- }
- Object.defineProperty(i, "__esModule", {
- value: !0
- }), i.setContext = r, i.getContext = o;
- var n = void 0
-}, function(t, i, e) {
- "use strict";
-
- function r() {}
- r.matrixStack = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1], r.depth = 0, r.currentMatrix = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1], r.tmp = new Array(16), r.reset = function() {
- this.depth = 0
- }, r.loadIdentity = function() {
- for (var t = 0; t < 16; t++) this.currentMatrix[t] = t % 5 == 0 ? 1 : 0
- }, r.push = function() {
- var t = (this.depth, 16 * (this.depth + 1));
- this.matrixStack.length < t + 16 && (this.matrixStack.length = t + 16);
- for (var i = 0; i < 16; i++) this.matrixStack[t + i] = this.currentMatrix[i];
- this.depth++
- }, r.pop = function() {
- --this.depth < 0 && (myError("Invalid matrix stack."), this.depth = 0);
- for (var t = 16 * this.depth, i = 0; i < 16; i++) this.currentMatrix[i] = this.matrixStack[t + i]
- }, r.getMatrix = function() {
- return this.currentMatrix
- }, r.multMatrix = function(t) {
- var i, e, r;
- for (i = 0; i < 16; i++) this.tmp[i] = 0;
- for (i = 0; i < 4; i++) for (e = 0; e < 4; e++) for (r = 0; r < 4; r++) this.tmp[i + 4 * e] += this.currentMatrix[i + 4 * r] * t[r + 4 * e];
- for (i = 0; i < 16; i++) this.currentMatrix[i] = this.tmp[i]
- }, t.exports = r
-}, function(t, i, e) {
- t.exports = e(5)
-}, function(t, i, e) {
- "use strict";
-
- function r(t) {
- return t && t.__esModule ? t : {
- default:
- t
- }
- }
- function o(t) {
- C = document.getElementById(t), C.addEventListener && (window.addEventListener("click", g), window.addEventListener("mousedown", g), window.addEventListener("mousemove", g), window.addEventListener("mouseup", g), document.addEventListener("mouseout", g), window.addEventListener("touchstart", y), window.addEventListener("touchend", y), window.addEventListener("touchmove", y))
- }
- function n(t) {
- var i = C.width,
- e = C.height;
- N = new M.L2DTargetPoint;
- var r = e / i,
- o = w.
- default.VIEW_LOGICAL_LEFT,
- n = w.
- default.VIEW_LOGICAL_RIGHT,
- _ = -r,
- h = r;
- if (window.Live2D.captureFrame = !1, B = new M.L2DViewMatrix, B.setScreenRect(o, n, _, h), B.setMaxScreenRect(w.
- default.VIEW_LOGICAL_MAX_LEFT, w.
- default.VIEW_LOGICAL_MAX_RIGHT, w.
- default.VIEW_LOGICAL_MAX_BOTTOM, w.
- default.VIEW_LOGICAL_MAX_TOP), B.setMaxScale(w.
- default.VIEW_MAX_SCALE), B.setMinScale(w.
- default.VIEW_MIN_SCALE), U = new M.L2DMatrix44, U.multScale(1, i / e), G = new M.L2DMatrix44, G.multTranslate(-i / 2, -e / 2), G.multScale(2 / i, -2 / i), F = v(), (0, D.setContext)(F), !F) return console.error("Failed to create WebGL context."), void(window.WebGLRenderingContext && console.error("Your browser don't support WebGL, check https://get.webgl.org/ for futher information."));
- window.Live2D.setGL(F), F.clearColor(0, 0, 0, 0), a(t), s()
- }
- function s() {
- b || (b = !0, function t() {
- _();
- var i = window.requestAnimationFrame || window.mozRequestAnimationFrame || window.webkitRequestAnimationFrame || window.msRequestAnimationFrame;
- if (window.Live2D.captureFrame) {
- window.Live2D.captureFrame = !1;
- var e = document.createElement("a");
- document.body.appendChild(e), e.setAttribute("type", "hidden"), e.href = C.toDataURL(), e.download = window.Live2D.captureName || "live2d.png", e.click()
- }
- i(t, C)
- }())
- }
- function _() {
- O.
- default.reset(), O.
- default.loadIdentity(), N.update(), R.setDrag(N.getX(), N.getY()), F.clear(F.COLOR_BUFFER_BIT), O.
- default.multMatrix(U.getArray()), O.
- default.multMatrix(B.getArray()), O.
- default.push();
- for (var t = 0; t < R.numModels(); t++) {
- var i = R.getModel(t);
- if (null == i) return;
- i.initialized && !i.updating && (i.update(), i.draw(F))
- }
- O.
- default.pop()
- }
- function a(t) {
- R.reloadFlg = !0, R.count++, R.changeModel(F, t)
- }
- function h(t, i) {
- return t.x * i.x + t.y * i.y
- }
- function l(t, i) {
- var e = Math.sqrt(t * t + i * i);
- return {
- x: t / e,
- y: i / e
- }
- }
- function $(t, i, e) {
- function r(t, i) {
- return 180 * Math.acos(h({
- x: 0,
- y: 1
- }, l(t, i))) / Math.PI
- }
- if (i.x < e.left + e.width && i.y < e.top + e.height && i.x > e.left && i.y > e.top) return i;
- var o = t.x - i.x,
- n = t.y - i.y,
- s = r(o, n);
- i.x < t.x && (s = 360 - s);
- var _ = 360 - r(e.left - t.x, -1 * (e.top - t.y)),
- a = 360 - r(e.left - t.x, -1 * (e.top + e.height - t.y)),
- $ = r(e.left + e.width - t.x, -1 * (e.top - t.y)),
- u = r(e.left + e.width - t.x, -1 * (e.top + e.height - t.y)),
- p = n / o,
- f = {};
- if (s < $) {
- var c = e.top - t.y,
- d = c / p;
- f = {
- y: t.y + c,
- x: t.x + d
- }
- } else if (s < u) {
- var g = e.left + e.width - t.x,
- y = g * p;
- f = {
- y: t.y + y,
- x: t.x + g
- }
- } else if (s < a) {
- var m = e.top + e.height - t.y,
- T = m / p;
- f = {
- y: t.y + m,
- x: t.x + T
- }
- } else if (s < _) {
- var P = t.x - e.left,
- S = P * p;
- f = {
- y: t.y - S,
- x: t.x - P
- }
- } else {
- var v = e.top - t.y,
- L = v / p;
- f = {
- y: t.y + v,
- x: t.x + L
- }
- }
- return f
- }
- function u(t) {
- Y = !0;
- var i = C.getBoundingClientRect(),
- e = P(t.clientX - i.left),
- r = S(t.clientY - i.top),
- o = $({
- x: i.left + i.width / 2,
- y: i.top + i.height * X
- }, {
- x: t.clientX,
- y: t.clientY
- }, i),
- n = m(o.x - i.left),
- s = T(o.y - i.top);
- w.
- default.DEBUG_MOUSE_LOG && console.log("onMouseMove device( x:" + t.clientX + " y:" + t.clientY + " ) view( x:" + n + " y:" + s + ")"), k = e, V = r, N.setPoint(n, s)
- }
- function p(t) {
- Y = !0;
- var i = C.getBoundingClientRect(),
- e = P(t.clientX - i.left),
- r = S(t.clientY - i.top),
- o = $({
- x: i.left + i.width / 2,
- y: i.top + i.height * X
- }, {
- x: t.clientX,
- y: t.clientY
- }, i),
- n = m(o.x - i.left),
- s = T(o.y - i.top);
- w.
- default.DEBUG_MOUSE_LOG && console.log("onMouseDown device( x:" + t.clientX + " y:" + t.clientY + " ) view( x:" + n + " y:" + s + ")"), k = e, V = r, R.tapEvent(n, s)
- }
- function f(t) {
- var i = C.getBoundingClientRect(),
- e = P(t.clientX - i.left),
- r = S(t.clientY - i.top),
- o = $({
- x: i.left + i.width / 2,
- y: i.top + i.height * X
- }, {
- x: t.clientX,
- y: t.clientY
- }, i),
- n = m(o.x - i.left),
- s = T(o.y - i.top);
- w.
- default.DEBUG_MOUSE_LOG && console.log("onMouseMove device( x:" + t.clientX + " y:" + t.clientY + " ) view( x:" + n + " y:" + s + ")"), Y && (k = e, V = r, N.setPoint(n, s))
- }
- function c() {
- Y && (Y = !1), N.setPoint(0, 0)
- }
- function d() {
- w.
- default.DEBUG_LOG && console.log("Set Session Storage."), sessionStorage.setItem("Sleepy", "1")
- }
- function g(t) {
- if ("mousewheel" == t.type);
- else if ("mousedown" == t.type) p(t);
- else if ("mousemove" == t.type) {
- var i = sessionStorage.getItem("Sleepy");
- "1" === i && sessionStorage.setItem("Sleepy", "0"), u(t)
- } else if ("mouseup" == t.type) {
- if ("button" in t && 0 != t.button) return
- } else if ("mouseout" == t.type) {
- w.
- default.DEBUG_LOG && console.log("Mouse out Window."), c();
- var e = sessionStorage.getItem("SleepyTimer");
- window.clearTimeout(e), e = window.setTimeout(d, 5e4), sessionStorage.setItem("SleepyTimer", e)
- }
- }
- function y(t) {
- var i = t.touches[0];
- "touchstart" == t.type ? 1 == t.touches.length && u(i) : "touchmove" == t.type ? f(i) : "touchend" == t.type && c()
- }
- function m(t) {
- var i = G.transformX(t);
- return B.invertTransformX(i)
- }
- function T(t) {
- var i = G.transformY(t);
- return B.invertTransformY(i)
- }
- function P(t) {
- return G.transformX(t)
- }
- function S(t) {
- return G.transformY(t)
- }
- function v() {
- for (var t = ["webgl", "experimental-webgl", "webkit-3d", "moz-webgl"], i = 0; i < t.length; i++) try {
- var e = C.getContext(t[i], {
- premultipliedAlpha: !0
- });
- if (e) return e
- } catch (t) {}
- return null
- }
- function L(t, i, e) {
- X = void 0 === e ? .5 : e, o(t), n(i)
- }
- e(6);
- var M = e(0),
- E = e(8),
- A = r(E),
- I = e(1),
- w = r(I),
- x = e(3),
- O = r(x),
- D = e(2),
- R = (window.navigator.platform.toLowerCase(), new A.
- default),
- b = !1,
- F = null,
- C = null,
- N = null,
- B = null,
- U = null,
- G = null,
- Y = !1,
- k = 0,
- V = 0,
- X = .5;
- window.loadlive2d = L
-}, function(t, i, e) {
- "use strict";
- (function(t) {
- !
- function() {
- function i() {
- At || (this._$MT = null, this._$5S = null, this._$NP = 0, i._$42++, this._$5S = new Y(this))
- }
- function e(t) {
- if (!At) {
- this.clipContextList = new Array, this.glcontext = t.gl, this.dp_webgl = t, this.curFrameNo = 0, this.firstError_clipInNotUpdate = !0, this.colorBuffer = 0, this.isInitGLFBFunc = !1, this.tmpBoundsOnModel = new S, at.glContext.length > at.frameBuffers.length && (this.curFrameNo = this.getMaskRenderTexture()), this.tmpModelToViewMatrix = new R, this.tmpMatrix2 = new R, this.tmpMatrixForMask = new R, this.tmpMatrixForDraw = new R, this.CHANNEL_COLORS = new Array;
- var i = new A;
- i = new A, i.r = 0, i.g = 0, i.b = 0, i.a = 1, this.CHANNEL_COLORS.push(i), i = new A, i.r = 1, i.g = 0, i.b = 0, i.a = 0, this.CHANNEL_COLORS.push(i), i = new A, i.r = 0, i.g = 1, i.b = 0, i.a = 0, this.CHANNEL_COLORS.push(i), i = new A, i.r = 0, i.g = 0, i.b = 1, i.a = 0, this.CHANNEL_COLORS.push(i);
- for (var e = 0; e < this.CHANNEL_COLORS.length; e++) this.dp_webgl.setChannelFlagAsColor(e, this.CHANNEL_COLORS[e])
- }
- }
- function r(t, i, e) {
- this.clipIDList = new Array, this.clipIDList = e, this.clippingMaskDrawIndexList = new Array;
- for (var r = 0; r < e.length; r++) this.clippingMaskDrawIndexList.push(i.getDrawDataIndex(e[r]));
- this.clippedDrawContextList = new Array, this.isUsing = !0, this.layoutChannelNo = 0, this.layoutBounds = new S, this.allClippedDrawRect = new S, this.matrixForMask = new Float32Array(16), this.matrixForDraw = new Float32Array(16), this.owner = t
- }
- function o(t, i) {
- this._$gP = t, this.drawDataIndex = i
- }
- function n() {
- At || (this.color = null)
- }
- function s() {
- At || (this._$dP = null, this._$eo = null, this._$V0 = null, this._$dP = 1e3, this._$eo = 1e3, this._$V0 = 1, this._$a0())
- }
- function _() {}
- function a() {
- this._$r = null, this._$0S = null
- }
- function h() {
- At || (this.x = null, this.y = null, this.width = null, this.height = null)
- }
- function l(t) {
- At || et.prototype.constructor.call(this, t)
- }
- function $() {}
- function u(t) {
- At || et.prototype.constructor.call(this, t)
- }
- function p() {
- At || (this._$vo = null, this._$F2 = null, this._$ao = 400, this._$1S = 400, p._$42++)
- }
- function f() {
- At || (this.p1 = new c, this.p2 = new c, this._$Fo = 0, this._$Db = 0, this._$L2 = 0, this._$M2 = 0, this._$ks = 0, this._$9b = 0, this._$iP = 0, this._$iT = 0, this._$lL = new Array, this._$qP = new Array, this.setup(.3, .5, .1))
- }
- function c() {
- this._$p = 1, this.x = 0, this.y = 0, this.vx = 0, this.vy = 0, this.ax = 0, this.ay = 0, this.fx = 0, this.fy = 0, this._$s0 = 0, this._$70 = 0, this._$7L = 0, this._$HL = 0
- }
- function d(t, i, e) {
- this._$wL = null, this.scale = null, this._$V0 = null, this._$wL = t, this.scale = i, this._$V0 = e
- }
- function g(t, i, e, r) {
- d.prototype.constructor.call(this, i, e, r), this._$tL = null, this._$tL = t
- }
- function y(t, i, e) {
- this._$wL = null, this.scale = null, this._$V0 = null, this._$wL = t, this.scale = i, this._$V0 = e
- }
- function T(t, i, e, r) {
- y.prototype.constructor.call(this, i, e, r), this._$YP = null, this._$YP = t
- }
- function P() {
- At || (this._$fL = 0, this._$gL = 0, this._$B0 = 1, this._$z0 = 1, this._$qT = 0, this.reflectX = !1, this.reflectY = !1)
- }
- function S() {
- At || (this.x = null, this.y = null, this.width = null, this.height = null)
- }
- function v() {}
- function L() {
- At || (this.x = null, this.y = null)
- }
- function M() {
- At || (this._$gP = null, this._$dr = null, this._$GS = null, this._$qb = null, this._$Lb = null, this._$mS = null, this.clipID = null, this.clipIDList = new Array)
- }
- function E() {
- At || (this._$Eb = E._$ps, this._$lT = 1, this._$C0 = 1, this._$tT = 1, this._$WL = 1, this.culling = !1, this.matrix4x4 = new Float32Array(16), this.premultipliedAlpha = !1, this.anisotropy = 0, this.clippingProcess = E.CLIPPING_PROCESS_NONE, this.clipBufPre_clipContextMask = null, this.clipBufPre_clipContextDraw = null, this.CHANNEL_COLORS = new Array)
- }
- function A() {
- At || (this.a = 1, this.r = 1, this.g = 1, this.b = 1, this.scale = 1, this._$ho = 1, this.blendMode = at.L2D_COLOR_BLEND_MODE_MULT)
- }
- function I() {
- At || (this._$kP = null, this._$dr = null, this._$Ai = !0, this._$mS = null)
- }
- function w() {}
- function x() {
- At || (this._$VP = 0, this._$wL = null, this._$GP = null, this._$8o = x._$ds, this._$2r = -1, this._$O2 = 0, this._$ri = 0)
- }
- function O() {}
- function D() {
- At || (this._$Ob = null)
- }
- function R() {
- this.m = new Float32Array(16), this.identity()
- }
- function b(t) {
- At || et.prototype.constructor.call(this, t)
- }
- function F() {
- At || (this._$7 = 1, this._$f = 0, this._$H = 0, this._$g = 1, this._$k = 0, this._$w = 0, this._$hi = STATE_IDENTITY, this._$Z = _$pS)
- }
- function C() {
- At || (s.prototype.constructor.call(this), this.motions = new Array, this._$7r = null, this._$7r = C._$Co++, this._$D0 = 30, this._$yT = 0, this._$E = !0, this.loopFadeIn = !0, this._$AS = -1, _$a0())
- }
- function N() {
- this._$P = new Float32Array(100), this.size = 0
- }
- function B() {
- this._$4P = null, this._$I0 = null, this._$RP = null
- }
- function U() {}
- function G() {}
- function Y(t) {
- At || (this._$QT = !0, this._$co = -1, this._$qo = 0, this._$pb = new Array(Y._$is), this._$_2 = new Float32Array(Y._$is), this._$vr = new Float32Array(Y._$is), this._$Rr = new Float32Array(Y._$is), this._$Or = new Float32Array(Y._$is), this._$fs = new Float32Array(Y._$is), this._$Js = new Array(Y._$is), this._$3S = new Array, this._$aS = new Array, this._$Bo = null, this._$F2 = new Array, this._$db = new Array, this._$8b = new Array, this._$Hr = new Array, this._$Ws = null, this._$Vs = null, this._$Er = null, this._$Es = new Int16Array(U._$Qb), this._$ZP = new Float32Array(2 * U._$1r), this._$Ri = t, this._$b0 = Y._$HP++, this.clipManager = null, this.dp_webgl = null)
- }
- function k() {}
- function V() {
- At || (this._$12 = null, this._$bb = null, this._$_L = null, this._$jo = null, this._$iL = null, this._$0L = null, this._$Br = null, this._$Dr = null, this._$Cb = null, this._$mr = null, this._$_L = wt.STATE_FIRST, this._$Br = 4e3, this._$Dr = 100, this._$Cb = 50, this._$mr = 150, this._$jo = !0, this._$iL = "PARAM_EYE_L_OPEN", this._$0L = "PARAM_EYE_R_OPEN")
- }
- function X() {
- At || (E.prototype.constructor.call(this), this._$sb = new Int32Array(X._$As), this._$U2 = new Array, this.transform = null, this.gl = null, null == X._$NT && (X._$NT = X._$9r(256), X._$vS = X._$9r(256), X._$no = X._$vb(256)))
- }
- function z() {
- At || (I.prototype.constructor.call(this), this._$GS = null, this._$Y0 = null)
- }
- function H(t) {
- _t.prototype.constructor.call(this, t), this._$8r = I._$ur, this._$Yr = null, this._$Wr = null
- }
- function W() {
- At || (M.prototype.constructor.call(this), this._$gP = null, this._$dr = null, this._$GS = null, this._$qb = null, this._$Lb = null, this._$mS = null)
- }
- function j() {
- At || (this._$NL = null, this._$3S = null, this._$aS = null, j._$42++)
- }
- function q() {
- At || (i.prototype.constructor.call(this), this._$zo = new X)
- }
- function J() {
- At || (s.prototype.constructor.call(this), this.motions = new Array, this._$o2 = null, this._$7r = J._$Co++, this._$D0 = 30, this._$yT = 0, this._$E = !1, this.loopFadeIn = !0, this._$rr = -1, this._$eP = 0)
- }
- function Q(t, i) {
- return String.fromCharCode(t.getUint8(i))
- }
- function N() {
- this._$P = new Float32Array(100), this.size = 0
- }
- function B() {
- this._$4P = null, this._$I0 = null, this._$RP = null
- }
- function Z() {
- At || (I.prototype.constructor.call(this), this._$o = 0, this._$A = 0, this._$GS = null, this._$Eo = null)
- }
- function K(t) {
- _t.prototype.constructor.call(this, t), this._$8r = I._$ur, this._$Cr = null, this._$hr = null
- }
- function tt() {
- At || (this.visible = !0, this._$g0 = !1, this._$NL = null, this._$3S = null, this._$aS = null, tt._$42++)
- }
- function it(t) {
- this._$VS = null, this._$e0 = null, this._$e0 = t
- }
- function et(t) {
- At || (this.id = t)
- }
- function rt() {}
- function ot() {
- At || (this._$4S = null)
- }
- function nt(t, i) {
- this.canvas = t, this.context = i, this.viewport = new Array(0, 0, t.width, t.height), this._$6r = 1, this._$xP = 0, this._$3r = 1, this._$uP = 0, this._$Qo = -1, this.cacheImages = {}
- }
- function st() {
- At || (this._$TT = null, this._$LT = null, this._$FS = null, this._$wL = null)
- }
- function _t(t) {
- At || (this._$e0 = null, this._$IP = null, this._$JS = !1, this._$AT = !0, this._$e0 = t, this.totalScale = 1, this._$7s = 1, this.totalOpacity = 1)
- }
- function at() {}
- function ht() {}
- function lt(t) {
- At || (this._$ib = t)
- }
- function $t() {
- At || (W.prototype.constructor.call(this), this._$LP = -1, this._$d0 = 0, this._$Yo = 0, this._$JP = null, this._$5P = null, this._$BP = null, this._$Eo = null, this._$Qi = null, this._$6s = $t._$ms, this.culling = !0, this.gl_cacheImage = null, this.instanceNo = $t._$42++)
- }
- function ut(t) {
- Mt.prototype.constructor.call(this, t), this._$8r = W._$ur, this._$Cr = null, this._$hr = null
- }
- function pt() {
- At || (this.x = null, this.y = null)
- }
- function ft(t) {
- At || (i.prototype.constructor.call(this), this.drawParamWebGL = new mt(t), this.drawParamWebGL.setGL(at.getGL(t)))
- }
- function ct() {
- At || (this.motions = null, this._$eb = !1, this.motions = new Array)
- }
- function dt() {
- this._$w0 = null, this._$AT = !0, this._$9L = !1, this._$z2 = -1, this._$bs = -1, this._$Do = -1, this._$sr = null, this._$sr = dt._$Gs++
- }
- function gt() {
- this.m = new Array(1, 0, 0, 0, 1, 0, 0, 0, 1)
- }
- function yt(t) {
- At || et.prototype.constructor.call(this, t)
- }
- function mt(t) {
- At || (E.prototype.constructor.call(this), this.textures = new Array, this.transform = null, this.gl = null, this.glno = t, this.firstDraw = !0, this.anisotropyExt = null, this.maxAnisotropy = 0, this._$As = 32, this._$Gr = !1, this._$NT = null, this._$vS = null, this._$no = null, this.vertShader = null, this.fragShader = null, this.vertShaderOff = null, this.fragShaderOff = null)
- }
- function Tt(t, i, e) {
- return null == i && (i = t.createBuffer()), t.bindBuffer(t.ARRAY_BUFFER, i), t.bufferData(t.ARRAY_BUFFER, e, t.DYNAMIC_DRAW), i
- }
- function Pt(t, i, e) {
- return null == i && (i = t.createBuffer()), t.bindBuffer(t.ELEMENT_ARRAY_BUFFER, i), t.bufferData(t.ELEMENT_ARRAY_BUFFER, e, t.DYNAMIC_DRAW), i
- }
- function St(t) {
- At || (this._$P = new Int8Array(8), this._$R0 = new DataView(this._$P.buffer), this._$3i = new Int8Array(1e3), this._$hL = 0, this._$v0 = 0, this._$S2 = 0, this._$Ko = new Array, this._$T = t, this._$F = 0)
- }
- function vt() {}
- function Lt() {}
- function Mt(t) {
- At || (this._$e0 = null, this._$IP = null, this._$Us = null, this._$7s = null, this._$IS = [!1], this._$VS = null, this._$AT = !0, this.baseOpacity = 1, this.clipBufPre_clipContext = null, this._$e0 = t)
- }
- function Et() {}
- var At = !0;
- i._$0s = 1, i._$4s = 2, i._$42 = 0, i._$62 = function(t, e) {
- try {
- if (e instanceof ArrayBuffer && (e = new DataView(e)), !(e instanceof DataView)) throw new lt("_$SS#loadModel(b) / b _$x be DataView or ArrayBuffer");
- var r, o = new St(e),
- n = o._$ST(),
- s = o._$ST(),
- a = o._$ST();
- if (109 != n || 111 != s || 99 != a) throw new lt("_$gi _$C _$li , _$Q0 _$P0.");
- if (r = o._$ST(), o._$gr(r), r > G._$T7) {
- t._$NP |= i._$4s;
- throw new lt("_$gi _$C _$li , _$n0 _$_ version _$li ( SDK : " + G._$T7 + " < _$f0 : " + r + " )@_$SS#loadModel()\n")
- }
- var h = o._$nP();
- if (r >= G._$s7) {
- var l = o._$9T(),
- $ = o._$9T();
- if (-30584 != l || -30584 != $) throw t._$NP |= i._$0s, new lt("_$gi _$C _$li , _$0 _$6 _$Ui.")
- }
- t._$KS(h);
- var u = t.getModelContext();
- u.setDrawParam(t.getDrawParam()), u.init()
- } catch (t) {
- _._$Rb(t)
- }
- }, i.prototype._$KS = function(t) {
- this._$MT = t
- }, i.prototype.getModelImpl = function() {
- return null == this._$MT && (this._$MT = new p, this._$MT._$zP()), this._$MT
- }, i.prototype.getCanvasWidth = function() {
- return null == this._$MT ? 0 : this._$MT.getCanvasWidth()
- }, i.prototype.getCanvasHeight = function() {
- return null == this._$MT ? 0 : this._$MT.getCanvasHeight()
- }, i.prototype.getParamFloat = function(t) {
- return "number" != typeof t && (t = this._$5S.getParamIndex(u.getID(t))), this._$5S.getParamFloat(t)
- }, i.prototype.setParamFloat = function(t, i, e) {
- "number" != typeof t && (t = this._$5S.getParamIndex(u.getID(t))), arguments.length < 3 && (e = 1), this._$5S.setParamFloat(t, this._$5S.getParamFloat(t) * (1 - e) + i * e)
- }, i.prototype.addToParamFloat = function(t, i, e) {
- "number" != typeof t && (t = this._$5S.getParamIndex(u.getID(t))), arguments.length < 3 && (e = 1), this._$5S.setParamFloat(t, this._$5S.getParamFloat(t) + i * e)
- }, i.prototype.multParamFloat = function(t, i, e) {
- "number" != typeof t && (t = this._$5S.getParamIndex(u.getID(t))), arguments.length < 3 && (e = 1), this._$5S.setParamFloat(t, this._$5S.getParamFloat(t) * (1 + (i - 1) * e))
- }, i.prototype.getParamIndex = function(t) {
- return this._$5S.getParamIndex(u.getID(t))
- }, i.prototype.loadParam = function() {
- this._$5S.loadParam()
- }, i.prototype.saveParam = function() {
- this._$5S.saveParam()
- }, i.prototype.init = function() {
- this._$5S.init()
- }, i.prototype.update = function() {
- this._$5S.update()
- }, i.prototype._$Rs = function() {
- return _._$li("_$60 _$PT _$Rs()"), -1
- }, i.prototype._$Ds = function(t) {
- _._$li("_$60 _$PT _$SS#_$Ds() \n")
- }, i.prototype._$K2 = function() {}, i.prototype.draw = function() {}, i.prototype.getModelContext = function() {
- return this._$5S
- }, i.prototype._$s2 = function() {
- return this._$NP
- }, i.prototype._$P7 = function(t, i, e, r) {
- var o = -1,
- n = 0,
- s = this;
- if (0 != e) if (1 == t.length) {
- var _ = t[0],
- a = 0 != s.getParamFloat(_),
- h = i[0],
- l = s.getPartsOpacity(h),
- $ = e / r;
- a ? (l += $) > 1 && (l = 1) : (l -= $) < 0 && (l = 0), s.setPartsOpacity(h, l)
- } else {
- for (var u = 0; u < t.length; u++) {
- var _ = t[u],
- p = 0 != s.getParamFloat(_);
- if (p) {
- if (o >= 0) break;
- o = u;
- var h = i[u];
- n = s.getPartsOpacity(h), n += e / r, n > 1 && (n = 1)
- }
- }
- o < 0 && (console.log("No _$wi _$q0/ _$U default[%s]", t[0]), o = 0, n = 1, s.loadParam(), s.setParamFloat(t[o], n), s.saveParam());
- for (var u = 0; u < t.length; u++) {
- var h = i[u];
- if (o == u) s.setPartsOpacity(h, n);
- else {
- var f, c = s.getPartsOpacity(h);
- f = n < .5 ? -.5 * n / .5 + 1 : .5 * (1 - n) / .5;
- var d = (1 - f) * (1 - n);
- d > .15 && (f = 1 - .15 / (1 - n)), c > f && (c = f), s.setPartsOpacity(h, c)
- }
- }
- } else for (var u = 0; u < t.length; u++) {
- var _ = t[u],
- h = i[u],
- p = 0 != s.getParamFloat(_);
- s.setPartsOpacity(h, p ? 1 : 0)
- }
- }, i.prototype.setPartsOpacity = function(t, i) {
- "number" != typeof t && (t = this._$5S.getPartsDataIndex(l.getID(t))), this._$5S.setPartsOpacity(t, i)
- }, i.prototype.getPartsDataIndex = function(t) {
- return t instanceof l || (t = l.getID(t)), this._$5S.getPartsDataIndex(t)
- }, i.prototype.getPartsOpacity = function(t) {
- return "number" != typeof t && (t = this._$5S.getPartsDataIndex(l.getID(t))), t < 0 ? 0 : this._$5S.getPartsOpacity(t)
- }, i.prototype.getDrawParam = function() {}, i.prototype.getDrawDataIndex = function(t) {
- return this._$5S.getDrawDataIndex(b.getID(t))
- }, i.prototype.getDrawData = function(t) {
- return this._$5S.getDrawData(t)
- }, i.prototype.getTransformedPoints = function(t) {
- var i = this._$5S._$C2(t);
- return i instanceof ut ? i.getTransformedPoints() : null
- }, i.prototype.getIndexArray = function(t) {
- if (t < 0 || t >= this._$5S._$aS.length) return null;
- var i = this._$5S._$aS[t];
- return null != i && i.getType() == W._$wb && i instanceof $t ? i.getIndexArray() : null
- }, e.CHANNEL_COUNT = 4, e.RENDER_TEXTURE_USE_MIPMAP = !1, e.NOT_USED_FRAME = -100, e.prototype._$L7 = function() {
- if (this.tmpModelToViewMatrix && (this.tmpModelToViewMatrix = null), this.tmpMatrix2 && (this.tmpMatrix2 = null), this.tmpMatrixForMask && (this.tmpMatrixForMask = null), this.tmpMatrixForDraw && (this.tmpMatrixForDraw = null), this.tmpBoundsOnModel && (this.tmpBoundsOnModel = null), this.CHANNEL_COLORS) {
- for (var t = this.CHANNEL_COLORS.length - 1; t >= 0; --t) this.CHANNEL_COLORS.splice(t, 1);
- this.CHANNEL_COLORS = []
- }
- this.releaseShader()
- }, e.prototype.releaseShader = function() {
- for (var t = at.frameBuffers.length, i = 0; i < t; i++) this.gl.deleteFramebuffer(at.frameBuffers[i].framebuffer);
- at.frameBuffers = [], at.glContext = []
- }, e.prototype.init = function(t, i, e) {
- for (var o = 0; o < i.length; o++) {
- var n = i[o].getClipIDList();
- if (null != n) {
- var s = this.findSameClip(n);
- null == s && (s = new r(this, t, n), this.clipContextList.push(s));
- var _ = i[o].getDrawDataID(),
- a = t.getDrawDataIndex(_);
- s.addClippedDrawData(_, a);
- e[o].clipBufPre_clipContext = s
- }
- }
- }, e.prototype.getMaskRenderTexture = function() {
- var t = null;
- return t = this.dp_webgl.createFramebuffer(), at.frameBuffers[this.dp_webgl.glno] = t, this.dp_webgl.glno
- }, e.prototype.setupClip = function(t, i) {
- for (var e = 0, r = 0; r < this.clipContextList.length; r++) {
- var o = this.clipContextList[r];
- this.calcClippedDrawTotalBounds(t, o), o.isUsing && e++
- }
- if (e > 0) {
- var n = i.gl.getParameter(i.gl.FRAMEBUFFER_BINDING),
- s = new Array(4);
- s[0] = 0, s[1] = 0, s[2] = i.gl.canvas.width, s[3] = i.gl.canvas.height, i.gl.viewport(0, 0, at.clippingMaskBufferSize, at.clippingMaskBufferSize), this.setupLayoutBounds(e), i.gl.bindFramebuffer(i.gl.FRAMEBUFFER, at.frameBuffers[this.curFrameNo].framebuffer), i.gl.clearColor(0, 0, 0, 0), i.gl.clear(i.gl.COLOR_BUFFER_BIT);
- for (var r = 0; r < this.clipContextList.length; r++) {
- var o = this.clipContextList[r],
- _ = o.allClippedDrawRect,
- a = (o.layoutChannelNo, o.layoutBounds);
- this.tmpBoundsOnModel._$jL(_), this.tmpBoundsOnModel.expand(.05 * _.width, .05 * _.height);
- var h = a.width / this.tmpBoundsOnModel.width,
- l = a.height / this.tmpBoundsOnModel.height;
- this.tmpMatrix2.identity(), this.tmpMatrix2.translate(-1, -1, 0), this.tmpMatrix2.scale(2, 2, 1), this.tmpMatrix2.translate(a.x, a.y, 0), this.tmpMatrix2.scale(h, l, 1), this.tmpMatrix2.translate(-this.tmpBoundsOnModel.x, -this.tmpBoundsOnModel.y, 0), this.tmpMatrixForMask.setMatrix(this.tmpMatrix2.m), this.tmpMatrix2.identity(), this.tmpMatrix2.translate(a.x, a.y, 0), this.tmpMatrix2.scale(h, l, 1), this.tmpMatrix2.translate(-this.tmpBoundsOnModel.x, -this.tmpBoundsOnModel.y, 0), this.tmpMatrixForDraw.setMatrix(this.tmpMatrix2.m);
- for (var $ = this.tmpMatrixForMask.getArray(), u = 0; u < 16; u++) o.matrixForMask[u] = $[u];
- for (var p = this.tmpMatrixForDraw.getArray(), u = 0; u < 16; u++) o.matrixForDraw[u] = p[u];
- for (var f = o.clippingMaskDrawIndexList.length, c = 0; c < f; c++) {
- var d = o.clippingMaskDrawIndexList[c],
- g = t.getDrawData(d),
- y = t._$C2(d);
- i.setClipBufPre_clipContextForMask(o), g.draw(i, t, y)
- }
- }
- i.gl.bindFramebuffer(i.gl.FRAMEBUFFER, n), i.setClipBufPre_clipContextForMask(null), i.gl.viewport(s[0], s[1], s[2], s[3])
- }
- }, e.prototype.getColorBuffer = function() {
- return this.colorBuffer
- }, e.prototype.findSameClip = function(t) {
- for (var i = 0; i < this.clipContextList.length; i++) {
- var e = this.clipContextList[i],
- r = e.clipIDList.length;
- if (r == t.length) {
- for (var o = 0, n = 0; n < r; n++) for (var s = e.clipIDList[n], _ = 0; _ < r; _++) if (t[_] == s) {
- o++;
- break
- }
- if (o == r) return e
- }
- }
- return null
- }, e.prototype.calcClippedDrawTotalBounds = function(t, i) {
- for (var e = t._$Ri.getModelImpl().getCanvasWidth(), r = t._$Ri.getModelImpl().getCanvasHeight(), o = e > r ? e : r, n = o, s = o, _ = 0, a = 0, h = i.clippedDrawContextList.length, l = 0; l < h; l++) {
- var $ = i.clippedDrawContextList[l],
- u = $.drawDataIndex,
- p = t._$C2(u);
- if (p._$yo()) {
- for (var f = p.getTransformedPoints(), c = f.length, d = [], g = [], y = 0, m = U._$i2; m < c; m += U._$No) d[y] = f[m], g[y] = f[m + 1], y++;
- var T = Math.min.apply(null, d),
- P = Math.min.apply(null, g),
- S = Math.max.apply(null, d),
- v = Math.max.apply(null, g);
- T < n && (n = T), P < s && (s = P), S > _ && (_ = S), v > a && (a = v)
- }
- }
- if (n == o) i.allClippedDrawRect.x = 0, i.allClippedDrawRect.y = 0, i.allClippedDrawRect.width = 0, i.allClippedDrawRect.height = 0, i.isUsing = !1;
- else {
- var L = _ - n,
- M = a - s;
- i.allClippedDrawRect.x = n, i.allClippedDrawRect.y = s, i.allClippedDrawRect.width = L, i.allClippedDrawRect.height = M, i.isUsing = !0
- }
- }, e.prototype.setupLayoutBounds = function(t) {
- var i = t / e.CHANNEL_COUNT,
- r = t % e.CHANNEL_COUNT;
- i = ~~i, r = ~~r;
- for (var o = 0, n = 0; n < e.CHANNEL_COUNT; n++) {
- var s = i + (n < r ? 1 : 0);
- if (0 == s);
- else if (1 == s) {
- var a = this.clipContextList[o++];
- a.layoutChannelNo = n, a.layoutBounds.x = 0, a.layoutBounds.y = 0, a.layoutBounds.width = 1, a.layoutBounds.height = 1
- } else if (2 == s) for (var h = 0; h < s; h++) {
- var l = h % 2,
- $ = 0;
- l = ~~l;
- var a = this.clipContextList[o++];
- a.layoutChannelNo = n, a.layoutBounds.x = .5 * l, a.layoutBounds.y = 0, a.layoutBounds.width = .5, a.layoutBounds.height = 1
- } else if (s <= 4) for (var h = 0; h < s; h++) {
- var l = h % 2,
- $ = h / 2;
- l = ~~l, $ = ~~$;
- var a = this.clipContextList[o++];
- a.layoutChannelNo = n, a.layoutBounds.x = .5 * l, a.layoutBounds.y = .5 * $, a.layoutBounds.width = .5, a.layoutBounds.height = .5
- } else if (s <= 9) for (var h = 0; h < s; h++) {
- var l = h % 3,
- $ = h / 3;
- l = ~~l, $ = ~~$;
- var a = this.clipContextList[o++];
- a.layoutChannelNo = n, a.layoutBounds.x = l / 3, a.layoutBounds.y = $ / 3, a.layoutBounds.width = 1 / 3, a.layoutBounds.height = 1 / 3
- } else _._$li("_$6 _$0P mask count : %d", s)
- }
- }, r.prototype.addClippedDrawData = function(t, i) {
- var e = new o(t, i);
- this.clippedDrawContextList.push(e)
- }, s._$JT = function(t, i, e) {
- var r = t / i,
- o = e / i,
- n = o,
- s = 1 - (1 - o) * (1 - o),
- _ = 1 - (1 - n) * (1 - n),
- a = 1 / 3 * (1 - o) * s + (n * (2 / 3) + 1 / 3 * (1 - n)) * (1 - s),
- h = (n + 2 / 3 * (1 - n)) * _ + (o * (1 / 3) + 2 / 3 * (1 - o)) * (1 - _),
- l = 1 - 3 * h + 3 * a - 0,
- $ = 3 * h - 6 * a + 0,
- u = 3 * a - 0;
- if (r <= 0) return 0;
- if (r >= 1) return 1;
- var p = r,
- f = p * p;
- return l * (p * f) + $ * f + u * p + 0
- }, s.prototype._$a0 = function() {}, s.prototype.setFadeIn = function(t) {
- this._$dP = t
- }, s.prototype.setFadeOut = function(t) {
- this._$eo = t
- }, s.prototype._$pT = function(t) {
- this._$V0 = t
- }, s.prototype.getFadeOut = function() {
- return this._$eo
- }, s.prototype._$4T = function() {
- return this._$eo
- }, s.prototype._$mT = function() {
- return this._$V0
- }, s.prototype.getDurationMSec = function() {
- return -1
- }, s.prototype.getLoopDurationMSec = function() {
- return -1
- }, s.prototype.updateParam = function(t, i) {
- if (i._$AT && !i._$9L) {
- var e = w.getUserTimeMSec();
- if (i._$z2 < 0) {
- i._$z2 = e, i._$bs = e;
- var r = this.getDurationMSec();
- i._$Do < 0 && (i._$Do = r <= 0 ? -1 : i._$z2 + r)
- }
- var o = this._$V0;
- o = o * (0 == this._$dP ? 1 : ht._$r2((e - i._$bs) / this._$dP)) * (0 == this._$eo || i._$Do < 0 ? 1 : ht._$r2((i._$Do - e) / this._$eo)), 0 <= o && o <= 1 || console.log("### assert!! ### "), this.updateParamExe(t, e, o, i), i._$Do > 0 && i._$Do < e && (i._$9L = !0)
- }
- }, s.prototype.updateParamExe = function(t, i, e, r) {}, _._$8s = 0, _._$fT = new Object, _.start = function(t) {
- var i = _._$fT[t];
- null == i && (i = new a, i._$r = t, _._$fT[t] = i), i._$0S = w.getSystemTimeMSec()
- }, _.dump = function(t) {
- var i = _._$fT[t];
- if (null != i) {
- var e = w.getSystemTimeMSec(),
- r = e - i._$0S;
- return console.log(t + " : " + r + "ms"), r
- }
- return -1
- }, _.end = function(t) {
- var i = _._$fT[t];
- if (null != i) {
- return w.getSystemTimeMSec() - i._$0S
- }
- return -1
- }, _._$li = function(t, i) {
- console.log("_$li : " + t + "\n", i)
- }, _._$Ji = function(t, i) {
- console.log(t, i)
- }, _._$dL = function(t, i) {
- console.log(t, i), console.log("\n")
- }, _._$KL = function(t, i) {
- for (var e = 0; e < i; e++) e % 16 == 0 && e > 0 ? console.log("\n") : e % 8 == 0 && e > 0 && console.log(" "), console.log("%02X ", 255 & t[e]);
- console.log("\n")
- }, _._$nr = function(t, i, e) {
- console.log("%s\n", t);
- for (var r = i.length, o = 0; o < r; ++o) console.log("%5d", i[o]), console.log("%s\n", e), console.log(",");
- console.log("\n")
- }, _._$Rb = function(t) {
- console.log("dump exception : " + t), console.log("stack :: " + t.stack)
- }, h.prototype._$8P = function() {
- return .5 * (this.x + this.x + this.width)
- }, h.prototype._$6P = function() {
- return .5 * (this.y + this.y + this.height)
- }, h.prototype._$EL = function() {
- return this.x + this.width
- }, h.prototype._$5T = function() {
- return this.y + this.height
- }, h.prototype._$jL = function(t, i, e, r) {
- this.x = t, this.y = i, this.width = e, this.height = r
- }, h.prototype._$jL = function(t) {
- this.x = t.x, this.y = t.y, this.width = t.width, this.height = t.height
- }, l.prototype = new et, l._$tP = new Object, l._$27 = function() {
- l._$tP.clear()
- }, l.getID = function(t) {
- var i = l._$tP[t];
- return null == i && (i = new l(t), l._$tP[t] = i), i
- }, l.prototype._$3s = function() {
- return new l
- }, u.prototype = new et, u._$tP = new Object, u._$27 = function() {
- u._$tP.clear()
- }, u.getID = function(t) {
- var i = u._$tP[t];
- return null == i && (i = new u(t), u._$tP[t] = i), i
- }, u.prototype._$3s = function() {
- return new u
- }, p._$42 = 0, p.prototype._$zP = function() {
- null == this._$vo && (this._$vo = new ot), null == this._$F2 && (this._$F2 = new Array)
- }, p.prototype.getCanvasWidth = function() {
- return this._$ao
- }, p.prototype.getCanvasHeight = function() {
- return this._$1S
- }, p.prototype._$F0 = function(t) {
- this._$vo = t._$nP(), this._$F2 = t._$nP(), this._$ao = t._$6L(), this._$1S = t._$6L()
- }, p.prototype._$6S = function(t) {
- this._$F2.push(t)
- }, p.prototype._$Xr = function() {
- return this._$F2
- }, p.prototype._$E2 = function() {
- return this._$vo
- }, f.prototype.setup = function(t, i, e) {
- this._$ks = this._$Yb(), this.p2._$xT(), 3 == arguments.length && (this._$Fo = t, this._$L2 = i, this.p1._$p = e, this.p2._$p = e, this.p2.y = t, this.setup())
- }, f.prototype.getPhysicsPoint1 = function() {
- return this.p1
- }, f.prototype.getPhysicsPoint2 = function() {
- return this.p2
- }, f.prototype._$qr = function() {
- return this._$Db
- }, f.prototype._$pr = function(t) {
- this._$Db = t
- }, f.prototype._$5r = function() {
- return this._$M2
- }, f.prototype._$Cs = function() {
- return this._$9b
- }, f.prototype._$Yb = function() {
- return -180 * Math.atan2(this.p1.x - this.p2.x, -(this.p1.y - this.p2.y)) / Math.PI
- }, f.prototype.addSrcParam = function(t, i, e, r) {
- var o = new g(t, i, e, r);
- this._$lL.push(o)
- }, f.prototype.addTargetParam = function(t, i, e, r) {
- var o = new T(t, i, e, r);
- this._$qP.push(o)
- }, f.prototype.update = function(t, i) {
- if (0 == this._$iP) return this._$iP = this._$iT = i, void(this._$Fo = Math.sqrt((this.p1.x - this.p2.x) * (this.p1.x - this.p2.x) + (this.p1.y - this.p2.y) * (this.p1.y - this.p2.y)));
- var e = (i - this._$iT) / 1e3;
- if (0 != e) {
- for (var r = this._$lL.length - 1; r >= 0; --r) {
- this._$lL[r]._$oP(t, this)
- }
- this._$oo(t, e), this._$M2 = this._$Yb(), this._$9b = (this._$M2 - this._$ks) / e, this._$ks = this._$M2
- }
- for (var r = this._$qP.length - 1; r >= 0; --r) {
- this._$qP[r]._$YS(t, this)
- }
- this._$iT = i
- }, f.prototype._$oo = function(t, i) {
- i < .033 && (i = .033);
- var e = 1 / i;
- this.p1.vx = (this.p1.x - this.p1._$s0) * e, this.p1.vy = (this.p1.y - this.p1._$70) * e, this.p1.ax = (this.p1.vx - this.p1._$7L) * e, this.p1.ay = (this.p1.vy - this.p1._$HL) * e, this.p1.fx = this.p1.ax * this.p1._$p, this.p1.fy = this.p1.ay * this.p1._$p, this.p1._$xT();
- var r, o, n = -Math.atan2(this.p1.y - this.p2.y, this.p1.x - this.p2.x),
- s = Math.cos(n),
- _ = Math.sin(n),
- a = 9.8 * this.p2._$p,
- h = this._$Db * Lt._$bS,
- l = a * Math.cos(n - h);
- r = l * _, o = l * s;
- var $ = -this.p1.fx * _ * _,
- u = -this.p1.fy * _ * s,
- p = -this.p2.vx * this._$L2,
- f = -this.p2.vy * this._$L2;
- this.p2.fx = r + $ + p, this.p2.fy = o + u + f, this.p2.ax = this.p2.fx / this.p2._$p, this.p2.ay = this.p2.fy / this.p2._$p, this.p2.vx += this.p2.ax * i, this.p2.vy += this.p2.ay * i, this.p2.x += this.p2.vx * i, this.p2.y += this.p2.vy * i;
- var c = Math.sqrt((this.p1.x - this.p2.x) * (this.p1.x - this.p2.x) + (this.p1.y - this.p2.y) * (this.p1.y - this.p2.y));
- this.p2.x = this.p1.x + this._$Fo * (this.p2.x - this.p1.x) / c, this.p2.y = this.p1.y + this._$Fo * (this.p2.y - this.p1.y) / c, this.p2.vx = (this.p2.x - this.p2._$s0) * e, this.p2.vy = (this.p2.y - this.p2._$70) * e, this.p2._$xT()
- }, c.prototype._$xT = function() {
- this._$s0 = this.x, this._$70 = this.y, this._$7L = this.vx, this._$HL = this.vy
- }, d.prototype._$oP = function(t, i) {}, g.prototype = new d, g.prototype._$oP = function(t, i) {
- var e = this.scale * t.getParamFloat(this._$wL),
- r = i.getPhysicsPoint1();
- switch (this._$tL) {
- default:
- case f.Src.SRC_TO_X:
- r.x = r.x + (e - r.x) * this._$V0;
- break;
- case f.Src.SRC_TO_Y:
- r.y = r.y + (e - r.y) * this._$V0;
- break;
- case f.Src.SRC_TO_G_ANGLE:
- var o = i._$qr();
- o += (e - o) * this._$V0, i._$pr(o)
- }
- }, y.prototype._$YS = function(t, i) {}, T.prototype = new y, T.prototype._$YS = function(t, i) {
- switch (this._$YP) {
- default:
- case f.Target.TARGET_FROM_ANGLE:
- t.setParamFloat(this._$wL, this.scale * i._$5r(), this._$V0);
- break;
- x = p + 3 * c + 3 * g,
- O = f + 3 * d + 3 * y,
- B = .5 * (v - 1),
- U = .5 * (L - 1);
- B + U <= 1 ? (e[T] = C + (b - C) * B + (D - C) * U, e[T + 1] = N + (F - N) * B + (R - N) * U) : (e[T] = x + (D - x) * (1 - B) + (b - x) * (1 - U), e[T + 1] = O + (R - O) * (1 - B) + (F - O) * (1 - U))
- } else {
- var G = 0 | S;
- G == a && (G = a - 1);
- var B = .5 * (v - 1),
- U = S - G,
- Y = G / a,
- k = (G + 1) / a,
- C = s[2 * (_ + G * M)],
- N = s[2 * (_ + G * M) + 1],
- D = s[2 * (_ + (G + 1) * M)],
- R = s[2 * (_ + (G + 1) * M) + 1],
- b = p + 3 * c + Y * g,
- F = f + 3 * d + Y * y,
- x = p + 3 * c + k * g,
- O = f + 3 * d + k * y;
- B + U <= 1 ? (e[T] = C + (b - C) * B + (D - C) * U, e[T + 1] = N + (F - N) * B + (R - N) * U) : (e[T] = x + (D - x) * (1 - B) + (b - x) * (1 - U), e[T + 1] = O + (R - O) * (1 - B) + (F - O) * (1 - U))
- } else if (L <= 0) {
- var V = 0 | P;
- V == _ && (V = _ - 1);
- var B = P - V,
- U = .5 * (L - -2),
- X = V / _,
- z = (V + 1) / _,
- D = s[2 * (V + 0 * M)],
- R = s[2 * (V + 0 * M) + 1],
- x = s[2 * (V + 1 + 0 * M)],
- O = s[2 * (V + 1 + 0 * M) + 1],
- C = p + X * c - 2 * g,
- N = f + X * d - 2 * y,
- b = p + z * c - 2 * g,
- F = f + z * d - 2 * y;
- B + U <= 1 ? (e[T] = C + (b - C) * B + (D - C) * U, e[T + 1] = N + (F - N) * B + (R - N) * U) : (e[T] = x + (D - x) * (1 - B) + (b - x) * (1 - U), e[T + 1] = O + (R - O) * (1 - B) + (F - O) * (1 - U))
- } else if (L >= 1) {
- var V = 0 | P;
- V == _ && (V = _ - 1);
- var B = P - V,
- U = .5 * (L - 1),
- X = V / _,
- z = (V + 1) / _,
- C = s[2 * (V + a * M)],
- N = s[2 * (V + a * M) + 1],
- b = s[2 * (V + 1 + a * M)],
- F = s[2 * (V + 1 + a * M) + 1],
- D = p + X * c + 3 * g,
- R = f + X * d + 3 * y,
- x = p + z * c + 3 * g,
- O = f + z * d + 3 * y;
- B + U <= 1 ? (e[T] = C + (b - C) * B + (D - C) * U, e[T + 1] = N + (F - N) * B + (R - N) * U) : (e[T] = x + (D - x) * (1 - B) + (b - x) * (1 - U), e[T + 1] = O + (R - O) * (1 - B) + (F - O) * (1 - U))
- } else t.err.printf("_$li calc : %.4f , %.4f\t\t\t\t\t@@BDBoxGrid\n", v, L);
- else e[T] = p + v * c + L * g, e[T + 1] = f + v * d + L * y
- } else l = P - (0 | P), $ = S - (0 | S), h = 2 * ((0 | P) + (0 | S) * (_ + 1)), l + $ < 1 ? (e[T] = s[h] * (1 - l - $) + s[h + 2] * l + s[h + 2 * (_ + 1)] * $, e[T + 1] = s[h + 1] * (1 - l - $) + s[h + 3] * l + s[h + 2 * (_ + 1) + 1] * $) : (e[T] = s[h + 2 * (_ + 1) + 2] * (l - 1 + $) + s[h + 2 * (_ + 1)] * (1 - l) + s[h + 2] * (1 - $), e[T + 1] = s[h + 2 * (_ + 1) + 3] * (l - 1 + $) + s[h + 2 * (_ + 1) + 1] * (1 - l) + s[h + 3] * (1 - $))
- }
- }, Z.prototype.transformPoints_sdk1 = function(t, i, e, r, o, n, s) {
- for (var _, a, h, l, $, u, p, f = i, c = this._$o, d = this._$A, g = o * s, y = null != f._$hr ? f._$hr : f._$Cr, m = n; m < g; m += s) at._$ts ? (_ = e[m], a = e[m + 1], _ < 0 ? _ = 0 : _ > 1 && (_ = 1), a < 0 ? a = 0 : a > 1 && (a = 1), _ *= c, a *= d, h = 0 | _, l = 0 | a, h > c - 1 && (h = c - 1), l > d - 1 && (l = d - 1), u = _ - h, p = a - l, $ = 2 * (h + l * (c + 1))) : (_ = e[m] * c, a = e[m + 1] * d, u = _ - (0 | _), p = a - (0 | a), $ = 2 * ((0 | _) + (0 | a) * (c + 1))), u + p < 1 ? (r[m] = y[$] * (1 - u - p) + y[$ + 2] * u + y[$ + 2 * (c + 1)] * p, r[m + 1] = y[$ + 1] * (1 - u - p) + y[$ + 3] * u + y[$ + 2 * (c + 1) + 1] * p) : (r[m] = y[$ + 2 * (c + 1) + 2] * (u - 1 + p) + y[$ + 2 * (c + 1)] * (1 - u) + y[$ + 2] * (1 - p), r[m + 1] = y[$ + 2 * (c + 1) + 3] * (u - 1 + p) + y[$ + 2 * (c + 1) + 1] * (1 - u) + y[$ + 3] * (1 - p))
- }, Z.prototype._$VT = function() {
- return (this._$o + 1) * (this._$A + 1)
- }, Z.prototype.getType = function() {
- return I._$_b
- }, K.prototype = new _t, tt._$42 = 0, tt.prototype._$zP = function() {
- this._$3S = new Array, this._$aS = new Array
- }, tt.prototype._$F0 = function(t) {
- this._$g0 = t._$8L(), this.visible = t._$8L(), this._$NL = t._$nP(), this._$3S = t._$nP(), this._$aS = t._$nP()
- }, tt.prototype.init = function(t) {
- var i = new it(this);
- return i.setPartsOpacity(this.isVisible() ? 1 : 0), i
- }, tt.prototype._$6o = function(t) {
- if (null == this._$3S) throw new Error("_$3S _$6 _$Wo@_$6o");
- this._$3S.push(t)
- }, tt.prototype._$3o = function(t) {
- if (null == this._$aS) throw new Error("_$aS _$6 _$Wo@_$3o");
- this._$aS.push(t)
- }, tt.prototype._$Zo = function(t) {
- this._$3S = t
- }, tt.prototype._$xo = function(t) {
- this._$aS = t
- }, tt.prototype.isVisible = function() {
- return this.visible
- }, tt.prototype._$uL = function() {
- return this._$g0
- }, tt.prototype._$KP = function(t) {
- this.visible = t
- }, tt.prototype._$ET = function(t) {
- this._$g0 = t
- }, tt.prototype.getBaseData = function() {
- return this._$3S
- }, tt.prototype.getDrawData = function() {
- return this._$aS
- }, tt.prototype._$p2 = function() {
- return this._$NL
- }, tt.prototype._$ob = function(t) {
- this._$NL = t
- }, tt.prototype.getPartsID = function() {
- return this._$NL
- }, tt.prototype._$MP = function(t) {
- this._$NL = t
- }, it.prototype = new $, it.prototype.getPartsOpacity = function() {
- return this._$VS
- }, it.prototype.setPartsOpacity = function(t) {
- this._$VS = t
- }, et._$L7 = function() {
- u._$27(), yt._$27(), b._$27(), l._$27()
- }, et.prototype.toString = function() {
- return this.id
- }, rt.prototype._$F0 = function(t) {}, ot.prototype._$1s = function() {
- return this._$4S
- }, ot.prototype._$zP = function() {
- this._$4S = new Array
- }, ot.prototype._$F0 = function(t) {
- this._$4S = t._$nP()
- }, ot.prototype._$Ks = function(t) {
- this._$4S.push(t)
- }, nt.tr = new gt, nt._$50 = new gt, nt._$Ti = new Array(0, 0), nt._$Pi = new Array(0, 0), nt._$B = new Array(0, 0), nt.prototype._$lP = function(t, i, e, r) {
- this.viewport = new Array(t, i, e, r)
- }, nt.prototype._$bL = function() {
- this.context.save();
- var t = this.viewport;
- null != t && (this.context.beginPath(), this.context._$Li(t[0], t[1], t[2], t[3]), this.context.clip())
- }, nt.prototype._$ei = function() {
- this.context.restore()
- }, nt.prototype.drawElements = function(t, i, e, r, o, n, s, a) {
- try {
- o != this._$Qo && (this._$Qo = o, this.context.globalAlpha = o);
- for (var h = i.length, l = t.width, $ = t.height, u = this.context, p = this._$xP, f = this._$uP, c = this._$6r, d = this._$3r, g = nt.tr, y = nt._$Ti, m = nt._$Pi, T = nt._$B, P = 0; P < h; P += 3) {
- u.save();
- var S = i[P],
- v = i[P + 1],
- L = i[P + 2],
- M = p + c * e[2 * S],
- E = f + d * e[2 * S + 1],
- A = p + c * e[2 * v],
- I = f + d * e[2 * v + 1],
- w = p + c * e[2 * L],
- x = f + d * e[2 * L + 1];
- s && (s._$PS(M, E, T), M = T[0], E = T[1], s._$PS(A, I, T), A = T[0], I = T[1], s._$PS(w, x, T), w = T[0], x = T[1]);
- var O = l * r[2 * S],
- D = $ - $ * r[2 * S + 1],
- R = l * r[2 * v],
- b = $ - $ * r[2 * v + 1],
- F = l * r[2 * L],
- C = $ - $ * r[2 * L + 1],
- N = Math.atan2(b - D, R - O),
- B = Math.atan2(I - E, A - M),
- U = A - M,
- G = I - E,
- Y = Math.sqrt(U * U + G * G),
- k = R - O,
- V = b - D,
- X = Math.sqrt(k * k + V * V),
- z = Y / X;
- It._$ni(F, C, O, D, R - O, b - D, -(b - D), R - O, y), It._$ni(w, x, M, E, A - M, I - E, -(I - E), A - M, m);
- var H = (m[0] - y[0]) / y[1],
- W = Math.min(O, R, F),
- j = Math.max(O, R, F),
- q = Math.min(D, b, C),
- J = Math.max(D, b, C),
- Q = Math.floor(W),
- Z = Math.floor(q),
- K = Math.ceil(j),
- tt = Math.ceil(J);
- g.identity(), g.translate(M, E), g.rotate(B), g.scale(1, m[1] / y[1]), g.shear(H, 0), g.scale(z, z), g.rotate(-N), g.translate(-O, -D), g.setContext(u);
- if (n || (n = 1.2), at.IGNORE_EXPAND && (n = 0), at.USE_CACHED_POLYGON_IMAGE) {
- var it = a._$e0;
- if (it.gl_cacheImage = it.gl_cacheImage || {}, !it.gl_cacheImage[P]) {
- var et = nt.createCanvas(K - Q, tt - Z);
- at.DEBUG_DATA.LDGL_CANVAS_MB = at.DEBUG_DATA.LDGL_CANVAS_MB || 0, at.DEBUG_DATA.LDGL_CANVAS_MB += (K - Q) * (tt - Z) * 4;
- var rt = et.getContext("2d");
- rt.translate(-Q, -Z), nt.clip(rt, g, n, Y, O, D, R, b, F, C, M, E, A, I, w, x), rt.drawImage(t, 0, 0), it.gl_cacheImage[P] = {
- cacheCanvas: et,
- cacheContext: rt
- }
- }
- u.drawImage(it.gl_cacheImage[P].cacheCanvas, Q, Z)
- } else at.IGNORE_CLIP || nt.clip(u, g, n, Y, O, D, R, b, F, C, M, E, A, I, w, x), at.USE_ADJUST_TRANSLATION && (W = 0, j = l, q = 0, J = $), u.drawImage(t, W, q, j - W, J - q, W, q, j - W, J - q);
- u.restore()
- }
- } catch (t) {
- _._$Rb(t)
- }
- }, nt.clip = function(t, i, e, r, o, n, s, _, a, h, l, $, u, p, f, c) {
- e > .02 ? nt.expandClip(t, i, e, r, l, $, u, p, f, c) : nt.clipWithTransform(t, null, o, n, s, _, a, h)
- }, nt.expandClip = function(t, i, e, r, o, n, s, _, a, h) {
- var l = s - o,
- $ = _ - n,
- u = a - o,
- p = h - n,
- f = l * p - $ * u > 0 ? e : -e,
- c = -$,
- d = l,
- g = a - s,
- y = h - _,
- m = -y,
- T = g,
- P = Math.sqrt(g * g + y * y),
- S = -p,
- v = u,
- L = Math.sqrt(u * u + p * p),
- M = o - f * c / r,
- E = n - f * d / r,
- A = s - f * c / r,
- I = _ - f * d / r,
- w = s - f * m / P,
- x = _ - f * T / P,
- O = a - f * m / P,
- D = h - f * T / P,
- R = o + f * S / L,
- b = n + f * v / L,
- F = a + f * S / L,
- C = h + f * v / L,
- N = nt._$50;
- return null != i._$P2(N) && (nt.clipWithTransform(t, N, M, E, A, I, w, x, O, D, F, C, R, b), !0)
- }, nt.clipWithTransform = function(t, i, e, r, o, n, s, a) {
- if (arguments.length < 7) return void _._$li("err : @LDGL.clip()");
- if (!(arguments[1] instanceof gt)) return void _._$li("err : a[0] is _$6 LDTransform @LDGL.clip()");
- var h = nt._$B,
- l = i,
- $ = arguments;
- if (t.beginPath(), l) {
- l._$PS($[2], $[3], h), t.moveTo(h[0], h[1]);
- for (var u = 4; u < $.length; u += 2) l._$PS($[u], $[u + 1], h), t.lineTo(h[0], h[1])
- } else {
- t.moveTo($[2], $[3]);
- for (var u = 4; u < $.length; u += 2) t.lineTo($[u], $[u + 1])
- }
- t.clip()
- }, nt.createCanvas = function(t, i) {
- var e = document.createElement("canvas");
- return e.setAttribute("width", t), e.setAttribute("height", i), e || _._$li("err : " + e), e
- }, nt.dumpValues = function() {
- for (var t = "", i = 0; i < arguments.length; i++) t += "[" + i + "]= " + arguments[i].toFixed(3) + " , ";
- console.log(t)
- }, st.prototype._$F0 = function(t) {
- this._$TT = t._$_T(), this._$LT = t._$_T(), this._$FS = t._$_T(), this._$wL = t._$nP()
- }, st.prototype.getMinValue = function() {
- return this._$TT
- }, st.prototype.getMaxValue = function() {
- return this._$LT
- }, st.prototype.getDefaultValue = function() {
- return this._$FS
- }, st.prototype.getParamID = function() {
- return this._$wL
- }, _t.prototype._$yo = function() {
- return this._$AT && !this._$JS
- }, _t.prototype._$hS = function(t) {
- this._$AT = t
- }, _t.prototype._$GT = function() {
- return this._$e0
- }, _t.prototype._$l2 = function(t) {
- this._$IP = t
- }, _t.prototype.getPartsIndex = function() {
- return this._$IP
- }, _t.prototype._$x2 = function() {
- return this._$JS
- }, _t.prototype._$Ib = function(t) {
- this._$JS = t
- }, _t.prototype.getTotalScale = function() {
- return this.totalScale
- }, _t.prototype.setTotalScale_notForClient = function(t) {
- this.totalScale = t
- }, _t.prototype.getInterpolatedOpacity = function() {
- return this._$7s
- }, _t.prototype.setInterpolatedOpacity = function(t) {
- this._$7s = t
- }, _t.prototype.getTotalOpacity = function(t) {
- return this.totalOpacity
- }, _t.prototype.setTotalOpacity = function(t) {
- this.totalOpacity = t
- }, at._$2s = "2.1.00_1", at._$Kr = 201001e3, at._$sP = !0, at._$so = !0, at._$cb = !1, at._$3T = !0, at._$Ts = !0, at._$fb = !0, at._$ts = !0, at.L2D_DEFORMER_EXTEND = !0, at._$Wb = !1;
- at._$yr = !1, at._$Zs = !1, at.L2D_NO_ERROR = 0, at._$i7 = 1e3, at._$9s = 1001, at._$es = 1100, at._$r7 = 2e3, at._$07 = 2001, at._$b7 = 2002, at._$H7 = 4e3, at.L2D_COLOR_BLEND_MODE_MULT = 0, at.L2D_COLOR_BLEND_MODE_ADD = 1, at.L2D_COLOR_BLEND_MODE_INTERPOLATE = 2, at._$6b = !0, at._$cT = 0, at.clippingMaskBufferSize = 256, at.glContext = new Array, at.frameBuffers = new Array, at.fTexture = new Array, at.IGNORE_CLIP = !1, at.IGNORE_EXPAND = !1, at.EXPAND_W = 2, at.USE_ADJUST_TRANSLATION = !0, at.USE_CANVAS_TRANSFORM = !0, at.USE_CACHED_POLYGON_IMAGE = !1, at.DEBUG_DATA = {}, at.PROFILE_IOS_SPEED = {
- PROFILE_NAME: "iOS Speed",
- USE_ADJUST_TRANSLATION: !0,
- USE_CACHED_POLYGON_IMAGE: !0,
- EXPAND_W: 4
- }, at.PROFILE_IOS_QUALITY = {
- PROFILE_NAME: "iOS HiQ",
- USE_ADJUST_TRANSLATION: !0,
- USE_CACHED_POLYGON_IMAGE: !1,
- EXPAND_W: 2
- }, at.PROFILE_IOS_DEFAULT = at.PROFILE_IOS_QUALITY, at.PROFILE_ANDROID = {
- PROFILE_NAME: "Android",
- USE_ADJUST_TRANSLATION: !1,
- USE_CACHED_POLYGON_IMAGE: !1,
- EXPAND_W: 2
- }, at.PROFILE_DESKTOP = {
- PROFILE_NAME: "Desktop",
- USE_ADJUST_TRANSLATION: !1,
- USE_CACHED_POLYGON_IMAGE: !1,
- EXPAND_W: 2
- }, at.initProfile = function() {
- Et.isIOS() ? at.setupProfile(at.PROFILE_IOS_DEFAULT) : Et.isAndroid() ? at.setupProfile(at.PROFILE_ANDROID) : at.setupProfile(at.PROFILE_DESKTOP)
- }, at.setupProfile = function(t, i) {
- if ("number" == typeof t) switch (t) {
- case 9901:
- t = at.PROFILE_IOS_SPEED;
- break;
- case 9902:
- t = at.PROFILE_IOS_QUALITY;
- break;
- case 9903:
- t = at.PROFILE_IOS_DEFAULT;
- break;
- case 9904:
- t = at.PROFILE_ANDROID;
- break;
- case 9905:
- t = at.PROFILE_DESKTOP;
- break;
- default:
- alert("profile _$6 _$Ui : " + t)
- }
- arguments.length < 2 && (i = !0), i && console.log("profile : " + t.PROFILE_NAME);
- for (var e in t) at[e] = t[e], i && console.log(" [" + e + "] = " + t[e])
- }, at.init = function() {
- if (at._$6b) {
- console.log("Live2D %s", at._$2s), at._$6b = !1;
- !0, at.initProfile()
- }
- }, at.getVersionStr = function() {
- return at._$2s
- }, at.getVersionNo = function() {
- return at._$Kr
- }, at._$sT = function(t) {
- at._$cT = t
- }, at.getError = function() {
- var t = at._$cT;
- return at._$cT = 0, t
- }, at.dispose = function() {
- at.glContext = [], at.frameBuffers = [], at.fTexture = []
- }, at.setGL = function(t, i) {
- var e = i || 0;
- at.glContext[e] = t
- }, at.getGL = function(t) {
- return at.glContext[t]
- }, at.setClippingMaskBufferSize = function(t) {
- at.clippingMaskBufferSize = t
- }, at.getClippingMaskBufferSize = function() {
- return at.clippingMaskBufferSize
- }, at.deleteBuffer = function(t) {
- at.getGL(t).deleteFramebuffer(at.frameBuffers[t].framebuffer), delete at.frameBuffers[t], delete at.glContext[t]
- }, ht._$r2 = function(t) {
- return t < 0 ? 0 : t > 1 ? 1 : .5 - .5 * Math.cos(t * Lt.PI_F)
- }, lt._$fr = -1, lt.prototype.toString = function() {
- return this._$ib
- }, $t.prototype = new W, $t._$42 = 0, $t._$Os = 30, $t._$ms = 0, $t._$ns = 1, $t._$_s = 2, $t._$gT = new Array, $t.prototype._$_S = function(t) {
- this._$LP = t
- }, $t.prototype.getTextureNo = function() {
- return this._$LP
- }, $t.prototype._$ZL = function() {
- return this._$Qi
- }, $t.prototype._$H2 = function() {
- return this._$JP
- }, $t.prototype.getNumPoints = function() {
- return this._$d0
- }, $t.prototype.getType = function() {
- return W._$wb
- }, $t.prototype._$B2 = function(t, i, e) {
- var r = i,
- o = null != r._$hr ? r._$hr : r._$Cr;
- switch (U._$do) {
- default:
- case U._$Ms:
- throw new Error("_$L _$ro ");
- case U._$Qs:
- for (var n = this._$d0 - 1; n >= 0; --n) o[n * U._$No + 4] = e
- }
- }, $t.prototype._$zP = function() {
- this._$GS = new D, this._$GS._$zP()
- }, $t.prototype._$F0 = function(t) {
- W.prototype._$F0.call(this, t), this._$LP = t._$6L(), this._$d0 = t._$6L(), this._$Yo = t._$6L();
- var i = t._$nP();
- this._$BP = new Int16Array(3 * this._$Yo);
- for (var e = 3 * this._$Yo - 1; e >= 0; --e) this._$BP[e] = i[e];
- if (this._$Eo = t._$nP(), this._$Qi = t._$nP(), t.getFormatVersion() >= G._$s7) {
- if (this._$JP = t._$6L(), 0 != this._$JP) {
- if (0 != (1 & this._$JP)) {
- var r = t._$6L();
- null == this._$5P && (this._$5P = new Object), this._$5P._$Hb = parseInt(r)
- }
- 0 != (this._$JP & $t._$Os) ? this._$6s = (this._$JP & $t._$Os) >> 1 : this._$6s = $t._$ms, 0 != (32 & this._$JP) && (this.culling = !1)
- }
- } else this._$JP = 0
- }, $t.prototype.init = function(t) {
- var i = new ut(this),
- e = this._$d0 * U._$No,
- r = this._$32();
- switch (null != i._$Cr && (i._$Cr = null), i._$Cr = new Float32Array(e), null != i._$hr && (i._$hr = null), i._$hr = r ? new Float32Array(e) : null, U._$do) {
- default:
- case U._$Ms:
- if (U._$Ls) for (var o = this._$d0 - 1; o >= 0; --o) {
- var n = o << 1;
- this._$Qi[n + 1] = 1 - this._$Qi[n + 1]
- }
- break;
- case U._$Qs:
- for (var o = this._$d0 - 1; o >= 0; --o) {
- var n = o << 1,
- s = o * U._$No,
- _ = this._$Qi[n],
- a = this._$Qi[n + 1];
- i._$Cr[s] = _, i._$Cr[s + 1] = a, i._$Cr[s + 4] = 0, r && (i._$hr[s] = _, i._$hr[s + 1] = a, i._$hr[s + 4] = 0)
- }
- }
- return i
- }, $t.prototype._$Nr = function(t, i) {
- var e = i;
- if (this != e._$GT() && console.log("### assert!! ### "), this._$GS._$Ur(t) && (W.prototype._$Nr.call(this, t, e), !e._$IS[0])) {
- var r = $t._$gT;
- r[0] = !1, v._$Vr(t, this._$GS, r, this._$d0, this._$Eo, e._$Cr, U._$i2, U._$No)
- }
- }, $t.prototype._$2b = function(t, i) {
- try {
- this != i._$GT() && console.log("### assert!! ### ");
- var e = !1;
- i._$IS[0] && (e = !0);
- var r = i;
- if (!e && (W.prototype._$2b.call(this, t), this._$32())) {
- var o = this.getTargetBaseDataID();
- if (r._$8r == W._$ur && (r._$8r = t.getBaseDataIndex(o)), r._$8r < 0) at._$so && _._$li("_$L _$0P _$G :: %s", o);
- else {
- var n = t.getBaseData(r._$8r),
- s = t._$q2(r._$8r);
- null == n || s._$x2() ? r._$AT = !1 : (n._$nb(t, s, r._$Cr, r._$hr, this._$d0, U._$i2, U._$No), r._$AT = !0), r.baseOpacity = s.getTotalOpacity()
- }
- }
- } catch (t) {
- throw t
- }
- }, $t.prototype.draw = function(t, i, e) {
- if (this != e._$GT() && console.log("### assert!! ### "), !e._$IS[0]) {
- var r = e,
- o = this._$LP;
- o < 0 && (o = 1);
- var n = this.getOpacity(i, r) * e._$VS * e.baseOpacity,
- s = null != r._$hr ? r._$hr : r._$Cr;
- t.setClipBufPre_clipContextForDraw(e.clipBufPre_clipContext), t._$WP(this.culling), t._$Uo(o, 3 * this._$Yo, this._$BP, s, this._$Qi, n, this._$6s, r)
- }
- }, $t.prototype.dump = function() {
- console.log(" _$yi( %d ) , _$d0( %d ) , _$Yo( %d ) \n", this._$LP, this._$d0, this._$Yo), console.log(" _$Oi _$di = { ");
- for (var t = 0; t < this._$BP.length; t++) console.log("%5d ,", this._$BP[t]);
- console.log("\n _$5i _$30");
- for (var t = 0; t < this._$Eo.length; t++) {
- console.log("\n _$30[%d] = ", t);
- for (var i = this._$Eo[t], e = 0; e < i.length; e++) console.log("%6.2f, ", i[e])
- }
- console.log("\n")
- }, $t.prototype._$72 = function(t) {
- return null == this._$5P ? null : this._$5P[t]
- }, $t.prototype.getIndexArray = function() {
- return this._$BP
- }, ut.prototype = new Mt, ut.prototype.getTransformedPoints = function() {
- return null != this._$hr ? this._$hr : this._$Cr
- }, pt.prototype._$HT = function(t) {
- this.x = t.x, this.y = t.y
- }, pt.prototype._$HT = function(t, i) {
- this.x = t, this.y = i
- }, ft.prototype = new i, ft.loadModel = function(t) {
- var e = new ft;
- return i._$62(e, t), e
- }, ft.loadModel = function(t, e) {
- var r = e || 0,
- o = new ft(r);
- return i._$62(o, t), o
- }, ft._$to = function() {
- return new ft
- }, ft._$er = function(t) {
- var i = new _$5("../_$_r/_$t0/_$Ri/_$_P._$d");
- if (0 == i.exists()) throw new _$ls("_$t0 _$_ _$6 _$Ui :: " + i._$PL());
- for (var e = ["../_$_r/_$t0/_$Ri/_$_P.512/_$CP._$1", "../_$_r/_$t0/_$Ri/_$_P.512/_$vP._$1", "../_$_r/_$t0/_$Ri/_$_P.512/_$EP._$1", "../_$_r/_$t0/_$Ri/_$_P.512/_$pP._$1"], r = ft.loadModel(i._$3b()), o = 0; o < e.length; o++) {
- var n = new _$5(e[o]);
- if (0 == n.exists()) throw new _$ls("_$t0 _$_ _$6 _$Ui :: " + n._$PL());
- r.setTexture(o, _$nL._$_o(t, n._$3b()))
- }
- return r
- }, ft.prototype.setGL = function(t) {
- at.setGL(t)
- }, ft.prototype.setTransform = function(t) {
- this.drawParamWebGL.setTransform(t)
- }, ft.prototype.update = function() {
- this._$5S.update(), this._$5S.preDraw(this.drawParamWebGL)
- }, ft.prototype.draw = function() {
- this._$5S.draw(this.drawParamWebGL)
- }, ft.prototype._$K2 = function() {
- this.drawParamWebGL._$K2()
- }, ft.prototype.setTexture = function(t, i) {
- null == this.drawParamWebGL && _._$li("_$Yi for QT _$ki / _$XS() is _$6 _$ui!!"), this.drawParamWebGL.setTexture(t, i)
- }, ft.prototype.setTexture = function(t, i) {
- null == this.drawParamWebGL && _._$li("_$Yi for QT _$ki / _$XS() is _$6 _$ui!!"), this.drawParamWebGL.setTexture(t, i)
- }, ft.prototype._$Rs = function() {
- return this.drawParamWebGL._$Rs()
- }, ft.prototype._$Ds = function(t) {
- this.drawParamWebGL._$Ds(t)
- }, ft.prototype.getDrawParam = function() {
- return this.drawParamWebGL
- }, ft.prototype.setMatrix = function(t) {
- this.drawParamWebGL.setMatrix(t)
- }, ft.prototype.setPremultipliedAlpha = function(t) {
- this.drawParamWebGL.setPremultipliedAlpha(t)
- }, ft.prototype.isPremultipliedAlpha = function() {
- return this.drawParamWebGL.isPremultipliedAlpha()
- }, ft.prototype.setAnisotropy = function(t) {
- this.drawParamWebGL.setAnisotropy(t)
- }, ft.prototype.getAnisotropy = function() {
- return this.drawParamWebGL.getAnisotropy()
- }, ct.prototype._$tb = function() {
- return this.motions
- }, ct.prototype.startMotion = function(t, i) {
- for (var e = null, r = this.motions.length, o = 0; o < r; ++o) null != (e = this.motions[o]) && (e._$qS(e._$w0.getFadeOut()), this._$eb && _._$Ji("MotionQueueManager[size:%2d]->startMotion() / start _$K _$3 (m%d)\n", r, e._$sr));
- if (null == t) return -1;
- e = new dt, e._$w0 = t, this.motions.push(e);
- var n = e._$sr;
- return this._$eb && _._$Ji("MotionQueueManager[size:%2d]->startMotion() / new _$w0 (m%d)\n", r, n), n
- }, ct.prototype.updateParam = function(t) {
- try {
- for (var i = !1, e = 0; e < this.motions.length; e++) {
- var r = this.motions[e];
- if (null != r) {
- var o = r._$w0;
- null != o ? (o.updateParam(t, r), i = !0, r.isFinished() && (this._$eb && _._$Ji("MotionQueueManager[size:%2d]->updateParam() / _$T0 _$w0 (m%d)\n", this.motions.length - 1, r._$sr), this.motions.splice(e, 1), e--)) : (this.motions = this.motions.splice(e, 1), e--)
- } else this.motions.splice(e, 1), e--
- }
- return i
- } catch (t) {
- return _._$li(t), !0
- }
- }, ct.prototype.isFinished = function(t) {
- if (arguments.length >= 1) {
- for (var i = 0; i < this.motions.length; i++) {
- var e = this.motions[i];
- if (null != e && (e._$sr == t && !e.isFinished())) return !1
- }
- return !0
- }
- for (var i = 0; i < this.motions.length; i++) {
- var e = this.motions[i];
- if (null != e) {
- if (null != e._$w0) {
- if (!e.isFinished()) return !1
- } else this.motions.splice(i, 1), i--
- } else this.motions.splice(i, 1), i--
- }
- return !0
- }, ct.prototype.stopAllMotions = function() {
- for (var t = 0; t < this.motions.length; t++) {
- var i = this.motions[t];
- if (null != i) {
- i._$w0;
- this.motions.splice(t, 1), t--
- } else this.motions.splice(t, 1), t--
- }
- }, ct.prototype._$Zr = function(t) {
- this._$eb = t
- }, ct.prototype._$e = function() {
- console.log("-- _$R --\n");
- for (var t = 0; t < this.motions.length; t++) {
- var i = this.motions[t],
- e = i._$w0;
- console.log("MotionQueueEnt[%d] :: %s\n", this.motions.length, e.toString())
- }
- }, dt._$Gs = 0, dt.prototype.isFinished = function() {
- return this._$9L
- }, dt.prototype._$qS = function(t) {
- var i = w.getUserTimeMSec(),
- e = i + t;
- (this._$Do < 0 || e < this._$Do) && (this._$Do = e)
- }, dt.prototype._$Bs = function() {
- return this._$sr
- }, gt.prototype.setContext = function(t) {
- var i = this.m;
- t.transform(i[0], i[1], i[3], i[4], i[6], i[7])
- }, gt.prototype.toString = function() {
- for (var t = "LDTransform { ", i = 0; i < 9; i++) t += this.m[i].toFixed(2) + " ,";
- return t += " }"
- }, gt.prototype.identity = function() {
- var t = this.m;
- t[0] = t[4] = t[8] = 1, t[1] = t[2] = t[3] = t[5] = t[6] = t[7] = 0
- }, gt.prototype._$PS = function(t, i, e) {
- null == e && (e = new Array(0, 0));
- var r = this.m;
- return e[0] = r[0] * t + r[3] * i + r[6], e[1] = r[1] * t + r[4] * i + r[7], e
- }, gt.prototype._$P2 = function(t) {
- t || (t = new gt);
- var i = this.m,
- e = i[0],
- r = i[1],
- o = i[2],
- n = i[3],
- s = i[4],
- _ = i[5],
- a = i[6],
- h = i[7],
- l = i[8],
- $ = e * s * l + r * _ * a + o * n * h - e * _ * h - o * s * a - r * n * l;
- if (0 == $) return null;
- var u = 1 / $;
- return t.m[0] = u * (s * l - h * _), t.m[1] = u * (h * o - r * l), t.m[2] = u * (r * _ - s * o), t.m[3] = u * (a * _ - n * l), t.m[4] = u * (e * l - a * o), t.m[5] = u * (n * o - e * _), t.m[6] = u * (n * h - a * s), t.m[7] = u * (a * r - e * h), t.m[8] = u * (e * s - n * r), t
- }, gt.prototype.transform = function(t, i, e) {
- null == e && (e = new Array(0, 0));
- var r = this.m;
- return e[0] = r[0] * t + r[3] * i + r[6], e[1] = r[1] * t + r[4] * i + r[7], e
- }, gt.prototype.translate = function(t, i) {
- var e = this.m;
- e[6] = e[0] * t + e[3] * i + e[6], e[7] = e[1] * t + e[4] * i + e[7], e[8] = e[2] * t + e[5] * i + e[8]
- }, gt.prototype.scale = function(t, i) {
- var e = this.m;
- e[0] *= t, e[1] *= t, e[2] *= t, e[3] *= i, e[4] *= i, e[5] *= i
- }, gt.prototype.shear = function(t, i) {
- var e = this.m,
- r = e[0] + e[3] * i,
- o = e[1] + e[4] * i,
- n = e[2] + e[5] * i;
- e[3] = e[0] * t + e[3], e[4] = e[1] * t + e[4], e[5] = e[2] * t + e[5], e[0] = r, e[1] = o, e[2] = n
- }, gt.prototype.rotate = function(t) {
- var i = this.m,
- e = Math.cos(t),
- r = Math.sin(t),
- o = i[0] * e + i[3] * r,
- n = i[1] * e + i[4] * r,
- s = i[2] * e + i[5] * r;
- i[3] = -i[0] * r + i[3] * e, i[4] = -i[1] * r + i[4] * e, i[5] = -i[2] * r + i[5] * e, i[0] = o, i[1] = n, i[2] = s
- }, gt.prototype.concatenate = function(t) {
- var i = this.m,
- e = t.m,
- r = i[0] * e[0] + i[3] * e[1] + i[6] * e[2],
- o = i[1] * e[0] + i[4] * e[1] + i[7] * e[2],
- n = i[2] * e[0] + i[5] * e[1] + i[8] * e[2],
- s = i[0] * e[3] + i[3] * e[4] + i[6] * e[5],
- _ = i[1] * e[3] + i[4] * e[4] + i[7] * e[5],
- a = i[2] * e[3] + i[5] * e[4] + i[8] * e[5],
- h = i[0] * e[6] + i[3] * e[7] + i[6] * e[8],
- l = i[1] * e[6] + i[4] * e[7] + i[7] * e[8],
- $ = i[2] * e[6] + i[5] * e[7] + i[8] * e[8];
- m[0] = r, m[1] = o, m[2] = n, m[3] = s, m[4] = _, m[5] = a, m[6] = h, m[7] = l, m[8] = $
- }, yt.prototype = new et, yt._$eT = null, yt._$tP = new Object, yt._$2o = function() {
- return null == yt._$eT && (yt._$eT = yt.getID("DST_BASE")), yt._$eT
- }, yt._$27 = function() {
- yt._$tP.clear(), yt._$eT = null
- }, yt.getID = function(t) {
- var i = yt._$tP[t];
- return null == i && (i = new yt(t), yt._$tP[t] = i), i
- }, yt.prototype._$3s = function() {
- return new yt
- }, mt.prototype = new E, mt._$9r = function(t) {
- return new Float32Array(t)
- }, mt._$vb = function(t) {
- return new Int16Array(t)
- }, mt._$cr = function(t, i) {
- return null == t || t._$yL() < i.length ? (t = mt._$9r(2 * i.length), t.put(i), t._$oT(0)) : (t.clear(), t.put(i), t._$oT(0)), t
- }, mt._$mb = function(t, i) {
- return null == t || t._$yL() < i.length ? (t = mt._$vb(2 * i.length), t.put(i), t._$oT(0)) : (t.clear(), t.put(i), t._$oT(0)), t
- }, mt._$Hs = function() {
- return this._$Gr
- }, mt._$as = function(t) {
- this._$Gr = t
- }, mt.prototype.getGL = function() {
- return this.gl
- }, mt.prototype.setGL = function(t) {
- this.gl = t
- }, mt.prototype.setTransform = function(t) {
- this.transform = t
- }, mt.prototype._$ZT = function() {
- var t = this.gl;
- this.firstDraw && (this.initShader(), this.firstDraw = !1, this.anisotropyExt = t.getExtension("EXT_texture_filter_anisotropic") || t.getExtension("WEBKIT_EXT_texture_filter_anisotropic") || t.getExtension("MOZ_EXT_texture_filter_anisotropic"), this.anisotropyExt && (this.maxAnisotropy = t.getParameter(this.anisotropyExt.MAX_TEXTURE_MAX_ANISOTROPY_EXT))), t.disable(t.SCISSOR_TEST), t.disable(t.STENCIL_TEST), t.disable(t.DEPTH_TEST), t.frontFace(t.CW), t.enable(t.BLEND), t.colorMask(1, 1, 1, 1), t.bindBuffer(t.ARRAY_BUFFER, null), t.bindBuffer(t.ELEMENT_ARRAY_BUFFER, null)
- }, mt.prototype._$Uo = function(t, i, e, r, o, n, s, _) {
- if (!(n < .01 && null == this.clipBufPre_clipContextMask)) {
- var a = (n > .9 && at.EXPAND_W, this.gl);
- if (null == this.gl) throw new Error("gl is null");
- var h = 1 * this._$C0 * n,
- l = 1 * this._$tT * n,
- $ = 1 * this._$WL * n,
- u = this._$lT * n;
- if (null != this.clipBufPre_clipContextMask) {
- a.frontFace(a.CCW), a.useProgram(this.shaderProgram), this._$vS = Tt(a, this._$vS, r), this._$no = Pt(a, this._$no, e), a.enableVertexAttribArray(this.a_position_Loc), a.vertexAttribPointer(this.a_position_Loc, 2, a.FLOAT, !1, 0, 0), this._$NT = Tt(a, this._$NT, o), a.activeTexture(a.TEXTURE1), a.bindTexture(a.TEXTURE_2D, this.textures[t]), a.uniform1i(this.s_texture0_Loc, 1), a.enableVertexAttribArray(this.a_texCoord_Loc), a.vertexAttribPointer(this.a_texCoord_Loc, 2, a.FLOAT, !1, 0, 0), a.uniformMatrix4fv(this.u_matrix_Loc, !1, this.getClipBufPre_clipContextMask().matrixForMask);
- var p = this.getClipBufPre_clipContextMask().layoutChannelNo,
- f = this.getChannelFlagAsColor(p);
- a.uniform4f(this.u_channelFlag, f.r, f.g, f.b, f.a);
- var c = this.getClipBufPre_clipContextMask().layoutBounds;
- a.uniform4f(this.u_baseColor_Loc, 2 * c.x - 1, 2 * c.y - 1, 2 * c._$EL() - 1, 2 * c._$5T() - 1), a.uniform1i(this.u_maskFlag_Loc, !0)
- } else if (null != this.getClipBufPre_clipContextDraw()) {
- a.useProgram(this.shaderProgramOff), this._$vS = Tt(a, this._$vS, r), this._$no = Pt(a, this._$no, e), a.enableVertexAttribArray(this.a_position_Loc_Off), a.vertexAttribPointer(this.a_position_Loc_Off, 2, a.FLOAT, !1, 0, 0), this._$NT = Tt(a, this._$NT, o), a.activeTexture(a.TEXTURE1), a.bindTexture(a.TEXTURE_2D, this.textures[t]), a.uniform1i(this.s_texture0_Loc_Off, 1), a.enableVertexAttribArray(this.a_texCoord_Loc_Off), a.vertexAttribPointer(this.a_texCoord_Loc_Off, 2, a.FLOAT, !1, 0, 0), a.uniformMatrix4fv(this.u_clipMatrix_Loc_Off, !1, this.getClipBufPre_clipContextDraw().matrixForDraw), a.uniformMatrix4fv(this.u_matrix_Loc_Off, !1, this.matrix4x4), a.activeTexture(a.TEXTURE2), a.bindTexture(a.TEXTURE_2D, at.fTexture[this.glno]), a.uniform1i(this.s_texture1_Loc_Off, 2);
- var p = this.getClipBufPre_clipContextDraw().layoutChannelNo,
- f = this.getChannelFlagAsColor(p);
- a.uniform4f(this.u_channelFlag_Loc_Off, f.r, f.g, f.b, f.a), a.uniform4f(this.u_baseColor_Loc_Off, h, l, $, u)
- } else a.useProgram(this.shaderProgram), this._$vS = Tt(a, this._$vS, r), this._$no = Pt(a, this._$no, e), a.enableVertexAttribArray(this.a_position_Loc), a.vertexAttribPointer(this.a_position_Loc, 2, a.FLOAT, !1, 0, 0), this._$NT = Tt(a, this._$NT, o), a.activeTexture(a.TEXTURE1), a.bindTexture(a.TEXTURE_2D, this.textures[t]), a.uniform1i(this.s_texture0_Loc, 1), a.enableVertexAttribArray(this.a_texCoord_Loc), a.vertexAttribPointer(this.a_texCoord_Loc, 2, a.FLOAT, !1, 0, 0), a.uniformMatrix4fv(this.u_matrix_Loc, !1, this.matrix4x4), a.uniform4f(this.u_baseColor_Loc, h, l, $, u), a.uniform1i(this.u_maskFlag_Loc, !1);
- this.culling ? this.gl.enable(a.CULL_FACE) : this.gl.disable(a.CULL_FACE), this.gl.enable(a.BLEND);
- var d, g, y, m;
- if (null != this.clipBufPre_clipContextMask) d = a.ONE, g = a.ONE_MINUS_SRC_ALPHA, y = a.ONE, m = a.ONE_MINUS_SRC_ALPHA;
- else switch (s) {
- case $t._$ms:
- d = a.ONE, g = a.ONE_MINUS_SRC_ALPHA, y = a.ONE, m = a.ONE_MINUS_SRC_ALPHA;
- break;
- case $t._$ns:
- d = a.ONE, g = a.ONE, y = a.ZERO, m = a.ONE;
- break;
- case $t._$_s:
- d = a.DST_COLOR, g = a.ONE_MINUS_SRC_ALPHA, y = a.ZERO, m = a.ONE
- }
- a.blendEquationSeparate(a.FUNC_ADD, a.FUNC_ADD), a.blendFuncSeparate(d, g, y, m), this.anisotropyExt && a.texParameteri(a.TEXTURE_2D, this.anisotropyExt.TEXTURE_MAX_ANISOTROPY_EXT, this.maxAnisotropy);
- var T = e.length;
- a.drawElements(a.TRIANGLES, T, a.UNSIGNED_SHORT, 0), a.bindTexture(a.TEXTURE_2D, null)
- }
- }, mt.prototype._$Rs = function() {
- throw new Error("_$Rs")
- }, mt.prototype._$Ds = function(t) {
- throw new Error("_$Ds")
- }, mt.prototype._$K2 = function() {
- for (var t = 0; t < this.textures.length; t++) {
- 0 != this.textures[t] && (this.gl._$K2(1, this.textures, t), this.textures[t] = null)
- }
- }, mt.prototype.setTexture = function(t, i) {
- this.textures[t] = i
- }, mt.prototype.initShader = function() {
- var t = this.gl;
- this.loadShaders2(), this.a_position_Loc = t.getAttribLocation(this.shaderProgram, "a_position"), this.a_texCoord_Loc = t.getAttribLocation(this.shaderProgram, "a_texCoord"), this.u_matrix_Loc = t.getUniformLocation(this.shaderProgram, "u_mvpMatrix"), this.s_texture0_Loc = t.getUniformLocation(this.shaderProgram, "s_texture0"), this.u_channelFlag = t.getUniformLocation(this.shaderProgram, "u_channelFlag"), this.u_baseColor_Loc = t.getUniformLocation(this.shaderProgram, "u_baseColor"), this.u_maskFlag_Loc = t.getUniformLocation(this.shaderProgram, "u_maskFlag"), this.a_position_Loc_Off = t.getAttribLocation(this.shaderProgramOff, "a_position"), this.a_texCoord_Loc_Off = t.getAttribLocation(this.shaderProgramOff, "a_texCoord"), this.u_matrix_Loc_Off = t.getUniformLocation(this.shaderProgramOff, "u_mvpMatrix"), this.u_clipMatrix_Loc_Off = t.getUniformLocation(this.shaderProgramOff, "u_ClipMatrix"), this.s_texture0_Loc_Off = t.getUniformLocation(this.shaderProgramOff, "s_texture0"), this.s_texture1_Loc_Off = t.getUniformLocation(this.shaderProgramOff, "s_texture1"), this.u_channelFlag_Loc_Off = t.getUniformLocation(this.shaderProgramOff, "u_channelFlag"), this.u_baseColor_Loc_Off = t.getUniformLocation(this.shaderProgramOff, "u_baseColor")
- }, mt.prototype.disposeShader = function() {
- var t = this.gl;
- this.shaderProgram && (t.deleteProgram(this.shaderProgram), this.shaderProgram = null), this.shaderProgramOff && (t.deleteProgram(this.shaderProgramOff), this.shaderProgramOff = null)
- }, mt.prototype.compileShader = function(t, i) {
- var e = this.gl,
- r = i,
- o = e.createShader(t);
- if (null == o) return _._$Ji("_$L0 to create shader"), null;
- if (e.shaderSource(o, r), e.compileShader(o), !e.getShaderParameter(o, e.COMPILE_STATUS)) {
- var n = e.getShaderInfoLog(o);
- return _._$Ji("_$L0 to compile shader : " + n), e.deleteShader(o), null
- }
- return o
- }, mt.prototype.loadShaders2 = function() {
- var t = this.gl;
- if (this.shaderProgram = t.createProgram(), !this.shaderProgram) return !1;
- if (this.shaderProgramOff = t.createProgram(), !this.shaderProgramOff) return !1;
- if (this.vertShader = this.compileShader(t.VERTEX_SHADER, "attribute vec4 a_position;attribute vec2 a_texCoord;varying vec2 v_texCoord;varying vec4 v_ClipPos;uniform mat4 u_mvpMatrix;void main(){ gl_Position = u_mvpMatrix * a_position; v_ClipPos = u_mvpMatrix * a_position; v_texCoord = a_texCoord;}"), !this.vertShader) return _._$Ji("Vertex shader compile _$li!"), !1;
- if (this.vertShaderOff = this.compileShader(t.VERTEX_SHADER, "attribute vec4 a_position;attribute vec2 a_texCoord;varying vec2 v_texCoord;varying vec4 v_ClipPos;uniform mat4 u_mvpMatrix;uniform mat4 u_ClipMatrix;void main(){ gl_Position = u_mvpMatrix * a_position; v_ClipPos = u_ClipMatrix * a_position; v_texCoord = a_texCoord ;}"), !this.vertShaderOff) return _._$Ji("OffVertex shader compile _$li!"), !1;
- if (this.fragShader = this.compileShader(t.FRAGMENT_SHADER, "precision mediump float;varying vec2 v_texCoord;varying vec4 v_ClipPos;uniform sampler2D s_texture0;uniform vec4 u_channelFlag;uniform vec4 u_baseColor;uniform bool u_maskFlag;void main(){ vec4 smpColor; if(u_maskFlag){ float isInside = step(u_baseColor.x, v_ClipPos.x/v_ClipPos.w) * step(u_baseColor.y, v_ClipPos.y/v_ClipPos.w) * step(v_ClipPos.x/v_ClipPos.w, u_baseColor.z) * step(v_ClipPos.y/v_ClipPos.w, u_baseColor.w); smpColor = u_channelFlag * texture2D(s_texture0 , v_texCoord).a * isInside; }else{ smpColor = texture2D(s_texture0 , v_texCoord) * u_baseColor; } gl_FragColor = smpColor;}"), !this.fragShader) return _._$Ji("Fragment shader compile _$li!"), !1;
- if (this.fragShaderOff = this.compileShader(t.FRAGMENT_SHADER, "precision mediump float ;varying vec2 v_texCoord;varying vec4 v_ClipPos;uniform sampler2D s_texture0;uniform sampler2D s_texture1;uniform vec4 u_channelFlag;uniform vec4 u_baseColor ;void main(){ vec4 col_formask = texture2D(s_texture0, v_texCoord) * u_baseColor; vec4 clipMask = texture2D(s_texture1, v_ClipPos.xy / v_ClipPos.w) * u_channelFlag; float maskVal = clipMask.r + clipMask.g + clipMask.b + clipMask.a; col_formask = col_formask * maskVal; gl_FragColor = col_formask;}"), !this.fragShaderOff) return _._$Ji("OffFragment shader compile _$li!"), !1;
- if (t.attachShader(this.shaderProgram, this.vertShader), t.attachShader(this.shaderProgram, this.fragShader), t.attachShader(this.shaderProgramOff, this.vertShaderOff), t.attachShader(this.shaderProgramOff, this.fragShaderOff), t.linkProgram(this.shaderProgram), t.linkProgram(this.shaderProgramOff), !t.getProgramParameter(this.shaderProgram, t.LINK_STATUS)) {
- var i = t.getProgramInfoLog(this.shaderProgram);
- return _._$Ji("_$L0 to link program: " + i), this.vertShader && (t.deleteShader(this.vertShader), this.vertShader = 0), this.fragShader && (t.deleteShader(this.fragShader), this.fragShader = 0), this.shaderProgram && (t.deleteProgram(this.shaderProgram), this.shaderProgram = 0), this.vertShaderOff && (t.deleteShader(this.vertShaderOff), this.vertShaderOff = 0), this.fragShaderOff && (t.deleteShader(this.fragShaderOff), this.fragShaderOff = 0), this.shaderProgramOff && (t.deleteProgram(this.shaderProgramOff), this.shaderProgramOff = 0), !1
- }
- return !0
- }, mt.prototype.createFramebuffer = function() {
- var t = this.gl,
- i = at.clippingMaskBufferSize,
- e = t.createFramebuffer();
- t.bindFramebuffer(t.FRAMEBUFFER, e);
- var r = t.createRenderbuffer();
- t.bindRenderbuffer(t.RENDERBUFFER, r), t.renderbufferStorage(t.RENDERBUFFER, t.RGBA4, i, i), t.framebufferRenderbuffer(t.FRAMEBUFFER, t.COLOR_ATTACHMENT0, t.RENDERBUFFER, r);
- var o = t.createTexture();
- return t.bindTexture(t.TEXTURE_2D, o), t.texImage2D(t.TEXTURE_2D, 0, t.RGBA, i, i, 0, t.RGBA, t.UNSIGNED_BYTE, null), t.texParameteri(t.TEXTURE_2D, t.TEXTURE_MIN_FILTER, t.LINEAR), t.texParameteri(t.TEXTURE_2D, t.TEXTURE_MAG_FILTER, t.LINEAR), t.texParameteri(t.TEXTURE_2D, t.TEXTURE_WRAP_S, t.CLAMP_TO_EDGE), t.texParameteri(t.TEXTURE_2D, t.TEXTURE_WRAP_T, t.CLAMP_TO_EDGE), t.framebufferTexture2D(t.FRAMEBUFFER, t.COLOR_ATTACHMENT0, t.TEXTURE_2D, o, 0), t.bindTexture(t.TEXTURE_2D, null), t.bindRenderbuffer(t.RENDERBUFFER, null), t.bindFramebuffer(t.FRAMEBUFFER, null), at.fTexture[this.glno] = o, {
- framebuffer: e,
- renderbuffer: r,
- texture: at.fTexture[this.glno]
- }
- }, St.prototype._$fP = function() {
- var t, i, e, r = this._$ST();
- if (0 == (128 & r)) return 255 & r;
- if (0 == (128 & (t = this._$ST()))) return (127 & r) << 7 | 127 & t;
- if (0 == (128 & (i = this._$ST()))) return (127 & r) << 14 | (127 & t) << 7 | 255 & i;
- if (0 == (128 & (e = this._$ST()))) return (127 & r) << 21 | (127 & t) << 14 | (127 & i) << 7 | 255 & e;
- throw new lt("_$L _$0P _")
- }, St.prototype.getFormatVersion = function() {
- return this._$S2
- }, St.prototype._$gr = function(t) {
- this._$S2 = t
- }, St.prototype._$3L = function() {
- return this._$fP()
- }, St.prototype._$mP = function() {
- return this._$zT(), this._$F += 8, this._$T.getFloat64(this._$F - 8)
- }, St.prototype._$_T = function() {
- return this._$zT(), this._$F += 4, this._$T.getFloat32(this._$F - 4)
- }, St.prototype._$6L = function() {
- return this._$zT(), this._$F += 4, this._$T.getInt32(this._$F - 4)
- }, St.prototype._$ST = function() {
- return this._$zT(), this._$T.getInt8(this._$F++)
- }, St.prototype._$9T = function() {
- return this._$zT(), this._$F += 2, this._$T.getInt16(this._$F - 2)
- }, St.prototype._$2T = function() {
- throw this._$zT(), this._$F += 8, new lt("_$L _$q read long")
- }, St.prototype._$po = function() {
- return this._$zT(), 0 != this._$T.getInt8(this._$F++)
- };
- var xt = !0;
- St.prototype._$bT = function() {
- this._$zT();
- var t = this._$3L(),
- i = null;
- if (xt) try {
- var e = new ArrayBuffer(2 * t);
- i = new Uint16Array(e);
- for (var r = 0; r < t; ++r) i[r] = this._$T.getUint8(this._$F++);
- return String.fromCharCode.apply(null, i)
- } catch (t) {
- xt = !1
- }
- try {
- var o = new Array;
- if (null == i) for (var r = 0; r < t; ++r) o[r] = this._$T.getUint8(this._$F++);
- else for (var r = 0; r < t; ++r) o[r] = i[r];
- return String.fromCharCode.apply(null, o)
- } catch (t) {
- console.log("read utf8 / _$rT _$L0 !! : " + t)
- }
- }, St.prototype._$cS = function() {
- this._$zT();
- for (var t = this._$3L(), i = new Int32Array(t), e = 0; e < t; e++) i[e] = this._$T.getInt32(this._$F), this._$F += 4;
- return i
- }, St.prototype._$Tb = function() {
- this._$zT();
- for (var t = this._$3L(), i = new Float32Array(t), e = 0; e < t; e++) i[e] = this._$T.getFloat32(this._$F), this._$F += 4;
- return i
- }, St.prototype._$5b = function() {
- this._$zT();
- for (var t = this._$3L(), i = new Float64Array(t), e = 0; e < t; e++) i[e] = this._$T.getFloat64(this._$F), this._$F += 8;
- return i
- }, St.prototype._$nP = function() {
- return this._$Jb(-1)
- }, St.prototype._$Jb = function(t) {
- if (this._$zT(), t < 0 && (t = this._$3L()), t == G._$7P) {
- var i = this._$6L();
- if (0 <= i && i < this._$Ko.length) return this._$Ko[i];
- throw new lt("_$sL _$4i @_$m0")
- }
- var e = this._$4b(t);
- return this._$Ko.push(e), e
- }, St.prototype._$4b = function(t) {
- if (0 == t) return null;
- if (50 == t) {
- var i = this._$bT(),
- e = b.getID(i);
- return e
- }
- if (51 == t) {
- var i = this._$bT(),
- e = yt.getID(i);
- return e
- }
- if (134 == t) {
- var i = this._$bT(),
- e = l.getID(i);
- return e
- }
- if (60 == t) {
- var i = this._$bT(),
- e = u.getID(i);
- return e
- }
- if (t >= 48) {
- var r = G._$9o(t);
- return null != r ? (r._$F0(this), r) : null
- }
- switch (t) {
- case 1:
- return this._$bT();
- case 10:
- return new n(this._$6L(), !0);
- case 11:
- return new S(this._$mP(), this._$mP(), this._$mP(), this._$mP());
- case 12:
- return new S(this._$_T(), this._$_T(), this._$_T(), this._$_T());
- case 13:
- return new L(this._$mP(), this._$mP());
- case 14:
- return new L(this._$_T(), this._$_T());
- case 15:
- for (var o = this._$3L(), e = new Array(o), s = 0; s < o; s++) e[s] = this._$nP();
- return e;
- case 17:
- var e = new F(this._$mP(), this._$mP(), this._$mP(), this._$mP(), this._$mP(), this._$mP());
- return e;
- case 21:
- return new h(this._$6L(), this._$6L(), this._$6L(), this._$6L());
- case 22:
- return new pt(this._$6L(), this._$6L());
- case 23:
- throw new Error("_$L _$ro ");
- case 16:
- case 25:
- return this._$cS();
- case 26:
- return this._$5b();
- case 27:
- return this._$Tb();
- case 2:
- case 3:
- case 4:
- case 5:
- case 6:
- case 7:
- case 8:
- case 9:
- case 18:
- case 19:
- case 20:
- case 24:
- case 28:
- throw new lt("_$6 _$q : _$nP() of 2-9 ,18,19,20,24,28 : " + t);
- default:
- throw new lt("_$6 _$q : _$nP() NO _$i : " + t)
- }
- }, St.prototype._$8L = function() {
- return 0 == this._$hL ? this._$v0 = this._$ST() : 8 == this._$hL && (this._$v0 = this._$ST(), this._$hL = 0), 1 == (this._$v0 >> 7 - this._$hL++ & 1)
- }, St.prototype._$zT = function() {
- 0 != this._$hL && (this._$hL = 0)
- }, vt.prototype._$wP = function(t, i, e) {
- for (var r = 0; r < e; r++) {
- for (var o = 0; o < i; o++) {
- var n = 2 * (o + r * i);
- console.log("(% 7.3f , % 7.3f) , ", t[n], t[n + 1])
- }
- console.log("\n")
- }
- console.log("\n")
- }, Lt._$2S = Math.PI / 180, Lt._$bS = Math.PI / 180, Lt._$wS = 180 / Math.PI, Lt._$NS = 180 / Math.PI, Lt.PI_F = Math.PI, Lt._$kT = [0, .012368, .024734, .037097, .049454, .061803, .074143, .086471, .098786, .111087, .12337, .135634, .147877, .160098, .172295, .184465, .196606, .208718, .220798, .232844, .244854, .256827, .268761, .280654, .292503, .304308, .316066, .327776, .339436, .351044, .362598, .374097, .385538, .396921, .408243, .419502, .430697, .441826, .452888, .463881, .474802, .485651, .496425, .507124, .517745, .528287, .538748, .549126, .559421, .56963, .579752, .589785, .599728, .609579, .619337, .629, .638567, .648036, .657406, .666676, .675843, .684908, .693867, .70272, .711466, .720103, .72863, .737045, .745348, .753536, .76161, .769566, .777405, .785125, .792725, .800204, .807561, .814793, .821901, .828884, .835739, .842467, .849066, .855535, .861873, .868079, .874153, .880093, .885898, .891567, .897101, .902497, .907754, .912873, .917853, .922692, .92739, .931946, .936359, .940629, .944755, .948737, .952574, .956265, .959809, .963207, .966457, .96956, .972514, .97532, .977976, .980482, .982839, .985045, .987101, .989006, .990759, .992361, .993811, .995109, .996254, .997248, .998088, .998776, .999312, .999694, .999924, 1], Lt._$92 = function(t, i) {
- var e = Math.atan2(t[1], t[0]),
- r = Math.atan2(i[1], i[0]);
- return Lt._$tS(e, r)
- }, Lt._$tS = function(t, i) {
- for (var e = t - i; e < -Math.PI;) e += 2 * Math.PI;
- for (; e > Math.PI;) e -= 2 * Math.PI;
- return e
- }, Lt._$9 = function(t) {
- return Math.sin(t)
- }, Lt.fcos = function(t) {
- return Math.cos(t)
- }, Mt.prototype._$u2 = function() {
- return this._$IS[0]
- }, Mt.prototype._$yo = function() {
- return this._$AT && !this._$IS[0]
- }, Mt.prototype._$GT = function() {
- return this._$e0
- }, Et._$W2 = 0, Et.SYSTEM_INFO = null, Et.USER_AGENT = navigator.userAgent, Et.isIPhone = function() {
- return Et.SYSTEM_INFO || Et.setup(), Et.SYSTEM_INFO._isIPhone
- }, Et.isIOS = function() {
- return Et.SYSTEM_INFO || Et.setup(), Et.SYSTEM_INFO._isIPhone || Et.SYSTEM_INFO._isIPad
- }, Et.isAndroid = function() {
- return Et.SYSTEM_INFO || Et.setup(), Et.SYSTEM_INFO._isAndroid
- }, Et.getOSVersion = function() {
- return Et.SYSTEM_INFO || Et.setup(), Et.SYSTEM_INFO.version
- }, Et.getOS = function() {
- return Et.SYSTEM_INFO || Et.setup(), Et.SYSTEM_INFO._isIPhone || Et.SYSTEM_INFO._isIPad ? "iOS" : Et.SYSTEM_INFO._isAndroid ? "Android" : "_$Q0 OS"
- }, Et.setup = function() {
- function t(t, i) {
- for (var e = t.substring(i).split(/[ _,;\.]/), r = 0, o = 0; o <= 2 && !isNaN(e[o]); o++) {
- var n = parseInt(e[o]);
- if (n < 0 || n > 999) {
- _._$li("err : " + n + " @UtHtml5.setup()"), r = 0;
- break
- }
- r += n * Math.pow(1e3, 2 - o)
- }
- return r
- }
- var i, e = Et.USER_AGENT,
- r = Et.SYSTEM_INFO = {
- userAgent: e
- };
- if ((i = e.indexOf("iPhone OS ")) >= 0) r.os = "iPhone", r._isIPhone = !0, r.version = t(e, i + "iPhone OS ".length);
- else if ((i = e.indexOf("iPad")) >= 0) {
- if ((i = e.indexOf("CPU OS")) < 0) return void _._$li(" err : " + e + " @UtHtml5.setup()");
- r.os = "iPad", r._isIPad = !0, r.version = t(e, i + "CPU OS ".length)
- } else(i = e.indexOf("Android")) >= 0 ? (r.os = "Android", r._isAndroid = !0, r.version = t(e, i + "Android ".length)) : (r.os = "-", r.version = -1)
- }, window.UtSystem = w, window.UtDebug = _, window.LDTransform = gt, window.LDGL = nt, window.Live2D = at, window.Live2DModelWebGL = ft, window.Live2DModelJS = q, window.Live2DMotion = J, window.MotionQueueManager = ct, window.PhysicsHair = f, window.AMotion = s, window.PartsDataID = l, window.DrawDataID = b, window.BaseDataID = yt, window.ParamID = u, at.init();
- var At = !1
- }()
- }).call(i, e(7))
-}, function(t, i) {
- t.exports = {
- import: function() {
- throw new Error("System.import cannot be used indirectly")
- }
- }
-}, function(t, i, e) {
- "use strict";
-
- function r(t) {
- return t && t.__esModule ? t : {
- default:
- t
- }
- }
- function o() {
- this.models = [], this.count = -1, this.reloadFlg = !1, Live2D.init(), n.Live2DFramework.setPlatformManager(new _.
- default)
- }
- Object.defineProperty(i, "__esModule", {
- value: !0
- }), i.
-default = o;
- var n = e(0),
- s = e(9),
- _ = r(s),
- a = e(10),
- h = r(a),
- l = e(1),
- $ = r(l);
- o.prototype.createModel = function() {
- var t = new h.
- default;
- return this.models.push(t), t
- }, o.prototype.changeModel = function(t, i) {
- if (this.reloadFlg) {
- this.reloadFlg = !1;
- this.releaseModel(0, t), this.createModel(), this.models[0].load(t, i)
- }
- }, o.prototype.getModel = function(t) {
- return t >= this.models.length ? null : this.models[t]
- }, o.prototype.releaseModel = function(t, i) {
- this.models.length <= t || (this.models[t].release(i), delete this.models[t], this.models.splice(t, 1))
- }, o.prototype.numModels = function() {
- return this.models.length
- }, o.prototype.setDrag = function(t, i) {
- for (var e = 0; e < this.models.length; e++) this.models[e].setDrag(t, i)
- }, o.prototype.maxScaleEvent = function() {
- $.
- default.DEBUG_LOG && console.log("Max scale event.");
- for (var t = 0; t < this.models.length; t++) this.models[t].startRandomMotion($.
- default.MOTION_GROUP_PINCH_IN, $.
- default.PRIORITY_NORMAL)
- }, o.prototype.minScaleEvent = function() {
- $.
- default.DEBUG_LOG && console.log("Min scale event.");
- for (var t = 0; t < this.models.length; t++) this.models[t].startRandomMotion($.
- default.MOTION_GROUP_PINCH_OUT, $.
- default.PRIORITY_NORMAL)
- }, o.prototype.tapEvent = function(t, i) {
- $.
- default.DEBUG_LOG && console.log("tapEvent view x:" + t + " y:" + i);
- for (var e = 0; e < this.models.length; e++) this.models[e].hitTest($.
- default.HIT_AREA_HEAD, t, i) ? ($.
- default.DEBUG_LOG && console.log("Tap face."), this.models[e].setRandomExpression()):
- this.models[e].hitTest($.
- default.HIT_AREA_BODY, t, i) ? ($.
- default.DEBUG_LOG && console.log("Tap body. models[" + e + "]"), this.models[e].startRandomMotion($.
- default.MOTION_GROUP_TAP_BODY, $.
- default.PRIORITY_NORMAL)) : this.models[e].hitTestCustom("head", t, i) ? ($.
- default.DEBUG_LOG && console.log("Tap face."), this.models[e].startRandomMotion($.
- default.MOTION_GROUP_FLICK_HEAD, $.
- default.PRIORITY_NORMAL)) : this.models[e].hitTestCustom("body", t, i) && ($.
- default.DEBUG_LOG && console.log("Tap body. models[" + e + "]"), this.models[e].startRandomMotion($.
- default.MOTION_GROUP_TAP_BODY, $.
- default.PRIORITY_NORMAL));
- return !0
- }
-}, function(t, i, e) {
- "use strict";
-
- function r() {}
- Object.defineProperty(i, "__esModule", {
- value: !0
- }), i.
-default = r;
- var o = e(2);
- var requestCache = {};
- r.prototype.loadBytes = function(t, i) {
- // Cache 相同的请求,减少请求数量
- if (requestCache[t] !== undefined) {
- i(requestCache[t]);
- return;
- }
- var e = new XMLHttpRequest;
- e.open("GET", t, !0), e.responseType = "arraybuffer", e.onload = function() {
- switch (e.status) {
- case 200:
- requestCache[t] = e.response;
- i(e.response);
- break;
- default:
- console.error("Failed to load (" + e.status + ") : " + t)
- }
- }, e.send(null)
- }, r.prototype.loadString = function(t) {
- this.loadBytes(t, function(t) {
- return t
- })
- }, r.prototype.loadLive2DModel = function(t, i) {
- var e = null;
- this.loadBytes(t, function(t) {
- e = Live2DModelWebGL.loadModel(t), i(e)
- })
- }, r.prototype.loadTexture = function(t, i, e, r) {
- var n = new Image;
- n.crossOrigin = "Anonymous", n.src = e;
- n.onload = function() {
- var e = (0, o.getContext)(),
- s = e.createTexture();
- if (!s) return console.error("Failed to generate gl texture name."), -1;
- 0 == t.isPremultipliedAlpha() && e.pixelStorei(e.UNPACK_PREMULTIPLY_ALPHA_WEBGL, 1), e.pixelStorei(e.UNPACK_FLIP_Y_WEBGL, 1), e.activeTexture(e.TEXTURE0), e.bindTexture(e.TEXTURE_2D, s), e.texImage2D(e.TEXTURE_2D, 0, e.RGBA, e.RGBA, e.UNSIGNED_BYTE, n), e.texParameteri(e.TEXTURE_2D, e.TEXTURE_MAG_FILTER, e.LINEAR), e.texParameteri(e.TEXTURE_2D, e.TEXTURE_MIN_FILTER, e.LINEAR_MIPMAP_NEAREST), e.generateMipmap(e.TEXTURE_2D), t.setTexture(i, s), s = null, "function" == typeof r && r()
- }, n.onerror = function() {
- console.error("Failed to load image : " + e)
- }
- }, r.prototype.jsonParseFromBytes = function(t) {
- var i, e = new Uint8Array(t, 0, 3);
- return i = 239 == e[0] && 187 == e[1] && 191 == e[2] ? String.fromCharCode.apply(null, new Uint8Array(t, 3)) : String.fromCharCode.apply(null, new Uint8Array(t)), JSON.parse(i)
- }, r.prototype.log = function(t) {}
-}, function(t, i, e) {
- "use strict";
-
- function r(t) {
- return t && t.__esModule ? t : {
- default:
- t
- }
- }
- function o() {
- n.L2DBaseModel.prototype.constructor.call(this), this.modelHomeDir = "", this.modelSetting = null, this.tmpMatrix = []
- }
- Object.defineProperty(i, "__esModule", {
- value: !0
- }), i.
-default = o;
- var n = e(0),
- s = e(11),
- _ = r(s),
- a = e(1),
- h = r(a),
- l = e(3),
- $ = r(l);
- o.prototype = new n.L2DBaseModel, o.prototype.load = function(t, i, e) {
- this.setUpdating(!0), this.setInitialized(!1), this.modelHomeDir = i.substring(0, i.lastIndexOf("/") + 1), this.modelSetting = new _.
- default;
- var r = this;
- this.modelSetting.loadModelSetting(i, function() {
- var t = r.modelHomeDir + r.modelSetting.getModelFile();
- r.loadModelData(t, function(t) {
- for (var i = 0; i < r.modelSetting.getTextureNum(); i++) {
- if (/^https?:\/\/|^\/\//i.test(r.modelSetting.getTextureFile(i))) var o = r.modelSetting.getTextureFile(i);
- else var o = r.modelHomeDir + r.modelSetting.getTextureFile(i);
- r.loadTexture(i, o, function() {
- if (r.isTexLoaded) {
- if (r.modelSetting.getExpressionNum() > 0) {
- r.expressions = {};
- for (var t = 0; t < r.modelSetting.getExpressionNum(); t++) {
- var i = r.modelSetting.getExpressionName(t),
- o = r.modelHomeDir + r.modelSetting.getExpressionFile(t);
- r.loadExpression(i, o)
- }
- } else r.expressionManager = null, r.expressions = {};
- if (r.eyeBlink, null != r.modelSetting.getPhysicsFile() ? r.loadPhysics(r.modelHomeDir + r.modelSetting.getPhysicsFile()) : r.physics = null, null != r.modelSetting.getPoseFile() ? r.loadPose(r.modelHomeDir + r.modelSetting.getPoseFile(), function() {
- r.pose.updateParam(r.live2DModel)
- }) : r.pose = null, null != r.modelSetting.getLayout()) {
- var n = r.modelSetting.getLayout();
- null != n.width && r.modelMatrix.setWidth(n.width), null != n.height && r.modelMatrix.setHeight(n.height), null != n.x && r.modelMatrix.setX(n.x), null != n.y && r.modelMatrix.setY(n.y), null != n.center_x && r.modelMatrix.centerX(n.center_x), null != n.center_y && r.modelMatrix.centerY(n.center_y), null != n.top && r.modelMatrix.top(n.top), null != n.bottom && r.modelMatrix.bottom(n.bottom), null != n.left && r.modelMatrix.left(n.left), null != n.right && r.modelMatrix.right(n.right)
- }
- if (null != r.modelSetting.getHitAreasCustom()) {
- var s = r.modelSetting.getHitAreasCustom();
- null != s.head_x && (h.
- default.hit_areas_custom_head_x = s.head_x), null != s.head_y && (h.
- default.hit_areas_custom_head_y = s.head_y), null != s.body_x && (h.
- default.hit_areas_custom_body_x = s.body_x), null != s.body_y && (h.
- default.hit_areas_custom_body_y = s.body_y)
- }
- for (var t = 0; t < r.modelSetting.getInitParamNum(); t++) r.live2DModel.setParamFloat(r.modelSetting.getInitParamID(t), r.modelSetting.getInitParamValue(t));
- for (var t = 0; t < r.modelSetting.getInitPartsVisibleNum(); t++) r.live2DModel.setPartsOpacity(r.modelSetting.getInitPartsVisibleID(t), r.modelSetting.getInitPartsVisibleValue(t));
- r.live2DModel.saveParam(), r.preloadMotionGroup(h.
- default.MOTION_GROUP_IDLE), r.preloadMotionGroup(h.
- default.MOTION_GROUP_SLEEPY), r.mainMotionManager.stopAllMotions(), r.setUpdating(!1), r.setInitialized(!0), "function" == typeof e && e()
- }
- })
- }
- })
- })
- }, o.prototype.release = function(t) {
- var i = n.Live2DFramework.getPlatformManager();
- t.deleteTexture(i.texture)
- }, o.prototype.preloadMotionGroup = function(t) {
- for (var i = this, e = 0; e < this.modelSetting.getMotionNum(t); e++) {
- var r = this.modelSetting.getMotionFile(t, e);
- this.loadMotion(r, this.modelHomeDir + r, function(r) {
- r.setFadeIn(i.modelSetting.getMotionFadeIn(t, e)), r.setFadeOut(i.modelSetting.getMotionFadeOut(t, e))
- })
- }
- }, o.prototype.update = function() {
- if (null == this.live2DModel) return void(h.
- default.DEBUG_LOG && console.error("Failed to update."));
- var t = UtSystem.getUserTimeMSec() - this.startTimeMSec,
- i = t / 1e3,
- e = 2 * i * Math.PI;
- if (this.mainMotionManager.isFinished()) {
- "1" === sessionStorage.getItem("Sleepy") ? this.startRandomMotion(h.
- default.MOTION_GROUP_SLEEPY, h.
- default.PRIORITY_SLEEPY) : this.startRandomMotion(h.
- default.MOTION_GROUP_IDLE, h.
- default.PRIORITY_IDLE)
- }
- this.live2DModel.loadParam(), this.mainMotionManager.updateParam(this.live2DModel) || null != this.eyeBlink && this.eyeBlink.updateParam(this.live2DModel), this.live2DModel.saveParam(), null == this.expressionManager || null == this.expressions || this.expressionManager.isFinished() || this.expressionManager.updateParam(this.live2DModel), this.live2DModel.addToParamFloat("PARAM_ANGLE_X", 30 * this.dragX, 1), this.live2DModel.addToParamFloat("PARAM_ANGLE_Y", 30 * this.dragY, 1), this.live2DModel.addToParamFloat("PARAM_ANGLE_Z", this.dragX * this.dragY * -30, 1), this.live2DModel.addToParamFloat("PARAM_BODY_ANGLE_X", 10 * this.dragX, 1), this.live2DModel.addToParamFloat("PARAM_EYE_BALL_X", this.dragX, 1), this.live2DModel.addToParamFloat("PARAM_EYE_BALL_Y", this.dragY, 1), this.live2DModel.addToParamFloat("PARAM_ANGLE_X", Number(15 * Math.sin(e / 6.5345)), .5), this.live2DModel.addToParamFloat("PARAM_ANGLE_Y", Number(8 * Math.sin(e / 3.5345)), .5), this.live2DModel.addToParamFloat("PARAM_ANGLE_Z", Number(10 * Math.sin(e / 5.5345)), .5), this.live2DModel.addToParamFloat("PARAM_BODY_ANGLE_X", Number(4 * Math.sin(e / 15.5345)), .5), this.live2DModel.setParamFloat("PARAM_BREATH", Number(.5 + .5 * Math.sin(e / 3.2345)), 1), null != this.physics && this.physics.updateParam(this.live2DModel), null == this.lipSync && this.live2DModel.setParamFloat("PARAM_MOUTH_OPEN_Y", this.lipSyncValue), null != this.pose && this.pose.updateParam(this.live2DModel), this.live2DModel.update()
- }, o.prototype.setRandomExpression = function() {
- var t = [];
- for (var i in this.expressions) t.push(i);
- var e = parseInt(Math.random() * t.length);
- this.setExpression(t[e])
- }, o.prototype.startRandomMotion = function(t, i) {
- var e = this.modelSetting.getMotionNum(t),
- r = parseInt(Math.random() * e);
- this.startMotion(t, r, i)
- }, o.prototype.startMotion = function(t, i, e) {
- var r = this.modelSetting.getMotionFile(t, i);
- if (null == r || "" == r) return void(h.
- default.DEBUG_LOG && console.error("Failed to motion."));
- if (e == h.
- default.PRIORITY_FORCE) this.mainMotionManager.setReservePriority(e);
- else if (!this.mainMotionManager.reserveMotion(e)) return void(h.
- default.DEBUG_LOG && console.log("Motion is running."));
- var o, n = this;
- null == this.motions[t] ? this.loadMotion(null, this.modelHomeDir + r, function(r) {
- o = r, n.setFadeInFadeOut(t, i, e, o)
- }) : (o = this.motions[t], n.setFadeInFadeOut(t, i, e, o))
- }, o.prototype.setFadeInFadeOut = function(t, i, e, r) {
- var o = this.modelSetting.getMotionFile(t, i);
- if (r.setFadeIn(this.modelSetting.getMotionFadeIn(t, i)), r.setFadeOut(this.modelSetting.getMotionFadeOut(t, i)), h.
- default.DEBUG_LOG && console.log("Start motion : " + o), null == this.modelSetting.getMotionSound(t, i)) this.mainMotionManager.startMotionPrio(r, e);
- else {
- var n = this.modelSetting.getMotionSound(t, i),
- s = document.createElement("audio");
- s.src = this.modelHomeDir + n, h.
- default.DEBUG_LOG && console.log("Start sound : " + n), s.play(), this.mainMotionManager.startMotionPrio(r, e)
- }
- }, o.prototype.setExpression = function(t) {
- var i = this.expressions[t];
- h.
- default.DEBUG_LOG && console.log("Expression : " + t), this.expressionManager.startMotion(i, !1)
- }, o.prototype.draw = function(t) {
- $.
- default.push(), $.
- default.multMatrix(this.modelMatrix.getArray()), this.tmpMatrix = $.
- default.getMatrix(), this.live2DModel.setMatrix(this.tmpMatrix), this.live2DModel.draw(), $.
- default.pop()
- }, o.prototype.hitTest = function(t, i, e) {
- for (var r = this.modelSetting.getHitAreaNum(), o = 0; o < r; o++) if (t == this.modelSetting.getHitAreaName(o)) {
- var n = this.modelSetting.getHitAreaID(o);
- return this.hitTestSimple(n, i, e)
- }
- return !1
- }, o.prototype.hitTestCustom = function(t, i, e) {
- return "head" == t ? this.hitTestSimpleCustom(h.
- default.hit_areas_custom_head_x, h.
- default.hit_areas_custom_head_y, i, e) : "body" == t && this.hitTestSimpleCustom(h.
- default.hit_areas_custom_body_x, h.
- default.hit_areas_custom_body_y, i, e)
- }
-}, function(t, i, e) {
- "use strict";
-
- function r() {
- this.NAME = "name", this.ID = "id", this.MODEL = "model", this.TEXTURES = "textures", this.HIT_AREAS = "hit_areas", this.PHYSICS = "physics", this.POSE = "pose", this.EXPRESSIONS = "expressions", this.MOTION_GROUPS = "motions", this.SOUND = "sound", this.FADE_IN = "fade_in", this.FADE_OUT = "fade_out", this.LAYOUT = "layout", this.HIT_AREAS_CUSTOM = "hit_areas_custom", this.INIT_PARAM = "init_param", this.INIT_PARTS_VISIBLE = "init_parts_visible", this.VALUE = "val", this.FILE = "file", this.json = {}
- }
- Object.defineProperty(i, "__esModule", {
- value: !0
- }), i.
-default = r;
- var o = e(0);
- r.prototype.loadModelSetting = function(t, i) {
- var e = this;
- o.Live2DFramework.getPlatformManager().loadBytes(t, function(t) {
- var r = String.fromCharCode.apply(null, new Uint8Array(t));
- e.json = JSON.parse(r), i()
- })
- }, r.prototype.getTextureFile = function(t) {
- return null == this.json[this.TEXTURES] || null == this.json[this.TEXTURES][t] ? null : this.json[this.TEXTURES][t]
- }, r.prototype.getModelFile = function() {
- return this.json[this.MODEL]
- }, r.prototype.getTextureNum = function() {
- return null == this.json[this.TEXTURES] ? 0 : this.json[this.TEXTURES].length
- }, r.prototype.getHitAreaNum = function() {
- return null == this.json[this.HIT_AREAS] ? 0 : this.json[this.HIT_AREAS].length
- }, r.prototype.getHitAreaID = function(t) {
- return null == this.json[this.HIT_AREAS] || null == this.json[this.HIT_AREAS][t] ? null : this.json[this.HIT_AREAS][t][this.ID]
- }, r.prototype.getHitAreaName = function(t) {
- return null == this.json[this.HIT_AREAS] || null == this.json[this.HIT_AREAS][t] ? null : this.json[this.HIT_AREAS][t][this.NAME]
- }, r.prototype.getPhysicsFile = function() {
- return this.json[this.PHYSICS]
- }, r.prototype.getPoseFile = function() {
- return this.json[this.POSE]
- }, r.prototype.getExpressionNum = function() {
- return null == this.json[this.EXPRESSIONS] ? 0 : this.json[this.EXPRESSIONS].length
- }, r.prototype.getExpressionFile = function(t) {
- return null == this.json[this.EXPRESSIONS] ? null : this.json[this.EXPRESSIONS][t][this.FILE]
- }, r.prototype.getExpressionName = function(t) {
- return null == this.json[this.EXPRESSIONS] ? null : this.json[this.EXPRESSIONS][t][this.NAME]
- }, r.prototype.getLayout = function() {
- return this.json[this.LAYOUT]
- }, r.prototype.getHitAreasCustom = function() {
- return this.json[this.HIT_AREAS_CUSTOM]
- }, r.prototype.getInitParamNum = function() {
- return null == this.json[this.INIT_PARAM] ? 0 : this.json[this.INIT_PARAM].length
- }, r.prototype.getMotionNum = function(t) {
- return null == this.json[this.MOTION_GROUPS] || null == this.json[this.MOTION_GROUPS][t] ? 0 : this.json[this.MOTION_GROUPS][t].length
- }, r.prototype.getMotionFile = function(t, i) {
- return null == this.json[this.MOTION_GROUPS] || null == this.json[this.MOTION_GROUPS][t] || null == this.json[this.MOTION_GROUPS][t][i] ? null : this.json[this.MOTION_GROUPS][t][i][this.FILE]
- }, r.prototype.getMotionSound = function(t, i) {
- return null == this.json[this.MOTION_GROUPS] || null == this.json[this.MOTION_GROUPS][t] || null == this.json[this.MOTION_GROUPS][t][i] || null == this.json[this.MOTION_GROUPS][t][i][this.SOUND] ? null : this.json[this.MOTION_GROUPS][t][i][this.SOUND]
- }, r.prototype.getMotionFadeIn = function(t, i) {
- return null == this.json[this.MOTION_GROUPS] || null == this.json[this.MOTION_GROUPS][t] || null == this.json[this.MOTION_GROUPS][t][i] || null == this.json[this.MOTION_GROUPS][t][i][this.FADE_IN] ? 1e3 : this.json[this.MOTION_GROUPS][t][i][this.FADE_IN]
- }, r.prototype.getMotionFadeOut = function(t, i) {
- return null == this.json[this.MOTION_GROUPS] || null == this.json[this.MOTION_GROUPS][t] || null == this.json[this.MOTION_GROUPS][t][i] || null == this.json[this.MOTION_GROUPS][t][i][this.FADE_OUT] ? 1e3 : this.json[this.MOTION_GROUPS][t][i][this.FADE_OUT]
- }, r.prototype.getInitParamID = function(t) {
- return null == this.json[this.INIT_PARAM] || null == this.json[this.INIT_PARAM][t] ? null : this.json[this.INIT_PARAM][t][this.ID]
- }, r.prototype.getInitParamValue = function(t) {
- return null == this.json[this.INIT_PARAM] || null == this.json[this.INIT_PARAM][t] ? NaN : this.json[this.INIT_PARAM][t][this.VALUE]
- }, r.prototype.getInitPartsVisibleNum = function() {
- return null == this.json[this.INIT_PARTS_VISIBLE] ? 0 : this.json[this.INIT_PARTS_VISIBLE].length
- }, r.prototype.getInitPartsVisibleID = function(t) {
- return null == this.json[this.INIT_PARTS_VISIBLE] || null == this.json[this.INIT_PARTS_VISIBLE][t] ? null : this.json[this.INIT_PARTS_VISIBLE][t][this.ID]
- }, r.prototype.getInitPartsVisibleValue = function(t) {
- return null == this.json[this.INIT_PARTS_VISIBLE] || null == this.json[this.INIT_PARTS_VISIBLE][t] ? NaN : this.json[this.INIT_PARTS_VISIBLE][t][this.VALUE]
- }
-}]);
-//# sourceMappingURL=live2d.js.map
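Note on the bundle above: its loadBytes helper memoizes every XHR response in requestCache keyed by URL, so repeated requests for the same model, texture, or settings file are answered from memory instead of the network. A minimal sketch of the same memoization idea, written in Python for readability (urllib and the load_bytes name are illustrative assumptions, not part of the deleted file):

import urllib.request

# In-memory cache keyed by URL, mirroring the requestCache object in the bundle above.
_request_cache: dict[str, bytes] = {}


def load_bytes(url: str) -> bytes:
    """Fetch a URL as raw bytes, memoizing successful responses by URL."""
    if url in _request_cache:
        return _request_cache[url]
    # urlopen raises urllib.error.HTTPError for non-2xx responses, so only
    # successful payloads ever reach the cache.
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
    _request_cache[url] = data
    return data

The trade-off is the same as in the bundle: cached responses live for the lifetime of the process and are never invalidated.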
diff --git a/spaces/dawdqd/ChuanhuChatGPT/assets/custom.css b/spaces/dawdqd/ChuanhuChatGPT/assets/custom.css
deleted file mode 100644
index 22108488886cfc8d7772214dd9b83727b3fca6a3..0000000000000000000000000000000000000000
--- a/spaces/dawdqd/ChuanhuChatGPT/assets/custom.css
+++ /dev/null
@@ -1,468 +0,0 @@
-:root {
- --chatbot-color-light: #000000;
- --chatbot-color-dark: #FFFFFF;
- --chatbot-background-color-light: #F3F3F3;
- --chatbot-background-color-dark: #121111;
- --message-user-background-color-light: #95EC69;
- --message-user-background-color-dark: #26B561;
- --message-bot-background-color-light: #FFFFFF;
- --message-bot-background-color-dark: #2C2C2C;
-}
-
-#app_title {
- font-weight: var(--prose-header-text-weight);
- font-size: var(--text-xxl);
- line-height: 1.3;
- text-align: left;
- margin-top: 6px;
- white-space: nowrap;
-}
-#description {
- text-align: center;
- margin: 32px 0 4px 0;
-}
-
-/* gradio footer info */
-footer {
- /* display: none !important; */
- margin-top: .2em !important;
- font-size: 85%;
-}
-#footer {
- text-align: center;
-}
-#footer div {
- display: inline-block;
-}
-#footer .versions{
- font-size: 85%;
- opacity: 0.60;
-}
-
-#float_display {
- position: absolute;
- max-height: 30px;
-}
-/* user_info */
-#user_info {
- white-space: nowrap;
- position: absolute; left: 8em; top: .2em;
- z-index: var(--layer-2);
- box-shadow: var(--block-shadow);
- border: none; border-radius: var(--block-label-radius);
- background: var(--color-accent);
- padding: var(--block-label-padding);
- font-size: var(--block-label-text-size); line-height: var(--line-sm);
- width: auto; min-height: 30px!important;
- opacity: 1;
- transition: opacity 0.3s ease-in-out;
-}
-#user_info .wrap {
- opacity: 0;
-}
-#user_info p {
- color: white;
- font-weight: var(--block-label-text-weight);
-}
-#user_info.hideK {
- opacity: 0;
- transition: opacity 1s ease-in-out;
-}
-
-/* status_display */
-#status_display {
- display: flex;
- min-height: 2em;
- align-items: flex-end;
- justify-content: flex-end;
-}
-#status_display p {
- font-size: .85em;
- font-family: ui-monospace, "SF Mono", "SFMono-Regular", "Menlo", "Consolas", "Liberation Mono", "Microsoft Yahei UI", "Microsoft Yahei", monospace;
- /* On Windows, Chinese text in monospace fonts falls back to SimSun, which looks terrible, so Microsoft YaHei is used here as a compromise */
- color: var(--body-text-color-subdued);
-}
-
-#status_display {
- transition: all 0.6s;
-}
-#chuanhu_chatbot {
- transition: height 0.3s ease;
-}
-
-/* usage_display */
-.insert_block {
- position: relative;
- margin: 0;
- padding: .5em 1em;
- box-shadow: var(--block-shadow);
- border-width: var(--block-border-width);
- border-color: var(--block-border-color);
- border-radius: var(--block-radius);
- background: var(--block-background-fill);
- width: 100%;
- line-height: var(--line-sm);
- min-height: 2em;
-}
-#usage_display p, #usage_display span {
- margin: 0;
- font-size: .85em;
- color: var(--body-text-color-subdued);
-}
-.progress-bar {
- background-color: var(--input-background-fill);;
- margin: .5em 0 !important;
- height: 20px;
- border-radius: 10px;
- overflow: hidden;
-}
-.progress {
- background-color: var(--block-title-background-fill);
- height: 100%;
- border-radius: 10px;
- text-align: right;
- transition: width 0.5s ease-in-out;
-}
-.progress-text {
- /* color: white; */
- color: var(--color-accent) !important;
- font-size: 1em !important;
- font-weight: bold;
- padding-right: 10px;
- line-height: 20px;
-}
-
-.apSwitch {
- top: 2px;
- display: inline-block;
- height: 24px;
- position: relative;
- width: 48px;
- border-radius: 12px;
-}
-.apSwitch input {
- display: none !important;
-}
-.apSlider {
- background-color: var(--neutral-200);
- bottom: 0;
- cursor: pointer;
- left: 0;
- position: absolute;
- right: 0;
- top: 0;
- transition: .4s;
- font-size: 18px;
- border-radius: 12px;
-}
-.apSlider::before {
- bottom: -1.5px;
- left: 1px;
- position: absolute;
- transition: .4s;
- content: "🌞";
-}
-input:checked + .apSlider {
- background-color: var(--primary-600);
-}
-input:checked + .apSlider::before {
- transform: translateX(23px);
- content:"🌚";
-}
-
-/* Override Slider Styles (for webkit browsers like Safari and Chrome)
- * Hoping this proposal gets implemented soon: https://github.com/w3c/csswg-drafts/issues/4410
- * Range sliders are still far too inconsistent across platforms
- */
-input[type="range"] {
- -webkit-appearance: none;
- height: 4px;
- background: var(--input-background-fill);
- border-radius: 5px;
- background-image: linear-gradient(var(--primary-500),var(--primary-500));
- background-size: 0% 100%;
- background-repeat: no-repeat;
-}
-input[type="range"]::-webkit-slider-thumb {
- -webkit-appearance: none;
- height: 20px;
- width: 20px;
- border-radius: 50%;
- border: solid 0.5px #ddd;
- background-color: white;
- cursor: ew-resize;
- box-shadow: var(--input-shadow);
- transition: background-color .1s ease;
-}
-input[type="range"]::-webkit-slider-thumb:hover {
- background: var(--neutral-50);
-}
-input[type=range]::-webkit-slider-runnable-track {
- -webkit-appearance: none;
- box-shadow: none;
- border: none;
- background: transparent;
-}
-
-#submit_btn, #cancel_btn {
- height: 42px !important;
-}
-#submit_btn::before {
- content: url("data:image/svg+xml, %3Csvg width='21px' height='20px' viewBox='0 0 21 20' version='1.1' xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink'%3E %3Cg id='page' stroke='none' stroke-width='1' fill='none' fill-rule='evenodd'%3E %3Cg id='send' transform='translate(0.435849, 0.088463)' fill='%23FFFFFF' fill-rule='nonzero'%3E %3Cpath d='M0.579148261,0.0428666046 C0.301105539,-0.0961547561 -0.036517765,0.122307382 0.0032026237,0.420210298 L1.4927172,18.1553639 C1.5125774,18.4334066 1.79062012,18.5922882 2.04880264,18.4929872 L8.24518329,15.8913017 L11.6412765,19.7441794 C11.8597387,19.9825018 12.2370824,19.8832008 12.3165231,19.5852979 L13.9450591,13.4882182 L19.7839562,11.0255541 C20.0619989,10.8865327 20.0818591,10.4694687 19.7839562,10.3105871 L0.579148261,0.0428666046 Z M11.6138902,17.0883151 L9.85385903,14.7195502 L0.718169621,0.618812241 L12.69945,12.9346347 L11.6138902,17.0883151 Z' id='shape'%3E%3C/path%3E %3C/g%3E %3C/g%3E %3C/svg%3E");
- height: 21px;
-}
-#cancel_btn::before {
- content: url("data:image/svg+xml,%3Csvg width='21px' height='21px' viewBox='0 0 21 21' version='1.1' xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink'%3E %3Cg id='pg' stroke='none' stroke-width='1' fill='none' fill-rule='evenodd'%3E %3Cpath d='M10.2072007,20.088463 C11.5727865,20.088463 12.8594566,19.8259823 14.067211,19.3010209 C15.2749653,18.7760595 16.3386126,18.0538087 17.2581528,17.1342685 C18.177693,16.2147282 18.8982283,15.1527965 19.4197586,13.9484733 C19.9412889,12.7441501 20.202054,11.4557644 20.202054,10.0833163 C20.202054,8.71773046 19.9395733,7.43106036 19.4146119,6.22330603 C18.8896505,5.01555169 18.1673997,3.95018885 17.2478595,3.0272175 C16.3283192,2.10424615 15.2646719,1.3837109 14.0569176,0.865611739 C12.8491633,0.34751258 11.5624932,0.088463 10.1969073,0.088463 C8.83132146,0.088463 7.54636692,0.34751258 6.34204371,0.865611739 C5.1377205,1.3837109 4.07407321,2.10424615 3.15110186,3.0272175 C2.22813051,3.95018885 1.5058797,5.01555169 0.984349419,6.22330603 C0.46281914,7.43106036 0.202054,8.71773046 0.202054,10.0833163 C0.202054,11.4557644 0.4645347,12.7441501 0.9894961,13.9484733 C1.5144575,15.1527965 2.23670831,16.2147282 3.15624854,17.1342685 C4.07578877,18.0538087 5.1377205,18.7760595 6.34204371,19.3010209 C7.54636692,19.8259823 8.83475258,20.088463 10.2072007,20.088463 Z M10.2072007,18.2562448 C9.07493099,18.2562448 8.01471483,18.0452309 7.0265522,17.6232031 C6.03838956,17.2011753 5.17031614,16.6161693 4.42233192,15.8681851 C3.6743477,15.1202009 3.09105726,14.2521274 2.67246059,13.2639648 C2.25386392,12.2758022 2.04456558,11.215586 2.04456558,10.0833163 C2.04456558,8.95104663 2.25386392,7.89083047 2.67246059,6.90266784 C3.09105726,5.9145052 3.6743477,5.04643178 4.42233192,4.29844756 C5.17031614,3.55046334 6.036674,2.9671729 7.02140552,2.54857623 C8.00613703,2.12997956 9.06463763,1.92068122 10.1969073,1.92068122 C11.329177,1.92068122 12.3911087,2.12997956 13.3827025,2.54857623 C14.3742962,2.9671729 15.2440852,3.55046334 15.9920694,4.29844756 C16.7400537,5.04643178 17.3233441,5.9145052 17.7419408,6.90266784 C18.1605374,7.89083047 18.3698358,8.95104663 18.3698358,10.0833163 C18.3698358,11.215586 18.1605374,12.2758022 17.7419408,13.2639648 C17.3233441,14.2521274 16.7400537,15.1202009 15.9920694,15.8681851 C15.2440852,16.6161693 14.3760118,17.2011753 13.3878492,17.6232031 C12.3996865,18.0452309 11.3394704,18.2562448 10.2072007,18.2562448 Z M7.65444721,13.6242324 L12.7496608,13.6242324 C13.0584616,13.6242324 13.3003556,13.5384544 13.4753427,13.3668984 C13.6503299,13.1953424 13.7378234,12.9585951 13.7378234,12.6566565 L13.7378234,7.49968276 C13.7378234,7.19774418 13.6503299,6.96099688 13.4753427,6.78944087 C13.3003556,6.61788486 13.0584616,6.53210685 12.7496608,6.53210685 L7.65444721,6.53210685 C7.33878414,6.53210685 7.09345904,6.61788486 6.91847191,6.78944087 C6.74348478,6.96099688 6.65599121,7.19774418 6.65599121,7.49968276 L6.65599121,12.6566565 C6.65599121,12.9585951 6.74348478,13.1953424 6.91847191,13.3668984 C7.09345904,13.5384544 7.33878414,13.6242324 7.65444721,13.6242324 Z' id='shape' fill='%23FF3B30' fill-rule='nonzero'%3E%3C/path%3E %3C/g%3E %3C/svg%3E");
- height: 21px;
-}
-/* list */
-ol:not(.options), ul:not(.options) {
- padding-inline-start: 2em !important;
-}
-
-/* Light mode (default) */
-#chuanhu_chatbot {
- background-color: var(--chatbot-background-color-light) !important;
- color: var(--chatbot-color-light) !important;
-}
-[data-testid = "bot"] {
- background-color: var(--message-bot-background-color-light) !important;
-}
-[data-testid = "user"] {
- background-color: var(--message-user-background-color-light) !important;
-}
-/* Dark mode */
-.dark #chuanhu_chatbot {
- background-color: var(--chatbot-background-color-dark) !important;
- color: var(--chatbot-color-dark) !important;
-}
-.dark [data-testid = "bot"] {
- background-color: var(--message-bot-background-color-dark) !important;
-}
-.dark [data-testid = "user"] {
- background-color: var(--message-user-background-color-dark) !important;
-}
-
-/* Devices with a screen width of 500px or more */
-/* update on 2023.4.8: fine-grained height adjustments have been moved into JavaScript */
-@media screen and (min-width: 500px) {
- #chuanhu_chatbot {
- height: calc(100vh - 200px);
- }
- #chuanhu_chatbot .wrap {
- max-height: calc(100vh - 200px - var(--line-sm)*1rem - 2*var(--block-label-margin) );
- }
-}
-/* Devices with a screen width below 500px */
-@media screen and (max-width: 499px) {
- #chuanhu_chatbot {
- height: calc(100vh - 140px);
- }
- #chuanhu_chatbot .wrap {
- max-height: calc(100vh - 140px - var(--line-sm)*1rem - 2*var(--block-label-margin) );
- }
- [data-testid = "bot"] {
- max-width: 95% !important;
- }
- #app_title h1{
- letter-spacing: -1px; font-size: 22px;
- }
-}
-#chuanhu_chatbot .wrap {
- overflow-x: hidden;
-}
-/* Chat message bubbles */
-.message {
- border-radius: var(--radius-xl) !important;
- border: none;
- padding: var(--spacing-xl) !important;
- font-size: var(--text-md) !important;
- line-height: var(--line-md) !important;
- min-height: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl));
- min-width: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl));
-}
-[data-testid = "bot"] {
- max-width: 85%;
- border-bottom-left-radius: 0 !important;
-}
-[data-testid = "user"] {
- max-width: 85%;
- width: auto !important;
- border-bottom-right-radius: 0 !important;
-}
-
-.message.user p {
- white-space: pre-wrap;
-}
-.message .user-message {
- display: block;
- padding: 0 !important;
- white-space: pre-wrap;
-}
-
-.message .md-message p {
- margin-top: 0.6em !important;
- margin-bottom: 0.6em !important;
-}
-.message .md-message p:first-child { margin-top: 0 !important; }
-.message .md-message p:last-of-type { margin-bottom: 0 !important; }
-
-.message .md-message {
- display: block;
- padding: 0 !important;
-}
-.message .raw-message p {
- margin:0 !important;
-}
-.message .raw-message {
- display: block;
- padding: 0 !important;
- white-space: pre-wrap;
-}
-.raw-message.hideM, .md-message.hideM {
- display: none;
-}
-
-/* custom buttons */
-.chuanhu-btn {
- border-radius: 5px;
- /* background-color: #E6E6E6 !important; */
- color: rgba(120, 120, 120, 0.64) !important;
- padding: 4px !important;
- position: absolute;
- right: -22px;
- cursor: pointer !important;
- transition: color .2s ease, background-color .2s ease;
-}
-.chuanhu-btn:hover {
- background-color: rgba(167, 167, 167, 0.25) !important;
- color: unset !important;
-}
-.chuanhu-btn:active {
- background-color: rgba(167, 167, 167, 0.5) !important;
-}
-.chuanhu-btn:focus {
- outline: none;
-}
-.copy-bot-btn {
- /* top: 18px; */
- bottom: 0;
-}
-.toggle-md-btn {
- /* top: 0; */
- bottom: 20px;
-}
-.copy-code-btn {
- position: relative;
- float: right;
- font-size: 1em;
- cursor: pointer;
-}
-
-.message-wrap>div img{
- border-radius: 10px !important;
-}
-
-/* history message */
-.wrap>.history-message {
- padding: 10px !important;
-}
-.history-message {
- /* padding: 0 !important; */
- opacity: 80%;
- display: flex;
- flex-direction: column;
-}
-.history-message>.history-message {
- padding: 0 !important;
-}
-.history-message>.message-wrap {
- padding: 0 !important;
- margin-bottom: 16px;
-}
-.history-message>.message {
- margin-bottom: 16px;
-}
-.wrap>.history-message::after {
- content: "";
- display: block;
- height: 2px;
- background-color: var(--body-text-color-subdued);
- margin-bottom: 10px;
- margin-top: -10px;
- clear: both;
-}
-.wrap>.history-message>:last-child::after {
- content: "仅供查看";
- display: block;
- text-align: center;
- color: var(--body-text-color-subdued);
- font-size: 0.8em;
-}
-
-/* Tables */
-table {
- margin: 1em 0;
- border-collapse: collapse;
- empty-cells: show;
-}
-td,th {
- border: 1.2px solid var(--border-color-primary) !important;
- padding: 0.2em;
-}
-thead {
- background-color: rgba(175,184,193,0.2);
-}
-thead th {
- padding: .5em .2em;
-}
-/* Inline code */
-.message :not(pre) code {
- display: inline;
- white-space: break-spaces;
- font-family: var(--font-mono);
- border-radius: 6px;
- margin: 0 2px 0 2px;
- padding: .2em .4em .1em .4em;
- background-color: rgba(175,184,193,0.2);
-}
-/* Code blocks */
-.message pre,
-.message pre[class*=language-] {
- color: #fff;
- overflow-x: auto;
- overflow-y: hidden;
- margin: .8em 1em 1em 0em !important;
- padding: var(--spacing-xl) 1.2em !important;
- border-radius: var(--radius-lg) !important;
-}
-.message pre code,
-.message pre code[class*=language-] {
- color: #fff;
- padding: 0;
- margin: 0;
- background-color: unset;
- text-shadow: none;
- font-family: var(--font-mono);
-}
-/* Override gradio's unsightly copy button styles */
-pre button[title="copy"] {
- border-radius: 5px;
- transition: background-color .2s ease;
-}
-pre button[title="copy"]:hover {
- background-color: #333232;
-}
-pre button .check {
- color: #fff !important;
- background: var(--neutral-950) !important;
-}
-
-/* Override prism.css */
-.language-css .token.string,
-.style .token.string,
-.token.entity,
-.token.operator,
-.token.url {
- background: none !important;
-}
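Note on the stylesheet above: it themes the ChuanhuChatGPT chat UI through CSS custom properties (light values in :root, dark overrides under .dark) and id selectors such as #chuanhu_chatbot. A minimal sketch, assuming Gradio's standard css parameter and an illustrative file path, of how such a stylesheet is attached to a Blocks app:

import gradio as gr

# Read the custom stylesheet; the path is an assumption for illustration.
with open("assets/custom.css", encoding="utf-8") as f:
    custom_css = f.read()

# gr.Blocks accepts raw CSS via `css`; elem_id makes the #chuanhu_chatbot rules apply.
with gr.Blocks(css=custom_css) as demo:
    chatbot = gr.Chatbot(elem_id="chuanhu_chatbot")

demo.launch()

The elem_id values set in the Python layout have to match the id selectors in the stylesheet (#chuanhu_chatbot, #user_info, #status_display, and so on).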
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/contourpy/util/bokeh_renderer.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/contourpy/util/bokeh_renderer.py
deleted file mode 100644
index 108eda75dda951e1b07ff4ca3603f5ba0e0d1e75..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/contourpy/util/bokeh_renderer.py
+++ /dev/null
@@ -1,318 +0,0 @@
-from __future__ import annotations
-
-import io
-from typing import TYPE_CHECKING, Any
-
-from bokeh.io import export_png, export_svg, show
-from bokeh.io.export import get_screenshot_as_png
-from bokeh.layouts import gridplot
-from bokeh.models.annotations.labels import Label
-from bokeh.palettes import Category10
-from bokeh.plotting import figure
-import numpy as np
-
-from contourpy import FillType, LineType
-from contourpy.util.bokeh_util import filled_to_bokeh, lines_to_bokeh
-from contourpy.util.renderer import Renderer
-
-if TYPE_CHECKING:
- from bokeh.models import GridPlot
- from bokeh.palettes import Palette
- from numpy.typing import ArrayLike
-
- from contourpy._contourpy import FillReturn, LineReturn
-
-
-class BokehRenderer(Renderer):
- _figures: list[figure]
- _layout: GridPlot
- _palette: Palette
- _want_svg: bool
-
- """Utility renderer using Bokeh to render a grid of plots over the same (x, y) range.
-
- Args:
- nrows (int, optional): Number of rows of plots, default ``1``.
- ncols (int, optional): Number of columns of plots, default ``1``.
- figsize (tuple(float, float), optional): Figure size in inches (assuming 100 dpi), default
- ``(9, 9)``.
- show_frame (bool, optional): Whether to show frame and axes ticks, default ``True``.
- want_svg (bool, optional): Whether output is required in SVG format or not, default
- ``False``.
-
- Warning:
- :class:`~contourpy.util.bokeh_renderer.BokehRenderer`, unlike
- :class:`~contourpy.util.mpl_renderer.MplRenderer`, needs to be told in advance if output to
- SVG format will be required later, otherwise it will assume PNG output.
- """
- def __init__(
- self,
- nrows: int = 1,
- ncols: int = 1,
- figsize: tuple[float, float] = (9, 9),
- show_frame: bool = True,
- want_svg: bool = False,
- ) -> None:
- self._want_svg = want_svg
- self._palette = Category10[10]
-
- total_size = 100*np.asarray(figsize, dtype=int) # Assuming 100 dpi.
-
- nfigures = nrows*ncols
- self._figures = []
- backend = "svg" if self._want_svg else "canvas"
- for _ in range(nfigures):
- fig = figure(output_backend=backend)
- fig.xgrid.visible = False
- fig.ygrid.visible = False
- self._figures.append(fig)
- if not show_frame:
- fig.outline_line_color = None # type: ignore[assignment]
- fig.axis.visible = False
-
- self._layout = gridplot(
- self._figures, ncols=ncols, toolbar_location=None, # type: ignore[arg-type]
- width=total_size[0] // ncols, height=total_size[1] // nrows)
-
- def _convert_color(self, color: str) -> str:
- if isinstance(color, str) and color[0] == "C":
- index = int(color[1:])
- color = self._palette[index]
- return color
-
- def _get_figure(self, ax: figure | int) -> figure:
- if isinstance(ax, int):
- ax = self._figures[ax]
- return ax
-
- def filled(
- self,
- filled: FillReturn,
- fill_type: FillType,
- ax: figure | int = 0,
- color: str = "C0",
- alpha: float = 0.7,
- ) -> None:
- """Plot filled contours on a single plot.
-
- Args:
- filled (sequence of arrays): Filled contour data as returned by
- :func:`~contourpy.ContourGenerator.filled`.
- fill_type (FillType): Type of ``filled`` data, as returned by
- :attr:`~contourpy.ContourGenerator.fill_type`.
- ax (int or Bokeh Figure, optional): Which plot to use, default ``0``.
- color (str, optional): Color to plot with. May be a string color or the letter ``"C"``
- followed by an integer in the range ``"C0"`` to ``"C9"`` to use a color from the
- ``Category10`` palette. Default ``"C0"``.
- alpha (float, optional): Opacity to plot with, default ``0.7``.
- """
- fig = self._get_figure(ax)
- color = self._convert_color(color)
- xs, ys = filled_to_bokeh(filled, fill_type)
- if len(xs) > 0:
- fig.multi_polygons(xs=[xs], ys=[ys], color=color, fill_alpha=alpha, line_width=0)
-
- def grid(
- self,
- x: ArrayLike,
- y: ArrayLike,
- ax: figure | int = 0,
- color: str = "black",
- alpha: float = 0.1,
- point_color: str | None = None,
- quad_as_tri_alpha: float = 0,
- ) -> None:
- """Plot quad grid lines on a single plot.
-
- Args:
- x (array-like of shape (ny, nx) or (nx,)): The x-coordinates of the grid points.
- y (array-like of shape (ny, nx) or (ny,)): The y-coordinates of the grid points.
- ax (int or Bokeh Figure, optional): Which plot to use, default ``0``.
- color (str, optional): Color to plot grid lines, default ``"black"``.
- alpha (float, optional): Opacity to plot lines with, default ``0.1``.
- point_color (str, optional): Color to plot grid points or ``None`` if grid points
- should not be plotted, default ``None``.
- quad_as_tri_alpha (float, optional): Opacity to plot ``quad_as_tri`` grid, default
- ``0``.
-
- Colors may be a string color or the letter ``"C"`` followed by an integer in the range
- ``"C0"`` to ``"C9"`` to use a color from the ``Category10`` palette.
-
- Warning:
- ``quad_as_tri_alpha > 0`` plots all quads as though they are unmasked.
- """
- fig = self._get_figure(ax)
- x, y = self._grid_as_2d(x, y)
- xs = [row for row in x] + [row for row in x.T]
- ys = [row for row in y] + [row for row in y.T]
- kwargs = dict(line_color=color, alpha=alpha)
- fig.multi_line(xs, ys, **kwargs)
- if quad_as_tri_alpha > 0:
- # Assumes no quad mask.
- xmid = (0.25*(x[:-1, :-1] + x[1:, :-1] + x[:-1, 1:] + x[1:, 1:])).ravel()
- ymid = (0.25*(y[:-1, :-1] + y[1:, :-1] + y[:-1, 1:] + y[1:, 1:])).ravel()
- fig.multi_line(
- [row for row in np.stack((x[:-1, :-1].ravel(), xmid, x[1:, 1:].ravel()), axis=1)],
- [row for row in np.stack((y[:-1, :-1].ravel(), ymid, y[1:, 1:].ravel()), axis=1)],
- **kwargs)
- fig.multi_line(
- [row for row in np.stack((x[:-1, 1:].ravel(), xmid, x[1:, :-1].ravel()), axis=1)],
- [row for row in np.stack((y[:-1, 1:].ravel(), ymid, y[1:, :-1].ravel()), axis=1)],
- **kwargs)
- if point_color is not None:
- fig.circle(
- x=x.ravel(), y=y.ravel(), fill_color=color, line_color=None, alpha=alpha, size=8)
-
- def lines(
- self,
- lines: LineReturn,
- line_type: LineType,
- ax: figure | int = 0,
- color: str = "C0",
- alpha: float = 1.0,
- linewidth: float = 1,
- ) -> None:
- """Plot contour lines on a single plot.
-
- Args:
- lines (sequence of arrays): Contour line data as returned by
- :func:`~contourpy.ContourGenerator.lines`.
- line_type (LineType): Type of ``lines`` data, as returned by
- :attr:`~contourpy.ContourGenerator.line_type`.
- ax (int or Bokeh Figure, optional): Which plot to use, default ``0``.
- color (str, optional): Color to plot lines. May be a string color or the letter ``"C"``
- followed by an integer in the range ``"C0"`` to ``"C9"`` to use a color from the
- ``Category10`` palette. Default ``"C0"``.
- alpha (float, optional): Opacity to plot lines with, default ``1.0``.
- linewidth (float, optional): Width of lines, default ``1``.
-
- Note:
- Assumes all lines are open line strips not closed line loops.
- """
- fig = self._get_figure(ax)
- color = self._convert_color(color)
- xs, ys = lines_to_bokeh(lines, line_type)
- if len(xs) > 0:
- fig.multi_line(xs, ys, line_color=color, line_alpha=alpha, line_width=linewidth)
-
- def mask(
- self,
- x: ArrayLike,
- y: ArrayLike,
- z: ArrayLike | np.ma.MaskedArray[Any, Any],
- ax: figure | int = 0,
- color: str = "black",
- ) -> None:
- """Plot masked out grid points as circles on a single plot.
-
- Args:
- x (array-like of shape (ny, nx) or (nx,)): The x-coordinates of the grid points.
- y (array-like of shape (ny, nx) or (ny,)): The y-coordinates of the grid points.
- z (masked array of shape (ny, nx)): z-values.
- ax (int or Bokeh Figure, optional): Which plot to use, default ``0``.
- color (str, optional): Circle color, default ``"black"``.
- """
- mask = np.ma.getmask(z) # type: ignore[no-untyped-call]
- if mask is np.ma.nomask:
- return
- fig = self._get_figure(ax)
- color = self._convert_color(color)
- x, y = self._grid_as_2d(x, y)
- fig.circle(x[mask], y[mask], fill_color=color, size=10)
-
- def save(self, filename: str, transparent: bool = False) -> None:
- """Save plots to SVG or PNG file.
-
- Args:
- filename (str): Filename to save to.
- transparent (bool, optional): Whether background should be transparent, default
- ``False``.
-
- Warning:
- To output to SVG file, ``want_svg=True`` must have been passed to the constructor.
- """
- if transparent:
- for fig in self._figures:
- fig.background_fill_color = None # type: ignore[assignment]
- fig.border_fill_color = None # type: ignore[assignment]
-
- if self._want_svg:
- export_svg(self._layout, filename=filename)
- else:
- export_png(self._layout, filename=filename)
-
- def save_to_buffer(self) -> io.BytesIO:
- """Save plots to an ``io.BytesIO`` buffer.
-
- Return:
- BytesIO: PNG image buffer.
- """
- image = get_screenshot_as_png(self._layout)
- buffer = io.BytesIO()
- image.save(buffer, "png")
- return buffer
-
- def show(self) -> None:
- """Show plots in web browser, in usual Bokeh manner.
- """
- show(self._layout)
-
- def title(self, title: str, ax: figure | int = 0, color: str | None = None) -> None:
- """Set the title of a single plot.
-
- Args:
- title (str): Title text.
- ax (int or Bokeh Figure, optional): Which plot to set the title of, default ``0``.
- color (str, optional): Color to set title. May be a string color or the letter ``"C"``
- followed by an integer in the range ``"C0"`` to ``"C9"`` to use a color from the
- ``Category10`` palette. Default ``None`` which is ``black``.
- """
- fig = self._get_figure(ax)
- fig.title = title # type: ignore[assignment]
- fig.title.align = "center" # type: ignore[attr-defined]
- if color is not None:
- fig.title.text_color = self._convert_color(color) # type: ignore[attr-defined]
-
- def z_values(
- self,
- x: ArrayLike,
- y: ArrayLike,
- z: ArrayLike,
- ax: figure | int = 0,
- color: str = "green",
- fmt: str = ".1f",
- quad_as_tri: bool = False,
- ) -> None:
- """Show ``z`` values on a single plot.
-
- Args:
- x (array-like of shape (ny, nx) or (nx,)): The x-coordinates of the grid points.
- y (array-like of shape (ny, nx) or (ny,)): The y-coordinates of the grid points.
- z (array-like of shape (ny, nx)): z-values.
- ax (int or Bokeh Figure, optional): Which plot to use, default ``0``.
- color (str, optional): Color of added text. May be a string color or the letter ``"C"``
- followed by an integer in the range ``"C0"`` to ``"C9"`` to use a color from the
- ``Category10`` palette. Default ``"green"``.
- fmt (str, optional): Format to display z-values, default ``".1f"``.
- quad_as_tri (bool, optional): Whether to show z-values at the ``quad_as_tri`` centres
- of quads.
-
- Warning:
- ``quad_as_tri=True`` shows z-values for all quads, even if masked.
- """
- fig = self._get_figure(ax)
- color = self._convert_color(color)
- x, y = self._grid_as_2d(x, y)
- z = np.asarray(z)
- ny, nx = z.shape
- kwargs = dict(text_color=color, text_align="center", text_baseline="middle")
- for j in range(ny):
- for i in range(nx):
- fig.add_layout(Label(x=x[j, i], y=y[j, i], text=f"{z[j, i]:{fmt}}", **kwargs))
- if quad_as_tri:
- for j in range(ny-1):
- for i in range(nx-1):
- xx = np.mean(x[j:j+2, i:i+2])
- yy = np.mean(y[j:j+2, i:i+2])
- zz = np.mean(z[j:j+2, i:i+2])
- fig.add_layout(Label(x=xx, y=yy, text=f"{zz:{fmt}}", **kwargs))
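Note on the module above: it is contourpy's Bokeh-based utility renderer. A minimal usage sketch based on the docstrings in the deleted file (the sample grid and z-values are illustrative, and Bokeh's PNG/SVG export additionally needs selenium plus a browser webdriver):

import numpy as np
from contourpy import contour_generator
from contourpy.util.bokeh_renderer import BokehRenderer

# Illustrative scalar field on a 20x20 grid.
x, y = np.meshgrid(np.linspace(0.0, 1.0, 20), np.linspace(0.0, 1.0, 20))
z = np.sin(4.0 * x) * np.cos(4.0 * y)

cont_gen = contour_generator(x, y, z)
renderer = BokehRenderer(figsize=(4, 4))

# Filled contours between two levels, then contour lines at a single level.
renderer.filled(cont_gen.filled(-0.5, 0.5), cont_gen.fill_type, color="C0", alpha=0.7)
renderer.lines(cont_gen.lines(0.0), cont_gen.line_type, color="C1")
renderer.save("contours.png")

As the class docstring warns, save writes PNG unless want_svg=True was passed to the constructor.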
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/themes/base.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/themes/base.py
deleted file mode 100644
index a5159301ed6a378181d416b5c476fa6a1d87bfad..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/themes/base.py
+++ /dev/null
@@ -1,1826 +0,0 @@
-from __future__ import annotations
-
-import json
-import re
-import tempfile
-import textwrap
-from pathlib import Path
-from typing import Iterable
-
-import huggingface_hub
-import requests
-import semantic_version as semver
-from gradio_client.documentation import document, set_documentation_group
-from huggingface_hub import CommitOperationAdd
-
-from gradio.themes.utils import (
- colors,
- fonts,
- get_matching_version,
- get_theme_assets,
- sizes,
-)
-from gradio.themes.utils.readme_content import README_CONTENT
-
-set_documentation_group("themes")
-
-
-class ThemeClass:
- def __init__(self):
- self._stylesheets = []
- self.name = None
-
- def _get_theme_css(self):
- css = {}
- dark_css = {}
-
- for attr, val in self.__dict__.items():
- if attr.startswith("_"):
- continue
- if val is None:
- if attr.endswith("_dark"):
- dark_css[attr[:-5]] = None
- continue
- else:
- raise ValueError(
- f"Cannot set '{attr}' to None - only dark mode variables can be None."
- )
- val = str(val)
- pattern = r"(\*)([\w_]+)(\b)"
-
- def repl_func(match):
- full_match = match.group(0)
- if full_match.startswith("*") and full_match.endswith("_dark"):
- raise ValueError(
- f"Cannot refer '{attr}' to '{val}' - dark variable references are automatically used for dark mode attributes, so do not use the _dark suffix in the value."
- )
- if (
- attr.endswith("_dark")
- and full_match.startswith("*")
- and attr[:-5] == full_match[1:]
- ):
- raise ValueError(
- f"Cannot refer '{attr}' to '{val}' - if dark and light mode values are the same, set dark mode version to None."
- )
-
- word = match.group(2)
- word = word.replace("_", "-")
- return f"var(--{word})"
-
- val = re.sub(pattern, repl_func, val)
-
- attr = attr.replace("_", "-")
-
- if attr.endswith("-dark"):
- attr = attr[:-5]
- dark_css[attr] = val
- else:
- css[attr] = val
-
- for attr, val in css.items():
- if attr not in dark_css:
- dark_css[attr] = val
-
- css_code = (
- ":root {\n"
- + "\n".join([f" --{attr}: {val};" for attr, val in css.items()])
- + "\n}"
- )
- dark_css_code = (
- ".dark {\n"
- + "\n".join([f" --{attr}: {val};" for attr, val in dark_css.items()])
- + "\n}"
- )
-
- return f"{css_code}\n{dark_css_code}"
-
- def to_dict(self):
- """Convert the theme into a python dictionary."""
- schema = {"theme": {}}
- for prop in dir(self):
- if (
- not prop.startswith("_")
- or prop.startswith("_font")
- or prop == "_stylesheets"
- or prop == "name"
- ) and isinstance(getattr(self, prop), (list, str)):
- schema["theme"][prop] = getattr(self, prop)
- return schema
-
- @classmethod
- def load(cls, path: str) -> ThemeClass:
- """Load a theme from a json file.
-
- Parameters:
- path: The filepath to read.
- """
- with open(path) as fp:
- return cls.from_dict(json.load(fp, object_hook=fonts.as_font))
-
- @classmethod
- def from_dict(cls, theme: dict[str, dict[str, str]]) -> ThemeClass:
- """Create a theme instance from a dictionary representation.
-
- Parameters:
- theme: The dictionary representation of the theme.
- """
- new_theme = cls()
- for prop, value in theme["theme"].items():
- setattr(new_theme, prop, value)
-
- # For backwards compatibility, copy any attributes present in the base theme but missing from the loaded theme.
- base = Base()
- for attr in base.__dict__:
- if not attr.startswith("_") and not hasattr(new_theme, attr):
- setattr(new_theme, attr, getattr(base, attr))
-
- return new_theme
-
- def dump(self, filename: str):
- """Write the theme to a json file.
-
- Parameters:
- filename: The path to write the theme to.
- """
- Path(filename).write_text(json.dumps(self.to_dict(), cls=fonts.FontEncoder))
-
- @classmethod
- def from_hub(cls, repo_name: str, hf_token: str | None = None):
- """Load a theme from the hub.
-
- This DOES NOT require a HuggingFace account for downloading publicly available themes.
-
- Parameters:
- repo_name: string of the form <author>/<theme_name>@<semantic version expression>. If a semantic version expression is omitted, the latest version will be fetched.
- hf_token: HuggingFace Token. Only needed to download private themes.
- """
- if "@" not in repo_name:
- name, version = repo_name, None
- else:
- name, version = repo_name.split("@")
-
- api = huggingface_hub.HfApi(token=hf_token)
-
- try:
- space_info = api.space_info(name)
- except requests.HTTPError as e:
- raise ValueError(f"The space {name} does not exist") from e
-
- assets = get_theme_assets(space_info)
- matching_version = get_matching_version(assets, version)
-
- if not matching_version:
- raise ValueError(
- f"Cannot find a matching version for expression {version} "
- f"from files {[f.filename for f in assets]}"
- )
-
- theme_file = huggingface_hub.hf_hub_download(
- repo_id=name,
- repo_type="space",
- filename=f"themes/theme_schema@{matching_version.version}.json",
- )
- theme = cls.load(theme_file)
- theme.name = name
- return theme
-
- @staticmethod
- def _get_next_version(space_info: huggingface_hub.hf_api.SpaceInfo) -> str:
- assets = get_theme_assets(space_info)
- latest_version = max(assets, key=lambda asset: asset.version).version
- return str(latest_version.next_patch())
-
- @staticmethod
- def _theme_version_exists(
- space_info: huggingface_hub.hf_api.SpaceInfo, version: str
- ) -> bool:
- assets = get_theme_assets(space_info)
- return any(a.version == semver.Version(version) for a in assets)
-
- def push_to_hub(
- self,
- repo_name: str,
- org_name: str | None = None,
- version: str | None = None,
- hf_token: str | None = None,
- theme_name: str | None = None,
- description: str | None = None,
- private: bool = False,
- ):
- """Upload a theme to the HuggingFace hub.
-
- This requires a HuggingFace account.
-
- Parameters:
- repo_name: The name of the repository to store the theme assets, e.g. 'my_theme' or 'sunset'.
- org_name: The name of the org to save the space in. If None (the default), the username corresponding to the logged-in user or hf_token is used.
- version: A semantic version tag for the theme. Bumping the version tag lets you publish updates to a theme without changing the look of applications that already loaded your theme.
- hf_token: API token for your HuggingFace account.
- theme_name: Name for the theme. If None, defaults to repo_name.
- description: A long-form description of your theme.
- """
-
- from gradio import __version__
-
- api = huggingface_hub.HfApi()
-
- if not hf_token:
- try:
- author = huggingface_hub.whoami()["name"]
- except OSError as e:
- raise ValueError(
- "In order to push to hub, log in via `huggingface-cli login` "
- "or provide a theme_token to push_to_hub. For more information "
- "see https://huggingface.co/docs/huggingface_hub/quick-start#login"
- ) from e
- else:
- author = huggingface_hub.whoami(token=hf_token)["name"]
-
- space_id = f"{org_name or author}/{repo_name}"
-
- try:
- space_info = api.space_info(space_id)
- except requests.HTTPError:
- space_info = None
-
- space_exists = space_info is not None
-
- # If no version, set the version to next patch release
- if not version:
- version = self._get_next_version(space_info) if space_exists else "0.0.1"
- else:
- _ = semver.Version(version)
-
- if space_exists and self._theme_version_exists(space_info, version):
- raise ValueError(
- f"The space {space_id} already has a "
- f"theme with version {version}. See: themes/theme_schema@{version}.json. "
- "To manually override this version, use the HuggingFace hub UI."
- )
-
- theme_name = theme_name or repo_name
-
- with tempfile.NamedTemporaryFile(
- mode="w", delete=False, suffix=".json"
- ) as css_file:
- contents = self.to_dict()
- contents["version"] = version
- json.dump(contents, css_file, cls=fonts.FontEncoder)
- with tempfile.NamedTemporaryFile(mode="w", delete=False) as readme_file:
- readme_content = README_CONTENT.format(
- theme_name=theme_name,
- description=description or "Add a description of this theme here!",
- author=author,
- gradio_version=__version__,
- )
- readme_file.write(textwrap.dedent(readme_content))
- with tempfile.NamedTemporaryFile(mode="w", delete=False) as app_file:
- contents = (Path(__file__).parent / "app.py").read_text()
- contents = re.sub(
- r"theme=gr.themes.Default\(\)",
- f"theme='{space_id}'",
- contents,
- )
- contents = re.sub(r"{THEME}", theme_name or repo_name, contents)
- contents = re.sub(r"{AUTHOR}", org_name or author, contents)
- contents = re.sub(r"{SPACE_NAME}", repo_name, contents)
- app_file.write(contents)
-
- operations = [
- CommitOperationAdd(
- path_in_repo=f"themes/theme_schema@{version}.json",
- path_or_fileobj=css_file.name,
- ),
- CommitOperationAdd(
- path_in_repo="README.md", path_or_fileobj=readme_file.name
- ),
- CommitOperationAdd(path_in_repo="app.py", path_or_fileobj=app_file.name),
- ]
-
- huggingface_hub.create_repo(
- space_id,
- repo_type="space",
- space_sdk="gradio",
- token=hf_token,
- exist_ok=True,
- private=private,
- )
-
- api.create_commit(
- repo_id=space_id,
- commit_message="Updating theme",
- repo_type="space",
- operations=operations,
- token=hf_token,
- )
- url = f"https://huggingface.co/spaces/{space_id}"
- print(f"See your theme here! {url}")
- return url
-
-
-@document("push_to_hub", "from_hub", "load", "dump", "from_dict", "to_dict")
-class Base(ThemeClass):
- def __init__(
- self,
- *,
- primary_hue: colors.Color | str = colors.blue,
- secondary_hue: colors.Color | str = colors.blue,
- neutral_hue: colors.Color | str = colors.gray,
- text_size: sizes.Size | str = sizes.text_md,
- spacing_size: sizes.Size | str = sizes.spacing_md,
- radius_size: sizes.Size | str = sizes.radius_md,
- font: fonts.Font
- | str
- | Iterable[fonts.Font | str] = (
- fonts.GoogleFont("Source Sans Pro"),
- "ui-sans-serif",
- "system-ui",
- "sans-serif",
- ),
- font_mono: fonts.Font
- | str
- | Iterable[fonts.Font | str] = (
- fonts.GoogleFont("IBM Plex Mono"),
- "ui-monospace",
- "Consolas",
- "monospace",
- ),
- ):
- """
- Parameters:
- primary_hue: The primary hue of the theme. Load a preset, like gradio.themes.colors.green (or just the string "green"), or pass your own gradio.themes.utils.Color object.
- secondary_hue: The secondary hue of the theme. Load a preset, like gradio.themes.colors.green (or just the string "green"), or pass your own gradio.themes.utils.Color object.
- neutral_hue: The neutral hue of the theme. Load a preset, like gradio.themes.colors.green (or just the string "green"), or pass your own gradio.themes.utils.Color object.
- text_size: The size of the text. Load a preset, like gradio.themes.sizes.text_sm (or just the string "sm"), or pass your own gradio.themes.utils.Size object.
- spacing_size: The size of the spacing. Load a preset, like gradio.themes.sizes.spacing_sm (or just the string "sm"), or pass your own gradio.themes.utils.Size object.
- radius_size: The radius size of corners. Load a preset, like gradio.themes.sizes.radius_sm (or just the string "sm"), or pass your own gradio.themes.utils.Size object.
- font: The primary font to use for the theme. Pass a string for a system font, or a gradio.themes.font.GoogleFont object to load a font from Google Fonts. Pass a list of fonts for fallbacks.
- font_mono: The monospace font to use for the theme, applies to code. Pass a string for a system font, or a gradio.themes.font.GoogleFont object to load a font from Google Fonts. Pass a list of fonts for fallbacks.
- """
-
- self.name = "base"
-
- def expand_shortcut(shortcut, mode="color", prefix=None):
- if not isinstance(shortcut, str):
- return shortcut
- if mode == "color":
- for color in colors.Color.all:
- if color.name == shortcut:
- return color
- raise ValueError(f"Color shortcut {shortcut} not found.")
- elif mode == "size":
- for size in sizes.Size.all:
- if size.name == f"{prefix}_{shortcut}":
- return size
- raise ValueError(f"Size shortcut {shortcut} not found.")
-
- primary_hue = expand_shortcut(primary_hue, mode="color")
- secondary_hue = expand_shortcut(secondary_hue, mode="color")
- neutral_hue = expand_shortcut(neutral_hue, mode="color")
- text_size = expand_shortcut(text_size, mode="size", prefix="text")
- spacing_size = expand_shortcut(spacing_size, mode="size", prefix="spacing")
- radius_size = expand_shortcut(radius_size, mode="size", prefix="radius")
-
- # Hue ranges
- self.primary_50 = primary_hue.c50
- self.primary_100 = primary_hue.c100
- self.primary_200 = primary_hue.c200
- self.primary_300 = primary_hue.c300
- self.primary_400 = primary_hue.c400
- self.primary_500 = primary_hue.c500
- self.primary_600 = primary_hue.c600
- self.primary_700 = primary_hue.c700
- self.primary_800 = primary_hue.c800
- self.primary_900 = primary_hue.c900
- self.primary_950 = primary_hue.c950
-
- self.secondary_50 = secondary_hue.c50
- self.secondary_100 = secondary_hue.c100
- self.secondary_200 = secondary_hue.c200
- self.secondary_300 = secondary_hue.c300
- self.secondary_400 = secondary_hue.c400
- self.secondary_500 = secondary_hue.c500
- self.secondary_600 = secondary_hue.c600
- self.secondary_700 = secondary_hue.c700
- self.secondary_800 = secondary_hue.c800
- self.secondary_900 = secondary_hue.c900
- self.secondary_950 = secondary_hue.c950
-
- self.neutral_50 = neutral_hue.c50
- self.neutral_100 = neutral_hue.c100
- self.neutral_200 = neutral_hue.c200
- self.neutral_300 = neutral_hue.c300
- self.neutral_400 = neutral_hue.c400
- self.neutral_500 = neutral_hue.c500
- self.neutral_600 = neutral_hue.c600
- self.neutral_700 = neutral_hue.c700
- self.neutral_800 = neutral_hue.c800
- self.neutral_900 = neutral_hue.c900
- self.neutral_950 = neutral_hue.c950
-
- # Spacing
- self.spacing_xxs = spacing_size.xxs
- self.spacing_xs = spacing_size.xs
- self.spacing_sm = spacing_size.sm
- self.spacing_md = spacing_size.md
- self.spacing_lg = spacing_size.lg
- self.spacing_xl = spacing_size.xl
- self.spacing_xxl = spacing_size.xxl
-
- self.radius_xxs = radius_size.xxs
- self.radius_xs = radius_size.xs
- self.radius_sm = radius_size.sm
- self.radius_md = radius_size.md
- self.radius_lg = radius_size.lg
- self.radius_xl = radius_size.xl
- self.radius_xxl = radius_size.xxl
-
- self.text_xxs = text_size.xxs
- self.text_xs = text_size.xs
- self.text_sm = text_size.sm
- self.text_md = text_size.md
- self.text_lg = text_size.lg
- self.text_xl = text_size.xl
- self.text_xxl = text_size.xxl
-
- # Font
- if not isinstance(font, Iterable):
- font = [font]
- self._font = [
- fontfam if isinstance(fontfam, fonts.Font) else fonts.Font(fontfam)
- for fontfam in font
- ]
- if not isinstance(font_mono, Iterable):
- font_mono = [font_mono]
- self._font_mono = [
- fontfam if isinstance(fontfam, fonts.Font) else fonts.Font(fontfam)
- for fontfam in font_mono
- ]
- self.font = ", ".join(str(font) for font in self._font)
- self.font_mono = ", ".join(str(font) for font in self._font_mono)
-
- self._stylesheets = []
- for font in self._font + self._font_mono:
- font_stylesheet = font.stylesheet()
- if font_stylesheet:
- self._stylesheets.append(font_stylesheet)
-
- self.set()
-
- def set(
- self,
- *,
- # Body Attributes: These set the values for the entire body of the app.
- body_background_fill=None,
- body_background_fill_dark=None,
- body_text_color=None,
- body_text_color_dark=None,
- body_text_size=None,
- body_text_color_subdued=None,
- body_text_color_subdued_dark=None,
- body_text_weight=None,
- embed_radius=None,
- # Element Colors: These set the colors for common elements.
- background_fill_primary=None,
- background_fill_primary_dark=None,
- background_fill_secondary=None,
- background_fill_secondary_dark=None,
- border_color_accent=None,
- border_color_accent_dark=None,
- border_color_accent_subdued=None,
- border_color_accent_subdued_dark=None,
- border_color_primary=None,
- border_color_primary_dark=None,
- color_accent=None,
- color_accent_soft=None,
- color_accent_soft_dark=None,
- # Text: This sets the text styling for text elements.
- link_text_color=None,
- link_text_color_dark=None,
- link_text_color_active=None,
- link_text_color_active_dark=None,
- link_text_color_hover=None,
- link_text_color_hover_dark=None,
- link_text_color_visited=None,
- link_text_color_visited_dark=None,
- prose_text_size=None,
- prose_text_weight=None,
- prose_header_text_weight=None,
- # Shadows: These set the high-level shadow rendering styles. These variables are often referenced by other component-specific shadow variables.
- shadow_drop=None,
- shadow_drop_lg=None,
- shadow_inset=None,
- shadow_spread=None,
- shadow_spread_dark=None,
- # Layout Atoms: These set the style for common layout elements, such as the blocks that wrap components.
- block_background_fill=None,
- block_background_fill_dark=None,
- block_border_color=None,
- block_border_color_dark=None,
- block_border_width=None,
- block_border_width_dark=None,
- block_info_text_color=None,
- block_info_text_color_dark=None,
- block_info_text_size=None,
- block_info_text_weight=None,
- block_label_background_fill=None,
- block_label_background_fill_dark=None,
- block_label_border_color=None,
- block_label_border_color_dark=None,
- block_label_border_width=None,
- block_label_border_width_dark=None,
- block_label_shadow=None,
- block_label_text_color=None,
- block_label_text_color_dark=None,
- block_label_margin=None,
- block_label_padding=None,
- block_label_radius=None,
- block_label_right_radius=None,
- block_label_text_size=None,
- block_label_text_weight=None,
- block_padding=None,
- block_radius=None,
- block_shadow=None,
- block_shadow_dark=None,
- block_title_background_fill=None,
- block_title_background_fill_dark=None,
- block_title_border_color=None,
- block_title_border_color_dark=None,
- block_title_border_width=None,
- block_title_border_width_dark=None,
- block_title_text_color=None,
- block_title_text_color_dark=None,
- block_title_padding=None,
- block_title_radius=None,
- block_title_text_size=None,
- block_title_text_weight=None,
- container_radius=None,
- form_gap_width=None,
- layout_gap=None,
- panel_background_fill=None,
- panel_background_fill_dark=None,
- panel_border_color=None,
- panel_border_color_dark=None,
- panel_border_width=None,
- panel_border_width_dark=None,
- section_header_text_size=None,
- section_header_text_weight=None,
- # Component Atoms: These set the style for elements within components.
- chatbot_code_background_color=None,
- chatbot_code_background_color_dark=None,
- checkbox_background_color=None,
- checkbox_background_color_dark=None,
- checkbox_background_color_focus=None,
- checkbox_background_color_focus_dark=None,
- checkbox_background_color_hover=None,
- checkbox_background_color_hover_dark=None,
- checkbox_background_color_selected=None,
- checkbox_background_color_selected_dark=None,
- checkbox_border_color=None,
- checkbox_border_color_dark=None,
- checkbox_border_color_focus=None,
- checkbox_border_color_focus_dark=None,
- checkbox_border_color_hover=None,
- checkbox_border_color_hover_dark=None,
- checkbox_border_color_selected=None,
- checkbox_border_color_selected_dark=None,
- checkbox_border_radius=None,
- checkbox_border_width=None,
- checkbox_border_width_dark=None,
- checkbox_check=None,
- radio_circle=None,
- checkbox_shadow=None,
- checkbox_label_background_fill=None,
- checkbox_label_background_fill_dark=None,
- checkbox_label_background_fill_hover=None,
- checkbox_label_background_fill_hover_dark=None,
- checkbox_label_background_fill_selected=None,
- checkbox_label_background_fill_selected_dark=None,
- checkbox_label_border_color=None,
- checkbox_label_border_color_dark=None,
- checkbox_label_border_color_hover=None,
- checkbox_label_border_color_hover_dark=None,
- checkbox_label_border_width=None,
- checkbox_label_border_width_dark=None,
- checkbox_label_gap=None,
- checkbox_label_padding=None,
- checkbox_label_shadow=None,
- checkbox_label_text_size=None,
- checkbox_label_text_weight=None,
- checkbox_label_text_color=None,
- checkbox_label_text_color_dark=None,
- checkbox_label_text_color_selected=None,
- checkbox_label_text_color_selected_dark=None,
- error_background_fill=None,
- error_background_fill_dark=None,
- error_border_color=None,
- error_border_color_dark=None,
- error_border_width=None,
- error_border_width_dark=None,
- error_text_color=None,
- error_text_color_dark=None,
- error_icon_color=None,
- error_icon_color_dark=None,
- input_background_fill=None,
- input_background_fill_dark=None,
- input_background_fill_focus=None,
- input_background_fill_focus_dark=None,
- input_background_fill_hover=None,
- input_background_fill_hover_dark=None,
- input_border_color=None,
- input_border_color_dark=None,
- input_border_color_focus=None,
- input_border_color_focus_dark=None,
- input_border_color_hover=None,
- input_border_color_hover_dark=None,
- input_border_width=None,
- input_border_width_dark=None,
- input_padding=None,
- input_placeholder_color=None,
- input_placeholder_color_dark=None,
- input_radius=None,
- input_shadow=None,
- input_shadow_dark=None,
- input_shadow_focus=None,
- input_shadow_focus_dark=None,
- input_text_size=None,
- input_text_weight=None,
- loader_color=None,
- loader_color_dark=None,
- slider_color=None,
- slider_color_dark=None,
- stat_background_fill=None,
- stat_background_fill_dark=None,
- table_border_color=None,
- table_border_color_dark=None,
- table_even_background_fill=None,
- table_even_background_fill_dark=None,
- table_odd_background_fill=None,
- table_odd_background_fill_dark=None,
- table_radius=None,
- table_row_focus=None,
- table_row_focus_dark=None,
- # Buttons: These set the style for buttons.
- button_border_width=None,
- button_border_width_dark=None,
- button_shadow=None,
- button_shadow_active=None,
- button_shadow_hover=None,
- button_transition=None,
- button_large_padding=None,
- button_large_radius=None,
- button_large_text_size=None,
- button_large_text_weight=None,
- button_small_padding=None,
- button_small_radius=None,
- button_small_text_size=None,
- button_small_text_weight=None,
- button_primary_background_fill=None,
- button_primary_background_fill_dark=None,
- button_primary_background_fill_hover=None,
- button_primary_background_fill_hover_dark=None,
- button_primary_border_color=None,
- button_primary_border_color_dark=None,
- button_primary_border_color_hover=None,
- button_primary_border_color_hover_dark=None,
- button_primary_text_color=None,
- button_primary_text_color_dark=None,
- button_primary_text_color_hover=None,
- button_primary_text_color_hover_dark=None,
- button_secondary_background_fill=None,
- button_secondary_background_fill_dark=None,
- button_secondary_background_fill_hover=None,
- button_secondary_background_fill_hover_dark=None,
- button_secondary_border_color=None,
- button_secondary_border_color_dark=None,
- button_secondary_border_color_hover=None,
- button_secondary_border_color_hover_dark=None,
- button_secondary_text_color=None,
- button_secondary_text_color_dark=None,
- button_secondary_text_color_hover=None,
- button_secondary_text_color_hover_dark=None,
- button_cancel_background_fill=None,
- button_cancel_background_fill_dark=None,
- button_cancel_background_fill_hover=None,
- button_cancel_background_fill_hover_dark=None,
- button_cancel_border_color=None,
- button_cancel_border_color_dark=None,
- button_cancel_border_color_hover=None,
- button_cancel_border_color_hover_dark=None,
- button_cancel_text_color=None,
- button_cancel_text_color_dark=None,
- button_cancel_text_color_hover=None,
- button_cancel_text_color_hover_dark=None,
- ) -> Base:
- """
- Parameters:
- body_background_fill: The background of the entire app.
- body_background_fill_dark: The background of the entire app in dark mode.
- body_text_color: The default text color.
- body_text_color_dark: The default text color in dark mode.
- body_text_size: The default text size.
- body_text_color_subdued: The text color used for softer, less important text.
- body_text_color_subdued_dark: The text color used for softer, less important text in dark mode.
- body_text_weight: The default text weight.
- embed_radius: The corner radius used for embedding when the app is embedded within a page.
- background_fill_primary: The background primarily used for items placed directly on the page.
- background_fill_primary_dark: The background primarily used for items placed directly on the page in dark mode.
- background_fill_secondary: The background primarily used for items placed on top of another item.
- background_fill_secondary_dark: The background primarily used for items placed on top of another item in dark mode.
- border_color_accent: The border color used for accented items.
- border_color_accent_dark: The border color used for accented items in dark mode.
- border_color_accent_subdued: The subdued border color for accented items.
- border_color_accent_subdued_dark: The subdued border color for accented items in dark mode.
- border_color_primary: The border color primarily used for items placed directly on the page.
- border_color_primary_dark: The border color primarily used for items placed directly on the page in dark mode.
- color_accent: The color used for accented items.
- color_accent_soft: The softer color used for accented items.
- color_accent_soft_dark: The softer color used for accented items in dark mode.
- link_text_color: The text color used for links.
- link_text_color_dark: The text color used for links in dark mode.
- link_text_color_active: The text color used for links when they are active.
- link_text_color_active_dark: The text color used for links when they are active in dark mode.
- link_text_color_hover: The text color used for links when they are hovered over.
- link_text_color_hover_dark: The text color used for links when they are hovered over in dark mode.
- link_text_color_visited: The text color used for links when they have been visited.
- link_text_color_visited_dark: The text color used for links when they have been visited in dark mode.
- prose_text_size: The text size used for markdown and other prose.
- prose_text_weight: The text weight used for markdown and other prose.
- prose_header_text_weight: The text weight of a header used for markdown and other prose.
- shadow_drop: Drop shadow used by other shadowed items.
- shadow_drop_lg: Larger drop shadow used by other shadowed items.
- shadow_inset: Inset shadow used by other shadowed items.
- shadow_spread: Size of shadow spread used by shadowed items.
- shadow_spread_dark: Size of shadow spread used by shadowed items in dark mode.
- block_background_fill: The background around an item.
- block_background_fill_dark: The background around an item in dark mode.
- block_border_color: The border color around an item.
- block_border_color_dark: The border color around an item in dark mode.
- block_border_width: The border width around an item.
- block_border_width_dark: The border width around an item in dark mode.
- block_info_text_color: The color of the info text.
- block_info_text_color_dark: The color of the info text in dark mode.
- block_info_text_size: The size of the info text.
- block_info_text_weight: The weight of the info text.
- block_label_background_fill: The background of the title label of a media element (e.g. image).
- block_label_background_fill_dark: The background of the title label of a media element (e.g. image) in dark mode.
- block_label_border_color: The border color of the title label of a media element (e.g. image).
- block_label_border_color_dark: The border color of the title label of a media element (e.g. image) in dark mode.
- block_label_border_width: The border width of the title label of a media element (e.g. image).
- block_label_border_width_dark: The border width of the title label of a media element (e.g. image) in dark mode.
- block_label_shadow: The shadow of the title label of a media element (e.g. image).
- block_label_text_color: The text color of the title label of a media element (e.g. image).
- block_label_text_color_dark: The text color of the title label of a media element (e.g. image) in dark mode.
- block_label_margin: The margin of the title label of a media element (e.g. image) from its surrounding container.
- block_label_padding: The padding of the title label of a media element (e.g. image).
- block_label_radius: The corner radius of the title label of a media element (e.g. image).
- block_label_right_radius: The corner radius of a right-aligned helper label.
- block_label_text_size: The text size of the title label of a media element (e.g. image).
- block_label_text_weight: The text weight of the title label of a media element (e.g. image).
- block_padding: The padding around an item.
- block_radius: The corner radius around an item.
- block_shadow: The shadow under an item.
- block_shadow_dark: The shadow under an item in dark mode.
- block_title_background_fill: The background of the title of a form element (e.g. textbox).
- block_title_background_fill_dark: The background of the title of a form element (e.g. textbox) in dark mode.
- block_title_border_color: The border color of the title of a form element (e.g. textbox).
- block_title_border_color_dark: The border color of the title of a form element (e.g. textbox) in dark mode.
- block_title_border_width: The border width of the title of a form element (e.g. textbox).
- block_title_border_width_dark: The border width of the title of a form element (e.g. textbox) in dark mode.
- block_title_text_color: The text color of the title of a form element (e.g. textbox).
- block_title_text_color_dark: The text color of the title of a form element (e.g. textbox) in dark mode.
- block_title_padding: The padding of the title of a form element (e.g. textbox).
- block_title_radius: The corner radius of the title of a form element (e.g. textbox).
- block_title_text_size: The text size of the title of a form element (e.g. textbox).
- block_title_text_weight: The text weight of the title of a form element (e.g. textbox).
- container_radius: The corner radius of a layout component that holds other content.
- form_gap_width: The border gap between form elements (e.g. consecutive textboxes).
- layout_gap: The gap between items within a row or column.
- panel_background_fill: The background of a panel.
- panel_background_fill_dark: The background of a panel in dark mode.
- panel_border_color: The border color of a panel.
- panel_border_color_dark: The border color of a panel in dark mode.
- panel_border_width: The border width of a panel.
- panel_border_width_dark: The border width of a panel in dark mode.
- section_header_text_size: The text size of a section header (e.g. tab name).
- section_header_text_weight: The text weight of a section header (e.g. tab name).
- chatbot_code_background_color: The background color of code blocks in the chatbot.
- chatbot_code_background_color_dark: The background color of code blocks in the chatbot in dark mode.
- checkbox_background_color: The background of a checkbox square or radio circle.
- checkbox_background_color_dark: The background of a checkbox square or radio circle in dark mode.
- checkbox_background_color_focus: The background of a checkbox square or radio circle when focused.
- checkbox_background_color_focus_dark: The background of a checkbox square or radio circle when focused in dark mode.
- checkbox_background_color_hover: The background of a checkbox square or radio circle when hovered over.
- checkbox_background_color_hover_dark: The background of a checkbox square or radio circle when hovered over in dark mode.
- checkbox_background_color_selected: The background of a checkbox square or radio circle when selected.
- checkbox_background_color_selected_dark: The background of a checkbox square or radio circle when selected in dark mode.
- checkbox_border_color: The border color of a checkbox square or radio circle.
- checkbox_border_color_dark: The border color of a checkbox square or radio circle in dark mode.
- checkbox_border_color_focus: The border color of a checkbox square or radio circle when focused.
- checkbox_border_color_focus_dark: The border color of a checkbox square or radio circle when focused in dark mode.
- checkbox_border_color_hover: The border color of a checkbox square or radio circle when hovered over.
- checkbox_border_color_hover_dark: The border color of a checkbox square or radio circle when hovered over in dark mode.
- checkbox_border_color_selected: The border color of a checkbox square or radio circle when selected.
- checkbox_border_color_selected_dark: The border color of a checkbox square or radio circle when selected in dark mode.
- checkbox_border_radius: The corner radius of a checkbox square.
- checkbox_border_width: The border width of a checkbox square or radio circle.
- checkbox_border_width_dark: The border width of a checkbox square or radio circle in dark mode.
- checkbox_check: The checkmark visual of a checkbox square.
- radio_circle: The circle visual of a radio circle.
- checkbox_shadow: The shadow of a checkbox square or radio circle.
- checkbox_label_background_fill: The background of the surrounding button of a checkbox or radio element.
- checkbox_label_background_fill_dark: The background of the surrounding button of a checkbox or radio element in dark mode.
- checkbox_label_background_fill_hover: The background of the surrounding button of a checkbox or radio element when hovered over.
- checkbox_label_background_fill_hover_dark: The background of the surrounding button of a checkbox or radio element when hovered over in dark mode.
- checkbox_label_background_fill_selected: The background of the surrounding button of a checkbox or radio element when selected.
- checkbox_label_background_fill_selected_dark: The background of the surrounding button of a checkbox or radio element when selected in dark mode.
- checkbox_label_border_color: The border color of the surrounding button of a checkbox or radio element.
- checkbox_label_border_color_dark: The border color of the surrounding button of a checkbox or radio element in dark mode.
- checkbox_label_border_color_hover: The border color of the surrounding button of a checkbox or radio element when hovered over.
- checkbox_label_border_color_hover_dark: The border color of the surrounding button of a checkbox or radio element when hovered over in dark mode.
- checkbox_label_border_width: The border width of the surrounding button of a checkbox or radio element.
- checkbox_label_border_width_dark: The border width of the surrounding button of a checkbox or radio element in dark mode.
- checkbox_label_gap: The gap between consecutive checkbox or radio elements.
- checkbox_label_padding: The padding of the surrounding button of a checkbox or radio element.
- checkbox_label_shadow: The shadow of the surrounding button of a checkbox or radio element.
- checkbox_label_text_size: The text size of the label accompanying a checkbox or radio element.
- checkbox_label_text_weight: The text weight of the label accompanying a checkbox or radio element.
- checkbox_label_text_color: The text color of the label accompanying a checkbox or radio element.
- checkbox_label_text_color_dark: The text color of the label accompanying a checkbox or radio element in dark mode.
- checkbox_label_text_color_selected: The text color of the label accompanying a checkbox or radio element when selected.
- checkbox_label_text_color_selected_dark: The text color of the label accompanying a checkbox or radio element when selected in dark mode.
- error_background_fill: The background of an error message.
- error_background_fill_dark: The background of an error message in dark mode.
- error_border_color: The border color of an error message.
- error_border_color_dark: The border color of an error message in dark mode.
- error_border_width: The border width of an error message.
- error_border_width_dark: The border width of an error message in dark mode.
- error_text_color: The text color of an error message.
- error_text_color_dark: The text color of an error message in dark mode.
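- error_icon_color: The icon color of an error message.
- error_icon_color_dark: The icon color of an error message in dark mode.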
- input_background_fill: The background of an input field.
- input_background_fill_dark: The background of an input field in dark mode.
- input_background_fill_focus: The background of an input field when focused.
- input_background_fill_focus_dark: The background of an input field when focused in dark mode.
- input_background_fill_hover: The background of an input field when hovered over.
- input_background_fill_hover_dark: The background of an input field when hovered over in dark mode.
- input_border_color: The border color of an input field.
- input_border_color_dark: The border color of an input field in dark mode.
- input_border_color_focus: The border color of an input field when focused.
- input_border_color_focus_dark: The border color of an input field when focused in dark mode.
- input_border_color_hover: The border color of an input field when hovered over.
- input_border_color_hover_dark: The border color of an input field when hovered over in dark mode.
- input_border_width: The border width of an input field.
- input_border_width_dark: The border width of an input field in dark mode.
- input_padding: The padding of an input field.
- input_placeholder_color: The placeholder text color of an input field.
- input_placeholder_color_dark: The placeholder text color of an input field in dark mode.
- input_radius: The corner radius of an input field.
- input_shadow: The shadow of an input field.
- input_shadow_dark: The shadow of an input field in dark mode.
- input_shadow_focus: The shadow of an input field when focused.
- input_shadow_focus_dark: The shadow of an input field when focused in dark mode.
- input_text_size: The text size of an input field.
- input_text_weight: The text weight of an input field.
- loader_color: The color of the loading animation while a request is pending.
- loader_color_dark: The color of the loading animation while a request is pending in dark mode.
- slider_color: The color of the slider in a range element.
- slider_color_dark: The color of the slider in a range element in dark mode.
- stat_background_fill: The background used for stats visuals (e.g. confidence bars in label).
- stat_background_fill_dark: The background used for stats visuals (e.g. confidence bars in label) in dark mode.
- table_border_color: The border color of a table.
- table_border_color_dark: The border color of a table in dark mode.
- table_even_background_fill: The background of even rows in a table.
- table_even_background_fill_dark: The background of even rows in a table in dark mode.
- table_odd_background_fill: The background of odd rows in a table.
- table_odd_background_fill_dark: The background of odd rows in a table in dark mode.
- table_radius: The corner radius of a table.
- table_row_focus: The background of a focused row in a table.
- table_row_focus_dark: The background of a focused row in a table in dark mode.
- button_border_width: The border width of a button.
- button_border_width_dark: The border width of a button in dark mode.
- button_cancel_background_fill: The background of a button of "cancel" variant.
- button_cancel_background_fill_dark: The background of a button of "cancel" variant in dark mode.
- button_cancel_background_fill_hover: The background of a button of "cancel" variant when hovered over.
- button_cancel_background_fill_hover_dark: The background of a button of "cancel" variant when hovered over in dark mode.
- button_cancel_border_color: The border color of a button of "cancel" variant.
- button_cancel_border_color_dark: The border color of a button of "cancel" variant in dark mode.
- button_cancel_border_color_hover: The border color of a button of "cancel" variant when hovered over.
- button_cancel_border_color_hover_dark: The border color of a button of "cancel" variant when hovered over in dark mode.
- button_cancel_text_color: The text color of a button of "cancel" variant.
- button_cancel_text_color_dark: The text color of a button of "cancel" variant in dark mode.
- button_cancel_text_color_hover: The text color of a button of "cancel" variant when hovered over.
- button_cancel_text_color_hover_dark: The text color of a button of "cancel" variant when hovered over in dark mode.
- button_large_padding: The padding of a button with the default "large" size.
- button_large_radius: The corner radius of a button with the default "large" size.
- button_large_text_size: The text size of a button with the default "large" size.
- button_large_text_weight: The text weight of a button with the default "large" size.
- button_primary_background_fill: The background of a button of "primary" variant.
- button_primary_background_fill_dark: The background of a button of "primary" variant in dark mode.
- button_primary_background_fill_hover: The background of a button of "primary" variant when hovered over.
- button_primary_background_fill_hover_dark: The background of a button of "primary" variant when hovered over in dark mode.
- button_primary_border_color: The border color of a button of "primary" variant.
- button_primary_border_color_dark: The border color of a button of "primary" variant in dark mode.
- button_primary_border_color_hover: The border color of a button of "primary" variant when hovered over.
- button_primary_border_color_hover_dark: The border color of a button of "primary" variant when hovered over in dark mode.
- button_primary_text_color: The text color of a button of "primary" variant.
- button_primary_text_color_dark: The text color of a button of "primary" variant in dark mode.
- button_primary_text_color_hover: The text color of a button of "primary" variant when hovered over.
- button_primary_text_color_hover_dark: The text color of a button of "primary" variant when hovered over in dark mode.
- button_secondary_background_fill: The background of a button of default "secondary" variant.
- button_secondary_background_fill_dark: The background of a button of default "secondary" variant in dark mode.
- button_secondary_background_fill_hover: The background of a button of default "secondary" variant when hovered over.
- button_secondary_background_fill_hover_dark: The background of a button of default "secondary" variant when hovered over in dark mode.
- button_secondary_border_color: The border color of a button of default "secondary" variant.
- button_secondary_border_color_dark: The border color of a button of default "secondary" variant in dark mode.
- button_secondary_border_color_hover: The border color of a button of default "secondary" variant when hovered over.
- button_secondary_border_color_hover_dark: The border color of a button of default "secondary" variant when hovered over in dark mode.
- button_secondary_text_color: The text color of a button of default "secondary" variant.
- button_secondary_text_color_dark: The text color of a button of default "secondary" variant in dark mode.
- button_secondary_text_color_hover: The text color of a button of default "secondary" variant when hovered over.
- button_secondary_text_color_hover_dark: The text color of a button of default "secondary" variant when hovered over in dark mode.
- button_shadow: The shadow under a button.
- button_shadow_active: The shadow under a button when pressed.
- button_shadow_hover: The shadow under a button when hovered over.
- button_small_padding: The padding of a button set to "small" size.
- button_small_radius: The corner radius of a button set to "small" size.
- button_small_text_size: The text size of a button set to "small" size.
- button_small_text_weight: The text weight of a button set to "small" size.
- button_transition: The transition animation duration of a button between regular, hover, and focused states.
- """
-
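- # Every attribute below follows the same fallback chain: an explicit
- # argument wins, otherwise a previously set value is kept (via getattr),
- # otherwise the built-in default applies. This lets set() be called
- # repeatedly, overriding only the values that are passed, and lets
- # defaults reference other theme variables via the "*variable_name" syntax.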
- # Body
- self.body_background_fill = body_background_fill or getattr(
- self, "body_background_fill", "*background_fill_primary"
- )
- self.body_background_fill_dark = body_background_fill_dark or getattr(
- self, "body_background_fill_dark", "*background_fill_primary"
- )
- self.body_text_color = body_text_color or getattr(
- self, "body_text_color", "*neutral_800"
- )
- self.body_text_color_dark = body_text_color_dark or getattr(
- self, "body_text_color_dark", "*neutral_100"
- )
- self.body_text_size = body_text_size or getattr(
- self, "body_text_size", "*text_md"
- )
- self.body_text_weight = body_text_weight or getattr(
- self, "body_text_weight", "400"
- )
- self.embed_radius = embed_radius or getattr(self, "embed_radius", "*radius_lg")
- # Core Colors
- self.color_accent = color_accent or getattr(
- self, "color_accent", "*primary_500"
- )
- self.color_accent_soft = color_accent_soft or getattr(
- self, "color_accent_soft", "*primary_50"
- )
- self.color_accent_soft_dark = color_accent_soft_dark or getattr(
- self, "color_accent_soft_dark", "*neutral_700"
- )
- self.background_fill_primary = background_fill_primary or getattr(
- self, "background_fill_primary", "white"
- )
- self.background_fill_primary_dark = background_fill_primary_dark or getattr(
- self, "background_fill_primary_dark", "*neutral_950"
- )
- self.background_fill_secondary = background_fill_secondary or getattr(
- self, "background_fill_secondary", "*neutral_50"
- )
- self.background_fill_secondary_dark = background_fill_secondary_dark or getattr(
- self, "background_fill_secondary_dark", "*neutral_900"
- )
- self.border_color_accent = border_color_accent or getattr(
- self, "border_color_accent", "*primary_300"
- )
- self.border_color_accent_dark = border_color_accent_dark or getattr(
- self, "border_color_accent_dark", "*neutral_600"
- )
- self.border_color_primary = border_color_primary or getattr(
- self, "border_color_primary", "*neutral_200"
- )
- self.border_color_primary_dark = border_color_primary_dark or getattr(
- self, "border_color_primary_dark", "*neutral_700"
- )
- # Text Colors
- self.link_text_color = link_text_color or getattr(
- self, "link_text_color", "*secondary_600"
- )
- self.link_text_color_active = link_text_color_active or getattr(
- self, "link_text_color_active", "*secondary_600"
- )
- self.link_text_color_active_dark = link_text_color_active_dark or getattr(
- self, "link_text_color_active_dark", "*secondary_500"
- )
- self.link_text_color_dark = link_text_color_dark or getattr(
- self, "link_text_color_dark", "*secondary_500"
- )
- self.link_text_color_hover = link_text_color_hover or getattr(
- self, "link_text_color_hover", "*secondary_700"
- )
- self.link_text_color_hover_dark = link_text_color_hover_dark or getattr(
- self, "link_text_color_hover_dark", "*secondary_400"
- )
- self.link_text_color_visited = link_text_color_visited or getattr(
- self, "link_text_color_visited", "*secondary_500"
- )
- self.link_text_color_visited_dark = link_text_color_visited_dark or getattr(
- self, "link_text_color_visited_dark", "*secondary_600"
- )
- self.body_text_color_subdued = body_text_color_subdued or getattr(
- self, "body_text_color_subdued", "*neutral_400"
- )
- self.body_text_color_subdued_dark = body_text_color_subdued_dark or getattr(
- self, "body_text_color_subdued_dark", "*neutral_400"
- )
- # Shadows
- self.shadow_drop = shadow_drop or getattr(
- self, "shadow_drop", "rgba(0,0,0,0.05) 0px 1px 2px 0px"
- )
- self.shadow_drop_lg = shadow_drop_lg or getattr(
- self,
- "shadow_drop_lg",
- "0 1px 3px 0 rgb(0 0 0 / 0.1), 0 1px 2px -1px rgb(0 0 0 / 0.1)",
- )
- self.shadow_inset = shadow_inset or getattr(
- self, "shadow_inset", "rgba(0,0,0,0.05) 0px 2px 4px 0px inset"
- )
- self.shadow_spread = shadow_spread or getattr(self, "shadow_spread", "3px")
- self.shadow_spread_dark = shadow_spread_dark or getattr(
- self, "shadow_spread_dark", "1px"
- )
- # Layout Atoms
- self.block_background_fill = block_background_fill or getattr(
- self, "block_background_fill", "*background_fill_primary"
- )
- self.block_background_fill_dark = block_background_fill_dark or getattr(
- self, "block_background_fill_dark", "*neutral_800"
- )
- self.block_border_color = block_border_color or getattr(
- self, "block_border_color", "*border_color_primary"
- )
- self.block_border_color_dark = block_border_color_dark or getattr(
- self, "block_border_color_dark", "*border_color_primary"
- )
- self.block_border_width = block_border_width or getattr(
- self, "block_border_width", "1px"
- )
- self.block_border_width_dark = block_border_width_dark or getattr(
- self, "block_border_width_dark", None
- )
- self.block_info_text_color = block_info_text_color or getattr(
- self, "block_info_text_color", "*body_text_color_subdued"
- )
- self.block_info_text_color_dark = block_info_text_color_dark or getattr(
- self, "block_info_text_color_dark", "*body_text_color_subdued"
- )
- self.block_info_text_size = block_info_text_size or getattr(
- self, "block_info_text_size", "*text_sm"
- )
- self.block_info_text_weight = block_info_text_weight or getattr(
- self, "block_info_text_weight", "400"
- )
- self.block_label_background_fill = block_label_background_fill or getattr(
- self, "block_label_background_fill", "*background_fill_primary"
- )
- self.block_label_background_fill_dark = (
- block_label_background_fill_dark
- or getattr(
- self, "block_label_background_fill_dark", "*background_fill_secondary"
- )
- )
- self.block_label_border_color = block_label_border_color or getattr(
- self, "block_label_border_color", "*border_color_primary"
- )
- self.block_label_border_color_dark = block_label_border_color_dark or getattr(
- self, "block_label_border_color_dark", "*border_color_primary"
- )
- self.block_label_border_width = block_label_border_width or getattr(
- self, "block_label_border_width", "1px"
- )
- self.block_label_border_width_dark = block_label_border_width_dark or getattr(
- self, "block_label_border_width_dark", None
- )
- self.block_label_shadow = block_label_shadow or getattr(
- self, "block_label_shadow", "*block_shadow"
- )
- self.block_label_text_color = block_label_text_color or getattr(
- self, "block_label_text_color", "*neutral_500"
- )
- self.block_label_text_color_dark = block_label_text_color_dark or getattr(
- self, "block_label_text_color_dark", "*neutral_200"
- )
- self.block_label_margin = block_label_margin or getattr(
- self, "block_label_margin", "0"
- )
- self.block_label_padding = block_label_padding or getattr(
- self, "block_label_padding", "*spacing_sm *spacing_lg"
- )
- self.block_label_radius = block_label_radius or getattr(
- self,
- "block_label_radius",
- "calc(*radius_lg - 1px) 0 calc(*radius_lg - 1px) 0",
- )
- self.block_label_right_radius = block_label_right_radius or getattr(
- self,
- "block_label_right_radius",
- "0 calc(*radius_lg - 1px) 0 calc(*radius_lg - 1px)",
- )
- self.block_label_text_size = block_label_text_size or getattr(
- self, "block_label_text_size", "*text_sm"
- )
- self.block_label_text_weight = block_label_text_weight or getattr(
- self, "block_label_text_weight", "400"
- )
- self.block_padding = block_padding or getattr(
- self, "block_padding", "*spacing_xl calc(*spacing_xl + 2px)"
- )
- self.block_radius = block_radius or getattr(self, "block_radius", "*radius_lg")
- self.block_shadow = block_shadow or getattr(self, "block_shadow", "none")
- self.block_shadow_dark = block_shadow_dark or getattr(
- self, "block_shadow_dark", None
- )
- self.block_title_background_fill = block_title_background_fill or getattr(
- self, "block_title_background_fill", "none"
- )
- self.block_title_background_fill_dark = (
- block_title_background_fill_dark
- or getattr(self, "block_title_background_fill_dark", None)
- )
- self.block_title_border_color = block_title_border_color or getattr(
- self, "block_title_border_color", "none"
- )
- self.block_title_border_color_dark = block_title_border_color_dark or getattr(
- self, "block_title_border_color_dark", None
- )
- self.block_title_border_width = block_title_border_width or getattr(
- self, "block_title_border_width", "0px"
- )
- self.block_title_border_width_dark = block_title_border_width_dark or getattr(
- self, "block_title_border_width_dark", None
- )
- self.block_title_text_color = block_title_text_color or getattr(
- self, "block_title_text_color", "*neutral_500"
- )
- self.block_title_text_color_dark = block_title_text_color_dark or getattr(
- self, "block_title_text_color_dark", "*neutral_200"
- )
- self.block_title_padding = block_title_padding or getattr(
- self, "block_title_padding", "0"
- )
- self.block_title_radius = block_title_radius or getattr(
- self, "block_title_radius", "none"
- )
- self.block_title_text_size = block_title_text_size or getattr(
- self, "block_title_text_size", "*text_md"
- )
- self.block_title_text_weight = block_title_text_weight or getattr(
- self, "block_title_text_weight", "400"
- )
- self.container_radius = container_radius or getattr(
- self, "container_radius", "*radius_lg"
- )
- self.form_gap_width = form_gap_width or getattr(self, "form_gap_width", "0px")
- self.layout_gap = layout_gap or getattr(self, "layout_gap", "*spacing_xxl")
- self.panel_background_fill = panel_background_fill or getattr(
- self, "panel_background_fill", "*background_fill_secondary"
- )
- self.panel_background_fill_dark = panel_background_fill_dark or getattr(
- self, "panel_background_fill_dark", "*background_fill_secondary"
- )
- self.panel_border_color = panel_border_color or getattr(
- self, "panel_border_color", "*border_color_primary"
- )
- self.panel_border_color_dark = panel_border_color_dark or getattr(
- self, "panel_border_color_dark", "*border_color_primary"
- )
- self.panel_border_width = panel_border_width or getattr(
- self, "panel_border_width", "0"
- )
- self.panel_border_width_dark = panel_border_width_dark or getattr(
- self, "panel_border_width_dark", None
- )
- self.section_header_text_size = section_header_text_size or getattr(
- self, "section_header_text_size", "*text_md"
- )
- self.section_header_text_weight = section_header_text_weight or getattr(
- self, "section_header_text_weight", "400"
- )
- self.border_color_accent_subdued = border_color_accent_subdued or getattr(
- self, "border_color_accent_subdued", "*border_color_accent"
- )
- self.border_color_accent_subdued_dark = (
- border_color_accent_subdued_dark
- or getattr(self, "border_color_accent_subdued_dark", "*border_color_accent")
- )
- # Component Atoms
- self.chatbot_code_background_color = chatbot_code_background_color or getattr(
- self, "chatbot_code_background_color", "*neutral_100"
- )
- self.chatbot_code_background_color_dark = (
- chatbot_code_background_color_dark
- or getattr(self, "chatbot_code_background_color_dark", "*neutral_800")
- )
- self.checkbox_background_color = checkbox_background_color or getattr(
- self, "checkbox_background_color", "*background_fill_primary"
- )
- self.checkbox_background_color_dark = checkbox_background_color_dark or getattr(
- self, "checkbox_background_color_dark", "*neutral_800"
- )
- self.checkbox_background_color_focus = (
- checkbox_background_color_focus
- or getattr(
- self, "checkbox_background_color_focus", "*checkbox_background_color"
- )
- )
- self.checkbox_background_color_focus_dark = (
- checkbox_background_color_focus_dark
- or getattr(
- self,
- "checkbox_background_color_focus_dark",
- "*checkbox_background_color",
- )
- )
- self.checkbox_background_color_hover = (
- checkbox_background_color_hover
- or getattr(
- self, "checkbox_background_color_hover", "*checkbox_background_color"
- )
- )
- self.checkbox_background_color_hover_dark = (
- checkbox_background_color_hover_dark
- or getattr(
- self,
- "checkbox_background_color_hover_dark",
- "*checkbox_background_color",
- )
- )
- self.checkbox_background_color_selected = (
- checkbox_background_color_selected
- or getattr(self, "checkbox_background_color_selected", "*secondary_600")
- )
- self.checkbox_background_color_selected_dark = (
- checkbox_background_color_selected_dark
- or getattr(
- self, "checkbox_background_color_selected_dark", "*secondary_600"
- )
- )
- self.checkbox_border_color = checkbox_border_color or getattr(
- self, "checkbox_border_color", "*neutral_300"
- )
- self.checkbox_border_color_dark = checkbox_border_color_dark or getattr(
- self, "checkbox_border_color_dark", "*neutral_700"
- )
- self.checkbox_border_color_focus = checkbox_border_color_focus or getattr(
- self, "checkbox_border_color_focus", "*secondary_500"
- )
- self.checkbox_border_color_focus_dark = (
- checkbox_border_color_focus_dark
- or getattr(self, "checkbox_border_color_focus_dark", "*secondary_500")
- )
- self.checkbox_border_color_hover = checkbox_border_color_hover or getattr(
- self, "checkbox_border_color_hover", "*neutral_300"
- )
- self.checkbox_border_color_hover_dark = (
- checkbox_border_color_hover_dark
- or getattr(self, "checkbox_border_color_hover_dark", "*neutral_600")
- )
- self.checkbox_border_color_selected = checkbox_border_color_selected or getattr(
- self, "checkbox_border_color_selected", "*secondary_600"
- )
- self.checkbox_border_color_selected_dark = (
- checkbox_border_color_selected_dark
- or getattr(self, "checkbox_border_color_selected_dark", "*secondary_600")
- )
- self.checkbox_border_radius = checkbox_border_radius or getattr(
- self, "checkbox_border_radius", "*radius_sm"
- )
- self.checkbox_border_width = checkbox_border_width or getattr(
- self, "checkbox_border_width", "*input_border_width"
- )
- self.checkbox_border_width_dark = checkbox_border_width_dark or getattr(
- self, "checkbox_border_width_dark", "*input_border_width"
- )
- self.checkbox_label_background_fill = checkbox_label_background_fill or getattr(
- self, "checkbox_label_background_fill", "*button_secondary_background_fill"
- )
- self.checkbox_label_background_fill_dark = (
- checkbox_label_background_fill_dark
- or getattr(
- self,
- "checkbox_label_background_fill_dark",
- "*button_secondary_background_fill",
- )
- )
- self.checkbox_label_background_fill_hover = (
- checkbox_label_background_fill_hover
- or getattr(
- self,
- "checkbox_label_background_fill_hover",
- "*button_secondary_background_fill_hover",
- )
- )
- self.checkbox_label_background_fill_hover_dark = (
- checkbox_label_background_fill_hover_dark
- or getattr(
- self,
- "checkbox_label_background_fill_hover_dark",
- "*button_secondary_background_fill_hover",
- )
- )
- self.checkbox_label_background_fill_selected = (
- checkbox_label_background_fill_selected
- or getattr(
- self,
- "checkbox_label_background_fill_selected",
- "*checkbox_label_background_fill",
- )
- )
- self.checkbox_label_background_fill_selected_dark = (
- checkbox_label_background_fill_selected_dark
- or getattr(
- self,
- "checkbox_label_background_fill_selected_dark",
- "*checkbox_label_background_fill",
- )
- )
- self.checkbox_label_border_color = checkbox_label_border_color or getattr(
- self, "checkbox_label_border_color", "*border_color_primary"
- )
- self.checkbox_label_border_color_dark = (
- checkbox_label_border_color_dark
- or getattr(
- self, "checkbox_label_border_color_dark", "*border_color_primary"
- )
- )
- self.checkbox_label_border_color_hover = (
- checkbox_label_border_color_hover
- or getattr(
- self,
- "checkbox_label_border_color_hover",
- "*checkbox_label_border_color",
- )
- )
- self.checkbox_label_border_color_hover_dark = (
- checkbox_label_border_color_hover_dark
- or getattr(
- self,
- "checkbox_label_border_color_hover_dark",
- "*checkbox_label_border_color",
- )
- )
- self.checkbox_label_border_width = checkbox_label_border_width or getattr(
- self, "checkbox_label_border_width", "*input_border_width"
- )
- self.checkbox_label_border_width_dark = (
- checkbox_label_border_width_dark
- or getattr(self, "checkbox_label_border_width_dark", "*input_border_width")
- )
- self.checkbox_label_gap = checkbox_label_gap or getattr(
- self, "checkbox_label_gap", "*spacing_lg"
- )
- self.checkbox_label_padding = checkbox_label_padding or getattr(
- self, "checkbox_label_padding", "*spacing_md calc(2 * *spacing_md)"
- )
- self.checkbox_label_shadow = checkbox_label_shadow or getattr(
- self, "checkbox_label_shadow", "none"
- )
- self.checkbox_label_text_size = checkbox_label_text_size or getattr(
- self, "checkbox_label_text_size", "*text_md"
- )
- self.checkbox_label_text_weight = checkbox_label_text_weight or getattr(
- self, "checkbox_label_text_weight", "400"
- )
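- # The checkmark and radio-dot visuals are inlined as SVG data URIs,
- # so no external image assets are required.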
- self.checkbox_check = checkbox_check or getattr(
- self,
- "checkbox_check",
- """url("data:image/svg+xml,%3csvg viewBox='0 0 16 16' fill='white' xmlns='http://www.w3.org/2000/svg'%3e%3cpath d='M12.207 4.793a1 1 0 010 1.414l-5 5a1 1 0 01-1.414 0l-2-2a1 1 0 011.414-1.414L6.5 9.086l4.293-4.293a1 1 0 011.414 0z'/%3e%3c/svg%3e")""",
- )
- self.radio_circle = radio_circle or getattr(
- self,
- "radio_circle",
- """url("data:image/svg+xml,%3csvg viewBox='0 0 16 16' fill='white' xmlns='http://www.w3.org/2000/svg'%3e%3ccircle cx='8' cy='8' r='3'/%3e%3c/svg%3e")""",
- )
- self.checkbox_shadow = checkbox_shadow or getattr(
- self, "checkbox_shadow", "*input_shadow"
- )
- self.checkbox_label_text_color = checkbox_label_text_color or getattr(
- self, "checkbox_label_text_color", "*body_text_color"
- )
- self.checkbox_label_text_color_dark = checkbox_label_text_color_dark or getattr(
- self, "checkbox_label_text_color_dark", "*body_text_color"
- )
- self.checkbox_label_text_color_selected = (
- checkbox_label_text_color_selected
- or getattr(
- self, "checkbox_label_text_color_selected", "*checkbox_label_text_color"
- )
- )
- self.checkbox_label_text_color_selected_dark = (
- checkbox_label_text_color_selected_dark
- or getattr(
- self,
- "checkbox_label_text_color_selected_dark",
- "*checkbox_label_text_color",
- )
- )
- self.error_background_fill = error_background_fill or getattr(
- self, "error_background_fill", colors.red.c50
- )
- self.error_background_fill_dark = error_background_fill_dark or getattr(
- self, "error_background_fill_dark", "*background_fill_primary"
- )
- self.error_border_color = error_border_color or getattr(
- self, "error_border_color", colors.red.c700
- )
- self.error_border_color_dark = error_border_color_dark or getattr(
- self, "error_border_color_dark", colors.red.c500
- )
- self.error_border_width = error_border_width or getattr(
- self, "error_border_width", "1px"
- )
- self.error_border_width_dark = error_border_width_dark or getattr(
- self, "error_border_width_dark", None
- )
- self.error_text_color = error_text_color or getattr(
- self, "error_text_color", colors.red.c700
- )
- self.error_text_color_dark = error_text_color_dark or getattr(
- self, "error_text_color_dark", colors.red.c50
- )
- self.error_icon_color = error_icon_color or getattr(
- self, "error_icon_color", colors.red.c700
- )
- self.error_icon_color_dark = error_icon_color_dark or getattr(
- self, "error_icon_color_dark", colors.red.c500
- )
- self.input_background_fill = input_background_fill or getattr(
- self, "input_background_fill", "*neutral_100"
- )
- self.input_background_fill_dark = input_background_fill_dark or getattr(
- self, "input_background_fill_dark", "*neutral_700"
- )
- self.input_background_fill_focus = input_background_fill_focus or getattr(
- self, "input_background_fill_focus", "*secondary_500"
- )
- self.input_background_fill_focus_dark = (
- input_background_fill_focus_dark
- or getattr(self, "input_background_fill_focus_dark", "*secondary_600")
- )
- self.input_background_fill_hover = input_background_fill_hover or getattr(
- self, "input_background_fill_hover", "*input_background_fill"
- )
- self.input_background_fill_hover_dark = (
- input_background_fill_hover_dark
- or getattr(
- self, "input_background_fill_hover_dark", "*input_background_fill"
- )
- )
- self.input_border_color = input_border_color or getattr(
- self, "input_border_color", "*border_color_primary"
- )
- self.input_border_color_dark = input_border_color_dark or getattr(
- self, "input_border_color_dark", "*border_color_primary"
- )
- self.input_border_color_focus = input_border_color_focus or getattr(
- self, "input_border_color_focus", "*secondary_300"
- )
- self.input_border_color_focus_dark = input_border_color_focus_dark or getattr(
- self, "input_border_color_focus_dark", "*neutral_700"
- )
- self.input_border_color_hover = input_border_color_hover or getattr(
- self, "input_border_color_hover", "*input_border_color"
- )
- self.input_border_color_hover_dark = input_border_color_hover_dark or getattr(
- self, "input_border_color_hover_dark", "*input_border_color"
- )
- self.input_border_width = input_border_width or getattr(
- self, "input_border_width", "0px"
- )
- self.input_border_width_dark = input_border_width_dark or getattr(
- self, "input_border_width_dark", None
- )
- self.input_padding = input_padding or getattr(
- self, "input_padding", "*spacing_xl"
- )
- self.input_placeholder_color = input_placeholder_color or getattr(
- self, "input_placeholder_color", "*neutral_400"
- )
- self.input_placeholder_color_dark = input_placeholder_color_dark or getattr(
- self, "input_placeholder_color_dark", "*neutral_500"
- )
- self.input_radius = input_radius or getattr(self, "input_radius", "*radius_lg")
- self.input_shadow = input_shadow or getattr(self, "input_shadow", "none")
- self.input_shadow_dark = input_shadow_dark or getattr(
- self, "input_shadow_dark", None
- )
- self.input_shadow_focus = input_shadow_focus or getattr(
- self, "input_shadow_focus", "*input_shadow"
- )
- self.input_shadow_focus_dark = input_shadow_focus_dark or getattr(
- self, "input_shadow_focus_dark", None
- )
- self.input_text_size = input_text_size or getattr(
- self, "input_text_size", "*text_md"
- )
- self.input_text_weight = input_text_weight or getattr(
- self, "input_text_weight", "400"
- )
- self.loader_color = loader_color or getattr(
- self, "loader_color", "*color_accent"
- )
- self.loader_color_dark = loader_color_dark or getattr(
- self, "loader_color_dark", None
- )
- self.prose_text_size = prose_text_size or getattr(
- self, "prose_text_size", "*text_md"
- )
- self.prose_text_weight = prose_text_weight or getattr(
- self, "prose_text_weight", "400"
- )
- self.prose_header_text_weight = prose_header_text_weight or getattr(
- self, "prose_header_text_weight", "600"
- )
- self.slider_color = slider_color or getattr(self, "slider_color", "auto")
- self.slider_color_dark = slider_color_dark or getattr(
- self, "slider_color_dark", None
- )
- self.stat_background_fill = stat_background_fill or getattr(
- self, "stat_background_fill", "*primary_300"
- )
- self.stat_background_fill_dark = stat_background_fill_dark or getattr(
- self, "stat_background_fill_dark", "*primary_500"
- )
- self.table_border_color = table_border_color or getattr(
- self, "table_border_color", "*neutral_300"
- )
- self.table_border_color_dark = table_border_color_dark or getattr(
- self, "table_border_color_dark", "*neutral_700"
- )
- self.table_even_background_fill = table_even_background_fill or getattr(
- self, "table_even_background_fill", "white"
- )
- self.table_even_background_fill_dark = (
- table_even_background_fill_dark
- or getattr(self, "table_even_background_fill_dark", "*neutral_950")
- )
- self.table_odd_background_fill = table_odd_background_fill or getattr(
- self, "table_odd_background_fill", "*neutral_50"
- )
- self.table_odd_background_fill_dark = table_odd_background_fill_dark or getattr(
- self, "table_odd_background_fill_dark", "*neutral_900"
- )
- self.table_radius = table_radius or getattr(self, "table_radius", "*radius_lg")
- self.table_row_focus = table_row_focus or getattr(
- self, "table_row_focus", "*color_accent_soft"
- )
- self.table_row_focus_dark = table_row_focus_dark or getattr(
- self, "table_row_focus_dark", "*color_accent_soft"
- )
- # Buttons
- self.button_border_width = button_border_width or getattr(
- self, "button_border_width", "*input_border_width"
- )
- self.button_border_width_dark = button_border_width_dark or getattr(
- self, "button_border_width_dark", "*input_border_width"
- )
- self.button_cancel_background_fill = button_cancel_background_fill or getattr(
- self, "button_cancel_background_fill", "*button_secondary_background_fill"
- )
- self.button_cancel_background_fill_dark = (
- button_cancel_background_fill_dark
- or getattr(
- self,
- "button_cancel_background_fill_dark",
- "*button_secondary_background_fill",
- )
- )
- self.button_cancel_background_fill_hover = (
- button_cancel_background_fill_hover
- or getattr(
- self,
- "button_cancel_background_fill_hover",
- "*button_cancel_background_fill",
- )
- )
- self.button_cancel_background_fill_hover_dark = (
- button_cancel_background_fill_hover_dark
- or getattr(
- self,
- "button_cancel_background_fill_hover_dark",
- "*button_cancel_background_fill",
- )
- )
- self.button_cancel_border_color = button_cancel_border_color or getattr(
- self, "button_cancel_border_color", "*button_secondary_border_color"
- )
- self.button_cancel_border_color_dark = (
- button_cancel_border_color_dark
- or getattr(
- self,
- "button_cancel_border_color_dark",
- "*button_secondary_border_color",
- )
- )
- self.button_cancel_border_color_hover = (
- button_cancel_border_color_hover
- or getattr(
- self,
- "button_cancel_border_color_hover",
- "*button_cancel_border_color",
- )
- )
- self.button_cancel_border_color_hover_dark = (
- button_cancel_border_color_hover_dark
- or getattr(
- self,
- "button_cancel_border_color_hover_dark",
- "*button_cancel_border_color",
- )
- )
- self.button_cancel_text_color = button_cancel_text_color or getattr(
- self, "button_cancel_text_color", "*button_secondary_text_color"
- )
- self.button_cancel_text_color_dark = button_cancel_text_color_dark or getattr(
- self, "button_cancel_text_color_dark", "*button_secondary_text_color"
- )
- self.button_cancel_text_color_hover = button_cancel_text_color_hover or getattr(
- self, "button_cancel_text_color_hover", "*button_cancel_text_color"
- )
- self.button_cancel_text_color_hover_dark = (
- button_cancel_text_color_hover_dark
- or getattr(
- self, "button_cancel_text_color_hover_dark", "*button_cancel_text_color"
- )
- )
- self.button_large_padding = button_large_padding or getattr(
- self, "button_large_padding", "*spacing_lg calc(2 * *spacing_lg)"
- )
- self.button_large_radius = button_large_radius or getattr(
- self, "button_large_radius", "*radius_lg"
- )
- self.button_large_text_size = button_large_text_size or getattr(
- self, "button_large_text_size", "*text_lg"
- )
- self.button_large_text_weight = button_large_text_weight or getattr(
- self, "button_large_text_weight", "600"
- )
- self.button_primary_background_fill = button_primary_background_fill or getattr(
- self, "button_primary_background_fill", "*primary_200"
- )
- self.button_primary_background_fill_dark = (
- button_primary_background_fill_dark
- or getattr(self, "button_primary_background_fill_dark", "*primary_700")
- )
- self.button_primary_background_fill_hover = (
- button_primary_background_fill_hover
- or getattr(
- self,
- "button_primary_background_fill_hover",
- "*button_primary_background_fill",
- )
- )
- self.button_primary_background_fill_hover_dark = (
- button_primary_background_fill_hover_dark
- or getattr(
- self,
- "button_primary_background_fill_hover_dark",
- "*button_primary_background_fill",
- )
- )
- self.button_primary_border_color = button_primary_border_color or getattr(
- self, "button_primary_border_color", "*primary_200"
- )
- self.button_primary_border_color_dark = (
- button_primary_border_color_dark
- or getattr(self, "button_primary_border_color_dark", "*primary_600")
- )
- self.button_primary_border_color_hover = (
- button_primary_border_color_hover
- or getattr(
- self,
- "button_primary_border_color_hover",
- "*button_primary_border_color",
- )
- )
- self.button_primary_border_color_hover_dark = (
- button_primary_border_color_hover_dark
- or getattr(
- self,
- "button_primary_border_color_hover_dark",
- "*button_primary_border_color",
- )
- )
- self.button_primary_text_color = button_primary_text_color or getattr(
- self, "button_primary_text_color", "*primary_600"
- )
- self.button_primary_text_color_dark = button_primary_text_color_dark or getattr(
- self, "button_primary_text_color_dark", "white"
- )
- self.button_primary_text_color_hover = (
- button_primary_text_color_hover
- or getattr(
- self, "button_primary_text_color_hover", "*button_primary_text_color"
- )
- )
- self.button_primary_text_color_hover_dark = (
- button_primary_text_color_hover_dark
- or getattr(
- self,
- "button_primary_text_color_hover_dark",
- "*button_primary_text_color",
- )
- )
- self.button_secondary_background_fill = (
- button_secondary_background_fill
- or getattr(self, "button_secondary_background_fill", "*neutral_200")
- )
- self.button_secondary_background_fill_dark = (
- button_secondary_background_fill_dark
- or getattr(self, "button_secondary_background_fill_dark", "*neutral_600")
- )
- self.button_secondary_background_fill_hover = (
- button_secondary_background_fill_hover
- or getattr(
- self,
- "button_secondary_background_fill_hover",
- "*button_secondary_background_fill",
- )
- )
- self.button_secondary_background_fill_hover_dark = (
- button_secondary_background_fill_hover_dark
- or getattr(
- self,
- "button_secondary_background_fill_hover_dark",
- "*button_secondary_background_fill",
- )
- )
- self.button_secondary_border_color = button_secondary_border_color or getattr(
- self, "button_secondary_border_color", "*neutral_200"
- )
- self.button_secondary_border_color_dark = (
- button_secondary_border_color_dark
- or getattr(self, "button_secondary_border_color_dark", "*neutral_600")
- )
- self.button_secondary_border_color_hover = (
- button_secondary_border_color_hover
- or getattr(
- self,
- "button_secondary_border_color_hover",
- "*button_secondary_border_color",
- )
- )
- self.button_secondary_border_color_hover_dark = (
- button_secondary_border_color_hover_dark
- or getattr(
- self,
- "button_secondary_border_color_hover_dark",
- "*button_secondary_border_color",
- )
- )
- self.button_secondary_text_color = button_secondary_text_color or getattr(
- self, "button_secondary_text_color", "*neutral_700"
- )
- self.button_secondary_text_color_dark = (
- button_secondary_text_color_dark
- or getattr(self, "button_secondary_text_color_dark", "white")
- )
- self.button_secondary_text_color_hover = (
- button_secondary_text_color_hover
- or getattr(
- self,
- "button_secondary_text_color_hover",
- "*button_secondary_text_color",
- )
- )
- self.button_secondary_text_color_hover_dark = (
- button_secondary_text_color_hover_dark
- or getattr(
- self,
- "button_secondary_text_color_hover_dark",
- "*button_secondary_text_color",
- )
- )
- self.button_shadow = button_shadow or getattr(self, "button_shadow", "none")
- self.button_shadow_active = button_shadow_active or getattr(
- self, "button_shadow_active", "none"
- )
- self.button_shadow_hover = button_shadow_hover or getattr(
- self, "button_shadow_hover", "none"
- )
- self.button_small_padding = button_small_padding or getattr(
- self, "button_small_padding", "*spacing_sm calc(2 * *spacing_sm)"
- )
- self.button_small_radius = button_small_radius or getattr(
- self, "button_small_radius", "*radius_lg"
- )
- self.button_small_text_size = button_small_text_size or getattr(
- self, "button_small_text_size", "*text_md"
- )
- self.button_small_text_weight = button_small_text_weight or getattr(
- self, "button_small_text_weight", "400"
- )
- self.button_transition = button_transition or getattr(
- self, "button_transition", "background-color 0.2s ease"
- )
- return self
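The block above is the tail of a long setter in which every theme attribute follows the same pattern: `value = explicit_argument or getattr(self, name, default)`. An explicit argument wins, a previously set attribute is kept otherwise, and the hard-coded default is the last resort, with `return self` allowing chained calls. The sketch below illustrates only that fallback chain; `MiniTheme` and its two attributes are invented for the example and are not part of the Gradio API.

```python
class MiniTheme:
    """Minimal illustration of the `argument or getattr(self, name, default)` fallback."""

    def set(self, button_primary_text_color=None, button_shadow=None):
        # Explicit argument > previously set attribute > built-in default.
        self.button_primary_text_color = button_primary_text_color or getattr(
            self, "button_primary_text_color", "*primary_600"
        )
        self.button_shadow = button_shadow or getattr(self, "button_shadow", "none")
        return self  # returning self lets calls be chained, as in the diff above


theme = MiniTheme().set(button_shadow="0 1px 2px rgba(0, 0, 0, 0.2)")
print(theme.button_primary_text_color)  # falls back to "*primary_600"
print(theme.button_shadow)              # explicit value wins
theme.set()                             # a later call without arguments keeps both values
print(theme.button_shadow)              # still the explicit value
```

Because the pattern relies on `or`, falsy values such as an empty string are treated as "not provided" and fall through to the previous value or the default.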
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/httpcore/_sync/interfaces.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/httpcore/_sync/interfaces.py
deleted file mode 100644
index 5e95be1ec72425178245c32c33874303e0906405..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/httpcore/_sync/interfaces.py
+++ /dev/null
@@ -1,135 +0,0 @@
-from contextlib import contextmanager
-from typing import Iterator, Optional, Union
-
-from .._models import (
- URL,
- Extensions,
- HeaderTypes,
- Origin,
- Request,
- Response,
- enforce_bytes,
- enforce_headers,
- enforce_url,
- include_request_headers,
-)
-
-
-class RequestInterface:
- def request(
- self,
- method: Union[bytes, str],
- url: Union[URL, bytes, str],
- *,
- headers: HeaderTypes = None,
- content: Union[bytes, Iterator[bytes], None] = None,
- extensions: Optional[Extensions] = None,
- ) -> Response:
- # Strict type checking on our parameters.
- method = enforce_bytes(method, name="method")
- url = enforce_url(url, name="url")
- headers = enforce_headers(headers, name="headers")
-
- # Include Host header, and optionally Content-Length or Transfer-Encoding.
- headers = include_request_headers(headers, url=url, content=content)
-
- request = Request(
- method=method,
- url=url,
- headers=headers,
- content=content,
- extensions=extensions,
- )
- response = self.handle_request(request)
- try:
- response.read()
- finally:
- response.close()
- return response
-
- @contextmanager
- def stream(
- self,
- method: Union[bytes, str],
- url: Union[URL, bytes, str],
- *,
- headers: HeaderTypes = None,
- content: Union[bytes, Iterator[bytes], None] = None,
- extensions: Optional[Extensions] = None,
- ) -> Iterator[Response]:
- # Strict type checking on our parameters.
- method = enforce_bytes(method, name="method")
- url = enforce_url(url, name="url")
- headers = enforce_headers(headers, name="headers")
-
- # Include Host header, and optionally Content-Length or Transfer-Encoding.
- headers = include_request_headers(headers, url=url, content=content)
-
- request = Request(
- method=method,
- url=url,
- headers=headers,
- content=content,
- extensions=extensions,
- )
- response = self.handle_request(request)
- try:
- yield response
- finally:
- response.close()
-
- def handle_request(self, request: Request) -> Response:
- raise NotImplementedError() # pragma: nocover
-
-
-class ConnectionInterface(RequestInterface):
- def close(self) -> None:
- raise NotImplementedError() # pragma: nocover
-
- def info(self) -> str:
- raise NotImplementedError() # pragma: nocover
-
- def can_handle_request(self, origin: Origin) -> bool:
- raise NotImplementedError() # pragma: nocover
-
- def is_available(self) -> bool:
- """
- Return `True` if the connection is currently able to accept an
- outgoing request.
-
- An HTTP/1.1 connection will only be available if it is currently idle.
-
- An HTTP/2 connection will be available so long as the stream ID space is
- not yet exhausted, and the connection is not in an error state.
-
- While the connection is being established we may not yet know if it is going
- to result in an HTTP/1.1 or HTTP/2 connection. The connection should be
- treated as being available, but might ultimately raise `NewConnectionRequired`
- exceptions if multiple requests are attempted over a connection that ends
- up being established as HTTP/1.1.
- """
- raise NotImplementedError() # pragma: nocover
-
- def has_expired(self) -> bool:
- """
- Return `True` if the connection is in a state where it should be closed.
-
- This either means that the connection is idle and it has passed the
- expiry time on its keep-alive, or that the server has sent an EOF.
- """
- raise NotImplementedError() # pragma: nocover
-
- def is_idle(self) -> bool:
- """
- Return `True` if the connection is currently idle.
- """
- raise NotImplementedError() # pragma: nocover
-
- def is_closed(self) -> bool:
- """
- Return `True` if the connection has been closed.
-
- Used when a response is closed to determine if the connection may be
- returned to the connection pool or not.
- """
- raise NotImplementedError() # pragma: nocover
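As a usage sketch for the interface above: `request()` builds the `Request`, reads the body eagerly, and closes the response, while `stream()` yields the response still open and closes it when the `with` block exits. The example below assumes `httpcore`'s synchronous `ConnectionPool`, which implements this interface, and uses a placeholder URL; method names follow recent `httpcore` releases.

```python
import httpcore

# ConnectionPool implements the RequestInterface defined above.
with httpcore.ConnectionPool() as http:
    # request(): the body is fully read, then the response is closed.
    response = http.request("GET", "https://www.example.com/")
    print(response.status, len(response.content))

    # stream(): the response is yielded open so the body can be consumed lazily;
    # the context manager guarantees close() in the finally block, as above.
    with http.stream("GET", "https://www.example.com/") as response:
        for chunk in response.iter_stream():
            print(len(chunk))
```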
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/httpcore/_sync/socks_proxy.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/httpcore/_sync/socks_proxy.py
deleted file mode 100644
index 407351d06b21954cad45dca7d2065bf1d24d88fd..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/httpcore/_sync/socks_proxy.py
+++ /dev/null
@@ -1,340 +0,0 @@
-import logging
-import ssl
-import typing
-
-from socksio import socks5
-
-from .._backends.sync import SyncBackend
-from .._backends.base import NetworkBackend, NetworkStream
-from .._exceptions import ConnectionNotAvailable, ProxyError
-from .._models import URL, Origin, Request, Response, enforce_bytes, enforce_url
-from .._ssl import default_ssl_context
-from .._synchronization import Lock
-from .._trace import Trace
-from .connection_pool import ConnectionPool
-from .http11 import HTTP11Connection
-from .interfaces import ConnectionInterface
-
-logger = logging.getLogger("httpcore.socks")
-
-
-AUTH_METHODS = {
- b"\x00": "NO AUTHENTICATION REQUIRED",
- b"\x01": "GSSAPI",
- b"\x02": "USERNAME/PASSWORD",
- b"\xff": "NO ACCEPTABLE METHODS",
-}
-
-REPLY_CODES = {
- b"\x00": "Succeeded",
- b"\x01": "General SOCKS server failure",
- b"\x02": "Connection not allowed by ruleset",
- b"\x03": "Network unreachable",
- b"\x04": "Host unreachable",
- b"\x05": "Connection refused",
- b"\x06": "TTL expired",
- b"\x07": "Command not supported",
- b"\x08": "Address type not supported",
-}
-
-
-def _init_socks5_connection(
- stream: NetworkStream,
- *,
- host: bytes,
- port: int,
- auth: typing.Optional[typing.Tuple[bytes, bytes]] = None,
-) -> None:
- conn = socks5.SOCKS5Connection()
-
- # Auth method request
- auth_method = (
- socks5.SOCKS5AuthMethod.NO_AUTH_REQUIRED
- if auth is None
- else socks5.SOCKS5AuthMethod.USERNAME_PASSWORD
- )
- conn.send(socks5.SOCKS5AuthMethodsRequest([auth_method]))
- outgoing_bytes = conn.data_to_send()
- stream.write(outgoing_bytes)
-
- # Auth method response
- incoming_bytes = stream.read(max_bytes=4096)
- response = conn.receive_data(incoming_bytes)
- assert isinstance(response, socks5.SOCKS5AuthReply)
- if response.method != auth_method:
- requested = AUTH_METHODS.get(auth_method, "UNKNOWN")
- responded = AUTH_METHODS.get(response.method, "UNKNOWN")
- raise ProxyError(
- f"Requested {requested} from proxy server, but got {responded}."
- )
-
- if response.method == socks5.SOCKS5AuthMethod.USERNAME_PASSWORD:
- # Username/password request
- assert auth is not None
- username, password = auth
- conn.send(socks5.SOCKS5UsernamePasswordRequest(username, password))
- outgoing_bytes = conn.data_to_send()
- stream.write(outgoing_bytes)
-
- # Username/password response
- incoming_bytes = stream.read(max_bytes=4096)
- response = conn.receive_data(incoming_bytes)
- assert isinstance(response, socks5.SOCKS5UsernamePasswordReply)
- if not response.success:
- raise ProxyError("Invalid username/password")
-
- # Connect request
- conn.send(
- socks5.SOCKS5CommandRequest.from_address(
- socks5.SOCKS5Command.CONNECT, (host, port)
- )
- )
- outgoing_bytes = conn.data_to_send()
- stream.write(outgoing_bytes)
-
- # Connect response
- incoming_bytes = stream.read(max_bytes=4096)
- response = conn.receive_data(incoming_bytes)
- assert isinstance(response, socks5.SOCKS5Reply)
- if response.reply_code != socks5.SOCKS5ReplyCode.SUCCEEDED:
- reply_code = REPLY_CODES.get(response.reply_code, "UNKNOWN")
- raise ProxyError(f"Proxy Server could not connect: {reply_code}.")
-
-
-class SOCKSProxy(ConnectionPool):
- """
- A connection pool that sends requests via a SOCKS5 proxy.
- """
-
- def __init__(
- self,
- proxy_url: typing.Union[URL, bytes, str],
- proxy_auth: typing.Optional[
- typing.Tuple[typing.Union[bytes, str], typing.Union[bytes, str]]
- ] = None,
- ssl_context: typing.Optional[ssl.SSLContext] = None,
- max_connections: typing.Optional[int] = 10,
- max_keepalive_connections: typing.Optional[int] = None,
- keepalive_expiry: typing.Optional[float] = None,
- http1: bool = True,
- http2: bool = False,
- retries: int = 0,
- network_backend: typing.Optional[NetworkBackend] = None,
- ) -> None:
- """
- A connection pool for making HTTP requests via a SOCKS5 proxy.
-
- Parameters:
- proxy_url: The URL to use when connecting to the proxy server.
- For example `"socks5://127.0.0.1:1080/"`.
- proxy_auth: Any proxy authentication as a two-tuple of
- (username, password). May be either bytes or ascii strings.
- ssl_context: An SSL context to use for verifying connections.
- If not specified, the default `httpcore.default_ssl_context()`
- will be used.
- max_connections: The maximum number of concurrent HTTP connections that
- the pool should allow. Any attempt to send a request on a pool that
- would exceed this amount will block until a connection is available.
- max_keepalive_connections: The maximum number of idle HTTP connections
- that will be maintained in the pool.
- keepalive_expiry: The duration in seconds that an idle HTTP connection
- may be maintained for before being expired from the pool.
- http1: A boolean indicating if HTTP/1.1 requests should be supported
- by the connection pool. Defaults to True.
- http2: A boolean indicating if HTTP/2 requests should be supported by
- the connection pool. Defaults to False.
- retries: The maximum number of retries when trying to establish
- a connection.
- network_backend: A backend instance to use for handling network I/O.
- """
- super().__init__(
- ssl_context=ssl_context,
- max_connections=max_connections,
- max_keepalive_connections=max_keepalive_connections,
- keepalive_expiry=keepalive_expiry,
- http1=http1,
- http2=http2,
- network_backend=network_backend,
- retries=retries,
- )
- self._ssl_context = ssl_context
- self._proxy_url = enforce_url(proxy_url, name="proxy_url")
- if proxy_auth is not None:
- username, password = proxy_auth
- username_bytes = enforce_bytes(username, name="proxy_auth")
- password_bytes = enforce_bytes(password, name="proxy_auth")
- self._proxy_auth: typing.Optional[typing.Tuple[bytes, bytes]] = (
- username_bytes,
- password_bytes,
- )
- else:
- self._proxy_auth = None
-
- def create_connection(self, origin: Origin) -> ConnectionInterface:
- return Socks5Connection(
- proxy_origin=self._proxy_url.origin,
- remote_origin=origin,
- proxy_auth=self._proxy_auth,
- ssl_context=self._ssl_context,
- keepalive_expiry=self._keepalive_expiry,
- http1=self._http1,
- http2=self._http2,
- network_backend=self._network_backend,
- )
-
-
-class Socks5Connection(ConnectionInterface):
- def __init__(
- self,
- proxy_origin: Origin,
- remote_origin: Origin,
- proxy_auth: typing.Optional[typing.Tuple[bytes, bytes]] = None,
- ssl_context: typing.Optional[ssl.SSLContext] = None,
- keepalive_expiry: typing.Optional[float] = None,
- http1: bool = True,
- http2: bool = False,
- network_backend: typing.Optional[NetworkBackend] = None,
- ) -> None:
- self._proxy_origin = proxy_origin
- self._remote_origin = remote_origin
- self._proxy_auth = proxy_auth
- self._ssl_context = ssl_context
- self._keepalive_expiry = keepalive_expiry
- self._http1 = http1
- self._http2 = http2
-
- self._network_backend: NetworkBackend = (
- SyncBackend() if network_backend is None else network_backend
- )
- self._connect_lock = Lock()
- self._connection: typing.Optional[ConnectionInterface] = None
- self._connect_failed = False
-
- def handle_request(self, request: Request) -> Response:
- timeouts = request.extensions.get("timeout", {})
- timeout = timeouts.get("connect", None)
-
- with self._connect_lock:
- if self._connection is None:
- try:
- # Connect to the proxy
- kwargs = {
- "host": self._proxy_origin.host.decode("ascii"),
- "port": self._proxy_origin.port,
- "timeout": timeout,
- }
- with Trace("connect_tcp", logger, request, kwargs) as trace:
- stream = self._network_backend.connect_tcp(**kwargs)
- trace.return_value = stream
-
- # Connect to the remote host using socks5
- kwargs = {
- "stream": stream,
- "host": self._remote_origin.host.decode("ascii"),
- "port": self._remote_origin.port,
- "auth": self._proxy_auth,
- }
- with Trace(
- "setup_socks5_connection", logger, request, kwargs
- ) as trace:
- _init_socks5_connection(**kwargs)
- trace.return_value = stream
-
- # Upgrade the stream to SSL
- if self._remote_origin.scheme == b"https":
- ssl_context = (
- default_ssl_context()
- if self._ssl_context is None
- else self._ssl_context
- )
- alpn_protocols = (
- ["http/1.1", "h2"] if self._http2 else ["http/1.1"]
- )
- ssl_context.set_alpn_protocols(alpn_protocols)
-
- kwargs = {
- "ssl_context": ssl_context,
- "server_hostname": self._remote_origin.host.decode("ascii"),
- "timeout": timeout,
- }
- with Trace("start_tls", logger, request, kwargs) as trace:
- stream = stream.start_tls(**kwargs)
- trace.return_value = stream
-
- # Determine if we should be using HTTP/1.1 or HTTP/2
- ssl_object = stream.get_extra_info("ssl_object")
- http2_negotiated = (
- ssl_object is not None
- and ssl_object.selected_alpn_protocol() == "h2"
- )
-
- # Create the HTTP/1.1 or HTTP/2 connection
- if http2_negotiated or (
- self._http2 and not self._http1
- ): # pragma: nocover
- from .http2 import HTTP2Connection
-
- self._connection = HTTP2Connection(
- origin=self._remote_origin,
- stream=stream,
- keepalive_expiry=self._keepalive_expiry,
- )
- else:
- self._connection = HTTP11Connection(
- origin=self._remote_origin,
- stream=stream,
- keepalive_expiry=self._keepalive_expiry,
- )
- except Exception as exc:
- self._connect_failed = True
- raise exc
- elif not self._connection.is_available(): # pragma: nocover
- raise ConnectionNotAvailable()
-
- return self._connection.handle_request(request)
-
- def can_handle_request(self, origin: Origin) -> bool:
- return origin == self._remote_origin
-
- def close(self) -> None:
- if self._connection is not None:
- self._connection.close()
-
- def is_available(self) -> bool:
- if self._connection is None: # pragma: nocover
- # If HTTP/2 support is enabled, and the resulting connection could
- # end up as HTTP/2 then we should indicate the connection as being
- # available to service multiple requests.
- return (
- self._http2
- and (self._remote_origin.scheme == b"https" or not self._http1)
- and not self._connect_failed
- )
- return self._connection.is_available()
-
- def has_expired(self) -> bool:
- if self._connection is None: # pragma: nocover
- return self._connect_failed
- return self._connection.has_expired()
-
- def is_idle(self) -> bool:
- if self._connection is None: # pragma: nocover
- return self._connect_failed
- return self._connection.is_idle()
-
- def is_closed(self) -> bool:
- if self._connection is None: # pragma: nocover
- return self._connect_failed
- return self._connection.is_closed()
-
- def info(self) -> str:
- if self._connection is None: # pragma: nocover
- return "CONNECTION FAILED" if self._connect_failed else "CONNECTING"
- return self._connection.info()
-
- def __repr__(self) -> str:
- return f"<{self.__class__.__name__} [{self.info()}]>"
diff --git a/spaces/deaaassws/QQsign1/devices/device_8950.js b/spaces/deaaassws/QQsign1/devices/device_8950.js
deleted file mode 100644
index fe1caad4a8c5eb07633510e1d8a890197056a211..0000000000000000000000000000000000000000
--- a/spaces/deaaassws/QQsign1/devices/device_8950.js
+++ /dev/null
@@ -1,344 +0,0 @@
-"use strict";
-var __importDefault = (this && this.__importDefault) || function (mod) {
- return (mod && mod.__esModule) ? mod : { "default": mod };
-};
-Object.defineProperty(exports, "__esModule", { value: true });
-exports.getApkInfo = exports.Platform = exports.Device = exports.generateFullDevice = exports.generateShortDevice = void 0;
-const crypto_1 = require("crypto");
-const constants_1 = require("./constants");
-const axios_1 = __importDefault(require("axios"));
-const algo_1 = require("./algo");
-function generateImei() {
- let imei = `86${(0, constants_1.randomString)(12, '0123456789')}`;
- function calcSP(imei) {
- let sum = 0;
- for (let i = 0; i < imei.length; ++i) {
- if (i % 2) {
- let j = parseInt(imei[i]) * 2;
- sum += j % 10 + Math.floor(j / 10);
- }
- else {
- sum += parseInt(imei[i]);
- }
- }
- return (100 - sum) % 10;
- }
- return imei + calcSP(imei);
-}
-/** Generate short (seed) device info */
-function generateShortDevice() {
- const randstr = (length, num = false) => {
- const map = num ? '0123456789' : '0123456789abcdef';
- return (0, constants_1.randomString)(length, map);
- };
- return {
- "--begin--": "该设备为随机生成,丢失后不能得到原先配置",
- product: `ILPP-${randstr(5).toUpperCase()}`,
- device: `${randstr(5).toUpperCase()}`,
- board: `${randstr(5).toUpperCase()}`,
- brand: `${randstr(4).toUpperCase()}`,
- model: `ICQQ ${randstr(4).toUpperCase()}`,
- wifi_ssid: `HUAWEI-${randstr(7)}`,
- bootloader: `U-boot`,
- android_id: `IL.${randstr(7, true)}.${randstr(4, true)}`,
- boot_id: `${randstr(8)}-${randstr(4)}-${randstr(4)}-${randstr(4)}-${randstr(12)}`,
- proc_version: `Linux version 5.10.101-android12-${randstr(8)}`,
- mac_address: `2D:${randstr(2).toUpperCase()}:${randstr(2).toUpperCase()}:${randstr(2).toUpperCase()}:${randstr(2).toUpperCase()}:${randstr(2).toUpperCase()}`,
- ip_address: `192.168.${randstr(2, true)}.${randstr(2, true)}`,
- imei: `${generateImei()}`,
- incremental: `${randstr(10, true).toUpperCase()}`,
- "--end--": "修改后可能需要重新验证设备。"
- };
-}
-exports.generateShortDevice = generateShortDevice;
-/** Generate full device info */
-function generateFullDevice(apk, d) {
- if (!d)
- d = generateShortDevice();
- return {
- display: d.android_id,
- product: d.product,
- device: d.device,
- board: d.board,
- brand: d.brand,
- model: d.model,
- bootloader: d.bootloader,
- fingerprint: `${d.brand}/${d.product}/${d.device}:10/${d.android_id}/${d.incremental}:user/release-keys`,
- boot_id: d.boot_id,
- proc_version: d.proc_version,
- baseband: "",
- sim: "T-Mobile",
- os_type: "android",
- mac_address: d.mac_address,
- ip_address: d.ip_address,
- wifi_bssid: d.mac_address,
- wifi_ssid: d.wifi_ssid,
- imei: d.imei,
- android_id: (0, constants_1.md5)(d.android_id).toString("hex"),
- apn: "wifi",
- version: {
- incremental: d.incremental,
- release: "10",
- codename: "REL",
- sdk: 29,
- },
- imsi: (0, crypto_1.randomBytes)(16),
- guid: (0, constants_1.md5)(Buffer.concat([Buffer.from(d.imei), Buffer.from(d.mac_address)])),
- };
-}
-exports.generateFullDevice = generateFullDevice;
-class Device {
- constructor(apk, d) {
- this.apk = apk;
- this.secret = 'ZdJqM15EeO2zWc08';
- this.publicKey = `-----BEGIN PUBLIC KEY-----
-MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDEIxgwoutfwoJxcGQeedgP7FG9
-qaIuS0qzfR8gWkrkTZKM2iWHn2ajQpBRZjMSoSf6+KJGvar2ORhBfpDXyVtZCKpq
-LQ+FLkpncClKVIrBwv6PHyUvuCb0rIarmgDnzkfQAqVufEtR64iazGDKatvJ9y6B
-9NMbHddGSAUmRTCrHQIDAQAB
------END PUBLIC KEY-----`;
- if (!d)
- d = generateShortDevice();
- Object.assign(this, generateFullDevice(apk, d));
- }
- async getQIMEI() {
- if (this.apk.app_key === "") {
- return;
- }
- const k = (0, constants_1.randomString)(16);
- const key = (0, algo_1.encryptPKCS1)(this.publicKey, k);
- const time = Date.now();
- const nonce = (0, constants_1.randomString)(16);
- const payload = this.genRandomPayloadByDevice();
- const params = (0, algo_1.aesEncrypt)(JSON.stringify(payload), k).toString('base64');
- try {
- const { data } = await axios_1.default.post("https://snowflake.qq.com/ola/android", {
- key,
- params,
- time, nonce,
- sign: (0, constants_1.md5)(key + params + time + nonce + this.secret).toString("hex"),
- extra: ''
- }, {
- headers: {
- 'User-Agent': `Dalvik/2.1.0 (Linux; U; Android ${this.version.release}; PCRT00 Build/N2G48H)`,
- 'Content-Type': "application/json"
- }
- });
- if (data?.code !== 0) {
- return;
- }
- const { q16, q36 } = JSON.parse((0, algo_1.aesDecrypt)(data.data, k));
- this.qImei16 = q16;
- this.qImei36 = q36;
- }
- catch {
- }
- }
- genRandomPayloadByDevice() {
- const fixedRand = (max = 1, min = 0) => {
- if (max < min)
- [max, min] = [min, max];
- const diff = max - min;
- return Math.floor(Math.random() * diff) + min;
- };
- const reserved = {
- "harmony": "0",
- "clone": Math.random() > 0.5 ? "1" : "0",
- "containe": "",
- "oz": "",
- "oo": "",
- "kelong": Math.random() > 0.5 ? "1" : "0",
- "uptimes": (0, constants_1.formatTime)(new Date()),
- "multiUser": Math.random() > 0.5 ? "1" : "0",
- "bod": this.board,
- "brd": this.brand,
- "dv": this.device,
- "firstLevel": "",
- "manufact": this.brand,
- "name": this.model,
- "host": "se.infra",
- "kernel": this.fingerprint
- };
- const timestamp = Date.now();
- this.mtime = this.mtime || Date.now();
- const mtime1 = new Date(this.mtime || Date.now());
- const dateFormat = (fmt, time = Date.now()) => (0, constants_1.formatTime)(time, fmt);
- const mtimeStr1 = dateFormat("YYYY-mm-ddHHMMSS", mtime1) + "." + this.imei.slice(2, 11);
- const mtime2 = new Date(this.mtime - parseInt(this.imei.slice(2, 4)));
- const mtimeStr2 = dateFormat("YYYY-mm-ddHHMMSS", mtime2) + "." + this.imei.slice(5, 14);
- let beaconIdArr = [
- `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`,
- mtimeStr1,
- '0000000000000000',
- (0, constants_1.md5)(this.android_id + this.imei).toString("hex").slice(0, 16),
- ...new Array(4).fill(false).map((_) => fixedRand(10000000, 1000000)),
- this.boot_id,
- '1',
- fixedRand(5, 0),
- fixedRand(5, 0),
- `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`,
- `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`,
- fixedRand(5, 0),
- fixedRand(100, 10),
- `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`,
- `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`,
- fixedRand(50000, 10000),
- fixedRand(100, 10),
- `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`,
- mtimeStr2,
- fixedRand(10000, 1000),
- fixedRand(5, 0),
- `${dateFormat("YYYY-mm-ddHHMMSS")}.${String(((10 + parseInt(this.imei.slice(5, 7))) % 100)).padStart(2, "0")}0000000`,
- `${dateFormat("YYYY-mm-ddHHMMSS")}.${String(((11 + parseInt(this.imei.slice(5, 7))) % 100)).padStart(2, "0")}0000000`,
- fixedRand(10000, 1000),
- fixedRand(100, 10),
- `${dateFormat("YYYY-mm-ddHHMMSS")}.${String(((11 + parseInt(this.imei.slice(5, 7))) % 100)).padStart(2, "0")}0000000`,
- `${dateFormat("YYYY-mm-ddHHMMSS")}.${String(((11 + parseInt(this.imei.slice(5, 7))) % 100)).padStart(2, "0")}0000000`,
- fixedRand(10000, 1000),
- fixedRand(5, 0),
- `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`,
- `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`,
- fixedRand(5, 0),
- fixedRand(100, 10),
- `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`,
- `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`,
- fixedRand(5, 0),
- fixedRand(5, 0),
- ].map((str, idx) => `k${idx + 1}:${str}`);
- return {
- "androidId": this.android_id,
- "platformId": 1,
- "appKey": this.apk.app_key,
- "appVersion": this.apk.version,
- "beaconIdSrc": beaconIdArr.join(';'),
- "brand": this.brand,
- "channelId": "2017",
- "cid": "",
- "imei": this.imei,
- "imsi": this.imsi.toString("hex"),
- "mac": this.mac_address,
- "model": this.model,
- "networkType": "unknown",
- "oaid": "",
- "osVersion": `Android ${this.version.release},level ${this.version.sdk}`,
- "qimei": "",
- "qimei36": "",
- "sdkVersion": "1.2.13.6",
- "targetSdkVersion": "26",
- "audit": "",
- "userId": "{}",
- "packageId": this.apk.id,
- "deviceType": this.display,
- "sdkName": "",
- "reserved": JSON.stringify(reserved),
- };
- }
-}
-exports.Device = Device;
-/** Supported login device platforms */
-var Platform;
-(function (Platform) {
- Platform[Platform["Android"] = 1] = "Android";
- Platform[Platform["aPad"] = 2] = "aPad";
- Platform[Platform["Watch"] = 3] = "Watch";
- Platform[Platform["iMac"] = 4] = "iMac";
- Platform[Platform["iPad"] = 5] = "iPad";
- Platform[Platform["Tim"] = 6] = "Tim";
-})(Platform || (exports.Platform = Platform = {}));
-const mobile = {
- id: "com.tencent.mobileqq",
- app_key: '0S200MNJT807V3GE',
- name: "A8.9.50.f5a7d351",
- version: "8.9.50.10650",
- ver: "8.9.50",
- sign: Buffer.from('A6 B7 45 BF 24 A2 C2 77 52 77 16 F6 F3 6E B6 8D'.split(' ').map(s => parseInt(s, 16))),
- buildtime: 1676531414,
- appid: 16,
- subid: 537155547,
- bitmap: 150470524,
- main_sig_map: 16724722,
- sub_sig_map: 0x10400,
- sdkver: "6.0.0.2535",
- display: "Android",
- qua: 'V1_AND_SQ_8.9.50_3898_YYB_D',
- ssover: 19,
-};
-const tim = {
- id: "com.tencent.tim",
- app_key: '0S200MNJT807V3GE',
- name: "A3.5.1.3168",
- version: "3.5.1.3168",
- ver: "3.5.1",
- sign: Buffer.from('775e696d09856872fdd8ab4f3f06b1e0', 'hex'),
- buildtime: 1630062176,
- appid: 16,
- subid: 537150355,
- bitmap: 150470524,
- main_sig_map: 16724722,
- sub_sig_map: 0x10400,
- sdkver: "6.0.0.2484",
- display: "Tim",
- qua: "V1_AND_SQ_8.3.9_351_TIM_D",
- ssover: 18,
-};
-const watch = {
- id: "com.tencent.qqlite",
- app_key: '0S200MNJT807V3GE',
- name: "A2.0.8",
- version: "2.0.8",
- ver: "2.0.8",
- sign: Buffer.from('A6 B7 45 BF 24 A2 C2 77 52 77 16 F6 F3 6E B6 8D'.split(' ').map(s => parseInt(s, 16))),
- buildtime: 1559564731,
- appid: 16,
- subid: 537065138,
- bitmap: 16252796,
- main_sig_map: 16724722,
- sub_sig_map: 0x10400,
- sdkver: "6.0.0.2365",
- display: "Watch",
- qua: '',
- ssover: 5
-};
-const hd = {
- id: "com.tencent.minihd.qq",
- app_key: '0S200MNJT807V3GE',
- name: "A5.9.3.3468",
- version: "5.9.3.3468",
- ver: "5.9.3",
- sign: Buffer.from('AA 39 78 F4 1F D9 6F F9 91 4A 66 9E 18 64 74 C7'.split(' ').map(s => parseInt(s, 16))),
- buildtime: 1637427966,
- appid: 16,
- subid: 537128930,
- bitmap: 150470524,
- main_sig_map: 1970400,
- sub_sig_map: 66560,
- sdkver: "6.0.0.2433",
- display: "iMac",
- qua: '',
- ssover: 12
-};
-const apklist = {
- [Platform.Android]: mobile,
- [Platform.Tim]: tim,
- [Platform.aPad]: {
- ...mobile,
- subid: 537155599,
- display: 'aPad'
- },
- [Platform.Watch]: watch,
- [Platform.iMac]: { ...hd },
- [Platform.iPad]: {
- ...mobile,
- subid: 537155074,
- sign: hd.sign,
- name: 'A8.9.50.611',
- version: 'A8.9.50.611',
- sdkver: '6.0.0.2535',
- qua: 'V1_AND_SQ_8.9.50_3898_YYB_D',
- display: 'iPad'
- },
-};
-function getApkInfo(p) {
- return apklist[p] || apklist[Platform.Android];
-}
-exports.getApkInfo = getApkInfo;
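For reference, `generateImei()` above appends a Luhn-style check digit: digits in odd (0-based) positions are doubled and digit-summed, the others are added as-is, and the check digit brings the total up to a multiple of ten. A Python transcription of that calculation is sketched below; the function names are mine, not part of the original module.

```python
import random

def imei_check_digit(body: str) -> int:
    """Luhn-style check digit, mirroring calcSP() in the JavaScript above."""
    total = 0
    for i, ch in enumerate(body):
        d = int(ch)
        if i % 2:                         # every second digit is doubled...
            d *= 2
            total += d % 10 + d // 10     # ...and its digits are summed
        else:
            total += d
    # As in calcSP(); Python's % always yields a non-negative result.
    return (100 - total) % 10

def generate_imei() -> str:
    body = "86" + "".join(random.choice("0123456789") for _ in range(12))
    return body + str(imei_check_digit(body))

print(generate_imei())  # 15-digit value ending in its check digit
```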
diff --git a/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/face3d/models/arcface_torch/configs/ms1mv3_r50.py b/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/face3d/models/arcface_torch/configs/ms1mv3_r50.py
deleted file mode 100644
index 08ba55dbbea6df0afffddbb3d1ed173efad99604..0000000000000000000000000000000000000000
--- a/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/face3d/models/arcface_torch/configs/ms1mv3_r50.py
+++ /dev/null
@@ -1,26 +0,0 @@
-from easydict import EasyDict as edict
-
-# make training faster
-# our RAM is 256G
-# mount -t tmpfs -o size=140G tmpfs /train_tmp
-
-config = edict()
-config.loss = "arcface"
-config.network = "r50"
-config.resume = False
-config.output = None
-config.embedding_size = 512
-config.sample_rate = 1.0
-config.fp16 = True
-config.momentum = 0.9
-config.weight_decay = 5e-4
-config.batch_size = 128
-config.lr = 0.1 # batch size is 512
-
-config.rec = "/train_tmp/ms1m-retinaface-t1"
-config.num_classes = 93431
-config.num_image = 5179510
-config.num_epoch = 25
-config.warmup_epoch = -1
-config.decay_epoch = [10, 16, 22]
-config.val_targets = ["lfw", "cfp_fp", "agedb_30"]
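The comment `# batch size is 512` next to `config.lr = 0.1` refers to the usual linear learning-rate scaling convention: the quoted rate assumes a 512-sample global batch and is rescaled when the effective batch differs. The sketch below shows how such an `EasyDict`-style config might be consumed under that assumption; the actual `arcface_torch` training script may apply its own scaling.

```python
from easydict import EasyDict as edict

config = edict()
config.lr = 0.1          # quoted for a global batch size of 512
config.batch_size = 128  # per-GPU batch size from the file above

def scaled_lr(cfg: edict, world_size: int = 1) -> float:
    """Linear LR scaling: lr * (global batch / 512)."""
    global_batch = cfg.batch_size * world_size
    return cfg.lr * global_batch / 512

print(scaled_lr(config, world_size=4))  # 0.1 when 4 GPUs x 128 = 512 samples
```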
diff --git a/spaces/dhof/shapetest/app.py b/spaces/dhof/shapetest/app.py
deleted file mode 100644
index d492ebaa7d5cff41bf04c77b4d7a10e1f9c1532d..0000000000000000000000000000000000000000
--- a/spaces/dhof/shapetest/app.py
+++ /dev/null
@@ -1,28 +0,0 @@
-#!/usr/bin/env python
-
-import os
-
-import gradio as gr
-import torch
-
-from app_image_to_3d import create_demo as create_demo_image_to_3d
-from app_text_to_3d import create_demo as create_demo_text_to_3d
-from model import Model
-
-DESCRIPTION = '# [Shap-E](https://github.com/openai/shap-e)'
-
-if (SPACE_ID := os.getenv('SPACE_ID')) is not None:
- DESCRIPTION += f'\nFor faster inference without waiting in queue, you may duplicate the space and upgrade to GPU in settings.'
-if not torch.cuda.is_available():
- DESCRIPTION += '\nRunning on CPU 🥶 This demo does not work on CPU.'
-
-model = Model()
-
-with gr.Blocks(css='style.css') as demo:
- gr.Markdown(DESCRIPTION)
- with gr.Tabs():
- with gr.Tab(label='Text to 3D'):
- create_demo_text_to_3d(model)
- with gr.Tab(label='Image to 3D'):
- create_demo_image_to_3d(model)
-demo.queue(api_open=False, max_size=10).launch()
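The deleted `app.py` above follows a common Gradio layout: a `Blocks` context containing a Markdown header and one `Tab` per sub-demo, each populated by a `create_demo(...)` factory. A stripped-down sketch of the same structure is shown below; the two `render_*` functions are stand-ins, not the actual Shap-E factories.

```python
import gradio as gr

def render_text_tab() -> None:
    # Stand-in for create_demo_text_to_3d(model)
    gr.Textbox(label="Prompt")
    gr.Button("Generate")

def render_image_tab() -> None:
    # Stand-in for create_demo_image_to_3d(model)
    gr.Image(label="Input image")
    gr.Button("Generate")

with gr.Blocks() as demo:
    gr.Markdown("# Minimal tabbed demo")
    with gr.Tabs():
        with gr.Tab(label="Text to 3D"):
            render_text_tab()
        with gr.Tab(label="Image to 3D"):
            render_image_tab()

if __name__ == "__main__":
    demo.queue(max_size=10).launch()
```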
diff --git a/spaces/diacanFperku/AutoGPT/Bupena Kelas 5 Sd Pdf 71.md b/spaces/diacanFperku/AutoGPT/Bupena Kelas 5 Sd Pdf 71.md
deleted file mode 100644
index 16338f00a8c3229019f411c5fc6aa2a76ee49fa9..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Bupena Kelas 5 Sd Pdf 71.md
+++ /dev/null
@@ -1,181 +0,0 @@
-
-
Bupena Kelas 5 SD PDF 71: Apa itu dan Bagaimana Cara Mendapatkannya?
-
-
Bupena Kelas 5 SD PDF 71 adalah salah satu buku penilaian tematik terpadu untuk siswa kelas 5 SD/MI yang mengacu pada kurikulum 2013 revisi 2018. Buku ini berisi materi dan latihan soal yang sesuai dengan tema dan subtema yang dipelajari di sekolah, seperti hubungan antarmakhluk hidup dalam ekosistem, perubahan wujud benda, dan lain-lain.
Buku ini sangat berguna untuk membantu siswa mengukur pemahaman dan kemampuan mereka dalam berbagai kompetensi dasar yang harus dikuasai. Selain itu, buku ini juga dapat membantu siswa mempersiapkan diri menghadapi ujian tematik yang dilaksanakan di akhir semester.
-
-
Bagaimana Cara Mendapatkan Bupena Kelas 5 SD PDF 71?
-
-
Ada beberapa cara yang dapat dilakukan untuk mendapatkan Bupena Kelas 5 SD PDF 71, yaitu:
-
-
-
Membeli buku cetak di toko buku terdekat atau online. Buku ini diterbitkan oleh penerbit Erlangga dan memiliki harga yang terjangkau. Buku ini tersedia dalam empat jilid, yaitu 5A, 5B, 5C, dan 5D.
-
Mengunduh buku elektronik atau e-book di situs resmi penerbit Erlangga atau situs lain yang menyediakan layanan download buku gratis. Buku ini dapat diakses melalui perangkat komputer, laptop, tablet, atau smartphone dengan menggunakan aplikasi pembaca PDF.
-
Meminjam buku dari perpustakaan sekolah atau umum. Buku ini biasanya tersedia di koleksi buku pelajaran atau referensi yang dapat dipinjam oleh siswa atau masyarakat umum dengan syarat dan ketentuan yang berlaku.
-
-
-
Dengan memiliki Bupena Kelas 5 SD PDF 71, siswa dapat belajar tematik dengan lebih mudah dan menyenangkan. Buku ini juga dapat meningkatkan motivasi dan minat belajar siswa serta membantu mereka mencapai prestasi akademik yang lebih baik.
-
Apa Isi Bupena Kelas 5 SD PDF 71?
-
-
Bupena Kelas 5 SD PDF 71 memiliki isi yang beragam dan menarik, sesuai dengan tema dan subtema yang dipelajari di kelas 5 SD/MI. Berikut adalah beberapa contoh isi buku ini:
-
-
-
Tema 5: Ekosistem. Subtema 1: Ekosistem di Sekitarku. Siswa akan belajar tentang pengertian ekosistem, komponen ekosistem, jenis-jenis ekosistem, dan hubungan antara komponen ekosistem.
-
Tema 5: Ekosistem. Subtema 2: Hubungan Antarmakhluk Hidup dalam Ekosistem. Siswa akan belajar tentang pengertian simbiosis, jenis-jenis simbiosis, contoh simbiosis, dan manfaat simbiosis bagi makhluk hidup.
-
Tema 6: Perubahan Wujud Benda. Subtema 1: Perubahan Wujud Benda Padat Menjadi Cair dan Gas. Siswa akan belajar tentang pengertian perubahan wujud benda, proses perubahan wujud benda, faktor-faktor yang mempengaruhi perubahan wujud benda, dan contoh perubahan wujud benda di sekitar kita.
-
Tema 6: Perubahan Wujud Benda. Subtema 2: Perubahan Wujud Benda Gas Menjadi Cair dan Padat. Siswa akan belajar tentang pengertian perubahan wujud benda, proses perubahan wujud benda, faktor-faktor yang mempengaruhi perubahan wujud benda, dan contoh perubahan wujud benda di sekitar kita.
-
-
-
Selain materi, buku ini juga dilengkapi dengan latihan soal yang berupa pilihan ganda, isian singkat, uraian, dan penugasan. Latihan soal ini bertujuan untuk mengukur pemahaman dan kemampuan siswa dalam menerapkan konsep yang telah dipelajari.
-
-
Apa Kelebihan Bupena Kelas 5 SD PDF 71?
-
-
Bupena Kelas 5 SD PDF 71 memiliki banyak kelebihan yang dapat memberikan manfaat bagi siswa, guru, dan orang tua. Berikut adalah beberapa kelebihan buku ini:
-
-
-
-
Buku ini disusun sesuai dengan kurikulum 2013 revisi 2018 yang berorientasi pada kompetensi dasar dan indikator pencapaian kompetensi.
-
Buku ini menggunakan pendekatan tematik terpadu yang mengintegrasikan berbagai mata pelajaran dalam satu tema.
-
Buku ini menggunakan bahasa yang mudah dipahami oleh siswa dan disajikan dengan ilustrasi yang menarik dan relevan.
-
Buku ini menyediakan berbagai sumber belajar yang dapat diakses melalui QR code atau tautan online.
-
Buku ini memberikan kesempatan bagi siswa untuk berkreasi, bereksplorasi, dan berkolaborasi dalam pembelajaran tematik.
-
-
-
Dengan demikian, Bupena Kelas 5 SD PDF 71 adalah buku penilaian tematik terpadu yang sangat direkomendasikan bagi siswa kelas 5 SD/MI. Buku ini dapat membantu siswa belajar tematik dengan lebih mudah dan menyenangkan serta meningkatkan prestasi akademik mereka.
-
Apa Manfaat Bupena Kelas 5 SD PDF 71?
-
-
Bupena Kelas 5 SD PDF 71 memiliki banyak manfaat yang dapat dirasakan oleh siswa, guru, dan orang tua. Berikut adalah beberapa manfaat buku ini:
-
-
-
Buku ini dapat meningkatkan kualitas pembelajaran tematik di kelas 5 SD/MI dengan menyediakan materi dan latihan soal yang sesuai dengan kurikulum 2013 revisi 2018.
-
Buku ini dapat membantu siswa mengembangkan kompetensi dasar dan indikator pencapaian kompetensi yang diharapkan dari setiap tema dan subtema.
-
Buku ini dapat membantu siswa mengasah keterampilan berpikir kritis, kreatif, kolaboratif, dan komunikatif dalam pembelajaran tematik.
-
Buku ini dapat membantu siswa mengenal dan menghargai keanekaragaman makhluk hidup dan lingkungan di sekitar mereka.
-
Buku ini dapat membantu siswa menumbuhkan sikap positif dan karakter bangsa dalam pembelajaran tematik.
-
-
-
Selain itu, buku ini juga dapat membantu guru dalam merencanakan, melaksanakan, dan mengevaluasi pembelajaran tematik di kelas 5 SD/MI. Buku ini juga dapat membantu orang tua dalam mendampingi dan mendukung anak-anak mereka dalam belajar tematik di rumah.
-
-
Bagaimana Cara Belajar dengan Bupena Kelas 5 SD PDF 71?
-
-
Bupena Kelas 5 SD PDF 71 adalah buku penilaian tematik terpadu yang dapat digunakan sebagai sumber belajar bagi siswa kelas 5 SD/MI. Berikut adalah beberapa cara belajar dengan buku ini:
-
-
-
Sebelum mempelajari setiap tema dan subtema, siswa harus membaca tujuan pembelajaran dan indikator pencapaian kompetensi yang terdapat di awal setiap bab.
-
Selama mempelajari setiap tema dan subtema, siswa harus memperhatikan materi yang disajikan dengan bahasa yang mudah dipahami dan ilustrasi yang menarik dan relevan.
-
Setelah mempelajari setiap tema dan subtema, siswa harus mengerjakan latihan soal yang berupa pilihan ganda, isian singkat, uraian, dan penugasan dengan jujur dan teliti.
-
Siswa harus memeriksa jawaban mereka dengan menggunakan kunci jawaban yang terdapat di akhir setiap bab atau dengan bantuan guru atau orang tua.
-
Siswa harus mencatat nilai mereka dari setiap latihan soal dan mengevaluasi kekuatan dan kelemahan mereka dalam pembelajaran tematik.
-
-
-
Dengan belajar dengan Bupena Kelas 5 SD PDF 71, siswa dapat belajar tematik dengan lebih mudah dan menyenangkan serta meningkatkan prestasi akademik mereka.
-
Apa Saja Tema dan Subtema yang Terdapat di Bupena Kelas 5 SD PDF 71?
-
-
Bupena Kelas 5 SD PDF 71 merupakan buku penilaian tematik terpadu yang mengacu pada kurikulum 2013 revisi 2018. Buku ini terdiri dari empat jilid, yaitu 5A, 5B, 5C, dan 5D. Setiap jilid mencakup dua tema dan empat subtema. Berikut adalah tema dan subtema yang terdapat di buku ini:
-
-
-
-
Jilid
-
Tema
-
Subtema
-
-
-
5A
-
Tema 1: Organ Gerak Hewan dan Manusia
-
Subtema 1: Organ Gerak pada Hewan Subtema 2: Organ Gerak pada Manusia Subtema 3: Perawatan Organ Gerak pada Hewan Subtema 4: Perawatan Organ Gerak pada Manusia
-
-
-
5A
-
Tema 2: Selamatkan Makhluk Hidup
-
Subtema 1: Keanekaragaman Makhluk Hidup Subtema 2: Perlindungan Makhluk Hidup Subtema 3: Konservasi Makhluk Hidup Subtema 4: Pelestarian Makhluk Hidup
-
-
-
5B
-
Tema 3: Perkembangbiakan Hewan dan Tumbuhan
-
Subtema 1: Perkembangbiakan Hewan Subtema 2: Perkembangbiakan Tumbuhan Subtema 3: Adaptasi Hewan dan Tumbuhan Subtema 4: Keseimbangan Ekosistem
-
-
-
5B
-
Tema 4: Pahlawanku
-
Subtema 1: Tokoh Pahlawan Nasional Subtema 2: Tokoh Pahlawan Daerah Subtema 3: Tokoh Pahlawan Lingkungan Subtema 4: Tokoh Pahlawan Sekolahku
-
-
-
5C
-
Tema 5: Ekosistem
-
Subtema 1: Ekosistem di Sekitarku Subtema 2: Hubungan Antarmakhluk Hidup dalam Ekosistem Subtema 3: Dampak Perubahan Ekosistem Subtema 4: Upaya Pelestarian Ekosistem
-
-
-
5C
-
Tema 6: Perubahan Wujud Benda
-
Subtema 1: Perubahan Wujud Benda Padat Menjadi Cair dan Gas Subtema 2: Perubahan Wujud Benda Gas Menjadi Cair dan Padat Subtema 3: Manfaat Perubahan Wujud Benda bagi Kehidupan Subtema 4: Pengaruh Suhu terhadap Perubahan Wujud Benda
-
-
-
5D
-
Tema 7: Energi dan Perubahannya
-
Subtema 1: Sumber Energi Alamiah dan Buatan Subtema 2: Penggunaan Energi Listrik di Rumah Tangga Subtema 3: Penghematan Energi Listrik di Rumah Tangga Subtema 4: Energi Alternatif Ramah Lingkungan
-
-
-
5D
-
Tema 8: Bangga sebagai Bangsa Indonesia
-
Subtema 1: Keberagaman Suku Bangsa di Indonesia Subtema 2: Keberagaman Budaya di Indonesia Subtema 3: Keberagaman Agama di Indonesia Subtema 4: Persatuan dan Kesatuan Bangsa Indonesia
-
-
-
Dengan mempelajari tema dan subtema yang terdapat di Bupena Kelas 5 SD PDF 71, siswa dapat memperluas wawasan dan pengetahuan mereka tentang berbagai aspek kehidupan.
-
-
Apa Rekomendasi Buku Lain yang Sejenis dengan Bupena Kelas 5 SD PDF 71?
-
-
Bupena Kelas 5 SD PDF 71 adalah salah satu buku penilaian tematik terpadu yang dapat digunakan oleh siswa kelas 5 SD/MI. Selain buku ini, ada beberapa buku lain yang sejenis dan juga bermanfaat bagi siswa. Berikut adalah beberapa rekomendasi buku lain yang sejenis dengan Bupena Kelas 5 SD PDF
-71:
-
-
-
-
Buku Penilaian Tematik Terpadu Kurikulum
-2013 Kelas V Jilid A-D oleh Tim Penulis
-Penerbit Erlangga. Buku ini juga mengacu pada kurikulum
-2013 revisi
-2018 dan terdiri dari empat jilid yang mencakup delapan tema dan
-32 subtema. Buku ini juga dilengkapi dengan latihan soal, kunci jawaban,
-dan QR code untuk mengakses sumber belajar online.
-
-
Buku Penilaian Tematik Terpadu Kurikulum
-2013 Kelas V Jilid A-D oleh Tim Penulis
-Penerbit Yudhistira. Buku ini juga mengacu pada kurikulum
-2013 revisi
-2018 dan terdiri dari empat jilid yang mencakup delapan tema dan
-32 subtema. Buku ini juga dilengkapi dengan latihan soal, kunci jawaban,
-dan QR code untuk mengakses sumber belajar online.
-
-
Buku Penilaian Tematik Terpadu Kurikulum
-2013 Kelas V Jilid A-D oleh Tim Penulis
-Penerbit Intan Pariwara. Buku ini juga mengacu pada kurikulum
-2013 revisi
-2018 dan terdiri dari empat jilid yang mencakup delapan tema dan
-32 subtema. Buku ini juga dilengkapi dengan latihan soal, kunci jawaban,
-dan QR code untuk mengakses sumber belajar online.
-
-
Buku Penilaian Tematik Terpadu Kurikulum
-2013 Kelas V Jilid A-D oleh Tim Penulis
-Penerbit Ganeca Exact. Buku ini juga mengacu pada kurikulum
-2013 revisi
-2018 dan terdiri dari empat jilid yang mencakup delapan tema dan
-32 subtema. Buku ini juga dilengkapi dengan latihan soal, kunci jawaban,
-dan QR code untuk mengakses sumber belajar online.
-
-
Buku Penilaian Tematik Terpadu Kurikulum
-2013 Kelas V Jilid A-D oleh Tim Penulis
-Penerbit Esis. Buku ini juga mengacu pada kurikulum
-2013 revisi
-2018 dan terdiri dari empat jilid yang mencakup delapan tema dan
-32 subtema. Buku ini juga dilengkapi dengan latihan soal, kunci jawaban,
-dan QR code untuk mengakses sumber belajar online.
-
-
Dengan membaca buku-buku lain yang sejenis dengan Bupena Kelas
-5 SD PDF
-71, siswa dapat memperkaya referensi dan sumber belajar mereka tentang pembelajaran tematik.
-
Kesimpulan
-
-
Bupena Kelas 5 SD PDF 71 adalah buku penilaian tematik terpadu yang sangat bermanfaat bagi siswa kelas 5 SD/MI. Buku ini menyediakan materi dan latihan soal yang sesuai dengan kurikulum 2013 revisi 2018. Buku ini juga memiliki banyak kelebihan dan manfaat yang dapat meningkatkan kualitas pembelajaran tematik di kelas 5 SD/MI. Buku ini terdiri dari empat jilid yang mencakup delapan tema dan 32 subtema yang menarik dan relevan. Buku ini juga dapat digunakan bersama dengan buku-buku lain yang sejenis sebagai sumber belajar yang beragam dan kaya. Dengan belajar dengan Bupena Kelas 5 SD PDF 71, siswa dapat belajar tematik dengan lebih mudah dan menyenangkan serta meningkatkan prestasi akademik mereka.
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/Grimm Season 1 TOP Download Mkv.md b/spaces/diacanFperku/AutoGPT/Grimm Season 1 TOP Download Mkv.md
deleted file mode 100644
index e9ee12dd249f13a46ca68eb7a092fc0135b3df2c..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Grimm Season 1 TOP Download Mkv.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
nick continues to investigate the truth about how he came to be a grimm. with maria, renard, and hank all sticking to their story, nick is left with no other choice but to go with his gut feeling. he finds the man that originally gave him the book and does what he feels needs to be done, in order to keep himself and monroe safe and free. i think it was important to us as an audience to see nick, specifically, stuck in a system that he didnt see himself being in. he doesnt accept things as they are and its this push back against the system that allows him to reassess his approach to the world and to his people. while the political subtext is sometimes a little heavy, this is another key episode, i think, for the character of nick and i love it.
its almost inevitable, theres going to be a monster in the house. and not just any monster, no, this one is from a twisted line of wesen that revels in pain and inflicting it on others. its a familiar face, and people are not happy when they find out what happened to those missing children. and even if they didnt recognize him, they know his taste in flesh. when two little girls go missing in the middle of the forest, nick is called to investigate. he finds out that the girls are not alone and are being held captive by the hogans. nick is taken captive and they attempt to kill him to save the girls. the girls start to die just like the first victim (the incubus), but nick somehow manages to get out of the house and kill the clan leader himself. its a pretty intense sequence that leads us to the finale of the season.
899543212b
-
-
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/Nissan Datascan 1.6 Cracked VERIFIED.md b/spaces/diacanFperku/AutoGPT/Nissan Datascan 1.6 Cracked VERIFIED.md
deleted file mode 100644
index 3dd2fe4199a87695f0a2adf609c8918f18124d0c..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Nissan Datascan 1.6 Cracked VERIFIED.md
+++ /dev/null
@@ -1,9 +0,0 @@
-
-
no virus found in this sophiestoric software.thousands of my drivers registered. when i connect ios device sometimes show error " com.durham diagnostic s live" but not only this my phone cannot recognize my sd card i tried everything and i find another way to activate it and my phone work normal as ios 13.1.1
nissan diagnostics tool 3 plus is an utility for the evaluation of automotive networks and diagnostics for calculating the alarms and counter values.the application is designed to use as a diagnostic software assistant tool to combine the essential processes like diagnostics,programming,calculations,storage and transmitting of diagnostic or programming data. it could operate virtually any diagnostic socket and transmit information to a pc or laptop.
-
nissan data scan can be utilized for diagnosing and programming of my vehicles one in all nissan-branded vehicles, as well as the following non-nissan vehicles:. nissan's specific.s. data scanner - japanese website review. on this page, we will share the launch information for nissan data scan, see below if your model is included in this version, right here we'll share to you launch news, reviews, can also find download location for the launch version.nissan datascan 1.6 cracked.. read more.
view on youtoo
-
nissan diagnostics tool 3 plus is a utility for automotive networks and diagnostics for calculating the alarms and counter values. the software program is designed to use as a diagnostic software assistant tool to combine the essential processes like diagnostics, programming, calculations, storage and transmitting of diagnostic or programming data. it may operate virtually any diagnostic socket and transmit information to a pc or laptop.
- 899543212b
-
-
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/Physics Derivations For Class 11 Pdf 3844.md b/spaces/diacanFperku/AutoGPT/Physics Derivations For Class 11 Pdf 3844.md
deleted file mode 100644
index 0f07cd5971bb1bfdfb3da09163a72b6cdfa6db47..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Physics Derivations For Class 11 Pdf 3844.md
+++ /dev/null
@@ -1,69 +0,0 @@
-
-
Physics Derivations For Class 11 Pdf 3844: A Comprehensive Guide
-
If you are looking for a complete and detailed guide on physics derivations for class 11 pdf 3844, you have come to the right place. In this article, we will cover the following topics:
-
-
What is physics derivations for class 11 pdf 3844 and why is it important?
-
What are the main topics and concepts covered in physics derivations for class 11 pdf 3844?
-
How to download physics derivations for class 11 pdf 3844 for free?
-
How to use physics derivations for class 11 pdf 3844 to ace your exams?
What is physics derivations for class 11 pdf 3844 and why is it important?
-
Physics derivations for class 11 pdf 3844 is a comprehensive and updated collection of physics formulas and derivations for class 11 students. It covers all the topics and subtopics of the CBSE syllabus, as well as some additional topics that are useful for competitive exams like JEE and NEET.
-
Physics derivations for class 11 pdf 3844 is important because it helps you to understand the concepts and principles of physics in a deeper and clearer way. It also helps you to solve numerical problems and apply your knowledge to real-life situations. By learning physics derivations for class 11 pdf 3844, you can improve your analytical and logical skills, as well as your confidence and interest in physics.
-
What are the main topics and concepts covered in physics derivations for class 11 pdf 3844?
-
Physics derivations for class 11 pdf 3844 covers a wide range of topics and concepts, such as:
-
-
Units and measurements
-
Motion in a straight line
-
Motion in a plane
-
Laws of motion
-
Work, energy and power
-
System of particles and rotational motion
-
Gravitation
-
Mechanical properties of solids
-
Mechanical properties of fluids
-
Thermal properties of matter
-
Thermodynamics
-
Kinetic theory of gases
-
Oscillations
-
Waves
-
Electric charges and fields
-
Electric potential and capacitance
-
Current electricity
-
Moving charges and magnetism
-
Magnetism and matter
-
Electromagnetic induction
-
Alternating current
-
Electromagnetic waves
-
Ray optics and optical instruments
-
Wave optics
-
Dual nature of radiation and matter
-
Atoms
-
Nuclei
-
Semiconductor electronics: materials, devices and simple circuits
-
Communication systems
How to download physics derivations for class 11 pdf 3844 for free?
-
There are many online sources that offer physics derivations for class 11 pdf 3844 for free. However, not all of them are reliable or updated. Some of them may contain errors, omissions, or outdated information. Therefore, you should be careful and selective when choosing a source to download physics derivations for class 11 pdf 3844 for free.
-
One of the best and most trusted sources to download physics derivations for class 11 pdf 3844 for free is SoundCloud. SoundCloud is a popular online platform that allows users to upload and share audio files. You can find physics derivations for class 11 pdf 3844 on SoundCloud as an audiobook and an excerpt. The audiobook is narrated by a professional voice actor and the excerpt is a sample of the content. You can listen to them online or download them to your device.
-
-
To download physics derivations for class 11 pdf 3844 for free from SoundCloud, you need to follow these steps:
Click on the "More" button below the audio player.
-
Select "Download file" from the drop-down menu.
-
Save the file to your device.
-
-
Alternatively, you can also scan the QR code below to access the link directly from your smartphone.
-
-
How to use physics derivations for class 11 pdf 3844 to ace your exams?
-
Physics derivations for class 11 pdf 3844 is a valuable resource that can help you to ace your exams. However, you need to use it wisely and effectively. Here are some tips on how to use physics derivations for class 11 pdf 3844 to ace your exams:
-
-
Read and listen to physics derivations for class 11 pdf 3844 regularly and thoroughly. Try to understand the logic and steps behind each derivation. Don't just memorize the formulas, but also learn how they are derived and why they are valid.
-
Practice solving numerical problems using physics derivations for class 11 pdf 3844. Try to apply the formulas and derivations to different situations and scenarios. Check your answers with the solutions provided in the pdf file or online sources.
-
Revise and review physics derivations for class 11 pdf 3844 before your exams. Go over the important topics and concepts that are likely to appear in your exams. Make notes and flashcards of the key points and formulas. Test yourself with mock tests and previous year papers.
-
Use physics derivations for class 11 pdf 3844 as a reference and a supplement, not as a substitute. Don't rely solely on physics derivations for class 11 pdf 3844 for your exam preparation. You should also study from your textbooks, notes, teachers, and other sources. Physics derivations for class 11 pdf 3844 is meant to enhance your understanding and performance, not to replace your learning and effort.
-
-
We hope this article has helped you to learn more about physics derivations for class 11 pdf 3844 and how to use it effectively. We wish you all the best for your exams!
d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/diagaiwei/ir_chinese_medqa/colbert/modeling/base_colbert.py b/spaces/diagaiwei/ir_chinese_medqa/colbert/modeling/base_colbert.py
deleted file mode 100644
index 4c8b2709800faaf749ca980be7be85b6e4a1397c..0000000000000000000000000000000000000000
--- a/spaces/diagaiwei/ir_chinese_medqa/colbert/modeling/base_colbert.py
+++ /dev/null
@@ -1,107 +0,0 @@
-import os
-import torch
-
-from colbert.utils.utils import torch_load_dnn
-
-from transformers import AutoTokenizer
-from colbert.modeling.hf_colbert import HF_ColBERT
-from colbert.infra.config import ColBERTConfig
-
-
-class BaseColBERT(torch.nn.Module):
- """
- Shallow module that wraps the ColBERT parameters, custom configuration, and underlying tokenizer.
- This class provides direct instantiation and saving of the model/colbert_config/tokenizer package.
-
- Like HF, evaluation mode is the default.
- """
-
- def __init__(self, name, colbert_config=None):
- super().__init__()
-
- self.name = name
- self.colbert_config = ColBERTConfig.from_existing(ColBERTConfig.load_from_checkpoint(name), colbert_config)
- self.model = HF_ColBERT.from_pretrained(name, colbert_config=self.colbert_config)
- self.raw_tokenizer = AutoTokenizer.from_pretrained(self.model.base)
-
- self.eval()
-
- @property
- def device(self):
- return self.model.device
-
- @property
- def bert(self):
- return self.model.bert
-
- @property
- def linear(self):
- return self.model.linear
-
- @property
- def score_scaler(self):
- return self.model.score_scaler
-
- def save(self, path):
- assert not path.endswith('.dnn'), f"{path}: We reserve *.dnn names for the deprecated checkpoint format."
-
- self.model.save_pretrained(path)
- self.raw_tokenizer.save_pretrained(path)
-
- self.colbert_config.save_for_checkpoint(path)
-
-
-if __name__ == '__main__':
- import random
- import numpy as np
-
- from colbert.infra.run import Run
- from colbert.infra.config import RunConfig
-
- random.seed(12345)
- np.random.seed(12345)
- torch.manual_seed(12345)
-
- with Run().context(RunConfig(gpus=2)):
- m = BaseColBERT('bert-base-uncased', colbert_config=ColBERTConfig(Run().config, doc_maxlen=300, similarity='l2'))
- m.colbert_config.help()
- print(m.linear.weight)
- m.save('/future/u/okhattab/tmp/2021/08/model.deleteme2/')
-
- m2 = BaseColBERT('/future/u/okhattab/tmp/2021/08/model.deleteme2/')
- m2.colbert_config.help()
- print(m2.linear.weight)
-
- exit()
-
- m = BaseColBERT('/future/u/okhattab/tmp/2021/08/model.deleteme/')
- print('BaseColBERT', m.linear.weight)
- print('BaseColBERT', m.colbert_config)
-
- exit()
-
- # m = HF_ColBERT.from_pretrained('nreimers/MiniLMv2-L6-H768-distilled-from-BERT-Large')
- m = HF_ColBERT.from_pretrained('/future/u/okhattab/tmp/2021/08/model.deleteme/')
- print('HF_ColBERT', m.linear.weight)
-
- m.save_pretrained('/future/u/okhattab/tmp/2021/08/model.deleteme/')
-
- # old = OldColBERT.from_pretrained('bert-base-uncased')
- # print(old.bert.encoder.layer[10].attention.self.value.weight)
-
- # random.seed(12345)
- # np.random.seed(12345)
- # torch.manual_seed(12345)
-
- dnn = torch_load_dnn(
- "/future/u/okhattab/root/TACL21/experiments/Feb26.NQ/train.py/ColBERT.C3/checkpoints/colbert-60000.dnn")
- # base = dnn.get('arguments', {}).get('model', 'bert-base-uncased')
-
- # new = BaseColBERT.from_pretrained('bert-base-uncased', state_dict=dnn['model_state_dict'])
-
- # print(new.bert.encoder.layer[10].attention.self.value.weight)
-
- print(dnn['model_state_dict']['linear.weight'])
- # print(dnn['model_state_dict']['bert.encoder.layer.10.attention.self.value.weight'])
-
- # # base_model_prefix
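`BaseColBERT` above is a thin wrapper that keeps the Hugging Face model, the `ColBERTConfig`, and the raw tokenizer together, and its `save()` writes all three into one directory so a checkpoint can be reloaded by path alone. A hedged usage sketch follows, assuming the repository's package layout and that `ColBERTConfig` accepts keyword settings as in the `__main__` block above; the checkpoint path is a placeholder.

```python
from colbert.infra.config import ColBERTConfig
from colbert.modeling.base_colbert import BaseColBERT

# Build from a base HF checkpoint plus a ColBERT-specific configuration.
model = BaseColBERT(
    "bert-base-uncased",
    colbert_config=ColBERTConfig(doc_maxlen=300, similarity="l2"),
)

# save() writes model weights, tokenizer files, and the ColBERT config side by side...
model.save("/tmp/colbert-checkpoint")  # placeholder path

# ...so the whole package can be reloaded later from that single directory.
reloaded = BaseColBERT("/tmp/colbert-checkpoint")
print(reloaded.colbert_config.doc_maxlen, reloaded.device)
```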
diff --git a/spaces/dnouri/monai-demo/app.py b/spaces/dnouri/monai-demo/app.py
deleted file mode 100644
index 7585e27694fa620c0087a419d775a692edb2ac11..0000000000000000000000000000000000000000
--- a/spaces/dnouri/monai-demo/app.py
+++ /dev/null
@@ -1,226 +0,0 @@
-from glob import glob
-import logging
-from matplotlib import pyplot as plt
-import os
-
-from monai.bundle.scripts import upload_zoo_bundle_to_hf
-import streamlit as st
-import torch
-
-
-def main():
- st.title("MONAI 🤗 Hugging Face Integration")
-
- st.write("""\
-Here's a demo of a prototype integration between
-[MONAI](https://monai.io/) and the [Hugging Face
-Hub](https://huggingface.co/docs/hub/index), which allows for
-uploading models to the Hub and downloading them. The integration
-itself is implemented in [this
-branch](https://github.com/dnouri/MONAI/tree/dnouri/huggingface-support)
-of MONAI.
-""")
-
- st.write("""\
-## Uploading models to the Hub ⬆
-
-The new `upload_zoo_bundle_to_hf` command allows us to upload models
-from the existing [MONAI Model
-Zoo](https://github.com/Project-MONAI/model-zoo) on Github directly
-onto the Hugging Face Hub.
-
-The `--name` option specifies the [filename of an existing
-model](https://github.com/Project-MONAI/model-zoo/releases/tag/hosting_storage_v1)
-in the MONAI Model Zoo, while `--hf_organization` is the Hugging Face Hub
-organization (or user) to upload to, and `--hf_token` is the [HF user access
-token](https://huggingface.co/docs/hub/security-tokens).
-
-An additional `--hf_card_data` option allows us to specify [model card
-metadata](https://huggingface.co/docs/hub/models-cards#model-card-metadata)
-to be added to the Hugging Face model card.
-
-An example call to the `upload_zoo_bundle_to_hf` script looks like
-this:
-
-```bash
- python -m monai.bundle upload_zoo_bundle_to_hf \\
- --name spleen_ct_segmentation_v0.1.0 \\
- --hf_organization dnouri --hf_token mytoken \\
- --hf_card_data '{"lang": "en"}'
-```
-
-An example of a model uploaded this way can be found
-[here](https://huggingface.co/dnouri/spleen_ct_segmentation).
-
-### Try it out!
-
-To try out uploading your own model, please provide the information below:
-""")
- filename = st.text_input("Filename of MONAI Model Zoo model "
- "(e.g. ventricular_short_axis_3label_v0.1.0.zip)")
- username = st.text_input("Hub organization or user name (e.g. dnouri)")
- card_data = st.text_input("Optional model card metadata",
- value='{"tags": ["MONAI"]}')
- token = st.text_input("Hugging Face user access token")
-
- if filename and username and token:
- st.write("Please wait...")
- upload_zoo_bundle_to_hf(
- name=filename,
- hf_organization=username,
- hf_token=token,
- hf_card_data=card_data or None,
- )
- st.write(f"""\
-Done! You should be able to find the [result here](https://huggingface.co/{username}/{filename.rsplit("_", 1)[0]}).
-""")
-
- st.write("""\
-## Downloading models from the Hub ⬇
-
-Uploading isn't much fun if you can't also download the models from
-the Hub! To help with that, we've added support for the Hugging Face
-Hub to the existing MONAI bundle `download` command.
-
-The `download` command's default `--source` is `github`. We'll choose
-`huggingface` instead to download from the Hub.
-
-The `--name` of the model is the name of your model on the Hub,
-e.g. `ventricular_short_axis_3label`. Note that as per MONAI
-convention, we do not specify the version name here. (Future versions of
-this command might allow for downloading specific versions, or tags.)
-
-The `--repo` normally points to the MONAI Model Zoo's ['hosting
-storage' release page on
-Github](https://github.com/Project-MONAI/model-zoo/releases/tag/hosting_storage_v1).
-When we call `download` with the `huggingface` source, we'll require
-the `--repo` argument to point to the organization or user name that
-hosts the model, e.g. `dnouri`. (While this choice is a bit
-confusing, it also reflects an attempt to pragmatically blend concepts
-from both MONAI bundles and the Hub. Future versions might improve on
-this.)
-
-An example call to the `download` command that fetches the model we
-uploaded previously looks like this:
-
-```bash
- python -m monai.bundle download \\
- --name spleen_ct_segmentation \\
- --source huggingface --repo dnouri
-```
-""")
-
- st.write("""\
-## Use model for inference 🧠
-
-To use the `spleen_ct_segmentation` pretrained model to do inference,
-we'll first load it into memory (as a TorchScript module) using the
-`load` function below. This will download the model from the Hugging
-Face Hub, as `load` uses the aforementioned `download` under the hood:
-""")
- # The next line is a workaround against a buggy interaction
- # between how streamlit sets up stderr and how tqdm uses it:
- logging.getLogger().setLevel(logging.NOTSET)
-
- with st.echo():
- from monai.bundle.scripts import load
-
- model, metadata, extra = load(
- name="spleen_ct_segmentation",
- source="huggingface",
- repo="dnouri",
- load_ts_module=True,
- progress=False,
- )
-
- st.write("""\
-This gives us the model, but we'll also need the corresponding
-preprocessing transforms. These are defined in the MONAI bundle
-configuration files. There's unfortunately no convenient bundle script
-function for building them, so we'll have to reach into MONAI's
-internals for a bit:
-""")
- with st.echo():
- from monai.bundle.config_parser import ConfigParser
- from monai.bundle.scripts import _process_bundle_dir
-
- model_dir = _process_bundle_dir() / "spleen_ct_segmentation"
- config_paths = [
- model_dir / "configs" / "train.json",
- model_dir / "configs" / "evaluate.json",
- ]
- config = ConfigParser(
- ConfigParser.load_config_files(config_paths),
- )
- preprocess = config.get_parsed_content("validate#preprocessing")
-
- st.write("""\
-We'll borrow code from the MONAI [Spleen 3D segmentation with MONAI
-tutorial](https://github.com/Project-MONAI/tutorials/blob/main/3d_segmentation/spleen_segmentation_3d.ipynb)
-to download the data that our `spleen_ct_segmentation` model was
-trained with:
-""")
-
- with st.echo():
- from monai.apps import download_and_extract
-
- root_dir = os.environ.get(
- "MONAI_DATA_DIRECTORY",
- os.path.expanduser("~/.cache/monai_data_directory")
- )
- os.makedirs(root_dir, exist_ok=True)
- resource = "https://msd-for-monai.s3-us-west-2.amazonaws.com/Task09_Spleen.tar"
- md5 = "410d4a301da4e5b2f6f86ec3ddba524e"
- compressed_file = os.path.join(root_dir, "Task09_Spleen.tar")
- data_dir = os.path.join(root_dir, "Task09_Spleen")
- if not os.path.exists(data_dir):
- download_and_extract(resource, compressed_file, root_dir, md5)
-
- train_images = sorted(
- glob(os.path.join(data_dir, "imagesTr", "*.nii.gz")))
- train_labels = sorted(
- glob(os.path.join(data_dir, "labelsTr", "*.nii.gz")))
- data_dicts = [
- {"image": image_name, "label": label_name}
- for image_name, label_name in zip(train_images, train_labels)
- ]
- files = data_dicts
- st.write(f"Downloaded {len(files)} files.")
-
- st.write("""\
-Finally, we can run inference and plot some results: 🥳
-""")
-    image_idx = st.slider("Image number", 0, len(files) - 1)
-
- with st.echo():
- from monai.inferers import sliding_window_inference
-
- data = preprocess(files[image_idx])
- device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
- output = sliding_window_inference(
- inputs=data["image"].to(device)[None, ...],
- roi_size=(160, 160, 160),
- sw_batch_size=4,
- predictor=model.eval(),
- )
-
- fig, (ax1, ax2, ax3) = plt.subplots(1, 3)
- ax1.set_title("Image")
- ax2.set_title("Label")
- ax3.set_title("Output")
- ax1.imshow(data["image"][0, :, :, 80], cmap="gray")
- ax2.imshow(data["label"][0, :, :, 80], cmap="gray")
- output_img = (
- torch.argmax(output, dim=1)[0, :, :, 80]
- .cpu().detach().numpy()
- )
- ax3.imshow(
- output_img,
- cmap="gray",
- )
- st.pyplot(fig)
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/elumamai/openai-whisper-large/README.md b/spaces/elumamai/openai-whisper-large/README.md
deleted file mode 100644
index b1fcf15e68f03bc04f20c02c1d333e1140c41c87..0000000000000000000000000000000000000000
--- a/spaces/elumamai/openai-whisper-large/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Openai Whisper Large
-emoji: 📈
-colorFrom: indigo
-colorTo: blue
-sdk: gradio
-sdk_version: 3.38.0
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/ennov8ion/FantasyArt-Models/README.md b/spaces/ennov8ion/FantasyArt-Models/README.md
deleted file mode 100644
index c6707d6ba21511efa2cb15ba7301e57fdf8e6394..0000000000000000000000000000000000000000
--- a/spaces/ennov8ion/FantasyArt-Models/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Maximum Multiplier
-emoji: 🛕🛕
-colorFrom: green
-colorTo: blue
-sdk: gradio
-sdk_version: 3.15.0
-app_file: app.py
-pinned: true
-duplicated_from: blueorigin6/Scifi-Models
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/eubinecto/idiomify/explore/explore_nlpaug.py b/spaces/eubinecto/idiomify/explore/explore_nlpaug.py
deleted file mode 100644
index e7d419807a8d264de5b50c393f900a00689a8391..0000000000000000000000000000000000000000
--- a/spaces/eubinecto/idiomify/explore/explore_nlpaug.py
+++ /dev/null
@@ -1,21 +0,0 @@
-
-import nlpaug.augmenter.word as naw
-import nlpaug.augmenter.sentence as nas
-
-import nltk
-
-
-sent = "I am really happy with the new job and I mean that with sincere feeling"
-
-
-def main():
- nltk.download("omw-1.4")
- # this seems legit! I could definitely use this to increase the accuracy of the model
- # for a few idioms (possibly ten, ten very different but frequent idioms)
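-    # ContextualWordEmbsAug substitutes words using a pretrained masked language model
-    # (a BERT-style model by default), so the first call downloads the underlying weights.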
- aug = naw.ContextualWordEmbsAug()
- augmented = aug.augment(sent, n=10)
- print(augmented)
-
-
-if __name__ == '__main__':
- main()
diff --git a/spaces/exbert-project/exbert/client/src/index.html b/spaces/exbert-project/exbert/client/src/index.html
deleted file mode 100644
index 03774870d641820df4af9e12ac9d8a20bd8d4380..0000000000000000000000000000000000000000
--- a/spaces/exbert-project/exbert/client/src/index.html
+++ /dev/null
@@ -1,236 +0,0 @@
-
-        exBERT
-
- Large language models can produce powerful contextual representations that lead to improvements across many
- NLP tasks.
- Since these models are typically guided by a sequence of learned self attention mechanisms and may comprise
- undesired inductive biases, it is paramount to be able to explore what the attention has learned.
- While static analyses of these models lead to targeted insights, interactive tools are more dynamic and can
- help humans better gain an intuition for the model-internal reasoning process.
-
-
-
- We present exBERT , an interactive tool named after the popular BERT language model, that provides
- insights into the meaning of the contextual representations by matching a human-specified input to similar
- contexts in a large annotated dataset.
- By aggregating the annotations of the matching similar contexts, exBERT helps intuitively explain
- what each attention-head has learned.
-
-
-
Large language models can produce powerful contextual representations that lead to improvements across many
- NLP tasks. Though these models can comprise undesired inductive biases, it is challenging to identify what
- information they encode in their learned representations.
-
-
Since the model-internal reasoning process is often guided by a sequence of learned self-attention
- mechanisms, it is paramount to be able to explore what the attention has learned. While static analyses for
- this can lead to targeted insights, interactive tools can be more dynamic and help humans gain an intuition
- for the model-internal reasoning process. We present exBERT, a tool that helps to gain insights into the
- meaning of the contextual representations. exBERT matches a human-specified input to similar contexts in a
- large annotated dataset. By aggregating these annotations across all similar contexts, exBERT can help to
- explain what each attention-head has learned.
-
-
Thanks to
- Jesse Vig
- for feedback. Please let us know what you think by commenting below!
-
-
-
-
diff --git a/spaces/falterWliame/Face_Mask_Detection/Engineering Graphics Lecture Notes Download _BEST_.md b/spaces/falterWliame/Face_Mask_Detection/Engineering Graphics Lecture Notes Download _BEST_.md
deleted file mode 100644
index 8a8f8045729895a102bbff2215d437f93ae6f19d..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/Engineering Graphics Lecture Notes Download _BEST_.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-Thank you extremely much for downloading engineering drawing lecture notes.Maybe you have knowledge that, people have look numerous ... 4d29de3e1b
-
-
-
diff --git a/spaces/faressayadi/n-gpt/README.md b/spaces/faressayadi/n-gpt/README.md
deleted file mode 100644
index 4e054febbdc6453f19af01ea709db028de7aded2..0000000000000000000000000000000000000000
--- a/spaces/faressayadi/n-gpt/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: N Gpt
-emoji: 📉
-colorFrom: gray
-colorTo: indigo
-sdk: streamlit
-sdk_version: 1.17.0
-app_file: app.py
-pinned: false
-license: openrail
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/fclong/summary/fengshen/data/universal_datamodule/universal_datamodule.py b/spaces/fclong/summary/fengshen/data/universal_datamodule/universal_datamodule.py
deleted file mode 100644
index 240557694e97197f08a310351eb6206973107c4d..0000000000000000000000000000000000000000
--- a/spaces/fclong/summary/fengshen/data/universal_datamodule/universal_datamodule.py
+++ /dev/null
@@ -1,165 +0,0 @@
-from pytorch_lightning import LightningDataModule
-from typing import Optional
-
-from torch.utils.data import DataLoader, DistributedSampler
-
-
-def get_consume_samples(data_model: LightningDataModule) -> int:
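-    # Prefer the consumed_samples counter tracked by the LightningModule (exact across resumes);
-    # otherwise approximate it from the number of optimizer steps taken so far.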
- if hasattr(data_model.trainer.lightning_module, 'consumed_samples'):
- consumed_samples = data_model.trainer.lightning_module.consumed_samples
- print('get consumed samples from model: {}'.format(consumed_samples))
- else:
- world_size = data_model.trainer.world_size
- consumed_samples = max(0, data_model.trainer.global_step - 1) * \
- data_model.hparams.train_batchsize * world_size * data_model.trainer.accumulate_grad_batches
- print('calculate consumed samples: {}'.format(consumed_samples))
- return consumed_samples
-
-
-class UniversalDataModule(LightningDataModule):
-    @staticmethod
- def add_data_specific_args(parent_args):
- parser = parent_args.add_argument_group('Universal DataModule')
- parser.add_argument('--num_workers', default=8, type=int)
- parser.add_argument('--dataloader_workers', default=2, type=int)
- parser.add_argument('--train_batchsize', default=16, type=int)
- parser.add_argument('--val_batchsize', default=16, type=int)
- parser.add_argument('--test_batchsize', default=16, type=int)
- parser.add_argument('--datasets_name', type=str, default=None)
- parser.add_argument('--train_datasets_field', type=str, default='train')
- parser.add_argument('--val_datasets_field', type=str, default='validation')
- parser.add_argument('--test_datasets_field', type=str, default='test')
- parser.add_argument('--train_file', type=str, default=None)
- parser.add_argument('--val_file', type=str, default=None)
- parser.add_argument('--test_file', type=str, default=None)
- parser.add_argument('--raw_file_type', type=str, default='json')
- parser.add_argument('--sampler_type', type=str,
- choices=['single',
- 'random'],
- default='random')
- return parent_args
-
- def __init__(
- self,
- tokenizer,
- collate_fn,
- args,
- datasets=None,
- **kwargs,
- ):
- super().__init__()
-        # If no dataset name is passed in, self.datasets can be replaced from outside
-        # the object with whatever datasets the model needs
- if datasets is not None:
- self.datasets = datasets
- elif args.datasets_name is not None:
- from fengshen.data.fs_datasets import load_dataset
- print('---------begin to load datasets {}'.format(args.datasets_name))
- self.datasets = load_dataset(
- args.datasets_name, num_proc=args.num_workers)
- print('---------ending load datasets {}'.format(args.datasets_name))
- else:
- print('---------begin to load datasets from local file')
- from datasets import load_dataset
- self.datasets = load_dataset(args.raw_file_type,
- data_files={
- args.train_datasets_field: args.train_file,
- args.val_datasets_field: args.val_file,
- args.test_datasets_field: args.test_file})
- print('---------end to load datasets from local file')
-
- self.tokenizer = tokenizer
- self.collate_fn = collate_fn
- self.save_hyperparameters(args)
-
- def get_custom_sampler(self, ds):
- from .universal_sampler import PretrainingRandomSampler
- from .universal_sampler import PretrainingSampler
- world_size = self.trainer.world_size
- consumed_samples = get_consume_samples(self)
-        # build the sampler requested via --sampler_type
- if self.hparams.sampler_type == 'random':
- return PretrainingRandomSampler(
- total_samples=len(ds),
- # consumed_samples cal by global steps
- consumed_samples=consumed_samples,
- micro_batch_size=self.hparams.train_batchsize,
- data_parallel_rank=self.trainer.global_rank,
- data_parallel_size=world_size,
- epoch=self.trainer.current_epoch,
- )
- elif self.hparams.sampler_type == 'single':
- return PretrainingSampler(
- total_samples=len(ds),
- # consumed_samples cal by global steps
- consumed_samples=consumed_samples,
- micro_batch_size=self.hparams.train_batchsize,
- data_parallel_rank=self.trainer.global_rank,
- data_parallel_size=world_size,
- )
- else:
- raise Exception('Unknown sampler type: {}'.format(self.hparams.sampler_type))
-
- def setup(self, stage: Optional[str] = None) -> None:
- return
-
- def train_dataloader(self):
- ds = self.datasets[self.hparams.train_datasets_field]
-
- collate_fn = self.collate_fn
- if hasattr(ds, 'collate_fn'):
- collate_fn = ds.collate_fn
-
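-        # When Lightning's automatic DDP sampler injection is disabled (--replace_sampler_ddp False),
-        # supply the resume-aware batch sampler so restarted runs skip already-consumed samples.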
- if self.hparams.replace_sampler_ddp is False:
- return DataLoader(
- ds,
- batch_sampler=self.get_custom_sampler(ds),
- num_workers=self.hparams.dataloader_workers,
- collate_fn=collate_fn,
- pin_memory=True,
- )
- return DataLoader(
- ds,
- batch_size=self.hparams.train_batchsize,
- num_workers=self.hparams.dataloader_workers,
- collate_fn=collate_fn,
- pin_memory=True,
- )
-
- def val_dataloader(self):
- ds = self.datasets[self.hparams.val_datasets_field]
- collate_fn = self.collate_fn
- if hasattr(ds, 'collate_fn'):
- collate_fn = ds.collate_fn
-
- return DataLoader(
- ds,
- batch_size=self.hparams.val_batchsize,
- shuffle=False,
- num_workers=self.hparams.dataloader_workers,
- collate_fn=collate_fn,
- sampler=DistributedSampler(
- ds, shuffle=False),
- pin_memory=True,
- )
-
- # return DataLoader(
- # ds, shuffle=False, batch_size=self.hparams.val_batchsize, pin_memory=False, collate_fn=collate_fn,
- # )
-
- def test_dataloader(self):
- ds = self.datasets[self.hparams.test_datasets_field]
-
- collate_fn = self.collate_fn
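-        # some datasets expose a `collater` method instead of `collate_fn`;
-        # fall back to it when no collate_fn was provided explicitly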
- if collate_fn is None and hasattr(ds, 'collater'):
- collate_fn = ds.collater
-
- return DataLoader(
- ds,
- batch_size=self.hparams.test_batchsize,
- shuffle=False,
- num_workers=self.hparams.dataloader_workers,
- collate_fn=collate_fn,
- sampler=DistributedSampler(
- ds, shuffle=False),
- pin_memory=True,
- )
diff --git a/spaces/fclong/summary/fengshen/examples/pretrain_t5/pretrain_randeng_t5_char_10B.sh b/spaces/fclong/summary/fengshen/examples/pretrain_t5/pretrain_randeng_t5_char_10B.sh
deleted file mode 100644
index 6b85b4886dffc191c6d4856f66c2b3fd51817f69..0000000000000000000000000000000000000000
--- a/spaces/fclong/summary/fengshen/examples/pretrain_t5/pretrain_randeng_t5_char_10B.sh
+++ /dev/null
@@ -1,129 +0,0 @@
-#!/bin/bash
-#SBATCH --job-name=pretrain_randeng_t5_char_10B
-#SBATCH --nodes=4
-#SBATCH --ntasks-per-node=8
-#SBATCH --gres=gpu:8 # number of gpus
-#SBATCH --cpus-per-task=32 # cpu-cores per task (>1 if multi-threaded tasks)
-#SBATCH -o /cognitive_comp/ganruyi/experiments/randeng_t5_char_10B/%x-%j.log
-#SBATCH -e /cognitive_comp/ganruyi/experiments/randeng_t5_char_10B/%x-%j.err
-
-set -x -e
-
-echo "START TIME: $(date)"
-MICRO_BATCH_SIZE=1
-ROOT_DIR=/cognitive_comp/ganruyi/experiments/randeng_t5_char_10B/
-if [ ! -d ${ROOT_DIR} ];then
- mkdir ${ROOT_DIR}
- echo ${ROOT_DIR} created!!!!!!!!!!!!!!
-else
- echo ${ROOT_DIR} exist!!!!!!!!!!!!!!!
-fi
-
-ZERO_STAGE=2
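-# DeepSpeed ZeRO optimization stage, substituted into the generated ds_config and strategy name below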
-
-config_json="$ROOT_DIR/ds_config.randeng_t5_char_10B.$SLURM_JOBID.json"
-export MASTER_PORT=$[RANDOM%10000+30000]
-export CUDA_VISIBLE_DEVICES='1,2,3,4'
-
-cat <<EOT > $config_json
-{
- "train_micro_batch_size_per_gpu": ${MICRO_BATCH_SIZE},
- "steps_per_print": 100,
- "gradient_clipping": 1.0,
- "zero_optimization": {
- "stage": $ZERO_STAGE,
- "cpu_offload": true,
- "contiguous_gradients": false,
- "overlap_comm": true,
- "reduce_scatter": true,
- "reduce_bucket_size": 50000000,
- "allgather_bucket_size": 500000000
- },
- "optimizer": {
- "type": "Adam",
- "params": {
- "lr": 1e-4,
- "weight_decay": 1e-2
- }
- },
- "scheduler": {
- "params": {
- "warmup_max_lr": 1e-04,
- "warmup_min_lr": 1e-05,
- "total_num_steps": 100000,
- "warmup_num_steps" : 10000
- },
- "type": "WarmupDecayLR"
- },
- "zero_allow_untested_optimizer": false,
- "fp16": {
- "enabled": true,
- "loss_scale": 0,
- "loss_scale_window": 1000,
- "hysteresis": 2,
- "min_loss_scale": 1
- },
- "activation_checkpointing": {
- "partition_activations": false,
- "contiguous_memory_optimization": false
- },
- "wall_clock_breakdown": false
-}
-EOT
-
-export PL_DEEPSPEED_CONFIG_PATH=$config_json
-export TORCH_EXTENSIONS_DIR=/cognitive_comp/ganruyi/tmp/torch_extendsions
-# strategy=ddp
-strategy=deepspeed_stage_${ZERO_STAGE}
-
-TRAINER_ARGS="
- --max_epochs 1 \
- --gpus 4 \
- --num_nodes 1 \
- --strategy ${strategy} \
- --default_root_dir $ROOT_DIR \
- --dirpath $ROOT_DIR/ckpt \
- --save_top_k 3 \
- --every_n_train_steps 1000000 \
- --monitor train_loss \
- --mode min \
- --save_last \
- --val_check_interval 0.1 \
- --dataset_num_workers 4 \
- --dataloader_num_workers 4 \
- --replace_sampler_ddp False \
-"
-# --accumulate_grad_batches 8 \
-DATA_DIR=wudao_180g_bert_tokenized_512
-
-DATA_ARGS="
- --train_batchsize $MICRO_BATCH_SIZE \
- --valid_batchsize $MICRO_BATCH_SIZE \
- --train_data_path ${DATA_DIR} \
- --train_split_size 0.999 \
- --max_seq_length 512 \
-"
-
-MODEL_ARGS="
- --pretrained_model_path /cognitive_comp/ganruyi/experiments/randeng_t5_char_10B/randeng_t5_char_10B \
- --tokenizer_type bert_tokenizer \
-"
-
-SCRIPTS_PATH=/cognitive_comp/ganruyi/Fengshenbang-LM/fengshen/examples/pretrain_t5/pretrain_t5.py
-
-export CMD=" \
- $SCRIPTS_PATH \
- $TRAINER_ARGS \
- $MODEL_ARGS \
- $DATA_ARGS \
- "
-
-echo $CMD
-/home/ganruyi/anaconda3/bin/python $CMD
-# SINGULARITY_PATH=/cognitive_comp/ganruyi/pytorch21_06_py3_docker_image_v2.sif
-# srun singularity exec --nv -B /cognitive_comp/:/cognitive_comp/ $SINGULARITY_PATH bash -c '/home/ganruyi/anaconda3/bin/python $CMD'
-
-# source activate base
-# python $CMD
-# srun --nodes=1 --gres=gpu:8 --ntasks-per-node=8 --cpus-per-task=30 --jobid=171866 -e %x-%j.err -o %x-%j.log python $CMD
-
diff --git a/spaces/fclong/summary/fengshen/models/megatron_t5/modeling_megatron_t5.py b/spaces/fclong/summary/fengshen/models/megatron_t5/modeling_megatron_t5.py
deleted file mode 100644
index 82ad4fb8126b9a4c0b0bb7debed95b999b5cf097..0000000000000000000000000000000000000000
--- a/spaces/fclong/summary/fengshen/models/megatron_t5/modeling_megatron_t5.py
+++ /dev/null
@@ -1,2086 +0,0 @@
-# coding=utf-8
-# Copyright 2021 The IDEA Authors. All rights reserved.
-
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-
-# http://www.apache.org/licenses/LICENSE-2.0
-
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-""" PyTorch T5 model. """
-
-
-import copy
-import math
-import os
-import warnings
-
-import torch
-from torch import nn
-from torch.nn import CrossEntropyLoss
-from torch.utils.checkpoint import checkpoint
-
-from transformers.activations import ACT2FN
-from transformers.file_utils import (
- DUMMY_INPUTS,
- DUMMY_MASK,
- add_start_docstrings,
- add_start_docstrings_to_model_forward,
- is_torch_fx_proxy,
- replace_return_docstrings,
-)
-from transformers.modeling_outputs import (
- BaseModelOutput,
- BaseModelOutputWithPastAndCrossAttentions,
- Seq2SeqLMOutput,
- Seq2SeqModelOutput,
-)
-from transformers.modeling_utils import PreTrainedModel, find_pruneable_heads_and_indices, prune_linear_layer
-from transformers.utils import logging
-from transformers.utils.model_parallel_utils import assert_device_map, get_device_map
-from .configuration_megatron_t5 import T5Config
-import numpy as np
-
-logger = logging.get_logger(__name__)
-
-_CONFIG_FOR_DOC = "T5Config"
-_TOKENIZER_FOR_DOC = "T5Tokenizer"
-_CHECKPOINT_FOR_DOC = "T5-small"
-
-####################################################
-# This dict contains ids and associated url
-# for the pretrained weights provided with the models
-####################################################
-T5_PRETRAINED_MODEL_ARCHIVE_LIST = [
- "T5-small",
- "T5-base",
- "T5-large",
- "T5-3b",
- "T5-11b",
- # See all T5 models at https://huggingface.co/models?filter=T5
-]
-
-
-####################################################
-# This is a conversion method from TF 1.0 to PyTorch
-# More details: https://medium.com/huggingface/from-tensorflow-to-pytorch-265f40ef2a28
-####################################################
-
-def load_tf_weights_in_T5(model, config, tf_checkpoint_path):
- """Load tf checkpoints in a pytorch model."""
- try:
- import re
-
- import numpy as np
- import tensorflow as tf
- except ImportError:
- logger.error(
- "Loading a TensorFlow model in PyTorch, requires TensorFlow to be installed. Please see "
- "https://www.tensorflow.org/install/ for installation instructions."
- )
- raise
- tf_path = os.path.abspath(tf_checkpoint_path)
- logger.info(f"Converting TensorFlow checkpoint from {tf_path}")
- # Load weights from TF model
- init_vars = tf.train.list_variables(tf_path)
- names = []
- tf_weights = {}
- for name, shape in init_vars:
- logger.info(f"Loading TF weight {name} with shape {shape}")
- array = tf.train.load_variable(tf_path, name)
- names.append(name)
- tf_weights[name] = array
-
- for txt_name in names:
- name = txt_name.split("/")
- # adam_v and adam_m are variables used in AdamWeightDecayOptimizer to calculated m and v
- # which are not required for using pretrained model
- if any(
- n in ["adam_v", "adam_m", "AdamWeightDecayOptimizer",
- "AdamWeightDecayOptimizer_1", "global_step"]
- for n in name
- ):
- logger.info(f"Skipping {'/'.join(name)}")
- tf_weights.pop(txt_name, None)
- continue
- if "_slot_" in name[-1]:
- logger.info(f"Skipping {'/'.join(name)}")
- tf_weights.pop(txt_name, None)
- continue
- pointer = model
- array = tf_weights[txt_name]
-
- for m_name in name:
- if re.fullmatch(r"[A-Za-z]+_\d+", m_name):
- scope_names = re.split(r"_(\d+)", m_name)
- else:
- scope_names = [m_name]
- if scope_names[0] in ["kernel", "scale", "embedding"]:
- pointer = getattr(pointer, "weight")
- elif scope_names[0] == "self_attention":
- pointer = getattr(pointer, "layer")
- pointer = pointer[0]
- elif scope_names[0] == "enc_dec_attention":
- pointer = getattr(pointer, "layer")
- pointer = pointer[1]
- elif scope_names[0] == "dense_relu_dense":
- pointer = getattr(pointer, "layer")
- pointer = pointer[2]
- elif scope_names[0] == "rms_norm":
- if hasattr(pointer, "layer_norm"):
- pointer = getattr(pointer, "layer_norm")
- elif hasattr(pointer, "final_layer_norm"):
- pointer = getattr(pointer, "final_layer_norm")
- elif scope_names[0] == "scale":
- pointer = getattr(pointer, "weight")
- elif scope_names[0] == "output_bias" or scope_names[0] == "beta":
- pointer = getattr(pointer, "bias")
- elif scope_names[0] == "squad":
- pointer = getattr(pointer, "classifier")
- elif scope_names[0] == "decoder" and name[1] == "logits":
- continue
- elif scope_names[0] == "logits":
- pointer = getattr(pointer, "lm_head")
- elif scope_names[0] == "wi" and len(scope_names) > 1 and scope_names[1].isdigit():
- pointer = getattr(pointer, f"wi_{scope_names[1]}")
- continue
- else:
- try:
- pointer = getattr(pointer, scope_names[0])
- except AttributeError:
- logger.info(f"Skipping {'/'.join(name)}")
- continue
- if len(scope_names) >= 2:
- num = int(scope_names[1])
- pointer = pointer[num]
- if scope_names[0] not in ["kernel", "scale", "embedding"]:
- pointer = getattr(pointer, "weight")
- if scope_names[0] != "embedding":
- logger.info(
- f"Transposing numpy weight of shape {array.shape} for {name}")
- array = np.transpose(array)
- try:
- assert (
- pointer.shape == array.shape
- ), f"Pointer shape {pointer.shape} and array shape {array.shape} mismatched"
- except AssertionError as e:
- e.args += (pointer.shape, array.shape)
- raise
- logger.info(f"Initialize PyTorch weight {name}")
- pointer.data = torch.from_numpy(array.astype(np.float32))
- tf_weights.pop(txt_name, None)
-
- logger.info(
- f"Weights not copied to PyTorch model: {', '.join(tf_weights.keys())}.")
- return model
-
-
-####################################################
-# PyTorch Models are constructed by sub-classing
-# - torch.nn.Module for the layers and
-# - PreTrainedModel for the models (it-self a sub-class of nn.Module)
-####################################################
-PARALLELIZE_DOCSTRING = r"""
-    This is an experimental feature and is subject to change at a moment's notice.
-
- Uses a device map to distribute attention modules of the model across several devices. If no device map is given,
- it will evenly distribute blocks across all devices.
-
- Args:
- device_map (:obj:`Dict[int, list]`, optional, defaults to None):
- A dictionary that maps attention modules to devices. Note that the embedding module and LMHead are always
- automatically mapped to the first device (for esoteric reasons). That means that the first device should
- have fewer attention modules mapped to it than other devices. For reference, the T5 models have the
- following number of attention modules:
-
- - T5-small: 6
- - T5-base: 12
- - T5-large: 24
- - T5-3b: 24
- - T5-11b: 24
-
- Example::
-
- # Here is an example of a device map on a machine with 4 GPUs using T5-3b,
- # which has a total of 24 attention modules:
- model = T5ForConditionalGeneration.from_pretrained('T5-3b')
- device_map = {0: [0, 1, 2],
-
- 1: [3, 4, 5, 6, 7, 8, 9],
- 2: [10, 11, 12, 13, 14, 15, 16],
- 3: [17, 18, 19, 20, 21, 22, 23]}
- model.parallelize(device_map)
-"""
-DEPARALLELIZE_DOCSTRING = r"""
- Moves the model to cpu from a model parallel state.
-
- Example::
-
- # On a 4 GPU machine with T5-3b:
- model = T5ForConditionalGeneration.from_pretrained('T5-3b')
- device_map = {0: [0, 1, 2],
-
- 1: [3, 4, 5, 6, 7, 8, 9],
- 2: [10, 11, 12, 13, 14, 15, 16],
- 3: [17, 18, 19, 20, 21, 22, 23]}
- model.parallelize(device_map) # Splits the model across several devices
- model.deparallelize() # Put the model back on cpu and cleans memory by calling torch.cuda.empty_cache()
-"""
-
-
-class T5LayerNorm(nn.Module):
- def __init__(self, hidden_size, eps=1e-6):
- """
- Construct a layernorm module in the T5 style No bias and no subtraction of mean.
- """
- super().__init__()
- self.weight = nn.Parameter(torch.ones(hidden_size))
- self.variance_epsilon = eps
-
- def forward(self, hidden_states):
- # layer norm should always be calculated in float32
- variance = hidden_states.to(torch.float32).pow(
- 2).mean(-1, keepdim=True)
- hidden_states = hidden_states * \
- torch.rsqrt(variance + self.variance_epsilon)
-
- # convert into float16 if necessary
- if self.weight.dtype == torch.float16:
- hidden_states = hidden_states.to(torch.float16)
- return self.weight * hidden_states
-
-
-class T5DenseReluDense(nn.Module):
- def __init__(self, config):
- super().__init__()
- # @IDEA modified -> bias=False -> bias=True
- self.wi = nn.Linear(config.d_model, config.d_ff, bias=True)
- self.wo = nn.Linear(config.d_ff, config.d_model, bias=True)
- self.dropout = nn.Dropout(config.dropout_rate)
-
- def forward(self, hidden_states):
- hidden_states = self.wi(hidden_states)
- hidden_states = nn.functional.relu(hidden_states)
- hidden_states = self.dropout(hidden_states)
- hidden_states = self.wo(hidden_states)
- return hidden_states
-
-
-class T5DenseGeluDense(nn.Module):
- def __init__(self, config):
- super().__init__()
- # @IDEA modified -> bias=False -> bias=True
- self.wi = nn.Linear(config.d_model, config.d_ff, bias=True)
- self.wo = nn.Linear(config.d_ff, config.d_model, bias=True)
- self.dropout = nn.Dropout(config.dropout_rate)
-
- def forward(self, hidden_states):
- hidden_states = self.wi(hidden_states)
- hidden_states = nn.functional.gelu(hidden_states)
- hidden_states = self.dropout(hidden_states)
- hidden_states = self.wo(hidden_states)
- return hidden_states
-
-
-class T5DenseGatedGeluDense(nn.Module):
- def __init__(self, config):
- super().__init__()
- # @IDEA modified -> bias=False -> bias=True
- self.wi_0 = nn.Linear(config.d_model, config.d_ff, bias=True)
- self.wi_1 = nn.Linear(config.d_model, config.d_ff, bias=True)
- self.wo = nn.Linear(config.d_ff, config.d_model, bias=True)
- self.dropout = nn.Dropout(config.dropout_rate)
- self.gelu_act = ACT2FN["gelu_new"]
-
- def forward(self, hidden_states):
- hidden_gelu = self.gelu_act(self.wi_0(hidden_states))
- hidden_linear = self.wi_1(hidden_states)
- hidden_states = hidden_gelu * hidden_linear
- hidden_states = self.dropout(hidden_states)
- hidden_states = self.wo(hidden_states)
- return hidden_states
-
-
-class T5LayerFF(nn.Module):
- def __init__(self, config):
- super().__init__()
- # @IDEA modified -> T5LayerNorm -> nn.LayerNorm
- # self.layer_norm = T5LayerNorm(config.d_model, eps=config.layer_norm_epsilon)
- self.layer_norm = nn.LayerNorm(
- config.d_model, eps=config.layer_norm_epsilon)
- if config.feed_forward_proj == "relu":
- self.DenseReluDense = T5DenseReluDense(config)
- elif config.feed_forward_proj == "gelu":
- self.DenseReluDense = T5DenseGeluDense(config)
- else:
- raise ValueError(
-                f"{config.feed_forward_proj} is not supported. Choose between `relu` and `gelu`"
- )
- self.dropout = nn.Dropout(config.dropout_rate)
-
- def forward(self, hidden_states):
- forwarded_states = self.layer_norm(hidden_states)
- forwarded_states = self.DenseReluDense(forwarded_states)
- hidden_states = hidden_states + self.dropout(forwarded_states)
- return hidden_states
-
-
-class T5Attention(nn.Module):
- def __init__(self, config: T5Config, has_relative_attention_bias=False):
- super().__init__()
- self.is_decoder = config.is_decoder
- self.has_relative_attention_bias = has_relative_attention_bias
-
- self.relative_attention_num_buckets = config.relative_attention_num_buckets
- self.d_model = config.d_model
- self.key_value_proj_dim = config.d_kv
- self.n_heads = config.num_heads
- self.dropout = config.dropout_rate
- self.inner_dim = self.n_heads * self.key_value_proj_dim
-
- # Mesh TensorFlow initialization to avoid scaling before softmax
- # @IDEA modified -> bias=False -> bias=True
-
- self.q = nn.Linear(self.d_model, self.inner_dim, bias=True)
- self.k = nn.Linear(self.d_model, self.inner_dim, bias=True)
- self.v = nn.Linear(self.d_model, self.inner_dim, bias=True)
-
- self.o = nn.Linear(self.inner_dim, self.d_model, bias=True)
-
- if self.has_relative_attention_bias:
- self.relative_attention_bias = nn.Embedding(
- self.relative_attention_num_buckets, self.n_heads)
- self.pruned_heads = set()
- self.gradient_checkpointing = False
-
- def prune_heads(self, heads):
- if len(heads) == 0:
- return
- heads, index = find_pruneable_heads_and_indices(
- heads, self.n_heads, self.key_value_proj_dim, self.pruned_heads
- )
- # Prune linear layers
- self.q = prune_linear_layer(self.q, index)
- self.k = prune_linear_layer(self.k, index)
- self.v = prune_linear_layer(self.v, index)
-
- self.o = prune_linear_layer(self.o, index, dim=1)
- # Update hyper params
- self.n_heads = self.n_heads - len(heads)
- self.inner_dim = self.key_value_proj_dim * self.n_heads
- self.pruned_heads = self.pruned_heads.union(heads)
-
- @staticmethod
- def _relative_position_bucket(relative_position, bidirectional=True, num_buckets=32, max_distance=128):
- """
- Adapted from Mesh Tensorflow:
- https://github.com/tensorflow/mesh/blob/0cb87fe07da627bf0b7e60475d59f95ed6b5be3d/mesh_tensorflow/transformer/transformer_layers.py#L593
-
- Translate relative position to a bucket number for relative attention. The relative position is defined as
- memory_position - query_position, i.e. the distance in tokens from the attending position to the attended-to
- position. If bidirectional=False, then positive relative positions are invalid. We use smaller buckets for
- small absolute relative_position and larger buckets for larger absolute relative_positions. All relative
- positions >=max_distance map to the same bucket. All relative positions <=-max_distance map to the same bucket.
- This should allow for more graceful generalization to longer sequences than the model has been trained on
-
- Args:
- relative_position: an int32 Tensor
- bidirectional: a boolean - whether the attention is bidirectional
- num_buckets: an integer
- max_distance: an integer
-
- Returns:
- a Tensor with the same shape as relative_position, containing int32 values in the range [0, num_buckets)
- """
- relative_buckets = 0
- if bidirectional:
- num_buckets //= 2
- relative_buckets += (relative_position >
- 0).to(torch.long) * num_buckets
- relative_position = torch.abs(relative_position)
- else:
- relative_position = - \
- torch.min(relative_position,
- torch.zeros_like(relative_position))
- # now relative_position is in the range [0, inf)
-
- # half of the buckets are for exact increments in positions
- max_exact = num_buckets // 2
- is_small = relative_position < max_exact
-
- # The other half of the buckets are for logarithmically bigger bins in positions up to max_distance
- relative_postion_if_large = max_exact + (
- torch.log(relative_position.float() / max_exact)
- / math.log(max_distance / max_exact)
- * (num_buckets - max_exact)
- ).to(torch.long)
- relative_postion_if_large = torch.min(
- relative_postion_if_large, torch.full_like(
- relative_postion_if_large, num_buckets - 1)
- )
-
- relative_buckets += torch.where(is_small,
- relative_position, relative_postion_if_large)
- return relative_buckets
-
- def compute_bias(self, query_length, key_length):
- """Compute binned relative position bias"""
- context_position = torch.arange(
- query_length, dtype=torch.long, device=self.relative_attention_bias.weight.device
- )[:, None]
- memory_position = torch.arange(
- key_length, dtype=torch.long, device=self.relative_attention_bias.weight.device
- )[None, :]
- relative_position = memory_position - \
- context_position # shape (query_length, key_length)
- relative_position_bucket = self._relative_position_bucket(
- relative_position, # shape (query_length, key_length)
- bidirectional=(not self.is_decoder),
- num_buckets=self.relative_attention_num_buckets,
- )
- # shape (query_length, key_length, num_heads)
- values = self.relative_attention_bias(relative_position_bucket)
- # shape (1, num_heads, query_length, key_length)
- values = values.permute([2, 0, 1]).unsqueeze(0)
- return values
-
- def forward(
- self,
- hidden_states,
- mask=None,
- key_value_states=None,
- position_bias=None,
- past_key_value=None,
- layer_head_mask=None,
- query_length=None,
- use_cache=False,
- output_attentions=False,
- ):
- """
- Self-attention (if key_value_states is None) or attention over source sentence (provided by key_value_states).
- """
- # Input is (batch_size, seq_length, dim)
- # Mask is (batch_size, key_length) (non-causal) or (batch_size, key_length, key_length)
- # past_key_value[0] is (batch_size, n_heads, q_len - 1, dim_per_head)
- batch_size, seq_length = hidden_states.shape[:2]
-
- real_seq_length = seq_length
-
- if past_key_value is not None:
- assert (
- len(past_key_value) == 2
- ), f"past_key_value should have 2 past states: keys and values. Got { len(past_key_value)} past states"
- real_seq_length += past_key_value[0].shape[2] if query_length is None else query_length
-
- key_length = real_seq_length if key_value_states is None else key_value_states.shape[
- 1]
-
- def shape(states):
- """projection"""
- return states.view(batch_size, -1, self.n_heads, self.key_value_proj_dim).transpose(1, 2)
-
- def unshape(states):
- """reshape"""
- return states.transpose(1, 2).contiguous().view(batch_size, -1, self.inner_dim)
-
- def project(hidden_states, proj_layer, key_value_states, past_key_value):
- """projects hidden states correctly to key/query states"""
- if key_value_states is None:
- # self-attn
- # (batch_size, n_heads, seq_length, dim_per_head)
- hidden_states = shape(proj_layer(hidden_states))
- elif past_key_value is None:
- # cross-attn
- # (batch_size, n_heads, seq_length, dim_per_head)
- hidden_states = shape(proj_layer(key_value_states))
-
- if past_key_value is not None:
- if key_value_states is None:
- # self-attn
- # (batch_size, n_heads, key_length, dim_per_head)
- hidden_states = torch.cat(
- [past_key_value, hidden_states], dim=2)
- else:
- # cross-attn
- hidden_states = past_key_value
- return hidden_states
-
- # get query states
- # (batch_size, n_heads, seq_length, dim_per_head)
- query_states = shape(self.q(hidden_states))
-
- # get key/value states
- key_states = project(
- hidden_states, self.k, key_value_states, past_key_value[
- 0] if past_key_value is not None else None
- )
- value_states = project(
- hidden_states, self.v, key_value_states, past_key_value[
- 1] if past_key_value is not None else None
- )
-
- # compute scores
- scores = torch.matmul(
- query_states, key_states.transpose(3, 2)
- ) # equivalent of torch.einsum("bnqd,bnkd->bnqk", query_states, key_states), compatible with onnx op>9
-
- if position_bias is None:
- if not self.has_relative_attention_bias:
- position_bias = torch.zeros(
- (1, self.n_heads, real_seq_length, key_length), device=scores.device, dtype=scores.dtype
- )
- if self.gradient_checkpointing and self.training:
- position_bias.requires_grad = True
- else:
- position_bias = self.compute_bias(real_seq_length, key_length)
-
- # if key and values are already calculated
- # we want only the last query position bias
- if past_key_value is not None:
- position_bias = position_bias[:, :, -hidden_states.size(1):, :]
-
- if mask is not None:
- # (batch_size, n_heads, seq_length, key_length)
- position_bias = position_bias + mask
-
- # @IDEA modified -> delete scores += position_bias, use absolute positional
- # scores += position_bias
- scores = scores / math.sqrt(self.key_value_proj_dim)
-
- if mask is not None:
- scores = scores + mask
-
- attn_weights = nn.functional.softmax(scores.float(), dim=-1).type_as(
- scores
- ) # (batch_size, n_heads, seq_length, key_length)
-
- attn_weights = nn.functional.dropout(
- attn_weights, p=0, training=self.training
- ) # (batch_size, n_heads, seq_length, key_length)
-
- # Mask heads if we want to
- if layer_head_mask is not None:
- attn_weights = attn_weights * layer_head_mask
-
- # (batch_size, seq_length, dim)
- attn_output = unshape(torch.matmul(attn_weights, value_states))
-
- attn_output = self.o(attn_output)
-
- present_key_value_state = (key_states, value_states) if (
- self.is_decoder and use_cache) else None
- outputs = (attn_output,) + \
- (present_key_value_state,) + (position_bias,)
-
- if output_attentions:
- outputs = outputs + (attn_weights,)
- return outputs
-
-
-class T5LayerSelfAttention(nn.Module):
- def __init__(self, config, has_relative_attention_bias=False):
- super().__init__()
-
- # @IDEA modified -> T5LayerNorm -> nn.LayerNorm
- # self.layer_norm = T5LayerNorm(config.d_model, eps=config.layer_norm_epsilon)
- self.layer_norm = nn.LayerNorm(
- config.d_model, eps=config.layer_norm_epsilon)
- self.SelfAttention = T5Attention(
- config, has_relative_attention_bias=has_relative_attention_bias)
-
- self.dropout = nn.Dropout(config.dropout_rate)
-
- def forward(
- self,
- hidden_states,
- attention_mask=None,
- position_bias=None,
- layer_head_mask=None,
- past_key_value=None,
- use_cache=False,
- output_attentions=False,
- ):
- normed_hidden_states = self.layer_norm(hidden_states)
- attention_output = self.SelfAttention(
- normed_hidden_states,
- mask=attention_mask,
- position_bias=position_bias,
- layer_head_mask=layer_head_mask,
- past_key_value=past_key_value,
- use_cache=use_cache,
- output_attentions=output_attentions,
- )
-
- hidden_states = hidden_states + self.dropout(attention_output[0])
- # add attentions if we output them
- outputs = (hidden_states,) + attention_output[1:]
- return outputs
-
-
-class T5LayerCrossAttention(nn.Module):
- def __init__(self, config):
- super().__init__()
- # @IDEA modified -> T5LayerNorm -> nn.LayerNorm
- # self.layer_norm = T5LayerNorm(config.d_model, eps=config.layer_norm_epsilon)
- self.layer_norm = nn.LayerNorm(
- config.d_model, eps=config.layer_norm_epsilon)
-
- self.EncDecAttention = T5Attention(
- config, has_relative_attention_bias=False)
-
- self.dropout = nn.Dropout(config.dropout_rate)
-
- def forward(
- self,
- hidden_states,
- key_value_states,
- attention_mask=None,
- position_bias=None,
- layer_head_mask=None,
- past_key_value=None,
- use_cache=False,
- query_length=None,
- output_attentions=False,
- ):
- normed_hidden_states = self.layer_norm(hidden_states)
- attention_output = self.EncDecAttention(
- normed_hidden_states,
- mask=attention_mask,
- key_value_states=key_value_states,
- position_bias=position_bias,
- layer_head_mask=layer_head_mask,
- past_key_value=past_key_value,
- use_cache=use_cache,
- query_length=query_length,
- output_attentions=output_attentions,
- )
- layer_output = hidden_states + self.dropout(attention_output[0])
- # add attentions if we output them
- outputs = (layer_output,) + attention_output[1:]
- return outputs
-
-
-class T5Block(nn.Module):
- def __init__(self, config, has_relative_attention_bias=False):
- super().__init__()
- self.is_decoder = config.is_decoder
- # @IDEA modified ->
- # self.layer = nn.ModuleList()
- # self.layer.append(T5LayerSelfAttention(config, has_relative_attention_bias=has_relative_attention_bias))
- # if self.is_decoder:
- # self.layer.append(T5LayerCrossAttention(config))
-
- # self.layer.append(T5LayerFF(config))
-
- self.T5LayerSelfAttention = T5LayerSelfAttention(
- config, has_relative_attention_bias=has_relative_attention_bias)
- if self.is_decoder:
- self.T5LayerCrossAttention = T5LayerCrossAttention(
- config)
- self.T5LayerFF = T5LayerFF(config)
-
- def forward(
- self,
- hidden_states,
- attention_mask=None,
- position_bias=None,
- encoder_hidden_states=None,
- encoder_attention_mask=None,
- encoder_decoder_position_bias=None,
- layer_head_mask=None,
- cross_attn_layer_head_mask=None,
- past_key_value=None,
- use_cache=False,
- output_attentions=False,
- return_dict=True,
- ):
-
- if past_key_value is not None:
- assert self.is_decoder, "Only decoder can use `past_key_values`"
- expected_num_past_key_values = 2 if encoder_hidden_states is None else 4
-
- if len(past_key_value) != expected_num_past_key_values:
- raise ValueError(
- f"There should be {expected_num_past_key_values} past states. "
- f"{'2 (past / key) for cross attention. ' if expected_num_past_key_values == 4 else ''}"
- f"Got {len(past_key_value)} past key / value states"
- )
-
- self_attn_past_key_value = past_key_value[:2]
- cross_attn_past_key_value = past_key_value[2:]
- else:
- self_attn_past_key_value, cross_attn_past_key_value = None, None
-
- # @IDEA modified -> self.layer[0] -> self.T5LayerSelfAttention
- self_attention_outputs = self.T5LayerSelfAttention(
- hidden_states,
- attention_mask=attention_mask,
- position_bias=position_bias,
- layer_head_mask=layer_head_mask,
- past_key_value=self_attn_past_key_value,
- use_cache=use_cache,
- output_attentions=output_attentions,
- )
- hidden_states, present_key_value_state = self_attention_outputs[:2]
- # Keep self-attention outputs and relative position weights
- attention_outputs = self_attention_outputs[2:]
-
- # clamp inf values to enable fp16 training
- if hidden_states.dtype == torch.float16 and torch.isinf(hidden_states).any():
- clamp_value = torch.finfo(hidden_states.dtype).max - 1000
- hidden_states = torch.clamp(
- hidden_states, min=-clamp_value, max=clamp_value)
-
- do_cross_attention = self.is_decoder and encoder_hidden_states is not None
- if do_cross_attention:
- # the actual query length is unknown for cross attention
- # if using past key value states. Need to inject it here
- if present_key_value_state is not None:
- query_length = present_key_value_state[0].shape[2]
- else:
- query_length = None
- # @IDEA modified -> self.layer[1] -> self.T5LayerCrossAttention
- cross_attention_outputs = self.T5LayerCrossAttention(
- hidden_states,
- key_value_states=encoder_hidden_states,
- attention_mask=encoder_attention_mask,
- position_bias=encoder_decoder_position_bias,
- layer_head_mask=cross_attn_layer_head_mask,
- past_key_value=cross_attn_past_key_value,
- query_length=query_length,
- use_cache=use_cache,
- output_attentions=output_attentions,
- )
- hidden_states = cross_attention_outputs[0]
-
- # clamp inf values to enable fp16 training
- if hidden_states.dtype == torch.float16 and torch.isinf(hidden_states).any():
- clamp_value = torch.finfo(hidden_states.dtype).max - 1000
- hidden_states = torch.clamp(
- hidden_states, min=-clamp_value, max=clamp_value)
-
- # Combine self attn and cross attn key value states
- if present_key_value_state is not None:
- present_key_value_state = present_key_value_state + \
- cross_attention_outputs[1]
-
- # Keep cross-attention outputs and relative position weights
- attention_outputs = attention_outputs + cross_attention_outputs[2:]
-
- # Apply Feed Forward layer
- # @IDEA modified -> self.layer[-1] -> self.T5LayerFF
- hidden_states = self.T5LayerFF(hidden_states)
-
- # clamp inf values to enable fp16 training
- if hidden_states.dtype == torch.float16 and torch.isinf(hidden_states).any():
- clamp_value = torch.finfo(hidden_states.dtype).max - 1000
- hidden_states = torch.clamp(
- hidden_states, min=-clamp_value, max=clamp_value)
-
- outputs = (hidden_states,)
-
- if use_cache:
- outputs = outputs + (present_key_value_state,) + attention_outputs
- else:
- outputs = outputs + attention_outputs
-
- # hidden-states, present_key_value_states, (self-attention position bias),
- # (self-attention weights), (cross-attention position bias), (cross-attention weights)
- return outputs
-
-
-class T5PreTrainedModel(PreTrainedModel):
- """
- An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained
- models.
- """
-
- config_class = T5Config
- load_tf_weights = load_tf_weights_in_T5
- base_model_prefix = "transformer"
- is_parallelizable = True
- supports_gradient_checkpointing = True
-
- @property
- def dummy_inputs(self):
- input_ids = torch.tensor(DUMMY_INPUTS)
- input_mask = torch.tensor(DUMMY_MASK)
- dummy_inputs = {
- "decoder_input_ids": input_ids,
- "input_ids": input_ids,
- "decoder_attention_mask": input_mask,
- }
- return dummy_inputs
-
- def _init_weights(self, module):
- """Initialize the weights"""
- factor = self.config.initializer_factor # Used for testing weights initialization
- if isinstance(module, T5LayerNorm):
- module.weight.data.fill_(factor * 1.0)
- elif isinstance(module, (T5Model, T5ForConditionalGeneration, T5EncoderModel)):
- # Mesh TensorFlow embeddings initialization
- # See https://github.com/tensorflow/mesh/blob/fa19d69eafc9a482aff0b59ddd96b025c0cb207d
- # /mesh_tensorflow/layers.py#L1624
- # @IDEA modified -> module.shared.weight -> module.shared.word_embeddings.weight
- # module.shared.weight.data.normal_(mean=0.0, std=factor * 1.0)
- module.shared.word_embeddings.weight.data.normal_(
- mean=0.0, std=factor * 1.0)
- module.shared.position_embeddings.weight.data.normal_(
- mean=0.0, std=factor * 1.0)
-        elif isinstance(module, (T5DenseReluDense, T5DenseGeluDense)):
- # Mesh TensorFlow FF initialization
- # See https://github.com/tensorflow/mesh/blob/master/mesh_tensorflow
- # /transformer/transformer_layers.py#L56
- # and https://github.com/tensorflow/mesh/blob/fa19d69eafc9a482aff0b59ddd96b025c0cb207d/
- # mesh_tensorflow/layers.py#L89
- module.wi.weight.data.normal_(
- mean=0.0, std=factor * ((self.config.d_model) ** -0.5))
- if hasattr(module.wi, "bias") and module.wi.bias is not None:
- module.wi.bias.data.zero_()
- module.wo.weight.data.normal_(
- mean=0.0, std=factor * ((self.config.d_ff) ** -0.5))
- if hasattr(module.wo, "bias") and module.wo.bias is not None:
- module.wo.bias.data.zero_()
-        elif isinstance(module, T5DenseGatedGeluDense):
- module.wi_0.weight.data.normal_(
- mean=0.0, std=factor * ((self.config.d_model) ** -0.5))
- if hasattr(module.wi_0, "bias") and module.wi_0.bias is not None:
- module.wi_0.bias.data.zero_()
- module.wi_1.weight.data.normal_(
- mean=0.0, std=factor * ((self.config.d_model) ** -0.5))
-            if hasattr(module.wi_1, "bias") and module.wi_1.bias is not None:
-                module.wi_1.bias.data.zero_()
- module.wo.weight.data.normal_(
- mean=0.0, std=factor * ((self.config.d_ff) ** -0.5))
- if hasattr(module.wo, "bias") and module.wo.bias is not None:
- module.wo.bias.data.zero_()
- elif isinstance(module, T5Attention):
- # Mesh TensorFlow attention initialization to avoid scaling before softmax
- # See https://github.com/tensorflow/mesh/blob/fa19d69eafc9a482aff0b59ddd96b025c0cb207d
- # /mesh_tensorflow/transformer/attention.py#L136
- d_model = self.config.d_model
- key_value_proj_dim = self.config.d_kv
- n_heads = self.config.num_heads
- module.q.weight.data.normal_(
- mean=0.0, std=factor * ((d_model * key_value_proj_dim) ** -0.5))
- module.k.weight.data.normal_(
- mean=0.0, std=factor * (d_model ** -0.5))
- module.v.weight.data.normal_(
- mean=0.0, std=factor * (d_model ** -0.5))
-
- module.o.weight.data.normal_(
- mean=0.0, std=factor * ((n_heads * key_value_proj_dim) ** -0.5))
- if module.has_relative_attention_bias:
- module.relative_attention_bias.weight.data.normal_(
- mean=0.0, std=factor * ((d_model) ** -0.5))
-
- def _set_gradient_checkpointing(self, module, value=False):
- if isinstance(module, (T5Attention, T5Stack)):
- module.gradient_checkpointing = value
-
- def _shift_right(self, input_ids):
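-        # Teacher forcing: build decoder inputs by prepending decoder_start_token_id,
-        # dropping the last token, and mapping -100 label padding back to pad_token_id.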
- decoder_start_token_id = self.config.decoder_start_token_id
- pad_token_id = self.config.pad_token_id
-
- assert (
- decoder_start_token_id is not None
- ), "self.model.config.decoder_start_token_id has to be defined. "\
- "In T5 it is usually set to the pad_token_id. See T5 docs for more information"
-
- # shift inputs to the right
- if is_torch_fx_proxy(input_ids):
- # Item assignment is not supported natively for proxies.
- shifted_input_ids = torch.full(
- input_ids.shape[:-1] + (1,), decoder_start_token_id)
- shifted_input_ids = torch.cat(
- [shifted_input_ids, input_ids[..., :-1]], dim=-1)
- else:
- shifted_input_ids = input_ids.new_zeros(input_ids.shape)
- shifted_input_ids[..., 1:] = input_ids[..., :-1].clone()
- shifted_input_ids[..., 0] = decoder_start_token_id
-
- assert pad_token_id is not None, "self.model.config.pad_token_id has to be defined."
- # replace possible -100 values in labels by `pad_token_id`
- shifted_input_ids.masked_fill_(shifted_input_ids == -100, pad_token_id)
-
- assert torch.all(shifted_input_ids >= 0).item(
- ), "Verify that `shifted_input_ids` has only positive values"
-
- return shifted_input_ids
-
-
-class T5Embeddings(nn.Module):
- """Construct the embeddings from word, position and token_type embeddings."""
-
- def __init__(self, config):
- super().__init__()
- self.word_embeddings = nn.Embedding(
- config.vocab_size, config.hidden_size, padding_idx=config.pad_token_id)
- self.position_embeddings = nn.Embedding(
- config.max_position_embeddings, config.hidden_size)
-
- # self.LayerNorm is not snake-cased to stick with TensorFlow model variable name and be able to load
- # any TensorFlow checkpoint file
-
- # In Megatron, layer-norm is applied after the 1st dropout.
- # self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
- self.dropout = nn.Dropout(config.dropout_rate)
-
- # position_ids (1, len position emb) is contiguous in memory and exported when serialized
- self.register_buffer("position_ids", torch.arange(
- config.max_position_embeddings).expand((1, -1)))
- self.position_embedding_type = getattr(
- config, "position_embedding_type", "absolute")
-
- def forward(
- self, input_ids=None, token_type_ids=None, position_ids=None, inputs_embeds=None, past_key_values_length=0
- ):
- if input_ids is not None:
- input_shape = input_ids.size()
- else:
- input_shape = inputs_embeds.size()[:-1]
-
- seq_length = input_shape[1]
-
- if position_ids is None:
- position_ids = self.position_ids[:,
- past_key_values_length: seq_length + past_key_values_length]
-
- if inputs_embeds is None:
- inputs_embeds = self.word_embeddings(input_ids)
-
- embeddings = inputs_embeds
- if self.position_embedding_type == "absolute":
- position_embeddings = self.position_embeddings(position_ids)
- embeddings += position_embeddings
-
- # Megatron BERT moves that layer norm after the drop-out (and to each layer).
- # embeddings = self.LayerNorm(embeddings)
- embeddings = self.dropout(embeddings)
- return embeddings
-
-
-class T5Stack(T5PreTrainedModel):
- def __init__(self, config, embed_tokens=None):
- super().__init__(config)
-
- self.embed_tokens = embed_tokens
- self.is_decoder = config.is_decoder
-
- # @IDEA modified -> has_relative_attention_bias=bool(i == 0)) for i in range(config.num_layers)
- # -> has_relative_attention_bias=False
- self.block = nn.ModuleList(
- [T5Block(config, has_relative_attention_bias=False)
- for _ in range(config.num_layers)]
- )
- # @IDEA modified -> T5LayerNorm -> nn.LayerNorm
- # self.final_layer_norm = T5LayerNorm(config.d_model, eps=config.layer_norm_epsilon)
- self.final_layer_norm = nn.LayerNorm(
- config.d_model, eps=config.layer_norm_epsilon)
-
- self.dropout = nn.Dropout(config.dropout_rate)
-
- self.init_weights()
- # Model parallel
- self.model_parallel = False
- self.device_map = None
- self.gradient_checkpointing = False
-
- @add_start_docstrings(PARALLELIZE_DOCSTRING)
- def parallelize(self, device_map=None):
- # Check validity of device_map
- self.device_map = (
- get_device_map(len(self.block), range(
- torch.cuda.device_count())) if device_map is None else device_map
- )
- assert_device_map(self.device_map, len(self.block))
- self.model_parallel = True
- self.first_device = "cpu" if "cpu" in self.device_map.keys() else "cuda:" + \
- str(min(self.device_map.keys()))
- self.last_device = "cuda:" + str(max(self.device_map.keys()))
- # Load onto devices
- for k, v in self.device_map.items():
- for layer in v:
- cuda_device = "cuda:" + str(k)
- self.block[layer] = self.block[layer].to(cuda_device)
-
- # Set embed_tokens to first layer
-
- self.embed_tokens = self.embed_tokens.to(self.first_device)
- # self.embeddings = self.embeddings.to(self.first_device)  # disabled: T5Stack has no `embeddings` attribute
- # Set final layer norm to last device
- self.final_layer_norm = self.final_layer_norm.to(self.last_device)
-
- @add_start_docstrings(PARALLELIZE_DOCSTRING)
- def deparallelize(self):
- self.model_parallel = False
- self.device_map = None
- self.first_device = "cpu"
- self.last_device = "cpu"
- for i in range(len(self.block)):
- self.block[i] = self.block[i].to("cpu")
- self.embed_tokens = self.embed_tokens.to("cpu")
- self.final_layer_norm = self.final_layer_norm.to("cpu")
- torch.cuda.empty_cache()
-
- def get_input_embeddings(self):
- return self.embed_tokens
-
- def set_input_embeddings(self, new_embeddings):
- self.embed_tokens = new_embeddings
-
- def forward(
- self,
- input_ids=None,
- position_ids=None,
- attention_mask=None,
- encoder_hidden_states=None,
- encoder_attention_mask=None,
- inputs_embeds=None,
- head_mask=None,
- cross_attn_head_mask=None,
- past_key_values=None,
- use_cache=None,
- output_attentions=None,
- output_hidden_states=None,
- return_dict=None,
- ):
- # Model parallel
- if self.model_parallel:
- torch.cuda.set_device(self.first_device)
- self.embed_tokens = self.embed_tokens.to(self.first_device)
- use_cache = use_cache if use_cache is not None else self.config.use_cache
- output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
- output_hidden_states = (
- output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
- )
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- if input_ids is not None and inputs_embeds is not None:
- err_msg_prefix = "decoder_" if self.is_decoder else ""
- raise ValueError(
- f"You cannot specify both {err_msg_prefix}input_ids and {err_msg_prefix}inputs_embeds at the same time"
- )
- elif input_ids is not None:
- input_shape = input_ids.size()
- input_ids = input_ids.view(-1, input_shape[-1])
- elif inputs_embeds is not None:
- input_shape = inputs_embeds.size()[:-1]
- else:
- err_msg_prefix = "decoder_" if self.is_decoder else ""
- raise ValueError(
- f"You have to specify either {err_msg_prefix}input_ids or {err_msg_prefix}inputs_embeds")
-
- if inputs_embeds is None:
- assert self.embed_tokens is not None, "You have to initialize the model with valid token embeddings"
- # @IDEA modified -> self.embed_tokens(input_ids=input_ids) ->
- # self.embed_tokens(input_ids=input_ids, position_ids=position_ids)
- # inputs_embeds = self.embed_tokens(input_ids=input_ids)
- inputs_embeds = self.embed_tokens(input_ids=input_ids)
-
- batch_size, seq_length = input_shape
-
- # required mask seq length can be calculated via length of past
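- # e.g. with a cached past of length P and a new chunk of length S, the attention
- # mask built below must cover all P + S key positions.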
- mask_seq_length = past_key_values[0][0].shape[2] + \
- seq_length if past_key_values is not None else seq_length
-
- if use_cache is True:
- assert self.is_decoder, f":obj:`use_cache` can only be set to `True` if {self} is used as a decoder"
-
- if attention_mask is None:
- attention_mask = torch.ones(
- batch_size, mask_seq_length).to(inputs_embeds.device)
- if self.is_decoder and encoder_attention_mask is None and encoder_hidden_states is not None:
- encoder_seq_length = encoder_hidden_states.shape[1]
- encoder_attention_mask = torch.ones(
- batch_size, encoder_seq_length, device=inputs_embeds.device, dtype=torch.long
- )
-
- # initialize past_key_values with `None` if past does not exist
- if past_key_values is None:
- past_key_values = [None] * len(self.block)
-
- # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length]
- # ourselves in which case we just need to make it broadcastable to all heads.
- extended_attention_mask = self.get_extended_attention_mask(
- attention_mask, input_shape, inputs_embeds.device)
-
- # If a 2D or 3D attention mask is provided for the cross-attention
- # we need to make broadcastable to [batch_size, num_heads, seq_length, seq_length]
- if self.is_decoder and encoder_hidden_states is not None:
- encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states.size()
- encoder_hidden_shape = (
- encoder_batch_size, encoder_sequence_length)
- if encoder_attention_mask is None:
- encoder_attention_mask = torch.ones(
- encoder_hidden_shape, device=inputs_embeds.device)
- encoder_extended_attention_mask = self.invert_attention_mask(
- encoder_attention_mask)
- else:
- encoder_extended_attention_mask = None
-
- # Prepare head mask if needed
- head_mask = self.get_head_mask(head_mask, self.config.num_layers)
- cross_attn_head_mask = self.get_head_mask(
- cross_attn_head_mask, self.config.num_layers)
- present_key_value_states = () if use_cache else None
- all_hidden_states = () if output_hidden_states else None
- all_attentions = () if output_attentions else None
- all_cross_attentions = () if (output_attentions and self.is_decoder) else None
- position_bias = None
- encoder_decoder_position_bias = None
-
- hidden_states = self.dropout(inputs_embeds)
-
- for i, (layer_module, past_key_value) in enumerate(zip(self.block, past_key_values)):
-
- layer_head_mask = head_mask[i]
- cross_attn_layer_head_mask = cross_attn_head_mask[i]
- # Model parallel
- if self.model_parallel:
- torch.cuda.set_device(hidden_states.device)
- # Ensure that attention_mask is always on the same device as hidden_states
- if attention_mask is not None:
- attention_mask = attention_mask.to(hidden_states.device)
- if position_bias is not None:
- position_bias = position_bias.to(hidden_states.device)
- if encoder_hidden_states is not None:
- encoder_hidden_states = encoder_hidden_states.to(
- hidden_states.device)
- if encoder_extended_attention_mask is not None:
- encoder_extended_attention_mask = encoder_extended_attention_mask.to(
- hidden_states.device)
- if encoder_decoder_position_bias is not None:
- encoder_decoder_position_bias = encoder_decoder_position_bias.to(
- hidden_states.device)
- if layer_head_mask is not None:
- layer_head_mask = layer_head_mask.to(hidden_states.device)
- if cross_attn_layer_head_mask is not None:
- cross_attn_layer_head_mask = cross_attn_layer_head_mask.to(
- hidden_states.device)
- if output_hidden_states:
- all_hidden_states = all_hidden_states + (hidden_states,)
-
- if self.gradient_checkpointing and self.training:
- if use_cache:
- logger.warn(
- "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..."
- )
- use_cache = False
-
- def create_custom_forward(module):
- def custom_forward(*inputs):
- return tuple(module(*inputs, use_cache, output_attentions))
-
- return custom_forward
-
- layer_outputs = checkpoint(
- create_custom_forward(layer_module),
- hidden_states,
- extended_attention_mask,
- position_bias,
- encoder_hidden_states,
- encoder_extended_attention_mask,
- encoder_decoder_position_bias,
- layer_head_mask,
- cross_attn_layer_head_mask,
- None, # past_key_value is always None with gradient checkpointing
- )
- else:
- layer_outputs = layer_module(
- hidden_states,
- attention_mask=extended_attention_mask,
- position_bias=position_bias,
- encoder_hidden_states=encoder_hidden_states,
- encoder_attention_mask=encoder_extended_attention_mask,
- encoder_decoder_position_bias=encoder_decoder_position_bias,
- layer_head_mask=layer_head_mask,
- cross_attn_layer_head_mask=cross_attn_layer_head_mask,
- past_key_value=past_key_value,
- use_cache=use_cache,
- output_attentions=output_attentions,
- )
-
- # layer_outputs is a tuple with:
- # hidden-states, key-value-states, (self-attention position bias), (self-attention weights),
- # (cross-attention position bias), (cross-attention weights)
- if use_cache is False:
- layer_outputs = layer_outputs[:1] + (None,) + layer_outputs[1:]
-
- hidden_states, present_key_value_state = layer_outputs[:2]
-
- # We share the position biases between the layers - the first layer stores them
- # layer_outputs = hidden-states, key-value-states (self-attention position bias), (self-attention weights),
- # (cross-attention position bias), (cross-attention weights)
- position_bias = layer_outputs[2]
- if self.is_decoder and encoder_hidden_states is not None:
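- # when output_attentions is True the self-attention weights occupy index 3,
- # pushing the shared cross-attention position bias to index 4.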
- encoder_decoder_position_bias = layer_outputs[4 if output_attentions else 3]
- # append next layer key value states
- if use_cache:
- present_key_value_states = present_key_value_states + \
- (present_key_value_state,)
-
- if output_attentions:
- all_attentions = all_attentions + (layer_outputs[3],)
- if self.is_decoder:
- all_cross_attentions = all_cross_attentions + \
- (layer_outputs[5],)
-
- # Model Parallel: If it's the last layer for that device, put things on the next device
- if self.model_parallel:
- for k, v in self.device_map.items():
- if i == v[-1] and "cuda:" + str(k) != self.last_device:
- hidden_states = hidden_states.to("cuda:" + str(k + 1))
-
- hidden_states = self.final_layer_norm(hidden_states)
- hidden_states = self.dropout(hidden_states)
-
- # Add last layer
- if output_hidden_states:
- all_hidden_states = all_hidden_states + (hidden_states,)
-
- if not return_dict:
- return tuple(
- v
- for v in [
- hidden_states,
- present_key_value_states,
- all_hidden_states,
- all_attentions,
- all_cross_attentions,
- ]
- if v is not None
- )
- return BaseModelOutputWithPastAndCrossAttentions(
- last_hidden_state=hidden_states,
- past_key_values=present_key_value_states,
- hidden_states=all_hidden_states,
- attentions=all_attentions,
- cross_attentions=all_cross_attentions,
- )
-
-
-T5_START_DOCSTRING = r"""
-
- The T5 model was proposed in `Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
- <https://arxiv.org/abs/1910.10683>`__ by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang,
- Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. It's an encoder-decoder transformer pre-trained in a text-to-text
- denoising generative setting.
-
- This model inherits from :class:`~transformers.PreTrainedModel`. Check the superclass documentation for the generic
- methods the library implements for all its models (such as downloading or saving, resizing the input embeddings,
- pruning heads etc.).
-
- This model is also a PyTorch `torch.nn.Module <https://pytorch.org/docs/stable/nn.html#torch.nn.Module>`__
- subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to
- general usage and behavior.
-
- Parameters:
- config (:class:`~transformers.T5Config`): Model configuration class with all the parameters of the model.
- Initializing with a config file does not load the weights associated with the model, only the
- configuration. Check out the :meth:`~transformers.PreTrainedModel.from_pretrained` method to load the model
- weights.
-"""
-
-T5_INPUTS_DOCSTRING = """
- Args:
- input_ids (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`):
- Indices of input sequence tokens in the vocabulary. T5 is a model with relative position embeddings so you
- should be able to pad the inputs on both the right and the left.
-
- Indices can be obtained using :class:`~transformers.T5Tokenizer`. See
- :meth:`transformers.PreTrainedTokenizer.encode` and :meth:`transformers.PreTrainedTokenizer.__call__` for
- details.
-
- `What are input IDs? <../glossary.html#input-ids>`__
-
- To know more on how to prepare :obj:`input_ids` for pretraining, take a look at `T5 Training
- <./T5.html#training>`__.
- attention_mask (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):
- Mask to avoid performing attention on padding token indices. Mask values selected in ``[0, 1]``:
-
- - 1 for tokens that are **not masked**,
- - 0 for tokens that are **masked**.
-
- `What are attention masks? <../glossary.html#attention-mask>`__
- decoder_input_ids (:obj:`torch.LongTensor` of shape :obj:`(batch_size, target_sequence_length)`, `optional`):
- Indices of decoder input sequence tokens in the vocabulary.
-
- Indices can be obtained using :class:`~transformers.T5Tokenizer`. See
- :meth:`transformers.PreTrainedTokenizer.encode` and :meth:`transformers.PreTrainedTokenizer.__call__` for
- details.
-
- `What are decoder input IDs? <../glossary.html#decoder-input-ids>`__
-
- T5 uses the :obj:`pad_token_id` as the starting token for :obj:`decoder_input_ids` generation. If
- :obj:`past_key_values` is used, optionally only the last :obj:`decoder_input_ids` have to be input (see
- :obj:`past_key_values`).
-
- To know more on how to prepare :obj:`decoder_input_ids` for pretraining take a look at `T5 Training
- <./T5.html#training>`__.
- decoder_attention_mask (:obj:`torch.BoolTensor` of shape :obj:`(batch_size, target_sequence_length)`, `optional`):
- Default behavior: generate a tensor that ignores pad tokens in :obj:`decoder_input_ids`. Causal mask will
- also be used by default.
- head_mask (:obj:`torch.FloatTensor` of shape :obj:`(num_heads,)` or :obj:`(num_layers, num_heads)`, `optional`):
- Mask to nullify selected heads of the self-attention modules in the encoder. Mask values selected in ``[0,
- 1]``:
-
- - 1 indicates the head is **not masked**,
- - 0 indicates the head is **masked**.
-
- decoder_head_mask (:obj:`torch.FloatTensor` of shape :obj:`(num_heads,)` or :obj:`(num_layers, num_heads)`, `optional`):
- Mask to nullify selected heads of the self-attention modules in the decoder. Mask values selected in ``[0,
- 1]``:
-
- - 1 indicates the head is **not masked**,
- - 0 indicates the head is **masked**.
-
- cross_attn_head_mask (:obj:`torch.Tensor` of shape :obj:`(num_heads,)` or :obj:`(num_layers, num_heads)`, `optional`):
- Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in
- ``[0, 1]``:
-
- - 1 indicates the head is **not masked**,
- - 0 indicates the head is **masked**.
-
- encoder_outputs (:obj:`tuple(tuple(torch.FloatTensor)`, `optional`):
- Tuple consists of (:obj:`last_hidden_state`, :obj:`optional`: `hidden_states`, :obj:`optional`:
- `attentions`) :obj:`last_hidden_state` of shape :obj:`(batch_size, sequence_length, hidden_size)` is a
- sequence of hidden states at the output of the last layer of the encoder. Used in the cross-attention of
- the decoder.
- past_key_values (:obj:`tuple(tuple(torch.FloatTensor))` of length :obj:`config.n_layers` with each tuple having 4 tensors of shape :obj:`(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`):
- Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
-
- If :obj:`past_key_values` are used, the user can optionally input only the last :obj:`decoder_input_ids`
- (those that don't have their past key value states given to this model) of shape :obj:`(batch_size, 1)`
- instead of all :obj:`decoder_input_ids` of shape :obj:`(batch_size, sequence_length)`.
- inputs_embeds (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`, `optional`):
- Optionally, instead of passing :obj:`input_ids` you can choose to directly pass an embedded representation.
- This is useful if you want more control over how to convert :obj:`input_ids` indices into associated
- vectors than the model's internal embedding lookup matrix.
- decoder_inputs_embeds (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, target_sequence_length, hidden_size)`, `optional`):
- Optionally, instead of passing :obj:`decoder_input_ids` you can choose to directly pass an embedded
- representation. If :obj:`past_key_values` is used, optionally only the last :obj:`decoder_inputs_embeds`
- have to be input (see :obj:`past_key_values`). This is useful if you want more control over how to convert
- :obj:`decoder_input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
-
- If :obj:`decoder_input_ids` and :obj:`decoder_inputs_embeds` are both unset, :obj:`decoder_inputs_embeds`
- takes the value of :obj:`inputs_embeds`.
-
- use_cache (:obj:`bool`, `optional`):
- If set to :obj:`True`, :obj:`past_key_values` key value states are returned and can be used to speed up
- decoding (see :obj:`past_key_values`).
-
- output_attentions (:obj:`bool`, `optional`):
- Whether or not to return the attentions tensors of all attention layers. See ``attentions`` under returned
- tensors for more detail.
- output_hidden_states (:obj:`bool`, `optional`):
- Whether or not to return the hidden states of all layers. See ``hidden_states`` under returned tensors for
- more detail.
- return_dict (:obj:`bool`, `optional`):
- Whether or not to return a :class:`~transformers.file_utils.ModelOutput` instead of a plain tuple.
-"""
-
-T5_ENCODER_INPUTS_DOCSTRING = r"""
- Args:
- input_ids (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`):
- Indices of input sequence tokens in the vocabulary. T5 is a model with relative position embeddings so you
- should be able to pad the inputs on both the right and the left.
-
- Indices can be obtained using :class:`~transformers.T5Tokenizer`. See
- :meth:`transformers.PreTrainedTokenizer.encode` and :meth:`transformers.PreTrainedTokenizer.__call__` for
- details.
-
- To know more on how to prepare :obj:`input_ids` for pretraining, take a look at `T5 Training
- <./T5.html#training>`__.
- attention_mask (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):
- Mask to avoid performing attention on padding token indices. Mask values selected in ``[0, 1]``:
-
- - 1 for tokens that are **not masked**,
- - 0 for tokens that are **masked**.
-
- `What are attention masks? <../glossary.html#attention-mask>`__
- head_mask (:obj:`torch.FloatTensor` of shape :obj:`(num_heads,)` or :obj:`(num_layers, num_heads)`, `optional`):
- Mask to nullify selected heads of the self-attention modules. Mask values selected in ``[0, 1]``:
-
- - 1 indicates the head is **not masked**,
- - 0 indicates the head is **masked**.
-
- inputs_embeds (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`, `optional`):
- Optionally, instead of passing :obj:`input_ids` you can choose to directly pass an embedded representation.
- This is useful if you want more control over how to convert :obj:`input_ids` indices into associated
- vectors than the model's internal embedding lookup matrix.
- output_attentions (:obj:`bool`, `optional`):
- Whether or not to return the attentions tensors of all attention layers. See ``attentions`` under returned
- tensors for more detail.
- output_hidden_states (:obj:`bool`, `optional`):
- Whether or not to return the hidden states of all layers. See ``hidden_states`` under returned tensors for
- more detail.
- return_dict (:obj:`bool`, `optional`):
- Whether or not to return a :class:`~transformers.file_utils.ModelOutput` instead of a plain tuple.
-"""
-
-# Warning message for FutureWarning: head_mask was separated into two input args - head_mask, decoder_head_mask
-__HEAD_MASK_WARNING_MSG = """
-The input argument `head_mask` was split into two arguments `head_mask` and `decoder_head_mask`. Currently,
-`decoder_head_mask` is set to copy `head_mask`, but this feature is deprecated and will be removed in future versions.
-If you do not want to use any `decoder_head_mask` now, please set `decoder_head_mask = torch.ones(num_layers,
-num_heads)`.
-"""
-
-
-class T5LMHead(nn.Module):
- """Masked LM head for T5.
-
- Computes vocabulary logits by applying a linear projection with the given
- word-embedding weights and a learnable bias of size ``config.vocab_size``.
- """
-
- def __init__(self, config):
- super(T5LMHead, self).__init__()
-
- self.bias = torch.nn.Parameter(torch.zeros(config.vocab_size))
-
- def forward(self, hidden_states, word_embeddings_weight):
- output = torch.nn.functional.linear(hidden_states,
- word_embeddings_weight,
- bias=self.bias)
- return output
-
-
-@add_start_docstrings(
- "The bare T5 Model transformer outputting raw hidden-states without any specific head on top.",
- T5_START_DOCSTRING,
-)
-class T5Model(T5PreTrainedModel):
- _keys_to_ignore_on_load_missing = [
- r"encoder\.embed_tokens\.weight",
- r"decoder\.embed_tokens\.weight",
- ]
- _keys_to_ignore_on_load_unexpected = [
- r"decoder\.block\.0\.layer\.1\.EncDecAttention\.relative_attention_bias\.weight",
- ]
-
- def __init__(self, config: T5Config):
- super().__init__(config)
- # @IDEA modified -> nn.Embedding -> T5Embeddings
- # self.shared = nn.Embedding(config.vocab_size, config.d_model)
- self.shared = T5Embeddings(config)
-
- encoder_config = copy.deepcopy(config)
- encoder_config.is_decoder = False
- encoder_config.use_cache = False
- encoder_config.is_encoder_decoder = False
- self.encoder = T5Stack(encoder_config, self.shared)
-
- decoder_config = copy.deepcopy(config)
- decoder_config.is_decoder = True
- decoder_config.is_encoder_decoder = False
- decoder_config.num_layers = config.num_decoder_layers
- self.decoder = T5Stack(decoder_config, self.shared)
-
- self.init_weights()
-
- # Model parallel
- self.model_parallel = False
- self.device_map = None
-
- @add_start_docstrings(PARALLELIZE_DOCSTRING)
- def parallelize(self, device_map=None):
- self.device_map = (
- get_device_map(len(self.encoder.block),
- range(torch.cuda.device_count()))
- if device_map is None
- else device_map
- )
- assert_device_map(self.device_map, len(self.encoder.block))
- self.encoder.parallelize(self.device_map)
- self.decoder.parallelize(self.device_map)
- self.model_parallel = True
-
- @add_start_docstrings(DEPARALLELIZE_DOCSTRING)
- def deparallelize(self):
- self.encoder.deparallelize()
- self.decoder.deparallelize()
- self.encoder = self.encoder.to("cpu")
- self.decoder = self.decoder.to("cpu")
- self.model_parallel = False
- self.device_map = None
- torch.cuda.empty_cache()
-
- def get_input_embeddings(self):
- return self.shared
-
- def set_input_embeddings(self, new_embeddings):
- self.shared = new_embeddings
- self.encoder.set_input_embeddings(new_embeddings)
- self.decoder.set_input_embeddings(new_embeddings)
-
- def get_encoder(self):
- return self.encoder
-
- def get_decoder(self):
- return self.decoder
-
- def _prune_heads(self, heads_to_prune):
- """
- Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base
- class PreTrainedModel
- """
- for layer, heads in heads_to_prune.items():
- self.encoder.layer[layer].attention.prune_heads(heads)
-
- @add_start_docstrings_to_model_forward(T5_INPUTS_DOCSTRING)
- @replace_return_docstrings(output_type=Seq2SeqModelOutput, config_class=_CONFIG_FOR_DOC)
- def forward(
- self,
- input_ids=None,
- attention_mask=None,
- decoder_input_ids=None,
- decoder_attention_mask=None,
- head_mask=None,
- decoder_head_mask=None,
- cross_attn_head_mask=None,
- encoder_outputs=None,
- past_key_values=None,
- inputs_embeds=None,
- decoder_inputs_embeds=None,
- use_cache=None,
- output_attentions=None,
- output_hidden_states=None,
- return_dict=None,
- ):
- r"""
- Returns:
-
- Example::
-
- >>> from transformers import T5Tokenizer, T5Model
-
- >>> tokenizer = T5Tokenizer.from_pretrained('T5-small')
- >>> model = T5Model.from_pretrained('T5-small')
-
- >>> input_ids = tokenizer("Studies have been shown that owning a dog is good for you",
- return_tensors="pt").input_ids # Batch size 1
- >>> decoder_input_ids = tokenizer("Studies show that", return_tensors="pt").input_ids # Batch size 1
-
- >>> # forward pass
- >>> outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)
- >>> last_hidden_states = outputs.last_hidden_state
- """
- use_cache = use_cache if use_cache is not None else self.config.use_cache
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- # FutureWarning: head_mask was separated into two input args - head_mask, decoder_head_mask
- if head_mask is not None and decoder_head_mask is None:
- if self.config.num_layers == self.config.num_decoder_layers:
- warnings.warn(__HEAD_MASK_WARNING_MSG, FutureWarning)
- decoder_head_mask = head_mask
-
- # Encode if needed (training, first prediction pass)
- if encoder_outputs is None:
- encoder_outputs = self.encoder(
- input_ids=input_ids,
- attention_mask=attention_mask,
- inputs_embeds=inputs_embeds,
- head_mask=head_mask,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
- elif return_dict and not isinstance(encoder_outputs, BaseModelOutput):
- encoder_outputs = BaseModelOutput(
- last_hidden_state=encoder_outputs[0],
- hidden_states=encoder_outputs[1] if len(
- encoder_outputs) > 1 else None,
- attentions=encoder_outputs[2] if len(
- encoder_outputs) > 2 else None,
- )
-
- hidden_states = encoder_outputs[0]
- # Set device for model parallelism
- if self.model_parallel:
- torch.cuda.set_device(self.decoder.first_device)
- hidden_states = hidden_states.to(self.decoder.first_device)
- if decoder_input_ids is not None:
- decoder_input_ids = decoder_input_ids.to(
- self.decoder.first_device)
- if attention_mask is not None:
- attention_mask = attention_mask.to(self.decoder.first_device)
- if decoder_attention_mask is not None:
- decoder_attention_mask = decoder_attention_mask.to(
- self.decoder.first_device)
-
- # Decode
- decoder_outputs = self.decoder(
- input_ids=decoder_input_ids,
- attention_mask=decoder_attention_mask,
- inputs_embeds=decoder_inputs_embeds,
- past_key_values=past_key_values,
- encoder_hidden_states=hidden_states,
- encoder_attention_mask=attention_mask,
- head_mask=decoder_head_mask,
- cross_attn_head_mask=cross_attn_head_mask,
- use_cache=use_cache,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
-
- if not return_dict:
- return decoder_outputs + encoder_outputs
-
- return Seq2SeqModelOutput(
- last_hidden_state=decoder_outputs.last_hidden_state,
- past_key_values=decoder_outputs.past_key_values,
- decoder_hidden_states=decoder_outputs.hidden_states,
- decoder_attentions=decoder_outputs.attentions,
- cross_attentions=decoder_outputs.cross_attentions,
- encoder_last_hidden_state=encoder_outputs.last_hidden_state,
- encoder_hidden_states=encoder_outputs.hidden_states,
- encoder_attentions=encoder_outputs.attentions,
- )
-
-
-@add_start_docstrings("""T5 Model with a `language modeling` head on top. """, T5_START_DOCSTRING)
-class T5ForConditionalGeneration(T5PreTrainedModel):
- _keys_to_ignore_on_load_missing = [
- r"encoder\.embed_tokens\.weight",
- r"decoder\.embed_tokens\.weight",
- r"lm_head\.weight",
- ]
- _keys_to_ignore_on_load_unexpected = [
- r"decoder\.block\.0\.layer\.1\.EncDecAttention\.relative_attention_bias\.weight",
- ]
-
- def __init__(self, config):
- super().__init__(config)
- self.model_dim = config.d_model
-
- # @IDEA modified -> nn.Embedding -> T5Embeddings
- # self.shared = nn.Embedding(config.vocab_size, config.d_model)
- self.shared = T5Embeddings(config)
-
- encoder_config = copy.deepcopy(config)
- encoder_config.is_decoder = False
- encoder_config.use_cache = False
- encoder_config.is_encoder_decoder = False
- self.encoder = T5Stack(encoder_config, self.shared)
-
- decoder_config = copy.deepcopy(config)
- decoder_config.is_decoder = True
- decoder_config.is_encoder_decoder = False
- decoder_config.num_layers = config.num_decoder_layers
- self.decoder = T5Stack(decoder_config, self.shared)
-
- # @IDEA modified -> add self.lm_head_bias
- self.lm_head_bias = torch.nn.Parameter(torch.zeros(config.vocab_size))
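- # Note: the usual lm_head Linear layer is replaced by this bias; forward() computes
- # the logits against the tied word-embedding weights plus lm_head_bias.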
-
- self.init_weights()
-
- # Model parallel
- self.model_parallel = False
- self.device_map = None
-
- @add_start_docstrings(PARALLELIZE_DOCSTRING)
- def parallelize(self, device_map=None):
- self.device_map = (
- get_device_map(len(self.encoder.block),
- range(torch.cuda.device_count()))
- if device_map is None
- else device_map
- )
- assert_device_map(self.device_map, len(self.encoder.block))
- self.encoder.parallelize(self.device_map)
- self.decoder.parallelize(self.device_map)
- # self.lm_head = self.lm_head.to(self.decoder.first_device)  # no lm_head module in this variant; logits use lm_head_bias + tied embeddings
- self.model_parallel = True
-
- @add_start_docstrings(DEPARALLELIZE_DOCSTRING)
- def deparallelize(self):
- self.encoder.deparallelize()
- self.decoder.deparallelize()
- self.encoder = self.encoder.to("cpu")
- self.decoder = self.decoder.to("cpu")
- self.lm_head = self.lm_head.to("cpu")
- self.model_parallel = False
- self.device_map = None
- torch.cuda.empty_cache()
-
- def get_input_embeddings(self):
- return self.shared
-
- def set_input_embeddings(self, new_embeddings):
- self.shared = new_embeddings
- self.encoder.set_input_embeddings(new_embeddings)
- self.decoder.set_input_embeddings(new_embeddings)
-
- def set_output_embeddings(self, new_embeddings):
- self.lm_head = new_embeddings
-
- def get_output_embeddings(self):
- return self.lm_head_bias
-
- def get_encoder(self):
- return self.encoder
-
- def get_decoder(self):
- return self.decoder
-
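- # Note: this custom generate() is a minimal sampling loop: it re-runs the full
- # encoder-decoder forward pass at every step (no past_key_values cache), samples
- # the next token from the softmax distribution, and stops at the [EOS] id (21129).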
- def generate(self, input_ids=None, max_length=512):
-
- input_ids = torch.tensor(input_ids)
- if len(input_ids.shape) < 2:
- input_ids = input_ids.unsqueeze(0)
- decode_input_id = [21128]  # the token id of [BOS] is 21128
- for _ in range(max_length):
- tensor_decode_input_id = torch.tensor([decode_input_id])
- forward_output = self.forward(input_ids=input_ids,
- decoder_input_ids=tensor_decode_input_id)
- logits = forward_output.logits
- logits = torch.nn.functional.softmax(
- logits, dim=-1).cpu().detach().numpy()[0]
-
- last_output_id = int(np.random.choice(
- logits.shape[1], p=logits[-1]))
- if last_output_id == 21129:  # the token id of [EOS] is 21129
- break
- else:
- decode_input_id.append(last_output_id)
-
- return decode_input_id
-
- @add_start_docstrings_to_model_forward(T5_INPUTS_DOCSTRING)
- @replace_return_docstrings(output_type=Seq2SeqLMOutput, config_class=_CONFIG_FOR_DOC)
- def forward(
- self,
- input_ids=None,
- attention_mask=None,
- decoder_input_ids=None,
- decoder_attention_mask=None,
- head_mask=None,
- decoder_head_mask=None,
- cross_attn_head_mask=None,
- encoder_outputs=None,
- past_key_values=None,
- inputs_embeds=None,
- decoder_inputs_embeds=None,
- labels=None,
- use_cache=None,
- output_attentions=None,
- output_hidden_states=None,
- return_dict=None,
- ):
- r"""
- labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size,)`, `optional`):
- Labels for computing the sequence classification/regression loss. Indices should be in :obj:`[-100, 0, ...,
- config.vocab_size - 1]`. All labels set to ``-100`` are ignored (masked), the loss is only computed for
- labels in ``[0, ..., config.vocab_size]``
-
- Returns:
- Examples::
-
- >>> from transformers import T5Tokenizer, T5ForConditionalGeneration
-
- >>> tokenizer = T5Tokenizer.from_pretrained('T5-small')
- >>> model = T5ForConditionalGeneration.from_pretrained('T5-small')
-
- >>> # training
- >>> input_ids = tokenizer('The walks in park', return_tensors='pt').input_ids
- >>> labels = tokenizer(' cute dog the ', return_tensors='pt').input_ids
- >>> outputs = model(input_ids=input_ids, labels=labels)
- >>> loss = outputs.loss
- >>> logits = outputs.logits
-
- >>> # inference
- >>> input_ids = tokenizer("summarize: studies have shown that owning a dog is good for you",
- return_tensors="pt").input_ids # Batch size 1
- >>> outputs = model.generate(input_ids)
- >>> print(tokenizer.decode(outputs[0], skip_special_tokens=True))
- >>> # studies have shown that owning a dog is good for you.
- """
- use_cache = use_cache if use_cache is not None else self.config.use_cache
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- # FutureWarning: head_mask was separated into two input args - head_mask, decoder_head_mask
- if head_mask is not None and decoder_head_mask is None:
- if self.config.num_layers == self.config.num_decoder_layers:
- warnings.warn(__HEAD_MASK_WARNING_MSG, FutureWarning)
- decoder_head_mask = head_mask
-
- # Encode if needed (training, first prediction pass)
- if encoder_outputs is None:
- # Convert encoder inputs in embeddings if needed
- encoder_outputs = self.encoder(
- input_ids=input_ids,
- attention_mask=attention_mask,
- inputs_embeds=inputs_embeds,
- head_mask=head_mask,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
- elif return_dict and not isinstance(encoder_outputs, BaseModelOutput):
- encoder_outputs = BaseModelOutput(
- last_hidden_state=encoder_outputs[0],
- hidden_states=encoder_outputs[1] if len(
- encoder_outputs) > 1 else None,
- attentions=encoder_outputs[2] if len(
- encoder_outputs) > 2 else None,
- )
-
- hidden_states = encoder_outputs[0]
-
- if self.model_parallel:
- torch.cuda.set_device(self.decoder.first_device)
-
- if labels is not None and decoder_input_ids is None and decoder_inputs_embeds is None:
- # get decoder inputs from shifting lm labels to the right
- decoder_input_ids = self._shift_right(labels)
-
- # Set device for model parallelism
- if self.model_parallel:
- torch.cuda.set_device(self.decoder.first_device)
- hidden_states = hidden_states.to(self.decoder.first_device)
- if decoder_input_ids is not None:
- decoder_input_ids = decoder_input_ids.to(
- self.decoder.first_device)
- if attention_mask is not None:
- attention_mask = attention_mask.to(self.decoder.first_device)
- if decoder_attention_mask is not None:
- decoder_attention_mask = decoder_attention_mask.to(
- self.decoder.first_device)
-
- # Decode
- decoder_outputs = self.decoder(
- input_ids=decoder_input_ids,
- attention_mask=decoder_attention_mask,
- inputs_embeds=decoder_inputs_embeds,
- past_key_values=past_key_values,
- encoder_hidden_states=hidden_states,
- encoder_attention_mask=attention_mask,
- head_mask=decoder_head_mask,
- cross_attn_head_mask=cross_attn_head_mask,
- use_cache=use_cache,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
-
- sequence_output = decoder_outputs.last_hidden_state
-
- # Set device for model parallelism
- # if self.model_parallel:
- # torch.cuda.set_device(self.encoder.first_device)
- # self.lm_head = self.lm_head.to(self.encoder.first_device)
- # sequence_output = sequence_output.to(self.lm_head.weight.device)
-
- # if self.config.tie_word_embeddings:
- # # Rescale output before projecting on vocab
- # # See https://github.com/tensorflow/mesh/blob/fa19d69eafc9a482aff0b59ddd96b025c0cb207d/
- # mesh_tensorflow/transformer/transformer.py#L586
- # sequence_output = sequence_output * (self.model_dim ** -0.5)
-
- lm_logits = torch.nn.functional.linear(
- sequence_output, self.shared.word_embeddings.weight, bias=self.lm_head_bias)
-
- loss = None
- if labels is not None:
- loss_fct = CrossEntropyLoss(ignore_index=-100)
- loss = loss_fct(
- lm_logits.view(-1, lm_logits.size(-1)), labels.view(-1))
- # @IDEA modified(thom): Add z_loss https://github.com/tensorflow/mesh/blob/
- # fa19d69eafc9a482aff0b59ddd96b025c0cb207d/mesh_tensorflow/layers.py#L666
-
- if not return_dict:
- output = (lm_logits,) + decoder_outputs[1:] + encoder_outputs
- return ((loss,) + output) if loss is not None else output
-
- return Seq2SeqLMOutput(
- loss=loss,
- logits=lm_logits,
- past_key_values=decoder_outputs.past_key_values,
- decoder_hidden_states=decoder_outputs.hidden_states,
- decoder_attentions=decoder_outputs.attentions,
- cross_attentions=decoder_outputs.cross_attentions,
- encoder_last_hidden_state=encoder_outputs.last_hidden_state,
- encoder_hidden_states=encoder_outputs.hidden_states,
- encoder_attentions=encoder_outputs.attentions,
- )
-
- def prepare_inputs_for_generation(
- self,
- input_ids,
- past=None,
- attention_mask=None,
- head_mask=None,
- decoder_head_mask=None,
- cross_attn_head_mask=None,
- use_cache=None,
- encoder_outputs=None,
- **kwargs
- ):
-
- # cut decoder_input_ids if past is used
- if past is not None:
- input_ids = input_ids[:, -1:]
-
- return {
- "decoder_input_ids": input_ids,
- "past_key_values": past,
- "encoder_outputs": encoder_outputs,
- "attention_mask": attention_mask,
- "head_mask": head_mask,
- "decoder_head_mask": decoder_head_mask,
- "cross_attn_head_mask": cross_attn_head_mask,
- "use_cache": use_cache,
- }
-
- def prepare_decoder_input_ids_from_labels(self, labels: torch.Tensor):
- return self._shift_right(labels)
-
- def _reorder_cache(self, past, beam_idx):
- # if decoder past is not included in output
- # speedy decoding is disabled and no need to reorder
- if past is None:
- logger.warning(
- "You might want to consider setting `use_cache=True` to speed up decoding")
- return past
-
- reordered_decoder_past = ()
- for layer_past_states in past:
- # get the correct batch idx from layer past batch dim
- # batch dim of `past` is at 2nd position
- reordered_layer_past_states = ()
- for layer_past_state in layer_past_states:
- # need to set correct `past` for each of the four key / value states
- reordered_layer_past_states = reordered_layer_past_states + (
- layer_past_state.index_select(
- 0, beam_idx.to(layer_past_state.device)),
- )
-
- assert reordered_layer_past_states[0].shape == layer_past_states[0].shape
- assert len(reordered_layer_past_states) == len(layer_past_states)
-
- reordered_decoder_past = reordered_decoder_past + \
- (reordered_layer_past_states,)
- return reordered_decoder_past
-
-
-@add_start_docstrings(
- "The bare T5 Model transformer outputting encoder's raw hidden-states without any specific head on top.",
- T5_START_DOCSTRING,
-)
-class T5EncoderModel(T5PreTrainedModel):
- authorized_missing_keys = [
- r"encoder\.embed_tokens\.weight",
- ]
-
- def __init__(self, config: T5Config):
- super().__init__(config)
- # @IDEA modified -> nn.Embedding -> T5Embeddings
- # self.shared = nn.Embedding(config.vocab_size, config.d_model)
- self.shared = T5Embeddings(config)
-
- encoder_config = copy.deepcopy(config)
- encoder_config.use_cache = False
- encoder_config.is_encoder_decoder = False
- self.encoder = T5Stack(encoder_config, self.shared)
-
- self.init_weights()
-
- # Model parallel
- self.model_parallel = False
- self.device_map = None
-
- @add_start_docstrings(PARALLELIZE_DOCSTRING)
- def parallelize(self, device_map=None):
- self.device_map = (
- get_device_map(len(self.encoder.block),
- range(torch.cuda.device_count()))
- if device_map is None
- else device_map
- )
- assert_device_map(self.device_map, len(self.encoder.block))
- self.encoder.parallelize(self.device_map)
- self.model_parallel = True
-
- @add_start_docstrings(DEPARALLELIZE_DOCSTRING)
- def deparallelize(self):
- self.encoder.deparallelize()
- self.encoder = self.encoder.to("cpu")
- self.model_parallel = False
- self.device_map = None
- torch.cuda.empty_cache()
-
- def get_input_embeddings(self):
- return self.shared
-
- def set_input_embeddings(self, new_embeddings):
- self.shared = new_embeddings
- self.encoder.set_input_embeddings(new_embeddings)
-
- def get_encoder(self):
- return self.encoder
-
- def _prune_heads(self, heads_to_prune):
- """
- Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base
- class PreTrainedModel
- """
- for layer, heads in heads_to_prune.items():
- self.encoder.layer[layer].attention.prune_heads(heads)
-
- @add_start_docstrings_to_model_forward(T5_ENCODER_INPUTS_DOCSTRING)
- @replace_return_docstrings(output_type=BaseModelOutput, config_class=_CONFIG_FOR_DOC)
- def forward(
- self,
- input_ids=None,
- attention_mask=None,
- head_mask=None,
- inputs_embeds=None,
- output_attentions=None,
- output_hidden_states=None,
- return_dict=None,
- ):
- r"""
- Returns:
-
- Example::
-
- >>> from transformers import T5Tokenizer, T5EncoderModel
- >>> tokenizer = T5Tokenizer.from_pretrained('T5-small')
- >>> model = T5EncoderModel.from_pretrained('T5-small')
- >>> input_ids = tokenizer("Studies have been shown that owning a dog is good for you",
- return_tensors="pt").input_ids # Batch size 1
- >>> outputs = model(input_ids=input_ids)
- >>> last_hidden_states = outputs.last_hidden_state
- """
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- encoder_outputs = self.encoder(
- input_ids=input_ids,
- attention_mask=attention_mask,
- inputs_embeds=inputs_embeds,
- head_mask=head_mask,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
-
- return encoder_outputs
diff --git a/spaces/fengmuxi/ChatGpt-Web/app/api/user/login/route.ts b/spaces/fengmuxi/ChatGpt-Web/app/api/user/login/route.ts
deleted file mode 100644
index aa50ca272127fb3682d2827f1884bac750d993fe..0000000000000000000000000000000000000000
--- a/spaces/fengmuxi/ChatGpt-Web/app/api/user/login/route.ts
+++ /dev/null
@@ -1,33 +0,0 @@
-import { NextRequest } from "next/server";
-import { getIP } from "../../auth";
-
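- // Proxies the login request to the eladmin backend, forwarding the caller's IP in
- // the "UserIp" header and returning the upstream JSON response unchanged.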
-export async function POST(req: NextRequest) {
- try {
- // console.log(req.body)
- // const user=req.nextUrl.searchParams.get("user")
- // const password=req.nextUrl.searchParams.get("password")
- // const code=req.nextUrl.searchParams.get("code")
- // const uuid=req.nextUrl.searchParams.get("uuid")
- // let body={
- // "username": user,
- // "password": password,
- // "code": code,
- // "uuid": uuid
- // }
- // console.log(await req.json())
- let res=await fetch("https://eladmin.dwzynj.top/auth/loginWeb", {
- method: "POST",
- headers:{
- "Content-Type":'application/json',
- "UserIp": String(getIP(req))
- },
- body:JSON.stringify(await req.json())
- })
- let msg=await res.json()
- // console.log(msg)
- return new Response(JSON.stringify(msg))
- } catch (e) {
- console.error("[eladmin] ", e);
- return new Response(JSON.stringify(e));
- }
-}
diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/ts4.8/readline.d.ts b/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/ts4.8/readline.d.ts
deleted file mode 100644
index 6ab64acbbec10680e4c519598e84b9c64bd97984..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/ts4.8/readline.d.ts
+++ /dev/null
@@ -1,653 +0,0 @@
-/**
- * The `readline` module provides an interface for reading data from a `Readable` stream (such as `process.stdin`) one line at a time.
- *
- * To use the promise-based APIs:
- *
- * ```js
- * import * as readline from 'node:readline/promises';
- * ```
- *
- * To use the callback and sync APIs:
- *
- * ```js
- * import * as readline from 'node:readline';
- * ```
- *
- * The following simple example illustrates the basic use of the `readline` module.
- *
- * ```js
- * import * as readline from 'node:readline/promises';
- * import { stdin as input, stdout as output } from 'node:process';
- *
- * const rl = readline.createInterface({ input, output });
- *
- * const answer = await rl.question('What do you think of Node.js? ');
- *
- * console.log(`Thank you for your valuable feedback: ${answer}`);
- *
- * rl.close();
- * ```
- *
- * Once this code is invoked, the Node.js application will not terminate until the `readline.Interface` is closed because the interface waits for data to be
- * received on the `input` stream.
- * @see [source](https://github.com/nodejs/node/blob/v18.0.0/lib/readline.js)
- */
-declare module 'readline' {
- import { Abortable, EventEmitter } from 'node:events';
- import * as promises from 'node:readline/promises';
-
- export { promises };
- export interface Key {
- sequence?: string | undefined;
- name?: string | undefined;
- ctrl?: boolean | undefined;
- meta?: boolean | undefined;
- shift?: boolean | undefined;
- }
- /**
- * Instances of the `readline.Interface` class are constructed using the `readline.createInterface()` method. Every instance is associated with a
- * single `input` `Readable` stream and a single `output` `Writable` stream.
- * The `output` stream is used to print prompts for user input that arrives on,
- * and is read from, the `input` stream.
- * @since v0.1.104
- */
- export class Interface extends EventEmitter {
- readonly terminal: boolean;
- /**
- * The current input data being processed by node.
- *
- * This can be used when collecting input from a TTY stream to retrieve the
- * current value that has been processed thus far, prior to the `line` event
- * being emitted. Once the `line` event has been emitted, this property will
- * be an empty string.
- *
- * Be aware that modifying the value during the instance runtime may have
- * unintended consequences if `rl.cursor` is not also controlled.
- *
- * **If not using a TTY stream for input, use the `'line'` event.**
- *
- * One possible use case would be as follows:
- *
- * ```js
- * const values = ['lorem ipsum', 'dolor sit amet'];
- * const rl = readline.createInterface(process.stdin);
- * const showResults = debounce(() => {
- * console.log(
- * '\n',
- * values.filter((val) => val.startsWith(rl.line)).join(' ')
- * );
- * }, 300);
- * process.stdin.on('keypress', (c, k) => {
- * showResults();
- * });
- * ```
- * @since v0.1.98
- */
- readonly line: string;
- /**
- * The cursor position relative to `rl.line`.
- *
- * This will track where the current cursor lands in the input string, when
- * reading input from a TTY stream. The position of cursor determines the
- * portion of the input string that will be modified as input is processed,
- * as well as the column where the terminal caret will be rendered.
- * @since v0.1.98
- */
- readonly cursor: number;
- /**
- * NOTE: According to the documentation:
- *
- * > Instances of the `readline.Interface` class are constructed using the
- * > `readline.createInterface()` method.
- *
- * @see https://nodejs.org/dist/latest-v10.x/docs/api/readline.html#readline_class_interface
- */
- protected constructor(input: NodeJS.ReadableStream, output?: NodeJS.WritableStream, completer?: Completer | AsyncCompleter, terminal?: boolean);
- /**
- * NOTE: According to the documentation:
- *
- * > Instances of the `readline.Interface` class are constructed using the
- * > `readline.createInterface()` method.
- *
- * @see https://nodejs.org/dist/latest-v10.x/docs/api/readline.html#readline_class_interface
- */
- protected constructor(options: ReadLineOptions);
- /**
- * The `rl.getPrompt()` method returns the current prompt used by `rl.prompt()`.
- * @since v15.3.0
- * @return the current prompt string
- */
- getPrompt(): string;
- /**
- * The `rl.setPrompt()` method sets the prompt that will be written to `output` whenever `rl.prompt()` is called.
- * @since v0.1.98
- */
- setPrompt(prompt: string): void;
- /**
- * The `rl.prompt()` method writes the `readline.Interface` instance's configured `prompt` to a new line in `output` in order to provide a user with a new
- * location at which to provide input.
- *
- * When called, `rl.prompt()` will resume the `input` stream if it has been
- * paused.
- *
- * If the `readline.Interface` was created with `output` set to `null` or `undefined`, the prompt is not written.
- * @since v0.1.98
- * @param preserveCursor If `true`, prevents the cursor placement from being reset to `0`.
- */
- prompt(preserveCursor?: boolean): void;
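- // Example (assuming `rl` is an Interface created via readline.createInterface above):
- //   rl.setPrompt('query> ');
- //   rl.prompt(); // writes "query> " to output and resumes input if paused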
- /**
- * The `rl.question()` method displays the `query` by writing it to the `output`,
- * waits for user input to be provided on `input`, then invokes the `callback` function, passing the provided input as the first argument.
- *
- * When called, `rl.question()` will resume the `input` stream if it has been
- * paused.
- *
- * If the `readline.Interface` was created with `output` set to `null` or `undefined`, the `query` is not written.
- *
- * The `callback` function passed to `rl.question()` does not follow the typical
- * pattern of accepting an `Error` object or `null` as the first argument.
- * The `callback` is called with the provided answer as the only argument.
- *
- * Example usage:
- *
- * ```js
- * rl.question('What is your favorite food? ', (answer) => {
- * console.log(`Oh, so your favorite food is ${answer}`);
- * });
- * ```
- *
- * Using an `AbortController` to cancel a question.
- *
- * ```js
- * const ac = new AbortController();
- * const signal = ac.signal;
- *
- * rl.question('What is your favorite food? ', { signal }, (answer) => {
- * console.log(`Oh, so your favorite food is ${answer}`);
- * });
- *
- * signal.addEventListener('abort', () => {
- * console.log('The food question timed out');
- * }, { once: true });
- *
- * setTimeout(() => ac.abort(), 10000);
- * ```
- *
- * If this method is invoked as its util.promisify()ed version, it returns a
- * Promise that fulfills with the answer. If the question is canceled using
- * an `AbortController` it will reject with an `AbortError`.
- *
- * ```js
- * const util = require('util');
- * const question = util.promisify(rl.question).bind(rl);
- *
- * async function questionExample() {
- * try {
- * const answer = await question('What is your favorite food? ');
- * console.log(`Oh, so your favorite food is ${answer}`);
- * } catch (err) {
- * console.error('Question rejected', err);
- * }
- * }
- * questionExample();
- * ```
- * @since v0.3.3
- * @param query A statement or query to write to `output`, prepended to the prompt.
- * @param callback A callback function that is invoked with the user's input in response to the `query`.
- */
- question(query: string, callback: (answer: string) => void): void;
- question(query: string, options: Abortable, callback: (answer: string) => void): void;
- /**
- * The `rl.pause()` method pauses the `input` stream, allowing it to be resumed
- * later if necessary.
- *
- * Calling `rl.pause()` does not immediately pause other events (including `'line'`) from being emitted by the `readline.Interface` instance.
- * @since v0.3.4
- */
- pause(): this;
- /**
- * The `rl.resume()` method resumes the `input` stream if it has been paused.
- * @since v0.3.4
- */
- resume(): this;
- /**
- * The `rl.close()` method closes the `readline.Interface` instance and
- * relinquishes control over the `input` and `output` streams. When called,
- * the `'close'` event will be emitted.
- *
- * Calling `rl.close()` does not immediately stop other events (including `'line'`)
- * from being emitted by the `readline.Interface` instance.
- * @since v0.1.98
- */
- close(): void;
- /**
- * The `rl.write()` method will write either `data` or a key sequence identified
- * by `key` to the `output`. The `key` argument is supported only if `output` is
- * a `TTY` text terminal. See `TTY keybindings` for a list of key
- * combinations.
- *
- * If `key` is specified, `data` is ignored.
- *
- * When called, `rl.write()` will resume the `input` stream if it has been
- * paused.
- *
- * If the `readline.Interface` was created with `output` set to `null` or `undefined`, the `data` and `key` are not written.
- *
- * ```js
- * rl.write('Delete this!');
- * // Simulate Ctrl+U to delete the line written previously
- * rl.write(null, { ctrl: true, name: 'u' });
- * ```
- *
- * The `rl.write()` method will write the data to the `readline` `Interface`'s `input` _as if it were provided by the user_.
- * @since v0.1.98
- */
- write(data: string | Buffer, key?: Key): void;
- write(data: undefined | null | string | Buffer, key: Key): void;
- /**
- * Returns the real position of the cursor in relation to the input
- * prompt + string. Long input (wrapping) strings, as well as multiple
- * line prompts are included in the calculations.
- * @since v13.5.0, v12.16.0
- */
- getCursorPos(): CursorPos;
- /**
- * events.EventEmitter
- * 1. close
- * 2. line
- * 3. pause
- * 4. resume
- * 5. SIGCONT
- * 6. SIGINT
- * 7. SIGTSTP
- * 8. history
- */
- addListener(event: string, listener: (...args: any[]) => void): this;
- addListener(event: 'close', listener: () => void): this;
- addListener(event: 'line', listener: (input: string) => void): this;
- addListener(event: 'pause', listener: () => void): this;
- addListener(event: 'resume', listener: () => void): this;
- addListener(event: 'SIGCONT', listener: () => void): this;
- addListener(event: 'SIGINT', listener: () => void): this;
- addListener(event: 'SIGTSTP', listener: () => void): this;
- addListener(event: 'history', listener: (history: string[]) => void): this;
- emit(event: string | symbol, ...args: any[]): boolean;
- emit(event: 'close'): boolean;
- emit(event: 'line', input: string): boolean;
- emit(event: 'pause'): boolean;
- emit(event: 'resume'): boolean;
- emit(event: 'SIGCONT'): boolean;
- emit(event: 'SIGINT'): boolean;
- emit(event: 'SIGTSTP'): boolean;
- emit(event: 'history', history: string[]): boolean;
- on(event: string, listener: (...args: any[]) => void): this;
- on(event: 'close', listener: () => void): this;
- on(event: 'line', listener: (input: string) => void): this;
- on(event: 'pause', listener: () => void): this;
- on(event: 'resume', listener: () => void): this;
- on(event: 'SIGCONT', listener: () => void): this;
- on(event: 'SIGINT', listener: () => void): this;
- on(event: 'SIGTSTP', listener: () => void): this;
- on(event: 'history', listener: (history: string[]) => void): this;
- once(event: string, listener: (...args: any[]) => void): this;
- once(event: 'close', listener: () => void): this;
- once(event: 'line', listener: (input: string) => void): this;
- once(event: 'pause', listener: () => void): this;
- once(event: 'resume', listener: () => void): this;
- once(event: 'SIGCONT', listener: () => void): this;
- once(event: 'SIGINT', listener: () => void): this;
- once(event: 'SIGTSTP', listener: () => void): this;
- once(event: 'history', listener: (history: string[]) => void): this;
- prependListener(event: string, listener: (...args: any[]) => void): this;
- prependListener(event: 'close', listener: () => void): this;
- prependListener(event: 'line', listener: (input: string) => void): this;
- prependListener(event: 'pause', listener: () => void): this;
- prependListener(event: 'resume', listener: () => void): this;
- prependListener(event: 'SIGCONT', listener: () => void): this;
- prependListener(event: 'SIGINT', listener: () => void): this;
- prependListener(event: 'SIGTSTP', listener: () => void): this;
- prependListener(event: 'history', listener: (history: string[]) => void): this;
- prependOnceListener(event: string, listener: (...args: any[]) => void): this;
- prependOnceListener(event: 'close', listener: () => void): this;
- prependOnceListener(event: 'line', listener: (input: string) => void): this;
- prependOnceListener(event: 'pause', listener: () => void): this;
- prependOnceListener(event: 'resume', listener: () => void): this;
- prependOnceListener(event: 'SIGCONT', listener: () => void): this;
- prependOnceListener(event: 'SIGINT', listener: () => void): this;
- prependOnceListener(event: 'SIGTSTP', listener: () => void): this;
- prependOnceListener(event: 'history', listener: (history: string[]) => void): this;
- [Symbol.asyncIterator](): AsyncIterableIterator<string>;
- }
- export type ReadLine = Interface; // type forwarded for backwards compatibility
- export type Completer = (line: string) => CompleterResult;
- export type AsyncCompleter = (line: string, callback: (err?: null | Error, result?: CompleterResult) => void) => void;
- export type CompleterResult = [string[], string];
- export interface ReadLineOptions {
- input: NodeJS.ReadableStream;
- output?: NodeJS.WritableStream | undefined;
- completer?: Completer | AsyncCompleter | undefined;
- terminal?: boolean | undefined;
- /**
- * Initial list of history lines. This option makes sense
- * only if `terminal` is set to `true` by the user or by an internal `output`
- * check, otherwise the history caching mechanism is not initialized at all.
- * @default []
- */
- history?: string[] | undefined;
- historySize?: number | undefined;
- prompt?: string | undefined;
- crlfDelay?: number | undefined;
- /**
- * If `true`, when a new input line added
- * to the history list duplicates an older one, this removes the older line
- * from the list.
- * @default false
- */
- removeHistoryDuplicates?: boolean | undefined;
- escapeCodeTimeout?: number | undefined;
- tabSize?: number | undefined;
- }
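- /*
- * A sketch of how the history-related options above can be combined (illustrative
- * values only, not part of the type declarations):
- *
- *   const rl = createInterface({
- *       input: process.stdin,
- *       output: process.stdout,
- *       history: ['help', 'status'],
- *       historySize: 100,
- *       removeHistoryDuplicates: true,
- *   });
- */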
- /**
- * The `readline.createInterface()` method creates a new `readline.Interface` instance.
- *
- * ```js
- * const readline = require('readline');
- * const rl = readline.createInterface({
- * input: process.stdin,
- * output: process.stdout
- * });
- * ```
- *
- * Once the `readline.Interface` instance is created, the most common case is to
- * listen for the `'line'` event:
- *
- * ```js
- * rl.on('line', (line) => {
- * console.log(`Received: ${line}`);
- * });
- * ```
- *
- * If `terminal` is `true` for this instance then the `output` stream will get
- * the best compatibility if it defines an `output.columns` property and emits
- * a `'resize'` event on the `output` if or when the columns ever change
- * (`process.stdout` does this automatically when it is a TTY).
- *
- * When creating a `readline.Interface` using `stdin` as input, the program
- * will not terminate until it receives `EOF` (Ctrl+D on
- * Linux/macOS, Ctrl+Z followed by Return on
- * Windows).
- * If you want your application to exit without waiting for user input, you can `unref()` the standard input stream:
- *
- * ```js
- * process.stdin.unref();
- * ```
- * @since v0.1.98
- */
- export function createInterface(input: NodeJS.ReadableStream, output?: NodeJS.WritableStream, completer?: Completer | AsyncCompleter, terminal?: boolean): Interface;
- export function createInterface(options: ReadLineOptions): Interface;
- /**
- * The `readline.emitKeypressEvents()` method causes the given `Readable` stream to begin emitting `'keypress'` events corresponding to received input.
- *
- * Optionally, `interface` specifies a `readline.Interface` instance for which
- * autocompletion is disabled when copy-pasted input is detected.
- *
- * If the `stream` is a `TTY`, then it must be in raw mode.
- *
- * This is automatically called by any readline instance on its `input` if the `input` is a terminal. Closing the `readline` instance does not stop
- * the `input` from emitting `'keypress'` events.
- *
- * ```js
- * readline.emitKeypressEvents(process.stdin);
- * if (process.stdin.isTTY)
- * process.stdin.setRawMode(true);
- * ```
- *
- * ## Example: Tiny CLI
- *
- * The following example illustrates the use of `readline.Interface` class to
- * implement a small command-line interface:
- *
- * ```js
- * const readline = require('readline');
- * const rl = readline.createInterface({
- * input: process.stdin,
- * output: process.stdout,
- * prompt: 'OHAI> '
- * });
- *
- * rl.prompt();
- *
- * rl.on('line', (line) => {
- * switch (line.trim()) {
- * case 'hello':
- * console.log('world!');
- * break;
- * default:
- * console.log(`Say what? I might have heard '${line.trim()}'`);
- * break;
- * }
- * rl.prompt();
- * }).on('close', () => {
- * console.log('Have a great day!');
- * process.exit(0);
- * });
- * ```
- *
- * ## Example: Read file stream line-by-line
- *
- * A common use case for `readline` is to consume an input file one line at a
- * time. The easiest way to do so is leveraging the `fs.ReadStream` API as
- * well as a `for await...of` loop:
- *
- * ```js
- * const fs = require('fs');
- * const readline = require('readline');
- *
- * async function processLineByLine() {
- * const fileStream = fs.createReadStream('input.txt');
- *
- * const rl = readline.createInterface({
- * input: fileStream,
- * crlfDelay: Infinity
- * });
- * // Note: we use the crlfDelay option to recognize all instances of CR LF
- * // ('\r\n') in input.txt as a single line break.
- *
- * for await (const line of rl) {
- * // Each line in input.txt will be successively available here as `line`.
- * console.log(`Line from file: ${line}`);
- * }
- * }
- *
- * processLineByLine();
- * ```
- *
- * Alternatively, one could use the `'line'` event:
- *
- * ```js
- * const fs = require('fs');
- * const readline = require('readline');
- *
- * const rl = readline.createInterface({
- * input: fs.createReadStream('sample.txt'),
- * crlfDelay: Infinity
- * });
- *
- * rl.on('line', (line) => {
- * console.log(`Line from file: ${line}`);
- * });
- * ```
- *
- * Currently, the `for await...of` loop can be a bit slower. If `async`/`await` flow and speed are both essential, a mixed approach can be applied:
- *
- * ```js
- * const { once } = require('events');
- * const { createReadStream } = require('fs');
- * const { createInterface } = require('readline');
- *
- * (async function processLineByLine() {
- * try {
- * const rl = createInterface({
- * input: createReadStream('big-file.txt'),
- * crlfDelay: Infinity
- * });
- *
- * rl.on('line', (line) => {
- * // Process the line.
- * });
- *
- * await once(rl, 'close');
- *
- * console.log('File processed.');
- * } catch (err) {
- * console.error(err);
- * }
- * })();
- * ```
- * @since v0.7.7
- */
- export function emitKeypressEvents(stream: NodeJS.ReadableStream, readlineInterface?: Interface): void;
- export type Direction = -1 | 0 | 1;
- export interface CursorPos {
- rows: number;
- cols: number;
- }
- /**
- * The `readline.clearLine()` method clears the current line of the given `TTY` stream
- * in a specified direction identified by `dir`.
- * @since v0.7.7
- * @param callback Invoked once the operation completes.
- * @return `false` if `stream` wishes for the calling code to wait for the `'drain'` event to be emitted before continuing to write additional data; otherwise `true`.
- */
- export function clearLine(stream: NodeJS.WritableStream, dir: Direction, callback?: () => void): boolean;
- /**
- * The `readline.clearScreenDown()` method clears the given `TTY` stream from
- * the current position of the cursor down.
- * @since v0.7.7
- * @param callback Invoked once the operation completes.
- * @return `false` if `stream` wishes for the calling code to wait for the `'drain'` event to be emitted before continuing to write additional data; otherwise `true`.
- */
- export function clearScreenDown(stream: NodeJS.WritableStream, callback?: () => void): boolean;
- /**
- * The `readline.cursorTo()` method moves the cursor to the specified position in a
- * given `TTY` `stream`.
- * @since v0.7.7
- * @param callback Invoked once the operation completes.
- * @return `false` if `stream` wishes for the calling code to wait for the `'drain'` event to be emitted before continuing to write additional data; otherwise `true`.
- */
- export function cursorTo(stream: NodeJS.WritableStream, x: number, y?: number, callback?: () => void): boolean;
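- /*
- * Illustrative use of the cursor helpers declared above (a sketch, not part of the
- * type declarations): move to the top-left corner and clear everything below it.
- *
- *   import * as readline from 'readline';
- *   readline.cursorTo(process.stdout, 0, 0);
- *   readline.clearScreenDown(process.stdout);
- *   process.stdout.write('Hello, world!');
- */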
- /**
- * The `readline.moveCursor()` method moves the cursor _relative_ to its current
- * position in a given `TTY` `stream`.
- * @since v0.7.7
- * @param callback Invoked once the operation completes.
- * @return `false` if `stream` wishes for the calling code to wait for the `'drain'` event to be emitted before continuing to write additional data; otherwise `true`.
- */
- export function moveCursor(stream: NodeJS.WritableStream, dx: number, dy: number, callback?: () => void): boolean;
-}
-declare module 'node:readline' {
- export * from 'readline';
-}
diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/socket.io-adapter/Readme.md b/spaces/fffiloni/controlnet-animation-doodle/node_modules/socket.io-adapter/Readme.md
deleted file mode 100644
index 2cd9df1f8d86e61b68079250fefac76fd4a6e8d4..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/socket.io-adapter/Readme.md
+++ /dev/null
@@ -1,23 +0,0 @@
-
-# socket.io-adapter
-
-Default socket.io in-memory adapter class.
-
-Compatibility table:
-
-| Adapter version | Socket.IO server version |
-|-----------------| ------------------------ |
-| 1.x.x | 1.x.x / 2.x.x |
-| 2.x.x | 3.x.x |
-
-## How to use
-
-This module is not intended for end-user usage, but can be used as a base
-class to inherit from when building other adapters.
-
-As an example of an adapter that builds on top of this, please take a look
-at [socket.io-redis](https://github.com/learnboost/socket.io-redis).
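-
-A minimal sketch of such an adapter, assuming the 2.x API from the table above (the class
-name and the logging are purely illustrative): override `broadcast` and delegate to the
-in-memory implementation.
-
-```js
-const { Adapter } = require("socket.io-adapter");
-
-class LoggingAdapter extends Adapter {
-  broadcast(packet, opts) {
-    // opts.rooms is a Set of room names the packet is addressed to
-    console.log("broadcasting to rooms:", opts.rooms);
-    super.broadcast(packet, opts);
-  }
-}
-
-// e.g. const io = require("socket.io")(3000, { adapter: LoggingAdapter });
-```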
-
-## License
-
-MIT
diff --git a/spaces/fgpzen/remove-photo-object/src/helper.py b/spaces/fgpzen/remove-photo-object/src/helper.py
deleted file mode 100644
index 5dd517aa53a623997c3115284cd2e13a836ab225..0000000000000000000000000000000000000000
--- a/spaces/fgpzen/remove-photo-object/src/helper.py
+++ /dev/null
@@ -1,87 +0,0 @@
-import os
-import sys
-
-from urllib.parse import urlparse
-import cv2
-import numpy as np
-import torch
-from torch.hub import download_url_to_file, get_dir
-
-LAMA_MODEL_URL = os.environ.get(
- "LAMA_MODEL_URL",
- "https://github.com/Sanster/models/releases/download/add_big_lama/big-lama.pt",
-)
-
-
-def download_model(url=LAMA_MODEL_URL):
- parts = urlparse(url)
- hub_dir = get_dir()
- model_dir = os.path.join(hub_dir, "checkpoints")
- if not os.path.isdir(model_dir):
- os.makedirs(os.path.join(model_dir, "hub", "checkpoints"))
- filename = os.path.basename(parts.path)
- cached_file = os.path.join(model_dir, filename)
- if not os.path.exists(cached_file):
- sys.stderr.write('Downloading: "{}" to {}\n'.format(url, cached_file))
- hash_prefix = None
- download_url_to_file(url, cached_file, hash_prefix, progress=True)
- return cached_file
-
-
-def ceil_modulo(x, mod):
- if x % mod == 0:
- return x
- return (x // mod + 1) * mod
-
-
-def numpy_to_bytes(image_numpy: np.ndarray) -> bytes:
- data = cv2.imencode(".jpg", image_numpy)[1]
- image_bytes = data.tobytes()
- return image_bytes
-
-
-def load_img(img_bytes, gray: bool = False):
- nparr = np.frombuffer(img_bytes, np.uint8)
- if gray:
- np_img = cv2.imdecode(nparr, cv2.IMREAD_GRAYSCALE)
- else:
- np_img = cv2.imdecode(nparr, cv2.IMREAD_UNCHANGED)
- if len(np_img.shape) == 3 and np_img.shape[2] == 4:
- np_img = cv2.cvtColor(np_img, cv2.COLOR_BGRA2RGB)
- else:
- np_img = cv2.cvtColor(np_img, cv2.COLOR_BGR2RGB)
-
- return np_img
-
-
-def norm_img(np_img):
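- # Convert an HWC (or HW) image to CHW float32 scaled to [0, 1]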
- if len(np_img.shape) == 2:
- np_img = np_img[:, :, np.newaxis]
- np_img = np.transpose(np_img, (2, 0, 1))
- np_img = np_img.astype("float32") / 255
- return np_img
-
-
-def resize_max_size(
- np_img, size_limit: int, interpolation=cv2.INTER_CUBIC
-) -> np.ndarray:
- # Resize so that the image's longer side equals size_limit when it exceeds size_limit
- h, w = np_img.shape[:2]
- if max(h, w) > size_limit:
- ratio = size_limit / max(h, w)
- new_w = int(w * ratio + 0.5)
- new_h = int(h * ratio + 0.5)
- return cv2.resize(np_img, dsize=(new_w, new_h), interpolation=interpolation)
- else:
- return np_img
-
-
-def pad_img_to_modulo(img, mod):
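- # Pad a CHW image on the bottom/right (symmetric mode) so height and width are multiples of mod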
- channels, height, width = img.shape
- out_height = ceil_modulo(height, mod)
- out_width = ceil_modulo(width, mod)
- return np.pad(
- img,
- ((0, 0), (0, out_height - height), (0, out_width - width)),
- mode="symmetric",
- )
\ No newline at end of file
diff --git a/spaces/flatindo/scaler/realesrgan/__init__.py b/spaces/flatindo/scaler/realesrgan/__init__.py
deleted file mode 100644
index 2276f1eecded80d1f00ff97b45c66c7a8922b987..0000000000000000000000000000000000000000
--- a/spaces/flatindo/scaler/realesrgan/__init__.py
+++ /dev/null
@@ -1,6 +0,0 @@
-# flake8: noqa
-from .archs import *
-from .data import *
-from .models import *
-from .utils import *
-from .version import *
diff --git a/spaces/flax-community/DietNerf-Demo/jaxnerf/eval.py b/spaces/flax-community/DietNerf-Demo/jaxnerf/eval.py
deleted file mode 100644
index 52cc35e952c6dd232a736c80eaabad5076c865a3..0000000000000000000000000000000000000000
--- a/spaces/flax-community/DietNerf-Demo/jaxnerf/eval.py
+++ /dev/null
@@ -1,192 +0,0 @@
-# coding=utf-8
-# Copyright 2021 The Google Research Authors.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# Lint as: python3
-"""Evaluation script for Nerf."""
-import functools
-from os import path
-
-from absl import app
-from absl import flags
-import flax
-from flax.metrics import tensorboard
-from flax.training import checkpoints
-import jax
-from jax import random
-import numpy as np
-import tensorflow as tf
-import tensorflow_hub as tf_hub
-#import wandb
-import glob
-import cv2
-import os
-
-from jaxnerf.nerf import datasets
-from jaxnerf.nerf import models
-from jaxnerf.nerf import utils
-
-FLAGS = flags.FLAGS
-
-utils.define_flags()
-
-#LPIPS_TFHUB_PATH = "@neural-rendering/lpips/distance/1"
-
-
-def compute_lpips(image1, image2, model):
- """Compute the LPIPS metric."""
- # The LPIPS model expects a batch dimension.
- return model(
- tf.convert_to_tensor(image1[None, Ellipsis]),
- tf.convert_to_tensor(image2[None, Ellipsis]))[0]
-
-
-def main(unused_argv):
- # Hide the GPUs and TPUs from TF so it does not reserve memory on them for
- # LPIPS computation or dataset loading.
- tf.config.experimental.set_visible_devices([], "GPU")
- tf.config.experimental.set_visible_devices([], "TPU")
-
- #wandb.init(project="hf-flax-clip-nerf", entity="wandb", sync_tensorboard=True)
-
- rng = random.PRNGKey(20200823)
-
- if FLAGS.config is not None:
- utils.update_flags(FLAGS)
- if FLAGS.train_dir is None:
- raise ValueError("train_dir must be set. None set now.")
- if FLAGS.data_dir is None:
- raise ValueError("data_dir must be set. None set now.")
-
- dataset = datasets.get_dataset("test", FLAGS)
- rng, key = random.split(rng)
- model, init_variables = models.get_model(key, dataset.peek(), FLAGS)
- optimizer = flax.optim.Adam(FLAGS.lr_init).create(init_variables)
- state = utils.TrainState(optimizer=optimizer)
- del optimizer, init_variables
-
- #lpips_model = tf_hub.load(LPIPS_TFHUB_PATH)
-
- # Rendering is forced to be deterministic even if training was randomized, as
- # this eliminates "speckle" artifacts.
- def render_fn(variables, key_0, key_1, rays):
- return jax.lax.all_gather(
- model.apply(variables, key_0, key_1, rays, False), axis_name="batch")
-
- # pmap over only the data input.
- render_pfn = jax.pmap(
- render_fn,
- in_axes=(None, None, None, 0),
- donate_argnums=3,
- axis_name="batch",
- )
-
- # Compiling to the CPU because it's faster and more accurate.
- ssim_fn = jax.jit(
- functools.partial(utils.compute_ssim, max_val=1.), backend="cpu")
-
- last_step = 0
- out_dir = path.join(FLAGS.train_dir,
- "path_renders" if FLAGS.render_path else "test_preds")
- if not FLAGS.eval_once:
- summary_writer = tensorboard.SummaryWriter(
- path.join(FLAGS.train_dir, "eval"))
- while True:
- state = checkpoints.restore_checkpoint(FLAGS.train_dir, state)
- step = int(state.optimizer.state.step)
- if step <= last_step:
- continue
- if FLAGS.save_output and (not utils.isdir(out_dir)):
- utils.makedirs(out_dir)
- psnr_values = []
- ssim_values = []
- #lpips_values = []
- if not FLAGS.eval_once:
- showcase_index = np.random.randint(0, dataset.size)
- for idx in range(dataset.size):
- print(f"Evaluating {idx + 1}/{dataset.size}")
- batch = next(dataset)
- pred_color, pred_disp, pred_acc = utils.render_image(
- functools.partial(render_pfn, state.optimizer.target),
- batch["rays"],
- rng,
- FLAGS.dataset == "llff",
- chunk=FLAGS.chunk)
- if jax.host_id() != 0: # Only record via host 0.
- continue
- if not FLAGS.eval_once and idx == showcase_index:
- showcase_color = pred_color
- showcase_disp = pred_disp
- showcase_acc = pred_acc
- if not FLAGS.render_path:
- showcase_gt = batch["pixels"]
- if not FLAGS.render_path:
- psnr = utils.compute_psnr(((pred_color - batch["pixels"]) ** 2).mean())
- ssim = ssim_fn(pred_color, batch["pixels"])
- #lpips = compute_lpips(pred_color, batch["pixels"], lpips_model)
- print(f"PSNR = {psnr:.4f}, SSIM = {ssim:.4f}")
- psnr_values.append(float(psnr))
- ssim_values.append(float(ssim))
- #lpips_values.append(float(lpips))
- if FLAGS.save_output:
- utils.save_img(pred_color, path.join(out_dir, "{:03d}.png".format(idx)))
- utils.save_img(pred_disp[Ellipsis, 0],
- path.join(out_dir, "disp_{:03d}.png".format(idx)))
- if (not FLAGS.eval_once) and (jax.host_id() == 0):
- summary_writer.image("pred_color", showcase_color, step)
- summary_writer.image("pred_disp", showcase_disp, step)
- summary_writer.image("pred_acc", showcase_acc, step)
- if not FLAGS.render_path:
- summary_writer.scalar("psnr", np.mean(np.array(psnr_values)), step)
- summary_writer.scalar("ssim", np.mean(np.array(ssim_values)), step)
- #summary_writer.scalar("lpips", np.mean(np.array(lpips_values)), step)
- summary_writer.image("target", showcase_gt, step)
- if FLAGS.save_output and (not FLAGS.render_path) and (jax.host_id() == 0):
- with utils.open_file(path.join(out_dir, f"psnrs_{step}.txt"), "w") as f:
- f.write(" ".join([str(v) for v in psnr_values]))
- with utils.open_file(path.join(out_dir, f"ssims_{step}.txt"), "w") as f:
- f.write(" ".join([str(v) for v in ssim_values]))
- #with utils.open_file(path.join(out_dir, f"lpips_{step}.txt"), "w") as f:
- #f.write(" ".join([str(v) for v in lpips_values]))
- with utils.open_file(path.join(out_dir, "psnr.txt"), "w") as f:
- f.write("{}".format(np.mean(np.array(psnr_values))))
- with utils.open_file(path.join(out_dir, "ssim.txt"), "w") as f:
- f.write("{}".format(np.mean(np.array(ssim_values))))
- #with utils.open_file(path.join(out_dir, "lpips.txt"), "w") as f:
- #f.write("{}".format(np.mean(np.array(lpips_values))))
- imglist = glob.glob(os.path.join(out_dir, "[0-9][0-9][0-9].png"))
- sorted_files = sorted(imglist, key=lambda x: int(x.split('/')[-1].split('.')[0]))
- imglist2 = glob.glob(os.path.join(out_dir, "disp_[0-9][0-9][0-9].png"))
- sorted_files2 = sorted(imglist2, key=lambda x: int(x.split('/')[-1].split('.')[0].split('_')[-1]))
- fourcc = cv2.VideoWriter_fourcc(*'MP4V')
- fps = 10.0
- # Read the first frame to get the output dimensions before creating the writer.
- img = cv2.imread(sorted_files[0], cv2.IMREAD_COLOR)
- out = cv2.VideoWriter(os.path.join(out_dir, "rendering_video.mp4"), fourcc, fps,
- (2 * img.shape[1], img.shape[0]))
-
- for i in range(len(sorted_files)):
- img = cv2.imread(sorted_files[i], cv2.IMREAD_COLOR)
- img2 = cv2.imread(sorted_files2[i], cv2.IMREAD_COLOR)
- catimg = np.concatenate((img, img2), axis=1)
- out.write(catimg)
-
- out.release()
- if FLAGS.eval_once:
- break
- if int(step) >= FLAGS.max_steps:
- break
- last_step = step
-
-
-if __name__ == "__main__":
- app.run(main)
diff --git a/spaces/flowers-team/Interactive_DeepRL_Demo/js/bodies/walkers/classic_bipedal_body.js b/spaces/flowers-team/Interactive_DeepRL_Demo/js/bodies/walkers/classic_bipedal_body.js
deleted file mode 100644
index 12b574a722fb6510c78789fbdd6202a426c7ee4b..0000000000000000000000000000000000000000
--- a/spaces/flowers-team/Interactive_DeepRL_Demo/js/bodies/walkers/classic_bipedal_body.js
+++ /dev/null
@@ -1,140 +0,0 @@
-HULL_POLYGONS = [
- [[-30, +9], [+6, +9], [+34, +1], [+34, -8], [-30, -8]]
-];
-HULL_BOTTOM_WIDTH = 64;
-const SPEED_HIP = 4;
-const SPEED_KNEE = 6;
-
-/**
- * @classdesc Bipedal Walker morphology.
- */
-class ClassicBipedalBody extends WalkerAbstractBody {
- /**
- * @constructor
- * @param scale {number} - Scale of the environment
- * @param motors_torque {number}
- * @param nb_steps_under_water {number}
- * @param reset_on_hull_critical_contact {boolean}
- */
- constructor(scale, motors_torque=80, nb_steps_under_water=600, reset_on_hull_critical_contact=false) {
- super(scale, motors_torque, nb_steps_under_water);
-
- this.LEG_DOWN = 3 / this.SCALE; // 0 = center of hull
- this.LEG_W = 8 / this.SCALE;
- this.LEG_H = 34 / this.SCALE;
- this.TORQUE_PENALTY = 0.00035;
- this.reset_on_hull_critical_contact = reset_on_hull_critical_contact;
-
- this.AGENT_WIDTH = HULL_BOTTOM_WIDTH / this.SCALE;
- this.AGENT_HEIGHT = 17 / this.SCALE + this.LEG_H * 2 - this.LEG_DOWN;
- this.AGENT_CENTER_HEIGHT = this.LEG_H * 2 + this.LEG_DOWN;
-
- this.old_morphology = false;
- }
-
- draw(world, init_x, init_y, force_to_center){
- let HULL_FIXTURES = [];
- let fd_polygon;
- let vertices;
- let y_offset = 0;//10/this.SCALE;
-
- for(let polygon of HULL_POLYGONS){
- fd_polygon = new b2.FixtureDef();
- fd_polygon.shape = new b2.PolygonShape();
- vertices = [];
- for(let vertex of polygon){
- vertices.push(new b2.Vec2(vertex[0] / this.SCALE, vertex[1] / this.SCALE));
- }
- fd_polygon.shape.Set(vertices, polygon.length);
- fd_polygon.density = 5.0;
- fd_polygon.friction = 0.1;
- fd_polygon.filter.categoryBits = 0x20;
- fd_polygon.filter.maskBits = 0x000F; // 0.99 bouncy
- HULL_FIXTURES.push(fd_polygon);
- }
-
- let LEG_FD = new b2.FixtureDef();
- LEG_FD.shape = new b2.PolygonShape();
- LEG_FD.shape.SetAsBox(this.LEG_W / 2, this.LEG_H / 2);
- LEG_FD.density = 1.0;
- LEG_FD.restitution = 0.0;
- LEG_FD.filter.categoryBits = 0x20;
- LEG_FD.filter.maskBits = 0x000F;
-
- let LOWER_FD = new b2.FixtureDef();
- LOWER_FD.shape = new b2.PolygonShape();
- LOWER_FD.shape.SetAsBox(0.8 * this.LEG_W / 2, this.LEG_H / 2);
- LOWER_FD.density = 1.0;
- LOWER_FD.restitution = 0.0;
- LOWER_FD.filter.categoryBits = 0x20;
- LOWER_FD.filter.maskBits = 0x000F;
-
- let hull_bd = new b2.BodyDef();
- hull_bd.type = b2.Body.b2_dynamicBody;
- hull_bd.position.Set(init_x, init_y + y_offset);
- let hull = world.CreateBody(hull_bd);
- for(let fd of HULL_FIXTURES){
- hull.CreateFixture(fd);
- }
- hull.color1 = "#806682"; // [0.5, 0.4, 0.9]
- hull.color2 = "#4D4D80"; // [0.3, 0.3, 0.5]
- hull.ApplyForceToCenter(new b2.Vec2(force_to_center, 0), true);
- hull.SetUserData(new CustomBodyUserData(true, this.reset_on_hull_critical_contact, "hull"));
- this.body_parts.push(hull);
- this.reference_head_object = hull;
-
- // Leg and lower bodies and joints
- for(let i of [-1, +1]){
-
- // Leg body
- let leg_bd = new b2.BodyDef();
- leg_bd.type = b2.Body.b2_dynamicBody;
- leg_bd.position.Set(init_x, init_y - this.LEG_H / 2 - this.LEG_DOWN + y_offset);
- //leg_bd.angle = i * 0.05; // 2°
- let leg = world.CreateBody(leg_bd);
- leg.CreateFixture(LEG_FD);
- leg.color1 = i == -1 ? "#9C4F82" : "#964A7D"; // [0.61, 0.31, 0.51] : [0.59, 0.29, 0.49]
- leg.color2 = i == -1 ? "#69364F" : "#63304A"; // [0.41, 0.21, 0.31] : [0.39, 0.19, 0.29]
- leg.SetUserData(new CustomBodyUserData(false, false,"leg"));
- this.body_parts.push(leg);
-
- // Leg joint motor
- let leg_rjd = new b2.RevoluteJointDef();
- leg_rjd.Initialize(hull, leg, new b2.Vec2(init_x, init_y - this.LEG_DOWN + y_offset));
- leg_rjd.enableMotor = true;
- leg_rjd.enableLimit = true;
- leg_rjd.maxMotorTorque = this.MOTORS_TORQUE;
- leg_rjd.motorSpeed = i;
- leg_rjd.lowerAngle = - 0.8;
- leg_rjd.upperAngle = 1.1;
- let joint_motor = world.CreateJoint(leg_rjd);
- joint_motor.SetUserData(new CustomMotorUserData("hip", SPEED_HIP, false));
- this.motors.push(joint_motor);
-
- // lower body
- let lower_bd = new b2.BodyDef();
- lower_bd.type = b2.Body.b2_dynamicBody;
- lower_bd.position.Set(init_x, init_y - this.LEG_H * 3 / 2 - this.LEG_DOWN + y_offset);
- //lower_bd.angle = i * 0.05; // 2°
- let lower = world.CreateBody(lower_bd);
- lower.CreateFixture(LOWER_FD);
- lower.color1 = i == -1 ? "#9C4F82" : "#964A7D"; // [0.61, 0.31, 0.51] : [0.59, 0.29, 0.49]
- lower.color2 = i == -1 ? "#69364F" : "#63304A"; // [0.41, 0.21, 0.31] : [0.39, 0.19, 0.29]
- lower.SetUserData(new CustomBodyUserData(true, false,"lower"));
- this.body_parts.push(lower);
-
- // lower joint motor
- let lower_rjd = new b2.RevoluteJointDef();
- lower_rjd.Initialize(leg, lower, new b2.Vec2(init_x, init_y - this.LEG_DOWN - this.LEG_H + y_offset));
- lower_rjd.enableMotor = true;
- lower_rjd.enableLimit = true;
- lower_rjd.maxMotorTorque = this.MOTORS_TORQUE;
- lower_rjd.motorSpeed = 1;
- lower_rjd.lowerAngle = - 1.6;
- lower_rjd.upperAngle = -0.1;
- joint_motor = world.CreateJoint(lower_rjd);
- joint_motor.SetUserData(new CustomMotorUserData("knee", SPEED_KNEE, true, 1.0, lower));
- this.motors.push(joint_motor);
- }
- }
-}
\ No newline at end of file
diff --git a/spaces/furiosa-ai/ocr/style.css b/spaces/furiosa-ai/ocr/style.css
deleted file mode 100644
index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000
--- a/spaces/furiosa-ai/ocr/style.css
+++ /dev/null
@@ -1,28 +0,0 @@
-body {
- padding: 2rem;
- font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif;
-}
-
-h1 {
- font-size: 16px;
- margin-top: 0;
-}
-
-p {
- color: rgb(107, 114, 128);
- font-size: 15px;
- margin-bottom: 10px;
- margin-top: 5px;
-}
-
-.card {
- max-width: 620px;
- margin: 0 auto;
- padding: 16px;
- border: 1px solid lightgray;
- border-radius: 16px;
-}
-
-.card p:last-child {
- margin-bottom: 0;
-}
diff --git a/spaces/get-foundation/getdemo/app/__init__.py b/spaces/get-foundation/getdemo/app/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/grosenthal/aineid/src/aineid/src/test-utils.tsx b/spaces/grosenthal/aineid/src/aineid/src/test-utils.tsx
deleted file mode 100644
index d7ef121147d3cdac16949e9b9a5b1b06d8283849..0000000000000000000000000000000000000000
--- a/spaces/grosenthal/aineid/src/aineid/src/test-utils.tsx
+++ /dev/null
@@ -1,12 +0,0 @@
-import * as React from "react"
-import { render, RenderOptions } from "@testing-library/react"
-import { ChakraProvider, theme } from "@chakra-ui/react"
-
-const AllProviders = ({ children }: { children?: React.ReactNode }) => (
- <ChakraProvider theme={theme}>{children}</ChakraProvider>
-)
-
-const customRender = (ui: React.ReactElement, options?: RenderOptions) =>
- render(ui, { wrapper: AllProviders, ...options })
-
-export { customRender as render }
diff --git a/spaces/gvozdev/subspace/main.py b/spaces/gvozdev/subspace/main.py
deleted file mode 100644
index f6cd46ca108ce150daf14232ba0bdf335744e0e9..0000000000000000000000000000000000000000
--- a/spaces/gvozdev/subspace/main.py
+++ /dev/null
@@ -1,107 +0,0 @@
-import gradio as gr
-from datasets import load_dataset
-from transformers import AutoTokenizer, AutoModel
-import torch
-import pandas as pd
-import os
-
-os.environ['CURL_CA_BUNDLE'] = ''
-
-# Load dataset
-issues_dataset = load_dataset("gvozdev/subspace-info-v2", split="train")
-
-# Load tokenizer and model
-model_ckpt = "sentence-transformers/all-MiniLM-L12-v1"
-tokenizer = AutoTokenizer.from_pretrained(model_ckpt)
-model = AutoModel.from_pretrained(model_ckpt, trust_remote_code=True)
-
-# Text concatenation - not used in this case as mapping only on subject returns better results
-# def concatenate_text(examples):
-# return {
-# "text": examples["subject"]
-# + " \n "
-# + examples["details"]
-# }
-
-
-issues_dataset = issues_dataset.map()  # identity map; the concatenation step above is intentionally skipped
-
-# To speed up embedding, we can switch to GPU (change device to "cuda") - for larger models
-device = torch.device("cpu")
-model.to(device)
-
-
-# CLS pooling on model’s outputs: collect the last hidden state for the special [CLS] token
-def cls_pooling(model_output):
- return model_output.last_hidden_state[:, 0]
-
-
-# Tokenize a list of documents, place the tensors on the CPU/GPU, feed them to the model,
-# and apply CLS pooling to the outputs
-def get_embeddings(text_list):
- encoded_input = tokenizer(
- text_list, padding=True, truncation=True, return_tensors="pt"
- )
- encoded_input = {k: v.to(device) for k, v in encoded_input.items()}
- model_output = model(**encoded_input)
- return cls_pooling(model_output)
-
-# Test if the function works
-# embedding = get_embeddings(issues_dataset["details"][0])
-# print(embedding.shape)
-
-
-# Use Dataset.map() to apply get_embeddings() function to each row in the dataset and create a new "embeddings" column
-# Convert the embeddings to NumPy arrays as Datasets requires this format when we try to index them with FAISS
-embeddings_dataset = issues_dataset.map(
- lambda x: {"embeddings": get_embeddings(x["subject"]).detach().cpu().numpy()[0]}
-)
-
-# Create a FAISS index
-embeddings_dataset.add_faiss_index(column="embeddings")
-
-
-# Answer a question by returning the details of the nearest documentation entry
-def answer_question(question):
- # Get an embedding for the question
- question_embedding = get_embeddings([question]).cpu().detach().numpy()
-
- # Find a nearest neighbor in our dataset
- scores, samples = embeddings_dataset.get_nearest_examples(
- "embeddings", question_embedding, k=1
- )
-
- samples_df = pd.DataFrame.from_dict(samples)
-
- # This part is needed in case we use k>1
- # samples_df["scores"] = scores
- # samples_df.sort_values("scores", ascending=False, inplace=True)
-
- return samples_df["details"].values[0]
-
-
-# Gradio interface
-title = "Subspace Docs bot"
-description = 'This is a bot trained on Subspace Network documentation ' \
-              'to answer the most common questions about the project'
-
-
-def chat(message, history):
- history = history or []
- response = answer_question(message)
- history.append((message, response))
- return history, history
-
-
-iface = gr.Interface(
- chat,
- ["text", "state"],
- ["chatbot", "state"],
- allow_flagging="never",
- title=title,
- description=description,
- theme="Monochrome",
- examples=["What is Subspace Network?", "Do you have a token?", "System requirements"]
-)
-
-iface.launch(share=False)
diff --git a/spaces/h2oai/wave-tour/examples/table_select_single.py b/spaces/h2oai/wave-tour/examples/table_select_single.py
deleted file mode 100644
index 7e37db40c73bf3eb5482f63e34d4e1913f59fc41..0000000000000000000000000000000000000000
--- a/spaces/h2oai/wave-tour/examples/table_select_single.py
+++ /dev/null
@@ -1,32 +0,0 @@
-# Table / Preselection / Single
-# Use a #table as an advanced single-select. To allow single #selection,
-# specify the name of the row you want to pre-select in the 'value' attribute
-# or simply specify `isSingle=True`.
-# #table #selection
-# ---
-from h2o_wave import main, app, Q, ui
-
-@app('/demo')
-async def serve(q: Q):
- if q.args.show_inputs:
- q.page['example'].items = [
- ui.text(f'selected={q.args.table}'),
- ui.button(name='show_form', label='Back', primary=True),
- ]
- else:
- q.page['example'] = ui.form_card(box='1 1 -1 5', items=[
- ui.table(
- name='table',
- columns=[ui.table_column(name='text', label='Table single selection', min_width='300px')],
- rows=[
- ui.table_row(name='row1', cells=['Row 1']),
- ui.table_row(name='row2', cells=['Row 2']),
- ui.table_row(name='row3', cells=['Row 3']),
- ui.table_row(name='row4', cells=['Row 4']),
- ui.table_row(name='row5', cells=['Row 5'])
- ],
- value='row2',
- ),
- ui.button(name='show_inputs', label='Submit', primary=True)
- ])
- await q.page.save()
diff --git a/spaces/haakohu/deep_privacy2/dp2/__init__.py b/spaces/haakohu/deep_privacy2/dp2/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/hamacojr/CAT-Seg/datasets/README.md b/spaces/hamacojr/CAT-Seg/datasets/README.md
deleted file mode 100644
index db2642a9b39eab0d02857ac2dafb15b4658e7cad..0000000000000000000000000000000000000000
--- a/spaces/hamacojr/CAT-Seg/datasets/README.md
+++ /dev/null
@@ -1,167 +0,0 @@
-# Prepare Datasets for CAT-Seg
-
-A dataset can be used by accessing [DatasetCatalog](https://detectron2.readthedocs.io/modules/data.html#detectron2.data.DatasetCatalog)
-for its data, or [MetadataCatalog](https://detectron2.readthedocs.io/modules/data.html#detectron2.data.MetadataCatalog) for its metadata (class names, etc).
-This document explains how to setup the builtin datasets so they can be used by the above APIs.
-[Use Custom Datasets](https://detectron2.readthedocs.io/tutorials/datasets.html) gives a deeper dive on how to use `DatasetCatalog` and `MetadataCatalog`,
-and how to add new datasets to them.
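-
-As a quick sanity check after setup, a registered split can be inspected through these
-catalogs; a minimal sketch (`coco_2017_val` is just one example of a builtin split name):
-
-```
-from detectron2.data import DatasetCatalog, MetadataCatalog
-
-dataset_dicts = DatasetCatalog.get("coco_2017_val")  # list of per-image dicts
-metadata = MetadataCatalog.get("coco_2017_val")      # class names, colors, ...
-print(len(dataset_dicts), metadata.thing_classes[:5])
-```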
-
-CAT-Seg has builtin support for a few datasets.
-The datasets are assumed to exist in a directory specified by the environment variable
-`DETECTRON2_DATASETS`.
-Under this directory, detectron2 will look for datasets in the structure described below, if needed.
-```
-$DETECTRON2_DATASETS/
- coco/ # COCO-Stuff
- ADEChallengeData2016/ # ADE20K-150
- ADE20K_2021_17_01/ # ADE20K-847
- VOCdevkit/
- VOC2010/ # PASCAL Context
- VOC2012/ # PASCAL VOC
-```
-
-You can set the location for builtin datasets by `export DETECTRON2_DATASETS=/path/to/datasets`.
-If left unset, the default is `./datasets` relative to your current working directory.
-
-## Prepare data for [COCO-Stuff](https://github.com/nightrome/cocostuff):
-
-### Expected data structure
-
-```
-coco-stuff/
- annotations/
- train2017/
- val2017/
- images/
- train2017/
- val2017/
- # below are generated by prepare_coco_stuff.py
- annotations_detectron2/
- train2017/
- val2017/
-```
-Download the COCO (2017) images from https://cocodataset.org/
-
-```bash
-wget http://images.cocodataset.org/zips/train2017.zip
-wget http://images.cocodataset.org/zips/val2017.zip
-```
-
-Download the COCO-Stuff annotation from https://github.com/nightrome/cocostuff.
-```bash
-wget http://calvin.inf.ed.ac.uk/wp-content/uploads/data/cocostuffdataset/stuffthingmaps_trainval2017.zip
-```
-Unzip `train2017.zip`, `val2017.zip`, and `stuffthingmaps_trainval2017.zip`. Then put them to the correct location listed above.
-
-Generate the labels for training and testing.
-
-```
-python datasets/prepare_coco_stuff.py
-```
-
-
-
-## Prepare data for [ADE20K-150](http://sceneparsing.csail.mit.edu):
-
-### Expected data structure
-```
-ADEChallengeData2016/
- annotations/
- validation/
- images/
- validation/
- # below are generated by prepare_ade20k_150.py
- annotations_detectron2/
- validation/
-```
-Download the data of ADE20K-150 from http://sceneparsing.csail.mit.edu.
-```
-wget http://data.csail.mit.edu/places/ADEchallenge/ADEChallengeData2016.zip
-```
-Unzip `ADEChallengeData2016.zip` and generate the labels for testing.
-```
-python datasets/prepare_ade20k_150.py
-```
-## Prepare data for [ADE20k-847](https://groups.csail.mit.edu/vision/datasets/ADE20K/):
-
-### Expected data structure
-```
-ADE20K_2021_17_01/
- images/
- ADE/
- validation/
- index_ade20k.mat
- index_ade20k.pkl
- # below are generated by prepare_ade20k_847.py
- annotations_detectron2/
- validation/
-```
-Download the data of ADE20k-Full from https://groups.csail.mit.edu/vision/datasets/ADE20K/request_data/
-Unzip the dataset and generate the labels for testing.
-```
-python datasets/prepare_ade20k_847.py
-```
-
-## Prepare data for [PASCAL VOC 2012](http://host.robots.ox.ac.uk/pascal/VOC/voc2012/#devkit):
-
-
-### Expected data structure
-```
-VOCdevkit/
- VOC2012/
- Annotations/
- ImageSets/
- JPEGImages/
- SegmentationClass/
- SegmentationClassAug/
- SegmentationObject/
- # below are generated by prepare_voc.py
- annotations_detectron2
- annotations_detectron2_bg
-
-```
-Download the data of PASCAL VOC from http://host.robots.ox.ac.uk/pascal/VOC/voc2012/#devkit.
-
-We use SBD augmentated training data as SegmentationClassAug following [Deeplab](https://github.com/kazuto1011/deeplab-pytorch/blob/master/data/datasets/voc12/README.md).
-```
-wget http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar
-wget https://www.dropbox.com/s/oeu149j8qtbs1x0/SegmentationClassAug.zip
-```
-Unzip `VOCtrainval_11-May-2012.tar` and `SegmentationClassAug.zip`. Then put them to the correct location listed above and generate the labels for testing.
-```
-python datasets/prepare_voc.py
-```
-
-
-## Prepare data for [PASCAL Context](https://www.cs.stanford.edu/~roozbeh/pascal-context/):
-
-
-### Expected data structure
-```
-VOCdevkit/
- VOC2010/
- Annotations/
- ImageSets/
- JPEGImages/
- SegmentationClass/
- SegmentationObject/
- trainval/
- labels.txt
- 59_labels.txt
- pascalcontext_val.txt
- # below are generated by prepare_pascal_context.py
- annotations_detectron2/
- pc459_val
- pc59_val
-```
-Download the data of PASCAL VOC 2010 from https://www.cs.stanford.edu/~roozbeh/pascal-context/.
-
-```
-wget http://host.robots.ox.ac.uk/pascal/VOC/voc2010/VOCtrainval_03-May-2010.tar
-wget https://www.cs.stanford.edu/~roozbeh/pascal-context/trainval.tar.gz
-wget https://www.cs.stanford.edu/~roozbeh/pascal-context/59_labels.txt
-```
-Unzip `VOCtrainval_03-May-2010.tar` and `trainval.tar.gz`. Then put them to the correct location listed above and generate the labels for testing.
-```
-python datasets/prepare_pascal_context.py
-```
\ No newline at end of file
diff --git a/spaces/hamacojr/SAM-CAT-Seg/open_clip/src/training/main.py b/spaces/hamacojr/SAM-CAT-Seg/open_clip/src/training/main.py
deleted file mode 100644
index e648099c0209bc01451ea8ff3bb110e9a336e357..0000000000000000000000000000000000000000
--- a/spaces/hamacojr/SAM-CAT-Seg/open_clip/src/training/main.py
+++ /dev/null
@@ -1,446 +0,0 @@
-import glob
-import logging
-import os
-import re
-import subprocess
-import sys
-import random
-from datetime import datetime
-
-import numpy as np
-import torch
-from torch import optim
-from torch.cuda.amp import GradScaler
-
-try:
- import wandb
-except ImportError:
- wandb = None
-
-try:
- import torch.utils.tensorboard as tensorboard
-except ImportError:
- tensorboard = None
-
-try:
- import horovod.torch as hvd
-except ImportError:
- hvd = None
-
-from open_clip import create_model_and_transforms, trace_model, get_tokenizer
-from training.data import get_data
-from training.distributed import is_master, init_distributed_device, broadcast_object
-from training.logger import setup_logging
-from training.params import parse_args
-from training.scheduler import cosine_lr, const_lr, const_lr_cooldown
-from training.train import train_one_epoch, evaluate
-from training.file_utils import pt_load, check_exists, start_sync_process, remote_sync
-
-
-LATEST_CHECKPOINT_NAME = "epoch_latest.pt"
-
-
-def random_seed(seed=42, rank=0):
- torch.manual_seed(seed + rank)
- np.random.seed(seed + rank)
- random.seed(seed + rank)
-
-
-def natural_key(string_):
- """See http://www.codinghorror.com/blog/archives/001018.html"""
- return [int(s) if s.isdigit() else s for s in re.split(r'(\d+)', string_.lower())]
-
-
-def get_latest_checkpoint(path: str, remote : bool):
- # as written, this glob recurses, so can pick up checkpoints across multiple sub-folders
- if remote:
- result = subprocess.run(["aws", "s3", "ls", path + "/"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
- print(result)
- if result.returncode == 1:
- return None
- checkpoints = [os.path.join(path, x.split(' ')[-1]) for x in result.stdout.decode().split('\n')[:-1]]
- else:
- checkpoints = glob.glob(path + '**/*.pt', recursive=True)
- if checkpoints:
- checkpoints = sorted(checkpoints, key=natural_key)
- return checkpoints[-1]
- return None
-
-
-def main(args):
- args = parse_args(args)
-
- if torch.cuda.is_available():
- # This enables tf32 on Ampere GPUs which is only 8% slower than
- # float16 and almost as accurate as float32
- # This was a default in pytorch until 1.12
- torch.backends.cuda.matmul.allow_tf32 = True
- torch.backends.cudnn.benchmark = True
- torch.backends.cudnn.deterministic = False
-
- # fully initialize distributed device environment
- device = init_distributed_device(args)
-
- # get the name of the experiments
- if args.name is None:
- # sanitize model name for filesystem / uri use, easier if we don't use / in name as a rule?
- model_name_safe = args.model.replace('/', '-')
- date_str = datetime.now().strftime("%Y_%m_%d-%H_%M_%S")
- if args.distributed:
- # sync date_str from master to all ranks
- date_str = broadcast_object(args, date_str)
- args.name = '-'.join([
- date_str,
- f"model_{model_name_safe}",
- f"lr_{args.lr}",
- f"b_{args.batch_size}",
- f"j_{args.workers}",
- f"p_{args.precision}",
- ])
-
- resume_latest = args.resume == 'latest'
- log_base_path = os.path.join(args.logs, args.name)
- args.log_path = None
- if is_master(args, local=args.log_local):
- os.makedirs(log_base_path, exist_ok=True)
- log_filename = f'out-{args.rank}' if args.log_local else 'out.log'
- args.log_path = os.path.join(log_base_path, log_filename)
- if os.path.exists(args.log_path) and not resume_latest:
- print(
- "Error. Experiment already exists. Use --name {} to specify a new experiment."
- )
- return -1
-
- # Setup text logger
- args.log_level = logging.DEBUG if args.debug else logging.INFO
- setup_logging(args.log_path, args.log_level)
-
- # Setup wandb, tensorboard, checkpoint logging
- args.wandb = 'wandb' in args.report_to or 'all' in args.report_to
- args.tensorboard = 'tensorboard' in args.report_to or 'all' in args.report_to
- args.checkpoint_path = os.path.join(log_base_path, "checkpoints")
- if is_master(args):
- args.tensorboard_path = os.path.join(log_base_path, "tensorboard") if args.tensorboard else ''
- for dirname in [args.tensorboard_path, args.checkpoint_path]:
- if dirname:
- os.makedirs(dirname, exist_ok=True)
- else:
- args.tensorboard_path = ''
-
- if resume_latest:
- resume_from = None
- checkpoint_path = args.checkpoint_path
- # If using remote_sync, need to check the remote instead of the local checkpoints folder.
- if args.remote_sync is not None:
- checkpoint_path = os.path.join(args.remote_sync, args.name, "checkpoints")
- if args.save_most_recent:
- print('Error. Cannot use save-most-recent with remote_sync and resume latest.')
- return -1
- if args.remote_sync_protocol != 's3':
- print('Error. Sync protocol not supported when using resume latest.')
- return -1
- if is_master(args):
- # Checking for existing checkpoint via master rank only. It is possible for
- # different rank processes to see different files if a shared file-system is under
- # stress, however it's very difficult to fully work around such situations.
- if args.save_most_recent:
- # if --save-most-recent flag is set, look for latest at a fixed filename
- resume_from = os.path.join(checkpoint_path, LATEST_CHECKPOINT_NAME)
- if not os.path.exists(resume_from):
- # If no latest checkpoint has been saved yet, don't try to resume
- resume_from = None
- else:
- # otherwise, list checkpoint dir contents and pick the newest checkpoint
- resume_from = get_latest_checkpoint(checkpoint_path, remote=args.remote_sync is not None)
- if resume_from:
- logging.info(f'Found latest resume checkpoint at {resume_from}.')
- else:
- logging.info(f'No latest resume checkpoint found in {checkpoint_path}.')
- if args.distributed:
- # sync found checkpoint path to all ranks
- resume_from = broadcast_object(args, resume_from)
- args.resume = resume_from
-
- if args.copy_codebase:
- copy_codebase(args)
-
- # start the sync process if remote-sync is not None
- remote_sync_process = None
- if is_master(args) and args.remote_sync is not None:
- # first make sure it works
- result = remote_sync(
- os.path.join(args.logs, args.name),
- os.path.join(args.remote_sync, args.name),
- args.remote_sync_protocol
- )
- if result:
- logging.info('remote sync successful.')
- else:
- logging.info('Error: remote sync failed. Exiting.')
- return -1
- # if all looks good, start a process to do this every args.remote_sync_frequency seconds
- remote_sync_process = start_sync_process(
- args.remote_sync_frequency,
- os.path.join(args.logs, args.name),
- os.path.join(args.remote_sync, args.name),
- args.remote_sync_protocol
- )
- remote_sync_process.start()
-
- if args.precision == 'fp16':
- logging.warning(
- 'It is recommended to use AMP mixed-precision instead of FP16. '
- 'FP16 support needs further verification and tuning, especially for train.')
-
- if args.horovod:
- logging.info(
- f'Running in horovod mode with multiple processes / nodes. Device: {args.device}.'
- f'Process (global: {args.rank}, local {args.local_rank}), total {args.world_size}.')
- elif args.distributed:
- logging.info(
- f'Running in distributed mode with multiple processes. Device: {args.device}.'
- f'Process (global: {args.rank}, local {args.local_rank}), total {args.world_size}.')
- else:
- logging.info(f'Running with a single process. Device {args.device}.')
-
- if isinstance(args.force_image_size, (tuple, list)) and len(args.force_image_size) == 1:
- # arg is nargs, single (square) image size list -> int
- args.force_image_size = args.force_image_size[0]
- random_seed(args.seed, 0)
- model, preprocess_train, preprocess_val = create_model_and_transforms(
- args.model,
- args.pretrained,
- precision=args.precision,
- device=device,
- jit=args.torchscript,
- force_quick_gelu=args.force_quick_gelu,
- force_custom_text=args.force_custom_text,
- force_patch_dropout=args.force_patch_dropout,
- force_image_size=args.force_image_size,
- pretrained_image=args.pretrained_image,
- image_mean=args.image_mean,
- image_std=args.image_std,
- aug_cfg=args.aug_cfg,
- )
- random_seed(args.seed, args.rank)
-
- if args.trace:
- model = trace_model(model, batch_size=args.batch_size, device=device)
-
- if args.lock_image:
- # lock image tower as per LiT - https://arxiv.org/abs/2111.07991
- model.lock_image_tower(
- unlocked_groups=args.lock_image_unlocked_groups,
- freeze_bn_stats=args.lock_image_freeze_bn_stats)
- if args.lock_text:
- model.lock_text_tower(
- unlocked_layers=args.lock_text_unlocked_layers,
- freeze_layer_norm=args.lock_text_freeze_layer_norm)
-
- if args.grad_checkpointing:
- model.set_grad_checkpointing()
-
- if is_master(args):
- logging.info("Model:")
- logging.info(f"{str(model)}")
- logging.info("Params:")
- params_file = os.path.join(args.logs, args.name, "params.txt")
- with open(params_file, "w") as f:
- for name in sorted(vars(args)):
- val = getattr(args, name)
- logging.info(f" {name}: {val}")
- f.write(f"{name}: {val}\n")
-
- if args.distributed and not args.horovod:
- if args.use_bn_sync:
- model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)
- ddp_args = {}
- if args.ddp_static_graph:
- # this doesn't exist in older PyTorch, arg only added if enabled
- ddp_args['static_graph'] = True
- model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[device], **ddp_args)
-
- # create optimizer and scaler
- optimizer = None
- scaler = None
-
- if args.train_data or args.dataset_type == "synthetic":
- assert not args.trace, 'Cannot train with traced model'
-
- exclude = lambda n, p: p.ndim < 2 or "bn" in n or "ln" in n or "bias" in n or 'logit_scale' in n
- include = lambda n, p: not exclude(n, p)
-
- named_parameters = list(model.named_parameters())
- gain_or_bias_params = [p for n, p in named_parameters if exclude(n, p) and p.requires_grad]
- rest_params = [p for n, p in named_parameters if include(n, p) and p.requires_grad]
-
- optimizer = optim.AdamW(
- [
- {"params": gain_or_bias_params, "weight_decay": 0.},
- {"params": rest_params, "weight_decay": args.wd},
- ],
- lr=args.lr,
- betas=(args.beta1, args.beta2),
- eps=args.eps,
- )
- if args.horovod:
- optimizer = hvd.DistributedOptimizer(optimizer, named_parameters=model.named_parameters())
- hvd.broadcast_parameters(model.state_dict(), root_rank=0)
- hvd.broadcast_optimizer_state(optimizer, root_rank=0)
-
- scaler = GradScaler() if args.precision == "amp" else None
-
- # optionally resume from a checkpoint
- start_epoch = 0
- if args.resume is not None:
- checkpoint = pt_load(args.resume, map_location='cpu')
- if 'epoch' in checkpoint:
- # resuming a train checkpoint w/ epoch and optimizer state
- start_epoch = checkpoint["epoch"]
- sd = checkpoint["state_dict"]
- if not args.distributed and next(iter(sd.items()))[0].startswith('module'):
- sd = {k[len('module.'):]: v for k, v in sd.items()}
- model.load_state_dict(sd)
- if optimizer is not None:
- optimizer.load_state_dict(checkpoint["optimizer"])
- if scaler is not None and 'scaler' in checkpoint:
- scaler.load_state_dict(checkpoint['scaler'])
- logging.info(f"=> resuming checkpoint '{args.resume}' (epoch {start_epoch})")
- else:
- # loading a bare (model only) checkpoint for fine-tune or evaluation
- model.load_state_dict(checkpoint)
- logging.info(f"=> loaded checkpoint '{args.resume}' (epoch {start_epoch})")
-
- # initialize datasets
- data = get_data(args, (preprocess_train, preprocess_val), epoch=start_epoch, tokenizer=get_tokenizer(args.model))
- assert len(data), 'At least one train or eval dataset must be specified.'
-
- # create scheduler if train
- scheduler = None
- if 'train' in data and optimizer is not None:
- total_steps = (data["train"].dataloader.num_batches // args.accum_freq) * args.epochs
- if args.lr_scheduler == "cosine":
- scheduler = cosine_lr(optimizer, args.lr, args.warmup, total_steps)
- elif args.lr_scheduler == "const":
- scheduler = const_lr(optimizer, args.lr, args.warmup, total_steps)
- elif args.lr_scheduler == "const-cooldown":
- assert args.epochs_cooldown is not None,\
- "Please specify the number of cooldown epochs for this lr schedule."
- cooldown_steps = (data["train"].dataloader.num_batches // args.accum_freq) * args.epochs_cooldown
- scheduler = const_lr_cooldown(
- optimizer, args.lr, args.warmup, total_steps,
- cooldown_steps, args.lr_cooldown_power, args.lr_cooldown_end)
- else:
- logging.error(
- f'Unknown scheduler, {args.lr_scheduler}. Available options are: cosine, const, const-cooldown.')
- exit(1)
-
- # determine if this worker should save logs and checkpoints. only do so if it is rank == 0
- args.save_logs = args.logs and args.logs.lower() != 'none' and is_master(args)
- writer = None
- if args.save_logs and args.tensorboard:
- assert tensorboard is not None, "Please install tensorboard."
- writer = tensorboard.SummaryWriter(args.tensorboard_path)
-
- if args.wandb and is_master(args):
- assert wandb is not None, 'Please install wandb.'
- logging.debug('Starting wandb.')
- args.train_sz = data["train"].dataloader.num_samples
- if args.val_data is not None:
- args.val_sz = data["val"].dataloader.num_samples
- # you will have to configure this for your project!
- wandb.init(
- project=args.wandb_project_name,
- name=args.name,
- id=args.name,
- notes=args.wandb_notes,
- tags=[],
- resume='auto' if args.resume == "latest" else None,
- config=vars(args),
- )
- if args.debug:
- wandb.watch(model, log='all')
- wandb.save(params_file)
- logging.debug('Finished loading wandb.')
-
- if 'train' not in data:
- evaluate(model, data, start_epoch, args, writer)
- return
-
- for epoch in range(start_epoch, args.epochs):
- if is_master(args):
- logging.info(f'Start epoch {epoch}')
-
- train_one_epoch(model, data, epoch, optimizer, scaler, scheduler, args, writer)
- completed_epoch = epoch + 1
-
- if any(v in data for v in ('val', 'imagenet-val', 'imagenet-v2')):
- evaluate(model, data, completed_epoch, args, writer)
-
- # Saving checkpoints.
- if args.save_logs:
- checkpoint_dict = {
- "epoch": completed_epoch,
- "name": args.name,
- "state_dict": model.state_dict(),
- "optimizer": optimizer.state_dict(),
- }
- if scaler is not None:
- checkpoint_dict["scaler"] = scaler.state_dict()
-
- if completed_epoch == args.epochs or (
- args.save_frequency > 0 and (completed_epoch % args.save_frequency) == 0
- ):
- torch.save(
- checkpoint_dict,
- os.path.join(args.checkpoint_path, f"epoch_{completed_epoch}.pt"),
- )
- if args.delete_previous_checkpoint:
- previous_checkpoint = os.path.join(args.checkpoint_path, f"epoch_{completed_epoch - 1}.pt")
- if os.path.exists(previous_checkpoint):
- os.remove(previous_checkpoint)
-
- if args.save_most_recent:
- # try not to corrupt the latest checkpoint if save fails
- tmp_save_path = os.path.join(args.checkpoint_path, "tmp.pt")
- latest_save_path = os.path.join(args.checkpoint_path, LATEST_CHECKPOINT_NAME)
- torch.save(checkpoint_dict, tmp_save_path)
- os.replace(tmp_save_path, latest_save_path)
-
- if args.wandb and is_master(args):
- wandb.finish()
-
- # run a final sync.
- if remote_sync_process is not None:
- logging.info('Final remote sync.')
- remote_sync_process.terminate()
- result = remote_sync(
- os.path.join(args.logs, args.name),
- os.path.join(args.remote_sync, args.name),
- args.remote_sync_protocol
- )
- if result:
- logging.info('Final remote sync successful.')
- else:
- logging.info('Final remote sync failed.')
-
-
-def copy_codebase(args):
- from shutil import copytree, ignore_patterns
- new_code_path = os.path.join(args.logs, args.name, "code")
- if os.path.exists(new_code_path):
- print(
- f"Error. Experiment already exists at {new_code_path}. Use --name to specify a new experiment."
- )
- return -1
- print(f"Copying codebase to {new_code_path}")
- current_code_path = os.path.realpath(__file__)
- for _ in range(3):
- current_code_path = os.path.dirname(current_code_path)
- copytree(current_code_path, new_code_path, ignore=ignore_patterns('log', 'logs', 'wandb'))
- print("Done copying code.")
- return 1
-
-
-if __name__ == "__main__":
- main(sys.argv[1:])
diff --git a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/utils/__init__.py b/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/utils/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/hbestm/gpt-academic-play/request_llm/README.md b/spaces/hbestm/gpt-academic-play/request_llm/README.md
deleted file mode 100644
index 545bc1ffba8b79a49d994cfedcc2a787475181b2..0000000000000000000000000000000000000000
--- a/spaces/hbestm/gpt-academic-play/request_llm/README.md
+++ /dev/null
@@ -1,79 +0,0 @@
-# How to use other large language models
-
-## ChatGLM
-
-- Install the dependencies: `pip install -r request_llm/requirements_chatglm.txt`
-- Update the configuration: in config.py, set the value of LLM_MODEL to "chatglm"
-
-``` sh
-LLM_MODEL = "chatglm"
-```
-- 运行!
-``` sh
-`python main.py`
-```
-
-## Claude-Stack
-
-- 请参考此教程获取 https://zhuanlan.zhihu.com/p/627485689
- - 1、SLACK_CLAUDE_BOT_ID
- - 2、SLACK_CLAUDE_USER_TOKEN
-
-- 把token加入config.py
-
-## Newbing
-
-- 使用cookie editor获取cookie(json)
-- 把cookie(json)加入config.py (NEWBING_COOKIES)
-
-## Moss
-- 使用docker-compose
-
-## RWKV
-- 使用docker-compose
-
-## LLAMA
-- 使用docker-compose
-
-## 盘古
-- 使用docker-compose
-
-
----
-## Text-Generation-UI (TGUI,调试中,暂不可用)
-
-### 1. 部署TGUI
-``` sh
-# 1 下载模型
-git clone https://github.com/oobabooga/text-generation-webui.git
-# 2 这个仓库的最新代码有问题,回滚到几周之前
-git reset --hard fcda3f87767e642d1c0411776e549e1d3894843d
-# 3 切换路径
-cd text-generation-webui
-# 4 安装text-generation的额外依赖
-pip install accelerate bitsandbytes flexgen gradio llamacpp markdown numpy peft requests rwkv safetensors sentencepiece tqdm datasets git+https://github.com/huggingface/transformers
-# 5 下载模型
-python download-model.py facebook/galactica-1.3b
-# 其他可选如 facebook/opt-1.3b
-# facebook/galactica-1.3b
-# facebook/galactica-6.7b
-# facebook/galactica-120b
-# facebook/pygmalion-1.3b 等
-# 详情见 https://github.com/oobabooga/text-generation-webui
-
-# 6 启动text-generation
-python server.py --cpu --listen --listen-port 7865 --model facebook_galactica-1.3b
-```
-
-### 2. 修改config.py
-
-``` sh
-# LLM_MODEL格式: tgui:[模型]@[ws地址]:[ws端口] , 端口要和上面给定的端口一致
-LLM_MODEL = "tgui:galactica-1.3b@localhost:7860"
-```
-
-### 3. 运行!
-``` sh
-cd chatgpt-academic
-python main.py
-```
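Pulling the settings mentioned in the sections above into one place, a minimal sketch of the relevant config.py entries might look as follows. The field names come from this README; the placeholder values are hypothetical.

```python
# config.py (sketch): only the fields referenced in this README are shown.
LLM_MODEL = "chatglm"                 # or e.g. "tgui:galactica-1.3b@localhost:7860"

# Claude via Slack (see the tutorial linked above); values are placeholders
SLACK_CLAUDE_BOT_ID = "B0XXXXXXX"
SLACK_CLAUDE_USER_TOKEN = "xoxp-..."

# Newbing cookies exported with a cookie editor, as a JSON string
NEWBING_COOKIES = """[{"name": "...", "value": "..."}]"""
```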
diff --git a/spaces/hdhzk/bingo/src/pages/api/create.ts b/spaces/hdhzk/bingo/src/pages/api/create.ts
deleted file mode 100644
index 508fa97ef609cbb215a61085711638e116235ebe..0000000000000000000000000000000000000000
--- a/spaces/hdhzk/bingo/src/pages/api/create.ts
+++ /dev/null
@@ -1,31 +0,0 @@
-'use server'
-
-import { NextApiRequest, NextApiResponse } from 'next'
-import { fetch, debug } from '@/lib/isomorphic'
-import { createHeaders } from '@/lib/utils'
-
-// const API_ENDPOINT = 'https://www.bing.com/turing/conversation/create'
-const API_ENDPOINT = 'https://edgeservices.bing.com/edgesvc/turing/conversation/create';
-
-export default async function handler(req: NextApiRequest, res: NextApiResponse) {
- try {
- const headers = createHeaders(req.cookies)
-
- res.writeHead(200, {
- 'Content-Type': 'application/json',
- })
-
- debug('headers', headers)
- const response = await fetch(API_ENDPOINT, { method: 'GET', headers })
- .then((res) => res.text())
-
- res.end(response)
- } catch (e) {
- return res.end(JSON.stringify({
- result: {
- value: 'UnauthorizedRequest',
- message: `${e}`
- }
- }))
- }
-}
diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNet_variants/optimizer_and_lr/nnUNetTrainerV2_Ranger_lr1en2.py b/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNet_variants/optimizer_and_lr/nnUNetTrainerV2_Ranger_lr1en2.py
deleted file mode 100644
index b1f33aa088c2b0fcc8296ed3b7fd97471ab836d4..0000000000000000000000000000000000000000
--- a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNet_variants/optimizer_and_lr/nnUNetTrainerV2_Ranger_lr1en2.py
+++ /dev/null
@@ -1,31 +0,0 @@
-# Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-
-from nnunet.training.network_training.nnUNetTrainerV2 import nnUNetTrainerV2
-from nnunet.training.optimizer.ranger import Ranger
-
-
-class nnUNetTrainerV2_Ranger_lr1en2(nnUNetTrainerV2):
- def __init__(self, plans_file, fold, output_folder=None, dataset_directory=None, batch_dice=True, stage=None,
- unpack_data=True, deterministic=True, fp16=False):
- super().__init__(plans_file, fold, output_folder, dataset_directory, batch_dice, stage, unpack_data,
- deterministic, fp16)
- self.initial_lr = 1e-2
-
- def initialize_optimizer_and_scheduler(self):
- self.optimizer = Ranger(self.network.parameters(), self.initial_lr, k=6, N_sma_threshhold=5,
- weight_decay=self.weight_decay)
- self.lr_scheduler = None
-
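The class above shows the pattern nnU-Net uses for optimizer and learning-rate variants: subclass nnUNetTrainerV2, override initial_lr, and plug in the optimizer. A hypothetical variant with a smaller learning rate would follow the same shape; the class name and value below are illustrative, not part of the repository.

```python
class nnUNetTrainerV2_Ranger_lr1en3(nnUNetTrainerV2):  # hypothetical example
    def __init__(self, plans_file, fold, output_folder=None, dataset_directory=None, batch_dice=True,
                 stage=None, unpack_data=True, deterministic=True, fp16=False):
        super().__init__(plans_file, fold, output_folder, dataset_directory, batch_dice, stage,
                         unpack_data, deterministic, fp16)
        self.initial_lr = 1e-3  # only the learning rate differs from the class above

    def initialize_optimizer_and_scheduler(self):
        self.optimizer = Ranger(self.network.parameters(), self.initial_lr, k=6, N_sma_threshhold=5,
                                weight_decay=self.weight_decay)
        self.lr_scheduler = None
```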
diff --git a/spaces/huggingface-projects/InstructPix2Pix-Chatbot-ui/README.md b/spaces/huggingface-projects/InstructPix2Pix-Chatbot-ui/README.md
deleted file mode 100644
index 6c0a1cbaefaf5e7dfd7dc5da9a17118da6483fe4..0000000000000000000000000000000000000000
--- a/spaces/huggingface-projects/InstructPix2Pix-Chatbot-ui/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: InstructPix2Pix Chatbot ui
-emoji: 👁
-colorFrom: pink
-colorTo: purple
-sdk: docker
-pinned: false
-app_port: 8080
-fullWidth: true
-duplicated_from: radames/sveltekit-tailwindcss-docker
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/huggingface-projects/stable-diffusion-multiplayer/static/_app/immutable/chunks/_page-802cc2a3.js b/spaces/huggingface-projects/stable-diffusion-multiplayer/static/_app/immutable/chunks/_page-802cc2a3.js
deleted file mode 100644
index 470b827d2ea3fc294e0b8a1ff4ca645db498ef5a..0000000000000000000000000000000000000000
--- a/spaces/huggingface-projects/stable-diffusion-multiplayer/static/_app/immutable/chunks/_page-802cc2a3.js
+++ /dev/null
@@ -1 +0,0 @@
-const e=!0,r=Object.freeze(Object.defineProperty({__proto__:null,prerender:!0},Symbol.toStringTag,{value:"Module"}));export{r as _,e as p};
diff --git a/spaces/hushell/pmf_with_gis/models/protonet.py b/spaces/hushell/pmf_with_gis/models/protonet.py
deleted file mode 100644
index 30136053cc1839a3c74587bc14dce5cfc1fd8bbd..0000000000000000000000000000000000000000
--- a/spaces/hushell/pmf_with_gis/models/protonet.py
+++ /dev/null
@@ -1,51 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-
-class ProtoNet(nn.Module):
- def __init__(self, backbone):
- super().__init__()
-
- # bias & scale of cosine classifier
- self.bias = nn.Parameter(torch.FloatTensor(1).fill_(0), requires_grad=True)
- self.scale_cls = nn.Parameter(torch.FloatTensor(1).fill_(10), requires_grad=True)
-
- # backbone
- self.backbone = backbone
-
- def cos_classifier(self, w, f):
- """
- w.shape = B, nC, d
- f.shape = B, M, d
- """
- f = F.normalize(f, p=2, dim=f.dim()-1, eps=1e-12)
- w = F.normalize(w, p=2, dim=w.dim()-1, eps=1e-12)
-
- cls_scores = f @ w.transpose(1, 2) # B, M, nC
- cls_scores = self.scale_cls * (cls_scores + self.bias)
- return cls_scores
-
- def forward(self, supp_x, supp_y, x):
- """
- supp_x.shape = [B, nSupp, C, H, W]
- supp_y.shape = [B, nSupp]
- x.shape = [B, nQry, C, H, W]
- """
- num_classes = supp_y.max() + 1 # NOTE: assume B==1
-
- B, nSupp, C, H, W = supp_x.shape
- supp_f = self.backbone.forward(supp_x.view(-1, C, H, W))
- supp_f = supp_f.view(B, nSupp, -1)
-
- supp_y_1hot = F.one_hot(supp_y, num_classes).transpose(1, 2) # B, nC, nSupp
-
- # B, nC, nSupp x B, nSupp, d = B, nC, d
- prototypes = torch.bmm(supp_y_1hot.float(), supp_f)
- prototypes = prototypes / supp_y_1hot.sum(dim=2, keepdim=True) # NOTE: may div 0 if some classes got 0 images
-
- feat = self.backbone.forward(x.view(-1, C, H, W))
- feat = feat.view(B, x.shape[1], -1) # B, nQry, d
-
- logits = self.cos_classifier(prototypes, feat) # B, nQry, nC
- return logits
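The docstrings above fix the expected tensor shapes. A small sketch of how ProtoNet might be driven, using a stand-in backbone (any module mapping [N, C, H, W] to [N, d] works); the episode sizes are arbitrary.

```python
import torch
import torch.nn as nn

class TinyBackbone(nn.Module):            # stand-in feature extractor, not the real ViT/ResNet
    def __init__(self, d=64):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, d, 3, padding=1),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten())
    def forward(self, x):
        return self.net(x)                # [N, C, H, W] -> [N, d]

proto = ProtoNet(TinyBackbone())
supp_x = torch.randn(1, 15, 3, 32, 32)                       # B=1, nSupp=15 (5-way 3-shot)
supp_y = torch.arange(5).repeat_interleave(3).unsqueeze(0)   # class labels 0..4
query  = torch.randn(1, 10, 3, 32, 32)                       # B=1, nQry=10
logits = proto(supp_x, supp_y, query)                        # -> [1, 10, 5]
```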
diff --git a/spaces/hysts/list-of-demos/constants.py b/spaces/hysts/list-of-demos/constants.py
deleted file mode 100644
index 7b2d1701f854cb951e1298eab3b41e70781d1f97..0000000000000000000000000000000000000000
--- a/spaces/hysts/list-of-demos/constants.py
+++ /dev/null
@@ -1,48 +0,0 @@
-from huggingface_hub import HfApi
-
-STATUS_CHOICES = [
- "RUNNING",
- "PAUSED",
- "SLEEPING",
- "RUNTIME_ERROR",
- "BUILD_ERROR",
- "CONFIG_ERROR",
- "BUILDING",
- "RUNNING_BUILDING",
- "NO_APP_FILE",
-]
-HARDWARE_CHOICES = [
- "cpu-basic",
- "cpu-upgrade",
- "t4-small",
- "t4-medium",
- "zero-a10g",
- "a10g-small",
- "a10g-large",
- "a100-large",
-]
-SDK_CHOICES = [
- "gradio",
- "streamlit",
- "docker",
-]
-SLEEP_TIME_INT_TO_STR = {
- 0: "null",
- 300: "5 minutes",
- 900: "15 minutes",
- 1800: "30 minutes",
- 3600: "1 hour",
- 36000: "10 hours",
- 86400: "24 hours",
- 172800: "48 hours",
- 259200: "72 hours",
- 604800: "1 week",
-}
-SLEEP_TIME_CHOICES = list(SLEEP_TIME_INT_TO_STR.values())
-SLEEP_TIME_STR_TO_INT = {v: k for k, v in SLEEP_TIME_INT_TO_STR.items()}
-
-VISIBILITY_CHOICES = ["public", "private"]
-
-api = HfApi()
-WHOAMI = api.whoami()["name"]
-OWNER_CHOICES = [WHOAMI, "other organizations"]
diff --git a/spaces/hyxue/HiFiFace-inference-demo/arcface_torch/backbones/__init__.py b/spaces/hyxue/HiFiFace-inference-demo/arcface_torch/backbones/__init__.py
deleted file mode 100644
index 94288c3af835e3513ddc70eb4cfb7f7e86852e3f..0000000000000000000000000000000000000000
--- a/spaces/hyxue/HiFiFace-inference-demo/arcface_torch/backbones/__init__.py
+++ /dev/null
@@ -1,157 +0,0 @@
-from .iresnet import iresnet100
-from .iresnet import iresnet18
-from .iresnet import iresnet200
-from .iresnet import iresnet34
-from .iresnet import iresnet50
-from .mobilefacenet import get_mbf
-
-
-def get_model(name, **kwargs):
- # resnet
- if name == "r18":
- return iresnet18(False, **kwargs)
- elif name == "r34":
- return iresnet34(False, **kwargs)
- elif name == "r50":
- return iresnet50(False, **kwargs)
- elif name == "r100":
- return iresnet100(False, **kwargs)
- elif name == "r200":
- return iresnet200(False, **kwargs)
- elif name == "r2060":
- from .iresnet2060 import iresnet2060
-
- return iresnet2060(False, **kwargs)
-
- elif name == "mbf":
- fp16 = kwargs.get("fp16", False)
- num_features = kwargs.get("num_features", 512)
- return get_mbf(fp16=fp16, num_features=num_features)
-
- elif name == "mbf_large":
- from .mobilefacenet import get_mbf_large
-
- fp16 = kwargs.get("fp16", False)
- num_features = kwargs.get("num_features", 512)
- return get_mbf_large(fp16=fp16, num_features=num_features)
-
- elif name == "vit_t":
- num_features = kwargs.get("num_features", 512)
- from .vit import VisionTransformer
-
- return VisionTransformer(
- img_size=112,
- patch_size=9,
- num_classes=num_features,
- embed_dim=256,
- depth=12,
- num_heads=8,
- drop_path_rate=0.1,
- norm_layer="ln",
- mask_ratio=0.1,
- )
-
- elif name == "vit_t_dp005_mask0": # For WebFace42M
- num_features = kwargs.get("num_features", 512)
- from .vit import VisionTransformer
-
- return VisionTransformer(
- img_size=112,
- patch_size=9,
- num_classes=num_features,
- embed_dim=256,
- depth=12,
- num_heads=8,
- drop_path_rate=0.05,
- norm_layer="ln",
- mask_ratio=0.0,
- )
-
- elif name == "vit_s":
- num_features = kwargs.get("num_features", 512)
- from .vit import VisionTransformer
-
- return VisionTransformer(
- img_size=112,
- patch_size=9,
- num_classes=num_features,
- embed_dim=512,
- depth=12,
- num_heads=8,
- drop_path_rate=0.1,
- norm_layer="ln",
- mask_ratio=0.1,
- )
-
- elif name == "vit_s_dp005_mask_0": # For WebFace42M
- num_features = kwargs.get("num_features", 512)
- from .vit import VisionTransformer
-
- return VisionTransformer(
- img_size=112,
- patch_size=9,
- num_classes=num_features,
- embed_dim=512,
- depth=12,
- num_heads=8,
- drop_path_rate=0.05,
- norm_layer="ln",
- mask_ratio=0.0,
- )
-
- elif name == "vit_b":
- # this is a feature
- num_features = kwargs.get("num_features", 512)
- from .vit import VisionTransformer
-
- return VisionTransformer(
- img_size=112,
- patch_size=9,
- num_classes=num_features,
- embed_dim=512,
- depth=24,
- num_heads=8,
- drop_path_rate=0.1,
- norm_layer="ln",
- mask_ratio=0.1,
- using_checkpoint=True,
- )
-
- elif name == "vit_b_dp005_mask_005": # For WebFace42M
- # this is a feature
- num_features = kwargs.get("num_features", 512)
- from .vit import VisionTransformer
-
- return VisionTransformer(
- img_size=112,
- patch_size=9,
- num_classes=num_features,
- embed_dim=512,
- depth=24,
- num_heads=8,
- drop_path_rate=0.05,
- norm_layer="ln",
- mask_ratio=0.05,
- using_checkpoint=True,
- )
-
- elif name == "vit_l_dp005_mask_005": # For WebFace42M
- # this is a feature
- num_features = kwargs.get("num_features", 512)
- from .vit import VisionTransformer
-
- return VisionTransformer(
- img_size=112,
- patch_size=9,
- num_classes=num_features,
- embed_dim=768,
- depth=24,
- num_heads=8,
- drop_path_rate=0.05,
- norm_layer="ln",
- mask_ratio=0.05,
- using_checkpoint=True,
- )
-
- else:
- raise ValueError()
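get_model is a small factory over the backbones imported at the top of the file. Assuming the package's iresnet, mobilefacenet, and vit modules are importable, usage might look like this; the model names follow the branches above and the kwargs mirror the ones read inside get_model.

```python
# Illustrative calls only; the surrounding training configuration is assumed.
resnet50 = get_model("r50", fp16=False, num_features=512)
mbf      = get_model("mbf", fp16=True, num_features=512)
vit_tiny = get_model("vit_t", num_features=512)   # 112x112 input, patch size 9
```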
diff --git a/spaces/imseldrith/ChatGPT-Detection/app.py b/spaces/imseldrith/ChatGPT-Detection/app.py
deleted file mode 100644
index e10b51f78e8aec67940cd974bb8d98f89b8fb0fc..0000000000000000000000000000000000000000
--- a/spaces/imseldrith/ChatGPT-Detection/app.py
+++ /dev/null
@@ -1,32 +0,0 @@
-import gradio as gr
-import spacy
-import os
-import re
-os.system("python -m spacy download en_core_web_sm")
-nlp = spacy.load("en_core_web_sm")
-
-def detect_ai_content(text):
- # Count the number of words in the text
- word_count = len(text.split())
-
- # Analyze the text using Spacy
- doc = nlp(text)
-
- # Count the number of tokens that are not in Spacy's default stop word list
- non_stopword_tokens = [token for token in doc if not token.is_stop]
- non_stopword_count = len(non_stopword_tokens)
-
- # Calculate the percentage of non-stopword tokens
- percentage_ai = (1 - non_stopword_count / word_count) * 100
-
- # Clean the text by removing extra spaces, line breaks and special characters
- cleaned_text = re.sub(r'\s+', ' ', text).strip()
- cleaned_text = re.sub(r'[^\w\s]', '', cleaned_text)
-
- # Return a dictionary with the percentage of AI-generated content and the cleaned text
- return {
- "text": cleaned_text,
- "percentage": f"{percentage_ai:.2f}% AI-generated content"
- }
-
-gr.Interface(detect_ai_content, "text", "json").launch()
\ No newline at end of file
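The score above is a simple heuristic rather than a trained detector: it reports the share of tokens that are spaCy stop words (one minus the non-stop-word fraction). A hypothetical call, assuming en_core_web_sm is installed; the exact percentage depends on spaCy's tokenization.

```python
sample = "It is what it is, and that is all there is to it."
print(detect_ai_content(sample))
# -> {'text': 'It is what it is and that is all there is to it',
#     'percentage': '...% AI-generated content'}   # mostly stop words, so a high score
```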
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/3d Piping Library For Autocad Download Crack.md b/spaces/inplisQlawa/anything-midjourney-v4-1/3d Piping Library For Autocad Download Crack.md
deleted file mode 100644
index 1febf2b835caf36361365dcea97889ba59e1cd3f..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/3d Piping Library For Autocad Download Crack.md
+++ /dev/null
@@ -1,8 +0,0 @@
-
-
-The library contains CAD files of commonly used pipe fittings which you can use in your projects. A package of symbols for very common pipe fittings has also been added. All symbols are open source and can be used for your own designs. See the license.
-
-2. Symbols library: piping design software will come equipped with symbol libraries to make your designs clear, consistent and comprehensive. The extent of the available symbols will vary from tool to tool, but you can expect to see the basics (ball valves, hoses, connectors, compressors, etc.) in the majority of piping design software programs.
-M4PLANT is a 3D pipework design software that supports P&ID creation and the automated production of piping isometrics for plants and factories. According to their website, M4PLANT provides the basis for rule-based quotation creation, integrated design, technical presentation, detailed design and documentation of your projects.
-
-Want to download the whole library (download the entire catalogue)? You can download all CAD blocks directly from your AutoCAD, without logins and any limitations. See the add-on application Block Catalog for AutoCAD 2013 and higher and the add-on application BIM-Families for Revit 2015 and higher. CAD blocks can be downloaded and used for your own personal or company design use only. Any distribution of the catalog content (to other catalogs, web download, CD/DVD media, etc.) is prohibited - see the terms of use. The DWG-version problem (not valid file, invalid file, drawing not valid, cannot open) can be solved by tip 2869. See also block statistics and the latest 100 blocks.
-
-
\ No newline at end of file
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Cash Register Express 10.1 Crack.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Cash Register Express 10.1 Crack.md
deleted file mode 100644
index 578d54e9419499885d45af2a9edc930f9a9a4629..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/Cash Register Express 10.1 Crack.md
+++ /dev/null
@@ -1,11 +0,0 @@
-
-
-These features have been added to Cash Register Express:
-- Previewing, printing, and sharing of multiple versions of a document in one view
-- New features and functionality
-- Saving options
-- New ways to share documents
-
-Cash Register Express 11 is available for purchase on the Apple Store and apple.com, starting at $199.99. The new version is available for sale immediately, but to take advantage of it you must update from the current version of Cash Register Express.
-The new version of Cash Register Express is a free update to previous versions. It is available immediately, but to take advantage of it you must update from the current version of Cash Register Express.
-
-The new version of Cash Register Express also allows you to manage multiple versions of a single document. With one-click previewing, you can see multiple versions of a document at once and easily switch between them. And with the ability to create new versions of the same document, you always have access to the exact version you need.
-
-Through the built-in Touch Bar, you can easily perform vital tasks like verifying and entering sales, storing inventory, and reporting and posting financial information. And with all-new security features and new ways to share documents, you have full control over your business with Cash Register Express.
-
-7.4. If the original covered equipment cannot be returned for any reason, Apple will provide you with an express replacement product. Apple will pay for the express replacement product and any applicable shipping charges, and you will be required to return the original covered equipment. However, if you return an express replacement product that does not meet the covered equipment warranty, Apple will not cover the cost of the express replacement product.
-
-
\ No newline at end of file
diff --git a/spaces/itbeard/CarperAI-stable-vicuna-13b-delta/README.md b/spaces/itbeard/CarperAI-stable-vicuna-13b-delta/README.md
deleted file mode 100644
index f454e9cfe196f93cf256aac75b8ddd8bc7ed81c8..0000000000000000000000000000000000000000
--- a/spaces/itbeard/CarperAI-stable-vicuna-13b-delta/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: CarperAI Stable Vicuna 13b Delta
-emoji: 🦀
-colorFrom: purple
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.28.3
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/ivotai/VITS-Umamusume-voice-synthesizer/modules.py b/spaces/ivotai/VITS-Umamusume-voice-synthesizer/modules.py
deleted file mode 100644
index f5af1fd9a20dc03707889f360a39bb4b784a6df3..0000000000000000000000000000000000000000
--- a/spaces/ivotai/VITS-Umamusume-voice-synthesizer/modules.py
+++ /dev/null
@@ -1,387 +0,0 @@
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-import commons
-from commons import init_weights, get_padding
-from transforms import piecewise_rational_quadratic_transform
-
-
-LRELU_SLOPE = 0.1
-
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
-
-
-class ConvReluNorm(nn.Module):
- def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
- assert n_layers > 1, "Number of layers should be larger than 0."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(
- nn.ReLU(),
- nn.Dropout(p_dropout))
- for _ in range(n_layers-1):
- self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DDSConv(nn.Module):
- """
-    Dilated and Depth-Separable Convolution
- """
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.):
- super().__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.convs_sep = nn.ModuleList()
- self.convs_1x1 = nn.ModuleList()
- self.norms_1 = nn.ModuleList()
- self.norms_2 = nn.ModuleList()
- for i in range(n_layers):
- dilation = kernel_size ** i
- padding = (kernel_size * dilation - dilation) // 2
- self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size,
- groups=channels, dilation=dilation, padding=padding
- ))
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
- self.norms_1.append(LayerNorm(channels))
- self.norms_2.append(LayerNorm(channels))
-
- def forward(self, x, x_mask, g=None):
- if g is not None:
- x = x + g
- for i in range(self.n_layers):
- y = self.convs_sep[i](x * x_mask)
- y = self.norms_1[i](y)
- y = F.gelu(y)
- y = self.convs_1x1[i](y)
- y = self.norms_2[i](y)
- y = F.gelu(y)
- y = self.drop(y)
- x = x + y
- return x * x_mask
-
-
-class WN(torch.nn.Module):
- def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0):
- super(WN, self).__init__()
- assert(kernel_size % 2 == 1)
-    self.hidden_channels = hidden_channels
-    self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1)
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight')
-
- for i in range(n_layers):
- dilation = dilation_rate ** i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size,
- dilation=dilation, padding=padding)
- in_layer = torch.nn.utils.weight_norm(in_layer, name='weight')
- self.in_layers.append(in_layer)
-
- # last one is not necessary
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight')
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask, g=None, **kwargs):
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:]
- else:
- g_l = torch.zeros_like(x_in)
-
- acts = commons.fused_add_tanh_sigmoid_multiply(
- x_in,
- g_l,
- n_channels_tensor)
- acts = self.drop(acts)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- res_acts = res_skip_acts[:,:self.hidden_channels,:]
- x = (x + res_acts) * x_mask
- output = output + res_skip_acts[:,self.hidden_channels:,:]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- if self.gin_channels != 0:
- torch.nn.utils.remove_weight_norm(self.cond_layer)
- for l in self.in_layers:
- torch.nn.utils.remove_weight_norm(l)
- for l in self.res_skip_layers:
- torch.nn.utils.remove_weight_norm(l)
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.convs1 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2])))
- ])
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1)))
- ])
- self.convs2.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c2(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.convs = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1])))
- ])
- self.convs.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Log(nn.Module):
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
- logdet = torch.sum(-y, [1, 2])
- return y, logdet
- else:
- x = torch.exp(x) * x_mask
- return x
-
-
-class Flip(nn.Module):
- def forward(self, x, *args, reverse=False, **kwargs):
- x = torch.flip(x, [1])
- if not reverse:
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
- return x, logdet
- else:
- return x
-
-
-class ElementwiseAffine(nn.Module):
- def __init__(self, channels):
- super().__init__()
- self.channels = channels
- self.m = nn.Parameter(torch.zeros(channels,1))
- self.logs = nn.Parameter(torch.zeros(channels,1))
-
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = self.m + torch.exp(self.logs) * x
- y = y * x_mask
- logdet = torch.sum(self.logs * x_mask, [1,2])
- return y, logdet
- else:
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
- return x
-
-
-class ResidualCouplingLayer(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=0,
- gin_channels=0,
- mean_only=False):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels)
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels]*2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1,2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
-
-
-class ConvFlow(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0):
- super().__init__()
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.num_bins = num_bins
- self.tail_bound = tail_bound
- self.half_channels = in_channels // 2
-
- self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
- self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.)
- self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0)
- h = self.convs(h, x_mask, g=g)
- h = self.proj(h) * x_mask
-
- b, c, t = x0.shape
- h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?]
-
- unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_derivatives = h[..., 2 * self.num_bins:]
-
- x1, logabsdet = piecewise_rational_quadratic_transform(x1,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=reverse,
- tails='linear',
- tail_bound=self.tail_bound
- )
-
- x = torch.cat([x0, x1], 1) * x_mask
- logdet = torch.sum(logabsdet * x_mask, [1,2])
- if not reverse:
- return x, logdet
- else:
- return x
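The flow modules above (Log, Flip, ElementwiseAffine, ResidualCouplingLayer, ConvFlow) share one interface: the forward pass returns (y, logdet) and reverse=True inverts the transform, with x shaped [B, C, T] and x_mask shaped [B, 1, T]. A minimal sketch using ElementwiseAffine, the simplest of them, to show the round trip.

```python
import torch

flow = ElementwiseAffine(channels=4)
x = torch.randn(2, 4, 10)             # [B, C, T]
x_mask = torch.ones(2, 1, 10)         # [B, 1, T]

y, logdet = flow(x, x_mask)           # forward pass, returns per-sample log-determinant
x_back = flow(y, x_mask, reverse=True)
assert torch.allclose(x * x_mask, x_back, atol=1e-6)
```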
diff --git a/spaces/jackyccl/segment-anything/segment_anything/utils/__init__.py b/spaces/jackyccl/segment-anything/segment_anything/utils/__init__.py
deleted file mode 100644
index 5277f46157403e47fd830fc519144b97ef69d4ae..0000000000000000000000000000000000000000
--- a/spaces/jackyccl/segment-anything/segment_anything/utils/__init__.py
+++ /dev/null
@@ -1,5 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
diff --git a/spaces/jbilcke-hf/ai-clip-factory/src/app/server/actions/interpolateGradio.ts b/spaces/jbilcke-hf/ai-clip-factory/src/app/server/actions/interpolateGradio.ts
deleted file mode 100644
index f7286e244a3f6b984b4ce2c118f2afabb6115f7a..0000000000000000000000000000000000000000
--- a/spaces/jbilcke-hf/ai-clip-factory/src/app/server/actions/interpolateGradio.ts
+++ /dev/null
@@ -1,41 +0,0 @@
-
-const gradioApi = `${process.env.INTERPOLATION_API_GRADIO_URL || ""}`
-const accessToken = `${process.env.AUTH_INTERPOLATION_API_GRADIO_TOKEN || ""}`
-
-export async function interpolateGradio(assetUrl: string): Promise<string> {
- // we need to remove this header perhaps
- const videoInBase64 = assetUrl.split("data:video/mp4;base64,").pop()
-
- const interpolationSteps = 2
- const nbFramesPerSecond = 32
-
- const res = await fetch(gradioApi + (gradioApi.endsWith("/") ? "" : "/") + "api/predict", {
- method: "POST",
- headers: {
- "Content-Type": "application/json",
- // Authorization: `Bearer ${token}`,
- },
- body: JSON.stringify({
- fn_index: 0, // <- important!
- data: [
- accessToken,
- videoInBase64,
- interpolationSteps,
- nbFramesPerSecond
- ],
- }),
- cache: "no-store",
- // we can also use this (see https://vercel.com/blog/vercel-cache-api-nextjs-cache)
- // next: { revalidate: 1 }
- })
-
- const { data } = await res.json()
-
- if (res.status !== 200 || !data[0]?.length) {
- // This will activate the closest `error.js` Error Boundary
- throw new Error(`Failed to fetch data (status: ${res.status})`)
- }
-
- return data[0]
-}
-
diff --git a/spaces/jgerbscheid/dpa-example/dijkprofile_annotator/__init__.py b/spaces/jgerbscheid/dpa-example/dijkprofile_annotator/__init__.py
deleted file mode 100644
index 7c0106aa5c5fae9b4acc63b17cf47b4458888620..0000000000000000000000000000000000000000
--- a/spaces/jgerbscheid/dpa-example/dijkprofile_annotator/__init__.py
+++ /dev/null
@@ -1,13 +0,0 @@
-from . import models
-from . import dataset
-from . import training
-from . import utils
-from . import config
-from . import preprocessing
-from .utils import visualize_sample
-from .utils import visualize_prediction
-from .utils import visualize_files
-from .utils import visualize_dict
-from .annotator import annotate
-from .annotator import make_predictions
-from .annotator import write_predictions_
\ No newline at end of file
diff --git a/spaces/jiangjiechen/Auction-Arena-Demo/app_modules/presets.py b/spaces/jiangjiechen/Auction-Arena-Demo/app_modules/presets.py
deleted file mode 100644
index a0974fd1efa14b02e4365a68447de87903359c51..0000000000000000000000000000000000000000
--- a/spaces/jiangjiechen/Auction-Arena-Demo/app_modules/presets.py
+++ /dev/null
@@ -1,86 +0,0 @@
-import gradio as gr
-
-title = """
Auction Arena
-
-An interactive demo for this paper: Put Your Money Where Your Mouth Is: Evaluating Strategic Planning and Execution of LLM Agents in an Auction Arena. Details of this work can be found at this page.
-
-
-After choosing items and setting the basic auction rules (like shuffle item order, setting minimal increase, enable discount if none bids, etc.), you can either watch AI vs AI in this auction arena by setting `model_name` as LLMs. Or if you like to participate in the competition yourself, you can set `model_name=human` to engage in the arena personally. Please enter your API key before start. OpenAI API Key is a must, others are not. Otherwise you will encounter errors (please refresh the page if you do).
-
-
-Feel free to contact us if you have any questions!
-"""
-
-# description_top = """\
-#
-"""
-
-small_and_beautiful_theme = gr.themes.Soft(
- primary_hue=gr.themes.Color(
- c50="#02C160",
- c100="rgba(2, 193, 96, 0.2)",
- c200="#02C160",
- c300="rgba(2, 193, 96, 0.32)",
- c400="rgba(2, 193, 96, 0.32)",
- c500="rgba(2, 193, 96, 1.0)",
- c600="rgba(2, 193, 96, 1.0)",
- c700="rgba(2, 193, 96, 0.32)",
- c800="rgba(2, 193, 96, 0.32)",
- c900="#02C160",
- c950="#02C160",
- ),
- secondary_hue=gr.themes.Color(
- c50="#576b95",
- c100="#576b95",
- c200="#576b95",
- c300="#576b95",
- c400="#576b95",
- c500="#576b95",
- c600="#576b95",
- c700="#576b95",
- c800="#576b95",
- c900="#576b95",
- c950="#576b95",
- ),
- neutral_hue=gr.themes.Color(
- name="gray",
- c50="#f9fafb",
- c100="#f3f4f6",
- c200="#e5e7eb",
- c300="#d1d5db",
- c400="#B2B2B2",
- c500="#808080",
- c600="#636363",
- c700="#515151",
- c800="#393939",
- c900="#272727",
- c950="#171717",
- ),
- radius_size=gr.themes.sizes.radius_sm,
- ).set(
- button_primary_background_fill="#06AE56",
- button_primary_background_fill_dark="#06AE56",
- button_primary_background_fill_hover="#07C863",
- button_primary_border_color="#06AE56",
- button_primary_border_color_dark="#06AE56",
- button_primary_text_color="#FFFFFF",
- button_primary_text_color_dark="#FFFFFF",
- button_secondary_background_fill="#F2F2F2",
- button_secondary_background_fill_dark="#2B2B2B",
- button_secondary_text_color="#393939",
- button_secondary_text_color_dark="#FFFFFF",
- # background_fill_primary="#F7F7F7",
- # background_fill_primary_dark="#1F1F1F",
- block_title_text_color="*primary_500",
- block_title_background_fill="*primary_100",
- input_background_fill="#F6F6F6",
- )
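These presets are meant to be imported by the Space's UI code, which is not part of this diff. A hypothetical consumer might wire them up as follows; the import path matches this file's location, but how the real app uses them is an assumption.

```python
import gradio as gr

from app_modules.presets import small_and_beautiful_theme, title  # import path assumed

with gr.Blocks(theme=small_and_beautiful_theme) as demo:
    gr.Markdown(title)
    # ... auction controls would go here ...

demo.launch()
```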
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/SelfTest/Hash/test_SHAKE.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/SelfTest/Hash/test_SHAKE.py
deleted file mode 100644
index 29bd34ede2eda2b89e2683e4472f3a270c09e5a2..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/SelfTest/Hash/test_SHAKE.py
+++ /dev/null
@@ -1,143 +0,0 @@
-# ===================================================================
-#
-# Copyright (c) 2015, Legrandin
-# All rights reserved.
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions
-# are met:
-#
-# 1. Redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer.
-# 2. Redistributions in binary form must reproduce the above copyright
-# notice, this list of conditions and the following disclaimer in
-# the documentation and/or other materials provided with the
-# distribution.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
-# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
-# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
-# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
-# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
-# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
-# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
-# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
-# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-# POSSIBILITY OF SUCH DAMAGE.
-# ===================================================================
-
-"""Self-test suite for Crypto.Hash.SHAKE128 and SHAKE256"""
-
-import unittest
-from binascii import hexlify, unhexlify
-
-from Crypto.SelfTest.loader import load_test_vectors
-from Crypto.SelfTest.st_common import list_test_cases
-
-from Crypto.Hash import SHAKE128, SHAKE256
-from Crypto.Util.py3compat import b, bchr, bord, tobytes
-
-class SHAKETest(unittest.TestCase):
-
- def test_new_positive(self):
-
- xof1 = self.shake.new()
- xof2 = self.shake.new(data=b("90"))
- xof3 = self.shake.new().update(b("90"))
-
- self.assertNotEqual(xof1.read(10), xof2.read(10))
- xof3.read(10)
- self.assertEqual(xof2.read(10), xof3.read(10))
-
- def test_update(self):
- pieces = [bchr(10) * 200, bchr(20) * 300]
- h = self.shake.new()
- h.update(pieces[0]).update(pieces[1])
- digest = h.read(10)
- h = self.shake.new()
- h.update(pieces[0] + pieces[1])
- self.assertEqual(h.read(10), digest)
-
- def test_update_negative(self):
- h = self.shake.new()
- self.assertRaises(TypeError, h.update, u"string")
-
- def test_digest(self):
- h = self.shake.new()
- digest = h.read(90)
-
- # read returns a byte string of the right length
- self.assertTrue(isinstance(digest, type(b("digest"))))
- self.assertEqual(len(digest), 90)
-
- def test_update_after_read(self):
- mac = self.shake.new()
- mac.update(b("rrrr"))
- mac.read(90)
- self.assertRaises(TypeError, mac.update, b("ttt"))
-
-
-class SHAKE128Test(SHAKETest):
- shake = SHAKE128
-
-
-class SHAKE256Test(SHAKETest):
- shake = SHAKE256
-
-
-class SHAKEVectors(unittest.TestCase):
- pass
-
-
-test_vectors_128 = load_test_vectors(("Hash", "SHA3"),
- "ShortMsgKAT_SHAKE128.txt",
- "Short Messages KAT SHAKE128",
- { "len" : lambda x: int(x) } ) or []
-
-for idx, tv in enumerate(test_vectors_128):
- if tv.len == 0:
- data = b("")
- else:
- data = tobytes(tv.msg)
-
- def new_test(self, data=data, result=tv.md):
- hobj = SHAKE128.new(data=data)
- digest = hobj.read(len(result))
- self.assertEqual(digest, result)
-
- setattr(SHAKEVectors, "test_128_%d" % idx, new_test)
-
-
-test_vectors_256 = load_test_vectors(("Hash", "SHA3"),
- "ShortMsgKAT_SHAKE256.txt",
- "Short Messages KAT SHAKE256",
- { "len" : lambda x: int(x) } ) or []
-
-for idx, tv in enumerate(test_vectors_256):
- if tv.len == 0:
- data = b("")
- else:
- data = tobytes(tv.msg)
-
- def new_test(self, data=data, result=tv.md):
- hobj = SHAKE256.new(data=data)
- digest = hobj.read(len(result))
- self.assertEqual(digest, result)
-
- setattr(SHAKEVectors, "test_256_%d" % idx, new_test)
-
-
-def get_tests(config={}):
- tests = []
- tests += list_test_cases(SHAKE128Test)
- tests += list_test_cases(SHAKE256Test)
- tests += list_test_cases(SHAKEVectors)
- return tests
-
-
-if __name__ == '__main__':
- import unittest
- suite = lambda: unittest.TestSuite(get_tests())
- unittest.main(defaultTest='suite')
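For reference, the API these tests exercise is the extendable-output (XOF) read interface: read() can be called repeatedly and continues the same output stream, and update() is rejected once reading has started. A short sketch:

```python
from Crypto.Hash import SHAKE128

xof = SHAKE128.new(data=b"hello")
first = xof.read(16)    # first 16 bytes of the output stream
more = xof.read(16)     # next 16 bytes, continuing where read() left off
assert first != more
# xof.update(b"...")    # would raise TypeError here, as test_update_after_read checks
```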
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/SelfTest/Util/test_strxor.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/SelfTest/Util/test_strxor.py
deleted file mode 100644
index c91d38f5c1a678daed6f81ea6bfd19f33e195bf9..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/SelfTest/Util/test_strxor.py
+++ /dev/null
@@ -1,280 +0,0 @@
-#
-# SelfTest/Util/test_strxor.py: Self-test for XORing
-#
-# ===================================================================
-#
-# Copyright (c) 2014, Legrandin
-# All rights reserved.
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions
-# are met:
-#
-# 1. Redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer.
-# 2. Redistributions in binary form must reproduce the above copyright
-# notice, this list of conditions and the following disclaimer in
-# the documentation and/or other materials provided with the
-# distribution.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
-# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
-# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
-# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
-# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
-# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
-# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
-# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
-# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-# POSSIBILITY OF SUCH DAMAGE.
-# ===================================================================
-
-import unittest
-from binascii import unhexlify, hexlify
-
-from Crypto.SelfTest.st_common import list_test_cases
-from Crypto.Util.strxor import strxor, strxor_c
-
-
-class StrxorTests(unittest.TestCase):
-
- def test1(self):
- term1 = unhexlify(b"ff339a83e5cd4cdf5649")
- term2 = unhexlify(b"383d4ba020573314395b")
- result = unhexlify(b"c70ed123c59a7fcb6f12")
- self.assertEqual(strxor(term1, term2), result)
- self.assertEqual(strxor(term2, term1), result)
-
- def test2(self):
- es = b""
- self.assertEqual(strxor(es, es), es)
-
- def test3(self):
- term1 = unhexlify(b"ff339a83e5cd4cdf5649")
- all_zeros = b"\x00" * len(term1)
- self.assertEqual(strxor(term1, term1), all_zeros)
-
- def test_wrong_length(self):
- term1 = unhexlify(b"ff339a83e5cd4cdf5649")
- term2 = unhexlify(b"ff339a83e5cd4cdf564990")
- self.assertRaises(ValueError, strxor, term1, term2)
-
- def test_bytearray(self):
- term1 = unhexlify(b"ff339a83e5cd4cdf5649")
- term1_ba = bytearray(term1)
- term2 = unhexlify(b"383d4ba020573314395b")
- result = unhexlify(b"c70ed123c59a7fcb6f12")
-
- self.assertEqual(strxor(term1_ba, term2), result)
-
- def test_memoryview(self):
- term1 = unhexlify(b"ff339a83e5cd4cdf5649")
- term1_mv = memoryview(term1)
- term2 = unhexlify(b"383d4ba020573314395b")
- result = unhexlify(b"c70ed123c59a7fcb6f12")
-
- self.assertEqual(strxor(term1_mv, term2), result)
-
- def test_output_bytearray(self):
- """Verify result can be stored in pre-allocated memory"""
-
- term1 = unhexlify(b"ff339a83e5cd4cdf5649")
- term2 = unhexlify(b"383d4ba020573314395b")
- original_term1 = term1[:]
- original_term2 = term2[:]
- expected_xor = unhexlify(b"c70ed123c59a7fcb6f12")
- output = bytearray(len(term1))
-
- result = strxor(term1, term2, output=output)
-
- self.assertEqual(result, None)
- self.assertEqual(output, expected_xor)
- self.assertEqual(term1, original_term1)
- self.assertEqual(term2, original_term2)
-
- def test_output_memoryview(self):
- """Verify result can be stored in pre-allocated memory"""
-
- term1 = unhexlify(b"ff339a83e5cd4cdf5649")
- term2 = unhexlify(b"383d4ba020573314395b")
- original_term1 = term1[:]
- original_term2 = term2[:]
- expected_xor = unhexlify(b"c70ed123c59a7fcb6f12")
- output = memoryview(bytearray(len(term1)))
-
- result = strxor(term1, term2, output=output)
-
- self.assertEqual(result, None)
- self.assertEqual(output, expected_xor)
- self.assertEqual(term1, original_term1)
- self.assertEqual(term2, original_term2)
-
- def test_output_overlapping_bytearray(self):
- """Verify result can be stored in overlapping memory"""
-
- term1 = bytearray(unhexlify(b"ff339a83e5cd4cdf5649"))
- term2 = unhexlify(b"383d4ba020573314395b")
- original_term2 = term2[:]
- expected_xor = unhexlify(b"c70ed123c59a7fcb6f12")
-
- result = strxor(term1, term2, output=term1)
-
- self.assertEqual(result, None)
- self.assertEqual(term1, expected_xor)
- self.assertEqual(term2, original_term2)
-
- def test_output_overlapping_memoryview(self):
- """Verify result can be stored in overlapping memory"""
-
- term1 = memoryview(bytearray(unhexlify(b"ff339a83e5cd4cdf5649")))
- term2 = unhexlify(b"383d4ba020573314395b")
- original_term2 = term2[:]
- expected_xor = unhexlify(b"c70ed123c59a7fcb6f12")
-
- result = strxor(term1, term2, output=term1)
-
- self.assertEqual(result, None)
- self.assertEqual(term1, expected_xor)
- self.assertEqual(term2, original_term2)
-
- def test_output_ro_bytes(self):
- """Verify result cannot be stored in read-only memory"""
-
- term1 = unhexlify(b"ff339a83e5cd4cdf5649")
- term2 = unhexlify(b"383d4ba020573314395b")
-
- self.assertRaises(TypeError, strxor, term1, term2, output=term1)
-
- def test_output_ro_memoryview(self):
- """Verify result cannot be stored in read-only memory"""
-
- term1 = memoryview(unhexlify(b"ff339a83e5cd4cdf5649"))
- term2 = unhexlify(b"383d4ba020573314395b")
-
- self.assertRaises(TypeError, strxor, term1, term2, output=term1)
-
- def test_output_incorrect_length(self):
- """Verify result cannot be stored in memory of incorrect length"""
-
- term1 = unhexlify(b"ff339a83e5cd4cdf5649")
- term2 = unhexlify(b"383d4ba020573314395b")
- output = bytearray(len(term1) - 1)
-
- self.assertRaises(ValueError, strxor, term1, term2, output=output)
-
-
-class Strxor_cTests(unittest.TestCase):
-
- def test1(self):
- term1 = unhexlify(b"ff339a83e5cd4cdf5649")
- result = unhexlify(b"be72dbc2a48c0d9e1708")
- self.assertEqual(strxor_c(term1, 65), result)
-
- def test2(self):
- term1 = unhexlify(b"ff339a83e5cd4cdf5649")
- self.assertEqual(strxor_c(term1, 0), term1)
-
- def test3(self):
- self.assertEqual(strxor_c(b"", 90), b"")
-
- def test_wrong_range(self):
- term1 = unhexlify(b"ff339a83e5cd4cdf5649")
- self.assertRaises(ValueError, strxor_c, term1, -1)
- self.assertRaises(ValueError, strxor_c, term1, 256)
-
- def test_bytearray(self):
- term1 = unhexlify(b"ff339a83e5cd4cdf5649")
- term1_ba = bytearray(term1)
- result = unhexlify(b"be72dbc2a48c0d9e1708")
-
- self.assertEqual(strxor_c(term1_ba, 65), result)
-
- def test_memoryview(self):
- term1 = unhexlify(b"ff339a83e5cd4cdf5649")
- term1_mv = memoryview(term1)
- result = unhexlify(b"be72dbc2a48c0d9e1708")
-
- self.assertEqual(strxor_c(term1_mv, 65), result)
-
- def test_output_bytearray(self):
- term1 = unhexlify(b"ff339a83e5cd4cdf5649")
- original_term1 = term1[:]
- expected_result = unhexlify(b"be72dbc2a48c0d9e1708")
- output = bytearray(len(term1))
-
- result = strxor_c(term1, 65, output=output)
-
- self.assertEqual(result, None)
- self.assertEqual(output, expected_result)
- self.assertEqual(term1, original_term1)
-
- def test_output_memoryview(self):
- term1 = unhexlify(b"ff339a83e5cd4cdf5649")
- original_term1 = term1[:]
- expected_result = unhexlify(b"be72dbc2a48c0d9e1708")
- output = memoryview(bytearray(len(term1)))
-
- result = strxor_c(term1, 65, output=output)
-
- self.assertEqual(result, None)
- self.assertEqual(output, expected_result)
- self.assertEqual(term1, original_term1)
-
- def test_output_overlapping_bytearray(self):
- """Verify result can be stored in overlapping memory"""
-
- term1 = bytearray(unhexlify(b"ff339a83e5cd4cdf5649"))
- expected_xor = unhexlify(b"be72dbc2a48c0d9e1708")
-
- result = strxor_c(term1, 65, output=term1)
-
- self.assertEqual(result, None)
- self.assertEqual(term1, expected_xor)
-
- def test_output_overlapping_memoryview(self):
- """Verify result can be stored in overlapping memory"""
-
- term1 = memoryview(bytearray(unhexlify(b"ff339a83e5cd4cdf5649")))
- expected_xor = unhexlify(b"be72dbc2a48c0d9e1708")
-
- result = strxor_c(term1, 65, output=term1)
-
- self.assertEqual(result, None)
- self.assertEqual(term1, expected_xor)
-
- def test_output_ro_bytes(self):
- """Verify result cannot be stored in read-only memory"""
-
- term1 = unhexlify(b"ff339a83e5cd4cdf5649")
-
- self.assertRaises(TypeError, strxor_c, term1, 65, output=term1)
-
- def test_output_ro_memoryview(self):
- """Verify result cannot be stored in read-only memory"""
-
- term1 = memoryview(unhexlify(b"ff339a83e5cd4cdf5649"))
- term2 = unhexlify(b"383d4ba020573314395b")
-
- self.assertRaises(TypeError, strxor_c, term1, 65, output=term1)
-
- def test_output_incorrect_length(self):
- """Verify result cannot be stored in memory of incorrect length"""
-
- term1 = unhexlify(b"ff339a83e5cd4cdf5649")
- output = bytearray(len(term1) - 1)
-
- self.assertRaises(ValueError, strxor_c, term1, 65, output=output)
-
-
-def get_tests(config={}):
- tests = []
- tests += list_test_cases(StrxorTests)
- tests += list_test_cases(Strxor_cTests)
- return tests
-
-
-if __name__ == '__main__':
- suite = lambda: unittest.TestSuite(get_tests())
- unittest.main(defaultTest='suite')
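The vectors above double as a usage example: strxor XORs two equal-length byte strings, and strxor_c XORs every byte with a constant, optionally writing into a pre-allocated (possibly overlapping) buffer. A short sketch reusing the same test vectors:

```python
from Crypto.Util.strxor import strxor, strxor_c

a = bytes.fromhex("ff339a83e5cd4cdf5649")
b = bytes.fromhex("383d4ba020573314395b")
assert strxor(a, b) == bytes.fromhex("c70ed123c59a7fcb6f12")   # same vector as test1

buf = bytearray(a)
strxor_c(buf, 65, output=buf)   # in-place XOR with 0x41, as the overlap tests verify
assert buf == bytearray.fromhex("be72dbc2a48c0d9e1708")
```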
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Signature/pss.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Signature/pss.py
deleted file mode 100644
index 5f34ace7838b992f1900048c04bbd283c295fe7a..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Signature/pss.py
+++ /dev/null
@@ -1,386 +0,0 @@
-# ===================================================================
-#
-# Copyright (c) 2014, Legrandin
-# All rights reserved.
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions
-# are met:
-#
-# 1. Redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer.
-# 2. Redistributions in binary form must reproduce the above copyright
-# notice, this list of conditions and the following disclaimer in
-# the documentation and/or other materials provided with the
-# distribution.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
-# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
-# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
-# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
-# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
-# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
-# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
-# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
-# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-# POSSIBILITY OF SUCH DAMAGE.
-# ===================================================================
-
-from Crypto.Util.py3compat import bchr, bord, iter_range
-import Crypto.Util.number
-from Crypto.Util.number import (ceil_div,
- long_to_bytes,
- bytes_to_long
- )
-from Crypto.Util.strxor import strxor
-from Crypto import Random
-
-
-class PSS_SigScheme:
- """A signature object for ``RSASSA-PSS``.
- Do not instantiate directly.
- Use :func:`Crypto.Signature.pss.new`.
- """
-
- def __init__(self, key, mgfunc, saltLen, randfunc):
- """Initialize this PKCS#1 PSS signature scheme object.
-
- :Parameters:
- key : an RSA key object
- If a private half is given, both signature and
- verification are possible.
- If a public half is given, only verification is possible.
- mgfunc : callable
- A mask generation function that accepts two parameters:
-            a string to use as seed, and the length of the mask to
- generate, in bytes.
- saltLen : integer
- Length of the salt, in bytes.
- randfunc : callable
- A function that returns random bytes.
- """
-
- self._key = key
- self._saltLen = saltLen
- self._mgfunc = mgfunc
- self._randfunc = randfunc
-
- def can_sign(self):
- """Return ``True`` if this object can be used to sign messages."""
- return self._key.has_private()
-
- def sign(self, msg_hash):
- """Create the PKCS#1 PSS signature of a message.
-
- This function is also called ``RSASSA-PSS-SIGN`` and
- it is specified in
-        `section 8.1.1 of RFC8017 <https://tools.ietf.org/html/rfc8017#section-8.1.1>`_.
-
- :parameter msg_hash:
- This is an object from the :mod:`Crypto.Hash` package.
- It has been used to digest the message to sign.
- :type msg_hash: hash object
-
- :return: the signature encoded as a *byte string*.
- :raise ValueError: if the RSA key is not long enough for the given hash algorithm.
- :raise TypeError: if the RSA key has no private half.
- """
-
- # Set defaults for salt length and mask generation function
- if self._saltLen is None:
- sLen = msg_hash.digest_size
- else:
- sLen = self._saltLen
-
- if self._mgfunc is None:
- mgf = lambda x, y: MGF1(x, y, msg_hash)
- else:
- mgf = self._mgfunc
-
- modBits = Crypto.Util.number.size(self._key.n)
-
- # See 8.1.1 in RFC3447
- k = ceil_div(modBits, 8) # k is length in bytes of the modulus
- # Step 1
- em = _EMSA_PSS_ENCODE(msg_hash, modBits-1, self._randfunc, mgf, sLen)
- # Step 2a (OS2IP)
- em_int = bytes_to_long(em)
- # Step 2b (RSASP1)
- m_int = self._key._decrypt(em_int)
- # Step 2c (I2OSP)
- signature = long_to_bytes(m_int, k)
- return signature
-
- def verify(self, msg_hash, signature):
- """Check if the PKCS#1 PSS signature over a message is valid.
-
- This function is also called ``RSASSA-PSS-VERIFY`` and
- it is specified in
-        `section 8.1.2 of RFC8017 <https://tools.ietf.org/html/rfc8017#section-8.1.2>`_.
-
- :parameter msg_hash:
- The hash that was carried out over the message. This is an object
- belonging to the :mod:`Crypto.Hash` module.
- :type parameter: hash object
-
- :parameter signature:
- The signature that needs to be validated.
- :type signature: bytes
-
- :raise ValueError: if the signature is not valid.
- """
-
- # Set defaults for salt length and mask generation function
- if self._saltLen is None:
- sLen = msg_hash.digest_size
- else:
- sLen = self._saltLen
- if self._mgfunc:
- mgf = self._mgfunc
- else:
- mgf = lambda x, y: MGF1(x, y, msg_hash)
-
- modBits = Crypto.Util.number.size(self._key.n)
-
- # See 8.1.2 in RFC3447
- k = ceil_div(modBits, 8) # Convert from bits to bytes
- # Step 1
- if len(signature) != k:
- raise ValueError("Incorrect signature")
- # Step 2a (OS2IP)
- signature_int = bytes_to_long(signature)
- # Step 2b (RSAVP1)
- em_int = self._key._encrypt(signature_int)
- # Step 2c (I2OSP)
- emLen = ceil_div(modBits - 1, 8)
- em = long_to_bytes(em_int, emLen)
- # Step 3/4
- _EMSA_PSS_VERIFY(msg_hash, em, modBits-1, mgf, sLen)
-
-
-def MGF1(mgfSeed, maskLen, hash_gen):
- """Mask Generation Function, described in `B.2.1 of RFC8017
- `_.
-
- :param mgfSeed:
- seed from which the mask is generated
- :type mgfSeed: byte string
-
- :param maskLen:
- intended length in bytes of the mask
- :type maskLen: integer
-
- :param hash_gen:
- A module or a hash object from :mod:`Crypto.Hash`
- :type hash_gen: hash module or hash object
-
- :return: the mask, as a *byte string*
- """
-
- T = b""
- for counter in iter_range(ceil_div(maskLen, hash_gen.digest_size)):
- c = long_to_bytes(counter, 4)
- hobj = hash_gen.new()
- hobj.update(mgfSeed + c)
- T = T + hobj.digest()
- assert(len(T) >= maskLen)
- return T[:maskLen]
-
-
-def _EMSA_PSS_ENCODE(mhash, emBits, randFunc, mgf, sLen):
- r"""
- Implement the ``EMSA-PSS-ENCODE`` function, as defined
- in PKCS#1 v2.1 (RFC3447, 9.1.1).
-
- The original ``EMSA-PSS-ENCODE`` actually accepts the message ``M``
- as input, and hashes it internally. Here, we expect that the message
- has already been hashed instead.
-
- :Parameters:
- mhash : hash object
- The hash object that holds the digest of the message being signed.
- emBits : int
- Maximum length of the final encoding, in bits.
- randFunc : callable
- An RNG function that accepts as only parameter an int, and returns
- a string of random bytes, to be used as salt.
- mgf : callable
- A mask generation function that accepts two parameters: a string to
- use as seed, and the length of the mask to generate, in bytes.
- sLen : int
- Length of the salt, in bytes.
-
- :Return: An ``emLen`` byte long string that encodes the hash
- (with ``emLen = \ceil(emBits/8)``).
-
- :Raise ValueError:
- When digest or salt length are too big.
- """
-
- emLen = ceil_div(emBits, 8)
-
- # Bitmask of the leftmost 8*emLen - emBits bits of the first byte (must stay clear)
- lmask = 0
- for i in iter_range(8*emLen-emBits):
- lmask = lmask >> 1 | 0x80
-
- # Step 1 and 2 have been already done
- # Step 3
- if emLen < mhash.digest_size+sLen+2:
- raise ValueError("Digest or salt length are too long"
- " for given key size.")
- # Step 4
- salt = randFunc(sLen)
- # Step 5
- m_prime = bchr(0)*8 + mhash.digest() + salt
- # Step 6
- h = mhash.new()
- h.update(m_prime)
- # Step 7
- ps = bchr(0)*(emLen-sLen-mhash.digest_size-2)
- # Step 8
- db = ps + bchr(1) + salt
- # Step 9
- dbMask = mgf(h.digest(), emLen-mhash.digest_size-1)
- # Step 10
- maskedDB = strxor(db, dbMask)
- # Step 11
- maskedDB = bchr(bord(maskedDB[0]) & ~lmask) + maskedDB[1:]
- # Step 12
- em = maskedDB + h.digest() + bchr(0xBC)
- return em
-
-
-def _EMSA_PSS_VERIFY(mhash, em, emBits, mgf, sLen):
- """
- Implement the ``EMSA-PSS-VERIFY`` function, as defined
- in PKCS#1 v2.1 (RFC3447, 9.1.2).
-
- ``EMSA-PSS-VERIFY`` actually accepts the message ``M`` as input,
- and hashes it internally. Here, we expect that the message has already
- been hashed instead.
-
- :Parameters:
- mhash : hash object
- The hash object that holds the digest of the message to be verified.
- em : string
- The signature to verify, therefore proving that the sender really
- signed the message that was received.
- emBits : int
- Length of the final encoding (em), in bits.
- mgf : callable
- A mask generation function that accepts two parameters: a string to
- use as seed, and the length of the mask to generate, in bytes.
- sLen : int
- Length of the salt, in bytes.
-
- :Raise ValueError:
- When the encoding is inconsistent, or the digest or salt lengths
- are too big.
- """
-
- emLen = ceil_div(emBits, 8)
-
- # Bitmask of the leftmost 8*emLen - emBits bits of the first byte (must stay clear)
- lmask = 0
- for i in iter_range(8*emLen-emBits):
- lmask = lmask >> 1 | 0x80
-
- # Step 1 and 2 have been already done
- # Step 3
- if emLen < mhash.digest_size+sLen+2:
- raise ValueError("Incorrect signature")
- # Step 4
- if ord(em[-1:]) != 0xBC:
- raise ValueError("Incorrect signature")
- # Step 5
- maskedDB = em[:emLen-mhash.digest_size-1]
- h = em[emLen-mhash.digest_size-1:-1]
- # Step 6
- if lmask & bord(em[0]):
- raise ValueError("Incorrect signature")
- # Step 7
- dbMask = mgf(h, emLen-mhash.digest_size-1)
- # Step 8
- db = strxor(maskedDB, dbMask)
- # Step 9
- db = bchr(bord(db[0]) & ~lmask) + db[1:]
- # Step 10
- if not db.startswith(bchr(0)*(emLen-mhash.digest_size-sLen-2) + bchr(1)):
- raise ValueError("Incorrect signature")
- # Step 11
- if sLen > 0:
- salt = db[-sLen:]
- else:
- salt = b""
- # Step 12
- m_prime = bchr(0)*8 + mhash.digest() + salt
- # Step 13
- hobj = mhash.new()
- hobj.update(m_prime)
- hp = hobj.digest()
- # Step 14
- if h != hp:
- raise ValueError("Incorrect signature")
-
-
-def new(rsa_key, **kwargs):
- """Create an object for making or verifying PKCS#1 PSS signatures.
-
- :parameter rsa_key:
- The RSA key to use for signing or verifying the message.
- This is a :class:`Crypto.PublicKey.RSA` object.
- Signing is only possible when ``rsa_key`` is a **private** RSA key.
- :type rsa_key: RSA object
-
- :Keyword Arguments:
-
- * *mask_func* (``callable``) --
- A function that returns the mask (as `bytes`).
- It must accept two parameters: a seed (as `bytes`)
- and the length of the data to return.
-
- If not specified, it will be the function :func:`MGF1` defined in
- `RFC8017 `_ and
- combined with the same hash algorithm applied to the
- message to sign or verify.
-
- If you want to use a different function, for instance still :func:`MGF1`
- but together with another hash, you can do::
-
- from Crypto.Hash import SHA256
- from Crypto.Signature.pss import MGF1
- mgf = lambda x, y: MGF1(x, y, SHA256)
-
- * *salt_bytes* (``integer``) --
- Length of the salt, in bytes.
- It is a value between 0 and ``emLen - hLen - 2``, where ``emLen``
- is the size of the RSA modulus and ``hLen`` is the size of the digest
- applied to the message to sign or verify.
-
- The salt is generated internally; you don't need to provide it.
-
- If not specified, the salt length will be ``hLen``.
- If it is zero, the signature scheme becomes deterministic.
-
- Note that in some implementations such as OpenSSL the default
- salt length is ``emLen - hLen - 2`` (even though it is not more
- secure than ``hLen``).
-
- * *rand_func* (``callable``) --
- A function that returns random ``bytes``, of the desired length.
- The default is :func:`Crypto.Random.get_random_bytes`.
-
- :return: a :class:`PSS_SigScheme` signature object
- """
-
- mask_func = kwargs.pop("mask_func", None)
- salt_len = kwargs.pop("salt_bytes", None)
- rand_func = kwargs.pop("rand_func", None)
- if rand_func is None:
- rand_func = Random.get_random_bytes
- if kwargs:
- raise ValueError("Unknown keywords: " + str(kwargs.keys()))
- return PSS_SigScheme(rsa_key, mask_func, salt_len, rand_func)
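A minimal usage sketch of the module deleted above, assuming the standard pycryptodome layout where it is importable as `Crypto.Signature.pss` alongside `Crypto.PublicKey.RSA` and `Crypto.Hash.SHA256` (the message content is a placeholder):

```python
# Minimal sketch: sign and verify a message with RSASSA-PSS (pycryptodome-style API).
from Crypto.PublicKey import RSA
from Crypto.Hash import SHA256
from Crypto.Signature import pss

key = RSA.generate(2048)                    # private key: can both sign and verify
h = SHA256.new(b"message to authenticate")  # hash object passed as msg_hash

signature = pss.new(key).sign(h)            # salt length defaults to the digest size

verifier = pss.new(key.publickey())         # public key: verification only
try:
    verifier.verify(h, signature)
    print("The signature is authentic.")
except ValueError:
    print("The signature is not authentic.")
```

Note that `verify` signals failure by raising `ValueError`, as the docstring above describes, rather than by returning a boolean.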
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dateutil/zoneinfo/__init__.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dateutil/zoneinfo/__init__.py
deleted file mode 100644
index 34f11ad66c88047f2c049a4cdcc937b4b78ea6d6..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dateutil/zoneinfo/__init__.py
+++ /dev/null
@@ -1,167 +0,0 @@
-# -*- coding: utf-8 -*-
-import warnings
-import json
-
-from tarfile import TarFile
-from pkgutil import get_data
-from io import BytesIO
-
-from dateutil.tz import tzfile as _tzfile
-
-__all__ = ["get_zonefile_instance", "gettz", "gettz_db_metadata"]
-
-ZONEFILENAME = "dateutil-zoneinfo.tar.gz"
-METADATA_FN = 'METADATA'
-
-
-class tzfile(_tzfile):
- def __reduce__(self):
- return (gettz, (self._filename,))
-
-
-def getzoneinfofile_stream():
- try:
- return BytesIO(get_data(__name__, ZONEFILENAME))
- except IOError as e: # TODO switch to FileNotFoundError?
- warnings.warn("I/O error({0}): {1}".format(e.errno, e.strerror))
- return None
-
-
-class ZoneInfoFile(object):
- def __init__(self, zonefile_stream=None):
- if zonefile_stream is not None:
- with TarFile.open(fileobj=zonefile_stream) as tf:
- self.zones = {zf.name: tzfile(tf.extractfile(zf), filename=zf.name)
- for zf in tf.getmembers()
- if zf.isfile() and zf.name != METADATA_FN}
- # Deal with links: they'll point to their parent object, so we
- # waste less memory
- links = {zl.name: self.zones[zl.linkname]
- for zl in tf.getmembers() if
- zl.islnk() or zl.issym()}
- self.zones.update(links)
- try:
- metadata_json = tf.extractfile(tf.getmember(METADATA_FN))
- metadata_str = metadata_json.read().decode('UTF-8')
- self.metadata = json.loads(metadata_str)
- except KeyError:
- # no metadata in tar file
- self.metadata = None
- else:
- self.zones = {}
- self.metadata = None
-
- def get(self, name, default=None):
- """
- Wrapper for :func:`ZoneInfoFile.zones.get`. This is a convenience method
- for retrieving zones from the zone dictionary.
-
- :param name:
- The name of the zone to retrieve. (Generally IANA zone names)
-
- :param default:
- The value to return in the event of a missing key.
-
- .. versionadded:: 2.6.0
-
- """
- return self.zones.get(name, default)
-
-
-# The current API has gettz as a module function, although in fact it taps into
-# a stateful class. So as a workaround for now, without changing the API, we
-# will create a new "global" class instance the first time a user requests a
-# timezone. Ugly, but adheres to the api.
-#
-# TODO: Remove after deprecation period.
-_CLASS_ZONE_INSTANCE = []
-
-
-def get_zonefile_instance(new_instance=False):
- """
- This is a convenience function which provides a :class:`ZoneInfoFile`
- instance using the data provided by the ``dateutil`` package. By default, it
- caches a single instance of the ZoneInfoFile object and returns that.
-
- :param new_instance:
- If ``True``, a new instance of :class:`ZoneInfoFile` is instantiated and
- used as the cached instance for the next call. Otherwise, new instances
- are created only as necessary.
-
- :return:
- Returns a :class:`ZoneInfoFile` object.
-
- .. versionadded:: 2.6
- """
- if new_instance:
- zif = None
- else:
- zif = getattr(get_zonefile_instance, '_cached_instance', None)
-
- if zif is None:
- zif = ZoneInfoFile(getzoneinfofile_stream())
-
- get_zonefile_instance._cached_instance = zif
-
- return zif
-
-
-def gettz(name):
- """
- This retrieves a time zone from the local zoneinfo tarball that is packaged
- with dateutil.
-
- :param name:
- An IANA-style time zone name, as found in the zoneinfo file.
-
- :return:
- Returns a :class:`dateutil.tz.tzfile` time zone object.
-
- .. warning::
- It is generally inadvisable to use this function, and it is only
- provided for API compatibility with earlier versions. This is *not*
- equivalent to ``dateutil.tz.gettz()``, which selects an appropriate
- time zone based on the inputs, favoring system zoneinfo. This is ONLY
- for accessing the dateutil-specific zoneinfo (which may be out of
- date compared to the system zoneinfo).
-
- .. deprecated:: 2.6
- If you need to use a specific zoneinfofile over the system zoneinfo,
- instantiate a :class:`dateutil.zoneinfo.ZoneInfoFile` object and call
- :func:`dateutil.zoneinfo.ZoneInfoFile.get(name)` instead.
-
- Use :func:`get_zonefile_instance` to retrieve an instance of the
- dateutil-provided zoneinfo.
- """
- warnings.warn("zoneinfo.gettz() will be removed in future versions, "
- "to use the dateutil-provided zoneinfo files, instantiate a "
- "ZoneInfoFile object and use ZoneInfoFile.zones.get() "
- "instead. See the documentation for details.",
- DeprecationWarning)
-
- if len(_CLASS_ZONE_INSTANCE) == 0:
- _CLASS_ZONE_INSTANCE.append(ZoneInfoFile(getzoneinfofile_stream()))
- return _CLASS_ZONE_INSTANCE[0].zones.get(name)
-
-
-def gettz_db_metadata():
- """ Get the zonefile metadata
-
- See `zonefile_metadata`_
-
- :returns:
- A dictionary with the database metadata
-
- .. deprecated:: 2.6
- See deprecation warning in :func:`zoneinfo.gettz`. To get metadata,
- query the attribute ``zoneinfo.ZoneInfoFile.metadata``.
- """
- warnings.warn("zoneinfo.gettz_db_metadata() will be removed in future "
- "versions, to use the dateutil-provided zoneinfo files, "
- "ZoneInfoFile object and query the 'metadata' attribute "
- "instead. See the documentation for details.",
- DeprecationWarning)
-
- if len(_CLASS_ZONE_INSTANCE) == 0:
- _CLASS_ZONE_INSTANCE.append(ZoneInfoFile(getzoneinfofile_stream()))
- return _CLASS_ZONE_INSTANCE[0].metadata
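For the preferred, non-deprecated path mentioned in the docstrings above, a minimal sketch of looking up a bundled zone via `get_zonefile_instance` (assuming a standard `python-dateutil` install that ships the zoneinfo tarball; the zone name is just an example):

```python
# Minimal sketch: look up a bundled time zone via the cached ZoneInfoFile instance.
from datetime import datetime

from dateutil.zoneinfo import get_zonefile_instance

zif = get_zonefile_instance()      # cached ZoneInfoFile built from dateutil-zoneinfo.tar.gz
tz = zif.get("Europe/Paris")       # returns a dateutil.tz.tzfile, or None if missing

if tz is not None:
    # Should print 2:00:00 (CEST) for a summer date
    print(datetime(2023, 7, 1, 12, 0, tzinfo=tz).utcoffset())
print(sorted(zif.zones)[:3])       # a few of the available IANA zone names
```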
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fastapi/dependencies/models.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fastapi/dependencies/models.py
deleted file mode 100644
index 61ef006387781b81c55fb8222449435c851097fa..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fastapi/dependencies/models.py
+++ /dev/null
@@ -1,58 +0,0 @@
-from typing import Any, Callable, List, Optional, Sequence
-
-from fastapi._compat import ModelField
-from fastapi.security.base import SecurityBase
-
-
-class SecurityRequirement:
- def __init__(
- self, security_scheme: SecurityBase, scopes: Optional[Sequence[str]] = None
- ):
- self.security_scheme = security_scheme
- self.scopes = scopes
-
-
-class Dependant:
- def __init__(
- self,
- *,
- path_params: Optional[List[ModelField]] = None,
- query_params: Optional[List[ModelField]] = None,
- header_params: Optional[List[ModelField]] = None,
- cookie_params: Optional[List[ModelField]] = None,
- body_params: Optional[List[ModelField]] = None,
- dependencies: Optional[List["Dependant"]] = None,
- security_schemes: Optional[List[SecurityRequirement]] = None,
- name: Optional[str] = None,
- call: Optional[Callable[..., Any]] = None,
- request_param_name: Optional[str] = None,
- websocket_param_name: Optional[str] = None,
- http_connection_param_name: Optional[str] = None,
- response_param_name: Optional[str] = None,
- background_tasks_param_name: Optional[str] = None,
- security_scopes_param_name: Optional[str] = None,
- security_scopes: Optional[List[str]] = None,
- use_cache: bool = True,
- path: Optional[str] = None,
- ) -> None:
- self.path_params = path_params or []
- self.query_params = query_params or []
- self.header_params = header_params or []
- self.cookie_params = cookie_params or []
- self.body_params = body_params or []
- self.dependencies = dependencies or []
- self.security_requirements = security_schemes or []
- self.request_param_name = request_param_name
- self.websocket_param_name = websocket_param_name
- self.http_connection_param_name = http_connection_param_name
- self.response_param_name = response_param_name
- self.background_tasks_param_name = background_tasks_param_name
- self.security_scopes = security_scopes
- self.security_scopes_param_name = security_scopes_param_name
- self.name = name
- self.call = call
- self.use_cache = use_cache
- # Store the path to be able to re-generate a dependable from it in overrides
- self.path = path
- # Save the cache key at creation to optimize performance
- self.cache_key = (self.call, tuple(sorted(set(self.security_scopes or []))))
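To make the role of this internal `Dependant` container more concrete, a small sketch of how FastAPI-style code typically obtains one for a path operation. The `get_dependant` helper lives in `fastapi.dependencies.utils` (not shown in this diff), so treat the exact call as an assumption:

```python
# Minimal sketch: build a Dependant for a callable and inspect its extracted parameters.
from typing import Optional

from fastapi.dependencies.utils import get_dependant  # assumed helper, defined elsewhere in FastAPI


async def read_items(q: Optional[str] = None, limit: int = 10) -> dict:
    return {"q": q, "limit": limit}


dependant = get_dependant(path="/items/", call=read_items)
print(dependant.call is read_items)                        # True
print([param.name for param in dependant.query_params])    # ['q', 'limit']
print(dependant.cache_key)                                 # (read_items, ())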
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/unicode.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/unicode.py
deleted file mode 100644
index a9ffeefac1c9e553c53bc12346e49e7ece8d364a..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/unicode.py
+++ /dev/null
@@ -1,50 +0,0 @@
-def _makeunicodes(f):
- lines = iter(f.readlines())
- unicodes = {}
- for line in lines:
- if not line:
- continue
- num, name = line.split(";")[:2]
- if name[0] == "<":
- continue # "", etc.
- num = int(num, 16)
- unicodes[num] = name
- return unicodes
-
-
-class _UnicodeCustom(object):
- def __init__(self, f):
- if isinstance(f, str):
- with open(f) as fd:
- codes = _makeunicodes(fd)
- else:
- codes = _makeunicodes(f)
- self.codes = codes
-
- def __getitem__(self, charCode):
- try:
- return self.codes[charCode]
- except KeyError:
- return "????"
-
-
-class _UnicodeBuiltin(object):
- def __getitem__(self, charCode):
- try:
- # use unicodedata backport to python2, if available:
- # https://github.com/mikekap/unicodedata2
- import unicodedata2 as unicodedata
- except ImportError:
- import unicodedata
- try:
- return unicodedata.name(chr(charCode))
- except ValueError:
- return "????"
-
-
-Unicode = _UnicodeBuiltin()
-
-
-def setUnicodeData(f):
- global Unicode
- Unicode = _UnicodeCustom(f)
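A quick sketch of how this `Unicode` helper is used (assuming fontTools is installed; `setUnicodeData` is only needed if you want to load your own `UnicodeData.txt`):

```python
# Minimal sketch: look up Unicode character names via fontTools' helper.
from fontTools.unicode import Unicode

print(Unicode[0x0041])   # 'LATIN CAPITAL LETTER A'
print(Unicode[0x1F600])  # 'GRINNING FACE'
print(Unicode[0x0378])   # '????' for unassigned code points
```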
diff --git a/spaces/johnberg/CLIPInverter/align_faces_parallel.py b/spaces/johnberg/CLIPInverter/align_faces_parallel.py
deleted file mode 100644
index a0d01296e5803e4a26b33a5dd00532767cee207a..0000000000000000000000000000000000000000
--- a/spaces/johnberg/CLIPInverter/align_faces_parallel.py
+++ /dev/null
@@ -1,209 +0,0 @@
-"""
-brief: face alignment with FFHQ method (https://github.com/NVlabs/ffhq-dataset)
-author: lzhbrian (https://lzhbrian.me)
-date: 2020.1.5
-note: code is heavily borrowed from
- https://github.com/NVlabs/ffhq-dataset
- http://dlib.net/face_landmark_detection.py.html
-
-requirements:
- apt install cmake
- conda install Pillow numpy scipy
- pip install dlib
- # download face landmark model from:
- # http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2
-"""
-from argparse import ArgumentParser
-import time
-import numpy as np
-import PIL
-import PIL.Image
-import os
-import scipy
-import scipy.ndimage
-import dlib
-import multiprocessing as mp
-import math
-
-
-SHAPE_PREDICTOR_PATH = "shape_predictor_68_face_landmarks.dat"
-
-
-def get_landmark(img, predictor):
- """get landmark with dlib
- :return: np.array shape=(68, 2)
- """
- detector = dlib.get_frontal_face_detector()
-
- # img = dlib.load_rgb_image(filepath)
- img = img
- img = np.uint8(np.array(img))
- dets = detector(img, 1)
-
- shape = None
- for k, d in enumerate(dets):
- shape = predictor(img, d)
-
- if not shape:
- # raise Exception("Could not find face in image! Please try another image!")
- return None
-
- t = list(shape.parts())
- a = []
- for tt in t:
- a.append([tt.x, tt.y])
- lm = np.array(a)
- return lm
-
-
-def align_face(img, predictor, output_size=256, transform_size=256):
- """
- :param img: PIL Image; :param predictor: dlib 68-point shape predictor
- :return: aligned PIL Image, or None if no face is found
- """
-
- lm = get_landmark(img, predictor)
- if lm is None:
- return None
-
- lm_chin = lm[0: 17] # left-right
- lm_eyebrow_left = lm[17: 22] # left-right
- lm_eyebrow_right = lm[22: 27] # left-right
- lm_nose = lm[27: 31] # top-down
- lm_nostrils = lm[31: 36] # top-down
- lm_eye_left = lm[36: 42] # left-clockwise
- lm_eye_right = lm[42: 48] # left-clockwise
- lm_mouth_outer = lm[48: 60] # left-clockwise
- lm_mouth_inner = lm[60: 68] # left-clockwise
-
- # Calculate auxiliary vectors.
- eye_left = np.mean(lm_eye_left, axis=0)
- eye_right = np.mean(lm_eye_right, axis=0)
- eye_avg = (eye_left + eye_right) * 0.5
- eye_to_eye = eye_right - eye_left
- mouth_left = lm_mouth_outer[0]
- mouth_right = lm_mouth_outer[6]
- mouth_avg = (mouth_left + mouth_right) * 0.5
- eye_to_mouth = mouth_avg - eye_avg
-
- # Choose oriented crop rectangle.
- x = eye_to_eye - np.flipud(eye_to_mouth) * [-1, 1]
- x /= np.hypot(*x)
- x *= max(np.hypot(*eye_to_eye) * 2.0, np.hypot(*eye_to_mouth) * 1.8)
- y = np.flipud(x) * [-1, 1]
- c = eye_avg + eye_to_mouth * 0.1
- quad = np.stack([c - x - y, c - x + y, c + x + y, c + x - y])
- qsize = np.hypot(*x) * 2
-
- # read image
- img = img
- enable_padding = True
-
- # Shrink.
- shrink = int(np.floor(qsize / output_size * 0.5))
- if shrink > 1:
- rsize = (int(np.rint(float(img.size[0]) / shrink)), int(np.rint(float(img.size[1]) / shrink)))
- img = img.resize(rsize, PIL.Image.LANCZOS)
- quad /= shrink
- qsize /= shrink
-
- # Crop.
- border = max(int(np.rint(qsize * 0.1)), 3)
- crop = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), int(np.ceil(max(quad[:, 0]))),
- int(np.ceil(max(quad[:, 1]))))
- crop = (max(crop[0] - border, 0), max(crop[1] - border, 0), min(crop[2] + border, img.size[0]),
- min(crop[3] + border, img.size[1]))
- if crop[2] - crop[0] < img.size[0] or crop[3] - crop[1] < img.size[1]:
- img = img.crop(crop)
- quad -= crop[0:2]
-
- # Pad.
- pad = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), int(np.ceil(max(quad[:, 0]))),
- int(np.ceil(max(quad[:, 1]))))
- pad = (max(-pad[0] + border, 0), max(-pad[1] + border, 0), max(pad[2] - img.size[0] + border, 0),
- max(pad[3] - img.size[1] + border, 0))
- if enable_padding and max(pad) > border - 4:
- pad = np.maximum(pad, int(np.rint(qsize * 0.3)))
- img = np.pad(np.float32(img), ((pad[1], pad[3]), (pad[0], pad[2]), (0, 0)), 'reflect')
- h, w, _ = img.shape
- y, x, _ = np.ogrid[:h, :w, :1]
- mask = np.maximum(1.0 - np.minimum(np.float32(x) / pad[0], np.float32(w - 1 - x) / pad[2]),
- 1.0 - np.minimum(np.float32(y) / pad[1], np.float32(h - 1 - y) / pad[3]))
- blur = qsize * 0.02
- img += (scipy.ndimage.gaussian_filter(img, [blur, blur, 0]) - img) * np.clip(mask * 3.0 + 1.0, 0.0, 1.0)
- img += (np.median(img, axis=(0, 1)) - img) * np.clip(mask, 0.0, 1.0)
- img = PIL.Image.fromarray(np.uint8(np.clip(np.rint(img), 0, 255)), 'RGB')
- quad += pad[:2]
-
- # Transform.
- img = img.transform((transform_size, transform_size), PIL.Image.QUAD, (quad + 0.5).flatten(), PIL.Image.BILINEAR)
- if output_size < transform_size:
- img = img.resize((output_size, output_size), PIL.Image.LANCZOS)
-
- # Save aligned image.
- return img
-
-
-def chunks(lst, n):
- """Yield successive n-sized chunks from lst."""
- for i in range(0, len(lst), n):
- yield lst[i:i + n]
-
-
-def extract_on_paths(file_paths):
- predictor = dlib.shape_predictor(SHAPE_PREDICTOR_PATH)
- pid = mp.current_process().name
- print('\t{} is starting to extract on #{} images'.format(pid, len(file_paths)))
- tot_count = len(file_paths)
- count = 0
- for file_path, res_path in file_paths:
- count += 1
- if count % 100 == 0:
- print('{} done with {}/{}'.format(pid, count, tot_count))
- try:
- img = PIL.Image.open(file_path).convert('RGB')
- res = align_face(img, predictor)
- res = res.convert('RGB')
- os.makedirs(os.path.dirname(res_path), exist_ok=True)
- res.save(res_path)
- except Exception:
- continue
- print('\tDone!')
-
-
-def parse_args():
- parser = ArgumentParser(add_help=False)
- parser.add_argument('--num_threads', type=int, default=1)
- parser.add_argument('--root_path', type=str, default='')
- args = parser.parse_args()
- return args
-
-
-def run(args):
- root_path = args.root_path
- out_crops_path = root_path + '_crops'
- if not os.path.exists(out_crops_path):
- os.makedirs(out_crops_path, exist_ok=True)
-
- file_paths = []
- for root, dirs, files in os.walk(root_path):
- for file in files:
- file_path = os.path.join(root, file)
- fname = os.path.join(out_crops_path, os.path.relpath(file_path, root_path))
- res_path = '{}.jpg'.format(os.path.splitext(fname)[0])
- if os.path.splitext(file_path)[1] == '.txt' or os.path.exists(res_path):
- continue
- file_paths.append((file_path, res_path))
-
- file_chunks = list(chunks(file_paths, int(math.ceil(len(file_paths) / args.num_threads))))
- print(len(file_chunks))
- pool = mp.Pool(args.num_threads)
- print('Running on {} paths\nHere we goooo'.format(len(file_paths)))
- tic = time.time()
- pool.map(extract_on_paths, file_chunks)
- toc = time.time()
- print('Mischief managed in {}s'.format(toc - tic))
-
-
-if __name__ == '__main__':
- args = parse_args()
- run(args)
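A sketch of driving the alignment helpers above on a single image, outside the multiprocessing path (the image path is a placeholder, and `shape_predictor_68_face_landmarks.dat` is assumed to have been downloaded as the header comment instructs):

```python
# Minimal sketch: align one face image with the helpers defined above.
import dlib
import PIL.Image

from align_faces_parallel import align_face, SHAPE_PREDICTOR_PATH

predictor = dlib.shape_predictor(SHAPE_PREDICTOR_PATH)   # downloaded separately
img = PIL.Image.open("photos/face.jpg").convert("RGB")   # placeholder path

aligned = align_face(img, predictor, output_size=256, transform_size=256)
if aligned is None:
    print("No face found in the image.")
else:
    aligned.save("photos/face_aligned.jpg")
```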
diff --git a/spaces/jordonpeter01/MusicGen/audiocraft/data/audio_utils.py b/spaces/jordonpeter01/MusicGen/audiocraft/data/audio_utils.py
deleted file mode 100644
index 76d4bc2a33ce722d879db2af33cd1336bd6b1fb3..0000000000000000000000000000000000000000
--- a/spaces/jordonpeter01/MusicGen/audiocraft/data/audio_utils.py
+++ /dev/null
@@ -1,174 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import sys
-import typing as tp
-
-import julius
-import torch
-import torchaudio
-
-
-def convert_audio_channels(wav: torch.Tensor, channels: int = 2) -> torch.Tensor:
- """Convert audio to the given number of channels.
-
- Args:
- wav (torch.Tensor): Audio wave of shape [B, C, T].
- channels (int): Expected number of channels as output.
- Returns:
- torch.Tensor: Downmixed or unchanged audio wave [B, C, T].
- """
- *shape, src_channels, length = wav.shape
- if src_channels == channels:
- pass
- elif channels == 1:
- # Case 1:
- # The caller asked 1-channel audio, and the stream has multiple
- # channels, downmix all channels.
- wav = wav.mean(dim=-2, keepdim=True)
- elif src_channels == 1:
- # Case 2:
- # The caller asked for multiple channels, but the input file has
- # a single channel, replicate the audio over all channels.
- wav = wav.expand(*shape, channels, length)
- elif src_channels >= channels:
- # Case 3:
- # The caller asked for multiple channels, and the input file has
- # more channels than requested. In that case return the first channels.
- wav = wav[..., :channels, :]
- else:
- # Case 4: What is a reasonable choice here?
- raise ValueError('The audio file has less channels than requested but is not mono.')
- return wav
-
-
-def convert_audio(wav: torch.Tensor, from_rate: float,
- to_rate: float, to_channels: int) -> torch.Tensor:
- """Convert audio to new sample rate and number of audio channels.
- """
- wav = julius.resample_frac(wav, int(from_rate), int(to_rate))
- wav = convert_audio_channels(wav, to_channels)
- return wav
-
-
-def normalize_loudness(wav: torch.Tensor, sample_rate: int, loudness_headroom_db: float = 14,
- loudness_compressor: bool = False, energy_floor: float = 2e-3):
- """Normalize an input signal to a user loudness in dB LKFS.
- Audio loudness is defined according to the ITU-R BS.1770-4 recommendation.
-
- Args:
- wav (torch.Tensor): Input multichannel audio data.
- sample_rate (int): Sample rate.
- loudness_headroom_db (float): Target loudness of the output in dB LUFS.
- loudness_compressor (bool): Uses tanh for soft clipping.
- energy_floor (float): anything below that RMS level will not be rescaled.
- Returns:
- output (torch.Tensor): Loudness normalized output data.
- """
- energy = wav.pow(2).mean().sqrt().item()
- if energy < energy_floor:
- return wav
- transform = torchaudio.transforms.Loudness(sample_rate)
- input_loudness_db = transform(wav).item()
- # calculate the gain needed to scale to the desired loudness level
- delta_loudness = -loudness_headroom_db - input_loudness_db
- gain = 10.0 ** (delta_loudness / 20.0)
- output = gain * wav
- if loudness_compressor:
- output = torch.tanh(output)
- assert output.isfinite().all(), (input_loudness_db, wav.pow(2).mean().sqrt())
- return output
-
-
-def _clip_wav(wav: torch.Tensor, log_clipping: bool = False, stem_name: tp.Optional[str] = None) -> None:
- """Utility function to clip the audio with logging if specified."""
- max_scale = wav.abs().max()
- if log_clipping and max_scale > 1:
- clamp_prob = (wav.abs() > 1).float().mean().item()
- print(f"CLIPPING {stem_name or ''} happening with proba (a bit of clipping is okay):",
- clamp_prob, "maximum scale: ", max_scale.item(), file=sys.stderr)
- wav.clamp_(-1, 1)
-
-
-def normalize_audio(wav: torch.Tensor, normalize: bool = True,
- strategy: str = 'peak', peak_clip_headroom_db: float = 1,
- rms_headroom_db: float = 18, loudness_headroom_db: float = 14,
- loudness_compressor: bool = False, log_clipping: bool = False,
- sample_rate: tp.Optional[int] = None,
- stem_name: tp.Optional[str] = None) -> torch.Tensor:
- """Normalize the audio according to the prescribed strategy (see after).
-
- Args:
- wav (torch.Tensor): Audio data.
- normalize (bool): if `True` (default), normalizes according to the prescribed
- strategy (see after). If `False`, the strategy is only used in case clipping
- would happen.
- strategy (str): Can be either 'clip', 'peak', or 'rms'. Default is 'peak',
- i.e. audio is normalized by its largest value. RMS normalizes by root-mean-square
- with extra headroom to avoid clipping. 'clip' just clips.
- peak_clip_headroom_db (float): Headroom in dB when doing 'peak' or 'clip' strategy.
- rms_headroom_db (float): Headroom in dB when doing 'rms' strategy. This must be much larger
- than the `peak_clip` one to avoid further clipping.
- loudness_headroom_db (float): Target loudness for loudness normalization.
- loudness_compressor (bool): If True, uses tanh based soft clipping.
- log_clipping (bool): If True, basic logging on stderr when clipping still
- occurs despite strategy (only for 'rms').
- sample_rate (int): Sample rate for the audio data (required for loudness).
- stem_name (Optional[str]): Stem name for clipping logging.
- Returns:
- torch.Tensor: Normalized audio.
- """
- scale_peak = 10 ** (-peak_clip_headroom_db / 20)
- scale_rms = 10 ** (-rms_headroom_db / 20)
- if strategy == 'peak':
- rescaling = (scale_peak / wav.abs().max())
- if normalize or rescaling < 1:
- wav = wav * rescaling
- elif strategy == 'clip':
- wav = wav.clamp(-scale_peak, scale_peak)
- elif strategy == 'rms':
- mono = wav.mean(dim=0)
- rescaling = scale_rms / mono.pow(2).mean().sqrt()
- if normalize or rescaling < 1:
- wav = wav * rescaling
- _clip_wav(wav, log_clipping=log_clipping, stem_name=stem_name)
- elif strategy == 'loudness':
- assert sample_rate is not None, "Loudness normalization requires sample rate."
- wav = normalize_loudness(wav, sample_rate, loudness_headroom_db, loudness_compressor)
- _clip_wav(wav, log_clipping=log_clipping, stem_name=stem_name)
- else:
- assert wav.abs().max() < 1
- assert strategy == '' or strategy == 'none', f"Unexpected strategy: '{strategy}'"
- return wav
-
-
-def f32_pcm(wav: torch.Tensor) -> torch.Tensor:
- """Convert audio to float 32 bits PCM format.
- """
- if wav.dtype.is_floating_point:
- return wav
- else:
- assert wav.dtype == torch.int16
- return wav.float() / 2**15
-
-
-def i16_pcm(wav: torch.Tensor) -> torch.Tensor:
- """Convert audio to int 16 bits PCM format.
-
- ..Warning:: There exist many formulas for doing this conversion. None are perfect
- due to the asymmetry of the int16 range. One either has possible clipping, DC offset,
- or inconsistencies with f32_pcm. If the given wav doesn't have enough headroom,
- it is possible that `i16_pcm(f32_pcm)) != Identity`.
- """
- if wav.dtype.is_floating_point:
- assert wav.abs().max() <= 1
- candidate = (wav * 2 ** 15).round()
- if candidate.max() >= 2 ** 15: # clipping would occur
- candidate = (wav * (2 ** 15 - 1)).round()
- return candidate.short()
- else:
- assert wav.dtype == torch.int16
- return wav
diff --git a/spaces/jordonpeter01/ai-comic-factory/src/app/interface/progress/index.tsx b/spaces/jordonpeter01/ai-comic-factory/src/app/interface/progress/index.tsx
deleted file mode 100644
index ce24276a4b241d185fce5bd306a0c3e339835626..0000000000000000000000000000000000000000
--- a/spaces/jordonpeter01/ai-comic-factory/src/app/interface/progress/index.tsx
+++ /dev/null
@@ -1,56 +0,0 @@
-import { useEffect, useRef, useState } from "react"
-
-import { ProgressBar } from "./progress-bar"
-import { cn } from "@/lib/utils"
-
-export function Progress({
- isLoading,
- resetKey = "", // when this key change, this will re-spawn the progress bar
- className = "",
-}: {
- isLoading: boolean
- resetKey?: string
- className?: string
-}) {
- const timeoutRef = useRef()
- const [progressPercent, setProcessPercent] = useState(0)
- const progressRef = useRef(0)
- const isLoadingRef = useRef(isLoading)
-
- const updateProgressBar = () => {
- const duration = 1000 // 1 sec
- const frequency = 200 // 200ms
- const nbUpdatesPerSec = duration / frequency // 5x per second
-
- // normally it takes about 45 seconds, and we will try to go below that,
- // but to be safe let's give the counter an 80-second window
- const nbSeconds = 80 // 80 sec
- const amountInPercent = 100 / (nbUpdatesPerSec * nbSeconds) // 0.25
-
- progressRef.current = Math.min(100, progressRef.current + amountInPercent)
- setProcessPercent(progressRef.current)
- }
-
- useEffect(() => {
- clearInterval(timeoutRef.current)
- isLoadingRef.current = isLoading
- progressRef.current = 0
- setProcessPercent(0)
- if (isLoading) {
- timeoutRef.current = setInterval(updateProgressBar, 200)
- }
- }, [isLoading, resetKey])
-
- return (
-
-
-
- )
-}
\ No newline at end of file
diff --git a/spaces/jphwang/architectural_styles/app.py b/spaces/jphwang/architectural_styles/app.py
deleted file mode 100644
index 821b9e116ab58de20adf24193de1063faee455ae..0000000000000000000000000000000000000000
--- a/spaces/jphwang/architectural_styles/app.py
+++ /dev/null
@@ -1,28 +0,0 @@
-import gradio as gr
-from fastai.vision.all import *
-from fastcore import *
-
-learn = load_learner('architecture.pkl')
-labels = learn.dls.vocab
-
-title = "Architectural style clasisfier"
-description = """
-Upload a picture to this architectural classifier trained with fast.ai.
-"""
-article = """
-We hope you found this useful!
-"""
-
-def predict(imgpath):
- img = PILImage.create(imgpath)
- pred, pred_idx, probs = learn.predict(img)
- return {labels[i]: float(probs[i]) for i in range(len(labels))}
-
-gr.Interface(
- fn=predict,
- inputs=gr.inputs.Image(shape=(512, 512)),
- outputs=gr.outputs.Label(num_top_classes=3),
- title=title,
- description=description,
- article=article
-).launch()
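For reference, the `predict` logic above can also be exercised without Gradio, a sketch assuming the exported `architecture.pkl` is present and using a placeholder image path:

```python
# Minimal sketch: run the fastai classifier directly on one image.
from fastai.vision.all import load_learner, PILImage

learn = load_learner("architecture.pkl")
labels = learn.dls.vocab

img = PILImage.create("examples/gothic_cathedral.jpg")   # placeholder path
pred, pred_idx, probs = learn.predict(img)
print(pred, {labels[i]: float(probs[i]) for i in range(len(labels))})
```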
diff --git a/spaces/jphwang/colorful_vectors/app.py b/spaces/jphwang/colorful_vectors/app.py
deleted file mode 100644
index 8b67fd5ca65770289a2db4797c3b126b5c1e783e..0000000000000000000000000000000000000000
--- a/spaces/jphwang/colorful_vectors/app.py
+++ /dev/null
@@ -1,247 +0,0 @@
-# ========== (c) JP Hwang 25/9/2022 ==========
-
-import logging
-import pandas as pd
-import numpy as np
-import streamlit as st
-import plotly.express as px
-from scipy import spatial
-import random
-
-# ===== SET UP LOGGER =====
-logger = logging.getLogger(__name__)
-root_logger = logging.getLogger()
-root_logger.setLevel(logging.INFO)
-sh = logging.StreamHandler()
-formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
-sh.setFormatter(formatter)
-root_logger.addHandler(sh)
-# ===== END LOGGER SETUP =====
-
-desired_width = 320
-pd.set_option('display.max_columns', 20)
-pd.set_option('display.width', desired_width)
-
-sizes = [1, 20, 30]
-
-def get_top_tokens(ser_in):
- from collections import Counter
- tkn_list = '_'.join(ser_in.tolist()).split('_')
- tkn_counts = Counter(tkn_list)
- common_tokens = [i[0] for i in tkn_counts.most_common(10)]
- return common_tokens
-
-
-def build_chart(df_in):
- fig = px.scatter_3d(df_in, x='r', y='g', z='b',
- template='plotly_white',
- color=df_in['simple_name'],
- color_discrete_sequence=df_in['rgb'],
- size='size',
- hover_data=['name'])
- fig.update_layout(
- showlegend=False,
- margin=dict(l=5, r=5, t=20, b=5)
- )
- return fig
-
-
-def preproc_data():
- df = pd.read_csv('data/colors.csv', names=['simple_name', 'name', 'hex', 'r', 'g', 'b'])
-
- # Preprocessing
- df['rgb'] = df.apply(lambda x: f'rgb({x.r}, {x.g}, {x.b})', axis=1)
-
- # Get top 'basic' color names
- df = df.assign(category=df.simple_name.apply(lambda x: x.split('_')[-1]))
-
- # Set default size attribute
- df['size'] = sizes[0]
- return df
-
-
-def get_top_colors(df):
- top_colors = df['category'].value_counts()[:15].index.tolist()
- top_colors = [c for c in top_colors if c in df.simple_name.values]
- return top_colors
-
-
-def main():
- st.title('Colorful vectors')
- st.markdown("""
- You might have heard that objects like
- words or images can be represented by "vectors".
- What does that mean, exactly? It seems like a tricky concept, but it doesn't have to be.
-
- Let's start here, where colors are represented in 3-D space 🌈.
- Each axis represents how much of primary colors `(red, green, and blue)`
- each color comprises.
-
- For example, `Magenta` is represented by `(255, 0, 255)`,
- and `(80, 200, 120)` represents `Emerald`.
- That's all a *vector* is in this context - a sequence of numbers.
-
- Take a look at the resulting 3-D image below; it's kind of mesmerising!
- (You can spin the image around, as well as zoom in/out.)
- """
- )
-
- df = preproc_data()
- fig = build_chart(df)
- st.plotly_chart(fig)
-
- st.markdown("""
- ### Why does this matter?
-
- You see here that similar colors are placed close to each other in space.
-
- It seems obvious, but **this** is the crux of why a *vector representation* is so powerful.
- These objects being located *in space* based on their key property (`color`)
- enables an easy, objective assessment of similarity.
-
- Let's take this further:
- """)
-
- # ===== SCALAR SEARCH =====
- st.header('Searching in vector space')
- st.markdown("""
- Imagine that you need to identify colors similar to a given color.
- You could do it by name, for instance looking for colors containing matching words.
-
- But remember that in the 3-D chart above, similar colors are physically close to each other.
- So all you actually need to do is to calculate distances, and collect points based on a threshold!
-
- That's probably still a bit abstract - so pick a 'base' color, and we'll go from there.
- In fact - try a few different colors while you're at it!
- """)
- top_colors = get_top_colors(df)
-
- # def_choice = random.randrange(len(top_colors))
- query = st.selectbox('Pick a "base" color:', top_colors, index=5)
-
- match = df[df.simple_name == query].iloc[0]
- scalar_filter = df.simple_name.str.contains(query)
-
- st.markdown(f"""
- The color `{match.simple_name}` is also represented
- in our 3-D space by `({match.r}, {match.g}, {match.b})`.
- Let's see what we can find using either of these properties.
- (Oh, you can adjust the similarity threshold below as well.)
- """)
- with st.expander(f"Similarity search options"):
- st.markdown(f"""
- Do you want to find lots of similar colors, or
- just a select few *very* similar colors to `{match.simple_name}`.
- """)
- thresh_sel = st.slider('Select a similarity threshold',
- min_value=20, max_value=160,
- value=80, step=20)
- st.markdown("---")
-
- df['size'] = sizes[0]
- df.loc[scalar_filter, 'size'] = sizes[1]
- df.loc[df.simple_name == match.simple_name, 'size'] = sizes[2]
-
- scalar_fig = build_chart(df)
- scalar_hits = df[scalar_filter]['name'].values
-
- # ===== VECTOR SEARCH =====
- vector = match[['r', 'g', 'b']].values.tolist()
-
- dist_metric = 'euc'
- def get_dist(a, b, method):
- if method == 'euc':
- return np.linalg.norm(a-b)
- else:
- return spatial.distance.cosine(a, b)
-
- df['dist'] = df[['r', 'g', 'b']].apply(lambda x: get_dist(x, vector, dist_metric), axis=1)
- df['size'] = sizes[0]
-
- if dist_metric == 'euc':
- vec_filter = df['dist'] < thresh_sel
- else:
- vec_filter = df['dist'] < 0.05
-
- df.loc[vec_filter, 'size'] = sizes[1]
- df.loc[((df['r'] == vector[0]) &
- (df['g'] == vector[1]) &
- (df['b'] == vector[2])
- ),
- 'size'] = sizes[2]
-
- vector_fig = build_chart(df)
- vector_hits = df[vec_filter].sort_values('dist')['name'].values
-
- # ===== OUTPUTS =====
- col1, col2 = st.columns(2)
- with col1:
- st.markdown(f"These colors contain the text: `{match.simple_name}`:")
- st.plotly_chart(scalar_fig, use_container_width=True)
- st.markdown(f"Found {len(scalar_hits)} colors containing the string `{query}`.")
- with st.expander(f"Click to see the whole list"):
- st.markdown("- " + "\n- ".join(scalar_hits))
- with col2:
- st.markdown(f"These colors are close to the vector `({match.r}, {match.g}, {match.b})`:")
- st.plotly_chart(vector_fig, use_container_width=True)
- st.markdown(f"Found {len(vector_hits)} colors similar to `{query}` based on its `(R, G, B)` values.")
- with st.expander(f"Click to see the whole list"):
- st.markdown("- " + "\n- ".join(vector_hits))
-
- # ===== REFLECTIONS =====
- unique_hits = [c for c in vector_hits if c not in scalar_hits]
-
- st.markdown("---")
- st.header("So what?")
- st.markdown("""
- What did you notice?
-
- The thing that stood out to me is how *robust* and *consistent*
- the vector search results are.
-
- It manages to find a bunch of related colors
- regardless of what it's called. It doesn't matter that the color
- 'scarlet' does not contain the word 'red';
- it goes ahead and finds all the neighboring colors based on a consistent criterion.
-
- It easily found these colors which it otherwise would not have based on the name alone:
- """)
- with st.expander(f"See list:"):
- st.markdown("- " + "\n- ".join(unique_hits))
-
- st.markdown("""
- I think it's brilliant - think about how much of a pain word searching is,
- and how inconsistent it is. This has so many advantages!
-
- ---
- """)
- st.header("Generally speaking...")
- st.markdown("""
- Obviously, this is a pretty simple, self-contained example.
- Colors are particularly suited for representing using just a few
- numbers, like our primary colors. One number represents how much
- `red` each color contains, another for `green`, and the last for `blue`.
-
- But that core concept of representing similarity along different
- properties using numbers is exactly what happens in other domains.
-
- The only differences are in *how many* numbers are used, and what
- they represent. For example, words or documents might be represented by
- hundreds (e.g. 300 or 768) of AI-derived numbers.
-
- We'll take a look at those examples as well later on.
-
- Techniques used to visualise those high-dimensional vectors are called
- dimensionality reduction techniques. If you would like to see this in action, check out
- [this app](https://huggingface.co/spaces/jphwang/reduce_dimensions).
- """)
-
- st.markdown("""
- ---
-
- If you liked this - [follow me (@_jphwang) on Twitter](https://twitter.com/_jphwang)!
- """)
-
-
-if __name__ == '__main__':
- main()
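The core idea the app demonstrates, finding neighbors by distance in RGB space, fits in a few standalone lines. A minimal sketch with a handful of hand-picked colors (the values are illustrative, not taken from `data/colors.csv`):

```python
# Minimal sketch: rank colors by Euclidean distance to a query color in (R, G, B) space.
import numpy as np

colors = {
    "red":     (255, 0, 0),
    "scarlet": (255, 36, 0),
    "crimson": (220, 20, 60),
    "green":   (0, 128, 0),
    "blue":    (0, 0, 255),
}

query = np.array(colors["red"], dtype=float)
dists = {name: np.linalg.norm(np.array(rgb, dtype=float) - query)
         for name, rgb in colors.items()}

threshold = 80  # same kind of similarity threshold as the app's slider default
hits = sorted((d, name) for name, d in dists.items() if d < threshold)
print(hits)  # 'red', 'scarlet' and 'crimson' are close; 'green' and 'blue' are not
```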
diff --git a/spaces/juancopi81/youtube-music-transcribe/t5x/partitioning_test.py b/spaces/juancopi81/youtube-music-transcribe/t5x/partitioning_test.py
deleted file mode 100644
index 68594d1ec842341c074d7d87534d8bb46ee25237..0000000000000000000000000000000000000000
--- a/spaces/juancopi81/youtube-music-transcribe/t5x/partitioning_test.py
+++ /dev/null
@@ -1,272 +0,0 @@
-# Copyright 2022 The T5X Authors.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""Tests for t5x.partitioning."""
-
-import collections
-
-from absl.testing import absltest
-from absl.testing import parameterized
-import flax.core
-from flax.linen import partitioning as nn_partitioning
-import jax
-import numpy as np
-from t5x import adafactor
-from t5x import optimizers
-from t5x import partitioning
-from t5x import test_utils as ptu
-from t5x import train_state
-
-jax.config.parse_flags_with_absl()
-
-mock = absltest.mock
-TpuDevice = ptu.TpuDevice
-TPUV3_32 = ptu.make_devices(4, 4, 1, 2, kind='TPU v3')
-AxisMetadata = nn_partitioning.AxisMetadata
-PartitionSpec = partitioning.PartitionSpec
-
-
-class PartitioningTest(absltest.TestCase):
-
- @mock.patch('jax.host_count')
- @mock.patch('jax.local_device_count')
- def test_bounds_from_last_device(self, local_device_count, host_count):
- last_device = mock.Mock(coords=(3, 3, 3), core_on_chip=1)
- tpu_bounds = partitioning.bounds_from_last_device(last_device)
- self.assertEqual(tpu_bounds, (4, 4, 4, 2))
-
- last_device = mock.Mock(spec=[])
- host_count.return_value = 1
- local_device_count.return_value = 4
- non_tpu_bounds = partitioning.bounds_from_last_device(last_device)
- self.assertEqual(non_tpu_bounds, (1, 4))
-
- @mock.patch('jax.local_device_count')
- def test_get_coords(self, local_device_count):
- device = mock.Mock(coords=(1, 0, 1), core_on_chip=1)
- coords = partitioning.get_coords(device)
- self.assertEqual(coords, (1, 0, 1, 1))
-
- device = mock.Mock(spec=['process_index', 'id'])
- device.process_index = 1
- device.id = 9
- local_device_count.return_value = 8
- coords = partitioning.get_coords(device)
- self.assertEqual(coords, (1, 1))
-
- @mock.patch('jax.local_devices')
- @mock.patch('jax.devices')
- @mock.patch('jax._src.lib.xla_bridge.process_index')
- def test_default_mesh(self, process_index_fn, devices_fn, local_devices_fn):
- devices_fn.return_value = TPUV3_32
- local_devices_fn.return_value = [
- d for d in TPUV3_32 if d.process_index == 0
- ]
- process_index_fn.return_value = 0
-
- global_mesh = partitioning.default_mesh(4)
- self.assertEqual(global_mesh.axis_names, ('data', 'model'))
- self.assertEqual(global_mesh.shape,
- collections.OrderedDict((('data', 8), ('model', 4))))
- self.assertEqual(global_mesh.size, 32)
-
- for process_index in (0, 1, 2, 3):
- process_index_fn.return_value = process_index
- local_mesh = global_mesh.local_mesh
- self.assertEqual(local_mesh.axis_names, ('data', 'model'))
- self.assertEqual(local_mesh.shape,
- collections.OrderedDict((('data', 2), ('model', 4))))
- self.assertEqual(local_mesh.size, 8)
-
- process_index_fn.return_value = 0
- local_mesh = global_mesh.local_mesh
- lds = np.array([
- [
- TpuDevice(id=0, process_index=0, coords=(0, 0, 0), core_on_chip=0),
- TpuDevice(id=1, process_index=0, coords=(0, 0, 0), core_on_chip=1),
- TpuDevice(id=2, process_index=0, coords=(1, 0, 0), core_on_chip=0),
- TpuDevice(id=3, process_index=0, coords=(1, 0, 0), core_on_chip=1)
- ],
- [
- TpuDevice(id=8, process_index=0, coords=(0, 1, 0), core_on_chip=0),
- TpuDevice(id=9, process_index=0, coords=(0, 1, 0), core_on_chip=1),
- TpuDevice(id=10, process_index=0, coords=(1, 1, 0), core_on_chip=0),
- TpuDevice(id=11, process_index=0, coords=(1, 1, 0), core_on_chip=1)
- ]
- ],
- dtype=object)
- np.testing.assert_array_equal(local_mesh.devices, lds)
-
- @mock.patch('jax.local_devices')
- @mock.patch('jax.devices')
- @mock.patch('jax._src.lib.xla_bridge.process_index')
- def test_local_chunker(self, process_index_fn, devices_fn, local_devices_fn):
- devices_fn.return_value = TPUV3_32
- local_devices_fn.return_value = [
- d for d in TPUV3_32 if d.process_index == 0
- ]
- process_index_fn.return_value = 0
- global_mesh = partitioning.default_mesh(4)
- local_chunker = partitioning.LocalChunker(global_mesh)
- self.assertEqual(local_chunker.num_chunks['data'], 4)
- self.assertEqual(local_chunker.num_chunks['model'], 1)
-
- # Derive the chunk order along the first 'data' dim for testing.
- host_ordering = []
- for d in global_mesh.devices[:, 0]:
- if d.process_index not in host_ordering:
- host_ordering.append(d.process_index)
- process_index_to_data_pos = {
- process_index: idx for idx, process_index in enumerate(host_ordering)
- }
-
- for process_indexx in (0, 1, 2, 3):
- process_index_fn.return_value = process_indexx
- global_mesh = partitioning.default_mesh(4)
- local_chunker = partitioning.LocalChunker(global_mesh)
- # get expected chunk for 'data' axis.
- expected_chunk = process_index_to_data_pos[process_indexx]
- self.assertEqual(local_chunker.chunk_ids['data'], expected_chunk)
- self.assertEqual(local_chunker.chunk_ids['model'], 0)
- # Sharded along both axes.
- local_chunk_info = local_chunker.get_local_chunk_info((128, 16),
- ['data', 'model'])
- self.assertEqual(local_chunk_info.replica_id, 0)
- self.assertEqual(local_chunk_info.slice,
- (slice(32 * expected_chunk, 32 *
- (expected_chunk + 1)), slice(0, 16)))
- # Replicated across first axis.
- local_chunk_info = local_chunker.get_local_chunk_info((128, 16),
- [None, 'model'])
- self.assertEqual(local_chunk_info.replica_id, expected_chunk)
- self.assertEqual(local_chunk_info.slice, (slice(None), slice(0, 16)))
-
-
-class ModelBasedPartitionerTest(parameterized.TestCase):
-
- def get_axes_spec(self, partitioner, factored, momentum):
- opt_def = adafactor.Adafactor(
- learning_rate=0.1,
- factored=factored,
- min_dim_size_to_factor=8,
- beta1=0.1 if momentum else None,
- logical_factor_rules={
- 'batch': adafactor.FactorDim.NONE,
- 'embed': adafactor.FactorDim.ROW,
- 'vocab': adafactor.FactorDim.COLUMN,
- 'mlp': adafactor.FactorDim.COLUMN,
- })
- state = train_state.FlaxOptimTrainState.create(
- opt_def,
- flax.core.freeze({
- 'params': {
- 'logits_dense': np.ones((16, 16), np.float32),
- 'mlp': {
- 'wo': {
- 'kernel': np.ones((32, 16), np.float32)
- }
- }
- },
- 'params_axes': {
- 'logits_dense_axes': AxisMetadata(names=('vocab', 'embed')),
- 'mlp': {
- 'wo': {
- 'kernel_axes': AxisMetadata(names=('embed', 'mlp'))
- }
- }
- }
- }))
- return partitioner.get_mesh_axes(state).state_dict()
-
- def get_expected_axes_spec(self,
- spec_0,
- spec_1,
- kernel_spec=PartitionSpec(None, 'model')):
- return train_state.FlaxOptimTrainState(
- optimizers.Optimizer(
- # opt_def,
- adafactor.Adafactor(0.1), # opt_def not compared.
- state=optimizers.OptimizerState(
- step=None,
- param_states={
- 'logits_dense': spec_0,
- 'mlp': {
- 'wo': {
- 'kernel': spec_1
- }
- }
- }),
- target={
- 'logits_dense': PartitionSpec('model', None),
- 'mlp': {
- 'wo': {
- 'kernel': kernel_spec
- }
- }
- })).state_dict()
-
- def test_get_mesh_axes(self):
- partitioner = partitioning.PjitPartitioner(
- num_partitions=1,
- logical_axis_rules=(('batch', 'data'), ('embed', None),
- ('vocab', 'model'), ('mlp', 'model')))
-
- p0_spec = PartitionSpec('model', None)
- p1_spec = PartitionSpec(None, 'model')
-
- # Test quadrant of conditions: factored or not / momentum or not.
- axes_spec = self.get_axes_spec(partitioner, factored=True, momentum=False)
- expected_axes_spec = self.get_expected_axes_spec(
- adafactor._AdafactorParamState(m=None, v=None, v_col=None, v_row=None),
- adafactor._AdafactorParamState(m=None, v=None, v_col=None, v_row=None))
- jax.tree_multimap(self.assertEqual, axes_spec, expected_axes_spec)
-
- axes_spec = self.get_axes_spec(partitioner, factored=True, momentum=True)
- expected_axes_spec = self.get_expected_axes_spec(
- adafactor._AdafactorParamState(
- m=p0_spec, v=None, v_col=None, v_row=None),
- adafactor._AdafactorParamState(
- m=p1_spec, v=None, v_col=None, v_row=None))
- jax.tree_multimap(self.assertEqual, axes_spec, expected_axes_spec)
-
- axes_spec = self.get_axes_spec(partitioner, factored=False, momentum=True)
- expected_axes_spec = self.get_expected_axes_spec(
- adafactor._AdafactorParamState(
- m=p0_spec, v=p0_spec, v_col=None, v_row=None),
- adafactor._AdafactorParamState(
- m=p1_spec, v=p1_spec, v_col=None, v_row=None))
- jax.tree_multimap(self.assertEqual, axes_spec, expected_axes_spec)
-
- axes_spec = self.get_axes_spec(partitioner, factored=False, momentum=False)
- expected_axes_spec = self.get_expected_axes_spec(
- adafactor._AdafactorParamState(
- m=None, v=p0_spec, v_col=None, v_row=None),
- adafactor._AdafactorParamState(
- m=None, v=p1_spec, v_col=None, v_row=None))
- jax.tree_multimap(self.assertEqual, axes_spec, expected_axes_spec)
-
- @parameterized.product(activation_dims=(1, 2), param_dims=(1, 2))
- def test_standard_logical_axis_rules(self, activation_dims, param_dims):
- default_rules = partitioning.standard_logical_axis_rules(
- activation_dims, param_dims, additional_rules=None)
- custom_rules = (('my-new-axis', 'data'), ('another-axis', None),
- ('another-one', 'model'))
- new_rules = partitioning.standard_logical_axis_rules(
- activation_dims, param_dims, additional_rules=custom_rules)
- self.assertEqual(new_rules[:len(default_rules)], default_rules)
- self.assertEqual(new_rules[len(default_rules):], list(custom_rules))
-
-
-if __name__ == '__main__':
- absltest.main()
diff --git a/spaces/jyseo/3DFuse/lora_util.py b/spaces/jyseo/3DFuse/lora_util.py
deleted file mode 100644
index a368ebf471c482b12a9bedb6d4d7205601309bf8..0000000000000000000000000000000000000000
--- a/spaces/jyseo/3DFuse/lora_util.py
+++ /dev/null
@@ -1,196 +0,0 @@
-from lora_diffusion.cli_lora_add import *
-from lora_diffusion.lora import *
-from lora_diffusion.to_ckpt_v2 import *
-
-def monkeypatch_or_replace_safeloras(models, safeloras):
- loras = parse_safeloras(safeloras)
-
- for name, (lora, ranks, target) in loras.items():
- model = getattr(models, name, None)
-
- if not model:
- print(f"No model provided for {name}, contained in Lora")
- continue
-
- monkeypatch_or_replace_lora_extended(model, lora, target, ranks)
-def parse_safeloras(
- safeloras,
-) -> Dict[str, Tuple[List[nn.parameter.Parameter], List[int], List[str]]]:
- """
- Converts a loaded safetensor file that contains a set of module Loras
- into Parameters and other information
-
- Output is a dictionary of {
- "module name": (
- [list of weights],
- [list of ranks],
- target_replacement_modules
- )
- }
- """
- loras = {}
- # metadata = safeloras.metadata()
- metadata = safeloras['metadata']
- safeloras_ = safeloras['weights']
- get_name = lambda k: k.split(":")[0]
-
- keys = list(safeloras_.keys())
- keys.sort(key=get_name)
-
- for name, module_keys in groupby(keys, get_name):
- info = metadata.get(name)
-
- if not info:
- raise ValueError(
- f"Tensor {name} has no metadata - is this a Lora safetensor?"
- )
-
- # Skip Textual Inversion embeds
- if info == EMBED_FLAG:
- continue
-
- # Handle Loras
- # Extract the targets
- target = json.loads(info)
-
- # Build the result lists - Python needs us to preallocate lists to insert into them
- module_keys = list(module_keys)
- ranks = [4] * (len(module_keys) // 2)
- weights = [None] * len(module_keys)
-
- for key in module_keys:
- # Split the model name and index out of the key
- _, idx, direction = key.split(":")
- idx = int(idx)
-
- # Add the rank
- ranks[idx] = int(metadata[f"{name}:{idx}:rank"])
-
- # Insert the weight into the list
- idx = idx * 2 + (1 if direction == "down" else 0)
- # weights[idx] = nn.parameter.Parameter(safeloras.get_tensor(key))
- weights[idx] = nn.parameter.Parameter(safeloras_[key])
- loras[name] = (weights, ranks, target)
-
- return loras
-
-
-def parse_safeloras_embeds(
- safeloras,
-) -> Dict[str, torch.Tensor]:
- """
- Converts a loaded safetensor file that contains Textual Inversion embeds into
- a dictionary of embed_token: Tensor
- """
- embeds = {}
- metadata = safeloras['metadata']
- safeloras_ = safeloras['weights']
-
- for key in safeloras_.keys():
- # Only handle Textual Inversion embeds
- meta=None
- if key in metadata:
- meta = metadata[key]
- if not meta or meta != EMBED_FLAG:
- continue
-
- embeds[key] = safeloras_[key]
-
- return embeds
-
-def patch_pipe(
- pipe,
- maybe_unet_path,
- token: Optional[str] = None,
- r: int = 4,
- patch_unet=True,
- patch_text=True,
- patch_ti=True,
- idempotent_token=True,
- unet_target_replace_module=DEFAULT_TARGET_REPLACE,
- text_target_replace_module=TEXT_ENCODER_DEFAULT_TARGET_REPLACE,
-):
- safeloras=maybe_unet_path
- monkeypatch_or_replace_safeloras(pipe, safeloras)
- tok_dict = parse_safeloras_embeds(safeloras)
-
- if patch_ti:
- apply_learned_embed_in_clip(
- tok_dict,
- pipe.text_encoder,
- pipe.tokenizer,
- token=token,
- idempotent=idempotent_token,
- )
- return tok_dict
-
-def lora_convert(model_path, as_half):
-
- """
- Modified version of lora_diffusion.to_ckpt_v2.convert_to_ckpt
- """
-
- assert model_path is not None, "Must provide a model path!"
-
- unet_path = osp.join(model_path, "unet", "diffusion_pytorch_model.bin")
- vae_path = osp.join(model_path, "vae", "diffusion_pytorch_model.bin")
- text_enc_path = osp.join(model_path, "text_encoder", "pytorch_model.bin")
-
- # Convert the UNet model
- unet_state_dict = torch.load(unet_path, map_location="cpu")
- unet_state_dict = convert_unet_state_dict(unet_state_dict)
- unet_state_dict = {
- "model.diffusion_model." + k: v for k, v in unet_state_dict.items()
- }
-
- # Convert the VAE model
- vae_state_dict = torch.load(vae_path, map_location="cpu")
- vae_state_dict = convert_vae_state_dict(vae_state_dict)
- vae_state_dict = {"first_stage_model." + k: v for k, v in vae_state_dict.items()}
-
- # Convert the text encoder model
- text_enc_dict = torch.load(text_enc_path, map_location="cpu")
- text_enc_dict = convert_text_enc_state_dict(text_enc_dict)
- text_enc_dict = {
- "cond_stage_model.transformer." + k: v for k, v in text_enc_dict.items()
- }
-
- # Put together new checkpoint
- state_dict = {**unet_state_dict, **vae_state_dict, **text_enc_dict}
- if as_half:
- state_dict = {k: v.half() for k, v in state_dict.items()}
-
- return state_dict
-
-def merge(path_1: str,
- path_2: str,
- alpha_1: float = 0.5,
- ):
-
- loaded_pipeline = StableDiffusionPipeline.from_pretrained(
- path_1,
- ).to("cpu")
-
- tok_dict = patch_pipe(loaded_pipeline, path_2, patch_ti=False)
- collapse_lora(loaded_pipeline.unet, alpha_1)
- collapse_lora(loaded_pipeline.text_encoder, alpha_1)
-
- monkeypatch_remove_lora(loaded_pipeline.unet)
- monkeypatch_remove_lora(loaded_pipeline.text_encoder)
-
- _tmp_output = "./merge.tmp"
-
- loaded_pipeline.save_pretrained(_tmp_output)
- state_dict = lora_convert(_tmp_output, as_half=True)
- # remove the tmp_output folder
- shutil.rmtree(_tmp_output)
-
- keys = sorted(tok_dict.keys())
- tok_catted = torch.stack([tok_dict[k] for k in keys])
- ret = {
- "string_to_token": {"*": torch.tensor(265)},
- "string_to_param": {"*": tok_catted},
- "name": "",
- }
-
- return state_dict, ret
\ No newline at end of file
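For orientation, here is a hedged sketch of how the helpers in this deleted module fit together, assuming the functions above are importable: the safetensors file is first loaded into the {"metadata", "weights"} dict that parse_safeloras_embeds and patch_pipe expect, then merge() returns an LDM-style state dict plus a Textual Inversion embedding dict. The file names, the base-model path, and the "state_dict" wrapper key are assumptions, and the sketch presumes the LoRA file actually contains an embed.

    import torch
    from safetensors import safe_open

    # Load the LoRA file into the dict shape used above (file name is an assumption).
    with safe_open("lora_weight.safetensors", framework="pt", device="cpu") as f:
        safeloras = {
            "metadata": f.metadata(),
            "weights": {k: f.get_tensor(k) for k in f.keys()},
        }

    # Merge at half strength into an assumed base pipeline, then persist both outputs.
    state_dict, embed_dict = merge("runwayml/stable-diffusion-v1-5", safeloras, alpha_1=0.5)
    torch.save({"state_dict": state_dict}, "merged_model.ckpt")
    torch.save(embed_dict, "merged_embedding.pt")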
diff --git a/spaces/kainy/rvc_okiba_TTS/lib/infer_pack/modules/F0Predictor/DioF0Predictor.py b/spaces/kainy/rvc_okiba_TTS/lib/infer_pack/modules/F0Predictor/DioF0Predictor.py
deleted file mode 100644
index ee3171bcb7c4a5066560723108b56e055f18be45..0000000000000000000000000000000000000000
--- a/spaces/kainy/rvc_okiba_TTS/lib/infer_pack/modules/F0Predictor/DioF0Predictor.py
+++ /dev/null
@@ -1,90 +0,0 @@
-from lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor
-import pyworld
-import numpy as np
-
-
-class DioF0Predictor(F0Predictor):
- def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100):
- self.hop_length = hop_length
- self.f0_min = f0_min
- self.f0_max = f0_max
- self.sampling_rate = sampling_rate
-
- def interpolate_f0(self, f0):
- """
-        Interpolate F0 over unvoiced (zero) frames and return (f0, voiced/unvoiced flags).
- """
-
- data = np.reshape(f0, (f0.size, 1))
-
- vuv_vector = np.zeros((data.size, 1), dtype=np.float32)
- vuv_vector[data > 0.0] = 1.0
- vuv_vector[data <= 0.0] = 0.0
-
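-        # Note: this binds a reference (no copy); the loop below writes the interpolated values in place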
- ip_data = data
-
- frame_number = data.size
- last_value = 0.0
- for i in range(frame_number):
- if data[i] <= 0.0:
- j = i + 1
- for j in range(i + 1, frame_number):
- if data[j] > 0.0:
- break
- if j < frame_number - 1:
- if last_value > 0.0:
- step = (data[j] - data[i - 1]) / float(j - i)
- for k in range(i, j):
- ip_data[k] = data[i - 1] + step * (k - i + 1)
- else:
- for k in range(i, j):
- ip_data[k] = data[j]
- else:
- for k in range(i, frame_number):
- ip_data[k] = last_value
- else:
-                ip_data[i] = data[i]  # there may be an unnecessary copy here
- last_value = data[i]
-
- return ip_data[:, 0], vuv_vector[:, 0]
-
- def resize_f0(self, x, target_len):
- source = np.array(x)
- source[source < 0.001] = np.nan
- target = np.interp(
- np.arange(0, len(source) * target_len, len(source)) / target_len,
- np.arange(0, len(source)),
- source,
- )
- res = np.nan_to_num(target)
- return res
-
- def compute_f0(self, wav, p_len=None):
- if p_len is None:
- p_len = wav.shape[0] // self.hop_length
- f0, t = pyworld.dio(
- wav.astype(np.double),
- fs=self.sampling_rate,
- f0_floor=self.f0_min,
- f0_ceil=self.f0_max,
- frame_period=1000 * self.hop_length / self.sampling_rate,
- )
- f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate)
- for index, pitch in enumerate(f0):
- f0[index] = round(pitch, 1)
- return self.interpolate_f0(self.resize_f0(f0, p_len))[0]
-
- def compute_f0_uv(self, wav, p_len=None):
- if p_len is None:
- p_len = wav.shape[0] // self.hop_length
- f0, t = pyworld.dio(
- wav.astype(np.double),
- fs=self.sampling_rate,
- f0_floor=self.f0_min,
- f0_ceil=self.f0_max,
- frame_period=1000 * self.hop_length / self.sampling_rate,
- )
- f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate)
- for index, pitch in enumerate(f0):
- f0[index] = round(pitch, 1)
- return self.interpolate_f0(self.resize_f0(f0, p_len))
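A hedged usage sketch for the predictor above, assuming the class is importable and soundfile is available for loading a mono WAV (the file name is an assumption):

    import soundfile as sf

    wav, sr = sf.read("sample.wav")                       # mono float waveform
    predictor = DioF0Predictor(hop_length=512, sampling_rate=sr)

    f0 = predictor.compute_f0(wav)                        # interpolated F0, one value per hop
    f0_uv, vuv = predictor.compute_f0_uv(wav)             # F0 plus per-frame voiced/unvoiced flags
    print(f0.shape, vuv.mean())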
diff --git a/spaces/katanaml-org/sparrow-ui/tools/style.css b/spaces/katanaml-org/sparrow-ui/tools/style.css
deleted file mode 100644
index 26c1dc7734537f476f394d11d51bca99f22eeea4..0000000000000000000000000000000000000000
--- a/spaces/katanaml-org/sparrow-ui/tools/style.css
+++ /dev/null
@@ -1,53 +0,0 @@
-/* Move block container higher */
-div.block-container.css-z5fcl4.egzxvld4 {
- margin-top: -5em;
-}
-
-/* Move menu container higher */
-div.css-1544g2n.e1fqkh3o4 {
- padding-top: 3rem;
-}
-
-/* Hide anchor link */
-.css-1dgmtll.e16nr0p32 {
- display: none
-}
-
-div[data-testid="metric-container"] {
- background-color: #f7f8fa;
- border: 1px solid #0c0d0d;
- padding: 5% 5% 5% 10%;
- border-radius: 5px;
- color: rgb(30, 103, 119);
- overflow-wrap: break-word;
-}
-
-/* breakline for metric text */
-div[data-testid="metric-container"] > label[data-testid="stMetricLabel"] > div {
- overflow-wrap: break-word;
- white-space: break-spaces;
- color: black;
-}
-
-/* Hide Streamlit bars */
-#MainMenu {
- visibility: hidden;
-}
-
-footer {
- visibility: hidden;
-}
-
-/*header {*/
-/* visibility: hidden;*/
-/*}*/
-
-/*About page styling*/
-
-.css-1v0mbdj.etr89bj1 {
- display: block;
- margin-left: auto;
- margin-right: auto;
- min-width: 180px;
- max-width: 40%;
-}
\ No newline at end of file
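Note that this stylesheet targets auto-generated Streamlit class names (e.g. .css-z5fcl4), so it is tied to a particular Streamlit release. A hedged sketch of how such a file is typically injected into the app; the path simply mirrors the repository layout above:

    import streamlit as st

    # Inject the stylesheet once, near the top of the Streamlit script.
    with open("tools/style.css") as f:
        st.markdown(f"<style>{f.read()}</style>", unsafe_allow_html=True)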
diff --git a/spaces/kcagle/AutoGPT/autogpt/agent/__init__.py b/spaces/kcagle/AutoGPT/autogpt/agent/__init__.py
deleted file mode 100644
index e928af2205b1c52d19dc89ec4246e8c1d2c20e3f..0000000000000000000000000000000000000000
--- a/spaces/kcagle/AutoGPT/autogpt/agent/__init__.py
+++ /dev/null
@@ -1,4 +0,0 @@
-from autogpt.agent.agent import Agent
-from autogpt.agent.agent_manager import AgentManager
-
-__all__ = ["Agent", "AgentManager"]
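Given the re-exports above, both classes can be imported from the package root, e.g.:

    from autogpt.agent import Agent, AgentManager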
diff --git a/spaces/kevinwang676/ChatGLM2-SadTalker/src/face3d/models/arcface_torch/run.sh b/spaces/kevinwang676/ChatGLM2-SadTalker/src/face3d/models/arcface_torch/run.sh
deleted file mode 100644
index 61af4b4950eb11334e55362e3e3c5e2796979a01..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/ChatGLM2-SadTalker/src/face3d/models/arcface_torch/run.sh
+++ /dev/null
@@ -1,2 +0,0 @@
-CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8 --nnodes=1 --node_rank=0 --master_addr="127.0.0.1" --master_port=1234 train.py configs/ms1mv3_r50
-ps -ef | grep "train" | grep -v grep | awk '{print "kill -9 "$2}' | sh
diff --git a/spaces/kevinwang676/VoiceChangers/src/face3d/options/__init__.py b/spaces/kevinwang676/VoiceChangers/src/face3d/options/__init__.py
deleted file mode 100644
index e7eedebe54aa70169fd25951b3034d819e396c90..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/VoiceChangers/src/face3d/options/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-"""This package options includes option modules: training options, test options, and basic options (used in both training and test)."""
diff --git a/spaces/kevinwang676/rvc-models-new/infer_pack/commons.py b/spaces/kevinwang676/rvc-models-new/infer_pack/commons.py
deleted file mode 100644
index 54470986f37825b35d90d7efa7437d1c26b87215..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/rvc-models-new/infer_pack/commons.py
+++ /dev/null
@@ -1,166 +0,0 @@
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-def get_padding(kernel_size, dilation=1):
- return int((kernel_size * dilation - dilation) / 2)
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def kl_divergence(m_p, logs_p, m_q, logs_q):
- """KL(P||Q)"""
- kl = (logs_q - logs_p) - 0.5
- kl += (
- 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q)
- )
- return kl
-
-
-def rand_gumbel(shape):
- """Sample from the Gumbel distribution, protect from overflows."""
- uniform_samples = torch.rand(shape) * 0.99998 + 0.00001
- return -torch.log(-torch.log(uniform_samples))
-
-
-def rand_gumbel_like(x):
- g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device)
- return g
-
-
-def slice_segments(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, :, idx_str:idx_end]
- return ret
-
-
-def slice_segments2(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, idx_str:idx_end]
- return ret
-
-
-def rand_slice_segments(x, x_lengths=None, segment_size=4):
- b, d, t = x.size()
- if x_lengths is None:
- x_lengths = t
- ids_str_max = x_lengths - segment_size + 1
- ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
- ret = slice_segments(x, ids_str, segment_size)
- return ret, ids_str
-
-
-def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4):
- position = torch.arange(length, dtype=torch.float)
- num_timescales = channels // 2
- log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / (
- num_timescales - 1
- )
- inv_timescales = min_timescale * torch.exp(
- torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment
- )
- scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1)
- signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0)
- signal = F.pad(signal, [0, 0, 0, channels % 2])
- signal = signal.view(1, channels, length)
- return signal
-
-
-def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return x + signal.to(dtype=x.dtype, device=x.device)
-
-
-def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis)
-
-
-def subsequent_mask(length):
- mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0)
- return mask
-
-
-@torch.jit.script
-def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
- n_channels_int = n_channels[0]
- in_act = input_a + input_b
- t_act = torch.tanh(in_act[:, :n_channels_int, :])
- s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
- acts = t_act * s_act
- return acts
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def shift_1d(x):
- x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1]
- return x
-
-
-def sequence_mask(length, max_length=None):
- if max_length is None:
- max_length = length.max()
- x = torch.arange(max_length, dtype=length.dtype, device=length.device)
- return x.unsqueeze(0) < length.unsqueeze(1)
-
-
-def generate_path(duration, mask):
- """
- duration: [b, 1, t_x]
- mask: [b, 1, t_y, t_x]
- """
- device = duration.device
-
- b, _, t_y, t_x = mask.shape
- cum_duration = torch.cumsum(duration, -1)
-
- cum_duration_flat = cum_duration.view(b * t_x)
- path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
- path = path.view(b, t_x, t_y)
- path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
- path = path.unsqueeze(1).transpose(2, 3) * mask
- return path
-
-
-def clip_grad_value_(parameters, clip_value, norm_type=2):
- if isinstance(parameters, torch.Tensor):
- parameters = [parameters]
- parameters = list(filter(lambda p: p.grad is not None, parameters))
- norm_type = float(norm_type)
- if clip_value is not None:
- clip_value = float(clip_value)
-
- total_norm = 0
- for p in parameters:
- param_norm = p.grad.data.norm(norm_type)
- total_norm += param_norm.item() ** norm_type
- if clip_value is not None:
- p.grad.data.clamp_(min=-clip_value, max=clip_value)
- total_norm = total_norm ** (1.0 / norm_type)
- return total_norm
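A brief, hedged sketch exercising two of the helpers above (shapes are illustrative assumptions; the functions are assumed importable from this module):

    import torch

    x = torch.randn(2, 192, 100)                           # [batch, channels, frames]
    lengths = torch.tensor([100, 80])

    mask = sequence_mask(lengths, max_length=100)           # [2, 100] boolean mask
    segments, start_ids = rand_slice_segments(x, lengths, segment_size=32)
    print(mask.shape, segments.shape)                       # (2, 100) and (2, 192, 32)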
diff --git a/spaces/kevinwang676/test-1/infer_pack/commons.py b/spaces/kevinwang676/test-1/infer_pack/commons.py
deleted file mode 100644
index 54470986f37825b35d90d7efa7437d1c26b87215..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/test-1/infer_pack/commons.py
+++ /dev/null
@@ -1,166 +0,0 @@
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-def get_padding(kernel_size, dilation=1):
- return int((kernel_size * dilation - dilation) / 2)
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def kl_divergence(m_p, logs_p, m_q, logs_q):
- """KL(P||Q)"""
- kl = (logs_q - logs_p) - 0.5
- kl += (
- 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q)
- )
- return kl
-
-
-def rand_gumbel(shape):
- """Sample from the Gumbel distribution, protect from overflows."""
- uniform_samples = torch.rand(shape) * 0.99998 + 0.00001
- return -torch.log(-torch.log(uniform_samples))
-
-
-def rand_gumbel_like(x):
- g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device)
- return g
-
-
-def slice_segments(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, :, idx_str:idx_end]
- return ret
-
-
-def slice_segments2(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, idx_str:idx_end]
- return ret
-
-
-def rand_slice_segments(x, x_lengths=None, segment_size=4):
- b, d, t = x.size()
- if x_lengths is None:
- x_lengths = t
- ids_str_max = x_lengths - segment_size + 1
- ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
- ret = slice_segments(x, ids_str, segment_size)
- return ret, ids_str
-
-
-def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4):
- position = torch.arange(length, dtype=torch.float)
- num_timescales = channels // 2
- log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / (
- num_timescales - 1
- )
- inv_timescales = min_timescale * torch.exp(
- torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment
- )
- scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1)
- signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0)
- signal = F.pad(signal, [0, 0, 0, channels % 2])
- signal = signal.view(1, channels, length)
- return signal
-
-
-def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return x + signal.to(dtype=x.dtype, device=x.device)
-
-
-def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis)
-
-
-def subsequent_mask(length):
- mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0)
- return mask
-
-
-@torch.jit.script
-def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
- n_channels_int = n_channels[0]
- in_act = input_a + input_b
- t_act = torch.tanh(in_act[:, :n_channels_int, :])
- s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
- acts = t_act * s_act
- return acts
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def shift_1d(x):
- x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1]
- return x
-
-
-def sequence_mask(length, max_length=None):
- if max_length is None:
- max_length = length.max()
- x = torch.arange(max_length, dtype=length.dtype, device=length.device)
- return x.unsqueeze(0) < length.unsqueeze(1)
-
-
-def generate_path(duration, mask):
- """
- duration: [b, 1, t_x]
- mask: [b, 1, t_y, t_x]
- """
- device = duration.device
-
- b, _, t_y, t_x = mask.shape
- cum_duration = torch.cumsum(duration, -1)
-
- cum_duration_flat = cum_duration.view(b * t_x)
- path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
- path = path.view(b, t_x, t_y)
- path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
- path = path.unsqueeze(1).transpose(2, 3) * mask
- return path
-
-
-def clip_grad_value_(parameters, clip_value, norm_type=2):
- if isinstance(parameters, torch.Tensor):
- parameters = [parameters]
- parameters = list(filter(lambda p: p.grad is not None, parameters))
- norm_type = float(norm_type)
- if clip_value is not None:
- clip_value = float(clip_value)
-
- total_norm = 0
- for p in parameters:
- param_norm = p.grad.data.norm(norm_type)
- total_norm += param_norm.item() ** norm_type
- if clip_value is not None:
- p.grad.data.clamp_(min=-clip_value, max=clip_value)
- total_norm = total_norm ** (1.0 / norm_type)
- return total_norm
diff --git a/spaces/kfahn/Animal_Pose_Control_Net/app.py b/spaces/kfahn/Animal_Pose_Control_Net/app.py
deleted file mode 100644
index 8545a852ef909228119bd4877dc88b5495f28194..0000000000000000000000000000000000000000
--- a/spaces/kfahn/Animal_Pose_Control_Net/app.py
+++ /dev/null
@@ -1,115 +0,0 @@
-import gradio as gr
-import jax
-import jax.numpy as jnp
-import numpy as np
-from flax.jax_utils import replicate
-from flax.training.common_utils import shard
-from PIL import Image
-from diffusers import FlaxStableDiffusionControlNetPipeline, FlaxControlNetModel
-import gc
-
-report_url = 'https://wandb.ai/john-fozard/dog-cat-pose/runs/kmwcvae5'
-sketch_url = 'https://editor.p5js.org/kfahn/full/OshQky7RS'
-
-def create_key(seed=0):
- return jax.random.PRNGKey(seed)
-
-def addp5sketch(url):
- iframe = f'