-
-```python
-import usesless
-
-message_id = ""
-token = usesless.Account.create(logging=True)  # create an account and get the access token
-while True:
-    prompt = input("Question: ")
-    if prompt == "!stop":
-        break
-
-    req = usesless.Completion.create(prompt=prompt, parentMessageId=message_id, token=token)
-
-    print(f"Answer: {req['text']}")
-    message_id = req["id"]
-```
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Chemistry Matters Book Free Download A 10-Volume Encyclopedia of Chemistry Topics and Concepts.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Chemistry Matters Book Free Download A 10-Volume Encyclopedia of Chemistry Topics and Concepts.md
deleted file mode 100644
index 3dfa1f374dc899f15aa69377dbedc5e0a1a2a44c..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Chemistry Matters Book Free Download A 10-Volume Encyclopedia of Chemistry Topics and Concepts.md
+++ /dev/null
@@ -1,107 +0,0 @@
-
-
Chemistry Matters Book Free Download
-
Are you looking for a free and easy way to learn chemistry? Do you want to master the concepts and skills of this fascinating subject? If yes, then you should download Chemistry Matters, a comprehensive and engaging textbook for high school students. In this article, we will tell you what Chemistry Matters is, why you should download it for free, and how to do it. Let's get started!
-
What is Chemistry Matters?
-
Chemistry Matters is a textbook that covers the syllabus of chemistry for high school students. It is written by a team of experienced and qualified authors who have a passion for teaching and learning chemistry. The book aims to help students develop a deep understanding of the principles and applications of chemistry, as well as to foster their interest and curiosity in the subject.
Chemistry Matters covers all the topics that you need to know for your chemistry exams, such as atomic structure, chemical bonding, chemical reactions, stoichiometry, gases, solutions, acids and bases, equilibrium, electrochemistry, organic chemistry, and more. The book also includes chapters on environmental chemistry, biochemistry, nanotechnology, and green chemistry, which are relevant and interesting topics for today's world.
-
The features and benefits of Chemistry Matters
-
Chemistry Matters is not just a textbook, but also a learning companion that offers many features and benefits for students. Some of them are:
-
-
Clear and concise explanations: The book uses simple and precise language to explain the concepts and theories of chemistry. It also provides examples, diagrams, tables, and graphs to illustrate the points and make them easier to understand.
-
Practice questions and exercises: The book contains a variety of questions and exercises at the end of each chapter to help students review and reinforce their learning. The questions range from multiple-choice and short-answer items to structured and essay-type questions. The book also provides answers and solutions to selected questions and exercises.
-
Summary and key points: The book provides a summary and key points at the end of each chapter to help students recall and revise the main ideas and facts of the chapter.
-
Learning objectives and outcomes: The book states the learning objectives and outcomes at the beginning of each chapter to help students focus on what they need to learn and achieve by the end of the chapter.
-
Experiments and investigations: The book includes experiments and investigations that students can perform in the laboratory or at home to demonstrate and explore the phenomena and principles of chemistry. The book also provides safety tips, procedures, observations, results, discussions, and conclusions for each experiment and investigation.
-
-
How to use Chemistry Matters effectively
-
To get the most out of Chemistry Matters, you should use it in conjunction with other learning resources and strategies. Here are some tips on how to use Chemistry Matters effectively:
-
-
Read the book before and after class: Reading the book before class will help you prepare for the lesson and have some background knowledge of the topic. Reading the book after class will help you consolidate your learning and fill in any gaps or doubts that you may have.
-
Do the practice questions and exercises: Doing the practice questions and exercises will help you check your understanding and apply your knowledge of the topic. It will also help you practice your problem-solving skills and prepare for your exams.
-
Review the summary and key points: Reviewing the summary and key points will help you refresh your memory and revise the important facts and concepts of the topic. It will also help you identify your strengths and weaknesses in the topic.
-
Do the experiments and investigations: Doing the experiments and investigations will help you learn by doing and discover by yourself how chemistry works in real life. It will also help you develop your scientific skills such as observation, measurement, analysis, evaluation, communication, etc.
-
-
Why should you download Chemistry Matters for free?
-
You may be wondering why you should download Chemistry Matters for free instead of buying a physical copy or renting one from a library. Well, there are many reasons why downloading Chemistry Matters for free is a smart choice. Here are some of them:
-
Save money and time
-
Downloading Chemistry Matters for free will save you money that you would otherwise spend on buying or renting a physical copy of the book. You can use that money for other purposes such as buying other books or materials that you need for your studies or hobbies. Downloading Chemistry Matters for free will also save you time that you would otherwise spend on going to a bookstore or a library to get a physical copy of the book. You can use that time for other activities such as studying more or having fun with your friends or family.
-
Access the book anytime and anywhere
-
Downloading Chemistry Matters for free will give you access to the book anytime and anywhere that you have an internet connection or a device that can read PDF files. You can read the book on your computer, laptop, tablet, smartphone, or e-reader at your convenience. You don't have to worry about losing or damaging your physical copy of the book or returning it on time to avoid fines or penalties. You can also share the book with your classmates or friends easily by sending them a link or a file.
-
Enhance your learning experience with interactive features
-
Downloading Chemistry Matters for free will enhance your learning experience with interactive features that are not available in a physical copy of the book. For example, you can zoom in or out on images or graphs to see them more clearly; you can highlight or annotate important parts of the text; you can search for keywords or phrases within the book; you can click on links or references to access more information or resources; you can watch videos or animations that explain or demonstrate some concepts or phenomena; etc.
-
chemistry matters textbook pdf download
-chemistry matters book online free
-chemistry matters ebook free download
-chemistry matters second edition pdf download
-chemistry matters book solutions free download
-chemistry matters gce o level textbook free download
-chemistry matters workbook pdf download
-chemistry matters book answers free download
-chemistry matters for the 21st century pdf download
-chemistry matters book review free download
-chemistry matters a molecular approach pdf download
-chemistry matters book summary free download
-chemistry matters an inquiry-based approach pdf download
-chemistry matters book notes free download
-chemistry matters by tan yin toon pdf download
-chemistry matters book quiz free download
-chemistry matters concepts and applications pdf download
-chemistry matters book test free download
-chemistry matters for cambridge igcse pdf download
-chemistry matters book questions free download
-chemistry matters fundamentals of chemistry pdf download
-chemistry matters book exercises free download
-chemistry matters gce n level textbook free download
-chemistry matters book worksheets free download
-chemistry matters in life and health pdf download
-chemistry matters book projects free download
-chemistry matters in the service of man pdf download
-chemistry matters book activities free download
-chemistry matters marshall cavendish pdf download
-chemistry matters book experiments free download
-chemistry matters practical book pdf download
-chemistry matters book videos free download
-chemistry matters student's book pdf download
-chemistry matters book slides free download
-chemistry matters teacher's edition pdf download
-chemistry matters book resources free download
-chemistry matters textbook answers pdf download
-chemistry matters book glossary free download
-chemistry matters textbook solutions pdf download
-chemistry matters book index free download
-how to get chemistry matters book for free
-where to find chemistry matters book free download
-best sites for chemistry matters book free download
-tips for downloading chemistry matters book for free
-alternatives to chemistry matters book free download
-benefits of reading chemistry matters book for free
-challenges of downloading chemistry matters book for free
-reviews of chemistry matters book free download
-feedback on chemistry matters book free download
-recommendations for chemistry matters book free download
-
How to download Chemistry Matters for free?
-
If you are convinced that downloading Chemistry Matters for free is a good idea, then you may be wondering how to do it. Well, it's very easy! Just follow these simple steps:
-
Step 1: Visit the official website of Chemistry Matters
-
The first step is to visit www.chemistrymatters.com, which is the official website of Chemistry Matters. There you will find all the information about the book such as its authors, editions, contents, reviews, etc. You will also find links to download the book for free in different formats such as PDF, EPUB, MOBI, etc.
-
Step 2: Register for a free account or log in with your existing one
-
The second step is to register for a free account or log in with your existing one on the website. To register, you just need to provide your name, email address, and password. You will also need to agree to the terms and conditions and privacy policy of the website. To log in, you just need to enter your email address and password. You will also have the option to log in with your social media accounts such as Facebook, Twitter, Google, etc.
-
Step 3: Choose the edition and format of the book you want to download
-
The third step is to choose the edition and format of the book you want to download. There are two editions of Chemistry Matters: the first edition, which was published in 2015, and the second edition, which was published in 2019. The second edition has been updated and revised to reflect the latest changes and developments in chemistry. You can choose either edition depending on your preference or requirement. You can also choose between different formats such as PDF, EPUB, MOBI, etc. depending on your device or reader.
-
Step 4: Click on the download button and enjoy your book
-
The final step is to click on the download button and enjoy your book. You will see a pop-up window that will ask you to confirm your download and show you the progress of the download. Once the download is complete, you will be able to open and read your book on your device or reader. You can also transfer your book to other devices or readers if you want. Congratulations! You have successfully downloaded Chemistry Matters for free!
-
Conclusion
-
Chemistry Matters is a great textbook for high school students who want to learn chemistry in a fun and easy way. It covers all the topics that you need to know for your exams, and it also offers many features and benefits that will enhance your learning experience. You can download Chemistry Matters for free from its official website in a few simple steps. By doing so, you will save money and time, access the book anytime and anywhere, and enjoy interactive features that are not available in a physical copy of the book. So what are you waiting for? Download Chemistry Matters for free today and start learning chemistry like never before!
-
FAQs
-
Here are some frequently asked questions about Chemistry Matters and its free download:
-
-
Q: Is Chemistry Matters suitable for all levels of high school students?
A: Yes, Chemistry Matters is suitable for all levels of high school students, from beginners to advanced. The book explains the concepts and theories of chemistry in a clear and concise way, and it also provides different levels of questions and exercises to cater to different abilities and needs of students.
-
Q: Is Chemistry Matters compatible with all devices and readers?
A: Yes, Chemistry Matters is compatible with all devices and readers that can read PDF, EPUB, or MOBI files. You can download the book in any of these formats depending on your preference or requirement.
-
Q: Is Chemistry Matters safe to download?
A: Yes, Chemistry Matters is safe to download from its official website. The website uses SSL encryption to protect your personal information and data. The book is also virus-free and malware-free.
-
Q: Is Chemistry Matters updated regularly?
A: Yes, Chemistry Matters is updated regularly to reflect the latest changes and developments in chemistry. The second edition of the book was published in 2019, which has been revised and improved from the first edition published in 2015.
-
Q: Is Chemistry Matters available in other languages?
A: No, Chemistry Matters is currently only available in English. However, the authors are working on translating the book into other languages such as Spanish, French, German, etc.
-
- 0a6ba089eb
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Find Facebook Password With Facebook Id !!EXCLUSIVE!!.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Find Facebook Password With Facebook Id !!EXCLUSIVE!!.md
deleted file mode 100644
index 8b229fa2335428ee4fcb83afa806d60d60eb0933..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Find Facebook Password With Facebook Id !!EXCLUSIVE!!.md
+++ /dev/null
@@ -1,29 +0,0 @@
-
-
How to Find Your Facebook Password with Your Facebook ID
-
If you have forgotten your Facebook password and you only remember your Facebook ID, which is the email address or phone number you used to sign up for Facebook, you may be able to recover your account using the Find Your Account page or from a friend's or family member's account. Here are some steps you can follow to find your Facebook password with your Facebook ID.
-
-
Go to the Find Your Account page at facebook.com/login/identify and enter your Facebook ID in the search box. Click Search.
-
You will see a list of accounts that match your Facebook ID. Choose the one that belongs to you and click This Is My Account.
-
You will be asked how you want to reset your password. You can choose to receive a code via email, SMS, or a phone call. Select the option that works best for you and click Continue.
-
Enter the code you received and click Continue.
-
You will be able to create a new password for your Facebook account. Make sure to choose a strong and secure password that you can remember. Click Continue.
-
You will be logged into your Facebook account with your new password. You can also review and update your security settings at this point.
-
-
If you don't have access to the email address or phone number associated with your Facebook ID, you may still be able to recover your account from a friend's or family member's account. Here are some steps you can follow:
From a computer, go to the profile of the account you'd like to recover.
-
Click on the three dots icon below the cover photo and select Find support or report profile.
-
Choose Something Else, then click Next.
-
Click Recover this account and follow the steps.
-
-
If none of these methods work for you, you may have to create a new Facebook account with a different Facebook ID. However, before you do that, you can try contacting Facebook support and explain your situation. They may be able to help you restore your account if you can prove your identity.
-
Alternatively, if you have saved your Facebook password on your browser or device, you may be able to view it without resetting it. Here are some ways you can do that:
-
-
If you use Google Chrome, go to Settings > Passwords > Saved Passwords and look for facebook.com. Click on the eye icon beside the password and enter your device password or use Touch ID to view it[^4^].
-
If you use Safari on iOS, go to Settings > Passwords > Website & App Passwords and look for facebook.com. Tap on it and use Touch ID to view your login details (username and password)[^4^].
-
If you use Firefox, go to Options > Privacy & Security > Logins and Passwords > Saved Logins and look for facebook.com. Click on the eye icon beside the password and enter your device password or use Touch ID to view it[^3^].
-
-
We hope this article helped you find your Facebook password with your Facebook ID. Remember to always keep your password safe and secure, and change it regularly to prevent unauthorized access to your account.
cec2833e83
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/For Those Looking for a Key rpowersaves - Reddit[1].md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/For Those Looking for a Key rpowersaves - Reddit[1].md
deleted file mode 100644
index dbf6a464da07ffffe67b298311f9b231ca4a308d..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/For Those Looking for a Key rpowersaves - Reddit[1].md
+++ /dev/null
@@ -1,115 +0,0 @@
-
-
Powersaves License Key Generator Crack: How to Get Unlimited Access to Your Favorite Games
-
Do you love playing video games on your Nintendo 3DS, Switch, or Wii U? Do you wish you could unlock more cheats, codes, and enhancements for your favorite games? Do you want to backup and transfer your game saves between different consoles and regions? If you answered yes to any of these questions, then you might be interested in Powersaves.
Powersaves is a device that allows you to backup and enhance your game saves. It works with hundreds of games across various platforms, such as Pokemon, Animal Crossing, Zelda, Mario, Fire Emblem, and more. With Powersaves, you can:
-
-
Unlock cheats, codes, and enhancements that let you modify your game experience. For example, you can get unlimited items, money, health, lives, etc.
-
Backup your game saves to your PC or cloud storage. This way, you can restore them in case of corruption or deletion.
-
Transfer your game saves between different consoles and regions. This way, you can play your games on any device or location.
-
-
To use Powersaves, you need a compatible device (such as a 3DS PowerSaves or a Switch PowerSaves Plus), a USB cable, and a PC with internet connection. You also need to download and install the Powersaves software on your PC.
-
What is a License Key and Why Do You Need It?
-
A license key is a code that activates your Powersaves device. It is usually printed on a sticker or card that comes with your device. You need a license key to access the online features of Powersaves, such as downloading cheats, codes, and enhancements from the official website.
-
You can get a license key by purchasing a Powersaves device or a subscription. A subscription gives you access to all the features of Powersaves for a certain period of time (such as 6 months or 12 months). You can buy a subscription from the official website or from other online retailers.
-
What is a License Key Generator Crack and Why Do You Need It?
-
A license key generator crack is a software that creates fake license keys for Powersaves. It is usually made by hackers or modders who want to use Powersaves without paying for it. You need a license key generator crack if you want to use Powersaves without purchasing a device or a subscription.
-
powersaves 3ds license key generator free download
-powersaves pro license key generator online
-powersaves license key generator reddit
-powersaves license key generator no survey
-powersaves license key generator 2022
-powersaves license key generator mac
-powersaves license key generator windows 10
-powersaves license key generator software
-powersaves license key generator apk
-powersaves license key generator android
-powersaves license key generator ios
-powersaves license key generator exe
-powersaves license key generator zip
-powersaves license key generator rar
-powersaves license key generator xml
-powersaves license key generator crack download
-powersaves license key generator crack reddit
-powersaves license key generator crack online
-powersaves license key generator crack no survey
-powersaves license key generator crack 2022
-powersaves license key generator crack mac
-powersaves license key generator crack windows 10
-powersaves license key generator crack software
-powersaves license key generator crack apk
-powersaves license key generator crack android
-powersaves license key generator crack ios
-powersaves license key generator crack exe
-powersaves license key generator crack zip
-powersaves license key generator crack rar
-powersaves license key generator crack xml
-how to get a free powersaves license key generator
-how to use a powersaves license key generator
-how to activate a powersaves license key generator
-how to install a powersaves license key generator
-how to download a powersaves license key generator
-how to update a powersaves license key generator
-how to fix a powersaves license key generator
-how to hack a powersaves license key generator
-how to bypass a powersaves license key generator
-how to remove a powersaves license key generator
-where to find a powersaves license key generator
-where to buy a powersaves license key generator
-where to download a powersaves license key generator
-where to get a free powersaves license key generator
-where to get a working powersaves license key generator
-where to get a legit powersaves license key generator
-where to get a cracked powersaves license key generator
-where to get a safe powersaves license key generator
-where to get a reliable powersaves license key generator
-
You can find license key generator cracks online or create your own. Some websites offer free downloads of license key generator cracks for various versions of Powersaves. Some users also share their own license key generator cracks on forums or social media. Alternatively, you can make your own license key generator crack by using programming tools and reverse engineering techniques.
-
How to Use a License Key Generator Crack to Get Unlimited Access to Powersaves
-
To use a license key generator crack to get unlimited access to Powersaves, you need to follow these steps:
-
-
Download a license key generator crack from a reliable source or make your own. Make sure it is compatible with your version of Powersaves and your operating system.
-
Run the license key generator crack and copy the generated code. The code should look like a series of letters and numbers.
-
Enter the code in the Powersaves software and enjoy unlimited access to your favorite games. You should be able to download and apply cheats, codes, and enhancements from the official website or from other sources.
-
-
What are the Risks and Benefits of Using a License Key Generator Crack for Powersaves
-
Using a license key generator crack for Powersaves has its advantages and disadvantages. Here are some of them:
-
-
-
| Benefits | Risks |
| --- | --- |
| You can save money by not buying a device or a subscription. | You can get banned from using Powersaves if the official website detects that you are using a fake license key. |
| You can access more features than the official version. For example, you can use cheats, codes, and enhancements that are not available on the official website. | You can get infected with malware if you download a license key generator crack from an untrusted source. Malware can harm your PC or steal your personal information. |
| You can customize your game experience according to your preferences. For example, you can make your games easier or harder by modifying various parameters. | You can lose your game saves if you use incompatible or corrupted cheats, codes, or enhancements. This can ruin your progress or damage your console. |
-
-
-
You should weigh the pros and cons before using a license key generator crack for Powersaves. You should also be aware of the legal and ethical implications of using such software. Using a license key generator crack for Powersaves may violate the terms of service of the official website or the copyright laws of your country.
-
Conclusion
-
Powersaves is a device that allows you to backup and enhance your game saves. It works with hundreds of games across various platforms. To use it, you need a license key that activates your device. You can get one by buying a device or a subscription from the official website or other online retailers.
-
A license key generator crack is a software that creates fake license keys for Powersaves. It allows you to use Powersaves without paying for it. You can find one online or make one yourself. However, using one has its risks and benefits. You may get banned, infected with malware, or lose your game saves. You may also violate some laws or ethics by using one.
-
You should decide whether using a license key generator crack for Powersaves is worth it for you. You should also respect the rights of the creators and owners of Powersaves and the games that you play with it.
-
Frequently Asked Questions
-
-
Q: How do I know if my license key is valid?
-
A: You can check if your license key is valid by entering it in the Powersaves software. If it is valid, you should be able to access all the online features of Powersaves without any problems.
-
Q: How do I get more cheats, codes, and enhancements for my games?
-
A: You can get more cheats, codes, and enhancements for your games by visiting the official website of Powersaves or other websites that offer them. You can also create your own cheats, codes, and enhancements by using programming tools and hacking techniques.
-
Q: How do I backup and restore my game saves?
-
A: You can backup and restore your game saves by using the backup and restore functions in the Powersaves software. You can also backup your game saves to your PC or cloud storage by copying them manually.
-
Q: How do I transfer my game saves between different consoles and regions?
A: You can transfer your game saves between different consoles and regions by using the transfer function in the Powersaves software. You can also transfer your game saves manually by copying them to and from your PC.
-
Q: How do I update my Powersaves device and software?
-
A: You can update your Powersaves device and software by connecting them to your PC and internet. The Powersaves software will automatically check for updates and prompt you to install them.
-
- 0a6ba089eb
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/AnyMusic 7.2.0 Crack 2020 With UPDATED Keygen.md b/spaces/1gistliPinn/ChatGPT4/Examples/AnyMusic 7.2.0 Crack 2020 With UPDATED Keygen.md
deleted file mode 100644
index 622d011eb757c290f0feff045429a25c0ceefc7d..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/AnyMusic 7.2.0 Crack 2020 With UPDATED Keygen.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-Full how-to guide: | Huawei P9 review: Huawei P9 has . How to install Android on Huawei P9 Lite.
-Step-by-step instructions for flashing a Huawei P9 smartphone.
-Lte, P9 Plus, P9 Lite, P9, P9 Lite using the Multi Tool.
-Huawei P9 and P9 Plus.
-On this page, you will find information about "Huawei P9 Firmware" and also learn how to replace it.
-Firmware for Huawei P9 Lite.
-Huawei P9 Lite firmware.
-Instructions for firmware smartphone Huawei P9 Lite.
-Firmware - FlashTools.
-Firmware Huawei P9 Lite VNS-AL00 on Android 7.0 Nougat.
-Huawei P9 Lite - Firmware - w3bsit3-dns.com. 8a78ff9644
-
-
-
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Bloons TD 6 33.1 APK Enjoy the New Features and Fixes.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Bloons TD 6 33.1 APK Enjoy the New Features and Fixes.md
deleted file mode 100644
index 6916a9ba77def63c5fe15f81d6c28a4305da31c5..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Bloons TD 6 33.1 APK Enjoy the New Features and Fixes.md
+++ /dev/null
@@ -1,220 +0,0 @@
-
-
Bloons TD 6 33.1 APK: Everything You Need to Know
-
If you are a fan of tower defense games, you have probably heard of Bloons TD 6. This is one of the most popular and successful games in the genre, with millions of players around the world. In this article, we will tell you everything you need to know about Bloons TD 6 33.1 APK, the latest version of the game that you can download and install on your Android device.
Bloons TD 6 is a tower defense game developed by Ninja Kiwi, a New Zealand-based company that has been making games since 2006. The game is part of the Bloons series, which started as a simple flash game where you had to pop balloons with darts.
-
In Bloons TD 6, you have to defend your base from waves of colorful balloons (called bloons) that are trying to reach the end of a path. To do this, you have to place various monkey towers along the path that can shoot darts, boomerangs, bombs, lasers, and other projectiles at the bloons.
-
The game features over a dozen types of monkey towers with three upgrade paths each and unique activated abilities. You can also use heroes, which are powerful monkeys with special skills that level up automatically during a match.
-
The game has a lot of content and variety to offer. You can play on over 60 maps with different themes and layouts. You can choose from several game modes with different rules and challenges. You can also customize your monkeys and bloons with cosmetic items from the trophy store.
-
What's New in Bloons TD 6 33.1 APK?
-
Bloons TD 6 is a game that is constantly updated with new content and improvements. The latest version of the game, 33.1, was released on June 16, 2023, and it brings a lot of new features and fixes to the game. Here are some of the highlights of the update:
A new map called The Bazaar, which is a desert-themed market with multiple paths and obstacles.
-
A new hero called Etienne, who is a drone operator that can deploy drones to attack bloons and support other monkeys.
-
A new boss event called The Dreadbloon, which is a massive metal bloon that can spawn other bloons and has multiple phases.
-
A new odyssey mode called Extreme Odyssey, which is a harder version of the regular odyssey mode with limited tower choices and lives.
-
A new trophy store item called Monkey Money Magnet, which increases the amount of monkey money you earn from playing the game.
-
Several balance changes, bug fixes, and performance improvements.
-
-
If you want to see the full patch notes of the update, you can check them out on the official website or on the game's subreddit.
-
How to Download and Install Bloons TD 6 33.1 APK?
-
If you are interested in playing Bloons TD 6 on your Android device, you have two options. You can either buy the game from the Google Play Store for $4.99, or you can download the APK file for free from various sources online.
-
An APK file is an Android application package that contains all the files and data needed to run an app on your device. By downloading and installing an APK file, you can bypass the official app store and get access to apps that are not available or restricted in your region.
-
However, there are some risks and drawbacks associated with downloading and installing APK files. For one thing, you may not get the latest updates and features of the app. For another thing, you may expose your device to malware or viruses that can harm your data or system. Therefore, you should always be careful when downloading and installing APK files from unknown sources.
-
Here are the steps you need to follow if you want to download and install Bloons TD 6 33.1 APK on your device:
Requirements for Bloons TD 6 33.1 APK
-
Before you download and install Bloons TD 6 33.1 APK, you should make sure that your device meets the minimum and recommended requirements for running the game smoothly. Here are the specifications you need to check:
-
-
-
| Minimum Requirements | Recommended Requirements |
| --- | --- |
| Android 5.0 or higher | Android 8.0 or higher |
| 2 GB of RAM | 4 GB of RAM or more |
| 1 GB of free storage space | 2 GB of free storage space or more |
| A stable internet connection | A fast and reliable internet connection |
-
-
-
If your device does not meet the minimum requirements, you may experience lag, crashes, or errors while playing the game. If your device meets the recommended requirements, you will enjoy a smooth and optimal gaming experience.
-
Download Links for Bloons TD 6 33.1 APK
-
Once you have checked your device's specifications, you can proceed to download the Bloons TD 6 33.1 APK file from one of the sources below. We have provided links to different websites that offer the APK file for free. However, we cannot guarantee the safety or quality of these files, so download them at your own risk.
A website that provides APK and OBB files for apps and games, as well as modded versions with unlimited money.
-
Installation Instructions for Bloons TD 6 33.1 APK
-
After you have downloaded the Bloons TD 6 33.1 APK file from one of the sources above, you can install it on your device by following these steps:
-
-
Go to your device's settings and enable the option to install apps from unknown sources. This will allow you to install APK files that are not from the Google Play Store.
-
Locate the APK file that you have downloaded on your device's storage. You can use a file manager app to help you find it.
-
Tap on the APK file and follow the on-screen instructions to install it. You may need to grant some permissions to the app, such as access to your storage, network, and device information.
-
Wait for the installation process to finish. You may see a confirmation message when it is done.
-
Launch the game from your app drawer or home screen and enjoy playing Bloons TD 6.
-
-
Note: If you have downloaded an OBB file along with the APK file, you will need to copy the OBB file to the Android/obb folder on your device's storage before installing the APK file. The OBB file contains additional data for the game, such as graphics and sounds.
-
How to Play Bloons TD 6?
-
Bloons TD 6 is a fun and addictive game that will keep you entertained for hours. The game has a simple and intuitive interface that makes it easy to play. However, if you are new to the game or want to improve your skills, here are some basic tips on how to play Bloons TD 6:
-
Game Modes in Bloons TD 6
-
Bloons TD 6 has several game modes that you can choose from, depending on your preference and mood. Here are some of the game modes available:
-
-
Standard Mode: This is the default mode where you can play any map with any difficulty level. You can also choose between different sub-modes, such as easy, medium, hard, impoppable, etc.
-
Co-op Mode: This is a multiplayer mode where you can team up with up to three other players online and work together to defend against the bloons. You can chat with your teammates and share money and lives.
-
Odyssey Mode: This is a special mode where you have to complete a series of maps with limited tower choices and lives. You can earn rewards for completing each map and the whole odyssey.
-
Boss Event Mode: This is a limited-time mode where you have to face a powerful boss bloon that has unique abilities and attacks. You can earn trophies and rewards for defeating the boss.
-
Monkey Towers and Heroes in Bloons TD 6
-
Bloons TD 6 has a wide range of monkey towers and heroes that you can use to pop the bloons. Each tower and hero has its own strengths, weaknesses, and abilities that you need to consider when placing them on the map. Here are some of the monkey towers and heroes available:
-
-
Dart Monkey: This is the basic tower that shoots a single dart at a single bloon. It is cheap and versatile, but not very powerful. It can be upgraded to shoot faster, farther, or more darts at once.
-
Boomerang Monkey: This is a tower that throws a boomerang that can hit multiple bloons in a curved path. It is good for dealing with grouped bloons, but not very accurate. It can be upgraded to throw faster, more, or bigger boomerangs.
-
Bomb Shooter: This is a tower that launches a bomb that explodes and pops bloons in a radius. It is good for dealing with dense clusters of bloons, but not very fast. It can be upgraded to shoot bigger, faster, or more bombs.
-
Tack Shooter: This is a tower that shoots tacks in eight directions that can pop multiple bloons. It is good for covering a large area, but not very precise. It can be upgraded to shoot more, faster, or hotter tacks.
-
Ice Monkey: This is a tower that freezes bloons in its range, slowing them down and making them vulnerable to other attacks. It is good for controlling the bloon speed, but not very damaging. It can be upgraded to freeze more, longer, or stronger bloons.
-
Glue Gunner: This is a tower that shoots glue at bloons, slowing them down and making them take more damage from other attacks. It is good for weakening the bloon defense, but not very popping. It can be upgraded to shoot more, faster, or stronger glue.
-
Sniper Monkey: This is a tower that shoots a powerful bullet that can pop any bloon type and pierce through multiple layers. It is good for dealing with high-health bloons, but not very fast. It can be upgraded to shoot faster, harder, or farther.
-
Monkey Sub: This is a tower that can only be placed on water and shoots darts or torpedoes at bloons. It is good for covering water areas, but not very flexible. It can be upgraded to shoot faster, more, or underwater.
-
Monkey Buccaneer: This is a tower that can only be placed on water and shoots cannonballs or grapeshot at bloons. It is good for covering water areas, but not very precise. It can be upgraded to shoot bigger, faster, or more projectiles.
-
Monkey Ace: This is a tower that flies in the air and drops bombs or darts at bloons. It is good for covering large areas, but not very consistent. It can be upgraded to fly faster, more accurately, or more frequently.
-
Heli Pilot: This is a tower that flies in the air and shoots darts or missiles at bloons. It is good for targeting specific bloons, but not very cheap. It can be upgraded to fly faster, more powerfully, or more autonomously.
-
Mortar Monkey: This is a tower that launches explosive shells at a target area on the map. It is good for hitting hidden or hard-to-reach bloons, but not very accurate. It can be upgraded to launch bigger, faster, or more shells.
-
Wizard Monkey: This is a tower that casts magic spells that can pop different types of bloons. It is good for dealing with various bloon properties, but not very durable. It can be upgraded to cast stronger, faster, or more spells.
-
Super Monkey: This is a tower that shoots powerful beams of energy that can pop multiple bloons at once. It is good for dealing with massive amounts of bloons, but not very affordable. It can be upgraded to shoot stronger, wider, or more beams.
-
Ninja Monkey: This is a tower that throws shurikens or caltrops at bloons. It is good for popping camo bloons and dealing critical hits, but not very fast. It can be upgraded to throw faster, more accurately, or more stealthily.
-
Alchemist: This is a tower that throws potions at monkeys or bloons. It is good for buffing other monkeys or debuffing bloons, but not very popping. It can be upgraded to throw stronger, longer-lasting, or more potions.
-
Druid: This is popping camo bloons and summoning totems. His abilities are Brambles, which creates a patch of thorns that pops bloons, and Wall of Trees, which creates a wall of trees that blocks and eats bloons.
-
Captain Churchill: This is a tank hero that shoots powerful shells and missiles at bloons. He is good for popping armored and fortified bloons and dealing massive damage. His abilities are Shell Shock, which fires a shell that pops and stuns bloons in a large radius, and MOAB Barrage, which fires a volley of missiles that target MOABs.
-
Benjamin: This is a hacker hero that uses cybernetics and hacking to manipulate the bloons and the economy. He is good for earning extra money and reducing the bloon threat. His abilities are Biohack, which boosts the attack speed of nearby monkeys for a short time, and Syphon Funding, which steals money from the bloons and reduces their health.
-
Ezili: This is a voodoo hero that uses curses and hexes to pop bloons. She is good for popping regrow and purple bloons and dealing damage based on the bloon health. Her abilities are Heartstopper, which prevents the bloons from regrowing or healing for a short time, and MOAB Hex, which damages and weakens MOABs over time.
-
Pat Fusty: This is a giant monkey hero that uses his fists and roar to pop bloons. He is good for popping large groups of bloons and buffing other monkeys with his presence. His abilities are Rallying Roar, which increases the damage and range of nearby monkeys for a short time, and Big Squeeze, which grabs and crushes a MOAB or BFB.
-
Adora: This is a divine hero that uses holy energy to pop bloons. She is good for popping all types of bloons and leveling up faster than other heroes. Her abilities are Long Range Judgement, which fires a beam of light that pops bloons in a line, and Blood Sacrifice, which sacrifices some of your monkeys to increase her level and power.
-
Brickell: This is a naval hero that can only be placed on water and uses mines and submarines to pop bloons. She is good for buffing water-based monkeys and popping submerged bloons. Her abilities are Naval Tactics, which increases the attack speed and pierce of nearby water-based monkeys for a short time, and Mega Mine, which deploys a huge mine that explodes and pops bloons in a large radius.
-
Etienne: This is a drone hero that can deploy drones to attack bloons and support other monkeys. He is good for covering multiple areas and granting camo detection to nearby monkeys. His abilities are Drone Swarm, which summons more drones to attack the bloons, and UCAV, which launches a powerful drone that fires missiles at the bloons.
-
-
Tips and Tricks for Bloons TD 6
-
Bloons TD 6 is a game that requires strategy, skill, and creativity to master. The game can be challenging at times, especially on higher difficulties or special modes. Here are some tips and tricks that can help you improve your performance and have more fun:
-
-
Experiment with different towers, heroes, and upgrades: The game has a lot of options for you to customize your defense. Try out different combinations of towers, heroes, and upgrades to see what works best for each map, mode, and situation.
-
Use monkey knowledge wisely: Monkey knowledge is a system that allows you to unlock permanent buffs and benefits for your monkeys. You can earn monkey knowledge points by leveling up or completing achievements. You can spend them on different branches of the monkey knowledge tree, such as primary, military, magic, support, powers, or heroes. Choose the ones that suit your playstyle and strategy.
-
Use powers sparingly: Powers are special items that can give you an edge in the game. You can use them to boost your monkeys, pop more bloons, or get more money. However, powers are limited in quantity and can be expensive to buy or earn. Use them only when you really need them or when you want to have some fun.
-
Watch out for special bloon properties: Bloons can have different properties that make them harder to pop or more dangerous. For example, camo bloons can only be seen by monkeys with camo detection, lead bloons can only be popped by explosive or energy attacks, regrow bloons can regenerate their layers if not popped quickly enough, etc. Learn the different types of bloon properties and how to counter them with the right towers and upgrades.
-
Plan ahead and save up for late game: The game gets harder as you progress, with more and stronger bloons appearing on the screen. You need to be prepared for the late game, where you will face MOABs, BFBs, ZOMGs, DDTs, and BADs. These are huge bloons that can take a lot of damage and spawn more bloons when popped. You need to save up money and space for powerful towers and upgrades that can deal with these threats.
-
Have fun and try new things: The game has a lot of replay value and variety, with different maps, modes, challenges, achievements, and trophies to explore. You can also create your own custom challenges and share them with other players. Don't be afraid to try new things and experiment with different strategies. You may discover something new and exciting.
-
-
Why You Should Play Bloons TD 6?
-
Bloons TD 6 is a game that has something for everyone. Whether you are a casual player who likes to relax and pop some bloons, or a hardcore player who likes to challenge yourself and test your skills, you will find something to enjoy in this game. Here are some of the reasons why you should play Bloons TD 6:
-
Pros of Bloons TD 6
-
-
It has amazing graphics and animations: The game has a colorful and vibrant art style that makes it pleasing to the eye. The game also has smooth and fluid animations that make it satisfying to watch. The game runs well on most devices and has options to adjust the graphics quality and performance.
-
It has tons of content and variety: The game has over 60 maps, over a dozen towers, over 10 heroes, over 100 upgrades, over 20 game modes, over 100 achievements, over 50 trophies, and more. The game also has regular updates that add new content and improvements to the game. You will never run out of things to do or see in this game.
-
It has a great gameplay and balance: The game has a simple but addictive gameplay that makes it easy to pick up and hard to put down. The game also has a good balance between difficulty and fun, with different options to suit your preference and skill level. The game also has a lot of strategy and depth, with different combinations of towers, heroes, upgrades, powers, and monkey knowledge.
-
It has a friendly and active community: The game has a large and loyal fan base that loves the game and supports the developers. The game also has a friendly and active community that shares tips, tricks, challenges, feedback, fan art, memes, and more. You can join the official website, subreddit, discord server, or other platforms to interact with other players and have more fun.
-
-
Cons of Bloons TD 6
-
-
It can be expensive and grindy: The game costs $4.99 to buy from the Google Play Store, which may be too much for some people. The game also has some in-app purchases that can help you progress faster or unlock more content, but they can be pricey as well. The game also requires you to grind a lot of money and experience to afford the more expensive towers and upgrades or level up your heroes and monkey knowledge.
-
It can be frustrating and repetitive: The game can be very challenging at times, especially on higher difficulties or special modes. You may encounter bloons that are too hard to pop or levels that are too long or complex. You may also lose your progress or lives due to mistakes or bad luck. The game can also get repetitive after a while, with the same bloons, towers, heroes, upgrades, powers, etc.
-
It can have some bugs and glitches: The game is not perfect and can have some bugs and glitches that can affect your gameplay or experience. For example, you may encounter crashes , freezes, lags, or errors while playing the game. You may also encounter some visual or audio glitches that can ruin the immersion or quality of the game. The developers are working hard to fix these issues, but they may still occur from time to time.
-
-
User Reviews of Bloons TD 6
-
To give you a better idea of what other players think of Bloons TD 6, here are some user reviews from different platforms, such as Steam, Google Play Store, etc. These reviews are taken verbatim from the sources and may contain some spelling or grammar errors.
-
-
Steam: "Bloons TD 6 is a great game. It has a lot of content and replay value. The graphics are amazing and the gameplay is addictive. The game is challenging but fair, and there are many ways to play it. The game also has a nice community and regular updates. I highly recommend this game to anyone who likes tower defense games or just wants to have some fun."
-
Google Play Store: "Bloons TD 6 is a good game. It has a lot of variety and options. The game is fun and relaxing. The game is also easy to play and learn. The game has some problems, though. The game is expensive and sometimes crashes. The game also has some ads and microtransactions. I like this game, but it could be better."
-
App Store: "Bloons TD 6 is an awesome game. It has a lot of maps and modes. The game is exciting and challenging. The game is also beautiful and smooth. The game has some flaws, however. The game is hard and sometimes frustrating. The game also has some bugs and glitches. I love this game, but it needs some improvement."
-
-
Conclusion
-
Bloons TD 6 is a tower defense game that will keep you entertained for hours with its colorful graphics, engaging gameplay, and varied content. Whether you are a casual or hardcore player, you will find something to enjoy in this game.
-
If you want to play Bloons TD 6 on your Android device, you can either buy it from the Google Play Store or download the APK file for free from various sources online. However, you should be careful when downloading and installing APK files from unknown sources, as they may pose some risks to your device or data.
-
If you want to learn more about Bloons TD 6, you can visit the official website, subreddit, discord server, or other platforms to get more information, tips, tricks, challenges, feedback, fan art, memes, and more.
-
We hope this article has helped you understand everything you need to know about Bloons TD 6 33.1 APK. Now go ahead and pop some bloons!
-
FAQs
-
Here are some frequently asked questions about Bloons TD 6 33.1 APK:
-
-
Q: Is Bloons TD 6 free?
-
A: No, Bloons TD 6 is not free. You have to pay $4.99 to buy it from the Google Play Store. However, you can download the APK file for free from various sources online.
-
Q: Is Bloons TD 6 offline?
-
A: Yes, Bloons TD 6 can be played offline. You don't need an internet connection to play the game, except for some features such as co-op mode, boss events, daily challenges, etc.
-
Q: Is Bloons TD 6 multiplayer?
-
A: Yes, Bloons TD 6 has a multiplayer mode called co-op mode. You can team up with up to three other players online and work together to defend against the bloons.
-
Q: Is Bloons TD 6 cross-platform?
-
A: Yes, Bloons TD 6 is cross-platform. You can play with other players who have the game on different devices or platforms, such as Android, iOS, Windows, Mac, etc.
-
Q: Is Bloons TD 6 modded?
-
A: No, Bloons TD 6 is not modded. The APK file that we have provided in this article is the original version of the game that has not been modified or hacked in any way.
-
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download D-Mod and Unlock New Abilities for Foxes in Minecraft.md b/spaces/1phancelerku/anime-remove-background/Download D-Mod and Unlock New Abilities for Foxes in Minecraft.md
deleted file mode 100644
index 0994915cd525404427965c744b6eb211ea92b693..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download D-Mod and Unlock New Abilities for Foxes in Minecraft.md
+++ /dev/null
@@ -1,133 +0,0 @@
-
-
How to Download Mod Dmod for Your Favorite Games
-
Do you love playing games on your Android device? Do you wish you could change or add something to make them more fun, challenging, or immersive? If so, you might be interested in mod dmod.
-
Mod dmod is a term that refers to modifying or adding new features to existing games, especially on Android devices. Modding can enhance the gameplay, graphics, sound, or content of a game, making it more enjoyable and satisfying. Some mods can even create entirely new games based on the original ones.
For example, you can download mods for Minecraft that add new blocks, items, creatures, biomes, dimensions, quests, and more. You can also download mods for GTA San Andreas that improve the graphics, physics, vehicles, weapons, missions, characters, and more. Or you can download mods for Dmod that let you play custom maps created by other users.
-
In this article, we will show you how to download mod dmod for your favorite games. We will also explain the benefits and risks of modding games, and provide some tips and precautions to ensure a safe and smooth modding experience.
-
Benefits of Modding Games
-
Modding games can improve your gaming experience in various ways. Here are some of the benefits of modding games:
-
-
You can customize your gaming experience according to your preferences and tastes. You can choose the mods that suit your style, mood, or interest. You can also mix and match different mods to create your own unique combination.
-
You can explore new possibilities and scenarios that are not available in the original game. You can discover new worlds, stories, characters, mechanics, and challenges that expand your gaming horizon. You can also create your own content and share it with other players.
-
You can improve the performance and compatibility of your game on different devices and platforms. You can optimize the graphics, sound, or controls of your game to match your device's specifications and capabilities. You can also fix bugs, errors, or glitches that may affect your game.
-
You can support the creative work of modders and developers who share their mods for free or for a small fee. You can appreciate their efforts and skills, and give them feedback or suggestions to improve their mods. You can also contribute to the modding community by donating, rating, reviewing, or recommending mods.
-
You can learn new skills and knowledge about game design, programming, art, and more. You can study how mods are made and how they work, and apply what you learn to your own projects. You can also collaborate with other modders and learn from their experiences.
-
-
As you can see, modding games can offer you many benefits that can make your gaming experience more enjoyable and satisfying. However, modding games also has some risks and challenges that you should be aware of.
-
Risks and Challenges of Modding Games
-
Modding games is not without its drawbacks and dangers. Here are some of the risks and challenges of modding games:
-
-
You may encounter bugs, glitches, crashes, or compatibility issues that affect your game or device. Some mods may not work properly or conflict with each other or with the original game. Some mods may also require additional resources or permissions that may slow down or harm your device.
-
You may violate the terms of service or intellectual property rights of the original game developers or publishers. Some mods may use unauthorized or illegal content or features that may infringe on the rights of the original game creators. Some mods may also be banned or removed by the game developers or publishers for violating their policies.
-
You may expose your device or data to malware, viruses, or hackers that may harm your security or privacy. Some mods may contain malicious code or software that may infect your device or steal your data. Some mods may also require you to access unsafe or untrusted websites or sources that may compromise your security or privacy.
-
You may lose your progress or achievements in the original game if you overwrite or delete any files or data. Some mods may require you to modify or replace some files or data in the original game folder. Some mods may also prevent you from saving or loading your game normally.
-
-
Therefore, you should always be careful and responsible when downloading and installing mods for your games. You should also respect the rights and wishes of the original game creators and modders, and give them proper credit and feedback for their work.
-
How to Download and Install Mods for Your Games
-
Now that you know the benefits and risks of modding games, let's see how to download and install mods for your games. The general steps and methods are as follows:
-
-
-
Find a mod that you like and want to try. You can search online for mod websites, forums, blogs, videos, reviews, or recommendations. Some popular mod websites are Mod DB, Nexus Mods, APKPure, HappyMod, Android-1, etc.
-
Download the mod file to your device. Make sure the mod file is compatible with your device's specifications and capabilities. Make sure the mod file is safe and secure from malware, viruses, or hackers. Make sure the mod file is legal and authorized by the original game developers or publishers.
-
Install the mod file on your device. Depending on the type and format of the mod file, you may need to use different methods to install it. Some common methods are:
-
Using a mod installer app: Some mods come with a mod installer app that can automatically install the mod for you. For example, Dmod Installer is a mod installer app that can install Dmod maps for you.
-
Using a file manager app: Some mods require you to manually copy or move the mod file to a specific folder on your device using a file manager app. For example, some Minecraft mods require you to copy or move the mod file to the "games/com.mojang/minecraftWorlds" folder on your device using a file manager app.
-
Using an APK file: Some mods are packaged as APK files that can be installed as standalone apps on your device. For example, some GTA San Andreas mods are APK files that can be installed as separate games on your device.
-
-
-
Launch the modded game on your device. Depending on the type and format of the mod file, you may need to use different methods to launch it. Some common methods are:
-
Using a mod launcher app: Some mods require you to use a mod launcher app to launch the modded game. For example, BlockLauncher is a mod launcher app that can launch Minecraft with mods.
-
Using the original game app: Some mods can be launched directly from the original game app. For example, some Dmod maps can be launched from the Dmod app.
-
Using the modded game app: Some mods are installed as separate apps that can be launched independently from the original game app. For example, some GTA San Andreas mods are installed as separate games that can be launched from their own icons.
-
-
-
-
These are the general steps and methods to download and install mods for your games. However, different games and mods may have different requirements and instructions, so you should always follow the specific guidelines and instructions provided by the modders or developers. You should also backup your original game files and data before installing any mods, in case something goes wrong or you want to revert to the original game.
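To make the backup advice above concrete, here is a minimal Python sketch of the backup-then-install workflow. It is only an illustration under assumed paths: the folder names, the mod file name, and the backup location are placeholders, not the layout of any particular game.

```python
import shutil
from datetime import datetime
from pathlib import Path

# Hypothetical locations -- adjust to your own game, device, and platform.
GAME_DATA = Path.home() / "games" / "com.mojang" / "minecraftWorlds"
BACKUP_ROOT = Path.home() / "mod_backups"
MOD_FILE = Path.home() / "Downloads" / "example_mod.mcworld"  # placeholder file name

def backup_game_data() -> Path:
    """Copy the whole game data folder into a time-stamped backup folder."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    backup_dir = BACKUP_ROOT / f"backup-{stamp}"
    shutil.copytree(GAME_DATA, backup_dir)  # fails if GAME_DATA does not exist
    return backup_dir

def install_mod(mod_file: Path) -> Path:
    """Copy the downloaded mod file into the game data folder."""
    destination = GAME_DATA / mod_file.name
    shutil.copy2(mod_file, destination)
    return destination

if __name__ == "__main__":
    print("Backup created at:", backup_game_data())
    print("Mod installed at:", install_mod(MOD_FILE))
```

Running the backup step before the install step means you can always restore the original folder if the mod misbehaves.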
-
Conclusion
-
In this article, we have shown you how to download mod dmod for your favorite games. We have also explained the benefits and risks of modding games, and provided some tips and precautions to ensure a safe and smooth modding experience.
-
Modding games can offer you many advantages that can make your gaming experience more enjoyable and satisfying. However, modding games also has some disadvantages and dangers that you should be aware of and avoid. Therefore, you should always be careful and responsible when downloading and installing mods for your games. You should also respect the rights and wishes of the original game creators and modders, and give them proper credit and feedback for their work.
-
If you are interested in modding games, you can explore more online resources and communities that can help you find, download, install, create, or share mods for your games. You can also learn more skills and knowledge about game design, programming, art, and more by studying how mods are made and how they work.
-
We hope this article has been helpful and informative for you. Happy modding!
-
FAQs
-
Here are some common or relevant questions that readers may have about mod dmod:
-
-
What is the difference between mod dmod and hack?
-
A mod dmod is a modification or addition of new features to an existing game, while a hack is a manipulation or alteration of the game code or data to gain an unfair advantage or bypass restrictions. Mods are usually made for fun or creativity, while hacks are usually made for cheating or exploiting. Mods are usually legal and authorized by the original game developers or publishers, while hacks are usually illegal and unauthorized by them.
-
Where can I find mods for my games?
-
You can find mods for your games online on various websites, forums, blogs, videos, reviews, or recommendations. Some popular mod websites are Mod DB, Nexus Mods, APKPure, HappyMod, Android-1, etc. You can also find mods on social media platforms such as Facebook, Twitter, Instagram, YouTube, Reddit, Discord, etc.
-
How do I know if a mod is safe and secure?
-
You can check if a mod is safe and secure by following these tips:
-
Download mods from reputable and trusted sources that have positive ratings, reviews, or feedback from other users.
-
Scan the mod file with an antivirus or anti-malware software before installing it on your device.
-
Read the description, instructions, permissions, requirements, changelog, updates, comments, or FAQs of the mod carefully before installing it on your device.
-
Avoid mods that ask for too many or unnecessary permissions or resources that may harm your device or data.
-
Avoid mods that use unauthorized or illegal content or features that may infringe on the rights of the original game creators or publishers.
-
-
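Besides scanning the file with an antivirus tool, one scriptable precaution is to verify the download against a checksum, when the modder publishes one. This is a minimal sketch assuming a SHA-256 value is available; the file name and expected hash below are placeholders.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder values -- use the real file and the hash the modder publishes.
mod_file = Path("example_mod.zip")
expected = "0000000000000000000000000000000000000000000000000000000000000000"

actual = sha256_of(mod_file)
print("Checksum matches" if actual == expected else f"Checksum mismatch: {actual}")
```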
-
How do I uninstall or remove mods from my games?
-
You can uninstall or remove mods from your games by following these steps:
-
Find the mod file or folder that you want to uninstall or remove on your device using a file manager app.
-
Delete the mod file or folder from your device, or move it to another location if you want to keep it for later use.
-
Launch the original game app on your device and check if the mod is gone or disabled.
-
Restore your original game files and data from a backup if you have one, or reinstall the original game app from the official source if you don't have one.
-
-
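If you scripted a backup before installing (as in the earlier sketch), removal and restore can be scripted too. This is only a sketch under the same assumptions: the folder and backup locations are placeholders, and some games keep their data elsewhere.

```python
import shutil
from pathlib import Path

GAME_DATA = Path.home() / "games" / "com.mojang" / "minecraftWorlds"   # placeholder
BACKUP_DIR = Path.home() / "mod_backups" / "backup-20240101-120000"    # placeholder

def remove_mod(mod_name: str) -> None:
    """Delete an installed mod file from the game data folder, if present."""
    (GAME_DATA / mod_name).unlink(missing_ok=True)

def restore_backup() -> None:
    """Replace the current game data folder with the backed-up copy."""
    shutil.rmtree(GAME_DATA)
    shutil.copytree(BACKUP_DIR, GAME_DATA)

remove_mod("example_mod.mcworld")
restore_backup()
```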
-
What are some of the best mods for my games?
-
The answer to this question depends on your personal preferences and tastes, as well as the type and genre of your games. However, here are some of the most popular and recommended mods for some of the most popular and played games on Android devices:
".join(
- [f'Title: {paper["title"]} Authors: {paper["authors"]} Score: {paper["Relevancy score"]} Reason: {paper["Reasons for match"]}'
- for paper in relevancy])
- if hallucination:
- body = "Warning: the model hallucinated some papers. We have tried to remove them, but the scores may not be accurate.
" + body
- else:
- body = "
".join(
- [f'Title: {paper["title"]} Authors: {paper["authors"]}'
- for paper in papers])
- return body
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument("--config", help="yaml config file to use", default="config.yaml")
- args = parser.parse_args()
- with open(args.config, "r") as f:
- config = yaml.safe_load(f)
- if "OPENAI_API_KEY" not in os.environ:
- raise RuntimeError("No openai api key found")
-
- topic = config["topic"]
- categories = config["categories"]
- from_email = config.get("from_email") or os.environ.get("FROM_EMAIL")
- to_email = config.get("to_email") or os.environ.get("TO_EMAIL")
- threshold = config["threshold"]
- interest = config["interest"]
- with open("digest.html", "w") as f:
- body = generate_body(topic, categories, interest, threshold)
- f.write(body)
- if os.environ.get('SENDGRID_API_KEY', None):
- sg = SendGridAPIClient(api_key=os.environ.get('SENDGRID_API_KEY'))
- from_email = Email(from_email) # Change to your verified sender
- to_email = To(to_email)
- subject = date.today().strftime("Personalized arXiv Digest, %d %b %Y")
- content = Content("text/html", body)
- mail = Mail(from_email, to_email, subject, content)
- mail_json = mail.get()
-
- # Send an HTTP POST request to /mail/send
- response = sg.client.mail.send.post(request_body=mail_json)
- if response.status_code >= 200 and response.status_code <= 300:
- print("Send test email: Success!")
- else:
- print("Send test email: Failure ({response.status_code}, {response.text})")
- else:
- print("No sendgrid api key found. Skipping email")
diff --git a/spaces/Aveygo/AstroSleuth/utils/convert_to_onnx.py b/spaces/Aveygo/AstroSleuth/utils/convert_to_onnx.py
deleted file mode 100644
index 5b737f2736a743862c075f0dcec1b1631a1220cf..0000000000000000000000000000000000000000
--- a/spaces/Aveygo/AstroSleuth/utils/convert_to_onnx.py
+++ /dev/null
@@ -1,15 +0,0 @@
-from modules.realesr import Network
-import torch
-
-src = "model.pth"
-
-model = Network(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=6, num_grow_ch=32, scale=4)
-model.load_state_dict(torch.load(src), strict=True)
-model.eval()
-
-x = torch.randn(1, 3, 512, 512)
-input_names = ["input"]
-output_names = ["output_"]
-
-dynamic_axes_dict = {'input': {0: 'batch_size', 2: 'height', 3: 'width'}, 'output': {0: 'batch_size', 2: 'height', 3: 'width'}}
-torch.onnx.export(model, x, ".".join(src.split(".")[:-1]) + ".onnx", verbose=False, input_names=input_names, output_names=output_names, dynamic_axes=dynamic_axes_dict, export_params=True)
\ No newline at end of file
diff --git a/spaces/Aziizzz/ChestXrayClassification/README.md b/spaces/Aziizzz/ChestXrayClassification/README.md
deleted file mode 100644
index dfc509b0830e66f71b7cc6012ca51d5f29541ac4..0000000000000000000000000000000000000000
--- a/spaces/Aziizzz/ChestXrayClassification/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: ChestXrayClassification
-emoji: 🌖
-colorFrom: gray
-colorTo: purple
-sdk: gradio
-sdk_version: 3.39.0
-app_file: app.py
-pinned: false
-license: openrail
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Benson/text-generation/Examples/Descargar Cama Guerras En Minecraft Educacin Edicin.md b/spaces/Benson/text-generation/Examples/Descargar Cama Guerras En Minecraft Educacin Edicin.md
deleted file mode 100644
index 7cb884a1f5dfefb335eaef080b7f32312c21c1dd..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descargar Cama Guerras En Minecraft Educacin Edicin.md
+++ /dev/null
@@ -1,52 +0,0 @@
-
-
How to Download and Play Bed Wars in Minecraft Education Edition
-
Bed Wars is one of the most popular game modes in Minecraft, where players have to protect their beds from being destroyed by other teams while trying to destroy their opponents' beds. It is a fun and exciting way to test your teamwork, strategy, and combat skills.
-
download bed wars in minecraft education edition
If you are a Minecraft Education user and want to try Bed Wars, you may be wondering how to do it. Unlike the regular version of Minecraft, Minecraft Education Edition does not have access to servers or realms where you can join other players in Bed Wars. However, there is a way to add Bed Wars to your Minecraft Education experience by downloading and importing a map and an add-on that enable the game mode.
-
In this article, we will show you how to download and play Bed Wars in Minecraft Education Edition in five easy steps. We will also give you some tips and tricks for playing Bed Wars in Minecraft Education Edition that will help you improve your game.
-
How to Download the Bed Wars Map
-
The first step to getting Bed Wars in Minecraft Education Edition is to download the Bed Wars map. You can find Bed Wars maps on various Minecraft map websites or by searching on Google. Make sure to download a Bed Wars map that is compatible with Minecraft Education Edition.
-
One of the websites where you can find a good Bed Wars map for Minecraft Education Edition is MediaFire. On this website, you can find a file called "Bedwars.mcworld" that contains a medieval-themed Bed Wars map with four teams and four islands. To download this file, simply click the green "Download" button and save it to your device.
-
How to Import the Bed Wars Map into Minecraft Education Edition
-
-
This will add the Bedwars.mcworld file to your list of worlds in Minecraft Education Edition. You can then click on it to view its details and settings.
-
How to Install the Bed Wars Add-on
-
After importing the Bedwars.mcworld file, you need to install the Bedwars add-on. The Bedwars add-on is a script that adds the Bedwars game mode to Minecraft Education Edition. You can find the Bedwars add-on on various Minecraft add-on websites or by searching on Google. Make sure to download a Bedwars add-on that is compatible with Minecraft Education Edition.
-
-
One of the websites where you can find a good Bedwars add-on for Minecraft Education Edition is MCPEDL. On this website, you can find a file called "Bedwars.zip" that contains the Bedwars add-on. To download this file, simply click the green "Download" button and save it to your device.
-
How to Activate the Bed Wars Add-on
-
Once you have downloaded the Bedwars add-on, you need to activate it in Minecraft Education Edition. To do this, open the Bedwars.mcworld file you imported and click the "Edit" button. Then click "Resource Packs" and "Add". Find the Bedwars.zip file you downloaded and click "Open".
-
This will add the Bedwars add-on to your list of resource packs in Minecraft Education Edition. You can then click on it to view its details and settings. Make sure to enable the "Experimental Gameplay" option in the settings so that the Bedwars add-on works properly.
-
After activating the Bedwars add-on, you can start playing Bed Wars in Minecraft Education Edition.
-
How to Play Bed Wars
-
Bed Wars is a team-based game mode where you have to protect your bed from being destroyed by other teams while trying to destroy their beds. The last team standing wins the game.
-
-
The objective of Bed Wars is to use your resources to buy items from the shop and use them to defend your bed and attack other beds. You can also upgrade your generator and your team's abilities with diamonds and emeralds. If your bed is destroyed, you will not be able to respawn when you die. If you destroy another team's bed, they will not be able to respawn either. The last team with a bed, or the last team alive, wins the game.
-
Tips and Tricks for Bed Wars in Minecraft Education Edition
-
Bed Wars is a game that requires strategy, teamwork, and skill. Here are some tips and tricks that will help you improve your game:
-
-
Communicate with your teammates. Use the chat or voice chat feature to coordinate your actions and share information.
-
Protect your bed. Use blocks, traps, ender pearls, and similar items to cover your bed and keep other teams from breaking it.
-
Attack other beds. Use tools, TNT, fireballs, and similar items to break other teams' beds and eliminate them from the game.
-
Use resources wisely. Do not waste your resources on unnecessary items or upgrades. Save them for important items or upgrades that will give you an advantage.
-
Be aware of your surroundings. Watch out for enemies stalking you or attacking you from different directions. Use the compass to locate the other teams and their beds.
-
-
Conclusion
-
Bed Wars is a fun and exciting game mode that you can play in Minecraft Education Edition by downloading and importing a map and an add-on that enable it. You can play Bed Wars with your friends or classmates and test your teamwork, strategy, and combat skills. You can also use Bed Wars as a learning opportunity to practice math, logic, problem solving, communication, and more.
-
If you want to try Bed Wars in Minecraft Education Edition, follow the steps in this article and start playing today. You will have a great time!
-
Frequently Asked Questions
-
-
-
Can I play Bed Wars in Minecraft Education Edition without downloading anything?
-
No, you need to download a map and an add-on that enable the Bed Wars game mode in Minecraft Education Edition.
-
Can I play Bed Wars in Minecraft Education Edition with more than four teams?
-
No, the maximum number of teams in Bed Wars on Minecraft Education Edition is four.
-
Can I play Bed Wars in Minecraft Education Edition offline?
-
No, you need an Internet connection to play Bed Wars in Minecraft Education Edition.
-
Can I play Bed Wars in Minecraft Education Edition on other devices?
-
Yes, you can play Bed Wars in Minecraft Education Edition on other devices that support it, such as Windows 10 PCs, iPads, Chromebooks, and so on.
-
Can I customize the Bed Wars map or add-on in Minecraft Education Edition?
-
Yes, you can customize the Bed Wars map or add-on in Minecraft Education Edition by editing the files or using the code creation feature. However, this may affect the compatibility or functionality of the map or add-on, so do it at your own risk.
-
-
\ No newline at end of file
diff --git a/spaces/BetterAPI/BetterChat/src/lib/server/modelEndpoint.ts b/spaces/BetterAPI/BetterChat/src/lib/server/modelEndpoint.ts
deleted file mode 100644
index 4d187da21c37cbbe8efd722c09fee1815bd1c71f..0000000000000000000000000000000000000000
--- a/spaces/BetterAPI/BetterChat/src/lib/server/modelEndpoint.ts
+++ /dev/null
@@ -1,21 +0,0 @@
-import { MODEL_ENDPOINTS } from "$env/static/private";
-import { sum } from "$lib/utils/sum";
-
-const endpoints: Array<{ endpoint: string; authorization: string; weight: number }> =
- JSON.parse(MODEL_ENDPOINTS);
-const totalWeight = sum(endpoints.map((e) => e.weight));
-
-/**
- * Find a random load-balanced endpoint
- */
-export function modelEndpoint(): { endpoint: string; authorization: string; weight: number } {
- let random = Math.random() * totalWeight;
- for (const endpoint of endpoints) {
- if (random < endpoint.weight) {
- return endpoint;
- }
- random -= endpoint.weight;
- }
-
- throw new Error("Invalid config, no endpoint found");
-}
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/urllib3/contrib/_securetransport/bindings.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/urllib3/contrib/_securetransport/bindings.py
deleted file mode 100644
index 264d564dbda676b52f446c0d25433a15939a78a3..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/urllib3/contrib/_securetransport/bindings.py
+++ /dev/null
@@ -1,519 +0,0 @@
-"""
-This module uses ctypes to bind a whole bunch of functions and constants from
-SecureTransport. The goal here is to provide the low-level API to
-SecureTransport. These are essentially the C-level functions and constants, and
-they're pretty gross to work with.
-
-This code is a bastardised version of the code found in Will Bond's oscrypto
-library. An enormous debt is owed to him for blazing this trail for us. For
-that reason, this code should be considered to be covered both by urllib3's
-license and by oscrypto's:
-
- Copyright (c) 2015-2016 Will Bond
-
- Permission is hereby granted, free of charge, to any person obtaining a
- copy of this software and associated documentation files (the "Software"),
- to deal in the Software without restriction, including without limitation
- the rights to use, copy, modify, merge, publish, distribute, sublicense,
- and/or sell copies of the Software, and to permit persons to whom the
- Software is furnished to do so, subject to the following conditions:
-
- The above copyright notice and this permission notice shall be included in
- all copies or substantial portions of the Software.
-
- THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
- AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
- DEALINGS IN THE SOFTWARE.
-"""
-from __future__ import absolute_import
-
-import platform
-from ctypes import (
- CDLL,
- CFUNCTYPE,
- POINTER,
- c_bool,
- c_byte,
- c_char_p,
- c_int32,
- c_long,
- c_size_t,
- c_uint32,
- c_ulong,
- c_void_p,
-)
-from ctypes.util import find_library
-
-from ...packages.six import raise_from
-
-if platform.system() != "Darwin":
- raise ImportError("Only macOS is supported")
-
-version = platform.mac_ver()[0]
-version_info = tuple(map(int, version.split(".")))
-if version_info < (10, 8):
- raise OSError(
- "Only OS X 10.8 and newer are supported, not %s.%s"
- % (version_info[0], version_info[1])
- )
-
-
-def load_cdll(name, macos10_16_path):
- """Loads a CDLL by name, falling back to known path on 10.16+"""
- try:
- # Big Sur is technically 11 but we use 10.16 due to the Big Sur
- # beta being labeled as 10.16.
- if version_info >= (10, 16):
- path = macos10_16_path
- else:
- path = find_library(name)
- if not path:
- raise OSError # Caught and reraised as 'ImportError'
- return CDLL(path, use_errno=True)
- except OSError:
- raise_from(ImportError("The library %s failed to load" % name), None)
-
-
-Security = load_cdll(
- "Security", "/System/Library/Frameworks/Security.framework/Security"
-)
-CoreFoundation = load_cdll(
- "CoreFoundation",
- "/System/Library/Frameworks/CoreFoundation.framework/CoreFoundation",
-)
-
-
-Boolean = c_bool
-CFIndex = c_long
-CFStringEncoding = c_uint32
-CFData = c_void_p
-CFString = c_void_p
-CFArray = c_void_p
-CFMutableArray = c_void_p
-CFDictionary = c_void_p
-CFError = c_void_p
-CFType = c_void_p
-CFTypeID = c_ulong
-
-CFTypeRef = POINTER(CFType)
-CFAllocatorRef = c_void_p
-
-OSStatus = c_int32
-
-CFDataRef = POINTER(CFData)
-CFStringRef = POINTER(CFString)
-CFArrayRef = POINTER(CFArray)
-CFMutableArrayRef = POINTER(CFMutableArray)
-CFDictionaryRef = POINTER(CFDictionary)
-CFArrayCallBacks = c_void_p
-CFDictionaryKeyCallBacks = c_void_p
-CFDictionaryValueCallBacks = c_void_p
-
-SecCertificateRef = POINTER(c_void_p)
-SecExternalFormat = c_uint32
-SecExternalItemType = c_uint32
-SecIdentityRef = POINTER(c_void_p)
-SecItemImportExportFlags = c_uint32
-SecItemImportExportKeyParameters = c_void_p
-SecKeychainRef = POINTER(c_void_p)
-SSLProtocol = c_uint32
-SSLCipherSuite = c_uint32
-SSLContextRef = POINTER(c_void_p)
-SecTrustRef = POINTER(c_void_p)
-SSLConnectionRef = c_uint32
-SecTrustResultType = c_uint32
-SecTrustOptionFlags = c_uint32
-SSLProtocolSide = c_uint32
-SSLConnectionType = c_uint32
-SSLSessionOption = c_uint32
-
-
-try:
- Security.SecItemImport.argtypes = [
- CFDataRef,
- CFStringRef,
- POINTER(SecExternalFormat),
- POINTER(SecExternalItemType),
- SecItemImportExportFlags,
- POINTER(SecItemImportExportKeyParameters),
- SecKeychainRef,
- POINTER(CFArrayRef),
- ]
- Security.SecItemImport.restype = OSStatus
-
- Security.SecCertificateGetTypeID.argtypes = []
- Security.SecCertificateGetTypeID.restype = CFTypeID
-
- Security.SecIdentityGetTypeID.argtypes = []
- Security.SecIdentityGetTypeID.restype = CFTypeID
-
- Security.SecKeyGetTypeID.argtypes = []
- Security.SecKeyGetTypeID.restype = CFTypeID
-
- Security.SecCertificateCreateWithData.argtypes = [CFAllocatorRef, CFDataRef]
- Security.SecCertificateCreateWithData.restype = SecCertificateRef
-
- Security.SecCertificateCopyData.argtypes = [SecCertificateRef]
- Security.SecCertificateCopyData.restype = CFDataRef
-
- Security.SecCopyErrorMessageString.argtypes = [OSStatus, c_void_p]
- Security.SecCopyErrorMessageString.restype = CFStringRef
-
- Security.SecIdentityCreateWithCertificate.argtypes = [
- CFTypeRef,
- SecCertificateRef,
- POINTER(SecIdentityRef),
- ]
- Security.SecIdentityCreateWithCertificate.restype = OSStatus
-
- Security.SecKeychainCreate.argtypes = [
- c_char_p,
- c_uint32,
- c_void_p,
- Boolean,
- c_void_p,
- POINTER(SecKeychainRef),
- ]
- Security.SecKeychainCreate.restype = OSStatus
-
- Security.SecKeychainDelete.argtypes = [SecKeychainRef]
- Security.SecKeychainDelete.restype = OSStatus
-
- Security.SecPKCS12Import.argtypes = [
- CFDataRef,
- CFDictionaryRef,
- POINTER(CFArrayRef),
- ]
- Security.SecPKCS12Import.restype = OSStatus
-
- SSLReadFunc = CFUNCTYPE(OSStatus, SSLConnectionRef, c_void_p, POINTER(c_size_t))
- SSLWriteFunc = CFUNCTYPE(
- OSStatus, SSLConnectionRef, POINTER(c_byte), POINTER(c_size_t)
- )
-
- Security.SSLSetIOFuncs.argtypes = [SSLContextRef, SSLReadFunc, SSLWriteFunc]
- Security.SSLSetIOFuncs.restype = OSStatus
-
- Security.SSLSetPeerID.argtypes = [SSLContextRef, c_char_p, c_size_t]
- Security.SSLSetPeerID.restype = OSStatus
-
- Security.SSLSetCertificate.argtypes = [SSLContextRef, CFArrayRef]
- Security.SSLSetCertificate.restype = OSStatus
-
- Security.SSLSetCertificateAuthorities.argtypes = [SSLContextRef, CFTypeRef, Boolean]
- Security.SSLSetCertificateAuthorities.restype = OSStatus
-
- Security.SSLSetConnection.argtypes = [SSLContextRef, SSLConnectionRef]
- Security.SSLSetConnection.restype = OSStatus
-
- Security.SSLSetPeerDomainName.argtypes = [SSLContextRef, c_char_p, c_size_t]
- Security.SSLSetPeerDomainName.restype = OSStatus
-
- Security.SSLHandshake.argtypes = [SSLContextRef]
- Security.SSLHandshake.restype = OSStatus
-
- Security.SSLRead.argtypes = [SSLContextRef, c_char_p, c_size_t, POINTER(c_size_t)]
- Security.SSLRead.restype = OSStatus
-
- Security.SSLWrite.argtypes = [SSLContextRef, c_char_p, c_size_t, POINTER(c_size_t)]
- Security.SSLWrite.restype = OSStatus
-
- Security.SSLClose.argtypes = [SSLContextRef]
- Security.SSLClose.restype = OSStatus
-
- Security.SSLGetNumberSupportedCiphers.argtypes = [SSLContextRef, POINTER(c_size_t)]
- Security.SSLGetNumberSupportedCiphers.restype = OSStatus
-
- Security.SSLGetSupportedCiphers.argtypes = [
- SSLContextRef,
- POINTER(SSLCipherSuite),
- POINTER(c_size_t),
- ]
- Security.SSLGetSupportedCiphers.restype = OSStatus
-
- Security.SSLSetEnabledCiphers.argtypes = [
- SSLContextRef,
- POINTER(SSLCipherSuite),
- c_size_t,
- ]
- Security.SSLSetEnabledCiphers.restype = OSStatus
-
- Security.SSLGetNumberEnabledCiphers.argtypes = [SSLContextRef, POINTER(c_size_t)]
- Security.SSLGetNumberEnabledCiphers.restype = OSStatus
-
- Security.SSLGetEnabledCiphers.argtypes = [
- SSLContextRef,
- POINTER(SSLCipherSuite),
- POINTER(c_size_t),
- ]
- Security.SSLGetEnabledCiphers.restype = OSStatus
-
- Security.SSLGetNegotiatedCipher.argtypes = [SSLContextRef, POINTER(SSLCipherSuite)]
- Security.SSLGetNegotiatedCipher.restype = OSStatus
-
- Security.SSLGetNegotiatedProtocolVersion.argtypes = [
- SSLContextRef,
- POINTER(SSLProtocol),
- ]
- Security.SSLGetNegotiatedProtocolVersion.restype = OSStatus
-
- Security.SSLCopyPeerTrust.argtypes = [SSLContextRef, POINTER(SecTrustRef)]
- Security.SSLCopyPeerTrust.restype = OSStatus
-
- Security.SecTrustSetAnchorCertificates.argtypes = [SecTrustRef, CFArrayRef]
- Security.SecTrustSetAnchorCertificates.restype = OSStatus
-
- Security.SecTrustSetAnchorCertificatesOnly.argtypes = [SecTrustRef, Boolean]
- Security.SecTrustSetAnchorCertificatesOnly.restype = OSStatus
-
- Security.SecTrustEvaluate.argtypes = [SecTrustRef, POINTER(SecTrustResultType)]
- Security.SecTrustEvaluate.restype = OSStatus
-
- Security.SecTrustGetCertificateCount.argtypes = [SecTrustRef]
- Security.SecTrustGetCertificateCount.restype = CFIndex
-
- Security.SecTrustGetCertificateAtIndex.argtypes = [SecTrustRef, CFIndex]
- Security.SecTrustGetCertificateAtIndex.restype = SecCertificateRef
-
- Security.SSLCreateContext.argtypes = [
- CFAllocatorRef,
- SSLProtocolSide,
- SSLConnectionType,
- ]
- Security.SSLCreateContext.restype = SSLContextRef
-
- Security.SSLSetSessionOption.argtypes = [SSLContextRef, SSLSessionOption, Boolean]
- Security.SSLSetSessionOption.restype = OSStatus
-
- Security.SSLSetProtocolVersionMin.argtypes = [SSLContextRef, SSLProtocol]
- Security.SSLSetProtocolVersionMin.restype = OSStatus
-
- Security.SSLSetProtocolVersionMax.argtypes = [SSLContextRef, SSLProtocol]
- Security.SSLSetProtocolVersionMax.restype = OSStatus
-
- try:
- Security.SSLSetALPNProtocols.argtypes = [SSLContextRef, CFArrayRef]
- Security.SSLSetALPNProtocols.restype = OSStatus
- except AttributeError:
- # Supported only in 10.12+
- pass
-
- Security.SecCopyErrorMessageString.argtypes = [OSStatus, c_void_p]
- Security.SecCopyErrorMessageString.restype = CFStringRef
-
- Security.SSLReadFunc = SSLReadFunc
- Security.SSLWriteFunc = SSLWriteFunc
- Security.SSLContextRef = SSLContextRef
- Security.SSLProtocol = SSLProtocol
- Security.SSLCipherSuite = SSLCipherSuite
- Security.SecIdentityRef = SecIdentityRef
- Security.SecKeychainRef = SecKeychainRef
- Security.SecTrustRef = SecTrustRef
- Security.SecTrustResultType = SecTrustResultType
- Security.SecExternalFormat = SecExternalFormat
- Security.OSStatus = OSStatus
-
- Security.kSecImportExportPassphrase = CFStringRef.in_dll(
- Security, "kSecImportExportPassphrase"
- )
- Security.kSecImportItemIdentity = CFStringRef.in_dll(
- Security, "kSecImportItemIdentity"
- )
-
- # CoreFoundation time!
- CoreFoundation.CFRetain.argtypes = [CFTypeRef]
- CoreFoundation.CFRetain.restype = CFTypeRef
-
- CoreFoundation.CFRelease.argtypes = [CFTypeRef]
- CoreFoundation.CFRelease.restype = None
-
- CoreFoundation.CFGetTypeID.argtypes = [CFTypeRef]
- CoreFoundation.CFGetTypeID.restype = CFTypeID
-
- CoreFoundation.CFStringCreateWithCString.argtypes = [
- CFAllocatorRef,
- c_char_p,
- CFStringEncoding,
- ]
- CoreFoundation.CFStringCreateWithCString.restype = CFStringRef
-
- CoreFoundation.CFStringGetCStringPtr.argtypes = [CFStringRef, CFStringEncoding]
- CoreFoundation.CFStringGetCStringPtr.restype = c_char_p
-
- CoreFoundation.CFStringGetCString.argtypes = [
- CFStringRef,
- c_char_p,
- CFIndex,
- CFStringEncoding,
- ]
- CoreFoundation.CFStringGetCString.restype = c_bool
-
- CoreFoundation.CFDataCreate.argtypes = [CFAllocatorRef, c_char_p, CFIndex]
- CoreFoundation.CFDataCreate.restype = CFDataRef
-
- CoreFoundation.CFDataGetLength.argtypes = [CFDataRef]
- CoreFoundation.CFDataGetLength.restype = CFIndex
-
- CoreFoundation.CFDataGetBytePtr.argtypes = [CFDataRef]
- CoreFoundation.CFDataGetBytePtr.restype = c_void_p
-
- CoreFoundation.CFDictionaryCreate.argtypes = [
- CFAllocatorRef,
- POINTER(CFTypeRef),
- POINTER(CFTypeRef),
- CFIndex,
- CFDictionaryKeyCallBacks,
- CFDictionaryValueCallBacks,
- ]
- CoreFoundation.CFDictionaryCreate.restype = CFDictionaryRef
-
- CoreFoundation.CFDictionaryGetValue.argtypes = [CFDictionaryRef, CFTypeRef]
- CoreFoundation.CFDictionaryGetValue.restype = CFTypeRef
-
- CoreFoundation.CFArrayCreate.argtypes = [
- CFAllocatorRef,
- POINTER(CFTypeRef),
- CFIndex,
- CFArrayCallBacks,
- ]
- CoreFoundation.CFArrayCreate.restype = CFArrayRef
-
- CoreFoundation.CFArrayCreateMutable.argtypes = [
- CFAllocatorRef,
- CFIndex,
- CFArrayCallBacks,
- ]
- CoreFoundation.CFArrayCreateMutable.restype = CFMutableArrayRef
-
- CoreFoundation.CFArrayAppendValue.argtypes = [CFMutableArrayRef, c_void_p]
- CoreFoundation.CFArrayAppendValue.restype = None
-
- CoreFoundation.CFArrayGetCount.argtypes = [CFArrayRef]
- CoreFoundation.CFArrayGetCount.restype = CFIndex
-
- CoreFoundation.CFArrayGetValueAtIndex.argtypes = [CFArrayRef, CFIndex]
- CoreFoundation.CFArrayGetValueAtIndex.restype = c_void_p
-
- CoreFoundation.kCFAllocatorDefault = CFAllocatorRef.in_dll(
- CoreFoundation, "kCFAllocatorDefault"
- )
- CoreFoundation.kCFTypeArrayCallBacks = c_void_p.in_dll(
- CoreFoundation, "kCFTypeArrayCallBacks"
- )
- CoreFoundation.kCFTypeDictionaryKeyCallBacks = c_void_p.in_dll(
- CoreFoundation, "kCFTypeDictionaryKeyCallBacks"
- )
- CoreFoundation.kCFTypeDictionaryValueCallBacks = c_void_p.in_dll(
- CoreFoundation, "kCFTypeDictionaryValueCallBacks"
- )
-
- CoreFoundation.CFTypeRef = CFTypeRef
- CoreFoundation.CFArrayRef = CFArrayRef
- CoreFoundation.CFStringRef = CFStringRef
- CoreFoundation.CFDictionaryRef = CFDictionaryRef
-
-except (AttributeError):
- raise ImportError("Error initializing ctypes")
-
-
-class CFConst(object):
- """
- A class object that acts as essentially a namespace for CoreFoundation
- constants.
- """
-
- kCFStringEncodingUTF8 = CFStringEncoding(0x08000100)
-
-
-class SecurityConst(object):
- """
- A class object that acts as essentially a namespace for Security constants.
- """
-
- kSSLSessionOptionBreakOnServerAuth = 0
-
- kSSLProtocol2 = 1
- kSSLProtocol3 = 2
- kTLSProtocol1 = 4
- kTLSProtocol11 = 7
- kTLSProtocol12 = 8
- # SecureTransport does not support TLS 1.3 even if there's a constant for it
- kTLSProtocol13 = 10
- kTLSProtocolMaxSupported = 999
-
- kSSLClientSide = 1
- kSSLStreamType = 0
-
- kSecFormatPEMSequence = 10
-
- kSecTrustResultInvalid = 0
- kSecTrustResultProceed = 1
- # This gap is present on purpose: this was kSecTrustResultConfirm, which
- # is deprecated.
- kSecTrustResultDeny = 3
- kSecTrustResultUnspecified = 4
- kSecTrustResultRecoverableTrustFailure = 5
- kSecTrustResultFatalTrustFailure = 6
- kSecTrustResultOtherError = 7
-
- errSSLProtocol = -9800
- errSSLWouldBlock = -9803
- errSSLClosedGraceful = -9805
- errSSLClosedNoNotify = -9816
- errSSLClosedAbort = -9806
-
- errSSLXCertChainInvalid = -9807
- errSSLCrypto = -9809
- errSSLInternal = -9810
- errSSLCertExpired = -9814
- errSSLCertNotYetValid = -9815
- errSSLUnknownRootCert = -9812
- errSSLNoRootCert = -9813
- errSSLHostNameMismatch = -9843
- errSSLPeerHandshakeFail = -9824
- errSSLPeerUserCancelled = -9839
- errSSLWeakPeerEphemeralDHKey = -9850
- errSSLServerAuthCompleted = -9841
- errSSLRecordOverflow = -9847
-
- errSecVerifyFailed = -67808
- errSecNoTrustSettings = -25263
- errSecItemNotFound = -25300
- errSecInvalidTrustSettings = -25262
-
- # Cipher suites. We only pick the ones our default cipher string allows.
- # Source: https://developer.apple.com/documentation/security/1550981-ssl_cipher_suite_values
- TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 = 0xC02C
- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 = 0xC030
- TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 = 0xC02B
- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 = 0xC02F
- TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 = 0xCCA9
- TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 = 0xCCA8
- TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 = 0x009F
- TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 = 0x009E
- TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384 = 0xC024
- TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 = 0xC028
- TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA = 0xC00A
- TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA = 0xC014
- TLS_DHE_RSA_WITH_AES_256_CBC_SHA256 = 0x006B
- TLS_DHE_RSA_WITH_AES_256_CBC_SHA = 0x0039
- TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256 = 0xC023
- TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 = 0xC027
- TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA = 0xC009
- TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA = 0xC013
- TLS_DHE_RSA_WITH_AES_128_CBC_SHA256 = 0x0067
- TLS_DHE_RSA_WITH_AES_128_CBC_SHA = 0x0033
- TLS_RSA_WITH_AES_256_GCM_SHA384 = 0x009D
- TLS_RSA_WITH_AES_128_GCM_SHA256 = 0x009C
- TLS_RSA_WITH_AES_256_CBC_SHA256 = 0x003D
- TLS_RSA_WITH_AES_128_CBC_SHA256 = 0x003C
- TLS_RSA_WITH_AES_256_CBC_SHA = 0x0035
- TLS_RSA_WITH_AES_128_CBC_SHA = 0x002F
- TLS_AES_128_GCM_SHA256 = 0x1301
- TLS_AES_256_GCM_SHA384 = 0x1302
- TLS_AES_128_CCM_8_SHA256 = 0x1305
- TLS_AES_128_CCM_SHA256 = 0x1304
diff --git a/spaces/CAPTY222/runwayml-stable-diffusion-v1-5/README.md b/spaces/CAPTY222/runwayml-stable-diffusion-v1-5/README.md
deleted file mode 100644
index ca0a1923175d16474e9d715ab932fcef778499a4..0000000000000000000000000000000000000000
--- a/spaces/CAPTY222/runwayml-stable-diffusion-v1-5/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Runwayml Stable Diffusion V1 5
-emoji: ⚡
-colorFrom: pink
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.23.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/CVPR/LIVE/pybind11/tests/test_copy_move.cpp b/spaces/CVPR/LIVE/pybind11/tests/test_copy_move.cpp
deleted file mode 100644
index 0f698bdf058dc53fceb21e504959fe334973bafb..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/pybind11/tests/test_copy_move.cpp
+++ /dev/null
@@ -1,213 +0,0 @@
-/*
- tests/test_copy_move_policies.cpp -- 'copy' and 'move' return value policies
- and related tests
-
- Copyright (c) 2016 Ben North
-
- All rights reserved. Use of this source code is governed by a
- BSD-style license that can be found in the LICENSE file.
-*/
-
-#include "pybind11_tests.h"
-#include "constructor_stats.h"
-#include <pybind11/stl.h>
-
-template <typename derived>
-struct empty {
- static const derived& get_one() { return instance_; }
- static derived instance_;
-};
-
-struct lacking_copy_ctor : public empty<lacking_copy_ctor> {
- lacking_copy_ctor() {}
- lacking_copy_ctor(const lacking_copy_ctor& other) = delete;
-};
-
-template <> lacking_copy_ctor empty<lacking_copy_ctor>::instance_ = {};
-
-struct lacking_move_ctor : public empty<lacking_move_ctor> {
- lacking_move_ctor() {}
- lacking_move_ctor(const lacking_move_ctor& other) = delete;
- lacking_move_ctor(lacking_move_ctor&& other) = delete;
-};
-
-template <> lacking_move_ctor empty<lacking_move_ctor>::instance_ = {};
-
-/* Custom type caster move/copy test classes */
-class MoveOnlyInt {
-public:
- MoveOnlyInt() { print_default_created(this); }
- MoveOnlyInt(int v) : value{std::move(v)} { print_created(this, value); }
- MoveOnlyInt(MoveOnlyInt &&m) { print_move_created(this, m.value); std::swap(value, m.value); }
- MoveOnlyInt &operator=(MoveOnlyInt &&m) { print_move_assigned(this, m.value); std::swap(value, m.value); return *this; }
- MoveOnlyInt(const MoveOnlyInt &) = delete;
- MoveOnlyInt &operator=(const MoveOnlyInt &) = delete;
- ~MoveOnlyInt() { print_destroyed(this); }
-
- int value;
-};
-class MoveOrCopyInt {
-public:
- MoveOrCopyInt() { print_default_created(this); }
- MoveOrCopyInt(int v) : value{std::move(v)} { print_created(this, value); }
- MoveOrCopyInt(MoveOrCopyInt &&m) { print_move_created(this, m.value); std::swap(value, m.value); }
- MoveOrCopyInt &operator=(MoveOrCopyInt &&m) { print_move_assigned(this, m.value); std::swap(value, m.value); return *this; }
- MoveOrCopyInt(const MoveOrCopyInt &c) { print_copy_created(this, c.value); value = c.value; }
- MoveOrCopyInt &operator=(const MoveOrCopyInt &c) { print_copy_assigned(this, c.value); value = c.value; return *this; }
- ~MoveOrCopyInt() { print_destroyed(this); }
-
- int value;
-};
-class CopyOnlyInt {
-public:
- CopyOnlyInt() { print_default_created(this); }
- CopyOnlyInt(int v) : value{std::move(v)} { print_created(this, value); }
- CopyOnlyInt(const CopyOnlyInt &c) { print_copy_created(this, c.value); value = c.value; }
- CopyOnlyInt &operator=(const CopyOnlyInt &c) { print_copy_assigned(this, c.value); value = c.value; return *this; }
- ~CopyOnlyInt() { print_destroyed(this); }
-
- int value;
-};
-PYBIND11_NAMESPACE_BEGIN(pybind11)
-PYBIND11_NAMESPACE_BEGIN(detail)
-template <> struct type_caster<MoveOnlyInt> {
- PYBIND11_TYPE_CASTER(MoveOnlyInt, _("MoveOnlyInt"));
- bool load(handle src, bool) { value = MoveOnlyInt(src.cast<int>()); return true; }
- static handle cast(const MoveOnlyInt &m, return_value_policy r, handle p) { return pybind11::cast(m.value, r, p); }
-};
-
-template <> struct type_caster<MoveOrCopyInt> {
- PYBIND11_TYPE_CASTER(MoveOrCopyInt, _("MoveOrCopyInt"));
- bool load(handle src, bool) { value = MoveOrCopyInt(src.cast<int>()); return true; }
- static handle cast(const MoveOrCopyInt &m, return_value_policy r, handle p) { return pybind11::cast(m.value, r, p); }
-};
-
-template <> struct type_caster<CopyOnlyInt> {
-protected:
- CopyOnlyInt value;
-public:
- static constexpr auto name = _("CopyOnlyInt");
- bool load(handle src, bool) { value = CopyOnlyInt(src.cast<int>()); return true; }
- static handle cast(const CopyOnlyInt &m, return_value_policy r, handle p) { return pybind11::cast(m.value, r, p); }
- static handle cast(const CopyOnlyInt *src, return_value_policy policy, handle parent) {
- if (!src) return none().release();
- return cast(*src, policy, parent);
- }
- operator CopyOnlyInt*() { return &value; }
- operator CopyOnlyInt&() { return value; }
- template <typename T_> using cast_op_type = pybind11::detail::cast_op_type<T_>;
-};
-PYBIND11_NAMESPACE_END(detail)
-PYBIND11_NAMESPACE_END(pybind11)
-
-TEST_SUBMODULE(copy_move_policies, m) {
- // test_lacking_copy_ctor
- py::class_(m, "lacking_copy_ctor")
- .def_static("get_one", &lacking_copy_ctor::get_one,
- py::return_value_policy::copy);
- // test_lacking_move_ctor
- py::class_(m, "lacking_move_ctor")
- .def_static("get_one", &lacking_move_ctor::get_one,
- py::return_value_policy::move);
-
- // test_move_and_copy_casts
- m.def("move_and_copy_casts", [](py::object o) {
- int r = 0;
- r += py::cast(o).value; /* moves */
- r += py::cast(o).value; /* moves */
- r += py::cast(o).value; /* copies */
- MoveOrCopyInt m1(py::cast(o)); /* moves */
- MoveOnlyInt m2(py::cast(o)); /* moves */
- CopyOnlyInt m3(py::cast(o)); /* copies */
- r += m1.value + m2.value + m3.value;
-
- return r;
- });
-
- // test_move_and_copy_loads
- m.def("move_only", [](MoveOnlyInt m) { return m.value; });
- m.def("move_or_copy", [](MoveOrCopyInt m) { return m.value; });
- m.def("copy_only", [](CopyOnlyInt m) { return m.value; });
- m.def("move_pair", [](std::pair p) {
- return p.first.value + p.second.value;
- });
- m.def("move_tuple", [](std::tuple t) {
- return std::get<0>(t).value + std::get<1>(t).value + std::get<2>(t).value;
- });
- m.def("copy_tuple", [](std::tuple t) {
- return std::get<0>(t).value + std::get<1>(t).value;
- });
- m.def("move_copy_nested", [](std::pair>, MoveOrCopyInt>> x) {
- return x.first.value + std::get<0>(x.second.first).value + std::get<1>(x.second.first).value +
- std::get<0>(std::get<2>(x.second.first)).value + x.second.second.value;
- });
- m.def("move_and_copy_cstats", []() {
- ConstructorStats::gc();
- // Reset counts to 0 so that previous tests don't affect later ones:
- auto &mc = ConstructorStats::get();
- mc.move_assignments = mc.move_constructions = mc.copy_assignments = mc.copy_constructions = 0;
- auto &mo = ConstructorStats::get();
- mo.move_assignments = mo.move_constructions = mo.copy_assignments = mo.copy_constructions = 0;
- auto &co = ConstructorStats::get();
- co.move_assignments = co.move_constructions = co.copy_assignments = co.copy_constructions = 0;
- py::dict d;
- d["MoveOrCopyInt"] = py::cast(mc, py::return_value_policy::reference);
- d["MoveOnlyInt"] = py::cast(mo, py::return_value_policy::reference);
- d["CopyOnlyInt"] = py::cast(co, py::return_value_policy::reference);
- return d;
- });
-#ifdef PYBIND11_HAS_OPTIONAL
- // test_move_and_copy_load_optional
- m.attr("has_optional") = true;
- m.def("move_optional", [](std::optional o) {
- return o->value;
- });
- m.def("move_or_copy_optional", [](std::optional o) {
- return o->value;
- });
- m.def("copy_optional", [](std::optional o) {
- return o->value;
- });
- m.def("move_optional_tuple", [](std::optional> x) {
- return std::get<0>(*x).value + std::get<1>(*x).value + std::get<2>(*x).value;
- });
-#else
- m.attr("has_optional") = false;
-#endif
-
- // #70 compilation issue if operator new is not public
- struct PrivateOpNew {
- int value = 1;
- private:
-#if defined(_MSC_VER)
-# pragma warning(disable: 4822) // warning C4822: local class member function does not have a body
-#endif
- void *operator new(size_t bytes);
- };
- py::class_(m, "PrivateOpNew").def_readonly("value", &PrivateOpNew::value);
- m.def("private_op_new_value", []() { return PrivateOpNew(); });
- m.def("private_op_new_reference", []() -> const PrivateOpNew & {
- static PrivateOpNew x{};
- return x;
- }, py::return_value_policy::reference);
-
- // test_move_fallback
- // #389: rvp::move should fall-through to copy on non-movable objects
- struct MoveIssue1 {
- int v;
- MoveIssue1(int v) : v{v} {}
- MoveIssue1(const MoveIssue1 &c) = default;
- MoveIssue1(MoveIssue1 &&) = delete;
- };
- py::class_(m, "MoveIssue1").def(py::init()).def_readwrite("value", &MoveIssue1::v);
-
- struct MoveIssue2 {
- int v;
- MoveIssue2(int v) : v{v} {}
- MoveIssue2(MoveIssue2 &&) = default;
- };
- py::class_(m, "MoveIssue2").def(py::init()).def_readwrite("value", &MoveIssue2::v);
-
- m.def("get_moveissue1", [](int i) { return new MoveIssue1(i); }, py::return_value_policy::move);
- m.def("get_moveissue2", [](int i) { return MoveIssue2(i); }, py::return_value_policy::move);
-}
diff --git a/spaces/CVPR/LIVE/thrust/thrust/detail/config/cpp_dialect.h b/spaces/CVPR/LIVE/thrust/thrust/detail/config/cpp_dialect.h
deleted file mode 100644
index 5b7ecc2ebe1f3c525c08bc0691e82d5650f29423..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/detail/config/cpp_dialect.h
+++ /dev/null
@@ -1,124 +0,0 @@
-/*
- * Copyright 2020 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-/*! \file cpp_dialect.h
- * \brief Detect the version of the C++ standard used by the compiler.
- */
-
-#pragma once
-
-#include <thrust/detail/config/compiler.h>
-
-// Deprecation warnings may be silenced by defining the following macros. These
-// may be combined.
-// - THRUST_IGNORE_DEPRECATED_CPP_DIALECT:
-// Ignore all deprecated C++ dialects and outdated compilers.
-// - THRUST_IGNORE_DEPRECATED_CPP_11:
-// Ignore deprecation warnings when compiling with C++11. C++03 and outdated
-// compilers will still issue warnings.
-// - THRUST_IGNORE_DEPRECATED_COMPILER
-// Ignore deprecation warnings when using deprecated compilers. Compiling
-// with C++03 and C++11 will still issue warnings.
-
-// Check for the CUB opt-outs as well:
-#if !defined(THRUST_IGNORE_DEPRECATED_CPP_DIALECT) && \
- defined(CUB_IGNORE_DEPRECATED_CPP_DIALECT)
-# define THRUST_IGNORE_DEPRECATED_CPP_DIALECT
-#endif
-#if !defined(THRUST_IGNORE_DEPRECATED_CPP_11) && \
- defined(CUB_IGNORE_DEPRECATED_CPP_11)
-# define THRUST_IGNORE_DEPRECATED_CPP_11
-#endif
-#if !defined(THRUST_IGNORE_DEPRECATED_COMPILER) && \
- defined(CUB_IGNORE_DEPRECATED_COMPILER)
-# define THRUST_IGNORE_DEPRECATED_COMPILER
-#endif
-
-#ifdef THRUST_IGNORE_DEPRECATED_CPP_DIALECT
-# define THRUST_IGNORE_DEPRECATED_CPP_11
-# define THRUST_IGNORE_DEPRECATED_COMPILER
-#endif
-
-// Define this to override the built-in detection.
-#ifndef THRUST_CPP_DIALECT
-
-// MSVC does not define __cplusplus correctly. _MSVC_LANG is used instead.
-// This macro is only defined in MSVC 2015U3+.
-# ifdef _MSVC_LANG // Do not replace with THRUST_HOST_COMPILER test (see above)
-// MSVC2015 reports C++14 but lacks extended constexpr support. Treat as C++11.
-# if THRUST_MSVC_VERSION < 1910 && _MSVC_LANG > 201103L /* MSVC < 2017 && CPP > 2011 */
-# define THRUST_CPLUSPLUS 201103L /* Fix to 2011 */
-# else
-# define THRUST_CPLUSPLUS _MSVC_LANG /* We'll trust this for now. */
-# endif // MSVC 2015 C++14 fix
-# else
-# define THRUST_CPLUSPLUS __cplusplus
-# endif
-
-// Detect current dialect:
-# if THRUST_CPLUSPLUS < 201103L
-# define THRUST_CPP_DIALECT 2003
-# elif THRUST_CPLUSPLUS < 201402L
-# define THRUST_CPP_DIALECT 2011
-# elif THRUST_CPLUSPLUS < 201703L
-# define THRUST_CPP_DIALECT 2014
-# elif THRUST_CPLUSPLUS == 201703L
-# define THRUST_CPP_DIALECT 2017
-# elif THRUST_CPLUSPLUS > 201703L // unknown, but is higher than 2017.
-# define THRUST_CPP_DIALECT 2020
-# endif
-
-# undef THRUST_CPLUSPLUS // cleanup
-
-#endif // !THRUST_CPP_DIALECT
-
-// Define THRUST_COMPILER_DEPRECATION macro:
-#if THRUST_HOST_COMPILER == THRUST_HOST_COMPILER_MSVC
-# define THRUST_COMP_DEPR_IMPL(msg) \
- __pragma(message(__FILE__ ":" THRUST_COMP_DEPR_IMPL0(__LINE__) ": warning: " #msg))
-# define THRUST_COMP_DEPR_IMPL0(x) THRUST_COMP_DEPR_IMPL1(x)
-# define THRUST_COMP_DEPR_IMPL1(x) #x
-#else // clang / gcc:
-# define THRUST_COMP_DEPR_IMPL(msg) THRUST_COMP_DEPR_IMPL0(GCC warning #msg)
-# define THRUST_COMP_DEPR_IMPL0(expr) _Pragma(#expr)
-# define THRUST_COMP_DEPR_IMPL1 /* intentionally blank */
-#endif
-
-#define THRUST_COMPILER_DEPRECATION(REQ, FIX) \
- THRUST_COMP_DEPR_IMPL(Thrust requires REQ. Please FIX. Define THRUST_IGNORE_DEPRECATED_CPP_DIALECT to suppress this message.)
-
-// Minimum required compiler checks:
-#ifndef THRUST_IGNORE_DEPRECATED_COMPILER
-# if THRUST_HOST_COMPILER == THRUST_HOST_COMPILER_GCC && THRUST_GCC_VERSION < 50000
- THRUST_COMPILER_DEPRECATION(GCC 5.0, upgrade your compiler);
-# endif
-# if THRUST_HOST_COMPILER == THRUST_HOST_COMPILER_CLANG && THRUST_CLANG_VERSION < 60000
- THRUST_COMPILER_DEPRECATION(Clang 6.0, upgrade your compiler);
-# endif
-# if THRUST_HOST_COMPILER == THRUST_HOST_COMPILER_MSVC && THRUST_MSVC_VERSION < 1910
- THRUST_COMPILER_DEPRECATION(MSVC 2017, upgrade your compiler);
-# endif
-#endif
-
-#if !defined(THRUST_IGNORE_DEPRECATED_CPP_DIALECT) && THRUST_CPP_DIALECT < 2014 && \
- (THRUST_CPP_DIALECT != 2011 || !defined(THRUST_IGNORE_DEPRECATED_CPP_11))
- THRUST_COMPILER_DEPRECATION(C++14, pass -std=c++14 to your compiler);
-#endif
-
-#undef THRUST_COMPILER_DEPRECATION
-#undef THRUST_COMP_DEPR_IMPL
-#undef THRUST_COMP_DEPR_IMPL0
-#undef THRUST_COMP_DEPR_IMPL1
diff --git a/spaces/CVPR/LIVE/thrust/thrust/detail/seq.h b/spaces/CVPR/LIVE/thrust/thrust/detail/seq.h
deleted file mode 100644
index b548652d2d9d24c5cd143e39a5184182175453a8..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/detail/seq.h
+++ /dev/null
@@ -1,53 +0,0 @@
-/*
- * Copyright 2008-2018 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/detail/allocator_aware_execution_policy.h>
-#include <thrust/system/detail/sequential/execution_policy.h>
-
-namespace thrust
-{
-namespace detail
-{
-
-
-struct seq_t : thrust::system::detail::sequential::execution_policy<seq_t>,
- thrust::detail::allocator_aware_execution_policy<
- thrust::system::detail::sequential::execution_policy>
-{
- __host__ __device__
- THRUST_CONSTEXPR seq_t() : thrust::system::detail::sequential::execution_policy<seq_t>() {}
-
- // allow any execution_policy to convert to seq_t
- template<typename DerivedPolicy>
- __host__ __device__
- seq_t(const thrust::execution_policy<DerivedPolicy> &)
- : thrust::system::detail::sequential::execution_policy<seq_t>()
- {}
-};
-
-
-} // end detail
-
-
-THRUST_INLINE_CONSTANT detail::seq_t seq;
-
-
-} // end thrust
-
-
diff --git a/spaces/CVPR/LIVE/thrust/thrust/device_allocator.h b/spaces/CVPR/LIVE/thrust/thrust/device_allocator.h
deleted file mode 100644
index f5ff0d9654c997a8fcccb24db9707cd43cf18f17..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/device_allocator.h
+++ /dev/null
@@ -1,146 +0,0 @@
-/*
- * Copyright 2008-2018 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-
-/*! \file device_allocator.h
- * \brief An allocator which creates new elements in device memory
- */
-
-#pragma once
-
-#include
-#include
-#include
-#include
-
-#include
-#include
-
-namespace thrust
-{
-
-/** \addtogroup memory_resources Memory Resources
- * \ingroup memory_management_classes
- * \{
- */
-
-/*! Memory resource adaptor that turns any memory resource that returns a fancy
- * with the same tag as \p device_ptr, and adapts it to a resource that returns
- * a \p device_ptr.
- */
-template<typename Upstream>
-class device_ptr_memory_resource THRUST_FINAL
- : public thrust::mr::memory_resource<
- device_ptr<void>
- >
-{
- typedef typename Upstream::pointer upstream_ptr;
-
-public:
- /*! Initialize the adaptor with the global instance of the upstream resource. Obtains
- * the global instance by calling \p get_global_resource.
- */
- __host__
- device_ptr_memory_resource() : m_upstream(mr::get_global_resource<Upstream>())
- {
- }
-
- /*! Initialize the adaptor with an upstream resource.
- *
- * \param upstream the upstream memory resource to adapt.
- */
- __host__
- device_ptr_memory_resource(Upstream * upstream) : m_upstream(upstream)
- {
- }
-
- THRUST_NODISCARD __host__
- virtual pointer do_allocate(std::size_t bytes, std::size_t alignment = THRUST_MR_DEFAULT_ALIGNMENT) THRUST_OVERRIDE
- {
- return pointer(m_upstream->do_allocate(bytes, alignment).get());
- }
-
- __host__
- virtual void do_deallocate(pointer p, std::size_t bytes, std::size_t alignment) THRUST_OVERRIDE
- {
- m_upstream->do_deallocate(upstream_ptr(p.get()), bytes, alignment);
- }
-
-private:
- Upstream * m_upstream;
-};
-
-/*! \}
- */
-
-/*! \addtogroup memory_management Memory Management
- * \addtogroup memory_management_classes Memory Management Classes
- * \ingroup memory_management
- * \{
- */
-template<typename T>
-class device_allocator
- : public thrust::mr::stateless_resource_allocator<
- T,
- device_ptr_memory_resource
- >
-{
- typedef thrust::mr::stateless_resource_allocator<
- T,
- device_ptr_memory_resource
- > base;
-
-public:
- /*! The \p rebind metafunction provides the type of a \p device_allocator
- * instantiated with another type.
- *
- * \tparam U the other type to use for instantiation.
- */
- template<typename U>
- struct rebind
- {
- /*! The typedef \p other gives the type of the rebound \p device_allocator.
- */
- typedef device_allocator<U> other;
- };
-
- /*! Default constructor has no effect. */
- __host__
- device_allocator() {}
-
- /*! Copy constructor has no effect. */
- __host__
- device_allocator(const device_allocator& other) : base(other) {}
-
- /*! Constructor from other \p device_allocator has no effect. */
- template<typename U>
- __host__
- device_allocator(const device_allocator<U>& other) : base(other) {}
-
-#if THRUST_CPP_DIALECT >= 2011
- device_allocator & operator=(const device_allocator &) = default;
-#endif
-
- /*! Destructor has no effect. */
- __host__
- ~device_allocator() {}
-};
-
-/*! \}
- */
-
-} // end thrust
-
diff --git a/spaces/CVPR/Text2Human/README.md b/spaces/CVPR/Text2Human/README.md
deleted file mode 100644
index 4022ed94445b9b36774b5fd7875bf4a592bb835c..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Text2Human/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Text2Human
-emoji: 🏃
-colorFrom: purple
-colorTo: gray
-sdk: gradio
-sdk_version: 3.0.17
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/CarperAI/pile-v2-eda/app.py b/spaces/CarperAI/pile-v2-eda/app.py
deleted file mode 100644
index f6062871897f9ef6bf45d8b5497752648042cacc..0000000000000000000000000000000000000000
--- a/spaces/CarperAI/pile-v2-eda/app.py
+++ /dev/null
@@ -1,67 +0,0 @@
-import streamlit as st
-import datasets
-import os
-import json
-from transformers import AutoTokenizer
-import ast
-import re
-
-version = st.sidebar.selectbox("Choose a version", ["init","local_dedup", "reformatted"])
-if version == "init":
- CACHE_DIR = "cache_ds/" #Use this to build the dataset
-elif version == "local_dedup":
- CACHE_DIR = "local_dedup/"
-elif version == "reformatted":
- CACHE_DIR = "reformatted/"
-contribution_json = "contributors.json"
-
-contribution_dict = json.load(open(contribution_json,"r"))
-IGNORE_LIST = ["Bible","Tanzil","GNOME"]
-
-splits = [split for split in os.listdir(CACHE_DIR) if split not in IGNORE_LIST]
-
-cached_ds = os.listdir(CACHE_DIR)
-tokenizer = AutoTokenizer.from_pretrained('EleutherAI/gpt-neox-20b')
-
-
-def load_page(split):
-    with st.spinner('Downloading and building dataset...'):
- if split not in cached_ds:
- ds = datasets.load_dataset('CarperAI/pile-v2-small-filtered',"train", data_files="data/"+split+"/data.json")
- else:
- ds = datasets.load_from_disk(CACHE_DIR+split)
-        print("Successfully loaded "+split)
- st.title("Dataset Explorer")
- st.write(f"# {split}")
- if split in contribution_dict:
- st.caption(f"Contributors: {','.join(contribution_dict[split])}")
- else:
- st.caption(f"Needs to be updated....")
- with st.form("dataset_form"):
- index = st.slider('Select a row', 0, len(ds)-1, 0)
- if st.form_submit_button("Load"):
- st.write(f"Row {index}")
- data = ds[index]
- content = data["text"]
- meta = data["meta"]
- with st.expander("Render Content"):
- st.write(content)
- with st.expander("Raw Content"):
- st.text(content)
- with st.expander("Metadata and Metrics"):
- st.write("### Meta:")
- try:
- st.write(ast.literal_eval(meta))
- except:
- st.write(meta)
- # Tokenizer-related count
- tokenized = tokenizer(content, return_length=True)['length'][0]
-                token_count_metric = st.metric("Token Count (compared to 2048)",value=tokenized,delta=2048-tokenized)
- #Word related count
- split_words = re.findall(r'\w+', content)
- word_count_metric = st.metric("Word Count",value=len(split_words))
-
-
-
-demo_name = st.sidebar.selectbox("Choose a demo", splits)
-load_page(demo_name)
\ No newline at end of file
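For reference, the token and word counts feeding the two `st.metric` widgets above can be reproduced outside Streamlit. The sketch below is a minimal standalone version of that logic; it assumes the `transformers` package is installed and that the `EleutherAI/gpt-neox-20b` tokenizer files can be downloaded, and it is not part of the original app.

```python
import re

from transformers import AutoTokenizer

# Same tokenizer the app loads; the files are downloaded on first use.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")


def count_stats(text: str) -> dict:
    # Token count, mirroring tokenizer(content, return_length=True)['length'][0] above.
    token_count = tokenizer(text, return_length=True)["length"][0]
    # Word count, mirroring re.findall(r'\w+', content) above.
    word_count = len(re.findall(r"\w+", text))
    return {"tokens": token_count, "words": word_count}


print(count_stats("The quick brown fox jumps over the lazy dog."))
```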
diff --git a/spaces/Cecil8352/vits-models/app.py b/spaces/Cecil8352/vits-models/app.py
deleted file mode 100644
index ffcfee009308052863d7569a661fa3adebe6332e..0000000000000000000000000000000000000000
--- a/spaces/Cecil8352/vits-models/app.py
+++ /dev/null
@@ -1,291 +0,0 @@
-# coding=utf-8
-import os
-import re
-import argparse
-import utils
-import commons
-import json
-import torch
-import gradio as gr
-from models import SynthesizerTrn
-from text import text_to_sequence, _clean_text
-from torch import no_grad, LongTensor
-import gradio.processing_utils as gr_processing_utils
-import logging
-logging.getLogger('numba').setLevel(logging.WARNING)
-limitation = os.getenv("SYSTEM") == "spaces" # limit text and audio length in huggingface spaces
-
-hps_ms = utils.get_hparams_from_file(r'config/config.json')
-
-audio_postprocess_ori = gr.Audio.postprocess
-
-def audio_postprocess(self, y):
- data = audio_postprocess_ori(self, y)
- if data is None:
- return None
- return gr_processing_utils.encode_url_or_file_to_base64(data["name"])
-
-
-gr.Audio.postprocess = audio_postprocess
-
-def get_text(text, hps, is_symbol):
- text_norm, clean_text = text_to_sequence(text, hps.symbols, [] if is_symbol else hps.data.text_cleaners)
- if hps.data.add_blank:
- text_norm = commons.intersperse(text_norm, 0)
- text_norm = LongTensor(text_norm)
- return text_norm, clean_text
-
-def create_tts_fn(net_g_ms, speaker_id):
- def tts_fn(text, language, noise_scale, noise_scale_w, length_scale, is_symbol):
- text = text.replace('\n', ' ').replace('\r', '').replace(" ", "")
- if limitation:
- text_len = len(re.sub("\[([A-Z]{2})\]", "", text))
- max_len = 100
- if is_symbol:
- max_len *= 3
- if text_len > max_len:
- return "Error: Text is too long", None
- if not is_symbol:
- if language == 0:
- text = f"[ZH]{text}[ZH]"
- elif language == 1:
- text = f"[JA]{text}[JA]"
- else:
- text = f"{text}"
- stn_tst, clean_text = get_text(text, hps_ms, is_symbol)
- with no_grad():
- x_tst = stn_tst.unsqueeze(0).to(device)
- x_tst_lengths = LongTensor([stn_tst.size(0)]).to(device)
- sid = LongTensor([speaker_id]).to(device)
- audio = net_g_ms.infer(x_tst, x_tst_lengths, sid=sid, noise_scale=noise_scale, noise_scale_w=noise_scale_w,
- length_scale=length_scale)[0][0, 0].data.cpu().float().numpy()
-
- return "Success", (22050, audio)
- return tts_fn
-
-def create_to_symbol_fn(hps):
- def to_symbol_fn(is_symbol_input, input_text, temp_lang):
- if temp_lang == 0:
- clean_text = f'[ZH]{input_text}[ZH]'
- elif temp_lang == 1:
- clean_text = f'[JA]{input_text}[JA]'
- else:
- clean_text = input_text
- return _clean_text(clean_text, hps.data.text_cleaners) if is_symbol_input else ''
-
- return to_symbol_fn
-def change_lang(language):
- if language == 0:
- return 0.6, 0.668, 1.2
- elif language == 1:
- return 0.6, 0.668, 1
- else:
- return 0.6, 0.668, 1
-
-download_audio_js = """
-() =>{{
- let root = document.querySelector("body > gradio-app");
- if (root.shadowRoot != null)
- root = root.shadowRoot;
- let audio = root.querySelector("#tts-audio-{audio_id}").querySelector("audio");
- let text = root.querySelector("#input-text-{audio_id}").querySelector("textarea");
- if (audio == undefined)
- return;
- text = text.value;
- if (text == undefined)
- text = Math.floor(Math.random()*100000000);
- audio = audio.src;
- let oA = document.createElement("a");
- oA.download = text.substr(0, 20)+'.wav';
- oA.href = audio;
- document.body.appendChild(oA);
- oA.click();
- oA.remove();
-}}
-"""
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--device', type=str, default='cpu')
- parser.add_argument('--api', action="store_true", default=False)
- parser.add_argument("--share", action="store_true", default=False, help="share gradio app")
- parser.add_argument("--all", action="store_true", default=False, help="enable all models")
- args = parser.parse_args()
- device = torch.device(args.device)
- categories = ["Blue Archive", "Lycoris Recoil"]
- others = {
- "Princess Connect! Re:Dive": "https://huggingface.co/spaces/sayashi/vits-models-pcr",
- "Genshin Impact": "https://huggingface.co/spaces/sayashi/vits-models-genshin-bh3",
- "Honkai Impact 3rd": "https://huggingface.co/spaces/sayashi/vits-models-genshin-bh3",
- "Overwatch 2": "https://huggingface.co/spaces/sayashi/vits-models-ow2",
- }
- if args.all:
- categories = ["Blue Archive", "Lycoris Recoil", "Princess Connect! Re:Dive", "Genshin Impact", "Honkai Impact 3rd", "Overwatch 2"]
- others = {}
- models = []
- with open("pretrained_models/info.json", "r", encoding="utf-8") as f:
- models_info = json.load(f)
- for i, info in models_info.items():
- if info['title'].split("-")[0] not in categories or not info['enable']:
- continue
- sid = info['sid']
- name_en = info['name_en']
- name_zh = info['name_zh']
- title = info['title']
- cover = f"pretrained_models/{i}/{info['cover']}"
- example = info['example']
- language = info['language']
- net_g_ms = SynthesizerTrn(
- len(hps_ms.symbols),
- hps_ms.data.filter_length // 2 + 1,
- hps_ms.train.segment_size // hps_ms.data.hop_length,
- n_speakers=hps_ms.data.n_speakers if info['type'] == "multi" else 0,
- **hps_ms.model)
- utils.load_checkpoint(f'pretrained_models/{i}/{i}.pth', net_g_ms, None)
- _ = net_g_ms.eval().to(device)
- models.append((sid, name_en, name_zh, title, cover, example, language, net_g_ms, create_tts_fn(net_g_ms, sid), create_to_symbol_fn(hps_ms)))
- with gr.Blocks() as app:
- gr.Markdown(
-            "# vits-models\n"
-            "## Please do not generate content that could infringe upon the rights or cause harm to individuals or organizations.\n"
-            "## 请不要生成会对个人以及组织造成侵害的内容\n"
-            "\n\n"
- "[](https://colab.research.google.com/drive/10QOk9NPgoKZUXkIhhuVaZ7SYra1MPMKH?usp=share_link)\n\n"
- "[](https://huggingface.co/spaces/sayashi/vits-models?duplicate=true)\n\n"
- "[](https://github.com/SayaSS/vits-finetuning)"
- )
-
- with gr.Tabs():
- for category in categories:
- with gr.TabItem(category):
- with gr.TabItem("EN"):
- for (sid, name_en, name_zh, title, cover, example, language, net_g_ms, tts_fn, to_symbol_fn) in models:
- if title.split("-")[0] != category:
- continue
- with gr.TabItem(name_en):
- with gr.Row():
- gr.Markdown(
- '
You can skip the queue and load custom models in the colab:
- Running on {device}{(" in a Google Colab." if is_colab else "")}
-
-
You can also duplicate this space and upgrade to gpu by going to settings:
-
-
- """
- )
- with gr.Row():
-
- with gr.Column(scale=55):
- with gr.Group():
- model_name = gr.Dropdown(label="Model", choices=[m.name for m in models], value=current_model.name)
- with gr.Box(visible=False) as custom_model_group:
- custom_model_path = gr.Textbox(label="Custom model path", placeholder="Path to model, e.g. nitrosocke/Arcane-Diffusion", interactive=True)
- gr.HTML("
Custom models have to be downloaded first, so give it some time.
- """)
- demo.load(update_state_info, inputs=state_info, outputs=state_info, every=0.5, show_progress=False)
-print(f"Space built in {time.time() - start_time:.2f} seconds")
-# if not is_colab:
-demo.queue(concurrency_count=1)
-demo.launch(debug=is_colab, share=True)
diff --git a/spaces/Sakil/image_generator/README.md b/spaces/Sakil/image_generator/README.md
deleted file mode 100644
index 09dc1fcd345370d876c1dfa51a6ad383f2d35837..0000000000000000000000000000000000000000
--- a/spaces/Sakil/image_generator/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Image_generator
-emoji: 👀
-colorFrom: purple
-colorTo: green
-sdk: gradio
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/Salesforce/EDICT/my_half_diffusers/pipelines/stochastic_karras_ve/__init__.py b/spaces/Salesforce/EDICT/my_half_diffusers/pipelines/stochastic_karras_ve/__init__.py
deleted file mode 100644
index db2582043781130794e01b96b3e6beecbfe9f369..0000000000000000000000000000000000000000
--- a/spaces/Salesforce/EDICT/my_half_diffusers/pipelines/stochastic_karras_ve/__init__.py
+++ /dev/null
@@ -1,2 +0,0 @@
-# flake8: noqa
-from .pipeline_stochastic_karras_ve import KarrasVePipeline
diff --git a/spaces/Sambhavnoobcoder/stable-diffusion-inpainting/header.html b/spaces/Sambhavnoobcoder/stable-diffusion-inpainting/header.html
deleted file mode 100644
index ebad096b0cd71c23b5a8ad8287d2d20d04903f09..0000000000000000000000000000000000000000
--- a/spaces/Sambhavnoobcoder/stable-diffusion-inpainting/header.html
+++ /dev/null
@@ -1,18 +0,0 @@
-
-
-
- Fashion-Generation Using Image Inpainting
-
-
-
-
-        Grab any image you would like to change or modify, paint in the area that you would like to change, pass in the required change as a prompt, and press inpaint to generate the required image.
-
-
-
\ No newline at end of file
diff --git a/spaces/Sandiago21/speech-to-speech-translation-german/README.md b/spaces/Sandiago21/speech-to-speech-translation-german/README.md
deleted file mode 100644
index 20898632889a3eb4a97e3a392698c210e8e42368..0000000000000000000000000000000000000000
--- a/spaces/Sandiago21/speech-to-speech-translation-german/README.md
+++ /dev/null
@@ -1,6 +0,0 @@
----
-title: speech-to-speech-translation-german
-app_file: app.py
-sdk: gradio
-sdk_version: 3.36.0
----
diff --git a/spaces/ServerX/PorcoDiaz/Applio-RVC-Fork/utils/README.md b/spaces/ServerX/PorcoDiaz/Applio-RVC-Fork/utils/README.md
deleted file mode 100644
index fb45a36b5909585aa964f2033762ee59b55526b0..0000000000000000000000000000000000000000
--- a/spaces/ServerX/PorcoDiaz/Applio-RVC-Fork/utils/README.md
+++ /dev/null
@@ -1,6 +0,0 @@
-# External Colab Code
-Code used to make Google Colab work correctly
-- Repo link: https://github.com/IAHispano/Applio-RVC-Fork/
-
-Thanks to https://github.com/kalomaze/externalcolabcode
-
diff --git a/spaces/SuYuanS/AudioCraft_Plus/tests/data/__init__.py b/spaces/SuYuanS/AudioCraft_Plus/tests/data/__init__.py
deleted file mode 100644
index 0952fcc3f57e34b3747962e9ebd6fc57aeea63fa..0000000000000000000000000000000000000000
--- a/spaces/SuYuanS/AudioCraft_Plus/tests/data/__init__.py
+++ /dev/null
@@ -1,5 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/anyio/_core/_fileio.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/anyio/_core/_fileio.py
deleted file mode 100644
index 35e8e8af6c11dd6690a8382af6a23d1391fff9dc..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/anyio/_core/_fileio.py
+++ /dev/null
@@ -1,603 +0,0 @@
-from __future__ import annotations
-
-import os
-import pathlib
-import sys
-from dataclasses import dataclass
-from functools import partial
-from os import PathLike
-from typing import (
- IO,
- TYPE_CHECKING,
- Any,
- AnyStr,
- AsyncIterator,
- Callable,
- Generic,
- Iterable,
- Iterator,
- Sequence,
- cast,
- overload,
-)
-
-from .. import to_thread
-from ..abc import AsyncResource
-
-if sys.version_info >= (3, 8):
- from typing import Final
-else:
- from typing_extensions import Final
-
-if TYPE_CHECKING:
- from _typeshed import OpenBinaryMode, OpenTextMode, ReadableBuffer, WriteableBuffer
-else:
- ReadableBuffer = OpenBinaryMode = OpenTextMode = WriteableBuffer = object
-
-
-class AsyncFile(AsyncResource, Generic[AnyStr]):
- """
- An asynchronous file object.
-
- This class wraps a standard file object and provides async friendly versions of the following
- blocking methods (where available on the original file object):
-
- * read
- * read1
- * readline
- * readlines
- * readinto
- * readinto1
- * write
- * writelines
- * truncate
- * seek
- * tell
- * flush
-
- All other methods are directly passed through.
-
- This class supports the asynchronous context manager protocol which closes the underlying file
- at the end of the context block.
-
- This class also supports asynchronous iteration::
-
- async with await open_file(...) as f:
- async for line in f:
- print(line)
- """
-
- def __init__(self, fp: IO[AnyStr]) -> None:
- self._fp: Any = fp
-
- def __getattr__(self, name: str) -> object:
- return getattr(self._fp, name)
-
- @property
- def wrapped(self) -> IO[AnyStr]:
- """The wrapped file object."""
- return self._fp
-
- async def __aiter__(self) -> AsyncIterator[AnyStr]:
- while True:
- line = await self.readline()
- if line:
- yield line
- else:
- break
-
- async def aclose(self) -> None:
- return await to_thread.run_sync(self._fp.close)
-
- async def read(self, size: int = -1) -> AnyStr:
- return await to_thread.run_sync(self._fp.read, size)
-
- async def read1(self: AsyncFile[bytes], size: int = -1) -> bytes:
- return await to_thread.run_sync(self._fp.read1, size)
-
- async def readline(self) -> AnyStr:
- return await to_thread.run_sync(self._fp.readline)
-
- async def readlines(self) -> list[AnyStr]:
- return await to_thread.run_sync(self._fp.readlines)
-
- async def readinto(self: AsyncFile[bytes], b: WriteableBuffer) -> bytes:
- return await to_thread.run_sync(self._fp.readinto, b)
-
- async def readinto1(self: AsyncFile[bytes], b: WriteableBuffer) -> bytes:
- return await to_thread.run_sync(self._fp.readinto1, b)
-
- @overload
- async def write(self: AsyncFile[bytes], b: ReadableBuffer) -> int:
- ...
-
- @overload
- async def write(self: AsyncFile[str], b: str) -> int:
- ...
-
- async def write(self, b: ReadableBuffer | str) -> int:
- return await to_thread.run_sync(self._fp.write, b)
-
- @overload
- async def writelines(
- self: AsyncFile[bytes], lines: Iterable[ReadableBuffer]
- ) -> None:
- ...
-
- @overload
- async def writelines(self: AsyncFile[str], lines: Iterable[str]) -> None:
- ...
-
- async def writelines(self, lines: Iterable[ReadableBuffer] | Iterable[str]) -> None:
- return await to_thread.run_sync(self._fp.writelines, lines)
-
- async def truncate(self, size: int | None = None) -> int:
- return await to_thread.run_sync(self._fp.truncate, size)
-
- async def seek(self, offset: int, whence: int | None = os.SEEK_SET) -> int:
- return await to_thread.run_sync(self._fp.seek, offset, whence)
-
- async def tell(self) -> int:
- return await to_thread.run_sync(self._fp.tell)
-
- async def flush(self) -> None:
- return await to_thread.run_sync(self._fp.flush)
-
-
-@overload
-async def open_file(
- file: str | PathLike[str] | int,
- mode: OpenBinaryMode,
- buffering: int = ...,
- encoding: str | None = ...,
- errors: str | None = ...,
- newline: str | None = ...,
- closefd: bool = ...,
- opener: Callable[[str, int], int] | None = ...,
-) -> AsyncFile[bytes]:
- ...
-
-
-@overload
-async def open_file(
- file: str | PathLike[str] | int,
- mode: OpenTextMode = ...,
- buffering: int = ...,
- encoding: str | None = ...,
- errors: str | None = ...,
- newline: str | None = ...,
- closefd: bool = ...,
- opener: Callable[[str, int], int] | None = ...,
-) -> AsyncFile[str]:
- ...
-
-
-async def open_file(
- file: str | PathLike[str] | int,
- mode: str = "r",
- buffering: int = -1,
- encoding: str | None = None,
- errors: str | None = None,
- newline: str | None = None,
- closefd: bool = True,
- opener: Callable[[str, int], int] | None = None,
-) -> AsyncFile[Any]:
- """
- Open a file asynchronously.
-
- The arguments are exactly the same as for the builtin :func:`open`.
-
- :return: an asynchronous file object
-
- """
- fp = await to_thread.run_sync(
- open, file, mode, buffering, encoding, errors, newline, closefd, opener
- )
- return AsyncFile(fp)
-
-
-def wrap_file(file: IO[AnyStr]) -> AsyncFile[AnyStr]:
- """
- Wrap an existing file as an asynchronous file.
-
- :param file: an existing file-like object
- :return: an asynchronous file object
-
- """
- return AsyncFile(file)
-
-
-@dataclass(eq=False)
-class _PathIterator(AsyncIterator["Path"]):
- iterator: Iterator[PathLike[str]]
-
- async def __anext__(self) -> Path:
- nextval = await to_thread.run_sync(next, self.iterator, None, cancellable=True)
- if nextval is None:
- raise StopAsyncIteration from None
-
- return Path(cast("PathLike[str]", nextval))
-
-
-class Path:
- """
- An asynchronous version of :class:`pathlib.Path`.
-
- This class cannot be substituted for :class:`pathlib.Path` or :class:`pathlib.PurePath`, but
- it is compatible with the :class:`os.PathLike` interface.
-
- It implements the Python 3.10 version of :class:`pathlib.Path` interface, except for the
- deprecated :meth:`~pathlib.Path.link_to` method.
-
- Any methods that do disk I/O need to be awaited on. These methods are:
-
- * :meth:`~pathlib.Path.absolute`
- * :meth:`~pathlib.Path.chmod`
- * :meth:`~pathlib.Path.cwd`
- * :meth:`~pathlib.Path.exists`
- * :meth:`~pathlib.Path.expanduser`
- * :meth:`~pathlib.Path.group`
- * :meth:`~pathlib.Path.hardlink_to`
- * :meth:`~pathlib.Path.home`
- * :meth:`~pathlib.Path.is_block_device`
- * :meth:`~pathlib.Path.is_char_device`
- * :meth:`~pathlib.Path.is_dir`
- * :meth:`~pathlib.Path.is_fifo`
- * :meth:`~pathlib.Path.is_file`
- * :meth:`~pathlib.Path.is_mount`
- * :meth:`~pathlib.Path.lchmod`
- * :meth:`~pathlib.Path.lstat`
- * :meth:`~pathlib.Path.mkdir`
- * :meth:`~pathlib.Path.open`
- * :meth:`~pathlib.Path.owner`
- * :meth:`~pathlib.Path.read_bytes`
- * :meth:`~pathlib.Path.read_text`
- * :meth:`~pathlib.Path.readlink`
- * :meth:`~pathlib.Path.rename`
- * :meth:`~pathlib.Path.replace`
- * :meth:`~pathlib.Path.rmdir`
- * :meth:`~pathlib.Path.samefile`
- * :meth:`~pathlib.Path.stat`
- * :meth:`~pathlib.Path.touch`
- * :meth:`~pathlib.Path.unlink`
- * :meth:`~pathlib.Path.write_bytes`
- * :meth:`~pathlib.Path.write_text`
-
- Additionally, the following methods return an async iterator yielding :class:`~.Path` objects:
-
- * :meth:`~pathlib.Path.glob`
- * :meth:`~pathlib.Path.iterdir`
- * :meth:`~pathlib.Path.rglob`
- """
-
- __slots__ = "_path", "__weakref__"
-
- __weakref__: Any
-
- def __init__(self, *args: str | PathLike[str]) -> None:
- self._path: Final[pathlib.Path] = pathlib.Path(*args)
-
- def __fspath__(self) -> str:
- return self._path.__fspath__()
-
- def __str__(self) -> str:
- return self._path.__str__()
-
- def __repr__(self) -> str:
- return f"{self.__class__.__name__}({self.as_posix()!r})"
-
- def __bytes__(self) -> bytes:
- return self._path.__bytes__()
-
- def __hash__(self) -> int:
- return self._path.__hash__()
-
- def __eq__(self, other: object) -> bool:
- target = other._path if isinstance(other, Path) else other
- return self._path.__eq__(target)
-
- def __lt__(self, other: Path) -> bool:
- target = other._path if isinstance(other, Path) else other
- return self._path.__lt__(target)
-
- def __le__(self, other: Path) -> bool:
- target = other._path if isinstance(other, Path) else other
- return self._path.__le__(target)
-
- def __gt__(self, other: Path) -> bool:
- target = other._path if isinstance(other, Path) else other
- return self._path.__gt__(target)
-
- def __ge__(self, other: Path) -> bool:
- target = other._path if isinstance(other, Path) else other
- return self._path.__ge__(target)
-
- def __truediv__(self, other: Any) -> Path:
- return Path(self._path / other)
-
- def __rtruediv__(self, other: Any) -> Path:
- return Path(other) / self
-
- @property
- def parts(self) -> tuple[str, ...]:
- return self._path.parts
-
- @property
- def drive(self) -> str:
- return self._path.drive
-
- @property
- def root(self) -> str:
- return self._path.root
-
- @property
- def anchor(self) -> str:
- return self._path.anchor
-
- @property
- def parents(self) -> Sequence[Path]:
- return tuple(Path(p) for p in self._path.parents)
-
- @property
- def parent(self) -> Path:
- return Path(self._path.parent)
-
- @property
- def name(self) -> str:
- return self._path.name
-
- @property
- def suffix(self) -> str:
- return self._path.suffix
-
- @property
- def suffixes(self) -> list[str]:
- return self._path.suffixes
-
- @property
- def stem(self) -> str:
- return self._path.stem
-
- async def absolute(self) -> Path:
- path = await to_thread.run_sync(self._path.absolute)
- return Path(path)
-
- def as_posix(self) -> str:
- return self._path.as_posix()
-
- def as_uri(self) -> str:
- return self._path.as_uri()
-
- def match(self, path_pattern: str) -> bool:
- return self._path.match(path_pattern)
-
- def is_relative_to(self, *other: str | PathLike[str]) -> bool:
- try:
- self.relative_to(*other)
- return True
- except ValueError:
- return False
-
- async def chmod(self, mode: int, *, follow_symlinks: bool = True) -> None:
- func = partial(os.chmod, follow_symlinks=follow_symlinks)
- return await to_thread.run_sync(func, self._path, mode)
-
- @classmethod
- async def cwd(cls) -> Path:
- path = await to_thread.run_sync(pathlib.Path.cwd)
- return cls(path)
-
- async def exists(self) -> bool:
- return await to_thread.run_sync(self._path.exists, cancellable=True)
-
- async def expanduser(self) -> Path:
- return Path(await to_thread.run_sync(self._path.expanduser, cancellable=True))
-
- def glob(self, pattern: str) -> AsyncIterator[Path]:
- gen = self._path.glob(pattern)
- return _PathIterator(gen)
-
- async def group(self) -> str:
- return await to_thread.run_sync(self._path.group, cancellable=True)
-
- async def hardlink_to(self, target: str | pathlib.Path | Path) -> None:
- if isinstance(target, Path):
- target = target._path
-
- await to_thread.run_sync(os.link, target, self)
-
- @classmethod
- async def home(cls) -> Path:
- home_path = await to_thread.run_sync(pathlib.Path.home)
- return cls(home_path)
-
- def is_absolute(self) -> bool:
- return self._path.is_absolute()
-
- async def is_block_device(self) -> bool:
- return await to_thread.run_sync(self._path.is_block_device, cancellable=True)
-
- async def is_char_device(self) -> bool:
- return await to_thread.run_sync(self._path.is_char_device, cancellable=True)
-
- async def is_dir(self) -> bool:
- return await to_thread.run_sync(self._path.is_dir, cancellable=True)
-
- async def is_fifo(self) -> bool:
- return await to_thread.run_sync(self._path.is_fifo, cancellable=True)
-
- async def is_file(self) -> bool:
- return await to_thread.run_sync(self._path.is_file, cancellable=True)
-
- async def is_mount(self) -> bool:
- return await to_thread.run_sync(os.path.ismount, self._path, cancellable=True)
-
- def is_reserved(self) -> bool:
- return self._path.is_reserved()
-
- async def is_socket(self) -> bool:
- return await to_thread.run_sync(self._path.is_socket, cancellable=True)
-
- async def is_symlink(self) -> bool:
- return await to_thread.run_sync(self._path.is_symlink, cancellable=True)
-
- def iterdir(self) -> AsyncIterator[Path]:
- gen = self._path.iterdir()
- return _PathIterator(gen)
-
- def joinpath(self, *args: str | PathLike[str]) -> Path:
- return Path(self._path.joinpath(*args))
-
- async def lchmod(self, mode: int) -> None:
- await to_thread.run_sync(self._path.lchmod, mode)
-
- async def lstat(self) -> os.stat_result:
- return await to_thread.run_sync(self._path.lstat, cancellable=True)
-
- async def mkdir(
- self, mode: int = 0o777, parents: bool = False, exist_ok: bool = False
- ) -> None:
- await to_thread.run_sync(self._path.mkdir, mode, parents, exist_ok)
-
- @overload
- async def open(
- self,
- mode: OpenBinaryMode,
- buffering: int = ...,
- encoding: str | None = ...,
- errors: str | None = ...,
- newline: str | None = ...,
- ) -> AsyncFile[bytes]:
- ...
-
- @overload
- async def open(
- self,
- mode: OpenTextMode = ...,
- buffering: int = ...,
- encoding: str | None = ...,
- errors: str | None = ...,
- newline: str | None = ...,
- ) -> AsyncFile[str]:
- ...
-
- async def open(
- self,
- mode: str = "r",
- buffering: int = -1,
- encoding: str | None = None,
- errors: str | None = None,
- newline: str | None = None,
- ) -> AsyncFile[Any]:
- fp = await to_thread.run_sync(
- self._path.open, mode, buffering, encoding, errors, newline
- )
- return AsyncFile(fp)
-
- async def owner(self) -> str:
- return await to_thread.run_sync(self._path.owner, cancellable=True)
-
- async def read_bytes(self) -> bytes:
- return await to_thread.run_sync(self._path.read_bytes)
-
- async def read_text(
- self, encoding: str | None = None, errors: str | None = None
- ) -> str:
- return await to_thread.run_sync(self._path.read_text, encoding, errors)
-
- def relative_to(self, *other: str | PathLike[str]) -> Path:
- return Path(self._path.relative_to(*other))
-
- async def readlink(self) -> Path:
- target = await to_thread.run_sync(os.readlink, self._path)
- return Path(cast(str, target))
-
- async def rename(self, target: str | pathlib.PurePath | Path) -> Path:
- if isinstance(target, Path):
- target = target._path
-
- await to_thread.run_sync(self._path.rename, target)
- return Path(target)
-
- async def replace(self, target: str | pathlib.PurePath | Path) -> Path:
- if isinstance(target, Path):
- target = target._path
-
- await to_thread.run_sync(self._path.replace, target)
- return Path(target)
-
- async def resolve(self, strict: bool = False) -> Path:
- func = partial(self._path.resolve, strict=strict)
- return Path(await to_thread.run_sync(func, cancellable=True))
-
- def rglob(self, pattern: str) -> AsyncIterator[Path]:
- gen = self._path.rglob(pattern)
- return _PathIterator(gen)
-
- async def rmdir(self) -> None:
- await to_thread.run_sync(self._path.rmdir)
-
- async def samefile(
- self, other_path: str | bytes | int | pathlib.Path | Path
- ) -> bool:
- if isinstance(other_path, Path):
- other_path = other_path._path
-
- return await to_thread.run_sync(
- self._path.samefile, other_path, cancellable=True
- )
-
- async def stat(self, *, follow_symlinks: bool = True) -> os.stat_result:
- func = partial(os.stat, follow_symlinks=follow_symlinks)
- return await to_thread.run_sync(func, self._path, cancellable=True)
-
- async def symlink_to(
- self,
- target: str | pathlib.Path | Path,
- target_is_directory: bool = False,
- ) -> None:
- if isinstance(target, Path):
- target = target._path
-
- await to_thread.run_sync(self._path.symlink_to, target, target_is_directory)
-
- async def touch(self, mode: int = 0o666, exist_ok: bool = True) -> None:
- await to_thread.run_sync(self._path.touch, mode, exist_ok)
-
- async def unlink(self, missing_ok: bool = False) -> None:
- try:
- await to_thread.run_sync(self._path.unlink)
- except FileNotFoundError:
- if not missing_ok:
- raise
-
- def with_name(self, name: str) -> Path:
- return Path(self._path.with_name(name))
-
- def with_stem(self, stem: str) -> Path:
- return Path(self._path.with_name(stem + self._path.suffix))
-
- def with_suffix(self, suffix: str) -> Path:
- return Path(self._path.with_suffix(suffix))
-
- async def write_bytes(self, data: bytes) -> int:
- return await to_thread.run_sync(self._path.write_bytes, data)
-
- async def write_text(
- self,
- data: str,
- encoding: str | None = None,
- errors: str | None = None,
- newline: str | None = None,
- ) -> int:
- # Path.write_text() does not support the "newline" parameter before Python 3.10
- def sync_write_text() -> int:
- with self._path.open(
- "w", encoding=encoding, errors=errors, newline=newline
- ) as fp:
- return fp.write(data)
-
- return await to_thread.run_sync(sync_write_text)
-
-
-PathLike.register(Path)
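The module above is a vendored copy of anyio's file-I/O layer, so its behaviour can be exercised through the public anyio API. A minimal sketch, assuming the `anyio` package is installed:

```python
import anyio


async def main() -> None:
    path = anyio.Path("example.txt")
    await path.write_text("hello\nworld\n")      # blocking I/O runs in a worker thread

    async with await anyio.open_file(path, "r") as f:
        async for line in f:                     # AsyncFile supports async iteration
            print(line.rstrip())

    print(await path.exists())                   # True
    await path.unlink()


anyio.run(main)
```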
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydev_ipython/qt.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydev_ipython/qt.py
deleted file mode 100644
index 222c81b91fcee7315ffae0ba61f3660653f58a5d..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydev_ipython/qt.py
+++ /dev/null
@@ -1,23 +0,0 @@
-""" A Qt API selector that can be used to switch between PyQt and PySide.
-
-This uses the ETS 4.0 selection pattern of:
-PySide first, PyQt with API v2 second.
-
-Do not use this if you need PyQt with the old QString/QVariant API.
-"""
-
-import os
-
-from pydev_ipython.qt_loaders import (load_qt, QT_API_PYSIDE,
- QT_API_PYQT, QT_API_PYQT5)
-
-QT_API = os.environ.get('QT_API', None)
-if QT_API not in [QT_API_PYSIDE, QT_API_PYQT, QT_API_PYQT5, None]:
-    raise RuntimeError("Invalid Qt API %r, valid values are: %r, %r, %r" %
- (QT_API, QT_API_PYSIDE, QT_API_PYQT, QT_API_PYQT5))
-if QT_API is None:
- api_opts = [QT_API_PYSIDE, QT_API_PYQT, QT_API_PYQT5]
-else:
- api_opts = [QT_API]
-
-QtCore, QtGui, QtSvg, QT_API = load_qt(api_opts)
diff --git a/spaces/Suniilkumaar/MusicGen-updated/audiocraft/modules/conditioners.py b/spaces/Suniilkumaar/MusicGen-updated/audiocraft/modules/conditioners.py
deleted file mode 100644
index 82792316024b88d4c5c38b0a28f443627771d509..0000000000000000000000000000000000000000
--- a/spaces/Suniilkumaar/MusicGen-updated/audiocraft/modules/conditioners.py
+++ /dev/null
@@ -1,990 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from collections import defaultdict
-from copy import deepcopy
-from dataclasses import dataclass, field
-from itertools import chain
-import logging
-import math
-import random
-import re
-import typing as tp
-import warnings
-
-from einops import rearrange
-from num2words import num2words
-import spacy
-from transformers import T5EncoderModel, T5Tokenizer # type: ignore
-import torchaudio
-import torch
-from torch import nn
-from torch import Tensor
-import torch.nn.functional as F
-from torch.nn.utils.rnn import pad_sequence
-
-from .streaming import StreamingModule
-from .transformer import create_sin_embedding
-from ..data.audio_dataset import SegmentInfo
-from ..utils.autocast import TorchAutocast
-from ..utils.utils import hash_trick, length_to_mask, collate
-
-
-logger = logging.getLogger(__name__)
-TextCondition = tp.Optional[str] # a text condition can be a string or None (if doesn't exist)
-ConditionType = tp.Tuple[Tensor, Tensor] # condition, mask
-
-
-class WavCondition(tp.NamedTuple):
- wav: Tensor
- length: Tensor
- path: tp.List[tp.Optional[str]] = []
-
-
-def nullify_condition(condition: ConditionType, dim: int = 1):
- """This function transforms an input condition to a null condition.
-    This is done by converting it to a single zero vector, similarly
-    to how it is done inside WhiteSpaceTokenizer and NoopTokenizer.
-
- Args:
- condition (ConditionType): a tuple of condition and mask (tp.Tuple[Tensor, Tensor])
- dim (int): the dimension that will be truncated (should be the time dimension)
- WARNING!: dim should not be the batch dimension!
- Returns:
- ConditionType: a tuple of null condition and mask
- """
- assert dim != 0, "dim cannot be the batch dimension!"
- assert type(condition) == tuple and \
- type(condition[0]) == Tensor and \
- type(condition[1]) == Tensor, "'nullify_condition' got an unexpected input type!"
- cond, mask = condition
- B = cond.shape[0]
- last_dim = cond.dim() - 1
- out = cond.transpose(dim, last_dim)
- out = 0. * out[..., :1]
- out = out.transpose(dim, last_dim)
- mask = torch.zeros((B, 1), device=out.device).int()
- assert cond.dim() == out.dim()
- return out, mask
-
-
-def nullify_wav(wav: Tensor) -> WavCondition:
- """Create a nullified WavCondition from a wav tensor with appropriate shape.
-
- Args:
- wav (Tensor): tensor of shape [B, T]
- Returns:
- WavCondition: wav condition with nullified wav.
- """
- null_wav, _ = nullify_condition((wav, torch.zeros_like(wav)), dim=wav.dim() - 1)
- return WavCondition(
- wav=null_wav,
- length=torch.tensor([0] * wav.shape[0], device=wav.device),
- path=['null_wav'] * wav.shape[0]
- )
-
-
-@dataclass
-class ConditioningAttributes:
- text: tp.Dict[str, tp.Optional[str]] = field(default_factory=dict)
- wav: tp.Dict[str, WavCondition] = field(default_factory=dict)
-
- def __getitem__(self, item):
- return getattr(self, item)
-
- @property
- def text_attributes(self):
- return self.text.keys()
-
- @property
- def wav_attributes(self):
- return self.wav.keys()
-
- @property
- def attributes(self):
- return {"text": self.text_attributes, "wav": self.wav_attributes}
-
- def to_flat_dict(self):
- return {
- **{f"text.{k}": v for k, v in self.text.items()},
- **{f"wav.{k}": v for k, v in self.wav.items()},
- }
-
- @classmethod
- def from_flat_dict(cls, x):
- out = cls()
- for k, v in x.items():
- kind, att = k.split(".")
- out[kind][att] = v
- return out
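As a quick illustration of the flat-dict round trip (a sketch assuming this module is importable, for example via the `audiocraft` package):

```python
from audiocraft.modules.conditioners import ConditioningAttributes

attrs = ConditioningAttributes(text={"description": "calm piano"})
flat = attrs.to_flat_dict()                      # {'text.description': 'calm piano'}
restored = ConditioningAttributes.from_flat_dict(flat)
assert restored.text == {"description": "calm piano"}
```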
-
-
-class SegmentWithAttributes(SegmentInfo):
- """Base class for all dataclasses that are used for conditioning.
- All child classes should implement `to_condition_attributes` that converts
- the existing attributes to a dataclass of type ConditioningAttributes.
- """
- def to_condition_attributes(self) -> ConditioningAttributes:
- raise NotImplementedError()
-
-
-class Tokenizer:
- """Base class for all tokenizers
-    (in case we want to introduce more advanced tokenizers in the future).
- """
- def __call__(self, texts: tp.List[tp.Optional[str]]) -> tp.Tuple[Tensor, Tensor]:
- raise NotImplementedError()
-
-
-class WhiteSpaceTokenizer(Tokenizer):
- """This tokenizer should be used for natural language descriptions.
- For example:
- ["he didn't, know he's going home.", 'shorter sentence'] =>
- [[78, 62, 31, 4, 78, 25, 19, 34],
- [59, 77, 0, 0, 0, 0, 0, 0]]
- """
- PUNCTUATIONS = "?:!.,;"
-
- def __init__(self, n_bins: int, pad_idx: int = 0, language: str = "en_core_web_sm",
- lemma: bool = True, stopwords: bool = True) -> None:
- self.n_bins = n_bins
- self.pad_idx = pad_idx
- self.lemma = lemma
- self.stopwords = stopwords
- try:
- self.nlp = spacy.load(language)
- except IOError:
- spacy.cli.download(language) # type: ignore
- self.nlp = spacy.load(language)
-
- @tp.no_type_check
- def __call__(
- self,
- texts: tp.List[tp.Optional[str]],
- return_text: bool = False
- ) -> tp.Tuple[Tensor, Tensor]:
- """Take a list of strings and convert them to a tensor of indices.
-
- Args:
- texts (tp.List[str]): List of strings.
- return_text (bool, optional): Whether to return text as additional tuple item. Defaults to False.
- Returns:
- tp.Tuple[Tensor, Tensor]:
- - Indices of words in the LUT.
- - And a mask indicating where the padding tokens are
- """
- output, lengths = [], []
- texts = deepcopy(texts)
- for i, text in enumerate(texts):
- # if current sample doesn't have a certain attribute, replace with pad token
- if text is None:
- output.append(Tensor([self.pad_idx]))
- lengths.append(0)
- continue
-
- # convert numbers to words
- text = re.sub(r"(\d+)", lambda x: num2words(int(x.group(0))), text) # type: ignore
- # normalize text
- text = self.nlp(text) # type: ignore
- # remove stopwords
- if self.stopwords:
- text = [w for w in text if not w.is_stop] # type: ignore
- # remove punctuations
- text = [w for w in text if w.text not in self.PUNCTUATIONS] # type: ignore
- # lemmatize if needed
- text = [getattr(t, "lemma_" if self.lemma else "text") for t in text] # type: ignore
-
- texts[i] = " ".join(text)
- lengths.append(len(text))
- # convert to tensor
- tokens = Tensor([hash_trick(w, self.n_bins) for w in text])
- output.append(tokens)
-
- mask = length_to_mask(torch.IntTensor(lengths)).int()
- padded_output = pad_sequence(output, padding_value=self.pad_idx).int().t()
- if return_text:
- return padded_output, mask, texts # type: ignore
- return padded_output, mask
-
-
-class NoopTokenizer(Tokenizer):
- """This tokenizer should be used for global conditioners such as: artist, genre, key, etc.
- The difference between this and WhiteSpaceTokenizer is that NoopTokenizer does not split
-    strings, so "Jeff Buckley" will get its own index, whereas WhiteSpaceTokenizer will
-    split it into ["Jeff", "Buckley"] and return an index per word.
-
- For example:
- ["Queen", "ABBA", "Jeff Buckley"] => [43, 55, 101]
- ["Metal", "Rock", "Classical"] => [0, 223, 51]
- """
- def __init__(self, n_bins: int, pad_idx: int = 0):
- self.n_bins = n_bins
- self.pad_idx = pad_idx
-
- def __call__(self, texts: tp.List[tp.Optional[str]]) -> tp.Tuple[Tensor, Tensor]:
- output, lengths = [], []
- for text in texts:
- # if current sample doesn't have a certain attribute, replace with pad token
- if text is None:
- output.append(self.pad_idx)
- lengths.append(0)
- else:
- output.append(hash_trick(text, self.n_bins))
- lengths.append(1)
-
- tokens = torch.LongTensor(output).unsqueeze(1)
- mask = length_to_mask(torch.IntTensor(lengths)).int()
- return tokens, mask
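To make the contrast between the two tokenizers concrete, here is a rough usage sketch; it assumes this module is importable (e.g. via the `audiocraft` package) and that spaCy can fetch `en_core_web_sm` for the whitespace tokenizer.

```python
from audiocraft.modules.conditioners import NoopTokenizer, WhiteSpaceTokenizer

noop = NoopTokenizer(n_bins=1024)
tokens, mask = noop(["Jeff Buckley", "Queen", None])
print(tokens.shape, mask.tolist())   # one index per entry; the None entry is masked out

ws = WhiteSpaceTokenizer(n_bins=1024)
tokens, mask = ws(["he didn't, know he's going home.", "shorter sentence"])
print(tokens.shape, mask.shape)      # one index per word, padded to the longest entry
```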
-
-
-class BaseConditioner(nn.Module):
- """Base model for all conditioner modules. We allow the output dim to be different
- than the hidden dim for two reasons: 1) keep our LUTs small when the vocab is large;
- 2) make all condition dims consistent.
-
- Args:
- dim (int): Hidden dim of the model (text-encoder/LUT).
- output_dim (int): Output dim of the conditioner.
- """
- def __init__(self, dim, output_dim):
- super().__init__()
- self.dim = dim
- self.output_dim = output_dim
- self.output_proj = nn.Linear(dim, output_dim)
-
- def tokenize(self, *args, **kwargs) -> tp.Any:
- """Should be any part of the processing that will lead to a synchronization
- point, e.g. BPE tokenization with transfer to the GPU.
-
- The returned value will be saved and return later when calling forward().
- """
- raise NotImplementedError()
-
- def forward(self, inputs: tp.Any) -> ConditionType:
- """Gets input that should be used as conditioning (e.g, genre, description or a waveform).
- Outputs a ConditionType, after the input data was embedded as a dense vector.
-
- Returns:
- ConditionType:
- - A tensor of size [B, T, D] where B is the batch size, T is the length of the
- output embedding and D is the dimension of the embedding.
- - And a mask indicating where the padding tokens.
- """
- raise NotImplementedError()
-
-
-class TextConditioner(BaseConditioner):
- ...
-
-
-class LUTConditioner(TextConditioner):
- """Lookup table TextConditioner.
-
- Args:
- n_bins (int): Number of bins.
- dim (int): Hidden dim of the model (text-encoder/LUT).
- output_dim (int): Output dim of the conditioner.
- tokenizer (str): Name of the tokenizer.
- pad_idx (int, optional): Index for padding token. Defaults to 0.
- """
- def __init__(self, n_bins: int, dim: int, output_dim: int, tokenizer: str, pad_idx: int = 0):
- super().__init__(dim, output_dim)
- self.embed = nn.Embedding(n_bins, dim)
- self.tokenizer: Tokenizer
- if tokenizer == "whitespace":
- self.tokenizer = WhiteSpaceTokenizer(n_bins, pad_idx=pad_idx)
- elif tokenizer == "noop":
- self.tokenizer = NoopTokenizer(n_bins, pad_idx=pad_idx)
- else:
- raise ValueError(f"unrecognized tokenizer `{tokenizer}`.")
-
- def tokenize(self, x: tp.List[tp.Optional[str]]) -> tp.Tuple[torch.Tensor, torch.Tensor]:
- device = self.embed.weight.device
- tokens, mask = self.tokenizer(x)
- tokens, mask = tokens.to(device), mask.to(device)
- return tokens, mask
-
- def forward(self, inputs: tp.Tuple[torch.Tensor, torch.Tensor]) -> ConditionType:
- tokens, mask = inputs
- embeds = self.embed(tokens)
- embeds = self.output_proj(embeds)
- embeds = (embeds * mask.unsqueeze(-1))
- return embeds, mask
-
-
-class T5Conditioner(TextConditioner):
- """T5-based TextConditioner.
-
- Args:
- name (str): Name of the T5 model.
- output_dim (int): Output dim of the conditioner.
- finetune (bool): Whether to fine-tune T5 at train time.
- device (str): Device for T5 Conditioner.
- autocast_dtype (tp.Optional[str], optional): Autocast dtype.
- word_dropout (float, optional): Word dropout probability.
- normalize_text (bool, optional): Whether to apply text normalization.
- """
- MODELS = ["t5-small", "t5-base", "t5-large", "t5-3b", "t5-11b",
- "google/flan-t5-small", "google/flan-t5-base", "google/flan-t5-large",
- "google/flan-t5-xl", "google/flan-t5-xxl"]
- MODELS_DIMS = {
- "t5-small": 512,
- "t5-base": 768,
- "t5-large": 1024,
- "t5-3b": 1024,
- "t5-11b": 1024,
- "google/flan-t5-small": 512,
- "google/flan-t5-base": 768,
- "google/flan-t5-large": 1024,
- "google/flan-t5-3b": 1024,
- "google/flan-t5-11b": 1024,
- }
-
- def __init__(self, name: str, output_dim: int, finetune: bool, device: str,
- autocast_dtype: tp.Optional[str] = 'float32', word_dropout: float = 0.,
- normalize_text: bool = False):
- assert name in self.MODELS, f"unrecognized t5 model name (should in {self.MODELS})"
- super().__init__(self.MODELS_DIMS[name], output_dim)
- self.device = device
- self.name = name
- self.finetune = finetune
- self.word_dropout = word_dropout
-
- if autocast_dtype is None or self.device == 'cpu':
- self.autocast = TorchAutocast(enabled=False)
- if self.device != 'cpu':
- logger.warning("T5 has no autocast, this might lead to NaN")
- else:
- dtype = getattr(torch, autocast_dtype)
- assert isinstance(dtype, torch.dtype)
- logger.info(f"T5 will be evaluated with autocast as {autocast_dtype}")
- self.autocast = TorchAutocast(enabled=True, device_type=self.device, dtype=dtype)
- # Let's disable logging temporarily because T5 will vomit some errors otherwise.
- # thanks https://gist.github.com/simon-weber/7853144
- previous_level = logging.root.manager.disable
- logging.disable(logging.ERROR)
- with warnings.catch_warnings():
- warnings.simplefilter("ignore")
- try:
- self.t5_tokenizer = T5Tokenizer.from_pretrained(name)
- t5 = T5EncoderModel.from_pretrained(name).train(mode=finetune)
- finally:
- logging.disable(previous_level)
- if finetune:
- self.t5 = t5
- else:
- # this makes sure that the t5 models is not part
- # of the saved checkpoint
- self.__dict__["t5"] = t5.to(device)
-
- self.normalize_text = normalize_text
- if normalize_text:
- self.text_normalizer = WhiteSpaceTokenizer(1, lemma=True, stopwords=True)
-
- def tokenize(self, x: tp.List[tp.Optional[str]]) -> tp.Dict[str, torch.Tensor]:
- # if current sample doesn't have a certain attribute, replace with empty string
- entries: tp.List[str] = [xi if xi is not None else "" for xi in x]
- if self.normalize_text:
- _, _, entries = self.text_normalizer(entries, return_text=True)
- if self.word_dropout > 0. and self.training:
- new_entries = []
- for entry in entries:
- words = [word for word in entry.split(" ") if random.random() >= self.word_dropout]
- new_entries.append(" ".join(words))
- entries = new_entries
-
- empty_idx = torch.LongTensor([i for i, xi in enumerate(entries) if xi == ""])
-
- inputs = self.t5_tokenizer(entries, return_tensors="pt", padding=True).to(self.device)
- mask = inputs["attention_mask"]
-        mask[empty_idx, :] = 0  # zero-out indices where the input is non-existent
- return inputs
-
- def forward(self, inputs: tp.Dict[str, torch.Tensor]) -> ConditionType:
- mask = inputs["attention_mask"]
- with torch.set_grad_enabled(self.finetune), self.autocast:
- embeds = self.t5(**inputs).last_hidden_state
- embeds = self.output_proj(embeds.to(self.output_proj.weight))
- embeds = (embeds * mask.unsqueeze(-1))
- return embeds, mask
-
-
-class WaveformConditioner(BaseConditioner):
- """Base class for all conditioners that take a waveform as input.
- Classes that inherit must implement `_get_wav_embedding` that outputs
- a continuous tensor, and `_downsampling_factor` that returns the down-sampling
- factor of the embedding model.
-
- Args:
- dim (int): The internal representation dimension.
- output_dim (int): Output dimension.
- device (tp.Union[torch.device, str]): Device.
- """
- def __init__(self, dim: int, output_dim: int, device: tp.Union[torch.device, str]):
- super().__init__(dim, output_dim)
- self.device = device
-
- def tokenize(self, wav_length: WavCondition) -> WavCondition:
- wav, length, path = wav_length
- assert length is not None
- return WavCondition(wav.to(self.device), length.to(self.device), path)
-
- def _get_wav_embedding(self, wav: Tensor) -> Tensor:
- """Gets as input a wav and returns a dense vector of conditions."""
- raise NotImplementedError()
-
- def _downsampling_factor(self):
- """Returns the downsampling factor of the embedding model."""
- raise NotImplementedError()
-
- def forward(self, inputs: WavCondition) -> ConditionType:
- """
- Args:
- input (WavCondition): Tuple of (waveform, lengths).
- Returns:
- ConditionType: Dense vector representing the conditioning along with its' mask.
- """
- wav, lengths, path = inputs
- with torch.no_grad():
- embeds = self._get_wav_embedding(wav)
- embeds = embeds.to(self.output_proj.weight)
- embeds = self.output_proj(embeds)
-
- if lengths is not None:
- lengths = lengths / self._downsampling_factor()
- mask = length_to_mask(lengths, max_len=embeds.shape[1]).int() # type: ignore
- else:
- mask = torch.ones_like(embeds)
- embeds = (embeds * mask.unsqueeze(2).to(self.device))
-
- return embeds, mask
-
-
-class ChromaStemConditioner(WaveformConditioner):
- """Chroma conditioner that uses DEMUCS to first filter out drums and bass. The is followed by
- the insight the drums and bass often dominate the chroma, leading to the chroma not containing the
- information about melody.
-
- Args:
- output_dim (int): Output dimension for the conditioner.
- sample_rate (int): Sample rate for the chroma extractor.
- n_chroma (int): Number of chroma for the chroma extractor.
- radix2_exp (int): Radix2 exponent for the chroma extractor.
- duration (float): Duration used during training. This is later used for correct padding
- in case we are using chroma as prefix.
- match_len_on_eval (bool, optional): If True then all chromas are padded to the training
- duration. Defaults to False.
-        eval_wavs (str, optional): Path to a JSON file with waveforms; these waveforms are used as
- conditions during eval (for cases where we don't want to leak test conditions like MusicCaps).
- Defaults to None.
- n_eval_wavs (int, optional): Limits the number of waveforms used for conditioning. Defaults to 0.
- device (tp.Union[torch.device, str], optional): Device for the conditioner.
- **kwargs: Additional parameters for the chroma extractor.
- """
- def __init__(self, output_dim: int, sample_rate: int, n_chroma: int, radix2_exp: int,
- duration: float, match_len_on_eval: bool = True, eval_wavs: tp.Optional[str] = None,
- n_eval_wavs: int = 0, device: tp.Union[torch.device, str] = "cpu", **kwargs):
- from demucs import pretrained
- super().__init__(dim=n_chroma, output_dim=output_dim, device=device)
- self.autocast = TorchAutocast(enabled=device != "cpu", device_type=self.device, dtype=torch.float32)
- self.sample_rate = sample_rate
- self.match_len_on_eval = match_len_on_eval
- self.duration = duration
- self.__dict__["demucs"] = pretrained.get_model('htdemucs').to(device)
- self.stem2idx = {'drums': 0, 'bass': 1, 'other': 2, 'vocal': 3}
- self.stem_idx = torch.LongTensor([self.stem2idx['vocal'], self.stem2idx['other']]).to(device)
- self.chroma = ChromaExtractor(sample_rate=sample_rate, n_chroma=n_chroma, radix2_exp=radix2_exp,
- device=device, **kwargs)
- self.chroma_len = self._get_chroma_len()
-
- def _downsampling_factor(self):
- return self.chroma.winhop
-
- def _get_chroma_len(self):
- """Get length of chroma during training"""
- dummy_wav = torch.zeros((1, self.sample_rate * self.duration), device=self.device)
- dummy_chr = self.chroma(dummy_wav)
- return dummy_chr.shape[1]
-
- @torch.no_grad()
- def _get_filtered_wav(self, wav):
- from demucs.apply import apply_model
- from demucs.audio import convert_audio
- with self.autocast:
- wav = convert_audio(wav, self.sample_rate, self.demucs.samplerate, self.demucs.audio_channels)
- stems = apply_model(self.demucs, wav, device=self.device)
- stems = stems[:, self.stem_idx] # extract stem
- stems = stems.sum(1) # merge extracted stems
- stems = stems.mean(1, keepdim=True) # mono
- stems = convert_audio(stems, self.demucs.samplerate, self.sample_rate, 1)
- return stems
-
- @torch.no_grad()
- def _get_wav_embedding(self, wav):
- # avoid 0-size tensors when we are working with null conds
- if wav.shape[-1] == 1:
- return self.chroma(wav)
- stems = self._get_filtered_wav(wav)
- chroma = self.chroma(stems)
-
- if self.match_len_on_eval:
- b, t, c = chroma.shape
- if t > self.chroma_len:
- chroma = chroma[:, :self.chroma_len]
- logger.debug(f'chroma was truncated! ({t} -> {chroma.shape[1]})')
- elif t < self.chroma_len:
- # chroma = F.pad(chroma, (0, 0, 0, self.chroma_len - t))
- n_repeat = int(math.ceil(self.chroma_len / t))
- chroma = chroma.repeat(1, n_repeat, 1)
- chroma = chroma[:, :self.chroma_len]
-                logger.debug(f'chroma was repeated to match length! ({t} -> {chroma.shape[1]})')
- return chroma
-
-
-class ChromaExtractor(nn.Module):
- """Chroma extraction class, handles chroma extraction and quantization.
-
- Args:
- sample_rate (int): Sample rate.
- n_chroma (int): Number of chroma to consider.
- radix2_exp (int): Radix2 exponent.
- nfft (tp.Optional[int], optional): Number of FFT.
- winlen (tp.Optional[int], optional): Window length.
- winhop (tp.Optional[int], optional): Window hop size.
- argmax (bool, optional): Whether to use argmax. Defaults to False.
- norm (float, optional): Norm for chroma normalization. Defaults to inf.
- device (tp.Union[torch.device, str], optional): Device to use. Defaults to cpu.
- """
- def __init__(self, sample_rate: int, n_chroma: int = 12, radix2_exp: int = 12,
- nfft: tp.Optional[int] = None, winlen: tp.Optional[int] = None, winhop: tp.Optional[int] = None,
- argmax: bool = False, norm: float = torch.inf, device: tp.Union[torch.device, str] = "cpu"):
- super().__init__()
- from librosa import filters
- self.device = device
- self.autocast = TorchAutocast(enabled=device != "cpu", device_type=self.device, dtype=torch.float32)
- self.winlen = winlen or 2 ** radix2_exp
- self.nfft = nfft or self.winlen
- self.winhop = winhop or (self.winlen // 4)
- self.sr = sample_rate
- self.n_chroma = n_chroma
- self.norm = norm
- self.argmax = argmax
- self.window = torch.hann_window(self.winlen).to(device)
- self.fbanks = torch.from_numpy(filters.chroma(sr=sample_rate, n_fft=self.nfft, tuning=0,
- n_chroma=self.n_chroma)).to(device)
- self.spec = torchaudio.transforms.Spectrogram(n_fft=self.nfft, win_length=self.winlen,
- hop_length=self.winhop, power=2, center=True,
- pad=0, normalized=True).to(device)
-
- def forward(self, wav):
- with self.autocast:
- T = wav.shape[-1]
- # in case we are getting a wav that was dropped out (nullified)
- # make sure wav length is no less that nfft
- if T < self.nfft:
- pad = self.nfft - T
- r = 0 if pad % 2 == 0 else 1
- wav = F.pad(wav, (pad // 2, pad // 2 + r), 'constant', 0)
- assert wav.shape[-1] == self.nfft, f'expected len {self.nfft} but got {wav.shape[-1]}'
- spec = self.spec(wav).squeeze(1)
- raw_chroma = torch.einsum("cf,...ft->...ct", self.fbanks, spec)
- norm_chroma = torch.nn.functional.normalize(raw_chroma, p=self.norm, dim=-2, eps=1e-6)
- norm_chroma = rearrange(norm_chroma, "b d t -> b t d")
-
- if self.argmax:
- idx = norm_chroma.argmax(-1, keepdims=True)
- norm_chroma[:] = 0
- norm_chroma.scatter_(dim=-1, index=idx, value=1)
-
- return norm_chroma
-
-
-def dropout_condition(sample: ConditioningAttributes, condition_type: str, condition: str):
- """Utility function for nullifying an attribute inside an ConditioningAttributes object.
- If the condition is of type "wav", then nullify it using "nullify_condition".
-    If the condition is of any other type, set its value to None.
- Works in-place.
- """
- if condition_type not in ["text", "wav"]:
- raise ValueError(
- "dropout_condition got an unexpected condition type!"
- f" expected 'wav' or 'text' but got '{condition_type}'"
- )
-
- if condition not in getattr(sample, condition_type):
- raise ValueError(
- "dropout_condition received an unexpected condition!"
- f" expected wav={sample.wav.keys()} and text={sample.text.keys()}"
-            f" but got '{condition}' of type '{condition_type}'!"
- )
-
- if condition_type == "wav":
- wav, length, path = sample.wav[condition]
- sample.wav[condition] = nullify_wav(wav)
- else:
- sample.text[condition] = None
-
- return sample
-
-
-class DropoutModule(nn.Module):
- """Base class for all dropout modules."""
- def __init__(self, seed: int = 1234):
- super().__init__()
- self.rng = torch.Generator()
- self.rng.manual_seed(seed)
-
-
-class AttributeDropout(DropoutModule):
- """Applies dropout with a given probability per attribute. This is different from the behavior of
- ClassifierFreeGuidanceDropout as this allows for attributes to be dropped out separately. For example,
- "artist" can be dropped while "genre" remains. This is in contrast to ClassifierFreeGuidanceDropout
- where if "artist" is dropped "genre" must also be dropped.
-
- Args:
- p (tp.Dict[str, float]): A dict mapping between attributes and dropout probability. For example:
- ...
- "genre": 0.1,
- "artist": 0.5,
- "wav": 0.25,
- ...
- active_on_eval (bool, optional): Whether the dropout is active at eval. Default to False.
- seed (int, optional): Random seed.
- """
- def __init__(self, p: tp.Dict[str, tp.Dict[str, float]], active_on_eval: bool = False, seed: int = 1234):
- super().__init__(seed=seed)
- self.active_on_eval = active_on_eval
- # construct dict that return the values from p otherwise 0
- self.p = {}
- for condition_type, probs in p.items():
- self.p[condition_type] = defaultdict(lambda: 0, probs)
-
- def forward(self, samples: tp.List[ConditioningAttributes]) -> tp.List[ConditioningAttributes]:
- """
- Args:
- samples (tp.List[ConditioningAttributes]): List of conditions.
- Returns:
- tp.List[ConditioningAttributes]: List of conditions after certain attributes were set to None.
- """
- if not self.training and not self.active_on_eval:
- return samples
-
- samples = deepcopy(samples)
-
- for condition_type, ps in self.p.items(): # for condition types [text, wav]
- for condition, p in ps.items(): # for attributes of each type (e.g., [artist, genre])
- if torch.rand(1, generator=self.rng).item() < p:
- for sample in samples:
- dropout_condition(sample, condition_type, condition)
-
- return samples
-
- def __repr__(self):
- return f"AttributeDropout({dict(self.p)})"
-
-
-class ClassifierFreeGuidanceDropout(DropoutModule):
- """Applies Classifier Free Guidance dropout, meaning all attributes
- are dropped with the same probability.
-
- Args:
- p (float): Probability to apply condition dropout during training.
- seed (int): Random seed.
- """
- def __init__(self, p: float, seed: int = 1234):
- super().__init__(seed=seed)
- self.p = p
-
- def forward(self, samples: tp.List[ConditioningAttributes]) -> tp.List[ConditioningAttributes]:
- """
- Args:
- samples (tp.List[ConditioningAttributes]): List of conditions.
- Returns:
- tp.List[ConditioningAttributes]: List of conditions after all attributes were set to None.
- """
- if not self.training:
- return samples
-
- # decide on which attributes to drop in a batched fashion
- drop = torch.rand(1, generator=self.rng).item() < self.p
- if not drop:
- return samples
-
- # nullify conditions of all attributes
- samples = deepcopy(samples)
-
- for condition_type in ["wav", "text"]:
- for sample in samples:
- for condition in sample.attributes[condition_type]:
- dropout_condition(sample, condition_type, condition)
-
- return samples
-
- def __repr__(self):
- return f"ClassifierFreeGuidanceDropout(p={self.p})"
-
-
-class ConditioningProvider(nn.Module):
- """Main class to provide conditions given all the supported conditioners.
-
- Args:
- conditioners (dict): Dictionary of conditioners.
- merge_text_conditions_p (float, optional): Probability to merge all text sources
- into a single text condition. Defaults to 0.
- drop_desc_p (float, optional): Probability to drop the original description
- when merging all text sources into a single text condition. Defaults to 0.
- device (tp.Union[torch.device, str], optional): Device for conditioners and output condition types.
- """
- def __init__(
- self,
- conditioners: tp.Dict[str, BaseConditioner],
- merge_text_conditions_p: float = 0,
- drop_desc_p: float = 0,
- device: tp.Union[torch.device, str] = "cpu",
- ):
- super().__init__()
- self.device = device
- self.merge_text_conditions_p = merge_text_conditions_p
- self.drop_desc_p = drop_desc_p
- self.conditioners = nn.ModuleDict(conditioners)
-
- @property
- def text_conditions(self):
- return [k for k, v in self.conditioners.items() if isinstance(v, TextConditioner)]
-
- @property
- def wav_conditions(self):
- return [k for k, v in self.conditioners.items() if isinstance(v, WaveformConditioner)]
-
- @property
- def has_wav_condition(self):
- return len(self.wav_conditions) > 0
-
- def tokenize(self, inputs: tp.List[ConditioningAttributes]) -> tp.Dict[str, tp.Any]:
- """Match attributes/wavs with existing conditioners in self, and compute tokenize them accordingly.
- This should be called before starting any real GPU work to avoid synchronization points.
- This will return a dict matching conditioner names to their arbitrary tokenized representations.
-
- Args:
- inputs (list[ConditioningAttributes]): List of ConditioningAttributes objects containing
- text and wav conditions.
- """
- assert all([type(x) == ConditioningAttributes for x in inputs]), \
- "got unexpected types input for conditioner! should be tp.List[ConditioningAttributes]" \
- f" but types were {set([type(x) for x in inputs])}"
-
- output = {}
- text = self._collate_text(inputs)
- wavs = self._collate_wavs(inputs)
-
- assert set(text.keys() | wavs.keys()).issubset(set(self.conditioners.keys())), \
- f"got an unexpected attribute! Expected {self.conditioners.keys()}, got {text.keys(), wavs.keys()}"
-
- for attribute, batch in chain(text.items(), wavs.items()):
- output[attribute] = self.conditioners[attribute].tokenize(batch)
- return output
-
- def forward(self, tokenized: tp.Dict[str, tp.Any]) -> tp.Dict[str, ConditionType]:
- """Compute pairs of `(embedding, mask)` using the configured conditioners
- and the tokenized representations. The output is for example:
-
- {
- "genre": (torch.Tensor([B, 1, D_genre]), torch.Tensor([B, 1])),
- "description": (torch.Tensor([B, T_desc, D_desc]), torch.Tensor([B, T_desc])),
- ...
- }
-
- Args:
- tokenized (dict): Dict of tokenized representations as returned by `tokenize()`.
- """
- output = {}
- for attribute, inputs in tokenized.items():
- condition, mask = self.conditioners[attribute](inputs)
- output[attribute] = (condition, mask)
- return output
-
- def _collate_text(self, samples: tp.List[ConditioningAttributes]) -> tp.Dict[str, tp.List[tp.Optional[str]]]:
- """Given a list of ConditioningAttributes objects, compile a dictionary where the keys
- are the attributes and the values are the aggregated input per attribute.
- For example:
- Input:
- [
- ConditioningAttributes(text={"genre": "Rock", "description": "A rock song with a guitar solo"}, wav=...),
- ConditioningAttributes(text={"genre": "Hip-hop", "description": "A hip-hop verse"}, wav=...),
- ]
- Output:
- {
- "genre": ["Rock", "Hip-hop"],
- "description": ["A rock song with a guitar solo", "A hip-hop verse"]
- }
- """
- batch_per_attribute: tp.Dict[str, tp.List[tp.Optional[str]]] = defaultdict(list)
-
- def _merge_conds(cond, merge_text_conditions_p=0, drop_desc_p=0):
- def is_valid(k, v):
- k_valid = k in ['key', 'bpm', 'genre', 'moods', 'instrument']
- v_valid = v is not None and isinstance(v, (int, float, str, list))
- return k_valid and v_valid
-
- def process_value(v):
- if isinstance(v, (int, float, str)):
- return v
- if isinstance(v, list):
- return ", ".join(v)
- else:
- RuntimeError(f"unknown type for text value! ({type(v), v})")
-
- desc = cond.text['description']
- meta_data = ""
- if random.uniform(0, 1) < merge_text_conditions_p:
- meta_pairs = [f'{k}: {process_value(v)}' for k, v in cond.text.items() if is_valid(k, v)]
- random.shuffle(meta_pairs)
- meta_data = ". ".join(meta_pairs)
- desc = desc if not random.uniform(0, 1) < drop_desc_p else None
-
- if desc is None:
- desc = meta_data if len(meta_data) > 1 else None
- else:
- desc = desc.rstrip('.') + ". " + meta_data
- cond.text['description'] = desc.strip() if desc else None
-
- if self.training and self.merge_text_conditions_p:
- for sample in samples:
- _merge_conds(sample, self.merge_text_conditions_p, self.drop_desc_p)
-
- texts = [x.text for x in samples]
- for text in texts:
- for condition in self.text_conditions:
- batch_per_attribute[condition].append(text[condition])
-
- return batch_per_attribute
-
- def _collate_wavs(self, samples: tp.List[ConditioningAttributes]):
- """Generate a dict where the keys are attributes by which we fetch similar wavs,
- and the values are Tensors of wavs according to said attributes.
-
- *Note*: by the time the samples reach this function, each sample should have some waveform
- inside the "wav" attribute. It should be either:
- 1. A real waveform
- 2. A null waveform due to the sample having no similar waveforms (nullified by the dataset)
- 3. A null waveform due to it being dropped in a dropout module (nullified by dropout)
-
- Args:
- samples (tp.List[ConditioningAttributes]): List of ConditioningAttributes samples.
- Returns:
- dict: A dictionary mapping an attribute name to wavs.
- """
- wavs = defaultdict(list)
- lens = defaultdict(list)
- paths = defaultdict(list)
- out = {}
-
- for sample in samples:
- for attribute in self.wav_conditions:
- wav, length, path = sample.wav[attribute]
- wavs[attribute].append(wav.flatten())
- lens[attribute].append(length)
- paths[attribute].append(path)
-
- # stack all wavs to a single tensor
- for attribute in self.wav_conditions:
- stacked_wav, _ = collate(wavs[attribute], dim=0)
- out[attribute] = WavCondition(stacked_wav.unsqueeze(1),
- torch.cat(lens[attribute]), paths[attribute]) # type: ignore
-
- return out
-
-
-class ConditionFuser(StreamingModule):
- """Condition fuser handles the logic to combine the different conditions
- to the actual model input.
-
- Args:
- fuse2cond (tp.Dict[str, tp.List[str]]): A dictionary that says how to fuse
- each condition. For example:
- {
- "prepend": ["description"],
- "sum": ["genre", "bpm"],
- "cross": ["description"],
- }
- cross_attention_pos_emb (bool, optional): Use positional embeddings in cross attention.
- cross_attention_pos_emb_scale (float): Scale for positional embeddings in cross attention if used.
- """
- FUSING_METHODS = ["sum", "prepend", "cross", "input_interpolate"]
-
- def __init__(self, fuse2cond: tp.Dict[str, tp.List[str]], cross_attention_pos_emb: bool = False,
- cross_attention_pos_emb_scale: float = 1.0):
- super().__init__()
- assert all(
- [k in self.FUSING_METHODS for k in fuse2cond.keys()]
- ), f"got invalid fuse method, allowed methods: {self.FUSING_MEHTODS}"
- self.cross_attention_pos_emb = cross_attention_pos_emb
- self.cross_attention_pos_emb_scale = cross_attention_pos_emb_scale
- self.fuse2cond: tp.Dict[str, tp.List[str]] = fuse2cond
- self.cond2fuse: tp.Dict[str, str] = {}
- for fuse_method, conditions in fuse2cond.items():
- for condition in conditions:
- self.cond2fuse[condition] = fuse_method
-
- def forward(
- self,
- input: Tensor,
- conditions: tp.Dict[str, ConditionType]
- ) -> tp.Tuple[Tensor, tp.Optional[Tensor]]:
- """Fuse the conditions to the provided model input.
-
- Args:
- input (Tensor): Transformer input.
- conditions (tp.Dict[str, ConditionType]): Dict of conditions.
- Returns:
- tp.Tuple[Tensor, Tensor]: The first tensor is the transformer input
- after the conditions have been fused. The second output tensor is the tensor
- used for cross-attention or None if no cross attention inputs exist.
- """
- B, T, _ = input.shape
-
- if 'offsets' in self._streaming_state:
- first_step = False
- offsets = self._streaming_state['offsets']
- else:
- first_step = True
- offsets = torch.zeros(input.shape[0], dtype=torch.long, device=input.device)
-
- assert set(conditions.keys()).issubset(set(self.cond2fuse.keys())), \
- f"given conditions contain unknown attributes for fuser, " \
- f"expected {self.cond2fuse.keys()}, got {conditions.keys()}"
- cross_attention_output = None
- for cond_type, (cond, cond_mask) in conditions.items():
- op = self.cond2fuse[cond_type]
- if op == "sum":
- input += cond
- elif op == "input_interpolate":
- cond = rearrange(cond, "b t d -> b d t")
- cond = F.interpolate(cond, size=input.shape[1])
- input += rearrange(cond, "b d t -> b t d")
- elif op == "prepend":
- if first_step:
- input = torch.cat([cond, input], dim=1)
- elif op == "cross":
- if cross_attention_output is not None:
- cross_attention_output = torch.cat([cross_attention_output, cond], dim=1)
- else:
- cross_attention_output = cond
- else:
- raise ValueError(f"unknown op ({op})")
-
- if self.cross_attention_pos_emb and cross_attention_output is not None:
- positions = torch.arange(
- cross_attention_output.shape[1],
- device=cross_attention_output.device
- ).view(1, -1, 1)
- pos_emb = create_sin_embedding(positions, cross_attention_output.shape[-1])
- cross_attention_output = cross_attention_output + self.cross_attention_pos_emb_scale * pos_emb
-
- if self._is_streaming:
- self._streaming_state['offsets'] = offsets + T
-
- return input, cross_attention_output
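
The `AttributeDropout` and `ClassifierFreeGuidanceDropout` docstrings above distinguish dropping each attribute independently from dropping all of them with a single coin flip. Below is a minimal, framework-free sketch of that distinction; the dictionary layout and attribute names are illustrative stand-ins, not the audiocraft API.

```python
import random

def cfg_dropout(conditions: dict, p: float, rng: random.Random) -> dict:
    # Classifier-free-guidance style: one coin flip nullifies every attribute together.
    if rng.random() < p:
        return {name: None for name in conditions}
    return dict(conditions)

def attribute_dropout(conditions: dict, p_per_attr: dict, rng: random.Random) -> dict:
    # Per-attribute style: each attribute gets its own independent coin flip,
    # so "artist" can be dropped while "genre" survives.
    return {
        name: None if rng.random() < p_per_attr.get(name, 0.0) else value
        for name, value in conditions.items()
    }

rng = random.Random(1234)
sample = {"genre": "rock", "artist": "someone", "description": "a calm rock song"}
print(cfg_dropout(sample, p=0.3, rng=rng))
print(attribute_dropout(sample, {"genre": 0.1, "artist": 0.5}, rng=rng))
```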
diff --git a/spaces/Svngoku/TableTransformer2CSV/README.md b/spaces/Svngoku/TableTransformer2CSV/README.md
deleted file mode 100644
index 0f2524983c52374654abe5f8a0e1a09416391aa1..0000000000000000000000000000000000000000
--- a/spaces/Svngoku/TableTransformer2CSV/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Image2Table
-emoji: 🚀
-colorFrom: indigo
-colorTo: purple
-sdk: streamlit
-sdk_version: 1.10.0
-app_file: app.py
-pinned: false
-duplicated_from: SalML/TableTransformer2CSV
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/TRI-ML/risk_biased_prediction/import_dataset_from_huggingface.py b/spaces/TRI-ML/risk_biased_prediction/import_dataset_from_huggingface.py
deleted file mode 100644
index f9f6f3baac67142155cb04b6c8af2b5a2a9a0efb..0000000000000000000000000000000000000000
--- a/spaces/TRI-ML/risk_biased_prediction/import_dataset_from_huggingface.py
+++ /dev/null
@@ -1,55 +0,0 @@
-from datasets import load_dataset
-import datasets
-import json
-from mmcv import Config
-import numpy
-import torch
-
-from risk_biased.utils.waymo_dataloader import WaymoDataloaders
-
-
-config_path = "risk_biased/config/waymo_config.py"
-cfg = Config.fromfile(config_path)
-dataloaders = WaymoDataloaders(cfg)
-sample_dataloader = dataloaders.sample_dataloader()
-(
- x,
- mask_x,
- y,
- mask_y,
- mask_loss,
- map_data,
- mask_map,
- offset,
- x_ego,
- y_ego,
-) = sample_dataloader.collate_fn(sample_dataloader.dataset)
-
-# dataset = load_dataset("json", data_files="../risk_biased_dataset/data.json", split="test", field="x")
-# dataset = load_from_disk("../risk_biased_dataset/data.json")
-dataset = load_dataset("jmercat/risk_biased_dataset", split="test")
-
-x_c = torch.from_numpy(numpy.array(dataset["x"]).astype(numpy.float32))
-mask_x_c = torch.from_numpy(numpy.array(dataset["mask_x"]).astype(numpy.bool_))
-y_c = torch.from_numpy(numpy.array(dataset["y"]).astype(numpy.float32))
-mask_y_c = torch.from_numpy(numpy.array(dataset["mask_y"]).astype(numpy.bool_))
-mask_loss_c = torch.from_numpy(numpy.array(dataset["mask_loss"]).astype(numpy.bool_))
-map_data_c = torch.from_numpy(numpy.array(dataset["map_data"]).astype(numpy.float32))
-mask_map_c = torch.from_numpy(numpy.array(dataset["mask_map"]).astype(numpy.bool_))
-offset_c = torch.from_numpy(numpy.array(dataset["offset"]).astype(numpy.float32))
-x_ego_c = torch.from_numpy(numpy.array(dataset["x_ego"]).astype(numpy.float32))
-y_ego_c = torch.from_numpy(numpy.array(dataset["y_ego"]).astype(numpy.float32))
-
-assert torch.allclose(x, x_c)
-assert torch.allclose(mask_x, mask_x_c)
-assert torch.allclose(y, y_c)
-assert torch.allclose(mask_y, mask_y_c)
-assert torch.allclose(mask_loss, mask_loss_c)
-assert torch.allclose(map_data, map_data_c)
-assert torch.allclose(mask_map, mask_map_c)
-assert torch.allclose(offset, offset_c)
-assert torch.allclose(x_ego, x_ego_c)
-assert torch.allclose(y_ego, y_ego_c)
-
-print("All good!")
-
diff --git a/spaces/TRI-ML/risk_biased_prediction/tests/risk_biased/mpc_planner/test_planner.py b/spaces/TRI-ML/risk_biased_prediction/tests/risk_biased/mpc_planner/test_planner.py
deleted file mode 100644
index 950ce4a5cc87c37169ac4cc5e0c2b78239703778..0000000000000000000000000000000000000000
--- a/spaces/TRI-ML/risk_biased_prediction/tests/risk_biased/mpc_planner/test_planner.py
+++ /dev/null
@@ -1,140 +0,0 @@
-import os
-import pytest
-import torch
-from mmcv import Config
-
-from risk_biased.mpc_planner.planner import MPCPlanner, MPCPlannerParams
-from risk_biased.predictors.biased_predictor import (
- LitTrajectoryPredictorParams,
- LitTrajectoryPredictor,
-)
-
-from risk_biased.scene_dataset.loaders import SceneDataLoaders
-from risk_biased.utils.cost import TTCCostParams
-from risk_biased.utils.planner_utils import to_state
-
-
-@pytest.fixture(scope="module")
-def params():
- torch.manual_seed(0)
- working_dir = os.path.dirname(os.path.realpath(__file__))
- config_path = os.path.join(
- working_dir, "..", "..", "..", "risk_biased", "config", "learning_config.py"
- )
- cfg = Config.fromfile(config_path)
- cfg.num_control_samples = 10
- cfg.num_elite = 3
- cfg.iter_max = 3
- cfg.smoothing_factor = 0.2
- cfg.mean_warm_start = True
-
- cfg.acceleration_std_x_m_s2 = 2.0
- cfg.acceleration_std_y_m_s2 = 0.0
-
- cfg.dt = 0.1
- cfg.num_steps = 3
- cfg.num_steps_future = 5
-
- cfg.tracking_cost_scale_longitudinal = 0.1
- cfg.tracking_cost_scale_lateral = 1.0
- cfg.tracking_cost_reduce = "mean"
-
- cfg.cost_scale = 10
- cfg.cost_reduce = "mean"
- cfg.distance_bandwidth = 2
- cfg.time_bandwidth = 0.5
- cfg.min_velocity_diff = 0.01
-
- cfg.risk_estimator = {"type": "cvar", "eps": 1e-3}
-
- cfg.interaction_type = ""
- cfg.mcg_dim_expansion = 2
- cfg.mcg_num_layers = 0
- cfg.num_attention_heads = 4
- cfg.num_blocks = 3
- cfg.sequence_encoder_type = "MLP" # one of "MLP", "LSTM", "maskedLSTM"
- cfg.sequence_decoder_type = "MLP" # one of "MLP", "LSTM"
-
- cfg.state_dim = 2
- cfg.dynamic_state_dim = 2
- cfg.map_state_dim = 2
- cfg.max_size_lane = 0
- cfg.latent_dim = 2
- cfg.hidden_dim = 64
- cfg.num_hidden_layers = 3
- cfg.risk_distribution = {"type": "log-uniform", "min": 0, "max": 1, "scale": 3}
- cfg.kl_weight = 1.0
- cfg.kl_threshold = 0.1
- cfg.learning_rate = 1e-3
- cfg.n_mc_samples_risk = 2048
- cfg.n_mc_samples_biased = 128
- cfg.risk_weight = 1e3
- cfg.use_risk_constraint = True
- cfg.risk_constraint_update_every_n_epoch = 20
- cfg.risk_constraint_weight_update_factor = 1.5
- cfg.risk_constraint_weight_maximum = 1e5
- cfg.condition_on_ego_future = True
- cfg.is_mlp_residual = True
- cfg.num_samples_min_fde = 6
-
- return cfg
-
-
-class TestMPCPlanner:
- @pytest.fixture(autouse=True)
- def setup(self, params):
- self.planner_params = MPCPlannerParams.from_config(params)
- predictor_params = LitTrajectoryPredictorParams.from_config(params)
- self.predictor = LitTrajectoryPredictor(
- predictor_params,
- TTCCostParams.from_config(params),
- SceneDataLoaders.unnormalize_trajectory,
- )
- self.normalizer = SceneDataLoaders.normalize_trajectory
- self.planner = MPCPlanner(self.planner_params, self.predictor, self.normalizer)
-
- def test_reset(self):
- self.planner.reset()
- assert torch.allclose(
- self.planner.solver.control_input_mean_init,
- self.planner.control_input_mean_init,
- )
- assert torch.allclose(
- self.planner.solver.control_input_std_init,
- self.planner.control_input_std_init,
- )
- assert self.planner._ego_state_history == []
- assert self.planner._ego_state_target_trajectory is None
- assert self.planner._ego_state_planned_trajectory is None
-
- assert self.planner._ado_state_history == []
- assert self.planner._latest_ado_position_future_samples is None
-
- def test_replan(self, params):
- num_prediction_samples = 100
- num_agents = 1
- self.planner.reset()
- current_ego_state = to_state(torch.Tensor([[1, 1, 0, 0]]), params.dt)
- for step in range(params.num_steps + 1):
- self.planner._update_ego_state_history(current_ego_state)
-
- current_ado_state = to_state(torch.Tensor([[2.0, 0.0, 0, 0]]), params.dt)
- for step in range(params.num_steps + 1):
- self.planner._update_ado_state_history(current_ado_state)
-
- target_velocity = torch.Tensor([3.0, 0.0])
-
- self.planner.replan(
- current_ado_state,
- current_ego_state,
- target_velocity,
- num_prediction_samples=num_prediction_samples,
- )
- assert self.planner._ego_state_planned_trajectory.shape == torch.Size(
- [num_agents, params.num_steps_future]
- )
- next_ego_state = self.planner.get_planned_next_ego_state()
- assert next_ego_state.shape == torch.Size([1])
- assert self.planner.fetch_latest_prediction().shape == torch.Size(
- [num_prediction_samples, num_agents, params.num_steps_future]
- )
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/chardet/hebrewprober.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/chardet/hebrewprober.py
deleted file mode 100644
index 785d0057bcc0ea74a4b8d65ab7a0de78474bf892..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/chardet/hebrewprober.py
+++ /dev/null
@@ -1,316 +0,0 @@
-######################## BEGIN LICENSE BLOCK ########################
-# The Original Code is Mozilla Universal charset detector code.
-#
-# The Initial Developer of the Original Code is
-# Shy Shalom
-# Portions created by the Initial Developer are Copyright (C) 2005
-# the Initial Developer. All Rights Reserved.
-#
-# Contributor(s):
-# Mark Pilgrim - port to Python
-#
-# This library is free software; you can redistribute it and/or
-# modify it under the terms of the GNU Lesser General Public
-# License as published by the Free Software Foundation; either
-# version 2.1 of the License, or (at your option) any later version.
-#
-# This library is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-# Lesser General Public License for more details.
-#
-# You should have received a copy of the GNU Lesser General Public
-# License along with this library; if not, write to the Free Software
-# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
-# 02110-1301 USA
-######################### END LICENSE BLOCK #########################
-
-from typing import Optional, Union
-
-from .charsetprober import CharSetProber
-from .enums import ProbingState
-from .sbcharsetprober import SingleByteCharSetProber
-
-# This prober doesn't actually recognize a language or a charset.
-# It is a helper prober for the use of the Hebrew model probers
-
-### General ideas of the Hebrew charset recognition ###
-#
-# Four main charsets exist in Hebrew:
-# "ISO-8859-8" - Visual Hebrew
-# "windows-1255" - Logical Hebrew
-# "ISO-8859-8-I" - Logical Hebrew
-# "x-mac-hebrew" - ?? Logical Hebrew ??
-#
-# Both "ISO" charsets use a completely identical set of code points, whereas
-# "windows-1255" and "x-mac-hebrew" are two different proper supersets of
-# these code points. windows-1255 defines additional characters in the range
-# 0x80-0x9F as some misc punctuation marks as well as some Hebrew-specific
-# diacritics and additional 'Yiddish' ligature letters in the range 0xc0-0xd6.
-# x-mac-hebrew defines similar additional code points but with a different
-# mapping.
-#
-# As far as an average Hebrew text with no diacritics is concerned, all four
-# charsets are identical with respect to code points. Meaning that for the
-# main Hebrew alphabet, all four map the same values to all 27 Hebrew letters
-# (including final letters).
-#
-# The dominant difference between these charsets is their directionality.
-# "Visual" directionality means that the text is ordered as if the renderer is
-# not aware of a BIDI rendering algorithm. The renderer sees the text and
-# draws it from left to right. The text itself when ordered naturally is read
-# backwards. A buffer of Visual Hebrew generally looks like so:
-# "[last word of first line spelled backwards] [whole line ordered backwards
-# and spelled backwards] [first word of first line spelled backwards]
-# [end of line] [last word of second line] ... etc' "
-# adding punctuation marks, numbers and English text to visual text is
-# naturally also "visual" and from left to right.
-#
-# "Logical" directionality means the text is ordered "naturally" according to
-# the order it is read. It is the responsibility of the renderer to display
-# the text from right to left. A BIDI algorithm is used to place general
-# punctuation marks, numbers and English text in the text.
-#
-# Texts in x-mac-hebrew are almost impossible to find on the Internet. From
-# what little evidence I could find, it seems that its general directionality
-# is Logical.
-#
-# To sum up all of the above, the Hebrew probing mechanism knows about two
-# charsets:
-# Visual Hebrew - "ISO-8859-8" - backwards text - Words and sentences are
-# backwards while line order is natural. For charset recognition purposes
-# the line order is unimportant (In fact, for this implementation, even
-# word order is unimportant).
-# Logical Hebrew - "windows-1255" - normal, naturally ordered text.
-#
-# "ISO-8859-8-I" is a subset of windows-1255 and doesn't need to be
-# specifically identified.
-# "x-mac-hebrew" is also identified as windows-1255. A text in x-mac-hebrew
-# that contain special punctuation marks or diacritics is displayed with
-# some unconverted characters showing as question marks. This problem might
-# be corrected using another model prober for x-mac-hebrew. Due to the fact
-# that x-mac-hebrew texts are so rare, writing another model prober isn't
-# worth the effort and performance hit.
-#
-#### The Prober ####
-#
-# The prober is divided between two SBCharSetProbers and a HebrewProber,
-# all of which are managed, created, fed data, inquired and deleted by the
-# SBCSGroupProber. The two SBCharSetProbers identify that the text is in
-# fact some kind of Hebrew, Logical or Visual. The final decision about which
-# one is it is made by the HebrewProber by combining final-letter scores
-# with the scores of the two SBCharSetProbers to produce a final answer.
-#
-# The SBCSGroupProber is responsible for stripping the original text of HTML
-# tags, English characters, numbers, low-ASCII punctuation characters, spaces
-# and new lines. It reduces any sequence of such characters to a single space.
-# The buffer fed to each prober in the SBCS group prober is pure text in
-# high-ASCII.
-# The two SBCharSetProbers (model probers) share the same language model:
-# Win1255Model.
-# The first SBCharSetProber uses the model normally as any other
-# SBCharSetProber does, to recognize windows-1255, upon which this model was
-# built. The second SBCharSetProber is told to make the pair-of-letter
-# lookup in the language model backwards. This in practice exactly simulates
-# a visual Hebrew model using the windows-1255 logical Hebrew model.
-#
-# The HebrewProber is not using any language model. All it does is look for
-# final-letter evidence suggesting the text is either logical Hebrew or visual
-# Hebrew. Disjointed from the model probers, the results of the HebrewProber
-# alone are meaningless. HebrewProber always returns 0.00 as confidence
-# since it never identifies a charset by itself. Instead, the pointer to the
-# HebrewProber is passed to the model probers as a helper "Name Prober".
-# When the Group prober receives a positive identification from any prober,
-# it asks for the name of the charset identified. If the prober queried is a
-# Hebrew model prober, the model prober forwards the call to the
-# HebrewProber to make the final decision. In the HebrewProber, the
-# decision is made according to the final-letters scores maintained and Both
-# model probers scores. The answer is returned in the form of the name of the
-# charset identified, either "windows-1255" or "ISO-8859-8".
-
-
-class HebrewProber(CharSetProber):
- SPACE = 0x20
- # windows-1255 / ISO-8859-8 code points of interest
- FINAL_KAF = 0xEA
- NORMAL_KAF = 0xEB
- FINAL_MEM = 0xED
- NORMAL_MEM = 0xEE
- FINAL_NUN = 0xEF
- NORMAL_NUN = 0xF0
- FINAL_PE = 0xF3
- NORMAL_PE = 0xF4
- FINAL_TSADI = 0xF5
- NORMAL_TSADI = 0xF6
-
- # Minimum Visual vs Logical final letter score difference.
- # If the difference is below this, don't rely solely on the final letter score
- # distance.
- MIN_FINAL_CHAR_DISTANCE = 5
-
- # Minimum Visual vs Logical model score difference.
- # If the difference is below this, don't rely at all on the model score
- # distance.
- MIN_MODEL_DISTANCE = 0.01
-
- VISUAL_HEBREW_NAME = "ISO-8859-8"
- LOGICAL_HEBREW_NAME = "windows-1255"
-
- def __init__(self) -> None:
- super().__init__()
- self._final_char_logical_score = 0
- self._final_char_visual_score = 0
- self._prev = self.SPACE
- self._before_prev = self.SPACE
- self._logical_prober: Optional[SingleByteCharSetProber] = None
- self._visual_prober: Optional[SingleByteCharSetProber] = None
- self.reset()
-
- def reset(self) -> None:
- self._final_char_logical_score = 0
- self._final_char_visual_score = 0
- # The two last characters seen in the previous buffer,
- # mPrev and mBeforePrev are initialized to space in order to simulate
- # a word delimiter at the beginning of the data
- self._prev = self.SPACE
- self._before_prev = self.SPACE
- # These probers are owned by the group prober.
-
- def set_model_probers(
- self,
- logical_prober: SingleByteCharSetProber,
- visual_prober: SingleByteCharSetProber,
- ) -> None:
- self._logical_prober = logical_prober
- self._visual_prober = visual_prober
-
- def is_final(self, c: int) -> bool:
- return c in [
- self.FINAL_KAF,
- self.FINAL_MEM,
- self.FINAL_NUN,
- self.FINAL_PE,
- self.FINAL_TSADI,
- ]
-
- def is_non_final(self, c: int) -> bool:
- # The normal Tsadi is not a good Non-Final letter due to words like
- # 'lechotet' (to chat) containing an apostrophe after the tsadi. This
- # apostrophe is converted to a space in FilterWithoutEnglishLetters
- # causing the Non-Final tsadi to appear at an end of a word even
- # though this is not the case in the original text.
- # The letters Pe and Kaf rarely display a related behavior of not being
- # a good Non-Final letter. Words like 'Pop', 'Winamp' and 'Mubarak'
- # for example legally end with a Non-Final Pe or Kaf. However, the
- # benefit of these letters as Non-Final letters outweighs the damage
- # since these words are quite rare.
- return c in [self.NORMAL_KAF, self.NORMAL_MEM, self.NORMAL_NUN, self.NORMAL_PE]
-
- def feed(self, byte_str: Union[bytes, bytearray]) -> ProbingState:
- # Final letter analysis for logical-visual decision.
- # Look for evidence that the received buffer is either logical Hebrew
- # or visual Hebrew.
- # The following cases are checked:
- # 1) A word longer than 1 letter, ending with a final letter. This is
- # an indication that the text is laid out "naturally" since the
- # final letter really appears at the end. +1 for logical score.
- # 2) A word longer than 1 letter, ending with a Non-Final letter. In
- # normal Hebrew, words ending with Kaf, Mem, Nun, Pe or Tsadi,
- # should not end with the Non-Final form of that letter. Exceptions
- # to this rule are mentioned above in isNonFinal(). This is an
- # indication that the text is laid out backwards. +1 for visual
- # score
- # 3) A word longer than 1 letter, starting with a final letter. Final
- # letters should not appear at the beginning of a word. This is an
- # indication that the text is laid out backwards. +1 for visual
- # score.
- #
- # The visual score and logical score are accumulated throughout the
- # text and are finally checked against each other in GetCharSetName().
- # No checking for final letters in the middle of words is done since
- # that case is not an indication for either Logical or Visual text.
- #
- # We automatically filter out all 7-bit characters (replace them with
- # spaces) so the word boundary detection works properly. [MAP]
-
- if self.state == ProbingState.NOT_ME:
- # Both model probers say it's not them. No reason to continue.
- return ProbingState.NOT_ME
-
- byte_str = self.filter_high_byte_only(byte_str)
-
- for cur in byte_str:
- if cur == self.SPACE:
- # We stand on a space - a word just ended
- if self._before_prev != self.SPACE:
- # next-to-last char was not a space so self._prev is not a
- # 1 letter word
- if self.is_final(self._prev):
- # case (1) [-2:not space][-1:final letter][cur:space]
- self._final_char_logical_score += 1
- elif self.is_non_final(self._prev):
- # case (2) [-2:not space][-1:Non-Final letter][
- # cur:space]
- self._final_char_visual_score += 1
- else:
- # Not standing on a space
- if (
- (self._before_prev == self.SPACE)
- and (self.is_final(self._prev))
- and (cur != self.SPACE)
- ):
- # case (3) [-2:space][-1:final letter][cur:not space]
- self._final_char_visual_score += 1
- self._before_prev = self._prev
- self._prev = cur
-
- # Forever detecting, till the end or until both model probers return
- # ProbingState.NOT_ME (handled above)
- return ProbingState.DETECTING
-
- @property
- def charset_name(self) -> str:
- assert self._logical_prober is not None
- assert self._visual_prober is not None
-
- # Make the decision: is it Logical or Visual?
- # If the final letter score distance is dominant enough, rely on it.
- finalsub = self._final_char_logical_score - self._final_char_visual_score
- if finalsub >= self.MIN_FINAL_CHAR_DISTANCE:
- return self.LOGICAL_HEBREW_NAME
- if finalsub <= -self.MIN_FINAL_CHAR_DISTANCE:
- return self.VISUAL_HEBREW_NAME
-
- # It's not dominant enough, try to rely on the model scores instead.
- modelsub = (
- self._logical_prober.get_confidence() - self._visual_prober.get_confidence()
- )
- if modelsub > self.MIN_MODEL_DISTANCE:
- return self.LOGICAL_HEBREW_NAME
- if modelsub < -self.MIN_MODEL_DISTANCE:
- return self.VISUAL_HEBREW_NAME
-
- # Still no good, back to final letter distance, maybe it'll save the
- # day.
- if finalsub < 0.0:
- return self.VISUAL_HEBREW_NAME
-
- # (finalsub > 0 - Logical) or (don't know what to do) default to
- # Logical.
- return self.LOGICAL_HEBREW_NAME
-
- @property
- def language(self) -> str:
- return "Hebrew"
-
- @property
- def state(self) -> ProbingState:
- assert self._logical_prober is not None
- assert self._visual_prober is not None
-
- # Remain active as long as any of the model probers are active.
- if (self._logical_prober.state == ProbingState.NOT_ME) and (
- self._visual_prober.state == ProbingState.NOT_ME
- ):
- return ProbingState.NOT_ME
- return ProbingState.DETECTING
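
The long comment block and the `feed()` docstring above describe a purely statistical heuristic: count where final-form letters fall relative to word boundaries. Below is a simplified, Unicode-level sketch of cases (1)-(3); the real prober operates on windows-1255 bytes and combines these counts with two `SingleByteCharSetProber` scores, so this illustrates the idea rather than the chardet API.

```python
# Hebrew final forms (kaf, mem, nun, pe, tsadi) and the non-final forms the prober
# treats as "bad" word endings (tsadi is deliberately excluded, as explained above).
FINAL = set("ךםןףץ")
NON_FINAL = set("כמנפ")

def final_letter_scores(text: str):
    logical, visual = 0, 0
    for word in text.split():
        if len(word) < 2:
            continue
        if word[-1] in FINAL:        # case (1): final letter at the end -> logical
            logical += 1
        elif word[-1] in NON_FINAL:  # case (2): non-final form at the end -> visual
            visual += 1
        if word[0] in FINAL:         # case (3): final letter at the start -> visual
            visual += 1
    return logical, visual

# Naturally ordered ("logical") Hebrew tends to accumulate a higher logical score.
print(final_letter_scores("שלום עולם"))  # -> (2, 0)
```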
diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/modeling/roi_heads/rotated_fast_rcnn.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/modeling/roi_heads/rotated_fast_rcnn.py
deleted file mode 100644
index b1eedeebf8e3bde80722fc4acf51be6ca212cb3d..0000000000000000000000000000000000000000
--- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/modeling/roi_heads/rotated_fast_rcnn.py
+++ /dev/null
@@ -1,270 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import logging
-import numpy as np
-import torch
-
-from detectron2.config import configurable
-from detectron2.layers import ShapeSpec, batched_nms_rotated
-from detectron2.structures import Instances, RotatedBoxes, pairwise_iou_rotated
-from detectron2.utils.events import get_event_storage
-
-from ..box_regression import Box2BoxTransformRotated
-from ..poolers import ROIPooler
-from ..proposal_generator.proposal_utils import add_ground_truth_to_proposals
-from .box_head import build_box_head
-from .fast_rcnn import FastRCNNOutputLayers
-from .roi_heads import ROI_HEADS_REGISTRY, StandardROIHeads
-
-logger = logging.getLogger(__name__)
-
-"""
-Shape shorthand in this module:
-
- N: number of images in the minibatch
- R: number of ROIs, combined over all images, in the minibatch
- Ri: number of ROIs in image i
- K: number of foreground classes. E.g., there are 80 foreground classes in COCO.
-
-Naming convention:
-
- deltas: refers to the 5-d (dx, dy, dw, dh, da) deltas that parameterize the box2box
- transform (see :class:`box_regression.Box2BoxTransformRotated`).
-
- pred_class_logits: predicted class scores in [-inf, +inf]; use
- softmax(pred_class_logits) to estimate P(class).
-
- gt_classes: ground-truth classification labels in [0, K], where [0, K) represent
- foreground object classes and K represents the background class.
-
- pred_proposal_deltas: predicted rotated box2box transform deltas for transforming proposals
- to detection box predictions.
-
- gt_proposal_deltas: ground-truth rotated box2box transform deltas
-"""
-
-
-def fast_rcnn_inference_rotated(
- boxes, scores, image_shapes, score_thresh, nms_thresh, topk_per_image
-):
- """
- Call `fast_rcnn_inference_single_image_rotated` for all images.
-
- Args:
- boxes (list[Tensor]): A list of Tensors of predicted class-specific or class-agnostic
- boxes for each image. Element i has shape (Ri, K * 5) if doing
- class-specific regression, or (Ri, 5) if doing class-agnostic
- regression, where Ri is the number of predicted objects for image i.
- This is compatible with the output of :meth:`FastRCNNOutputLayers.predict_boxes`.
- scores (list[Tensor]): A list of Tensors of predicted class scores for each image.
- Element i has shape (Ri, K + 1), where Ri is the number of predicted objects
- for image i. Compatible with the output of :meth:`FastRCNNOutputLayers.predict_probs`.
- image_shapes (list[tuple]): A list of (width, height) tuples for each image in the batch.
- score_thresh (float): Only return detections with a confidence score exceeding this
- threshold.
- nms_thresh (float): The threshold to use for box non-maximum suppression. Value in [0, 1].
- topk_per_image (int): The number of top scoring detections to return. Set < 0 to return
- all detections.
-
- Returns:
- instances: (list[Instances]): A list of N instances, one for each image in the batch,
- that stores the topk most confidence detections.
- kept_indices: (list[Tensor]): A list of 1D tensor of length of N, each element indicates
- the corresponding boxes/scores index in [0, Ri) from the input, for image i.
- """
- result_per_image = [
- fast_rcnn_inference_single_image_rotated(
- boxes_per_image, scores_per_image, image_shape, score_thresh, nms_thresh, topk_per_image
- )
- for scores_per_image, boxes_per_image, image_shape in zip(scores, boxes, image_shapes)
- ]
- return [x[0] for x in result_per_image], [x[1] for x in result_per_image]
-
-
-def fast_rcnn_inference_single_image_rotated(
- boxes, scores, image_shape, score_thresh, nms_thresh, topk_per_image
-):
- """
- Single-image inference. Return rotated bounding-box detection results by thresholding
- on scores and applying rotated non-maximum suppression (Rotated NMS).
-
- Args:
- Same as `fast_rcnn_inference_rotated`, but with rotated boxes, scores, and image shapes
- per image.
-
- Returns:
- Same as `fast_rcnn_inference_rotated`, but for only one image.
- """
- valid_mask = torch.isfinite(boxes).all(dim=1) & torch.isfinite(scores).all(dim=1)
- if not valid_mask.all():
- boxes = boxes[valid_mask]
- scores = scores[valid_mask]
-
- B = 5 # box dimension
- scores = scores[:, :-1]
- num_bbox_reg_classes = boxes.shape[1] // B
- # Convert to Boxes to use the `clip` function ...
- boxes = RotatedBoxes(boxes.reshape(-1, B))
- boxes.clip(image_shape)
- boxes = boxes.tensor.view(-1, num_bbox_reg_classes, B) # R x C x B
- # Filter results based on detection scores
- filter_mask = scores > score_thresh # R x K
- # R' x 2. First column contains indices of the R predictions;
- # Second column contains indices of classes.
- filter_inds = filter_mask.nonzero()
- if num_bbox_reg_classes == 1:
- boxes = boxes[filter_inds[:, 0], 0]
- else:
- boxes = boxes[filter_mask]
- scores = scores[filter_mask]
-
- # Apply per-class Rotated NMS
- keep = batched_nms_rotated(boxes, scores, filter_inds[:, 1], nms_thresh)
- if topk_per_image >= 0:
- keep = keep[:topk_per_image]
- boxes, scores, filter_inds = boxes[keep], scores[keep], filter_inds[keep]
-
- result = Instances(image_shape)
- result.pred_boxes = RotatedBoxes(boxes)
- result.scores = scores
- result.pred_classes = filter_inds[:, 1]
-
- return result, filter_inds[:, 0]
-
-
-class RotatedFastRCNNOutputLayers(FastRCNNOutputLayers):
- """
- Two linear layers for predicting Rotated Fast R-CNN outputs.
- """
-
- @classmethod
- def from_config(cls, cfg, input_shape):
- args = super().from_config(cfg, input_shape)
- args["box2box_transform"] = Box2BoxTransformRotated(
- weights=cfg.MODEL.ROI_BOX_HEAD.BBOX_REG_WEIGHTS
- )
- return args
-
- def inference(self, predictions, proposals):
- """
- Returns:
- list[Instances]: same as `fast_rcnn_inference_rotated`.
- list[Tensor]: same as `fast_rcnn_inference_rotated`.
- """
- boxes = self.predict_boxes(predictions, proposals)
- scores = self.predict_probs(predictions, proposals)
- image_shapes = [x.image_size for x in proposals]
-
- return fast_rcnn_inference_rotated(
- boxes,
- scores,
- image_shapes,
- self.test_score_thresh,
- self.test_nms_thresh,
- self.test_topk_per_image,
- )
-
-
-@ROI_HEADS_REGISTRY.register()
-class RROIHeads(StandardROIHeads):
- """
- This class is used by Rotated Fast R-CNN to detect rotated boxes.
- For now, it only supports box predictions but not mask or keypoints.
- """
-
- @configurable
- def __init__(self, **kwargs):
- """
- NOTE: this interface is experimental.
- """
- super().__init__(**kwargs)
- assert (
- not self.mask_on and not self.keypoint_on
- ), "Mask/Keypoints not supported in Rotated ROIHeads."
- assert not self.train_on_pred_boxes, "train_on_pred_boxes not implemented for RROIHeads!"
-
- @classmethod
- def _init_box_head(cls, cfg, input_shape):
- # fmt: off
- in_features = cfg.MODEL.ROI_HEADS.IN_FEATURES
- pooler_resolution = cfg.MODEL.ROI_BOX_HEAD.POOLER_RESOLUTION
- pooler_scales = tuple(1.0 / input_shape[k].stride for k in in_features)
- sampling_ratio = cfg.MODEL.ROI_BOX_HEAD.POOLER_SAMPLING_RATIO
- pooler_type = cfg.MODEL.ROI_BOX_HEAD.POOLER_TYPE
- # fmt: on
- assert pooler_type in ["ROIAlignRotated"], pooler_type
- # assume all channel counts are equal
- in_channels = [input_shape[f].channels for f in in_features][0]
-
- box_pooler = ROIPooler(
- output_size=pooler_resolution,
- scales=pooler_scales,
- sampling_ratio=sampling_ratio,
- pooler_type=pooler_type,
- )
- box_head = build_box_head(
- cfg, ShapeSpec(channels=in_channels, height=pooler_resolution, width=pooler_resolution)
- )
- # This line is the only difference v.s. StandardROIHeads
- box_predictor = RotatedFastRCNNOutputLayers(cfg, box_head.output_shape)
- return {
- "box_in_features": in_features,
- "box_pooler": box_pooler,
- "box_head": box_head,
- "box_predictor": box_predictor,
- }
-
- @torch.no_grad()
- def label_and_sample_proposals(self, proposals, targets):
- """
- Prepare some proposals to be used to train the RROI heads.
- It performs box matching between `proposals` and `targets`, and assigns
- training labels to the proposals.
- It returns `self.batch_size_per_image` random samples from proposals and groundtruth boxes,
- with a fraction of positives that is no larger than `self.positive_sample_fraction`.
-
- Args:
- See :meth:`StandardROIHeads.forward`
-
- Returns:
- list[Instances]: length `N` list of `Instances`s containing the proposals
- sampled for training. Each `Instances` has the following fields:
- - proposal_boxes: the rotated proposal boxes
- - gt_boxes: the ground-truth rotated boxes that the proposal is assigned to
- (this is only meaningful if the proposal has a label > 0; if label = 0
- then the ground-truth box is random)
- - gt_classes: the ground-truth classification lable for each proposal
- """
- if self.proposal_append_gt:
- proposals = add_ground_truth_to_proposals(targets, proposals)
-
- proposals_with_gt = []
-
- num_fg_samples = []
- num_bg_samples = []
- for proposals_per_image, targets_per_image in zip(proposals, targets):
- has_gt = len(targets_per_image) > 0
- match_quality_matrix = pairwise_iou_rotated(
- targets_per_image.gt_boxes, proposals_per_image.proposal_boxes
- )
- matched_idxs, matched_labels = self.proposal_matcher(match_quality_matrix)
- sampled_idxs, gt_classes = self._sample_proposals(
- matched_idxs, matched_labels, targets_per_image.gt_classes
- )
-
- proposals_per_image = proposals_per_image[sampled_idxs]
- proposals_per_image.gt_classes = gt_classes
-
- if has_gt:
- sampled_targets = matched_idxs[sampled_idxs]
- proposals_per_image.gt_boxes = targets_per_image.gt_boxes[sampled_targets]
-
- num_bg_samples.append((gt_classes == self.num_classes).sum().item())
- num_fg_samples.append(gt_classes.numel() - num_bg_samples[-1])
- proposals_with_gt.append(proposals_per_image)
-
- # Log the number of fg/bg samples that are selected for training ROI heads
- storage = get_event_storage()
- storage.put_scalar("roi_head/num_fg_samples", np.mean(num_fg_samples))
- storage.put_scalar("roi_head/num_bg_samples", np.mean(num_bg_samples))
-
- return proposals_with_gt
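
`fast_rcnn_inference_single_image_rotated` above boils down to: drop non-finite boxes, threshold scores, run rotated NMS, keep the top-k. The sketch below reproduces that filter-then-NMS flow in plain NumPy, using axis-aligned IoU as a stand-in for `batched_nms_rotated` (rotated IoU needs polygon intersection and is not reproduced here), so it illustrates the control flow rather than detectron2's implementation.

```python
import numpy as np

def iou(a, b):
    # a, b: boxes as (x1, y1, x2, y2); axis-aligned for simplicity.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def filter_then_nms(boxes, scores, score_thresh=0.5, nms_thresh=0.5, topk=100):
    boxes, scores = np.asarray(boxes, float), np.asarray(scores, float)
    keep = scores > score_thresh                  # 1) score thresholding
    boxes, scores = boxes[keep], scores[keep]
    kept = []
    for i in np.argsort(-scores):                 # 2) greedy NMS in descending score order
        if all(iou(boxes[i], boxes[j]) <= nms_thresh for j in kept):
            kept.append(i)
    kept = kept[:topk]                            # 3) keep at most top-k detections
    return boxes[kept], scores[kept]

boxes, scores = filter_then_nms(
    [[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], [0.9, 0.8, 0.7])
print(boxes, scores)
```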
diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/utils/colormap.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/utils/colormap.py
deleted file mode 100644
index 150ccc372262ec4de0b36db66a303cae9495e67f..0000000000000000000000000000000000000000
--- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/utils/colormap.py
+++ /dev/null
@@ -1,140 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-"""
-An awesome colormap for really neat visualizations.
-Copied from Detectron, and removed gray colors.
-"""
-
-import numpy as np
-
-__all__ = ["colormap", "random_color"]
-
-# fmt: off
-# RGB:
-_COLORS = np.array(
- [
- 0.000, 0.447, 0.741,
- 0.850, 0.325, 0.098,
- 0.929, 0.694, 0.125,
- 0.494, 0.184, 0.556,
- 0.466, 0.674, 0.188,
- 0.301, 0.745, 0.933,
- 0.635, 0.078, 0.184,
- 0.300, 0.300, 0.300,
- 0.600, 0.600, 0.600,
- 1.000, 0.000, 0.000,
- 1.000, 0.500, 0.000,
- 0.749, 0.749, 0.000,
- 0.000, 1.000, 0.000,
- 0.000, 0.000, 1.000,
- 0.667, 0.000, 1.000,
- 0.333, 0.333, 0.000,
- 0.333, 0.667, 0.000,
- 0.333, 1.000, 0.000,
- 0.667, 0.333, 0.000,
- 0.667, 0.667, 0.000,
- 0.667, 1.000, 0.000,
- 1.000, 0.333, 0.000,
- 1.000, 0.667, 0.000,
- 1.000, 1.000, 0.000,
- 0.000, 0.333, 0.500,
- 0.000, 0.667, 0.500,
- 0.000, 1.000, 0.500,
- 0.333, 0.000, 0.500,
- 0.333, 0.333, 0.500,
- 0.333, 0.667, 0.500,
- 0.333, 1.000, 0.500,
- 0.667, 0.000, 0.500,
- 0.667, 0.333, 0.500,
- 0.667, 0.667, 0.500,
- 0.667, 1.000, 0.500,
- 1.000, 0.000, 0.500,
- 1.000, 0.333, 0.500,
- 1.000, 0.667, 0.500,
- 1.000, 1.000, 0.500,
- 0.000, 0.333, 1.000,
- 0.000, 0.667, 1.000,
- 0.000, 1.000, 1.000,
- 0.333, 0.000, 1.000,
- 0.333, 0.333, 1.000,
- 0.333, 0.667, 1.000,
- 0.333, 1.000, 1.000,
- 0.667, 0.000, 1.000,
- 0.667, 0.333, 1.000,
- 0.667, 0.667, 1.000,
- 0.667, 1.000, 1.000,
- 1.000, 0.000, 1.000,
- 1.000, 0.333, 1.000,
- 1.000, 0.667, 1.000,
- 0.333, 0.000, 0.000,
- 0.500, 0.000, 0.000,
- 0.667, 0.000, 0.000,
- 0.833, 0.000, 0.000,
- 1.000, 0.000, 0.000,
- 0.000, 0.167, 0.000,
- 0.000, 0.333, 0.000,
- 0.000, 0.500, 0.000,
- 0.000, 0.667, 0.000,
- 0.000, 0.833, 0.000,
- 0.000, 1.000, 0.000,
- 0.000, 0.000, 0.167,
- 0.000, 0.000, 0.333,
- 0.000, 0.000, 0.500,
- 0.000, 0.000, 0.667,
- 0.000, 0.000, 0.833,
- 0.000, 0.000, 1.000,
- 0.000, 0.000, 0.000,
- 0.143, 0.143, 0.143,
- 0.857, 0.857, 0.857,
- 1.000, 1.000, 1.000
- ]
-).astype(np.float32).reshape(-1, 3)
-# fmt: on
-
-
-def colormap(rgb=False, maximum=255):
- """
- Args:
- rgb (bool): whether to return RGB colors or BGR colors.
- maximum (int): either 255 or 1
-
- Returns:
- ndarray: a float32 array of Nx3 colors, in range [0, 255] or [0, 1]
- """
- assert maximum in [255, 1], maximum
- c = _COLORS * maximum
- if not rgb:
- c = c[:, ::-1]
- return c
-
-
-def random_color(rgb=False, maximum=255):
- """
- Args:
- rgb (bool): whether to return RGB colors or BGR colors.
- maximum (int): either 255 or 1
-
- Returns:
- ndarray: a vector of 3 numbers
- """
- idx = np.random.randint(0, len(_COLORS))
- ret = _COLORS[idx] * maximum
- if not rgb:
- ret = ret[::-1]
- return ret
-
-
-if __name__ == "__main__":
- import cv2
-
- size = 100
- H, W = 10, 10
- canvas = np.random.rand(H * size, W * size, 3).astype("float32")
- for h in range(H):
- for w in range(W):
- idx = h * W + w
- if idx >= len(_COLORS):
- break
- canvas[h * size : (h + 1) * size, w * size : (w + 1) * size] = _COLORS[idx]
- cv2.imshow("a", canvas)
- cv2.waitKey(0)
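
A short usage sketch for a palette like `_COLORS`: pick a deterministic colour per instance index by cycling through the table, scaled to `[0, 255]` or `[0, 1]` as in `colormap()`. The three-entry palette below is a stand-in; with detectron2 itself you would call `colormap()` or `random_color()` as defined above.

```python
import numpy as np

# Small stand-in palette in [0, 1] RGB; any Nx3 table such as _COLORS works the same way.
PALETTE = np.array([[0.000, 0.447, 0.741],
                    [0.850, 0.325, 0.098],
                    [0.929, 0.694, 0.125]], dtype=np.float32)

def color_for_instance(idx: int, maximum: int = 255) -> np.ndarray:
    # Cycle through the palette so every instance id maps to a stable colour.
    assert maximum in (255, 1), maximum
    return PALETTE[idx % len(PALETTE)] * maximum

for i in range(5):
    print(i, color_for_instance(i))
```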
diff --git a/spaces/TerrificTerry/HAAO_AI/README.md b/spaces/TerrificTerry/HAAO_AI/README.md
deleted file mode 100644
index 210c3ecee2217a531225c7534215f4e94e529a42..0000000000000000000000000000000000000000
--- a/spaces/TerrificTerry/HAAO_AI/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: HAAO-AI
-emoji: 🌍
-colorFrom: red
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.23.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Tune-A-Video-library/Tune-A-Video-Training-UI/app_upload.py b/spaces/Tune-A-Video-library/Tune-A-Video-Training-UI/app_upload.py
deleted file mode 100644
index 86b2bd96641cce3d87b245567cc06d49524b9941..0000000000000000000000000000000000000000
--- a/spaces/Tune-A-Video-library/Tune-A-Video-Training-UI/app_upload.py
+++ /dev/null
@@ -1,69 +0,0 @@
-#!/usr/bin/env python
-
-from __future__ import annotations
-
-import os
-
-import gradio as gr
-
-from constants import MODEL_LIBRARY_ORG_NAME, UploadTarget
-from uploader import upload
-from utils import find_exp_dirs
-
-
-def load_local_model_list() -> dict:
- choices = find_exp_dirs()
- return gr.update(choices=choices, value=choices[0] if choices else None)
-
-
-def create_upload_demo(disable_run_button: bool = False) -> gr.Blocks:
- model_dirs = find_exp_dirs()
-
- with gr.Blocks() as demo:
- with gr.Box():
- gr.Markdown("Local Models")
- reload_button = gr.Button("Reload Model List")
- model_dir = gr.Dropdown(
- label="Model names", choices=model_dirs, value=model_dirs[0] if model_dirs else None
- )
- with gr.Box():
- gr.Markdown("Upload Settings")
- with gr.Row():
- use_private_repo = gr.Checkbox(label="Private", value=True)
- delete_existing_repo = gr.Checkbox(label="Delete existing repo of the same name", value=False)
- upload_to = gr.Radio(
- label="Upload to", choices=[_.value for _ in UploadTarget], value=UploadTarget.MODEL_LIBRARY.value
- )
- model_name = gr.Textbox(label="Model Name")
- hf_token = gr.Text(
- label="Hugging Face Write Token", type="password", visible=os.getenv("HF_TOKEN") is None
- )
- upload_button = gr.Button("Upload", interactive=not disable_run_button)
- gr.Markdown(
- f"""
- - You can upload your trained model to your personal profile (i.e. `https://huggingface.co/{{your_username}}/{{model_name}}`) or to the public [Tune-A-Video Library](https://huggingface.co/{MODEL_LIBRARY_ORG_NAME}) (i.e. `https://huggingface.co/{MODEL_LIBRARY_ORG_NAME}/{{model_name}}`).
- """
- )
- with gr.Box():
- gr.Markdown("Output message")
- output_message = gr.Markdown()
-
- reload_button.click(fn=load_local_model_list, inputs=None, outputs=model_dir)
- upload_button.click(
- fn=upload,
- inputs=[
- model_dir,
- model_name,
- upload_to,
- use_private_repo,
- delete_existing_repo,
- hf_token,
- ],
- outputs=output_message,
- )
- return demo
-
-
-if __name__ == "__main__":
- demo = create_upload_demo()
- demo.queue(api_open=False, max_size=1).launch()
diff --git a/spaces/Vijish/SkinDeep/app.py b/spaces/Vijish/SkinDeep/app.py
deleted file mode 100644
index 41d2938a438e890983fe67032f72b68071087294..0000000000000000000000000000000000000000
--- a/spaces/Vijish/SkinDeep/app.py
+++ /dev/null
@@ -1,90 +0,0 @@
-import streamlit as st
-import urllib.request
-import PIL.Image
-from PIL import Image
-import requests
-import fastai
-from fastai.vision import *
-from fastai.utils.mem import *
-from fastai.vision import open_image, load_learner, image, torch
-import numpy as np
-from urllib.request import urlretrieve
-from io import BytesIO
-import torchvision.transforms as T
-from PIL import Image, ImageOps, ImageFilter
-import os
-
-
-
-class FeatureLoss(nn.Module):
- def __init__(self, m_feat, layer_ids, layer_wgts):
- super().__init__()
- self.m_feat = m_feat
- self.loss_features = [self.m_feat[i] for i in layer_ids]
- self.hooks = hook_outputs(self.loss_features, detach=False)
- self.wgts = layer_wgts
- self.metric_names = ['pixel',] + [f'feat_{i}' for i in range(len(layer_ids))
- ] + [f'gram_{i}' for i in range(len(layer_ids))]
-
- def make_features(self, x, clone=False):
- self.m_feat(x)
- return [(o.clone() if clone else o) for o in self.hooks.stored]
-
- def forward(self, input, target):
- out_feat = self.make_features(target, clone=True)
- in_feat = self.make_features(input)
- self.feat_losses = [base_loss(input,target)]
- self.feat_losses += [base_loss(f_in, f_out)*w
- for f_in, f_out, w in zip(in_feat, out_feat, self.wgts)]
- self.feat_losses += [base_loss(gram_matrix(f_in), gram_matrix(f_out))*w**2 * 5e3
- for f_in, f_out, w in zip(in_feat, out_feat, self.wgts)]
- self.metrics = dict(zip(self.metric_names, self.feat_losses))
- return sum(self.feat_losses)
-
- def __del__(self): self.hooks.remove()
-
-
-MODEL_URL = "https://www.dropbox.com/s/vxgw0s7ktpla4dk/SkinDeep2.pkl?dl=1"
-urlretrieve(MODEL_URL, "SkinDeep2.pkl")
-path = Path(".")
-learn = load_learner(path, 'SkinDeep2.pkl')
-
-
-def predict(image):
- img_fast = open_image(image)
- a = PIL.Image.open(image).convert('RGB')
- st.image(a, caption='Input')
- p,img_hr,b = learn.predict(img_fast)
- x = np.minimum(np.maximum(image2np(img_hr.data*255), 0), 255).astype(np.uint8)
- img = PIL.Image.fromarray(x).convert('RGB')
- return st.image(img, caption='Tattoo')
-
-
-SIDEBAR_OPTION_DEMO_IMAGE = "Select a Demo Image"
-SIDEBAR_OPTION_UPLOAD_IMAGE = "Upload an Image"
-
-SIDEBAR_OPTIONS = [SIDEBAR_OPTION_DEMO_IMAGE, SIDEBAR_OPTION_UPLOAD_IMAGE]
-
-app_mode = st.sidebar.selectbox("Please select from the following", SIDEBAR_OPTIONS)
-photos = ["tatoo.jpg","tattoo2.jpg"]
-
-if app_mode == SIDEBAR_OPTION_DEMO_IMAGE:
- st.sidebar.write(" ------ ")
- option = st.sidebar.selectbox('Please select a sample image and then click the Predict button', photos)
- pressed = st.sidebar.button('Predict')
- if pressed:
- st.empty()
- st.sidebar.write('Please wait for the magic to happen! This may take up to a minute.')
- predict(option)
-
-
-elif app_mode == SIDEBAR_OPTION_UPLOAD_IMAGE:
- uploaded_file = st.file_uploader("Choose an image...")
- if uploaded_file is not None:
- pressed = st.sidebar.button('Predict')
- if pressed:
- st.empty()
- st.sidebar.write('Please wait for the magic to happen! This may take up to a minute.')
- predict(uploaded_file)
\ No newline at end of file
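
`FeatureLoss` above depends on `gram_matrix` and `base_loss` coming from the surrounding fast.ai notebook context; neither is defined in this file. The sketch below shows one common way to write the gram-matrix style term in plain PyTorch, with an L1 base loss as an assumed stand-in, so it illustrates the shape of the computation rather than the exact constants used by SkinDeep.

```python
import torch
import torch.nn.functional as F

def gram_matrix(x: torch.Tensor) -> torch.Tensor:
    # x: (batch, channels, height, width) feature map from some backbone layer.
    b, c, h, w = x.shape
    feats = x.view(b, c, h * w)
    # (b, c, c) channel-correlation matrix, normalised by the number of elements.
    return feats @ feats.transpose(1, 2) / (c * h * w)

def style_term(f_in: torch.Tensor, f_out: torch.Tensor) -> torch.Tensor:
    # L1 distance between gram matrices of predicted and target feature maps.
    return F.l1_loss(gram_matrix(f_in), gram_matrix(f_out))

f_pred, f_target = torch.rand(2, 8, 16, 16), torch.rand(2, 8, 16, 16)
print(style_term(f_pred, f_target).item())
```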
diff --git a/spaces/VincentZB/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/__init__.py b/spaces/VincentZB/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/__init__.py
deleted file mode 100644
index 159d48b876ae21fb777e7cf1f4ad157ccb356845..0000000000000000000000000000000000000000
--- a/spaces/VincentZB/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-__version__ = "2.0.1"
diff --git a/spaces/WZUN666/vits-uma-genshin-honkai/modules.py b/spaces/WZUN666/vits-uma-genshin-honkai/modules.py
deleted file mode 100644
index 56ea4145eddf19dd330a3a41ab0183efc1686d83..0000000000000000000000000000000000000000
--- a/spaces/WZUN666/vits-uma-genshin-honkai/modules.py
+++ /dev/null
@@ -1,388 +0,0 @@
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-import commons
-from commons import init_weights, get_padding
-from transforms import piecewise_rational_quadratic_transform
-
-
-LRELU_SLOPE = 0.1
-
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
-
-
-class ConvReluNorm(nn.Module):
- def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
- assert n_layers > 1, "Number of layers should be larger than 1."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(
- nn.ReLU(),
- nn.Dropout(p_dropout))
- for _ in range(n_layers-1):
- self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DDSConv(nn.Module):
- """
- Dilated and Depth-Separable Convolution
- """
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.):
- super().__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.convs_sep = nn.ModuleList()
- self.convs_1x1 = nn.ModuleList()
- self.norms_1 = nn.ModuleList()
- self.norms_2 = nn.ModuleList()
- for i in range(n_layers):
- dilation = kernel_size ** i
- padding = (kernel_size * dilation - dilation) // 2
- self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size,
- groups=channels, dilation=dilation, padding=padding
- ))
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
- self.norms_1.append(LayerNorm(channels))
- self.norms_2.append(LayerNorm(channels))
-
- def forward(self, x, x_mask, g=None):
- if g is not None:
- x = x + g
- for i in range(self.n_layers):
- y = self.convs_sep[i](x * x_mask)
- y = self.norms_1[i](y)
- y = F.gelu(y)
- y = self.convs_1x1[i](y)
- y = self.norms_2[i](y)
- y = F.gelu(y)
- y = self.drop(y)
- x = x + y
- return x * x_mask
-
-
-class WN(torch.nn.Module):
- def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0):
- super(WN, self).__init__()
- assert(kernel_size % 2 == 1)
- self.hidden_channels =hidden_channels
-    self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1)
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight')
-
- for i in range(n_layers):
- dilation = dilation_rate ** i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size,
- dilation=dilation, padding=padding)
- in_layer = torch.nn.utils.weight_norm(in_layer, name='weight')
- self.in_layers.append(in_layer)
-
- # last one is not necessary
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight')
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask, g=None, **kwargs):
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:]
- else:
- g_l = torch.zeros_like(x_in)
-
- acts = commons.fused_add_tanh_sigmoid_multiply(
- x_in,
- g_l,
- n_channels_tensor)
- acts = self.drop(acts)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- res_acts = res_skip_acts[:,:self.hidden_channels,:]
- x = (x + res_acts) * x_mask
- output = output + res_skip_acts[:,self.hidden_channels:,:]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- if self.gin_channels != 0:
- torch.nn.utils.remove_weight_norm(self.cond_layer)
- for l in self.in_layers:
- torch.nn.utils.remove_weight_norm(l)
- for l in self.res_skip_layers:
- torch.nn.utils.remove_weight_norm(l)
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.convs1 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2])))
- ])
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1)))
- ])
- self.convs2.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c2(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.convs = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1])))
- ])
- self.convs.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Log(nn.Module):
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
- logdet = torch.sum(-y, [1, 2])
- return y, logdet
- else:
- x = torch.exp(x) * x_mask
- return x
-
-
-class Flip(nn.Module):
- def forward(self, x, *args, reverse=False, **kwargs):
- x = torch.flip(x, [1])
- if not reverse:
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
- return x, logdet
- else:
- return x
-
-
-class ElementwiseAffine(nn.Module):
- def __init__(self, channels):
- super().__init__()
- self.channels = channels
- self.m = nn.Parameter(torch.zeros(channels,1))
- self.logs = nn.Parameter(torch.zeros(channels,1))
-
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = self.m + torch.exp(self.logs) * x
- y = y * x_mask
- logdet = torch.sum(self.logs * x_mask, [1,2])
- return y, logdet
- else:
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
- return x
-
-
-class ResidualCouplingLayer(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=0,
- gin_channels=0,
- mean_only=False):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels)
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels]*2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1,2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
-
-
-class ConvFlow(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0):
- super().__init__()
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.num_bins = num_bins
- self.tail_bound = tail_bound
- self.half_channels = in_channels // 2
-
- self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
- self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.)
- self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0)
- h = self.convs(h, x_mask, g=g)
- h = self.proj(h) * x_mask
-
- b, c, t = x0.shape
- h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?]
-
- unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_derivatives = h[..., 2 * self.num_bins:]
-
- x1, logabsdet = piecewise_rational_quadratic_transform(x1,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=reverse,
- tails='linear',
- tail_bound=self.tail_bound
- )
-
- x = torch.cat([x0, x1], 1) * x_mask
- logdet = torch.sum(logabsdet * x_mask, [1,2])
- if not reverse:
- return x, logdet
- else:
- return x
diff --git a/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/fastai/text/models/forget_mult_cuda.cpp b/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/fastai/text/models/forget_mult_cuda.cpp
deleted file mode 100644
index 65faf062ae63e1b93ca62262e382c5445f3fff9c..0000000000000000000000000000000000000000
--- a/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/fastai/text/models/forget_mult_cuda.cpp
+++ /dev/null
@@ -1,31 +0,0 @@
-#include <torch/extension.h>
-
-#include <vector>
-
-// CUDA forward declarations
-at::Tensor forget_mult_cuda_forward(at::Tensor x, at::Tensor f, at::Tensor output, bool batch_first);
-
-// C++ interface
-
-#define CHECK_CUDA(x) AT_ASSERTM(x.type().is_cuda(), #x " must be a CUDA tensor")
-#define CHECK_CONTIGUOUS(x) AT_ASSERTM(x.is_contiguous(), #x " must be contiguous")
-#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x)
-
-at::Tensor forget_mult_forward(at::Tensor x, at::Tensor f, at::Tensor output, bool batch_first) {
- CHECK_INPUT(x); CHECK_INPUT(f); CHECK_INPUT(output);
- return forget_mult_cuda_forward(x, f, output, batch_first);
-}
-
-std::vector<at::Tensor> forget_mult_cuda_backward(at::Tensor x, at::Tensor f, at::Tensor output,
- at::Tensor grad_output, bool batch_first);
-
-std::vector<at::Tensor> forget_mult_backward(at::Tensor x, at::Tensor f, at::Tensor output,
- at::Tensor grad_output, bool batch_first) {
- CHECK_INPUT(x); CHECK_INPUT(f); CHECK_INPUT(output);
- return forget_mult_cuda_backward(x, f, output, grad_output, batch_first);
-}
-
-PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
- m.def("forward", &forget_mult_forward, "ForgetMult forward (CUDA)");
- m.def("backward", &forget_mult_backward, "ForgetMult backward (CUDA)");
-}
diff --git a/spaces/XlalalaX/VITS-Umamusume-voice-synthesizer/text/ngu_dialect.py b/spaces/XlalalaX/VITS-Umamusume-voice-synthesizer/text/ngu_dialect.py
deleted file mode 100644
index ce3e12bbf0469426872eed5f681985d3e1be9b26..0000000000000000000000000000000000000000
--- a/spaces/XlalalaX/VITS-Umamusume-voice-synthesizer/text/ngu_dialect.py
+++ /dev/null
@@ -1,30 +0,0 @@
-import re
-import opencc
-
-
-dialects = {'SZ': 'suzhou', 'WX': 'wuxi', 'CZ': 'changzhou', 'HZ': 'hangzhou',
- 'SX': 'shaoxing', 'NB': 'ningbo', 'JJ': 'jingjiang', 'YX': 'yixing',
- 'JD': 'jiading', 'ZR': 'zhenru', 'PH': 'pinghu', 'TX': 'tongxiang',
- 'JS': 'jiashan', 'HN': 'xiashi', 'LP': 'linping', 'XS': 'xiaoshan',
- 'FY': 'fuyang', 'RA': 'ruao', 'CX': 'cixi', 'SM': 'sanmen',
- 'TT': 'tiantai', 'WZ': 'wenzhou', 'SC': 'suichang', 'YB': 'youbu'}
-
-converters = {}
-
-for dialect in dialects.values():
- try:
- converters[dialect] = opencc.OpenCC(dialect)
- except:
- pass
-
-
-def ngu_dialect_to_ipa(text, dialect):
- dialect = dialects[dialect]
- text = converters[dialect].convert(text).replace('-','').replace('$',' ')
- text = re.sub(r'[、;:]', ',', text)
- text = re.sub(r'\s*,\s*', ', ', text)
- text = re.sub(r'\s*。\s*', '. ', text)
-    text = re.sub(r'\s*？\s*', '? ', text)
-    text = re.sub(r'\s*！\s*', '! ', text)
- text = re.sub(r'\s*$', '', text)
- return text
diff --git a/spaces/XzJosh/Azuma-Bert-VITS2/text/english.py b/spaces/XzJosh/Azuma-Bert-VITS2/text/english.py
deleted file mode 100644
index 781d0a56cef71f66fc67db51d76538be90d3ddd2..0000000000000000000000000000000000000000
--- a/spaces/XzJosh/Azuma-Bert-VITS2/text/english.py
+++ /dev/null
@@ -1,138 +0,0 @@
-import pickle
-import os
-import re
-from g2p_en import G2p
-from string import punctuation
-
-from text import symbols
-
-current_file_path = os.path.dirname(__file__)
-CMU_DICT_PATH = os.path.join(current_file_path, 'cmudict.rep')
-CACHE_PATH = os.path.join(current_file_path, 'cmudict_cache.pickle')
-_g2p = G2p()
-
-arpa = {'AH0', 'S', 'AH1', 'EY2', 'AE2', 'EH0', 'OW2', 'UH0', 'NG', 'B', 'G', 'AY0', 'M', 'AA0', 'F', 'AO0', 'ER2', 'UH1', 'IY1', 'AH2', 'DH', 'IY0', 'EY1', 'IH0', 'K', 'N', 'W', 'IY2', 'T', 'AA1', 'ER1', 'EH2', 'OY0', 'UH2', 'UW1', 'Z', 'AW2', 'AW1', 'V', 'UW2', 'AA2', 'ER', 'AW0', 'UW0', 'R', 'OW1', 'EH1', 'ZH', 'AE0', 'IH2', 'IH', 'Y', 'JH', 'P', 'AY1', 'EY0', 'OY2', 'TH', 'HH', 'D', 'ER0', 'CH', 'AO1', 'AE1', 'AO2', 'OY1', 'AY2', 'IH1', 'OW0', 'L', 'SH'}
-
-
-def post_replace_ph(ph):
- rep_map = {
- ':': ',',
- ';': ',',
- ',': ',',
- '。': '.',
- '!': '!',
- '?': '?',
- '\n': '.',
- "·": ",",
- '、': ",",
- '...': '…',
- 'v': "V"
- }
- if ph in rep_map.keys():
- ph = rep_map[ph]
- if ph in symbols:
- return ph
- if ph not in symbols:
- ph = 'UNK'
- return ph
-
-def read_dict():
- g2p_dict = {}
- start_line = 49
- with open(CMU_DICT_PATH) as f:
- line = f.readline()
- line_index = 1
- while line:
- if line_index >= start_line:
- line = line.strip()
- word_split = line.split(' ')
- word = word_split[0]
-
- syllable_split = word_split[1].split(' - ')
- g2p_dict[word] = []
- for syllable in syllable_split:
- phone_split = syllable.split(' ')
- g2p_dict[word].append(phone_split)
-
- line_index = line_index + 1
- line = f.readline()
-
- return g2p_dict
-
-
-def cache_dict(g2p_dict, file_path):
- with open(file_path, 'wb') as pickle_file:
- pickle.dump(g2p_dict, pickle_file)
-
-
-def get_dict():
- if os.path.exists(CACHE_PATH):
- with open(CACHE_PATH, 'rb') as pickle_file:
- g2p_dict = pickle.load(pickle_file)
- else:
- g2p_dict = read_dict()
- cache_dict(g2p_dict, CACHE_PATH)
-
- return g2p_dict
-
-eng_dict = get_dict()
-
-def refine_ph(phn):
- tone = 0
- if re.search(r'\d$', phn):
- tone = int(phn[-1]) + 1
- phn = phn[:-1]
- return phn.lower(), tone
-
-def refine_syllables(syllables):
- tones = []
- phonemes = []
- for phn_list in syllables:
- for i in range(len(phn_list)):
- phn = phn_list[i]
- phn, tone = refine_ph(phn)
- phonemes.append(phn)
- tones.append(tone)
- return phonemes, tones
-
-
-def text_normalize(text):
- # todo: eng text normalize
- return text
-
-def g2p(text):
-
- phones = []
- tones = []
- words = re.split(r"([,;.\-\?\!\s+])", text)
- for w in words:
- if w.upper() in eng_dict:
- phns, tns = refine_syllables(eng_dict[w.upper()])
- phones += phns
- tones += tns
- else:
- phone_list = list(filter(lambda p: p != " ", _g2p(w)))
- for ph in phone_list:
- if ph in arpa:
- ph, tn = refine_ph(ph)
- phones.append(ph)
- tones.append(tn)
- else:
- phones.append(ph)
- tones.append(0)
- # todo: implement word2ph
- word2ph = [1 for i in phones]
-
- phones = [post_replace_ph(i) for i in phones]
- return phones, tones, word2ph
-
-if __name__ == "__main__":
- # print(get_dict())
- # print(eng_word_to_phoneme("hello"))
- print(g2p("In this paper, we propose 1 DSPGAN, a GAN-based universal vocoder."))
- # all_phones = set()
- # for k, syllables in eng_dict.items():
- # for group in syllables:
- # for ph in group:
- # all_phones.add(ph)
- # print(all_phones)
\ No newline at end of file
diff --git a/spaces/XzJosh/Nana7mi-Bert-VITS2/attentions.py b/spaces/XzJosh/Nana7mi-Bert-VITS2/attentions.py
deleted file mode 100644
index 1192dd7268c20c11010e73a6017ed09549695afe..0000000000000000000000000000000000000000
--- a/spaces/XzJosh/Nana7mi-Bert-VITS2/attentions.py
+++ /dev/null
@@ -1,344 +0,0 @@
-import copy
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import commons
-import logging
-
-logger = logging.getLogger(__name__)
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
-
-
-
-@torch.jit.script
-def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
- n_channels_int = n_channels[0]
- in_act = input_a + input_b
- t_act = torch.tanh(in_act[:, :n_channels_int, :])
- s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
- acts = t_act * s_act
- return acts
-
-class Encoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, isflow = True, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.window_size = window_size
- #if isflow:
- # cond_layer = torch.nn.Conv1d(256, 2*hidden_channels*n_layers, 1)
- # self.cond_pre = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, 1)
- # self.cond_layer = weight_norm(cond_layer, name='weight')
- # self.gin_channels = 256
- self.cond_layer_idx = self.n_layers
- if 'gin_channels' in kwargs:
- self.gin_channels = kwargs['gin_channels']
- if self.gin_channels != 0:
- self.spk_emb_linear = nn.Linear(self.gin_channels, self.hidden_channels)
- # vits2 says 3rd block, so idx is 2 by default
- self.cond_layer_idx = kwargs['cond_layer_idx'] if 'cond_layer_idx' in kwargs else 2
-        logger.debug("gin_channels: %s, cond_layer_idx: %s", self.gin_channels, self.cond_layer_idx)
- assert self.cond_layer_idx < self.n_layers, 'cond_layer_idx should be less than n_layers'
- self.drop = nn.Dropout(p_dropout)
- self.attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
- def forward(self, x, x_mask, g=None):
- attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- if i == self.cond_layer_idx and g is not None:
- g = self.spk_emb_linear(g.transpose(1, 2))
- g = g.transpose(1, 2)
- x = x + g
- x = x * x_mask
- y = self.attn_layers[i](x, x, attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class Decoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
-
- self.drop = nn.Dropout(p_dropout)
- self.self_attn_layers = nn.ModuleList()
- self.norm_layers_0 = nn.ModuleList()
- self.encdec_attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init))
- self.norm_layers_0.append(LayerNorm(hidden_channels))
- self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask, h, h_mask):
- """
- x: decoder input
- h: encoder output
- """
- self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype)
- encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.self_attn_layers[i](x, x, self_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_0[i](x + y)
-
- y = self.encdec_attn_layers[i](x, h, encdec_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class MultiHeadAttention(nn.Module):
- def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False):
- super().__init__()
- assert channels % n_heads == 0
-
- self.channels = channels
- self.out_channels = out_channels
- self.n_heads = n_heads
- self.p_dropout = p_dropout
- self.window_size = window_size
- self.heads_share = heads_share
- self.block_length = block_length
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
- self.attn = None
-
- self.k_channels = channels // n_heads
- self.conv_q = nn.Conv1d(channels, channels, 1)
- self.conv_k = nn.Conv1d(channels, channels, 1)
- self.conv_v = nn.Conv1d(channels, channels, 1)
- self.conv_o = nn.Conv1d(channels, out_channels, 1)
- self.drop = nn.Dropout(p_dropout)
-
- if window_size is not None:
- n_heads_rel = 1 if heads_share else n_heads
- rel_stddev = self.k_channels**-0.5
- self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
- self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
-
- nn.init.xavier_uniform_(self.conv_q.weight)
- nn.init.xavier_uniform_(self.conv_k.weight)
- nn.init.xavier_uniform_(self.conv_v.weight)
- if proximal_init:
- with torch.no_grad():
- self.conv_k.weight.copy_(self.conv_q.weight)
- self.conv_k.bias.copy_(self.conv_q.bias)
-
- def forward(self, x, c, attn_mask=None):
- q = self.conv_q(x)
- k = self.conv_k(c)
- v = self.conv_v(c)
-
- x, self.attn = self.attention(q, k, v, mask=attn_mask)
-
- x = self.conv_o(x)
- return x
-
- def attention(self, query, key, value, mask=None):
- # reshape [b, d, t] -> [b, n_h, t, d_k]
- b, d, t_s, t_t = (*key.size(), query.size(2))
- query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
- key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
- value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-
- scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
- if self.window_size is not None:
- assert t_s == t_t, "Relative attention is only available for self-attention."
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
- rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings)
- scores_local = self._relative_position_to_absolute_position(rel_logits)
- scores = scores + scores_local
- if self.proximal_bias:
- assert t_s == t_t, "Proximal bias is only available for self-attention."
- scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype)
- if mask is not None:
- scores = scores.masked_fill(mask == 0, -1e4)
- if self.block_length is not None:
- assert t_s == t_t, "Local attention is only available for self-attention."
- block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length)
- scores = scores.masked_fill(block_mask == 0, -1e4)
- p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s]
- p_attn = self.drop(p_attn)
- output = torch.matmul(p_attn, value)
- if self.window_size is not None:
- relative_weights = self._absolute_position_to_relative_position(p_attn)
- value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s)
- output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings)
- output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t]
- return output, p_attn
-
- def _matmul_with_relative_values(self, x, y):
- """
- x: [b, h, l, m]
- y: [h or 1, m, d]
- ret: [b, h, l, d]
- """
- ret = torch.matmul(x, y.unsqueeze(0))
- return ret
-
- def _matmul_with_relative_keys(self, x, y):
- """
- x: [b, h, l, d]
- y: [h or 1, m, d]
- ret: [b, h, l, m]
- """
- ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
- return ret
-
- def _get_relative_embeddings(self, relative_embeddings, length):
- max_relative_position = 2 * self.window_size + 1
- # Pad first before slice to avoid using cond ops.
- pad_length = max(length - (self.window_size + 1), 0)
- slice_start_position = max((self.window_size + 1) - length, 0)
- slice_end_position = slice_start_position + 2 * length - 1
- if pad_length > 0:
- padded_relative_embeddings = F.pad(
- relative_embeddings,
- commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]))
- else:
- padded_relative_embeddings = relative_embeddings
- used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position]
- return used_relative_embeddings
-
- def _relative_position_to_absolute_position(self, x):
- """
- x: [b, h, l, 2*l-1]
- ret: [b, h, l, l]
- """
- batch, heads, length, _ = x.size()
- # Concat columns of pad to shift from relative to absolute indexing.
- x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]]))
-
- # Concat extra elements so to add up to shape (len+1, 2*len-1).
- x_flat = x.view([batch, heads, length * 2 * length])
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]]))
-
- # Reshape and slice out the padded elements.
- x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:]
- return x_final
-
- def _absolute_position_to_relative_position(self, x):
- """
- x: [b, h, l, l]
- ret: [b, h, l, 2*l-1]
- """
- batch, heads, length, _ = x.size()
-    # pad along column
- x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]]))
- x_flat = x.view([batch, heads, length**2 + length*(length -1)])
- # add 0's in the beginning that will skew the elements after reshape
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
- x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:]
- return x_final
-
- def _attention_bias_proximal(self, length):
- """Bias for self-attention to encourage attention to close positions.
- Args:
- length: an integer scalar.
- Returns:
- a Tensor with shape [1, 1, length, length]
- """
- r = torch.arange(length, dtype=torch.float32)
- diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
- return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
-
-
-class FFN(nn.Module):
- def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.activation = activation
- self.causal = causal
-
- if causal:
- self.padding = self._causal_padding
- else:
- self.padding = self._same_padding
-
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
- self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
- self.drop = nn.Dropout(p_dropout)
-
- def forward(self, x, x_mask):
- x = self.conv_1(self.padding(x * x_mask))
- if self.activation == "gelu":
- x = x * torch.sigmoid(1.702 * x)
- else:
- x = torch.relu(x)
- x = self.drop(x)
- x = self.conv_2(self.padding(x * x_mask))
- return x * x_mask
-
- def _causal_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = self.kernel_size - 1
- pad_r = 0
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
-
- def _same_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = (self.kernel_size - 1) // 2
- pad_r = self.kernel_size // 2
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
diff --git a/spaces/YONG627/456123/yolov5-code-main/utils/aws/userdata.sh b/spaces/YONG627/456123/yolov5-code-main/utils/aws/userdata.sh
deleted file mode 100644
index 5fc1332ac1b0d1794cf8f8c5f6918059ae5dc381..0000000000000000000000000000000000000000
--- a/spaces/YONG627/456123/yolov5-code-main/utils/aws/userdata.sh
+++ /dev/null
@@ -1,27 +0,0 @@
-#!/bin/bash
-# AWS EC2 instance startup script https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
-# This script will run only once on first instance start (for a re-start script see mime.sh)
-# /home/ubuntu (ubuntu) or /home/ec2-user (amazon-linux) is working dir
-# Use >300 GB SSD
-
-cd home/ubuntu
-if [ ! -d yolov5 ]; then
- echo "Running first-time script." # install dependencies, download COCO, pull Docker
- git clone https://github.com/ultralytics/yolov5 -b master && sudo chmod -R 777 yolov5
- cd yolov5
- bash data/scripts/get_coco.sh && echo "COCO done." &
- sudo docker pull ultralytics/yolov5:latest && echo "Docker done." &
- python -m pip install --upgrade pip && pip install -r requirements.txt && python detect.py && echo "Requirements done." &
- wait && echo "All tasks done." # finish background tasks
-else
- echo "Running re-start script." # resume interrupted runs
- i=0
- list=$(sudo docker ps -qa) # container list i.e. $'one\ntwo\nthree\nfour'
- while IFS= read -r id; do
- ((i++))
- echo "restarting container $i: $id"
- sudo docker start $id
- # sudo docker exec -it $id python train.py --resume # single-GPU
- sudo docker exec -d $id python utils/aws/resume.py # multi-scenario
- done <<<"$list"
-fi
diff --git a/spaces/YoHoCo0o0/Gradio/app.py b/spaces/YoHoCo0o0/Gradio/app.py
deleted file mode 100644
index cc4cec8a89febc1ec50e208c6447562bb957315e..0000000000000000000000000000000000000000
--- a/spaces/YoHoCo0o0/Gradio/app.py
+++ /dev/null
@@ -1,12 +0,0 @@
-import gradio as gr
-from gradio.mix import Parallel
-
-title="My First Text Generator"
-description="Input Text."
-
-
-model1=gr.Interface.load("huggingface/EleutherAI/gpt-j-6B")
-model2=gr.Interface.load("huggingface/gpt2")
-model3=gr.Interface.load("huggingface/EleutherAI/gpt-neo-125M")
-
-Parallel(model1, model2, model3, title=title, description=description).launch()
diff --git a/spaces/abhibisht89/Donut_DocVQA/README.md b/spaces/abhibisht89/Donut_DocVQA/README.md
deleted file mode 100644
index 180f0d1841e62679fc97ff3a26e53fe34da7fa24..0000000000000000000000000000000000000000
--- a/spaces/abhibisht89/Donut_DocVQA/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Donut DocVQA
-emoji: 🍩
-colorFrom: gray
-colorTo: pink
-sdk: gradio
-sdk_version: 3.1.4
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/configs/_base_/models/gcnet_r50-d8.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/configs/_base_/models/gcnet_r50-d8.py
deleted file mode 100644
index 3d2ad69f5c22adfe79d5fdabf920217628987166..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer/configs/_base_/models/gcnet_r50-d8.py
+++ /dev/null
@@ -1,46 +0,0 @@
-# model settings
-norm_cfg = dict(type='SyncBN', requires_grad=True)
-model = dict(
- type='EncoderDecoder',
- pretrained='open-mmlab://resnet50_v1c',
- backbone=dict(
- type='ResNetV1c',
- depth=50,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- dilations=(1, 1, 2, 4),
- strides=(1, 2, 1, 1),
- norm_cfg=norm_cfg,
- norm_eval=False,
- style='pytorch',
- contract_dilation=True),
- decode_head=dict(
- type='GCHead',
- in_channels=2048,
- in_index=3,
- channels=512,
- ratio=1 / 4.,
- pooling_type='att',
- fusion_types=('channel_add', ),
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
- auxiliary_head=dict(
- type='FCNHead',
- in_channels=1024,
- in_index=2,
- channels=256,
- num_convs=1,
- concat_input=False,
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
- # model training and testing settings
- train_cfg=dict(),
- test_cfg=dict(mode='whole'))
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/detectors/nasfcos.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/detectors/nasfcos.py
deleted file mode 100644
index fb0148351546f45a451ef5f7a2a9ef4024e85b7c..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/detectors/nasfcos.py
+++ /dev/null
@@ -1,20 +0,0 @@
-from ..builder import DETECTORS
-from .single_stage import SingleStageDetector
-
-
-@DETECTORS.register_module()
-class NASFCOS(SingleStageDetector):
- """NAS-FCOS: Fast Neural Architecture Search for Object Detection.
-
- https://arxiv.org/abs/1906.0442
- """
-
- def __init__(self,
- backbone,
- neck,
- bbox_head,
- train_cfg=None,
- test_cfg=None,
- pretrained=None):
- super(NASFCOS, self).__init__(backbone, neck, bbox_head, train_cfg,
- test_cfg, pretrained)
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/cnn/bricks/depthwise_separable_conv_module.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/cnn/bricks/depthwise_separable_conv_module.py
deleted file mode 100644
index 722d5d8d71f75486e2db3008907c4eadfca41d63..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/cnn/bricks/depthwise_separable_conv_module.py
+++ /dev/null
@@ -1,96 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import torch.nn as nn
-
-from .conv_module import ConvModule
-
-
-class DepthwiseSeparableConvModule(nn.Module):
- """Depthwise separable convolution module.
-
- See https://arxiv.org/pdf/1704.04861.pdf for details.
-
- This module can replace a ConvModule with the conv block replaced by two
-    conv blocks: a depthwise conv block and a pointwise conv block. The depthwise
- conv block contains depthwise-conv/norm/activation layers. The pointwise
- conv block contains pointwise-conv/norm/activation layers. It should be
- noted that there will be norm/activation layer in the depthwise conv block
- if `norm_cfg` and `act_cfg` are specified.
-
- Args:
- in_channels (int): Number of channels in the input feature map.
- Same as that in ``nn._ConvNd``.
- out_channels (int): Number of channels produced by the convolution.
- Same as that in ``nn._ConvNd``.
- kernel_size (int | tuple[int]): Size of the convolving kernel.
- Same as that in ``nn._ConvNd``.
- stride (int | tuple[int]): Stride of the convolution.
- Same as that in ``nn._ConvNd``. Default: 1.
- padding (int | tuple[int]): Zero-padding added to both sides of
- the input. Same as that in ``nn._ConvNd``. Default: 0.
- dilation (int | tuple[int]): Spacing between kernel elements.
- Same as that in ``nn._ConvNd``. Default: 1.
- norm_cfg (dict): Default norm config for both depthwise ConvModule and
- pointwise ConvModule. Default: None.
- act_cfg (dict): Default activation config for both depthwise ConvModule
- and pointwise ConvModule. Default: dict(type='ReLU').
- dw_norm_cfg (dict): Norm config of depthwise ConvModule. If it is
- 'default', it will be the same as `norm_cfg`. Default: 'default'.
- dw_act_cfg (dict): Activation config of depthwise ConvModule. If it is
- 'default', it will be the same as `act_cfg`. Default: 'default'.
- pw_norm_cfg (dict): Norm config of pointwise ConvModule. If it is
- 'default', it will be the same as `norm_cfg`. Default: 'default'.
- pw_act_cfg (dict): Activation config of pointwise ConvModule. If it is
- 'default', it will be the same as `act_cfg`. Default: 'default'.
- kwargs (optional): Other shared arguments for depthwise and pointwise
- ConvModule. See ConvModule for ref.
- """
-
- def __init__(self,
- in_channels,
- out_channels,
- kernel_size,
- stride=1,
- padding=0,
- dilation=1,
- norm_cfg=None,
- act_cfg=dict(type='ReLU'),
- dw_norm_cfg='default',
- dw_act_cfg='default',
- pw_norm_cfg='default',
- pw_act_cfg='default',
- **kwargs):
- super(DepthwiseSeparableConvModule, self).__init__()
- assert 'groups' not in kwargs, 'groups should not be specified'
-
- # if norm/activation config of depthwise/pointwise ConvModule is not
- # specified, use default config.
- dw_norm_cfg = dw_norm_cfg if dw_norm_cfg != 'default' else norm_cfg
- dw_act_cfg = dw_act_cfg if dw_act_cfg != 'default' else act_cfg
- pw_norm_cfg = pw_norm_cfg if pw_norm_cfg != 'default' else norm_cfg
- pw_act_cfg = pw_act_cfg if pw_act_cfg != 'default' else act_cfg
-
- # depthwise convolution
- self.depthwise_conv = ConvModule(
- in_channels,
- in_channels,
- kernel_size,
- stride=stride,
- padding=padding,
- dilation=dilation,
- groups=in_channels,
- norm_cfg=dw_norm_cfg,
- act_cfg=dw_act_cfg,
- **kwargs)
-
- self.pointwise_conv = ConvModule(
- in_channels,
- out_channels,
- 1,
- norm_cfg=pw_norm_cfg,
- act_cfg=pw_act_cfg,
- **kwargs)
-
- def forward(self, x):
- x = self.depthwise_conv(x)
- x = self.pointwise_conv(x)
- return x
diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/media/codecs/pyogg.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/media/codecs/pyogg.py
deleted file mode 100644
index 744d2708541b3f7e65055c3dc1eb47f25213d26c..0000000000000000000000000000000000000000
--- a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/media/codecs/pyogg.py
+++ /dev/null
@@ -1,474 +0,0 @@
-import pyogg
-
-import os.path
-import warnings
-
-from abc import abstractmethod
-from ctypes import c_void_p, POINTER, c_int, pointer, cast, c_char, c_char_p, CFUNCTYPE, c_ubyte
-from ctypes import memmove, create_string_buffer, byref
-
-from pyglet.media import StreamingSource
-from pyglet.media.codecs import AudioFormat, AudioData, MediaDecoder, StaticSource
-from pyglet.util import debug_print, DecodeException
-
-
-_debug = debug_print('Debug PyOgg codec')
-
-if _debug:
- if not pyogg.PYOGG_OGG_AVAIL and not pyogg.PYOGG_VORBIS_AVAIL and not pyogg.PYOGG_VORBIS_FILE_AVAIL:
- warnings.warn("PyOgg determined the ogg/vorbis libraries were not available.")
-
- if not pyogg.PYOGG_FLAC_AVAIL:
- warnings.warn("PyOgg determined the flac library was not available.")
-
- if not pyogg.PYOGG_OPUS_AVAIL and not pyogg.PYOGG_OPUS_FILE_AVAIL:
- warnings.warn("PyOgg determined the opus libraries were not available.")
-
-if not (
- pyogg.PYOGG_OGG_AVAIL and not pyogg.PYOGG_VORBIS_AVAIL and not pyogg.PYOGG_VORBIS_FILE_AVAIL) and (
- not pyogg.PYOGG_OPUS_AVAIL and not pyogg.PYOGG_OPUS_FILE_AVAIL) and not pyogg.PYOGG_FLAC_AVAIL:
- raise ImportError("PyOgg determined no supported libraries were found")
-
-# Some monkey patching PyOgg for FLAC.
-if pyogg.PYOGG_FLAC_AVAIL:
- # Original in PyOgg: FLAC__StreamDecoderEofCallback = CFUNCTYPE(FLAC__bool, POINTER(FLAC__StreamDecoder), c_void_p)
- # FLAC__bool is not valid for this return type (at least for ctypes). Needs to be an int or an error occurs.
- FLAC__StreamDecoderEofCallback = CFUNCTYPE(c_int, POINTER(pyogg.flac.FLAC__StreamDecoder), c_void_p)
-
- # Override explicits with c_void_p, so we can support non-seeking FLAC's (CFUNCTYPE does not accept None).
- pyogg.flac.libflac.FLAC__stream_decoder_init_stream.restype = pyogg.flac.FLAC__StreamDecoderInitStatus
- pyogg.flac.libflac.FLAC__stream_decoder_init_stream.argtypes = [POINTER(pyogg.flac.FLAC__StreamDecoder),
- pyogg.flac.FLAC__StreamDecoderReadCallback,
- c_void_p, # Seek
- c_void_p, # Tell
- c_void_p, # Length
- c_void_p, # EOF
- pyogg.flac.FLAC__StreamDecoderWriteCallback,
- pyogg.flac.FLAC__StreamDecoderMetadataCallback,
- pyogg.flac.FLAC__StreamDecoderErrorCallback,
- c_void_p]
-
-
- def metadata_callback(self, decoder, metadata, client_data):
- self.bits_per_sample = metadata.contents.data.stream_info.bits_per_sample # missing from pyogg
- self.total_samples = metadata.contents.data.stream_info.total_samples
- self.channels = metadata.contents.data.stream_info.channels
- self.frequency = metadata.contents.data.stream_info.sample_rate
-
-
- # Monkey patch metadata callback to include bits per sample as FLAC may rarely deviate from 16 bit.
- pyogg.FlacFileStream.metadata_callback = metadata_callback
-
-
-class MemoryVorbisObject:
- def __init__(self, file):
- self.file = file
-
- def read_func_cb(ptr, byte_size, size_to_read, datasource):
- data_size = size_to_read * byte_size
- data = self.file.read(data_size)
- read_size = len(data)
- memmove(ptr, data, read_size)
- return read_size
-
- def seek_func_cb(datasource, offset, whence):
- pos = self.file.seek(offset, whence)
- return pos
-
- def close_func_cb(datasource):
- return 0
-
- def tell_func_cb(datasource):
- return self.file.tell()
-
- self.read_func = pyogg.vorbis.read_func(read_func_cb)
- self.seek_func = pyogg.vorbis.seek_func(seek_func_cb)
- self.close_func = pyogg.vorbis.close_func(close_func_cb)
- self.tell_func = pyogg.vorbis.tell_func(tell_func_cb)
-
- self.callbacks = pyogg.vorbis.ov_callbacks(self.read_func, self.seek_func, self.close_func, self.tell_func)
-
-
-class UnclosedVorbisFileStream(pyogg.VorbisFileStream):
- def __del__(self):
- if self.exists:
- pyogg.vorbis.ov_clear(byref(self.vf))
- self.exists = False
-
- def clean_up(self):
- """PyOgg calls clean_up on end of data. We may want to loop a sound or replay. Prevent this.
- Rely on GC (__del__) to clean up objects instead.
- """
- return
-
-
-class UnclosedOpusFileStream(pyogg.OpusFileStream):
- def __del__(self):
- self.ptr.contents.value = self.ptr_init
-
- del self.ptr
-
- if self.of:
- pyogg.opus.op_free(self.of)
-
- def clean_up(self):
- pass
-
-
-class MemoryOpusObject:
- def __init__(self, filename, file):
- self.file = file
- self.filename = filename
-
- def read_func_cb(stream, buffer, size):
- data = self.file.read(size)
- read_size = len(data)
- memmove(buffer, data, read_size)
- return read_size
-
- def seek_func_cb(stream, offset, whence):
- self.file.seek(offset, whence)
- return 0
-
- def tell_func_cb(stream):
- pos = self.file.tell()
- return pos
-
- def close_func_cb(stream):
- return 0
-
- self.read_func = pyogg.opus.op_read_func(read_func_cb)
- self.seek_func = pyogg.opus.op_seek_func(seek_func_cb)
- self.tell_func = pyogg.opus.op_tell_func(tell_func_cb)
- self.close_func = pyogg.opus.op_close_func(close_func_cb)
-
- self.callbacks = pyogg.opus.OpusFileCallbacks(self.read_func, self.seek_func, self.tell_func, self.close_func)
-
-
-class MemoryOpusFileStream(UnclosedOpusFileStream):
- def __init__(self, filename, file):
- self.file = file
-
- self.memory_object = MemoryOpusObject(filename, file)
-
- self._dummy_fileobj = c_void_p()
-
- error = c_int()
-
- self.read_buffer = create_string_buffer(pyogg.PYOGG_STREAM_BUFFER_SIZE)
-
- self.ptr_buffer = cast(self.read_buffer, POINTER(c_ubyte))
-
- self.of = pyogg.opus.op_open_callbacks(
- self._dummy_fileobj,
- byref(self.memory_object.callbacks),
- self.ptr_buffer,
- 0, # Start length
- byref(error)
- )
-
- if error.value != 0:
- raise DecodeException(
- "file-like object: {} couldn't be processed. Error code : {}".format(filename, error.value))
-
- self.channels = pyogg.opus.op_channel_count(self.of, -1)
-
- self.pcm_size = pyogg.opus.op_pcm_total(self.of, -1)
-
- self.frequency = 48000
-
- self.bfarr_t = pyogg.opus.opus_int16 * (pyogg.PYOGG_STREAM_BUFFER_SIZE * self.channels * 2)
-
- self.buffer = cast(pointer(self.bfarr_t()), pyogg.opus.opus_int16_p)
-
- self.ptr = cast(pointer(self.buffer), POINTER(c_void_p))
-
- self.ptr_init = self.ptr.contents.value
-
-
-class MemoryVorbisFileStream(UnclosedVorbisFileStream):
- def __init__(self, path, file):
- buff = create_string_buffer(pyogg.PYOGG_STREAM_BUFFER_SIZE)
-
- self.vf = pyogg.vorbis.OggVorbis_File()
- self.memory_object = MemoryVorbisObject(file)
-
- error = pyogg.vorbis.libvorbisfile.ov_open_callbacks(buff, self.vf, None, 0, self.memory_object.callbacks)
- if error != 0:
- raise DecodeException("file couldn't be opened or doesn't exist. Error code : {}".format(error))
-
- info = pyogg.vorbis.ov_info(byref(self.vf), -1)
-
- self.channels = info.contents.channels
-
- self.frequency = info.contents.rate
-
- array = (c_char * (pyogg.PYOGG_STREAM_BUFFER_SIZE * self.channels))()
-
- self.buffer_ = cast(pointer(array), c_char_p)
-
- self.bitstream = c_int()
- self.bitstream_pointer = pointer(self.bitstream)
-
- self.exists = True
-
-
-class UnclosedFLACFileStream(pyogg.FlacFileStream):
- def __init__(self, *args, **kw):
- super().__init__(*args, **kw)
- self.seekable = True
-
- def __del__(self):
- if self.decoder:
- pyogg.flac.FLAC__stream_decoder_finish(self.decoder)
-
-
-class MemoryFLACFileStream(UnclosedFLACFileStream):
- def __init__(self, path, file):
- self.file = file
-
- self.file_size = 0
-
- if getattr(self.file, 'seek', None) and getattr(self.file, 'tell', None):
- self.seekable = True
- self.file.seek(0, 2)
- self.file_size = self.file.tell()
- self.file.seek(0)
- else:
- warnings.warn(f"Warning: {file} file object is not seekable.")
- self.seekable = False
-
- self.decoder = pyogg.flac.FLAC__stream_decoder_new()
-
- self.client_data = c_void_p()
-
- self.channels = None
-
- self.frequency = None
-
- self.total_samples = None
-
- self.buffer = None
-
- self.bytes_written = None
-
- self.write_callback_ = pyogg.flac.FLAC__StreamDecoderWriteCallback(self.write_callback)
- self.metadata_callback_ = pyogg.flac.FLAC__StreamDecoderMetadataCallback(self.metadata_callback)
- self.error_callback_ = pyogg.flac.FLAC__StreamDecoderErrorCallback(self.error_callback)
- self.read_callback_ = pyogg.flac.FLAC__StreamDecoderReadCallback(self.read_callback)
-
- if self.seekable:
- self.seek_callback_ = pyogg.flac.FLAC__StreamDecoderSeekCallback(self.seek_callback)
- self.tell_callback_ = pyogg.flac.FLAC__StreamDecoderTellCallback(self.tell_callback)
- self.length_callback_ = pyogg.flac.FLAC__StreamDecoderLengthCallback(self.length_callback)
- self.eof_callback_ = FLAC__StreamDecoderEofCallback(self.eof_callback)
- else:
- self.seek_callback_ = None
- self.tell_callback_ = None
- self.length_callback_ = None
- self.eof_callback_ = None
-
- init_status = pyogg.flac.libflac.FLAC__stream_decoder_init_stream(
- self.decoder,
- self.read_callback_,
- self.seek_callback_,
- self.tell_callback_,
- self.length_callback_,
- self.eof_callback_,
- self.write_callback_,
- self.metadata_callback_,
- self.error_callback_,
- self.client_data
- )
-
- if init_status: # error
- raise DecodeException("An error occurred when trying to open '{}': {}".format(
- path, pyogg.flac.FLAC__StreamDecoderInitStatusEnum[init_status]))
-
- metadata_status = pyogg.flac.FLAC__stream_decoder_process_until_end_of_metadata(self.decoder)
- if not metadata_status: # error
-            raise DecodeException("An error occurred when trying to decode the metadata of {}".format(path))
-
- def read_callback(self, decoder, buffer, size, data):
- chunk = size.contents.value
- data = self.file.read(chunk)
- read_size = len(data)
- memmove(buffer, data, read_size)
-
- size.contents.value = read_size
-
- if read_size > 0:
- return 0 # FLAC__STREAM_DECODER_READ_STATUS_CONTINUE
- elif read_size == 0:
- return 1 # FLAC__STREAM_DECODER_READ_STATUS_END_OF_STREAM
- else:
- return 2 # FLAC__STREAM_DECODER_READ_STATUS_ABORT
-
- def seek_callback(self, decoder, offset, data):
- pos = self.file.seek(offset, 0)
- if pos < 0:
- return 1 # FLAC__STREAM_DECODER_SEEK_STATUS_ERROR
- else:
- return 0 # FLAC__STREAM_DECODER_SEEK_STATUS_OK
-
- def tell_callback(self, decoder, offset, data):
- """Decoder wants to know the current position of the file stream."""
- pos = self.file.tell()
- if pos < 0:
- return 1 # FLAC__STREAM_DECODER_TELL_STATUS_ERROR
- else:
- offset.contents.value = pos
- return 0 # FLAC__STREAM_DECODER_TELL_STATUS_OK
-
- def length_callback(self, decoder, length, data):
- """Decoder wants to know the total length of the stream."""
- if self.file_size == 0:
- return 1 # FLAC__STREAM_DECODER_LENGTH_STATUS_ERROR
- else:
- length.contents.value = self.file_size
- return 0 # FLAC__STREAM_DECODER_LENGTH_STATUS_OK
-
- def eof_callback(self, decoder, data):
- return self.file.tell() >= self.file_size
-
-
-class PyOggSource(StreamingSource):
- def __init__(self, filename, file):
- self.filename = filename
- self.file = file
- self._stream = None
- self.sample_size = 16
-
- self._load_source()
-
- self.audio_format = AudioFormat(channels=self._stream.channels, sample_size=self.sample_size,
- sample_rate=self._stream.frequency)
-
- @abstractmethod
- def _load_source(self):
- pass
-
- def get_audio_data(self, num_bytes, compensation_time=0.0):
- """Data returns as c_short_array instead of LP_c_char or c_ubyte, cast each buffer."""
- data = self._stream.get_buffer() # Returns buffer, length or None
- if data is not None:
- buff, length = data
- buff_char_p = cast(buff, POINTER(c_char))
- return AudioData(buff_char_p[:length], length, 1000, 1000, [])
-
- return None
-
- def __del__(self):
- if self._stream:
- del self._stream
-
-
-class PyOggFLACSource(PyOggSource):
-
- def _load_source(self):
- if self.file:
- self._stream = MemoryFLACFileStream(self.filename, self.file)
- else:
- self._stream = UnclosedFLACFileStream(self.filename)
-
- self.sample_size = self._stream.bits_per_sample
- self._duration = self._stream.total_samples / self._stream.frequency
-
- # Unknown amount of samples. May occur in some sources.
- if self._stream.total_samples == 0:
- if _debug:
- warnings.warn(f"Unknown amount of samples found in {self.filename}. Seeking may be limited.")
- self._duration_per_frame = 0
- else:
- self._duration_per_frame = self._duration / self._stream.total_samples
-
- def seek(self, timestamp):
- if self._stream.seekable:
- # Convert sample to seconds.
- if self._duration_per_frame:
- timestamp = max(0.0, min(timestamp, self._duration))
- position = int(timestamp / self._duration_per_frame)
- else: # If we have no duration, we cannot reliably seek. However, 0.0 is still required to play and loop.
- position = 0
- seek_succeeded = pyogg.flac.FLAC__stream_decoder_seek_absolute(self._stream.decoder, position)
- if seek_succeeded is False:
- warnings.warn(f"Failed to seek FLAC file: {self.filename}")
- else:
- warnings.warn(f"Stream is not seekable for FLAC file: {self.filename}.")
-
-
-class PyOggVorbisSource(PyOggSource):
-
- def _load_source(self):
- if self.file:
- self._stream = MemoryVorbisFileStream(self.filename, self.file)
- else:
- self._stream = UnclosedVorbisFileStream(self.filename)
-
- self._duration = pyogg.vorbis.libvorbisfile.ov_time_total(byref(self._stream.vf), -1)
-
- def get_audio_data(self, num_bytes, compensation_time=0.0):
- data = self._stream.get_buffer() # Returns buffer, length or None
-
- if data is not None:
- return AudioData(*data, 1000, 1000, [])
-
- return None
-
- def seek(self, timestamp):
- seek_succeeded = pyogg.vorbis.ov_time_seek(self._stream.vf, timestamp)
- if seek_succeeded != 0:
- if _debug:
- warnings.warn(f"Failed to seek file {self.filename} - {seek_succeeded}")
-
-
-class PyOggOpusSource(PyOggSource):
- def _load_source(self):
- if self.file:
- self._stream = MemoryOpusFileStream(self.filename, self.file)
- else:
- self._stream = UnclosedOpusFileStream(self.filename)
-
- self._duration = self._stream.pcm_size / self._stream.frequency
- self._duration_per_frame = self._duration / self._stream.pcm_size
-
- def seek(self, timestamp):
- timestamp = max(0.0, min(timestamp, self._duration))
- position = int(timestamp / self._duration_per_frame)
- error = pyogg.opus.op_pcm_seek(self._stream.of, position)
- if error:
- warnings.warn(f"Opus stream could not seek properly {error}.")
-
-
-class PyOggDecoder(MediaDecoder):
- vorbis_exts = ('.ogg',) if pyogg.PYOGG_OGG_AVAIL and pyogg.PYOGG_VORBIS_AVAIL and pyogg.PYOGG_VORBIS_FILE_AVAIL else ()
- flac_exts = ('.flac',) if pyogg.PYOGG_FLAC_AVAIL else ()
- opus_exts = ('.opus',) if pyogg.PYOGG_OPUS_AVAIL and pyogg.PYOGG_OPUS_FILE_AVAIL else ()
- exts = vorbis_exts + flac_exts + opus_exts
-
- def get_file_extensions(self):
- return PyOggDecoder.exts
-
- def decode(self, filename, file, streaming=True):
- name, ext = os.path.splitext(filename)
- if ext in PyOggDecoder.vorbis_exts:
- source = PyOggVorbisSource
- elif ext in PyOggDecoder.flac_exts:
- source = PyOggFLACSource
- elif ext in PyOggDecoder.opus_exts:
- source = PyOggOpusSource
- else:
- raise DecodeException("Decoder could not find a suitable source to use with this filetype.")
-
- if streaming:
- return source(filename, file)
- else:
- return StaticSource(source(filename, file))
-
-
-def get_decoders():
- return [PyOggDecoder()]
-
-
-def get_encoders():
- return []
diff --git a/spaces/ajndkr/boilerplate-x/README.md b/spaces/ajndkr/boilerplate-x/README.md
deleted file mode 100644
index 37abf8fd3faaf50985bdc01a60453c34c20af196..0000000000000000000000000000000000000000
--- a/spaces/ajndkr/boilerplate-x/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Boilerplate X
-emoji: 🧱
-colorFrom: pink
-colorTo: blue
-sdk: gradio
-sdk_version: 3.23.0
-python_version: 3.9
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/akhaliq/SummerTime/dataset/non_huggingface_datasets_builders/qmsum.py b/spaces/akhaliq/SummerTime/dataset/non_huggingface_datasets_builders/qmsum.py
deleted file mode 100644
index 7d030c69495fcf1ee1b1b8dca1a56b95c39ca299..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/SummerTime/dataset/non_huggingface_datasets_builders/qmsum.py
+++ /dev/null
@@ -1,119 +0,0 @@
-import os
-import json
-import datasets
-
-
-"""QMsum dataset."""
-
-
-_CITATION = """
-@inproceedings{zhong2021qmsum,
- title={{QMS}um: {A} {N}ew {B}enchmark for {Q}uery-based {M}ulti-domain {M}eeting {S}ummarization},
- author={Zhong, Ming and Yin, Da and Yu, Tao and Zaidi, Ahmad and Mutuma, Mutethia and Jha, Rahul and Hassan Awadallah, Ahmed and Celikyilmaz, Asli and Liu, Yang and Qiu, Xipeng and Radev, Dragomir},
- booktitle={North American Association for Computational Linguistics (NAACL)},
- year={2021}
-}
-"""
-
-_DESCRIPTION = """
-QMSum is a new human-annotated benchmark for query-based multi-domain meeting summarization task, \
-which consists of 1,808 query-summary pairs over 232 meetings in multiple domains.
-"""
-
-_HOMEPAGE = "https://github.com/Yale-LILY/QMSum"
-
-_BASE_URL = "https://raw.githubusercontent.com/Yale-LILY/QMSum/main/data/ALL/jsonl"
-_URLs = {
- "train": _BASE_URL + "/train.jsonl",
- "val": _BASE_URL + "/val.jsonl",
- "test": _BASE_URL + "/test.jsonl",
-}
-
-
-class SummertimeQmsum(datasets.GeneratorBasedBuilder):
- """QMsum dataset."""
-
- VERSION = datasets.Version("1.0.0")
-
- BUILDER_CONFIGS = [
- datasets.BuilderConfig(),
- ]
-
- def _info(self):
- features = datasets.Features(
- {
- "entry_number": datasets.Value("string"),
- "meeting_transcripts": [
- {
- "speaker": datasets.Value("string"),
- "content": datasets.Value("string"),
- }
- ],
- "general_query_list": [
- {
- "query": datasets.Value("string"),
- "answer": datasets.Value("string"),
- }
- ],
- "specific_query_list": [
- {
- "query": datasets.Value("string"),
- "answer": datasets.Value("string"),
- "relevant_text_span": [[datasets.Value("string")]],
- }
- ],
- }
- )
- return datasets.DatasetInfo(
- description=_DESCRIPTION,
- features=features,
- supervised_keys=None,
- homepage=_HOMEPAGE,
- license=None,
- citation=_CITATION,
- )
-
- def _split_generators(self, dl_manager):
- """Returns SplitGenerators."""
- my_urls = _URLs
- downloaded_files = dl_manager.download_and_extract(my_urls)
-
- trainpath = downloaded_files["train"]
- valpath = downloaded_files["val"]
- testpath = downloaded_files["test"]
-
- return [
- datasets.SplitGenerator(
- name=datasets.Split.TRAIN,
- # These kwargs will be passed to _generate_examples
- gen_kwargs={"filepath": trainpath, "split": "train"},
- ),
- datasets.SplitGenerator(
- name=datasets.Split.VALIDATION,
- # These kwargs will be passed to _generate_examples
- gen_kwargs={"filepath": valpath, "split": "val"},
- ),
- datasets.SplitGenerator(
- name=datasets.Split.TEST,
- # These kwargs will be passed to _generate_examples
- gen_kwargs={"filepath": testpath, "split": "test"},
- ),
- ]
-
- def _generate_examples(self, filepath, split):
- """Yields examples."""
-
- extraction_path = os.path.join(filepath)
-
- with open(extraction_path) as f:
- for i, line in enumerate(f):
-
- instance = json.loads(line)
-
- entry = {}
- entry["entry_number"] = split + "_" + str(i)
- entry["meeting_transcripts"] = instance["meeting_transcripts"]
- entry["general_query_list"] = instance["general_query_list"]
- entry["specific_query_list"] = instance["specific_query_list"]
-
- yield entry["entry_number"], entry
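-
-
-# Minimal usage sketch (added for illustration, not part of the original builder).
-# It assumes a `datasets` version that can load a local builder script by path;
-# loading will download the QMSum jsonl files from the GitHub URLs above.
-if __name__ == "__main__":
-    qmsum = datasets.load_dataset(__file__)
-    print(qmsum["train"][0]["entry_number"])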
diff --git a/spaces/akhaliq/SummerTime/model/single_doc/pegasus_model.py b/spaces/akhaliq/SummerTime/model/single_doc/pegasus_model.py
deleted file mode 100644
index 91580ad6a57386276ba443e51a472d9b2d982f9f..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/SummerTime/model/single_doc/pegasus_model.py
+++ /dev/null
@@ -1,50 +0,0 @@
-from transformers import PegasusForConditionalGeneration, PegasusTokenizer
-from .base_single_doc_model import SingleDocSummModel
-
-
-class PegasusModel(SingleDocSummModel):
- # static variables
- model_name = "Pegasus"
- is_extractive = False
- is_neural = True
-
- def __init__(self, device="cpu"):
- super(PegasusModel, self).__init__()
-
- self.device = device
- model_name = "google/pegasus-xsum"
- print("init load pretrained tokenizer")
- self.tokenizer = PegasusTokenizer.from_pretrained(model_name)
- print("init load pretrained model with tokenizer on " + device)
- # self.model = PegasusForConditionalGeneration.from_pretrained(model_name).to(device)
- self.model = PegasusForConditionalGeneration.from_pretrained(model_name)
-
- def summarize(self, corpus, queries=None):
- self.assert_summ_input_type(corpus, queries)
-
- print("batching")
- # batch = self.tokenizer(corpus, truncation=True, padding='longest', return_tensors="pt").to(self.device)
- batch = self.tokenizer(corpus, truncation=True, return_tensors="pt")
- print("encoding batches")
- # encoded_summaries = self.model.generate(**batch, max_length=40, max_time=120)
- encoded_summaries = self.model.generate(batch["input_ids"], max_time=1024)
- print("decoding batches")
- # summaries = self.tokenizer.batch_decode(encoded_summaries, skip_special_tokens=True)
- summaries = [self.tokenizer.decode(encoded_summaries[0])]
-
- return summaries
-
- @classmethod
- def show_capability(cls):
- basic_description = cls.generate_basic_description()
- more_details = (
- "Introduced in 2019, a large neural abstractive summarization model trained on web crawl and "
- "news data.\n "
- "Strengths: \n - High accuracy \n - Performs well on almost all kinds of non-literary written "
- "text \n "
- "Weaknesses: \n - High memory usage \n "
- "Initialization arguments: \n "
- "- `device = 'cpu'` specifies the device the model is stored on and uses for computation. "
- "Use `device='gpu'` to run on an Nvidia GPU."
- )
- print(f"{basic_description} \n {'#'*20} \n {more_details}")
diff --git a/spaces/aliabd/SummerTime/model/third_party/HMNet/DataLoader/infinibatch/infinibatch/torch/__init__.py b/spaces/aliabd/SummerTime/model/third_party/HMNet/DataLoader/infinibatch/infinibatch/torch/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/allknowingroger/Image-Models-Test17/README.md b/spaces/allknowingroger/Image-Models-Test17/README.md
deleted file mode 100644
index f402770f3fc89ce629138b7c09e25bab7ef656be..0000000000000000000000000000000000000000
--- a/spaces/allknowingroger/Image-Models-Test17/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: More Image Models
-emoji: 😻
-colorFrom: red
-colorTo: gray
-sdk: gradio
-sdk_version: 3.23.0
-app_file: app.py
-pinned: true
-duplicated_from: allknowingroger/Image-Models-Test16
----
-
-
\ No newline at end of file
diff --git a/spaces/allknowingroger/Image-Models-Test23/README.md b/spaces/allknowingroger/Image-Models-Test23/README.md
deleted file mode 100644
index 1d9c07a646dd3c12e24ce33244fa7b0f88f64a72..0000000000000000000000000000000000000000
--- a/spaces/allknowingroger/Image-Models-Test23/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: More Image Models
-emoji: 😻
-colorFrom: red
-colorTo: gray
-sdk: gradio
-sdk_version: 3.23.0
-app_file: app.py
-pinned: true
-duplicated_from: allknowingroger/Image-Models-Test22
----
-
-
\ No newline at end of file
diff --git a/spaces/antonovmaxim/text-generation-webui-space/text-generation-webui-main/docs/Low-VRAM-guide.md b/spaces/antonovmaxim/text-generation-webui-space/text-generation-webui-main/docs/Low-VRAM-guide.md
deleted file mode 100644
index 1dc86f9c7f764a886c454f7f76a2a89a77140655..0000000000000000000000000000000000000000
--- a/spaces/antonovmaxim/text-generation-webui-space/text-generation-webui-main/docs/Low-VRAM-guide.md
+++ /dev/null
@@ -1,51 +0,0 @@
-If your GPU is not large enough to fit a model, try these in the following order:
-
-### Load the model in 8-bit mode
-
-```
-python server.py --load-in-8bit
-```
-
-This reduces the memory usage by half with no noticeable loss in quality. Only newer GPUs support 8-bit mode.
-
-### Split the model across your GPU and CPU
-
-```
-python server.py --auto-devices
-```
-
-If you can load the model with this command but it runs out of memory when you try to generate text, try increasingly limiting the amount of memory allocated to the GPU until the error stops happening:
-
-```
-python server.py --auto-devices --gpu-memory 10
-python server.py --auto-devices --gpu-memory 9
-python server.py --auto-devices --gpu-memory 8
-...
-```
-
-where the number is in GiB.
-
-For finer control, you can also specify the unit in MiB explicitly:
-
-```
-python server.py --auto-devices --gpu-memory 8722MiB
-python server.py --auto-devices --gpu-memory 4725MiB
-python server.py --auto-devices --gpu-memory 3500MiB
-...
-```
-
-Additionally, you can pass the `--no-cache` flag to reduce GPU usage while generating text, at a performance cost. This may allow you to set a higher value for `--gpu-memory`, resulting in a net performance gain.
-
-### Send layers to a disk cache
-
-As a desperate last measure, you can split the model across your GPU, CPU, and disk:
-
-```
-python server.py --auto-devices --disk
-```
-
-With this, I am able to load a 30b model into my RTX 3090, but it takes 10 seconds to generate 1 word.
-
-### DeepSpeed (experimental)
-
-An experimental alternative to all of the above is to use DeepSpeed: [guide](DeepSpeed.md).
diff --git a/spaces/aodianyun/stable-diffusion-webui/scripts/img2imgalt.py b/spaces/aodianyun/stable-diffusion-webui/scripts/img2imgalt.py
deleted file mode 100644
index 65b61533929a018f0cb97a89266154bf569cd40e..0000000000000000000000000000000000000000
--- a/spaces/aodianyun/stable-diffusion-webui/scripts/img2imgalt.py
+++ /dev/null
@@ -1,216 +0,0 @@
-from collections import namedtuple
-
-import numpy as np
-from tqdm import trange
-
-import modules.scripts as scripts
-import gradio as gr
-
-from modules import processing, shared, sd_samplers, prompt_parser, sd_samplers_common
-from modules.processing import Processed
-from modules.shared import opts, cmd_opts, state
-
-import torch
-import k_diffusion as K
-
-from PIL import Image
-from torch import autocast
-from einops import rearrange, repeat
-
-
-def find_noise_for_image(p, cond, uncond, cfg_scale, steps):
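-    # Runs the Euler sampler in reverse (from low to high sigma) with classifier-free guidance
-    # to recover a noise tensor that, when sampled forward again, approximately reproduces p.init_latent.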
- x = p.init_latent
-
- s_in = x.new_ones([x.shape[0]])
- dnw = K.external.CompVisDenoiser(shared.sd_model)
- sigmas = dnw.get_sigmas(steps).flip(0)
-
- shared.state.sampling_steps = steps
-
- for i in trange(1, len(sigmas)):
- shared.state.sampling_step += 1
-
- x_in = torch.cat([x] * 2)
- sigma_in = torch.cat([sigmas[i] * s_in] * 2)
- cond_in = torch.cat([uncond, cond])
-
- image_conditioning = torch.cat([p.image_conditioning] * 2)
- cond_in = {"c_concat": [image_conditioning], "c_crossattn": [cond_in]}
-
- c_out, c_in = [K.utils.append_dims(k, x_in.ndim) for k in dnw.get_scalings(sigma_in)]
- t = dnw.sigma_to_t(sigma_in)
-
- eps = shared.sd_model.apply_model(x_in * c_in, t, cond=cond_in)
- denoised_uncond, denoised_cond = (x_in + eps * c_out).chunk(2)
-
- denoised = denoised_uncond + (denoised_cond - denoised_uncond) * cfg_scale
-
- d = (x - denoised) / sigmas[i]
- dt = sigmas[i] - sigmas[i - 1]
-
- x = x + d * dt
-
- sd_samplers_common.store_latent(x)
-
- # This shouldn't be necessary, but solved some VRAM issues
- del x_in, sigma_in, cond_in, c_out, c_in, t,
- del eps, denoised_uncond, denoised_cond, denoised, d, dt
-
- shared.state.nextjob()
-
- return x / x.std()
-
-
-Cached = namedtuple("Cached", ["noise", "cfg_scale", "steps", "latent", "original_prompt", "original_negative_prompt", "sigma_adjustment"])
-
-
-# Based on changes suggested by briansemrau in https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/736
-def find_noise_for_image_sigma_adjustment(p, cond, uncond, cfg_scale, steps):
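-    # Same reverse-sampling idea as find_noise_for_image, but the model is evaluated at the previous
-    # sigma and the first step is rescaled, which is intended to reconstruct the source image more faithfully.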
- x = p.init_latent
-
- s_in = x.new_ones([x.shape[0]])
- dnw = K.external.CompVisDenoiser(shared.sd_model)
- sigmas = dnw.get_sigmas(steps).flip(0)
-
- shared.state.sampling_steps = steps
-
- for i in trange(1, len(sigmas)):
- shared.state.sampling_step += 1
-
- x_in = torch.cat([x] * 2)
- sigma_in = torch.cat([sigmas[i - 1] * s_in] * 2)
- cond_in = torch.cat([uncond, cond])
-
- image_conditioning = torch.cat([p.image_conditioning] * 2)
- cond_in = {"c_concat": [image_conditioning], "c_crossattn": [cond_in]}
-
- c_out, c_in = [K.utils.append_dims(k, x_in.ndim) for k in dnw.get_scalings(sigma_in)]
-
- if i == 1:
- t = dnw.sigma_to_t(torch.cat([sigmas[i] * s_in] * 2))
- else:
- t = dnw.sigma_to_t(sigma_in)
-
- eps = shared.sd_model.apply_model(x_in * c_in, t, cond=cond_in)
- denoised_uncond, denoised_cond = (x_in + eps * c_out).chunk(2)
-
- denoised = denoised_uncond + (denoised_cond - denoised_uncond) * cfg_scale
-
- if i == 1:
- d = (x - denoised) / (2 * sigmas[i])
- else:
- d = (x - denoised) / sigmas[i - 1]
-
- dt = sigmas[i] - sigmas[i - 1]
- x = x + d * dt
-
- sd_samplers_common.store_latent(x)
-
- # This shouldn't be necessary, but solved some VRAM issues
- del x_in, sigma_in, cond_in, c_out, c_in, t,
- del eps, denoised_uncond, denoised_cond, denoised, d, dt
-
- shared.state.nextjob()
-
- return x / sigmas[-1]
-
-
-class Script(scripts.Script):
- def __init__(self):
- self.cache = None
-
- def title(self):
- return "img2img alternative test"
-
- def show(self, is_img2img):
- return is_img2img
-
- def ui(self, is_img2img):
- info = gr.Markdown('''
- * `CFG Scale` should be 2 or lower.
- ''')
-
- override_sampler = gr.Checkbox(label="Override `Sampling method` to Euler?(this method is built for it)", value=True, elem_id=self.elem_id("override_sampler"))
-
- override_prompt = gr.Checkbox(label="Override `prompt` to the same value as `original prompt`?(and `negative prompt`)", value=True, elem_id=self.elem_id("override_prompt"))
- original_prompt = gr.Textbox(label="Original prompt", lines=1, elem_id=self.elem_id("original_prompt"))
- original_negative_prompt = gr.Textbox(label="Original negative prompt", lines=1, elem_id=self.elem_id("original_negative_prompt"))
-
- override_steps = gr.Checkbox(label="Override `Sampling Steps` to the same value as `Decode steps`?", value=True, elem_id=self.elem_id("override_steps"))
- st = gr.Slider(label="Decode steps", minimum=1, maximum=150, step=1, value=50, elem_id=self.elem_id("st"))
-
- override_strength = gr.Checkbox(label="Override `Denoising strength` to 1?", value=True, elem_id=self.elem_id("override_strength"))
-
- cfg = gr.Slider(label="Decode CFG scale", minimum=0.0, maximum=15.0, step=0.1, value=1.0, elem_id=self.elem_id("cfg"))
- randomness = gr.Slider(label="Randomness", minimum=0.0, maximum=1.0, step=0.01, value=0.0, elem_id=self.elem_id("randomness"))
- sigma_adjustment = gr.Checkbox(label="Sigma adjustment for finding noise for image", value=False, elem_id=self.elem_id("sigma_adjustment"))
-
- return [
- info,
- override_sampler,
- override_prompt, original_prompt, original_negative_prompt,
- override_steps, st,
- override_strength,
- cfg, randomness, sigma_adjustment,
- ]
-
- def run(self, p, _, override_sampler, override_prompt, original_prompt, original_negative_prompt, override_steps, st, override_strength, cfg, randomness, sigma_adjustment):
- # Override
- if override_sampler:
- p.sampler_name = "Euler"
- if override_prompt:
- p.prompt = original_prompt
- p.negative_prompt = original_negative_prompt
- if override_steps:
- p.steps = st
- if override_strength:
- p.denoising_strength = 1.0
-
- def sample_extra(conditioning, unconditional_conditioning, seeds, subseeds, subseed_strength, prompts):
- lat = (p.init_latent.cpu().numpy() * 10).astype(int)
-
- same_params = self.cache is not None and self.cache.cfg_scale == cfg and self.cache.steps == st \
- and self.cache.original_prompt == original_prompt \
- and self.cache.original_negative_prompt == original_negative_prompt \
- and self.cache.sigma_adjustment == sigma_adjustment
- same_everything = same_params and self.cache.latent.shape == lat.shape and np.abs(self.cache.latent-lat).sum() < 100
-
- if same_everything:
- rec_noise = self.cache.noise
- else:
- shared.state.job_count += 1
- cond = p.sd_model.get_learned_conditioning(p.batch_size * [original_prompt])
- uncond = p.sd_model.get_learned_conditioning(p.batch_size * [original_negative_prompt])
- if sigma_adjustment:
- rec_noise = find_noise_for_image_sigma_adjustment(p, cond, uncond, cfg, st)
- else:
- rec_noise = find_noise_for_image(p, cond, uncond, cfg, st)
- self.cache = Cached(rec_noise, cfg, st, lat, original_prompt, original_negative_prompt, sigma_adjustment)
-
- rand_noise = processing.create_random_tensors(p.init_latent.shape[1:], seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, seed_resize_from_h=p.seed_resize_from_h, seed_resize_from_w=p.seed_resize_from_w, p=p)
-
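-            # blend reconstructed and random noise; the denominator keeps the mixture at roughly unit variance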
- combined_noise = ((1 - randomness) * rec_noise + randomness * rand_noise) / ((randomness**2 + (1-randomness)**2) ** 0.5)
-
- sampler = sd_samplers.create_sampler(p.sampler_name, p.sd_model)
-
- sigmas = sampler.model_wrap.get_sigmas(p.steps)
-
- noise_dt = combined_noise - (p.init_latent / sigmas[0])
-
- p.seed = p.seed + 1
-
- return sampler.sample_img2img(p, p.init_latent, noise_dt, conditioning, unconditional_conditioning, image_conditioning=p.image_conditioning)
-
- p.sample = sample_extra
-
- p.extra_generation_params["Decode prompt"] = original_prompt
- p.extra_generation_params["Decode negative prompt"] = original_negative_prompt
- p.extra_generation_params["Decode CFG scale"] = cfg
- p.extra_generation_params["Decode steps"] = st
- p.extra_generation_params["Randomness"] = randomness
- p.extra_generation_params["Sigma Adjustment"] = sigma_adjustment
-
- processed = processing.process_images(p)
-
- return processed
-
diff --git a/spaces/aodianyun/whisper/share_btn.py b/spaces/aodianyun/whisper/share_btn.py
deleted file mode 100644
index dff74adcc3c750c4e7a2cbd6fca31dff1dd62f1a..0000000000000000000000000000000000000000
--- a/spaces/aodianyun/whisper/share_btn.py
+++ /dev/null
@@ -1,203 +0,0 @@
-community_icon_html = """"""
-
-loading_icon_html = """"""
-
-share_js = """async () => {
- async function uploadFile(file){
- const UPLOAD_URL = 'https://huggingface.co/uploads';
- const response = await fetch(UPLOAD_URL, {
- method: 'POST',
- headers: {
- 'Content-Type': 'audio/wav',
- 'X-Requested-With': 'XMLHttpRequest',
- },
- body: file, /// <- File inherits from Blob
- });
- const url = await response.text();
- return url;
- }
-
- function audioResample(buffer, sampleRate){
- const offlineCtx = new OfflineAudioContext(2, (buffer.length / buffer.sampleRate) * sampleRate, sampleRate);
- const source = offlineCtx.createBufferSource();
- source.buffer = buffer;
- source.connect(offlineCtx.destination);
- source.start();
- return offlineCtx.startRendering();
- };
-
- function audioReduceChannels(buffer, targetChannelOpt){
- if(targetChannelOpt === 'both' || buffer.numberOfChannels < 2) return buffer;
- const outBuffer = new AudioBuffer({
- sampleRate: buffer.sampleRate,
- length: buffer.length,
- numberOfChannels: 1
- });
-
- const data = [buffer.getChannelData(0), buffer.getChannelData(1)];
- const newData = new Float32Array(buffer.length);
- for(let i = 0; i < buffer.length; ++i)
- newData[i] =
- targetChannelOpt === 'left'? data[0][i] :
- targetChannelOpt === 'right'? data[1][i] :
- (data[0][i] + data[1][i]) / 2 ;
- outBuffer.copyToChannel(newData, 0);
- return outBuffer;
- };
-
- function audioNormalize(buffer){
- const data = Array.from(Array(buffer.numberOfChannels)).map((_, idx) => buffer.getChannelData(idx));
- const maxAmplitude = Math.max(...data.map(chan => chan.reduce((acc, cur) => Math.max(acc, Math.abs(cur)), 0)));
- if(maxAmplitude >= 1.0) return buffer;
- const coeff = 1.0 / maxAmplitude;
-    data.forEach((chan, ch) => {
-      chan.forEach((v, idx) => chan[idx] = v*coeff);
-      buffer.copyToChannel(chan, ch);
- });
- return buffer;
- };
-
- async function processAudioFile(
- audioBufferIn,
- targetChannelOpt,
- targetSampleRate
- ) {
- const resampled = await audioResample(audioBufferIn, targetSampleRate);
- const reduced = audioReduceChannels(resampled, targetChannelOpt);
- const normalized = audioNormalize(reduced);
- return normalized;
- }
-
- function audioToRawWave(audioChannels, bytesPerSample, mixChannels=false) {
- const bufferLength = audioChannels[0].length;
- const numberOfChannels = audioChannels.length === 1 ? 1 : 2;
- const reducedData = new Uint8Array(
- bufferLength * numberOfChannels * bytesPerSample
- );
- for (let i = 0; i < bufferLength; ++i) {
- for (
- let channel = 0;
- channel < (mixChannels ? 1 : numberOfChannels);
- ++channel
- ) {
- const outputIndex = (i * numberOfChannels + channel) * bytesPerSample;
- let sample;
- if (!mixChannels) sample = audioChannels[channel][i];
- else
- sample =
- audioChannels.reduce((prv, cur) => prv + cur[i], 0) /
- numberOfChannels;
- sample = sample > 1 ? 1 : sample < -1 ? -1 : sample; //check for clipping
- //bit reduce and convert to Uint8
- switch (bytesPerSample) {
- case 2:
- sample = sample * 32767;
- reducedData[outputIndex] = sample;
- reducedData[outputIndex + 1] = sample >> 8;
- break;
- case 1:
- reducedData[outputIndex] = (sample + 1) * 127;
- break;
- default:
- throw "Only 8, 16 bits per sample are supported";
- }
- }
- }
- return reducedData;
- }
-
- function makeWav(data, channels, sampleRate, bytesPerSample) {
- const headerLength = 44;
- var wav = new Uint8Array(headerLength + data.length);
- var view = new DataView(wav.buffer);
-
- view.setUint32(0, 1380533830, false); // RIFF identifier 'RIFF'
- view.setUint32(4, 36 + data.length, true); // file length minus RIFF identifier length and file description length
- view.setUint32(8, 1463899717, false); // RIFF type 'WAVE'
- view.setUint32(12, 1718449184, false); // format chunk identifier 'fmt '
- view.setUint32(16, 16, true); // format chunk length
- view.setUint16(20, 1, true); // sample format (raw)
- view.setUint16(22, channels, true); // channel count
- view.setUint32(24, sampleRate, true); // sample rate
- view.setUint32(28, sampleRate * bytesPerSample * channels, true); // byte rate (sample rate * block align)
- view.setUint16(32, bytesPerSample * channels, true); // block align (channel count * bytes per sample)
- view.setUint16(34, bytesPerSample * 8, true); // bits per sample
- view.setUint32(36, 1684108385, false); // data chunk identifier 'data'
- view.setUint32(40, data.length, true); // data chunk length
-
- wav.set(data, headerLength);
-
- return new Blob([wav.buffer], { type: "audio/wav" });
- }
-
- const gradioEl = document.querySelector('body > gradio-app');
- const audioEl = gradioEl.querySelector('audio');
- const resultTxt = gradioEl.querySelector('#result-textarea textarea').value;
- const shareBtnEl = gradioEl.querySelector('#share-btn');
- const shareIconEl = gradioEl.querySelector('#share-btn-share-icon');
- const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon');
-
- if(!audioEl){
- return;
- };
-
- shareBtnEl.style.pointerEvents = 'none';
- shareIconEl.style.display = 'none';
- loadingIconEl.style.removeProperty('display');
-
- const res = await fetch(audioEl.src);
- const blob = await res.blob();
-
- const channelOpt = "both";
- const sampleRate = 48000;
- const bytesPerSample = 1; // or 2
- const audioBufferIn = await new AudioContext().decodeAudioData(
- await blob.arrayBuffer()
- );
- const audioBuffer = await processAudioFile(
- audioBufferIn,
- channelOpt,
- sampleRate
- );
- const rawData = audioToRawWave(
- channelOpt === "both"
- ? [audioBuffer.getChannelData(0), audioBuffer.getChannelData(1)]
- : [audioBuffer.getChannelData(0)],
- bytesPerSample
- );
- const blobWav = makeWav(
- rawData,
- channelOpt === "both" ? 2 : 1,
- sampleRate,
- bytesPerSample
- );
-
- const fileName = `whisper-demo-input.wav`;
- const audioFile = new File([blobWav], fileName, { type: 'audio/wav' });
-
- const url = await uploadFile(audioFile);
-
- const descriptionMd = `#### Input audio:
-
-<audio controls src="${url}"></audio>
-#### Transcription:
-
-> ${resultTxt}`;
-
- const params = new URLSearchParams({
- description: descriptionMd,
- });
-
- const paramsStr = params.toString();
- window.open(`https://huggingface.co/spaces/openai/whisper/discussions/new?${paramsStr}`, '_blank');
-
- shareBtnEl.style.removeProperty('pointer-events');
- shareIconEl.style.removeProperty('display');
- loadingIconEl.style.display = 'none';
-}"""
\ No newline at end of file
diff --git a/spaces/artificialguybr/video-dubbing/TTS/recipes/thorsten_DE/tacotron2-DDC/train_tacotron_ddc.py b/spaces/artificialguybr/video-dubbing/TTS/recipes/thorsten_DE/tacotron2-DDC/train_tacotron_ddc.py
deleted file mode 100644
index bc0274f5af2a6c1096c89e41d8b2e359fe5432f6..0000000000000000000000000000000000000000
--- a/spaces/artificialguybr/video-dubbing/TTS/recipes/thorsten_DE/tacotron2-DDC/train_tacotron_ddc.py
+++ /dev/null
@@ -1,108 +0,0 @@
-import os
-
-from trainer import Trainer, TrainerArgs
-
-from TTS.config.shared_configs import BaseAudioConfig
-from TTS.tts.configs.shared_configs import BaseDatasetConfig
-from TTS.tts.configs.tacotron2_config import Tacotron2Config
-from TTS.tts.datasets import load_tts_samples
-from TTS.tts.models.tacotron2 import Tacotron2
-from TTS.tts.utils.text.tokenizer import TTSTokenizer
-from TTS.utils.audio import AudioProcessor
-from TTS.utils.downloaders import download_thorsten_de
-
-# from TTS.tts.datasets.tokenizer import Tokenizer
-output_path = os.path.dirname(os.path.abspath(__file__))
-
-# init configs
-dataset_config = BaseDatasetConfig(
- formatter="thorsten", meta_file_train="metadata.csv", path=os.path.join(output_path, "../thorsten-de/")
-)
-
-# download dataset if not already present
-if not os.path.exists(dataset_config.path):
- print("Downloading dataset")
- download_thorsten_de(os.path.split(os.path.abspath(dataset_config.path))[0])
-
-audio_config = BaseAudioConfig(
- sample_rate=22050,
- do_trim_silence=True,
- trim_db=60.0,
- signal_norm=False,
- mel_fmin=0.0,
- mel_fmax=8000,
- spec_gain=1.0,
- log_func="np.log",
- ref_level_db=20,
- preemphasis=0.0,
-)
-
-config = Tacotron2Config( # This is the config that is saved for the future use
- audio=audio_config,
- batch_size=40, # BS of 40 and max length of 10s will use about 20GB of GPU memory
- eval_batch_size=16,
- num_loader_workers=4,
- num_eval_loader_workers=4,
- run_eval=True,
- test_delay_epochs=-1,
- r=6,
- gradual_training=[[0, 6, 64], [10000, 4, 32], [50000, 3, 32], [100000, 2, 32]],
- double_decoder_consistency=True,
- epochs=1000,
- text_cleaner="phoneme_cleaners",
- use_phonemes=True,
- phoneme_language="de",
- phoneme_cache_path=os.path.join(output_path, "phoneme_cache"),
- precompute_num_workers=8,
- print_step=25,
- print_eval=True,
- mixed_precision=False,
- test_sentences=[
- "Es hat mich viel Zeit gekostet ein Stimme zu entwickeln, jetzt wo ich sie habe werde ich nicht mehr schweigen.",
- "Sei eine Stimme, kein Echo.",
- "Es tut mir Leid David. Das kann ich leider nicht machen.",
- "Dieser Kuchen ist großartig. Er ist so lecker und feucht.",
- "Vor dem 22. November 1963.",
- ],
- # max audio length of 10 seconds, feel free to increase if you got more than 20GB GPU memory
- max_audio_len=22050 * 10,
- output_path=output_path,
- datasets=[dataset_config],
-)
-
-# init audio processor
-ap = AudioProcessor(**config.audio.to_dict())
-
-# INITIALIZE THE AUDIO PROCESSOR
-# Audio processor is used for feature extraction and audio I/O.
-# It mainly serves to the dataloader and the training loggers.
-ap = AudioProcessor.init_from_config(config)
-
-# INITIALIZE THE TOKENIZER
-# Tokenizer is used to convert text to sequences of token IDs.
-# If characters are not defined in the config, default characters are passed to the config
-tokenizer, config = TTSTokenizer.init_from_config(config)
-
-# LOAD DATA SAMPLES
-# Each sample is a list of ```[text, audio_file_path, speaker_name]```
-# You can define your custom sample loader returning the list of samples.
-# Or define your custom formatter and pass it to the `load_tts_samples`.
-# Check `TTS.tts.datasets.load_tts_samples` for more details.
-train_samples, eval_samples = load_tts_samples(
- dataset_config,
- eval_split=True,
- eval_split_max_size=config.eval_split_max_size,
- eval_split_size=config.eval_split_size,
-)
-
-# INITIALIZE THE MODEL
-# Models take a config object and a speaker manager as input
-# Config defines the details of the model like the number of layers, the size of the embedding, etc.
-# Speaker manager is used by multi-speaker models.
-model = Tacotron2(config, ap, tokenizer, speaker_manager=None)
-
-# init the trainer and 🚀
-trainer = Trainer(
- TrainerArgs(), config, output_path, model=model, train_samples=train_samples, eval_samples=eval_samples
-)
-trainer.fit()
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/aiohttp/client_proto.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/aiohttp/client_proto.py
deleted file mode 100644
index 3041157d61d78fe285fe2f688a4a8d5b75c5412d..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/aiohttp/client_proto.py
+++ /dev/null
@@ -1,251 +0,0 @@
-import asyncio
-from contextlib import suppress
-from typing import Any, Optional, Tuple
-
-from .base_protocol import BaseProtocol
-from .client_exceptions import (
- ClientOSError,
- ClientPayloadError,
- ServerDisconnectedError,
- ServerTimeoutError,
-)
-from .helpers import BaseTimerContext
-from .http import HttpResponseParser, RawResponseMessage
-from .streams import EMPTY_PAYLOAD, DataQueue, StreamReader
-
-
-class ResponseHandler(BaseProtocol, DataQueue[Tuple[RawResponseMessage, StreamReader]]):
- """Helper class to adapt between Protocol and StreamReader."""
-
- def __init__(self, loop: asyncio.AbstractEventLoop) -> None:
- BaseProtocol.__init__(self, loop=loop)
- DataQueue.__init__(self, loop)
-
- self._should_close = False
-
- self._payload: Optional[StreamReader] = None
- self._skip_payload = False
- self._payload_parser = None
-
- self._timer = None
-
- self._tail = b""
- self._upgraded = False
- self._parser: Optional[HttpResponseParser] = None
-
- self._read_timeout: Optional[float] = None
- self._read_timeout_handle: Optional[asyncio.TimerHandle] = None
-
- @property
- def upgraded(self) -> bool:
- return self._upgraded
-
- @property
- def should_close(self) -> bool:
- if self._payload is not None and not self._payload.is_eof() or self._upgraded:
- return True
-
- return (
- self._should_close
- or self._upgraded
- or self.exception() is not None
- or self._payload_parser is not None
- or len(self) > 0
- or bool(self._tail)
- )
-
- def force_close(self) -> None:
- self._should_close = True
-
- def close(self) -> None:
- transport = self.transport
- if transport is not None:
- transport.close()
- self.transport = None
- self._payload = None
- self._drop_timeout()
-
- def is_connected(self) -> bool:
- return self.transport is not None and not self.transport.is_closing()
-
- def connection_lost(self, exc: Optional[BaseException]) -> None:
- self._drop_timeout()
-
- if self._payload_parser is not None:
- with suppress(Exception):
- self._payload_parser.feed_eof()
-
- uncompleted = None
- if self._parser is not None:
- try:
- uncompleted = self._parser.feed_eof()
- except Exception:
- if self._payload is not None:
- self._payload.set_exception(
- ClientPayloadError("Response payload is not completed")
- )
-
- if not self.is_eof():
- if isinstance(exc, OSError):
- exc = ClientOSError(*exc.args)
- if exc is None:
- exc = ServerDisconnectedError(uncompleted)
- # assigns self._should_close to True as side effect,
- # we do it anyway below
- self.set_exception(exc)
-
- self._should_close = True
- self._parser = None
- self._payload = None
- self._payload_parser = None
- self._reading_paused = False
-
- super().connection_lost(exc)
-
- def eof_received(self) -> None:
- # should call parser.feed_eof() most likely
- self._drop_timeout()
-
- def pause_reading(self) -> None:
- super().pause_reading()
- self._drop_timeout()
-
- def resume_reading(self) -> None:
- super().resume_reading()
- self._reschedule_timeout()
-
- def set_exception(self, exc: BaseException) -> None:
- self._should_close = True
- self._drop_timeout()
- super().set_exception(exc)
-
- def set_parser(self, parser: Any, payload: Any) -> None:
- # TODO: actual types are:
- # parser: WebSocketReader
- # payload: FlowControlDataQueue
-        # but they are not generic enough
- # Need an ABC for both types
- self._payload = payload
- self._payload_parser = parser
-
- self._drop_timeout()
-
- if self._tail:
- data, self._tail = self._tail, b""
- self.data_received(data)
-
- def set_response_params(
- self,
- *,
- timer: Optional[BaseTimerContext] = None,
- skip_payload: bool = False,
- read_until_eof: bool = False,
- auto_decompress: bool = True,
- read_timeout: Optional[float] = None,
- read_bufsize: int = 2**16,
- ) -> None:
- self._skip_payload = skip_payload
-
- self._read_timeout = read_timeout
- self._reschedule_timeout()
-
- self._parser = HttpResponseParser(
- self,
- self._loop,
- read_bufsize,
- timer=timer,
- payload_exception=ClientPayloadError,
- response_with_body=not skip_payload,
- read_until_eof=read_until_eof,
- auto_decompress=auto_decompress,
- )
-
- if self._tail:
- data, self._tail = self._tail, b""
- self.data_received(data)
-
- def _drop_timeout(self) -> None:
- if self._read_timeout_handle is not None:
- self._read_timeout_handle.cancel()
- self._read_timeout_handle = None
-
- def _reschedule_timeout(self) -> None:
- timeout = self._read_timeout
- if self._read_timeout_handle is not None:
- self._read_timeout_handle.cancel()
-
- if timeout:
- self._read_timeout_handle = self._loop.call_later(
- timeout, self._on_read_timeout
- )
- else:
- self._read_timeout_handle = None
-
- def _on_read_timeout(self) -> None:
- exc = ServerTimeoutError("Timeout on reading data from socket")
- self.set_exception(exc)
- if self._payload is not None:
- self._payload.set_exception(exc)
-
- def data_received(self, data: bytes) -> None:
- self._reschedule_timeout()
-
- if not data:
- return
-
- # custom payload parser
- if self._payload_parser is not None:
- eof, tail = self._payload_parser.feed_data(data)
- if eof:
- self._payload = None
- self._payload_parser = None
-
- if tail:
- self.data_received(tail)
- return
- else:
- if self._upgraded or self._parser is None:
- # i.e. websocket connection, websocket parser is not set yet
- self._tail += data
- else:
- # parse http messages
- try:
- messages, upgraded, tail = self._parser.feed_data(data)
- except BaseException as exc:
- if self.transport is not None:
- # connection.release() could be called BEFORE
- # data_received(), the transport is already
- # closed in this case
- self.transport.close()
- # should_close is True after the call
- self.set_exception(exc)
- return
-
- self._upgraded = upgraded
-
- payload: Optional[StreamReader] = None
- for message, payload in messages:
- if message.should_close:
- self._should_close = True
-
- self._payload = payload
-
- if self._skip_payload or message.code in (204, 304):
- self.feed_data((message, EMPTY_PAYLOAD), 0)
- else:
- self.feed_data((message, payload), 0)
- if payload is not None:
- # new message(s) was processed
- # register timeout handler unsubscribing
- # either on end-of-stream or immediately for
- # EMPTY_PAYLOAD
- if payload is not EMPTY_PAYLOAD:
- payload.on_eof(self._drop_timeout)
- else:
- self._drop_timeout()
-
- if tail:
- if upgraded:
- self.data_received(tail)
- else:
- self._tail = tail
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/dateutil/zoneinfo/rebuild.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/dateutil/zoneinfo/rebuild.py
deleted file mode 100644
index 684c6586f091350c347f2b6150935f5214ffec27..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/dateutil/zoneinfo/rebuild.py
+++ /dev/null
@@ -1,75 +0,0 @@
-import logging
-import os
-import tempfile
-import shutil
-import json
-from subprocess import check_call, check_output
-from tarfile import TarFile
-
-from dateutil.zoneinfo import METADATA_FN, ZONEFILENAME
-
-
-def rebuild(filename, tag=None, format="gz", zonegroups=[], metadata=None):
- """Rebuild the internal timezone info in dateutil/zoneinfo/zoneinfo*tar*
-
- filename is the timezone tarball from ``ftp.iana.org/tz``.
-
- """
- tmpdir = tempfile.mkdtemp()
- zonedir = os.path.join(tmpdir, "zoneinfo")
- moduledir = os.path.dirname(__file__)
- try:
- with TarFile.open(filename) as tf:
- for name in zonegroups:
- tf.extract(name, tmpdir)
- filepaths = [os.path.join(tmpdir, n) for n in zonegroups]
-
- _run_zic(zonedir, filepaths)
-
- # write metadata file
- with open(os.path.join(zonedir, METADATA_FN), 'w') as f:
- json.dump(metadata, f, indent=4, sort_keys=True)
- target = os.path.join(moduledir, ZONEFILENAME)
- with TarFile.open(target, "w:%s" % format) as tf:
- for entry in os.listdir(zonedir):
- entrypath = os.path.join(zonedir, entry)
- tf.add(entrypath, entry)
- finally:
- shutil.rmtree(tmpdir)
-
-
-def _run_zic(zonedir, filepaths):
- """Calls the ``zic`` compiler in a compatible way to get a "fat" binary.
-
- Recent versions of ``zic`` default to ``-b slim``, while older versions
- don't even have the ``-b`` option (but default to "fat" binaries). The
- current version of dateutil does not support Version 2+ TZif files, which
- causes problems when used in conjunction with "slim" binaries, so this
- function is used to ensure that we always get a "fat" binary.
- """
-
- try:
- help_text = check_output(["zic", "--help"])
- except OSError as e:
- _print_on_nosuchfile(e)
- raise
-
- if b"-b " in help_text:
- bloat_args = ["-b", "fat"]
- else:
- bloat_args = []
-
- check_call(["zic"] + bloat_args + ["-d", zonedir] + filepaths)
-
-
-def _print_on_nosuchfile(e):
- """Print helpful troubleshooting message
-
- e is an exception raised by subprocess.check_call()
-
- """
- if e.errno == 2:
- logging.error(
- "Could not find zic. Perhaps you need to install "
- "libc-bin or some other package that provides it, "
- "or it's not in your PATH?")
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/distributed/tpu_distributed_data_parallel.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/distributed/tpu_distributed_data_parallel.py
deleted file mode 100644
index 3b9e1033011db87100c64ec39845e81228a26381..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/distributed/tpu_distributed_data_parallel.py
+++ /dev/null
@@ -1,43 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-from torch import nn
-
-from fairseq.distributed import utils
-
-
-class TPUDistributedDataParallel(nn.Module):
- def __init__(self, module, process_group):
- super().__init__()
- self.module = module
- self.process_group = process_group
- self.world_size = utils.get_world_size(self.process_group)
-
- def forward(self, *inputs, **kwargs):
- return self.module(*inputs, **kwargs)
-
- def all_reduce_grads(self):
- gradients = []
- for p in self.parameters():
- if not p.requires_grad:
- continue
- if p.grad is None:
- p.grad = torch.zeros_like(p)
- if p.grad.requires_grad:
- raise RuntimeError(
- "TPUDistributedDataParallel only works with gradients that don't "
- "require grad"
- )
- gradients.append(p.grad)
-
- import torch_xla.core.xla_model as xm
-
- xm.all_reduce(
- "sum",
- gradients,
- scale=1.0 / self.world_size,
- groups=self.process_group[1],
- )
diff --git a/spaces/ashercn97/AsherTesting/docs/Chat-mode.md b/spaces/ashercn97/AsherTesting/docs/Chat-mode.md
deleted file mode 100644
index 08dd290dadbd8a590ace65d557b8916a2707fc26..0000000000000000000000000000000000000000
--- a/spaces/ashercn97/AsherTesting/docs/Chat-mode.md
+++ /dev/null
@@ -1,45 +0,0 @@
-## Chat characters
-
-Custom chat mode characters are defined by `.yaml` files inside the `characters` folder. An example is included: [Example.yaml](https://github.com/oobabooga/text-generation-webui/blob/main/characters/Example.yaml)
-
-The following fields may be defined:
-
-| Field | Description |
-|-------|-------------|
-| `name` or `bot` | The character's name. |
-| `your_name` or `user` (optional) | Your name. This overwrites what you had previously written in the `Your name` field in the interface. |
-| `context` | A string that appears at the top of the prompt. It usually contains a description of the character's personality. |
-| `greeting` (optional) | The character's opening message when a new conversation is started. |
-| `example_dialogue` (optional) | A few example messages to guide the model. |
-| `turn_template` (optional) | Used to define where the spaces and new line characters should be in Instruct mode. See the characters in `characters/instruction-following` for examples. |
-
-#### Special tokens
-
-* `{{char}}` or `<|bot|>`: are replaced with the character's name
-* `{{user}}` or `<|user|>`: are replaced with your name
-
-These replacements happen when the character is loaded, and they apply to the `context`, `greeting`, and `example_dialogue` fields.
-
-#### How do I add a profile picture for my character?
-
-Put an image with the same name as your character's yaml file into the `characters` folder. For example, if your bot is `Character.yaml`, add `Character.jpg` or `Character.png` to the folder.
-
-#### Is the chat history truncated in the prompt?
-
-Once your prompt reaches the 2048 token limit, old messages will be removed one at a time. The context string will always stay at the top of the prompt and will never get truncated.
-
-#### Pygmalion format characters
-
-These are also supported out of the box. Simply put the JSON file in the `characters` folder, or upload it directly from the web UI by clicking on the "Upload character" tab at the bottom.
-
-## Chat styles
-
-Custom chat styles can be defined in the `text-generation-webui/css` folder. Simply create a new file with name starting in `chat_style-` and ending in `.css` and it will automatically appear in the "Chat style" dropdown menu in the interface. Examples:
-
-```
-chat_style-cai-chat.css
-chat_style-TheEncrypted777.css
-chat_style-wpp.css
-```
-
-You should use the same class names as in `chat_style-cai-chat.css` in your custom style.
\ No newline at end of file
diff --git a/spaces/awacke1/ASR-openai-whisper-large/README.md b/spaces/awacke1/ASR-openai-whisper-large/README.md
deleted file mode 100644
index 9f4b7723b8d96bd2d11ad0672e40f47bf72668ad..0000000000000000000000000000000000000000
--- a/spaces/awacke1/ASR-openai-whisper-large/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: ASR Openai Whisper Large
-emoji: 🦀
-colorFrom: gray
-colorTo: red
-sdk: gradio
-sdk_version: 3.20.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/awacke1/ActingGameMechanicsForSocialIntelligence/backup-app.py b/spaces/awacke1/ActingGameMechanicsForSocialIntelligence/backup-app.py
deleted file mode 100644
index 3df95c5e94a9775cbc07e92001cc027f4ce48868..0000000000000000000000000000000000000000
--- a/spaces/awacke1/ActingGameMechanicsForSocialIntelligence/backup-app.py
+++ /dev/null
@@ -1,64 +0,0 @@
-import streamlit as st
-import random
-
-# Define the player cards
-player_cards = {
- "Player 1": {
- "name": "Player 1",
- "sketch": "👩",
- "score": 0,
- "mime": ""
- },
- "Player 2": {
- "name": "Player 2",
- "sketch": "👨",
- "score": 0,
- "mime": ""
- }
-}
-
-# Define the game settings
-num_rounds = 5
-
-# Define the possible actions
-actions = ["jump", "dance", "sing", "sleep", "laugh", "cry", "eat", "drink", "run", "swim"]
-
-# Define the Streamlit app
-def app():
- st.set_page_config(page_title="Mime Game", page_icon="🎭", layout="wide")
- st.title("Mime Game")
- st.sidebar.write("# Player Cards")
- for player, attributes in player_cards.items():
- st.sidebar.write(f"## {player}")
- st.sidebar.write(f"Name: {attributes['name']}")
- st.sidebar.write(f"Sketch: {attributes['sketch']}")
- st.sidebar.write(f"Score: {attributes['score']}")
- st.sidebar.write("# Game Settings")
- num_rounds = st.sidebar.slider("Number of rounds to play", 1, 10, 5)
- # Start the game when the user clicks the "Play Game" button
- if st.button("Play Game"):
- # Play the game for the specified number of rounds
- for i in range(num_rounds):
- st.write(f"Round {i+1}")
- for player, attributes in player_cards.items():
- # Ask the player to perform an action using mime or mimicry
- st.write(f"{attributes['sketch']} {attributes['name']}, it's your turn to perform an action using mime or mimicry.")
-                mime = st.text_input("Enter your mime/mimicry", key=f"mime_{i}_{player}")
- attributes["mime"] = mime
- # Randomly select an action and ask the other player to guess it
- action = random.choice(actions)
- st.write(f"The action is: {action}")
- for player, attributes in player_cards.items():
- if attributes["mime"] == action:
- attributes["score"] += 1
- st.write(f"{attributes['sketch']} {attributes['name']} guessed the action correctly! 🎉")
- else:
- st.write(f"{attributes['sketch']} {attributes['name']} failed to guess the action.")
- # Display the final scores
- st.write("# Final Scores")
- for player, attributes in player_cards.items():
- st.write(f"{attributes['sketch']} {attributes['name']}: {attributes['score']} points")
-
-
-if __name__ == "__main__":
- app()
\ No newline at end of file
diff --git a/spaces/awacke1/CardGameActivity-TwoPlayerAndAI/app.py b/spaces/awacke1/CardGameActivity-TwoPlayerAndAI/app.py
deleted file mode 100644
index 4c205fb9f45cbe38a4f43221aaf839984b04ad15..0000000000000000000000000000000000000000
--- a/spaces/awacke1/CardGameActivity-TwoPlayerAndAI/app.py
+++ /dev/null
@@ -1,193 +0,0 @@
-import os
-import random
-import streamlit as st
-import base64
-
-# Define the game rules
-NUM_ROUNDS = 26
-CARD_VALUES = {
- 'A': 14,
- 'K': 13,
- 'Q': 12,
- 'J': 11,
- '10': 10,
- '9': 9,
- '8': 8,
- '7': 7,
- '6': 6,
- '5': 5,
- '4': 4,
- '3': 3,
- '2': 2,
-}
-
-# Define the game mechanics
-def shuffle_deck():
- """Returns a shuffled deck of cards."""
- deck = [(value, suit) for value in CARD_VALUES for suit in ['♠', '♡', '♢', '♣']]
- random.shuffle(deck)
- return deck
-
-def draw_card(deck):
- """Draws a card from the top of the deck and removes it from the deck."""
- if len(deck) == 0:
- return None
- return deck.pop(0)
-
-def compare_cards(card1, card2):
- """Compares the values of two cards and returns the winner."""
- value1 = CARD_VALUES[card1[0]]
- value2 = CARD_VALUES[card2[0]]
- if value1 > value2:
- return 'player'
- elif value2 > value1:
- return 'ai'
- else:
- return 'tie'
-
-def determine_winner(player_card, ai_card):
- """Determines the winner of the round based on the values of the cards."""
- if player_card is None:
- return 'ai'
- elif ai_card is None:
- return 'player'
- else:
- return compare_cards(player_card, ai_card)
-
-def create_download_link(filename):
- with open(filename, 'r') as f:
- text = f.read()
- b64 = base64.b64encode(text.encode()).decode()
- href = f'Download {filename}'
- return href
-
-def start_game():
- """Initializes the game state and starts the game."""
- game_state = {'player_cards': [], 'ai_cards': [], 'player_score': 0, 'ai_score': 0, 'rounds_played': 0}
- deck = shuffle_deck()
- game_state['player_cards'] = deck[:26]
- game_state['ai_cards'] = deck[26:]
- return game_state
-
-# Define the game UI
-def game_ui(game_state):
- """Displays the game UI and updates the game state."""
- player_cards = game_state['player_cards']
- ai_cards = game_state['ai_cards']
- player_card = player_cards[-1] if len(player_cards) > 0 else None
- ai_card = ai_cards[-1] if len(ai_cards) > 0 else None
-
- st.write('# Peace and Love')
- st.write('---')
-
- st.write('**Player**')
- st.write('Cards: ', ' '.join([f"{card[0]}{card[1]}" for card in player_cards]))
- st.write('Score: ', game_state['player_score'])
- st.write('---')
-
- st.write('**Dealer**')
- st.write('Cards: ', ' '.join([f"🂠" if len(ai_cards) == 1 else f"{card[0]}{card[1]}" for card in ai_cards]))
- st.write('Score: ', game_state['ai_score'])
- st.write('---')
-
- if st.button('Play'):
- if player_card is None:
- st.write('Out of cards!')
- return
-
- winner = determine_winner(player_card, ai_card)
-
- if winner == 'player':
- st.write('Player wins!')
- game_state['player_cards'].extend([player_card, ai_card])
- game_state['player_score'] += 2
- elif winner == 'ai':
- st.write('Dealer wins!')
- game_state['ai_cards'].extend([player_card, ai_card])
- game_state['ai_score'] += 2
- else:
- st.write('Tie!')
- game_state['player_cards'].append(player_card)
- game_state['ai_cards'].append(ai_card)
-
- game_state['rounds_played'] += 1
-
- # Save game state to file
- with open('game_state.txt', 'w') as f:
- if not os.path.exists('game_state.txt'):
- f.write('player_cards,ai_cards,player_score,ai_score,rounds_played\n')
- f.write(','.join([str(game_state[key]) for key in game_state.keys()]) + '\n')
-
- st.sidebar.write('---')
- if st.sidebar.button('New Game'):
- # Reset game state
- game_state = start_game()
-
- # Save game state to file
- with open('game_state.txt', 'w') as f:
- f.write('player_cards,ai_cards,player_score,ai_score,rounds_played\n')
- f.write(','.join([str(game_state[key]) for key in game_state.keys()]) + '\n')
-
- if st.sidebar.button('Reset Game'):
- # Reset game state
- game_state = start_game()
-
- # Truncate game_state.txt file by deleting it and reloading it
- os.remove('game_state.txt')
- open('game_state.txt', 'w').close()
-
- # Save game state to file
- with open('game_state.txt', 'w') as f:
- f.write('player_cards,ai_cards,player_score,ai_score,rounds_played\n')
- f.write(','.join([str(game_state[key]) for key in game_state.keys()]) + '\n')
-
- if st.sidebar.button('Save'):
-        # Append the current state to the history file, writing the header first if the file is new
-        write_header = not os.path.exists('game_state.txt')
-        with open('game_state.txt', 'a') as f:
-            if write_header:
-                f.write('player_cards,ai_cards,player_score,ai_score,rounds_played\n')
-            f.write(','.join([str(game_state[key]) for key in game_state.keys()]) + '\n')
-
- if st.sidebar.button('Reload'):
- # Reload game state from file
- game_state = {'player_cards': [], 'ai_cards': [], 'player_score': 0, 'ai_score': 0, 'rounds_played': 0}
- with open('game_state.txt', 'r') as f:
- headers = f.readline().strip().split(',')
- data = f.readlines()
- if len(data) > 0:
- last_line = data[-1].strip().split(',')
- for i in range(len(headers)):
- game_state[headers[i]] = eval(last_line[i])
-
- # Show game history
- st.write('# Game History')
- if not st.checkbox('Show game history'):
- if checkbox:
- with open('game_state.txt', 'r') as f:
- lines = f.readlines()
- headers = [header.strip() for header in lines[0].strip().split(',')]
- data = [
- [eval(cell) if cell.isdigit() else cell for cell in line.strip().split(',')]
- for line in lines[1:]
- ]
- st.dataframe(data, columns=headers)
-
- # Add download button for game history
- if st.sidebar.button('Download Game History'):
- st.sidebar.markdown(create_download_link('game_state.txt'), unsafe_allow_html=True)
-
-# Load game state from file or start new game
-if os.path.exists('game_state.txt'):
- game_state = {'player_cards': [], 'ai_cards': [], 'player_score': 0, 'ai_score': 0, 'rounds_played': 0}
- with open('game_state.txt', 'r') as f:
- headers = f.readline().strip().split(',')
- data = f.readlines()
- if len(data) > 0:
- last_line = data[-1].strip().split(',')
-# for i in range(len(headers)):
-# game_state[headers[i]] = eval(last_line[i])
-else:
- game_state = start_game()
-
-game_state = start_game()
-game_ui(game_state)
diff --git a/spaces/azusarang/so-vits-svc-models-ba_P/diffusion/diffusion.py b/spaces/azusarang/so-vits-svc-models-ba_P/diffusion/diffusion.py
deleted file mode 100644
index decc1d31503e93e6611b02ced7b9c6f00b95db58..0000000000000000000000000000000000000000
--- a/spaces/azusarang/so-vits-svc-models-ba_P/diffusion/diffusion.py
+++ /dev/null
@@ -1,317 +0,0 @@
-from collections import deque
-from functools import partial
-from inspect import isfunction
-import torch.nn.functional as F
-import librosa.sequence
-import numpy as np
-import torch
-from torch import nn
-from tqdm import tqdm
-
-
-def exists(x):
- return x is not None
-
-
-def default(val, d):
- if exists(val):
- return val
- return d() if isfunction(d) else d
-
-
-def extract(a, t, x_shape):
- b, *_ = t.shape
- out = a.gather(-1, t)
- return out.reshape(b, *((1,) * (len(x_shape) - 1)))
-
-
-def noise_like(shape, device, repeat=False):
- repeat_noise = lambda: torch.randn((1, *shape[1:]), device=device).repeat(shape[0], *((1,) * (len(shape) - 1)))
- noise = lambda: torch.randn(shape, device=device)
- return repeat_noise() if repeat else noise()
-
-
-def linear_beta_schedule(timesteps, max_beta=0.02):
- """
- linear schedule
- """
- betas = np.linspace(1e-4, max_beta, timesteps)
- return betas
-
-
-def cosine_beta_schedule(timesteps, s=0.008):
- """
- cosine schedule
- as proposed in https://openreview.net/forum?id=-NEXDKk8gZ
- """
- steps = timesteps + 1
- x = np.linspace(0, steps, steps)
- alphas_cumprod = np.cos(((x / steps) + s) / (1 + s) * np.pi * 0.5) ** 2
- alphas_cumprod = alphas_cumprod / alphas_cumprod[0]
- betas = 1 - (alphas_cumprod[1:] / alphas_cumprod[:-1])
- return np.clip(betas, a_min=0, a_max=0.999)
-
-
-beta_schedule = {
- "cosine": cosine_beta_schedule,
- "linear": linear_beta_schedule,
-}
-
-
-class GaussianDiffusion(nn.Module):
- def __init__(self,
- denoise_fn,
- out_dims=128,
- timesteps=1000,
- k_step=1000,
- max_beta=0.02,
- spec_min=-12,
- spec_max=2):
- super().__init__()
- self.denoise_fn = denoise_fn
- self.out_dims = out_dims
- betas = beta_schedule['linear'](timesteps, max_beta=max_beta)
-
- alphas = 1. - betas
- alphas_cumprod = np.cumprod(alphas, axis=0)
- alphas_cumprod_prev = np.append(1., alphas_cumprod[:-1])
-
- timesteps, = betas.shape
- self.num_timesteps = int(timesteps)
- self.k_step = k_step
-
- self.noise_list = deque(maxlen=4)
-
- to_torch = partial(torch.tensor, dtype=torch.float32)
-
- self.register_buffer('betas', to_torch(betas))
- self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod))
- self.register_buffer('alphas_cumprod_prev', to_torch(alphas_cumprod_prev))
-
- # calculations for diffusion q(x_t | x_{t-1}) and others
- self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod)))
- self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod)))
- self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod)))
- self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod)))
- self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod - 1)))
-
- # calculations for posterior q(x_{t-1} | x_t, x_0)
- posterior_variance = betas * (1. - alphas_cumprod_prev) / (1. - alphas_cumprod)
- # above: equal to 1. / (1. / (1. - alpha_cumprod_tm1) + alpha_t / beta_t)
- self.register_buffer('posterior_variance', to_torch(posterior_variance))
- # below: log calculation clipped because the posterior variance is 0 at the beginning of the diffusion chain
- self.register_buffer('posterior_log_variance_clipped', to_torch(np.log(np.maximum(posterior_variance, 1e-20))))
- self.register_buffer('posterior_mean_coef1', to_torch(
- betas * np.sqrt(alphas_cumprod_prev) / (1. - alphas_cumprod)))
- self.register_buffer('posterior_mean_coef2', to_torch(
- (1. - alphas_cumprod_prev) * np.sqrt(alphas) / (1. - alphas_cumprod)))
-
- self.register_buffer('spec_min', torch.FloatTensor([spec_min])[None, None, :out_dims])
- self.register_buffer('spec_max', torch.FloatTensor([spec_max])[None, None, :out_dims])
-
- def q_mean_variance(self, x_start, t):
- mean = extract(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start
- variance = extract(1. - self.alphas_cumprod, t, x_start.shape)
- log_variance = extract(self.log_one_minus_alphas_cumprod, t, x_start.shape)
- return mean, variance, log_variance
-
- def predict_start_from_noise(self, x_t, t, noise):
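-        # invert the forward process: x_0 = (x_t - sqrt(1 - alpha_bar_t) * noise) / sqrt(alpha_bar_t)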
- return (
- extract(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t -
- extract(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) * noise
- )
-
- def q_posterior(self, x_start, x_t, t):
- posterior_mean = (
- extract(self.posterior_mean_coef1, t, x_t.shape) * x_start +
- extract(self.posterior_mean_coef2, t, x_t.shape) * x_t
- )
- posterior_variance = extract(self.posterior_variance, t, x_t.shape)
- posterior_log_variance_clipped = extract(self.posterior_log_variance_clipped, t, x_t.shape)
- return posterior_mean, posterior_variance, posterior_log_variance_clipped
-
- def p_mean_variance(self, x, t, cond):
- noise_pred = self.denoise_fn(x, t, cond=cond)
- x_recon = self.predict_start_from_noise(x, t=t, noise=noise_pred)
-
- x_recon.clamp_(-1., 1.)
-
- model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t)
- return model_mean, posterior_variance, posterior_log_variance
-
- @torch.no_grad()
- def p_sample(self, x, t, cond, clip_denoised=True, repeat_noise=False):
- b, *_, device = *x.shape, x.device
- model_mean, _, model_log_variance = self.p_mean_variance(x=x, t=t, cond=cond)
- noise = noise_like(x.shape, device, repeat_noise)
- # no noise when t == 0
- nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1)))
- return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise
-
- @torch.no_grad()
- def p_sample_plms(self, x, t, interval, cond, clip_denoised=True, repeat_noise=False):
- """
- Use the PLMS method from
- [Pseudo Numerical Methods for Diffusion Models on Manifolds](https://arxiv.org/abs/2202.09778).
- """
-
- def get_x_pred(x, noise_t, t):
- a_t = extract(self.alphas_cumprod, t, x.shape)
- a_prev = extract(self.alphas_cumprod, torch.max(t - interval, torch.zeros_like(t)), x.shape)
- a_t_sq, a_prev_sq = a_t.sqrt(), a_prev.sqrt()
-
- x_delta = (a_prev - a_t) * ((1 / (a_t_sq * (a_t_sq + a_prev_sq))) * x - 1 / (
- a_t_sq * (((1 - a_prev) * a_t).sqrt() + ((1 - a_t) * a_prev).sqrt())) * noise_t)
- x_pred = x + x_delta
-
- return x_pred
-
- noise_list = self.noise_list
- noise_pred = self.denoise_fn(x, t, cond=cond)
-
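-        # combine the current and previous noise predictions with linear multistep (Adams-Bashforth)
-        # coefficients, falling back to lower-order rules while the history buffer is still filling up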
- if len(noise_list) == 0:
- x_pred = get_x_pred(x, noise_pred, t)
- noise_pred_prev = self.denoise_fn(x_pred, max(t - interval, 0), cond=cond)
- noise_pred_prime = (noise_pred + noise_pred_prev) / 2
- elif len(noise_list) == 1:
- noise_pred_prime = (3 * noise_pred - noise_list[-1]) / 2
- elif len(noise_list) == 2:
- noise_pred_prime = (23 * noise_pred - 16 * noise_list[-1] + 5 * noise_list[-2]) / 12
- else:
- noise_pred_prime = (55 * noise_pred - 59 * noise_list[-1] + 37 * noise_list[-2] - 9 * noise_list[-3]) / 24
-
- x_prev = get_x_pred(x, noise_pred_prime, t)
- noise_list.append(noise_pred)
-
- return x_prev
-
- def q_sample(self, x_start, t, noise=None):
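-        # forward diffusion q(x_t | x_0): x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise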
- noise = default(noise, lambda: torch.randn_like(x_start))
- return (
- extract(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start +
- extract(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) * noise
- )
-
- def p_losses(self, x_start, t, cond, noise=None, loss_type='l2'):
- noise = default(noise, lambda: torch.randn_like(x_start))
-
- x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise)
- x_recon = self.denoise_fn(x_noisy, t, cond)
-
- if loss_type == 'l1':
- loss = (noise - x_recon).abs().mean()
- elif loss_type == 'l2':
- loss = F.mse_loss(noise, x_recon)
- else:
- raise NotImplementedError()
-
- return loss
-
- def forward(self,
- condition,
- gt_spec=None,
- infer=True,
- infer_speedup=10,
- method='dpm-solver',
- k_step=300,
- use_tqdm=True):
- """
- conditioning diffusion, use fastspeech2 encoder output as the condition
- """
- cond = condition.transpose(1, 2)
- b, device = condition.shape[0], condition.device
-
- if not infer:
- spec = self.norm_spec(gt_spec)
- t = torch.randint(0, self.k_step, (b,), device=device).long()
- norm_spec = spec.transpose(1, 2)[:, None, :, :] # [B, 1, M, T]
- return self.p_losses(norm_spec, t, cond=cond)
- else:
- shape = (cond.shape[0], 1, self.out_dims, cond.shape[2])
-
- if gt_spec is None:
- t = self.k_step
- x = torch.randn(shape, device=device)
- else:
- t = k_step
- norm_spec = self.norm_spec(gt_spec)
- norm_spec = norm_spec.transpose(1, 2)[:, None, :, :]
- x = self.q_sample(x_start=norm_spec, t=torch.tensor([t - 1], device=device).long())
-
- if method is not None and infer_speedup > 1:
- if method == 'dpm-solver':
- from .dpm_solver_pytorch import NoiseScheduleVP, model_wrapper, DPM_Solver
- # 1. Define the noise schedule.
- noise_schedule = NoiseScheduleVP(schedule='discrete', betas=self.betas[:t])
-
- # 2. Convert your discrete-time `model` to the continuous-time
- # noise prediction model. Here is an example for a diffusion model
-                    # `model` with the noise prediction type ("noise").
- def my_wrapper(fn):
- def wrapped(x, t, **kwargs):
- ret = fn(x, t, **kwargs)
- if use_tqdm:
- self.bar.update(1)
- return ret
-
- return wrapped
-
- model_fn = model_wrapper(
- my_wrapper(self.denoise_fn),
- noise_schedule,
- model_type="noise", # or "x_start" or "v" or "score"
- model_kwargs={"cond": cond}
- )
-
- # 3. Define dpm-solver and sample by singlestep DPM-Solver.
- # (We recommend singlestep DPM-Solver for unconditional sampling)
- # You can adjust the `steps` to balance the computation
- # costs and the sample quality.
- dpm_solver = DPM_Solver(model_fn, noise_schedule)
-
- steps = t // infer_speedup
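-                    # e.g. with the defaults above (k_step=300, infer_speedup=10) this runs 30 solver steps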
- if use_tqdm:
- self.bar = tqdm(desc="sample time step", total=steps)
- x = dpm_solver.sample(
- x,
- steps=steps,
- order=3,
- skip_type="time_uniform",
- method="singlestep",
- )
- if use_tqdm:
- self.bar.close()
- elif method == 'pndm':
- self.noise_list = deque(maxlen=4)
- if use_tqdm:
- for i in tqdm(
- reversed(range(0, t, infer_speedup)), desc='sample time step',
- total=t // infer_speedup,
- ):
- x = self.p_sample_plms(
- x, torch.full((b,), i, device=device, dtype=torch.long),
- infer_speedup, cond=cond
- )
- else:
- for i in reversed(range(0, t, infer_speedup)):
- x = self.p_sample_plms(
- x, torch.full((b,), i, device=device, dtype=torch.long),
- infer_speedup, cond=cond
- )
- else:
- raise NotImplementedError(method)
- else:
- if use_tqdm:
- for i in tqdm(reversed(range(0, t)), desc='sample time step', total=t):
- x = self.p_sample(x, torch.full((b,), i, device=device, dtype=torch.long), cond)
- else:
- for i in reversed(range(0, t)):
- x = self.p_sample(x, torch.full((b,), i, device=device, dtype=torch.long), cond)
- x = x.squeeze(1).transpose(1, 2) # [B, T, M]
- return self.denorm_spec(x)
-
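-    # norm_spec/denorm_spec linearly map mel values between [spec_min, spec_max] and [-1, 1].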
- def norm_spec(self, x):
- return (x - self.spec_min) / (self.spec_max - self.spec_min) * 2 - 1
-
- def denorm_spec(self, x):
- return (x + 1) / 2 * (self.spec_max - self.spec_min) + self.spec_min
diff --git a/spaces/azusarang/so-vits-svc-models-ba_P/models.py b/spaces/azusarang/so-vits-svc-models-ba_P/models.py
deleted file mode 100644
index 4cfc5c4c9920cbd1a082f83e861faf86cdd41e74..0000000000000000000000000000000000000000
--- a/spaces/azusarang/so-vits-svc-models-ba_P/models.py
+++ /dev/null
@@ -1,420 +0,0 @@
-import copy
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import modules.attentions as attentions
-import modules.commons as commons
-import modules.modules as modules
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-
-import utils
-from modules.commons import init_weights, get_padding
-from vdecoder.hifigan.models import Generator
-from utils import f0_to_coarse
-
-class ResidualCouplingBlock(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- n_flows=4,
- gin_channels=0):
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
- for i in range(n_flows):
- self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True))
- self.flows.append(modules.Flip())
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
-
-class Encoder(nn.Module):
- def __init__(self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
-
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels)
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, g=None):
- # print(x.shape,x_lengths.shape)
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
- x = self.pre(x) * x_mask
- x = self.enc(x, x_mask, g=g)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask
-
-
-class TextEncoder(nn.Module):
- def __init__(self,
- out_channels,
- hidden_channels,
- kernel_size,
- n_layers,
- gin_channels=0,
- filter_channels=None,
- n_heads=None,
- p_dropout=None):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
- self.f0_emb = nn.Embedding(256, hidden_channels)
-
- self.enc_ = attentions.Encoder(
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout)
-
- def forward(self, x, x_mask, f0=None, noice_scale=1):
- x = x + self.f0_emb(f0).transpose(1,2)
- x = self.enc_(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs) * noice_scale) * x_mask
-
- return z, m, logs, x_mask
-
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- self.use_spectral_norm = use_spectral_norm
-        norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList([
- norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))),
- ])
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
-        norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList([
- norm_f(Conv1d(1, 16, 15, 1, padding=7)),
- norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
- norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ])
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminator, self).__init__()
- periods = [2,3,5,7,11]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = []
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class SpeakerEncoder(torch.nn.Module):
- def __init__(self, mel_n_channels=80, model_num_layers=3, model_hidden_size=256, model_embedding_size=256):
- super(SpeakerEncoder, self).__init__()
- self.lstm = nn.LSTM(mel_n_channels, model_hidden_size, model_num_layers, batch_first=True)
- self.linear = nn.Linear(model_hidden_size, model_embedding_size)
- self.relu = nn.ReLU()
-
- def forward(self, mels):
- self.lstm.flatten_parameters()
- _, (hidden, _) = self.lstm(mels)
- embeds_raw = self.relu(self.linear(hidden[-1]))
- return embeds_raw / torch.norm(embeds_raw, dim=1, keepdim=True)
-
- def compute_partial_slices(self, total_frames, partial_frames, partial_hop):
- mel_slices = []
- for i in range(0, total_frames-partial_frames, partial_hop):
- mel_range = torch.arange(i, i+partial_frames)
- mel_slices.append(mel_range)
-
- return mel_slices
-
- def embed_utterance(self, mel, partial_frames=128, partial_hop=64):
- mel_len = mel.size(1)
- last_mel = mel[:,-partial_frames:]
-
- if mel_len > partial_frames:
- mel_slices = self.compute_partial_slices(mel_len, partial_frames, partial_hop)
- mels = list(mel[:,s] for s in mel_slices)
- mels.append(last_mel)
- mels = torch.stack(tuple(mels), 0).squeeze(1)
-
- with torch.no_grad():
- partial_embeds = self(mels)
- embed = torch.mean(partial_embeds, axis=0).unsqueeze(0)
- #embed = embed / torch.linalg.norm(embed, 2)
- else:
- with torch.no_grad():
- embed = self(last_mel)
-
- return embed
-
-class F0Decoder(nn.Module):
- def __init__(self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- spk_channels=0):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.spk_channels = spk_channels
-
- self.prenet = nn.Conv1d(hidden_channels, hidden_channels, 3, padding=1)
- self.decoder = attentions.FFT(
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout)
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
-        self.f0_prenet = nn.Conv1d(1, hidden_channels, 3, padding=1)
- self.cond = nn.Conv1d(spk_channels, hidden_channels, 1)
-
- def forward(self, x, norm_f0, x_mask, spk_emb=None):
- x = torch.detach(x)
- if (spk_emb is not None):
- x = x + self.cond(spk_emb)
- x += self.f0_prenet(norm_f0)
- x = self.prenet(x) * x_mask
- x = self.decoder(x * x_mask, x_mask)
- x = self.proj(x) * x_mask
- return x
-
-
-class SynthesizerTrn(nn.Module):
- """
- Synthesizer for Training
- """
-
- def __init__(self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels,
- ssl_dim,
- n_speakers,
- sampling_rate=44100,
- **kwargs):
-
- super().__init__()
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- self.ssl_dim = ssl_dim
- self.emb_g = nn.Embedding(n_speakers, gin_channels)
-
- self.pre = nn.Conv1d(ssl_dim, hidden_channels, kernel_size=5, padding=2)
-
- self.enc_p = TextEncoder(
- inter_channels,
- hidden_channels,
- filter_channels=filter_channels,
- n_heads=n_heads,
- n_layers=n_layers,
- kernel_size=kernel_size,
- p_dropout=p_dropout
- )
- hps = {
- "sampling_rate": sampling_rate,
- "inter_channels": inter_channels,
- "resblock": resblock,
- "resblock_kernel_sizes": resblock_kernel_sizes,
- "resblock_dilation_sizes": resblock_dilation_sizes,
- "upsample_rates": upsample_rates,
- "upsample_initial_channel": upsample_initial_channel,
- "upsample_kernel_sizes": upsample_kernel_sizes,
- "gin_channels": gin_channels,
- }
- self.dec = Generator(h=hps)
- self.enc_q = Encoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels)
- self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels)
- self.f0_decoder = F0Decoder(
- 1,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- spk_channels=gin_channels
- )
- self.emb_uv = nn.Embedding(2, hidden_channels)
-
- def forward(self, c, f0, uv, spec, g=None, c_lengths=None, spec_lengths=None):
- g = self.emb_g(g).transpose(1,2)
- # ssl prenet
- x_mask = torch.unsqueeze(commons.sequence_mask(c_lengths, c.size(2)), 1).to(c.dtype)
- x = self.pre(c) * x_mask + self.emb_uv(uv.long()).transpose(1,2)
-
- # f0 predict
- lf0 = 2595. * torch.log10(1. + f0.unsqueeze(1) / 700.) / 500
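-        # 2595 * log10(1 + f/700) converts Hz to the mel scale; the division by 500 just rescales the log-f0 before normalization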
- norm_lf0 = utils.normalize_f0(lf0, x_mask, uv)
- pred_lf0 = self.f0_decoder(x, norm_lf0, x_mask, spk_emb=g)
-
- # encoder
- z_ptemp, m_p, logs_p, _ = self.enc_p(x, x_mask, f0=f0_to_coarse(f0))
- z, m_q, logs_q, spec_mask = self.enc_q(spec, spec_lengths, g=g)
-
- # flow
- z_p = self.flow(z, spec_mask, g=g)
- z_slice, pitch_slice, ids_slice = commons.rand_slice_segments_with_pitch(z, f0, spec_lengths, self.segment_size)
-
- # nsf decoder
- o = self.dec(z_slice, g=g, f0=pitch_slice)
-
- return o, ids_slice, spec_mask, (z, z_p, m_p, logs_p, m_q, logs_q), pred_lf0, norm_lf0, lf0
-
- def infer(self, c, f0, uv, g=None, noice_scale=0.35, predict_f0=False):
- c_lengths = (torch.ones(c.size(0)) * c.size(-1)).to(c.device)
- g = self.emb_g(g).transpose(1,2)
- x_mask = torch.unsqueeze(commons.sequence_mask(c_lengths, c.size(2)), 1).to(c.dtype)
- x = self.pre(c) * x_mask + self.emb_uv(uv.long()).transpose(1,2)
-
- if predict_f0:
- lf0 = 2595. * torch.log10(1. + f0.unsqueeze(1) / 700.) / 500
- norm_lf0 = utils.normalize_f0(lf0, x_mask, uv, random_scale=False)
- pred_lf0 = self.f0_decoder(x, norm_lf0, x_mask, spk_emb=g)
- f0 = (700 * (torch.pow(10, pred_lf0 * 500 / 2595) - 1)).squeeze(1)
-
- z_p, m_p, logs_p, c_mask = self.enc_p(x, x_mask, f0=f0_to_coarse(f0), noice_scale=noice_scale)
- z = self.flow(z_p, c_mask, g=g, reverse=True)
- o = self.dec(z * c_mask, g=g, f0=f0)
-        return o, f0
diff --git a/spaces/badayvedat/LLaVA/llava/model/multimodal_encoder/clip_encoder.py b/spaces/badayvedat/LLaVA/llava/model/multimodal_encoder/clip_encoder.py
deleted file mode 100644
index dbb9015b0fc9fa93483ba77cc303b793e86c36fc..0000000000000000000000000000000000000000
--- a/spaces/badayvedat/LLaVA/llava/model/multimodal_encoder/clip_encoder.py
+++ /dev/null
@@ -1,78 +0,0 @@
-import torch
-import torch.nn as nn
-
-from transformers import CLIPVisionModel, CLIPImageProcessor, CLIPVisionConfig
-
-
-class CLIPVisionTower(nn.Module):
- def __init__(self, vision_tower, args, delay_load=False):
- super().__init__()
-
- self.is_loaded = False
-
- self.vision_tower_name = vision_tower
- self.select_layer = args.mm_vision_select_layer
- self.select_feature = getattr(args, 'mm_vision_select_feature', 'patch')
-
- if not delay_load:
- self.load_model()
- else:
- self.cfg_only = CLIPVisionConfig.from_pretrained(self.vision_tower_name)
-
- def load_model(self):
- self.image_processor = CLIPImageProcessor.from_pretrained(self.vision_tower_name)
- self.vision_tower = CLIPVisionModel.from_pretrained(self.vision_tower_name)
- self.vision_tower.requires_grad_(False)
-
- self.is_loaded = True
-
- def feature_select(self, image_forward_outs):
- image_features = image_forward_outs.hidden_states[self.select_layer]
- if self.select_feature == 'patch':
- image_features = image_features[:, 1:]
- elif self.select_feature == 'cls_patch':
- image_features = image_features
- else:
- raise ValueError(f'Unexpected select feature: {self.select_feature}')
- return image_features
-
- @torch.no_grad()
- def forward(self, images):
- if type(images) is list:
- image_features = []
- for image in images:
- image_forward_out = self.vision_tower(image.to(device=self.device, dtype=self.dtype).unsqueeze(0), output_hidden_states=True)
- image_feature = self.feature_select(image_forward_out).to(image.dtype)
- image_features.append(image_feature)
- else:
- image_forward_outs = self.vision_tower(images.to(device=self.device, dtype=self.dtype), output_hidden_states=True)
- image_features = self.feature_select(image_forward_outs).to(images.dtype)
-
- return image_features
-
- @property
- def dummy_feature(self):
- return torch.zeros(1, self.hidden_size, device=self.device, dtype=self.dtype)
-
- @property
- def dtype(self):
- return self.vision_tower.dtype
-
- @property
- def device(self):
- return self.vision_tower.device
-
- @property
- def config(self):
- if self.is_loaded:
- return self.vision_tower.config
- else:
- return self.cfg_only
-
- @property
- def hidden_size(self):
- return self.config.hidden_size
-
- @property
- def num_patches(self):
- return (self.config.image_size // self.config.patch_size) ** 2
diff --git a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/color_fragment.glsl.js b/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/color_fragment.glsl.js
deleted file mode 100644
index 6b62d5429f8ad3ca112dda89dc954a9488e0a6d9..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/color_fragment.glsl.js
+++ /dev/null
@@ -1,7 +0,0 @@
-export default /* glsl */`
-#ifdef USE_COLOR
-
- diffuseColor.rgb *= vColor;
-
-#endif
-`;
diff --git a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/UniformsLib.d.ts b/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/UniformsLib.d.ts
deleted file mode 100644
index ffdff66938639903cee5f2000f7aec3fb1ae78bc..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/UniformsLib.d.ts
+++ /dev/null
@@ -1,136 +0,0 @@
-export interface IUniform {
- value: any;
-}
-
-export let UniformsLib: {
- common: {
- diffuse: IUniform;
- opacity: IUniform;
- map: IUniform;
- uvTransform: IUniform;
- alphaMap: IUniform;
- };
- specularmap: {
- specularMap: IUniform;
- };
- envmap: {
- envMap: IUniform;
- flipEnvMap: IUniform;
- reflectivity: IUniform;
- refractionRatio: IUniform;
- maxMipLevel: IUniform;
- };
- aomap: {
- aoMap: IUniform;
- aoMapIntensity: IUniform;
- };
- lightmap: {
- lightMap: IUniform;
- lightMapIntensity: IUniform;
- };
- emissivemap: {
- emissiveMap: IUniform;
- };
- bumpmap: {
- bumpMap: IUniform;
- bumpScale: IUniform;
- };
- normalmap: {
- normalMap: IUniform;
- normalScale: IUniform;
- };
- displacementmap: {
- displacementMap: IUniform;
- displacementScale: IUniform;
- displacementBias: IUniform;
- };
- roughnessmap: {
- roughnessMap: IUniform;
- };
- metalnessmap: {
- metalnessMap: IUniform;
- };
- gradientmap: {
- gradientMap: IUniform;
- };
- fog: {
- fogDensity: IUniform;
- fogNear: IUniform;
- fogFar: IUniform;
- fogColor: IUniform;
- };
- lights: {
- ambientLightColor: IUniform;
- directionalLights: {
- value: any[];
- properties: {
- direction: {};
- color: {};
- shadow: {};
- shadowBias: {};
- shadowRadius: {};
- shadowMapSize: {};
- };
- };
- directionalShadowMap: IUniform;
- directionalShadowMatrix: IUniform;
- spotLights: {
- value: any[];
- properties: {
- color: {};
- position: {};
- direction: {};
- distance: {};
- coneCos: {};
- penumbraCos: {};
- decay: {};
- shadow: {};
- shadowBias: {};
- shadowRadius: {};
- shadowMapSize: {};
- };
- };
- spotShadowMap: IUniform;
- spotShadowMatrix: IUniform;
- pointLights: {
- value: any[];
- properties: {
- color: {};
- position: {};
- decay: {};
- distance: {};
- shadow: {};
- shadowBias: {};
- shadowRadius: {};
- shadowMapSize: {};
- };
- };
- pointShadowMap: IUniform;
- pointShadowMatrix: IUniform;
- hemisphereLights: {
- value: any[];
- properties: {
- direction: {};
- skycolor: {};
- groundColor: {};
- };
- };
- rectAreaLights: {
- value: any[];
- properties: {
- color: {};
- position: {};
- width: {};
- height: {};
- };
- };
- };
- points: {
- diffuse: IUniform;
- opacity: IUniform;
- size: IUniform;
- scale: IUniform;
- map: IUniform;
- uvTransform: IUniform;
- };
-};
diff --git a/spaces/baruga/gpt4-sandbox/README.md b/spaces/baruga/gpt4-sandbox/README.md
deleted file mode 100644
index 71125537bc4462df805de88716dc0843fc82529b..0000000000000000000000000000000000000000
--- a/spaces/baruga/gpt4-sandbox/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Gpt4 Sandbox
-emoji: 💩
-colorFrom: purple
-colorTo: gray
-sdk: gradio
-sdk_version: 3.23.0
-app_file: app.py
-pinned: false
-license: unknown
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/beihai/GFPGAN-V1.3-whole-image/gfpgan/archs/stylegan2_clean_arch.py b/spaces/beihai/GFPGAN-V1.3-whole-image/gfpgan/archs/stylegan2_clean_arch.py
deleted file mode 100644
index 9e2ee94e50401b95e4c9997adef5581d521d725f..0000000000000000000000000000000000000000
--- a/spaces/beihai/GFPGAN-V1.3-whole-image/gfpgan/archs/stylegan2_clean_arch.py
+++ /dev/null
@@ -1,368 +0,0 @@
-import math
-import random
-import torch
-from basicsr.archs.arch_util import default_init_weights
-from basicsr.utils.registry import ARCH_REGISTRY
-from torch import nn
-from torch.nn import functional as F
-
-
-class NormStyleCode(nn.Module):
-
- def forward(self, x):
- """Normalize the style codes.
-
- Args:
- x (Tensor): Style codes with shape (b, c).
-
- Returns:
- Tensor: Normalized tensor.
- """
- return x * torch.rsqrt(torch.mean(x**2, dim=1, keepdim=True) + 1e-8)
-
-
-class ModulatedConv2d(nn.Module):
- """Modulated Conv2d used in StyleGAN2.
-
- There is no bias in ModulatedConv2d.
-
- Args:
- in_channels (int): Channel number of the input.
- out_channels (int): Channel number of the output.
- kernel_size (int): Size of the convolving kernel.
- num_style_feat (int): Channel number of style features.
- demodulate (bool): Whether to demodulate in the conv layer. Default: True.
- sample_mode (str | None): Indicating 'upsample', 'downsample' or None. Default: None.
- eps (float): A value added to the denominator for numerical stability. Default: 1e-8.
- """
-
- def __init__(self,
- in_channels,
- out_channels,
- kernel_size,
- num_style_feat,
- demodulate=True,
- sample_mode=None,
- eps=1e-8):
- super(ModulatedConv2d, self).__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.demodulate = demodulate
- self.sample_mode = sample_mode
- self.eps = eps
-
- # modulation inside each modulated conv
- self.modulation = nn.Linear(num_style_feat, in_channels, bias=True)
- # initialization
- default_init_weights(self.modulation, scale=1, bias_fill=1, a=0, mode='fan_in', nonlinearity='linear')
-
- self.weight = nn.Parameter(
- torch.randn(1, out_channels, in_channels, kernel_size, kernel_size) /
- math.sqrt(in_channels * kernel_size**2))
- self.padding = kernel_size // 2
-
- def forward(self, x, style):
- """Forward function.
-
- Args:
- x (Tensor): Tensor with shape (b, c, h, w).
- style (Tensor): Tensor with shape (b, num_style_feat).
-
- Returns:
- Tensor: Modulated tensor after convolution.
- """
- b, c, h, w = x.shape # c = c_in
- # weight modulation
- style = self.modulation(style).view(b, 1, c, 1, 1)
- # self.weight: (1, c_out, c_in, k, k); style: (b, 1, c, 1, 1)
- weight = self.weight * style # (b, c_out, c_in, k, k)
-
- if self.demodulate:
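-            # demodulation: rescale each output filter by 1 / sqrt(sum of its squared modulated weights + eps)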
- demod = torch.rsqrt(weight.pow(2).sum([2, 3, 4]) + self.eps)
- weight = weight * demod.view(b, self.out_channels, 1, 1, 1)
-
- weight = weight.view(b * self.out_channels, c, self.kernel_size, self.kernel_size)
-
- # upsample or downsample if necessary
- if self.sample_mode == 'upsample':
- x = F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=False)
- elif self.sample_mode == 'downsample':
- x = F.interpolate(x, scale_factor=0.5, mode='bilinear', align_corners=False)
-
- b, c, h, w = x.shape
- x = x.view(1, b * c, h, w)
- # weight: (b*c_out, c_in, k, k), groups=b
- out = F.conv2d(x, weight, padding=self.padding, groups=b)
- out = out.view(b, self.out_channels, *out.shape[2:4])
-
- return out
-
- def __repr__(self):
- return (f'{self.__class__.__name__}(in_channels={self.in_channels}, out_channels={self.out_channels}, '
- f'kernel_size={self.kernel_size}, demodulate={self.demodulate}, sample_mode={self.sample_mode})')
-
-
-class StyleConv(nn.Module):
- """Style conv used in StyleGAN2.
-
- Args:
- in_channels (int): Channel number of the input.
- out_channels (int): Channel number of the output.
- kernel_size (int): Size of the convolving kernel.
- num_style_feat (int): Channel number of style features.
-        demodulate (bool): Whether to demodulate in the conv layer. Default: True.
- sample_mode (str | None): Indicating 'upsample', 'downsample' or None. Default: None.
- """
-
- def __init__(self, in_channels, out_channels, kernel_size, num_style_feat, demodulate=True, sample_mode=None):
- super(StyleConv, self).__init__()
- self.modulated_conv = ModulatedConv2d(
- in_channels, out_channels, kernel_size, num_style_feat, demodulate=demodulate, sample_mode=sample_mode)
- self.weight = nn.Parameter(torch.zeros(1)) # for noise injection
- self.bias = nn.Parameter(torch.zeros(1, out_channels, 1, 1))
- self.activate = nn.LeakyReLU(negative_slope=0.2, inplace=True)
-
- def forward(self, x, style, noise=None):
- # modulate
- out = self.modulated_conv(x, style) * 2**0.5 # for conversion
- # noise injection
- if noise is None:
- b, _, h, w = out.shape
- noise = out.new_empty(b, 1, h, w).normal_()
- out = out + self.weight * noise
- # add bias
- out = out + self.bias
- # activation
- out = self.activate(out)
- return out
-
-
-class ToRGB(nn.Module):
- """To RGB (image space) from features.
-
- Args:
- in_channels (int): Channel number of input.
- num_style_feat (int): Channel number of style features.
- upsample (bool): Whether to upsample. Default: True.
- """
-
- def __init__(self, in_channels, num_style_feat, upsample=True):
- super(ToRGB, self).__init__()
- self.upsample = upsample
- self.modulated_conv = ModulatedConv2d(
- in_channels, 3, kernel_size=1, num_style_feat=num_style_feat, demodulate=False, sample_mode=None)
- self.bias = nn.Parameter(torch.zeros(1, 3, 1, 1))
-
- def forward(self, x, style, skip=None):
- """Forward function.
-
- Args:
- x (Tensor): Feature tensor with shape (b, c, h, w).
- style (Tensor): Tensor with shape (b, num_style_feat).
- skip (Tensor): Base/skip tensor. Default: None.
-
- Returns:
- Tensor: RGB images.
- """
- out = self.modulated_conv(x, style)
- out = out + self.bias
- if skip is not None:
- if self.upsample:
- skip = F.interpolate(skip, scale_factor=2, mode='bilinear', align_corners=False)
- out = out + skip
- return out
-
-
-class ConstantInput(nn.Module):
- """Constant input.
-
- Args:
- num_channel (int): Channel number of constant input.
- size (int): Spatial size of constant input.
- """
-
- def __init__(self, num_channel, size):
- super(ConstantInput, self).__init__()
- self.weight = nn.Parameter(torch.randn(1, num_channel, size, size))
-
- def forward(self, batch):
- out = self.weight.repeat(batch, 1, 1, 1)
- return out
-
-
-@ARCH_REGISTRY.register()
-class StyleGAN2GeneratorClean(nn.Module):
- """Clean version of StyleGAN2 Generator.
-
- Args:
- out_size (int): The spatial size of outputs.
- num_style_feat (int): Channel number of style features. Default: 512.
- num_mlp (int): Layer number of MLP style layers. Default: 8.
- channel_multiplier (int): Channel multiplier for large networks of StyleGAN2. Default: 2.
- narrow (float): Narrow ratio for channels. Default: 1.0.
- """
-
- def __init__(self, out_size, num_style_feat=512, num_mlp=8, channel_multiplier=2, narrow=1):
- super(StyleGAN2GeneratorClean, self).__init__()
- # Style MLP layers
- self.num_style_feat = num_style_feat
- style_mlp_layers = [NormStyleCode()]
- for i in range(num_mlp):
- style_mlp_layers.extend(
- [nn.Linear(num_style_feat, num_style_feat, bias=True),
- nn.LeakyReLU(negative_slope=0.2, inplace=True)])
- self.style_mlp = nn.Sequential(*style_mlp_layers)
- # initialization
- default_init_weights(self.style_mlp, scale=1, bias_fill=0, a=0.2, mode='fan_in', nonlinearity='leaky_relu')
-
- # channel list
- channels = {
- '4': int(512 * narrow),
- '8': int(512 * narrow),
- '16': int(512 * narrow),
- '32': int(512 * narrow),
- '64': int(256 * channel_multiplier * narrow),
- '128': int(128 * channel_multiplier * narrow),
- '256': int(64 * channel_multiplier * narrow),
- '512': int(32 * channel_multiplier * narrow),
- '1024': int(16 * channel_multiplier * narrow)
- }
- self.channels = channels
-
- self.constant_input = ConstantInput(channels['4'], size=4)
- self.style_conv1 = StyleConv(
- channels['4'],
- channels['4'],
- kernel_size=3,
- num_style_feat=num_style_feat,
- demodulate=True,
- sample_mode=None)
- self.to_rgb1 = ToRGB(channels['4'], num_style_feat, upsample=False)
-
- self.log_size = int(math.log(out_size, 2))
- self.num_layers = (self.log_size - 2) * 2 + 1
- self.num_latent = self.log_size * 2 - 2
-
- self.style_convs = nn.ModuleList()
- self.to_rgbs = nn.ModuleList()
- self.noises = nn.Module()
-
- in_channels = channels['4']
- # noise
- for layer_idx in range(self.num_layers):
- resolution = 2**((layer_idx + 5) // 2)
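-            # e.g. layer 0 -> 4x4 noise, layers 1-2 -> 8x8, layers 3-4 -> 16x16, and so on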
- shape = [1, 1, resolution, resolution]
- self.noises.register_buffer(f'noise{layer_idx}', torch.randn(*shape))
- # style convs and to_rgbs
- for i in range(3, self.log_size + 1):
- out_channels = channels[f'{2**i}']
- self.style_convs.append(
- StyleConv(
- in_channels,
- out_channels,
- kernel_size=3,
- num_style_feat=num_style_feat,
- demodulate=True,
- sample_mode='upsample'))
- self.style_convs.append(
- StyleConv(
- out_channels,
- out_channels,
- kernel_size=3,
- num_style_feat=num_style_feat,
- demodulate=True,
- sample_mode=None))
- self.to_rgbs.append(ToRGB(out_channels, num_style_feat, upsample=True))
- in_channels = out_channels
-
- def make_noise(self):
- """Make noise for noise injection."""
- device = self.constant_input.weight.device
- noises = [torch.randn(1, 1, 4, 4, device=device)]
-
- for i in range(3, self.log_size + 1):
- for _ in range(2):
- noises.append(torch.randn(1, 1, 2**i, 2**i, device=device))
-
- return noises
-
- def get_latent(self, x):
- return self.style_mlp(x)
-
- def mean_latent(self, num_latent):
- latent_in = torch.randn(num_latent, self.num_style_feat, device=self.constant_input.weight.device)
- latent = self.style_mlp(latent_in).mean(0, keepdim=True)
- return latent
-
- def forward(self,
- styles,
- input_is_latent=False,
- noise=None,
- randomize_noise=True,
- truncation=1,
- truncation_latent=None,
- inject_index=None,
- return_latents=False):
- """Forward function for StyleGAN2GeneratorClean.
-
- Args:
- styles (list[Tensor]): Sample codes of styles.
- input_is_latent (bool): Whether input is latent style. Default: False.
- noise (Tensor | None): Input noise or None. Default: None.
-            randomize_noise (bool): Randomize noise, used when 'noise' is None. Default: True.
- truncation (float): The truncation ratio. Default: 1.
- truncation_latent (Tensor | None): The truncation latent tensor. Default: None.
- inject_index (int | None): The injection index for mixing noise. Default: None.
- return_latents (bool): Whether to return style latents. Default: False.
- """
- # style codes -> latents with Style MLP layer
- if not input_is_latent:
- styles = [self.style_mlp(s) for s in styles]
- # noises
- if noise is None:
- if randomize_noise:
- noise = [None] * self.num_layers # for each style conv layer
- else: # use the stored noise
- noise = [getattr(self.noises, f'noise{i}') for i in range(self.num_layers)]
- # style truncation
- if truncation < 1:
- style_truncation = []
- for style in styles:
- style_truncation.append(truncation_latent + truncation * (style - truncation_latent))
- styles = style_truncation
- # get style latents with injection
- if len(styles) == 1:
- inject_index = self.num_latent
-
- if styles[0].ndim < 3:
- # repeat latent code for all the layers
- latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1)
- else: # used for encoder with different latent code for each layer
- latent = styles[0]
- elif len(styles) == 2: # mixing noises
- if inject_index is None:
- inject_index = random.randint(1, self.num_latent - 1)
- latent1 = styles[0].unsqueeze(1).repeat(1, inject_index, 1)
- latent2 = styles[1].unsqueeze(1).repeat(1, self.num_latent - inject_index, 1)
- latent = torch.cat([latent1, latent2], 1)
-
- # main generation
- out = self.constant_input(latent.shape[0])
- out = self.style_conv1(out, latent[:, 0], noise=noise[0])
- skip = self.to_rgb1(out, latent[:, 1])
-
- i = 1
- for conv1, conv2, noise1, noise2, to_rgb in zip(self.style_convs[::2], self.style_convs[1::2], noise[1::2],
- noise[2::2], self.to_rgbs):
- out = conv1(out, latent[:, i], noise=noise1)
- out = conv2(out, latent[:, i + 1], noise=noise2)
- skip = to_rgb(out, latent[:, i + 2], skip) # feature back to the rgb space
- i += 2
-
- image = skip
-
- if return_latents:
- return image, latent
- else:
- return image, None
diff --git a/spaces/bigcode/Reasoning-with-StarCoder/README.md b/spaces/bigcode/Reasoning-with-StarCoder/README.md
deleted file mode 100644
index 6b306284d6e1d41f8c95d36164ca24cf7e0236e2..0000000000000000000000000000000000000000
--- a/spaces/bigcode/Reasoning-with-StarCoder/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Reasoning With StarCoder
-emoji: 🧐
-colorFrom: yellow
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.28.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/bigscience-data/filter_values_distributions/app.py b/spaces/bigscience-data/filter_values_distributions/app.py
deleted file mode 100644
index 352c90c54f554703fab7acec3bb24b26de808e02..0000000000000000000000000000000000000000
--- a/spaces/bigscience-data/filter_values_distributions/app.py
+++ /dev/null
@@ -1,77 +0,0 @@
-import streamlit as st
-
-
-PATH_PLOTS = "./plots"
-
-LANGUAGES = {
- "Arabic": "ar",
- "Basque": "eu",
- "Bengali": "bn",
- "Catalan": "ca",
- "Chinese": "zh",
- "English": "en",
- "French": "fr",
- "Hindi": "hi",
- "Indonesian": "id",
- "Portuguese": "pt",
- "Spanish": "es",
- "Urdu": "ur",
- "Vietnamese": "vi",
-}
-
-FILTERS = [
- "number of words",
- "character repetition ratio",
- "word repetition ratio",
- "special character ratio",
- "closed class word ratio",
- "flagged word ratio",
- "perplexity score",
-]
-
-
-class Visualization:
- def __init__(self):
- pass
-
- def set_title(self):
- st.title("Visualization of the distributions of the filter values for the BigScience Corpus")
-
- def choose_language(self):
- chosen_language = st.sidebar.selectbox(
- "Language",
- options=list(LANGUAGES.keys()),
- index=5 # English
- )
- self.chosen_language = LANGUAGES[chosen_language]
-
- def choose_filter(self):
- chosen_filter = st.sidebar.selectbox(
- "Filter on the",
- options=FILTERS,
- index=0
- )
- self.chosen_filter = chosen_filter.replace(" ", "_")
-
- def display_plot(self):
- path_image = f"{PATH_PLOTS}/{self.chosen_language}_{self.chosen_filter}.png"
-
- col1, col2, col3 = st.columns([1,6,1])
- with col1:
- st.write("")
- with col2:
- st.image(path_image)
- with col3:
- st.write("")
-
- def visualization(self):
- self.set_title()
- self.choose_language()
- self.choose_filter()
- self.display_plot()
-
-
-if __name__ == "__main__":
- st.set_page_config(layout="wide")
- visualization = Visualization()
- visualization.visualization()
diff --git a/spaces/bigscience/promptsource/README.md b/spaces/bigscience/promptsource/README.md
deleted file mode 100644
index 71509211b8bc66ec824d8a5433a28504e8029515..0000000000000000000000000000000000000000
--- a/spaces/bigscience/promptsource/README.md
+++ /dev/null
@@ -1,16 +0,0 @@
----
-title: Promptsource
-emoji: 👁
-colorFrom: red
-colorTo: blue
-sdk: streamlit
-sdk_version: 0.82.0
-app_file: promptsource/app.py
-pinned: false
----
-
-PromptSource is a toolkit for creating, sharing and using natural language prompts. This Space is a hosted demo of Promptsource and allows you to browse through existing prompts.
-
-More information about PromptSource and how to use it is available on the [GitHub repository](https://github.com/bigscience-workshop/promptsource).
-
-NB: As of now, this Space is not synced with the GitHub repository automatically and captures the state of the repository on October 21, 2022.
diff --git a/spaces/binker/interpreter5/README.md b/spaces/binker/interpreter5/README.md
deleted file mode 100644
index 14cf33a53fb304374e37d69c1d287a9eee70b7cd..0000000000000000000000000000000000000000
--- a/spaces/binker/interpreter5/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Code Interpreter
-emoji: 👀
-colorFrom: yellow
-colorTo: purple
-sdk: gradio
-sdk_version: 3.44.4
-app_file: app.py
-pinned: false
-license: openrail
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/blaziant/ysda_nlp_ops_update/Dockerfile b/spaces/blaziant/ysda_nlp_ops_update/Dockerfile
deleted file mode 100644
index 587c772a5722b45d5a3cada3294f1a8de98774b7..0000000000000000000000000000000000000000
--- a/spaces/blaziant/ysda_nlp_ops_update/Dockerfile
+++ /dev/null
@@ -1,21 +0,0 @@
-FROM python:3.9
-
-WORKDIR /backend
-
-COPY ./requirements.txt /backend/requirements.txt
-
-RUN pip install --no-cache-dir --upgrade -r /backend/requirements.txt
-
-COPY ./app /backend/app
-COPY ./templates /backend/templates
-
-RUN useradd -m -u 1000 user
-USER user
-ENV HOME=/home/user \
- PATH=/home/user/.local/bin:$PATH
-
-WORKDIR $HOME/app
-
-COPY --chown=user . $HOME/app
-
-CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "7860"]
\ No newline at end of file
diff --git a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/grids/audiogen/__init__.py b/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/grids/audiogen/__init__.py
deleted file mode 100644
index 8a0a2688450ce120088b79c3314a2f267394dc11..0000000000000000000000000000000000000000
--- a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/grids/audiogen/__init__.py
+++ /dev/null
@@ -1,6 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-"""AudioGen grids."""
diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/tests/tracking/test_hungarian_tracker.py b/spaces/brjathu/HMR2.0/vendor/detectron2/tests/tracking/test_hungarian_tracker.py
deleted file mode 100644
index 660c635990a3370945e7f14422dcd978320e4782..0000000000000000000000000000000000000000
--- a/spaces/brjathu/HMR2.0/vendor/detectron2/tests/tracking/test_hungarian_tracker.py
+++ /dev/null
@@ -1,102 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import numpy as np
-import unittest
-from typing import Dict
-import torch
-
-from detectron2.config import instantiate
-from detectron2.structures import Boxes, Instances
-
-
-class TestBaseHungarianTracker(unittest.TestCase):
- def setUp(self):
- self._img_size = np.array([600, 800])
- self._prev_boxes = np.array(
- [
- [101, 101, 200, 200],
- [301, 301, 450, 450],
- ]
- ).astype(np.float32)
- self._prev_scores = np.array([0.9, 0.9])
- self._prev_classes = np.array([1, 1])
- self._prev_masks = np.ones((2, 600, 800)).astype("uint8")
- self._curr_boxes = np.array(
- [
- [302, 303, 451, 452],
- [101, 102, 201, 203],
- ]
- ).astype(np.float32)
- self._curr_scores = np.array([0.95, 0.85])
- self._curr_classes = np.array([1, 1])
- self._curr_masks = np.ones((2, 600, 800)).astype("uint8")
-
- self._prev_instances = {
- "image_size": self._img_size,
- "pred_boxes": self._prev_boxes,
- "scores": self._prev_scores,
- "pred_classes": self._prev_classes,
- "pred_masks": self._prev_masks,
- }
- self._prev_instances = self._convertDictPredictionToInstance(self._prev_instances)
- self._curr_instances = {
- "image_size": self._img_size,
- "pred_boxes": self._curr_boxes,
- "scores": self._curr_scores,
- "pred_classes": self._curr_classes,
- "pred_masks": self._curr_masks,
- }
- self._curr_instances = self._convertDictPredictionToInstance(self._curr_instances)
-
- self._max_num_instances = 200
- self._max_lost_frame_count = 0
- self._min_box_rel_dim = 0.02
- self._min_instance_period = 1
- self._track_iou_threshold = 0.5
-
- def _convertDictPredictionToInstance(self, prediction: Dict) -> Instances:
- """
- convert prediction from Dict to D2 Instances format
- """
- res = Instances(
- image_size=torch.IntTensor(prediction["image_size"]),
- pred_boxes=Boxes(torch.FloatTensor(prediction["pred_boxes"])),
- pred_masks=torch.IntTensor(prediction["pred_masks"]),
- pred_classes=torch.IntTensor(prediction["pred_classes"]),
- scores=torch.FloatTensor(prediction["scores"]),
- )
- return res
-
- def test_init(self):
- cfg = {
- "_target_": "detectron2.tracking.hungarian_tracker.BaseHungarianTracker",
- "video_height": self._img_size[0],
- "video_width": self._img_size[1],
- "max_num_instances": self._max_num_instances,
- "max_lost_frame_count": self._max_lost_frame_count,
- "min_box_rel_dim": self._min_box_rel_dim,
- "min_instance_period": self._min_instance_period,
- "track_iou_threshold": self._track_iou_threshold,
- }
- tracker = instantiate(cfg)
- self.assertTrue(tracker._video_height == self._img_size[0])
-
- def test_initialize_extra_fields(self):
- cfg = {
- "_target_": "detectron2.tracking.hungarian_tracker.BaseHungarianTracker",
- "video_height": self._img_size[0],
- "video_width": self._img_size[1],
- "max_num_instances": self._max_num_instances,
- "max_lost_frame_count": self._max_lost_frame_count,
- "min_box_rel_dim": self._min_box_rel_dim,
- "min_instance_period": self._min_instance_period,
- "track_iou_threshold": self._track_iou_threshold,
- }
- tracker = instantiate(cfg)
- instances = tracker._initialize_extra_fields(self._curr_instances)
- self.assertTrue(instances.has("ID"))
- self.assertTrue(instances.has("ID_period"))
- self.assertTrue(instances.has("lost_frame_count"))
-
-
-if __name__ == "__main__":
- unittest.main()
diff --git a/spaces/caffeinum/VToonify/vtoonify/model/stylegan/prepare_data.py b/spaces/caffeinum/VToonify/vtoonify/model/stylegan/prepare_data.py
deleted file mode 100644
index aa385d0ac13550e1ae5513f7a20b35997a5c3ea6..0000000000000000000000000000000000000000
--- a/spaces/caffeinum/VToonify/vtoonify/model/stylegan/prepare_data.py
+++ /dev/null
@@ -1,105 +0,0 @@
-import argparse
-from io import BytesIO
-import multiprocessing
-from functools import partial
-
-import os
-from PIL import Image
-import lmdb
-from tqdm import tqdm
-from torchvision import datasets
-from torchvision.transforms import functional as trans_fn
-
-
-def resize_and_convert(img, size, resample, quality=100):
- img = trans_fn.resize(img, size, resample)
- img = trans_fn.center_crop(img, size)
- buffer = BytesIO()
- img.save(buffer, format="jpeg", quality=quality)
- val = buffer.getvalue()
-
- return val
-
-
-def resize_multiple(
- img, sizes=(128, 256, 512, 1024), resample=Image.LANCZOS, quality=100
-):
- imgs = []
-
- for size in sizes:
- imgs.append(resize_and_convert(img, size, resample, quality))
-
- return imgs
-
-
-def resize_worker(img_file, sizes, resample):
- i, file = img_file
- img = Image.open(file)
- img = img.convert("RGB")
- out = resize_multiple(img, sizes=sizes, resample=resample)
-
- return i, out
-
-
-def prepare(
- env, dataset, n_worker, sizes=(128, 256, 512, 1024), resample=Image.LANCZOS
-):
- resize_fn = partial(resize_worker, sizes=sizes, resample=resample)
-
- files = sorted(dataset.imgs, key=lambda x: x[0])
- files = [(i, file) for i, (file, label) in enumerate(files)]
- total = 0
-
- with multiprocessing.Pool(n_worker) as pool:
- for i, imgs in tqdm(pool.imap_unordered(resize_fn, files)):
- for size, img in zip(sizes, imgs):
- key = f"{size}-{str(i).zfill(5)}".encode("utf-8")
-
- with env.begin(write=True) as txn:
- txn.put(key, img)
-
- total += 1
-
- with env.begin(write=True) as txn:
- txn.put("length".encode("utf-8"), str(total).encode("utf-8"))
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser(description="Preprocess images for model training")
- parser.add_argument("--out", type=str, help="filename of the result lmdb dataset")
- parser.add_argument(
- "--size",
- type=str,
- default="128,256,512,1024",
- help="resolutions of images for the dataset",
- )
- parser.add_argument(
- "--n_worker",
- type=int,
- default=8,
- help="number of workers for preparing dataset",
- )
- parser.add_argument(
- "--resample",
- type=str,
- default="lanczos",
- help="resampling methods for resizing images",
- )
- parser.add_argument("path", type=str, help="path to the image dataset")
-
- args = parser.parse_args()
-
- if not os.path.exists(args.out):
- os.makedirs(args.out)
-
- resample_map = {"lanczos": Image.LANCZOS, "bilinear": Image.BILINEAR}
- resample = resample_map[args.resample]
-
- sizes = [int(s.strip()) for s in args.size.split(",")]
-
-    print("Make dataset of image sizes:", ", ".join(str(s) for s in sizes))
-
- imgset = datasets.ImageFolder(args.path)
-
- with lmdb.open(args.out, map_size=1024 ** 4, readahead=False) as env:
- prepare(env, imgset, args.n_worker, sizes=sizes, resample=resample)
diff --git a/spaces/candlend/vits-hoshimi/vits/utils.py b/spaces/candlend/vits-hoshimi/vits/utils.py
deleted file mode 100644
index 67215fb62f2f2488349e7e8254a8951b331ce175..0000000000000000000000000000000000000000
--- a/spaces/candlend/vits-hoshimi/vits/utils.py
+++ /dev/null
@@ -1,263 +0,0 @@
-import os
-import glob
-import sys
-import argparse
-import logging
-import json
-import subprocess
-import numpy as np
-from scipy.io.wavfile import read
-import torch
-import re
-
-MATPLOTLIB_FLAG = False
-
-logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
-logger = logging
-
-
-def load_checkpoint(checkpoint_path, model, optimizer=None):
- assert os.path.isfile(checkpoint_path)
- checkpoint_dict = torch.load(checkpoint_path, map_location='cpu')
- iteration = checkpoint_dict['iteration']
- learning_rate = checkpoint_dict['learning_rate']
- if optimizer is not None:
- lr = optimizer.param_groups[0]['lr']
- optimizer.load_state_dict(checkpoint_dict['optimizer'])
- if lr < optimizer.param_groups[0]['lr']:
- optimizer.param_groups[0]['lr'] = lr
- saved_state_dict = checkpoint_dict['model']
- if hasattr(model, 'module'):
- state_dict = model.module.state_dict()
- else:
- state_dict = model.state_dict()
- new_state_dict= {}
- for k, v in state_dict.items():
- try:
- new_state_dict[k] = saved_state_dict[k]
-    except KeyError:
- logger.info("%s is not in the checkpoint" % k)
- new_state_dict[k] = v
- if hasattr(model, 'module'):
- model.module.load_state_dict(new_state_dict)
- else:
- model.load_state_dict(new_state_dict)
- logger.info("Loaded checkpoint '{}' (iteration {})" .format(
- checkpoint_path, iteration))
- global_step = int(re.compile(r'\d+').findall(checkpoint_path)[-1])
- return model, optimizer, learning_rate, iteration, global_step
-
-
-def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path):
- logger.info("Saving model and optimizer state at iteration {} to {}".format(
- iteration, checkpoint_path))
- if hasattr(model, 'module'):
- state_dict = model.module.state_dict()
- else:
- state_dict = model.state_dict()
- torch.save({'model': state_dict,
- 'iteration': iteration,
- 'optimizer': optimizer.state_dict(),
- 'learning_rate': learning_rate}, checkpoint_path)
-
-
-def summarize(writer, global_step, scalars={}, histograms={}, images={}, audios={}, audio_sampling_rate=22050):
- for k, v in scalars.items():
- writer.add_scalar(k, v, global_step)
- for k, v in histograms.items():
- writer.add_histogram(k, v, global_step)
- for k, v in images.items():
- writer.add_image(k, v, global_step, dataformats='HWC')
- for k, v in audios.items():
- writer.add_audio(k, v, global_step, audio_sampling_rate)
-
-
-def latest_checkpoint_path(dir_path, regex="G_*.pth"):
- f_list = glob.glob(os.path.join(dir_path, regex))
- f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f))))
- x = f_list[-1]
- print(x)
- return x
-
-
-def plot_spectrogram_to_numpy(spectrogram):
- global MATPLOTLIB_FLAG
- if not MATPLOTLIB_FLAG:
- import matplotlib
- matplotlib.use("Agg")
- MATPLOTLIB_FLAG = True
- mpl_logger = logging.getLogger('matplotlib')
- mpl_logger.setLevel(logging.WARNING)
- import matplotlib.pylab as plt
- import numpy as np
-
- fig, ax = plt.subplots(figsize=(10,2))
- im = ax.imshow(spectrogram, aspect="auto", origin="lower",
- interpolation='none')
- plt.colorbar(im, ax=ax)
- plt.xlabel("Frames")
- plt.ylabel("Channels")
- plt.tight_layout()
-
- fig.canvas.draw()
-  data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)
- data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
- plt.close()
- return data
-
-
-def plot_alignment_to_numpy(alignment, info=None):
- global MATPLOTLIB_FLAG
- if not MATPLOTLIB_FLAG:
- import matplotlib
- matplotlib.use("Agg")
- MATPLOTLIB_FLAG = True
- mpl_logger = logging.getLogger('matplotlib')
- mpl_logger.setLevel(logging.WARNING)
- import matplotlib.pylab as plt
- import numpy as np
-
- fig, ax = plt.subplots(figsize=(6, 4))
- im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower',
- interpolation='none')
- fig.colorbar(im, ax=ax)
- xlabel = 'Decoder timestep'
- if info is not None:
- xlabel += '\n\n' + info
- plt.xlabel(xlabel)
- plt.ylabel('Encoder timestep')
- plt.tight_layout()
-
- fig.canvas.draw()
-  data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)
- data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
- plt.close()
- return data
-
-
-def load_wav_to_torch(full_path):
- sampling_rate, data = read(full_path)
- return torch.FloatTensor(data.astype(np.float32)), sampling_rate
-
-
-def load_filepaths_and_text(filename, split="|"):
- with open(filename, encoding='utf-8') as f:
- filepaths_and_text = [line.strip().split(split) for line in f]
- return filepaths_and_text
-
-
-def get_hparams(init=True):
- parser = argparse.ArgumentParser()
- parser.add_argument('-c', '--config', type=str, default="./configs/base.json",
- help='JSON file for configuration')
- parser.add_argument('-m', '--model', type=str, required=True,
- help='Model name')
-
- args = parser.parse_args()
- model_dir = os.path.join("./logs", args.model)
-
- if not os.path.exists(model_dir):
- os.makedirs(model_dir)
-
- config_path = args.config
- config_save_path = os.path.join(model_dir, "config.json")
- if init:
- with open(config_path, "r") as f:
- data = f.read()
- with open(config_save_path, "w") as f:
- f.write(data)
- else:
- with open(config_save_path, "r") as f:
- data = f.read()
- config = json.loads(data)
-
- hparams = HParams(**config)
- hparams.model_dir = model_dir
- return hparams
-
-
-def get_hparams_from_dir(model_dir):
- config_save_path = os.path.join(model_dir, "config.json")
- with open(config_save_path, "r") as f:
- data = f.read()
- config = json.loads(data)
-
-  hparams = HParams(**config)
- hparams.model_dir = model_dir
- return hparams
-
-
-def get_hparams_from_file(config_path):
- with open(config_path, "r") as f:
- data = f.read()
- config = json.loads(data)
-
-  hparams = HParams(**config)
- return hparams
-
-
-def check_git_hash(model_dir):
- source_dir = os.path.dirname(os.path.realpath(__file__))
- if not os.path.exists(os.path.join(source_dir, ".git")):
-    logger.warning("{} is not a git repository, therefore hash value comparison will be ignored.".format(
- source_dir
- ))
- return
-
- cur_hash = subprocess.getoutput("git rev-parse HEAD")
-
- path = os.path.join(model_dir, "githash")
- if os.path.exists(path):
- saved_hash = open(path).read()
- if saved_hash != cur_hash:
-      logger.warning("git hash values are different. {}(saved) != {}(current)".format(
- saved_hash[:8], cur_hash[:8]))
- else:
- open(path, "w").write(cur_hash)
-
-
-def get_logger(model_dir, filename="train.log"):
- global logger
- logger = logging.getLogger(os.path.basename(model_dir))
- logger.setLevel(logging.DEBUG)
-
- formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s")
- if not os.path.exists(model_dir):
- os.makedirs(model_dir)
- h = logging.FileHandler(os.path.join(model_dir, filename))
- h.setLevel(logging.DEBUG)
- h.setFormatter(formatter)
- logger.addHandler(h)
- return logger
-
-
-class HParams():
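-  # Dot-accessible container for hyperparameters; nested dicts are wrapped recursively as HParams.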
- def __init__(self, **kwargs):
- for k, v in kwargs.items():
- if type(v) == dict:
- v = HParams(**v)
- self[k] = v
-
- def keys(self):
- return self.__dict__.keys()
-
- def items(self):
- return self.__dict__.items()
-
- def values(self):
- return self.__dict__.values()
-
- def __len__(self):
- return len(self.__dict__)
-
- def __getitem__(self, key):
- return getattr(self, key)
-
- def __setitem__(self, key, value):
- return setattr(self, key, value)
-
- def __contains__(self, key):
- return key in self.__dict__
-
- def __repr__(self):
- return self.__dict__.__repr__()
diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/vis/base.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/vis/base.py
deleted file mode 100644
index 7b35397b18e62c195dc15771aa79a1d42b321e7f..0000000000000000000000000000000000000000
--- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/vis/base.py
+++ /dev/null
@@ -1,191 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import logging
-import numpy as np
-import cv2
-import torch
-
-Image = np.ndarray
-Boxes = torch.Tensor
-
-
-class MatrixVisualizer(object):
- """
- Base visualizer for matrix data
- """
-
- def __init__(
- self,
- inplace=True,
- cmap=cv2.COLORMAP_PARULA,
- val_scale=1.0,
- alpha=0.7,
- interp_method_matrix=cv2.INTER_LINEAR,
- interp_method_mask=cv2.INTER_NEAREST,
- ):
- self.inplace = inplace
- self.cmap = cmap
- self.val_scale = val_scale
- self.alpha = alpha
- self.interp_method_matrix = interp_method_matrix
- self.interp_method_mask = interp_method_mask
-
- def visualize(self, image_bgr, mask, matrix, bbox_xywh):
- self._check_image(image_bgr)
- self._check_mask_matrix(mask, matrix)
- if self.inplace:
- image_target_bgr = image_bgr
- else:
- image_target_bgr = image_bgr * 0
- x, y, w, h = [int(v) for v in bbox_xywh]
- if w <= 0 or h <= 0:
- return image_bgr
- mask, matrix = self._resize(mask, matrix, w, h)
- mask_bg = np.tile((mask == 0)[:, :, np.newaxis], [1, 1, 3])
- matrix_scaled = matrix.astype(np.float32) * self.val_scale
- _EPSILON = 1e-6
- if np.any(matrix_scaled > 255 + _EPSILON):
- logger = logging.getLogger(__name__)
- logger.warning(
- f"Matrix has values > {255 + _EPSILON} after " f"scaling, clipping to [0..255]"
- )
- matrix_scaled_8u = matrix_scaled.clip(0, 255).astype(np.uint8)
- matrix_vis = cv2.applyColorMap(matrix_scaled_8u, self.cmap)
- matrix_vis[mask_bg] = image_target_bgr[y : y + h, x : x + w, :][mask_bg]
- image_target_bgr[y : y + h, x : x + w, :] = (
- image_target_bgr[y : y + h, x : x + w, :] * (1.0 - self.alpha) + matrix_vis * self.alpha
- )
- return image_target_bgr.astype(np.uint8)
-
- def _resize(self, mask, matrix, w, h):
- if (w != mask.shape[1]) or (h != mask.shape[0]):
- mask = cv2.resize(mask, (w, h), interpolation=self.interp_method_mask)
- if (w != matrix.shape[1]) or (h != matrix.shape[0]):
- matrix = cv2.resize(matrix, (w, h), interpolation=self.interp_method_matrix)
- return mask, matrix
-
- def _check_image(self, image_rgb):
- assert len(image_rgb.shape) == 3
- assert image_rgb.shape[2] == 3
- assert image_rgb.dtype == np.uint8
-
- def _check_mask_matrix(self, mask, matrix):
- assert len(matrix.shape) == 2
- assert len(mask.shape) == 2
- assert mask.dtype == np.uint8
-
-
-class RectangleVisualizer(object):
-
- _COLOR_GREEN = (18, 127, 15)
-
- def __init__(self, color=_COLOR_GREEN, thickness=1):
- self.color = color
- self.thickness = thickness
-
- def visualize(self, image_bgr, bbox_xywh, color=None, thickness=None):
- x, y, w, h = bbox_xywh
- color = color or self.color
- thickness = thickness or self.thickness
- cv2.rectangle(image_bgr, (int(x), int(y)), (int(x + w), int(y + h)), color, thickness)
- return image_bgr
-
-
-class PointsVisualizer(object):
-
- _COLOR_GREEN = (18, 127, 15)
-
- def __init__(self, color_bgr=_COLOR_GREEN, r=5):
- self.color_bgr = color_bgr
- self.r = r
-
- def visualize(self, image_bgr, pts_xy, colors_bgr=None, rs=None):
- for j, pt_xy in enumerate(pts_xy):
- x, y = pt_xy
- color_bgr = colors_bgr[j] if colors_bgr is not None else self.color_bgr
- r = rs[j] if rs is not None else self.r
- cv2.circle(image_bgr, (x, y), r, color_bgr, -1)
- return image_bgr
-
-
-class TextVisualizer(object):
-
- _COLOR_GRAY = (218, 227, 218)
- _COLOR_WHITE = (255, 255, 255)
-
- def __init__(
- self,
- font_face=cv2.FONT_HERSHEY_SIMPLEX,
- font_color_bgr=_COLOR_GRAY,
- font_scale=0.35,
- font_line_type=cv2.LINE_AA,
- font_line_thickness=1,
- fill_color_bgr=_COLOR_WHITE,
- fill_color_transparency=1.0,
- frame_color_bgr=_COLOR_WHITE,
- frame_color_transparency=1.0,
- frame_thickness=1,
- ):
- self.font_face = font_face
- self.font_color_bgr = font_color_bgr
- self.font_scale = font_scale
- self.font_line_type = font_line_type
- self.font_line_thickness = font_line_thickness
- self.fill_color_bgr = fill_color_bgr
- self.fill_color_transparency = fill_color_transparency
- self.frame_color_bgr = frame_color_bgr
- self.frame_color_transparency = frame_color_transparency
- self.frame_thickness = frame_thickness
-
- def visualize(self, image_bgr, txt, topleft_xy):
- txt_w, txt_h = self.get_text_size_wh(txt)
- topleft_xy = tuple(map(int, topleft_xy))
- x, y = topleft_xy
- if self.frame_color_transparency < 1.0:
- t = self.frame_thickness
- image_bgr[y - t : y + txt_h + t, x - t : x + txt_w + t, :] = (
- image_bgr[y - t : y + txt_h + t, x - t : x + txt_w + t, :]
- * self.frame_color_transparency
- + np.array(self.frame_color_bgr) * (1.0 - self.frame_color_transparency)
- ).astype(np.float32)
- if self.fill_color_transparency < 1.0:
- image_bgr[y : y + txt_h, x : x + txt_w, :] = (
- image_bgr[y : y + txt_h, x : x + txt_w, :] * self.fill_color_transparency
- + np.array(self.fill_color_bgr) * (1.0 - self.fill_color_transparency)
- ).astype(np.float32)
- cv2.putText(
- image_bgr,
- txt,
- topleft_xy,
- self.font_face,
- self.font_scale,
- self.font_color_bgr,
- self.font_line_thickness,
- self.font_line_type,
- )
- return image_bgr
-
- def get_text_size_wh(self, txt):
- ((txt_w, txt_h), _) = cv2.getTextSize(
- txt, self.font_face, self.font_scale, self.font_line_thickness
- )
- return txt_w, txt_h
-
-
-class CompoundVisualizer(object):
- def __init__(self, visualizers):
- self.visualizers = visualizers
-
- def visualize(self, image_bgr, data):
- assert len(data) == len(
- self.visualizers
- ), "The number of datas {} should match the number of visualizers" " {}".format(
- len(data), len(self.visualizers)
- )
- image = image_bgr
- for i, visualizer in enumerate(self.visualizers):
- image = visualizer.visualize(image, data[i])
- return image
-
- def __str__(self):
- visualizer_str = ", ".join([str(v) for v in self.visualizers])
- return "Compound Visualizer [{}]".format(visualizer_str)
diff --git a/spaces/cfwef/gpt/crazy_functions/test_project/python/dqn/dqn.py b/spaces/cfwef/gpt/crazy_functions/test_project/python/dqn/dqn.py
deleted file mode 100644
index 6cea64d39baa7ff4c1e549869aaa4b0ae17779a9..0000000000000000000000000000000000000000
--- a/spaces/cfwef/gpt/crazy_functions/test_project/python/dqn/dqn.py
+++ /dev/null
@@ -1,245 +0,0 @@
-from typing import Any, Dict, List, Optional, Tuple, Type, Union
-
-import gym
-import numpy as np
-import torch as th
-from torch.nn import functional as F
-
-from stable_baselines3.common import logger
-from stable_baselines3.common.off_policy_algorithm import OffPolicyAlgorithm
-from stable_baselines3.common.preprocessing import maybe_transpose
-from stable_baselines3.common.type_aliases import GymEnv, MaybeCallback, Schedule
-from stable_baselines3.common.utils import get_linear_fn, is_vectorized_observation, polyak_update
-from stable_baselines3.dqn.policies import DQNPolicy
-
-
-class DQN(OffPolicyAlgorithm):
- """
- Deep Q-Network (DQN)
-
- Paper: https://arxiv.org/abs/1312.5602, https://www.nature.com/articles/nature14236
- Default hyperparameters are taken from the nature paper,
- except for the optimizer and learning rate that were taken from Stable Baselines defaults.
-
- :param policy: The policy model to use (MlpPolicy, CnnPolicy, ...)
- :param env: The environment to learn from (if registered in Gym, can be str)
- :param learning_rate: The learning rate, it can be a function
- of the current progress remaining (from 1 to 0)
- :param buffer_size: size of the replay buffer
- :param learning_starts: how many steps of the model to collect transitions for before learning starts
- :param batch_size: Minibatch size for each gradient update
- :param tau: the soft update coefficient ("Polyak update", between 0 and 1) default 1 for hard update
- :param gamma: the discount factor
- :param train_freq: Update the model every ``train_freq`` steps. Alternatively pass a tuple of frequency and unit
- like ``(5, "step")`` or ``(2, "episode")``.
- :param gradient_steps: How many gradient steps to do after each rollout (see ``train_freq``)
- Set to ``-1`` means to do as many gradient steps as steps done in the environment
- during the rollout.
- :param optimize_memory_usage: Enable a memory efficient variant of the replay buffer
- at a cost of more complexity.
- See https://github.com/DLR-RM/stable-baselines3/issues/37#issuecomment-637501195
- :param target_update_interval: update the target network every ``target_update_interval``
- environment steps.
- :param exploration_fraction: fraction of entire training period over which the exploration rate is reduced
- :param exploration_initial_eps: initial value of random action probability
- :param exploration_final_eps: final value of random action probability
- :param max_grad_norm: The maximum value for the gradient clipping
- :param tensorboard_log: the log location for tensorboard (if None, no logging)
- :param create_eval_env: Whether to create a second environment that will be
- used for evaluating the agent periodically. (Only available when passing string for the environment)
- :param policy_kwargs: additional arguments to be passed to the policy on creation
- :param verbose: the verbosity level: 0 no output, 1 info, 2 debug
- :param seed: Seed for the pseudo random generators
- :param device: Device (cpu, cuda, ...) on which the code should be run.
- Setting it to auto, the code will be run on the GPU if possible.
- :param _init_setup_model: Whether or not to build the network at the creation of the instance
- """
-
- def __init__(
- self,
- policy: Union[str, Type[DQNPolicy]],
- env: Union[GymEnv, str],
- learning_rate: Union[float, Schedule] = 1e-4,
- buffer_size: int = 1000000,
- learning_starts: int = 50000,
- batch_size: Optional[int] = 32,
- tau: float = 1.0,
- gamma: float = 0.99,
- train_freq: Union[int, Tuple[int, str]] = 4,
- gradient_steps: int = 1,
- optimize_memory_usage: bool = False,
- target_update_interval: int = 10000,
- exploration_fraction: float = 0.1,
- exploration_initial_eps: float = 1.0,
- exploration_final_eps: float = 0.05,
- max_grad_norm: float = 10,
- tensorboard_log: Optional[str] = None,
- create_eval_env: bool = False,
- policy_kwargs: Optional[Dict[str, Any]] = None,
- verbose: int = 0,
- seed: Optional[int] = None,
- device: Union[th.device, str] = "auto",
- _init_setup_model: bool = True,
- ):
-
- super(DQN, self).__init__(
- policy,
- env,
- DQNPolicy,
- learning_rate,
- buffer_size,
- learning_starts,
- batch_size,
- tau,
- gamma,
- train_freq,
- gradient_steps,
- action_noise=None, # No action noise
- policy_kwargs=policy_kwargs,
- tensorboard_log=tensorboard_log,
- verbose=verbose,
- device=device,
- create_eval_env=create_eval_env,
- seed=seed,
- sde_support=False,
- optimize_memory_usage=optimize_memory_usage,
- supported_action_spaces=(gym.spaces.Discrete,),
- )
-
- self.exploration_initial_eps = exploration_initial_eps
- self.exploration_final_eps = exploration_final_eps
- self.exploration_fraction = exploration_fraction
- self.target_update_interval = target_update_interval
- self.max_grad_norm = max_grad_norm
- # "epsilon" for the epsilon-greedy exploration
- self.exploration_rate = 0.0
- # Linear schedule will be defined in `_setup_model()`
- self.exploration_schedule = None
- self.q_net, self.q_net_target = None, None
-
- if _init_setup_model:
- self._setup_model()
-
- def _setup_model(self) -> None:
- super(DQN, self)._setup_model()
- self._create_aliases()
- self.exploration_schedule = get_linear_fn(
- self.exploration_initial_eps, self.exploration_final_eps, self.exploration_fraction
- )
-
- def _create_aliases(self) -> None:
- self.q_net = self.policy.q_net
- self.q_net_target = self.policy.q_net_target
-
- def _on_step(self) -> None:
- """
- Update the exploration rate and target network if needed.
- This method is called in ``collect_rollouts()`` after each step in the environment.
- """
- if self.num_timesteps % self.target_update_interval == 0:
- polyak_update(self.q_net.parameters(), self.q_net_target.parameters(), self.tau)
-
- self.exploration_rate = self.exploration_schedule(self._current_progress_remaining)
- logger.record("rollout/exploration rate", self.exploration_rate)
-
- def train(self, gradient_steps: int, batch_size: int = 100) -> None:
- # Update learning rate according to schedule
- self._update_learning_rate(self.policy.optimizer)
-
- losses = []
- for _ in range(gradient_steps):
- # Sample replay buffer
- replay_data = self.replay_buffer.sample(batch_size, env=self._vec_normalize_env)
-
- with th.no_grad():
- # Compute the next Q-values using the target network
- next_q_values = self.q_net_target(replay_data.next_observations)
- # Follow greedy policy: use the one with the highest value
- next_q_values, _ = next_q_values.max(dim=1)
- # Avoid potential broadcast issue
- next_q_values = next_q_values.reshape(-1, 1)
- # 1-step TD target
- target_q_values = replay_data.rewards + (1 - replay_data.dones) * self.gamma * next_q_values
-
- # Get current Q-values estimates
- current_q_values = self.q_net(replay_data.observations)
-
- # Retrieve the q-values for the actions from the replay buffer
- current_q_values = th.gather(current_q_values, dim=1, index=replay_data.actions.long())
-
- # Compute Huber loss (less sensitive to outliers)
- loss = F.smooth_l1_loss(current_q_values, target_q_values)
- losses.append(loss.item())
-
- # Optimize the policy
- self.policy.optimizer.zero_grad()
- loss.backward()
- # Clip gradient norm
- th.nn.utils.clip_grad_norm_(self.policy.parameters(), self.max_grad_norm)
- self.policy.optimizer.step()
-
- # Increase update counter
- self._n_updates += gradient_steps
-
- logger.record("train/n_updates", self._n_updates, exclude="tensorboard")
- logger.record("train/loss", np.mean(losses))
-
- def predict(
- self,
- observation: np.ndarray,
- state: Optional[np.ndarray] = None,
- mask: Optional[np.ndarray] = None,
- deterministic: bool = False,
- ) -> Tuple[np.ndarray, Optional[np.ndarray]]:
- """
- Overrides the base_class predict function to include epsilon-greedy exploration.
-
- :param observation: the input observation
- :param state: The last states (can be None, used in recurrent policies)
- :param mask: The last masks (can be None, used in recurrent policies)
- :param deterministic: Whether or not to return deterministic actions.
- :return: the model's action and the next state
- (used in recurrent policies)
- """
- if not deterministic and np.random.rand() < self.exploration_rate:
- if is_vectorized_observation(maybe_transpose(observation, self.observation_space), self.observation_space):
- n_batch = observation.shape[0]
- action = np.array([self.action_space.sample() for _ in range(n_batch)])
- else:
- action = np.array(self.action_space.sample())
- else:
- action, state = self.policy.predict(observation, state, mask, deterministic)
- return action, state
-
- def learn(
- self,
- total_timesteps: int,
- callback: MaybeCallback = None,
- log_interval: int = 4,
- eval_env: Optional[GymEnv] = None,
- eval_freq: int = -1,
- n_eval_episodes: int = 5,
- tb_log_name: str = "DQN",
- eval_log_path: Optional[str] = None,
- reset_num_timesteps: bool = True,
- ) -> OffPolicyAlgorithm:
-
- return super(DQN, self).learn(
- total_timesteps=total_timesteps,
- callback=callback,
- log_interval=log_interval,
- eval_env=eval_env,
- eval_freq=eval_freq,
- n_eval_episodes=n_eval_episodes,
- tb_log_name=tb_log_name,
- eval_log_path=eval_log_path,
- reset_num_timesteps=reset_num_timesteps,
- )
-
- def _excluded_save_params(self) -> List[str]:
- return super(DQN, self)._excluded_save_params() + ["q_net", "q_net_target"]
-
- def _get_torch_save_params(self) -> Tuple[List[str], List[str]]:
- state_dicts = ["policy", "policy.optimizer"]
-
- return state_dicts, []
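The file above is a vendored copy of stable-baselines3's DQN implementation. Here is a hedged sketch of how the class is normally driven, using the upstream `stable_baselines3` package rather than this vendored copy:

```python
# Hedged sketch: train DQN on a discrete-action environment via stable-baselines3.
# Hyperparameter values here are illustrative, not taken from the file above.
from stable_baselines3 import DQN

model = DQN(
    "MlpPolicy",               # Q-network architecture
    "CartPole-v1",             # any Gym-registered discrete-action environment id
    learning_rate=1e-4,
    buffer_size=50_000,        # replay buffer size
    learning_starts=1_000,     # steps collected before updates begin
    exploration_fraction=0.1,  # fraction of training over which epsilon is annealed
    exploration_final_eps=0.05,
    verbose=1,
)
model.learn(total_timesteps=20_000)
model.save("dqn_cartpole")
```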
diff --git a/spaces/chasemcdo/hf_localai/examples/langchain-huggingface/README.md b/spaces/chasemcdo/hf_localai/examples/langchain-huggingface/README.md
deleted file mode 100644
index 23fdcd3214617250d5ba2e2d589653ab5ef9e1a6..0000000000000000000000000000000000000000
--- a/spaces/chasemcdo/hf_localai/examples/langchain-huggingface/README.md
+++ /dev/null
@@ -1,68 +0,0 @@
-# Data query example
-
-Example of integration with HuggingFace Inference API with help of [langchaingo](https://github.com/tmc/langchaingo).
-
-## Setup
-
-Download LocalAI and start the API:
-
-```bash
-# Clone LocalAI
-git clone https://github.com/go-skynet/LocalAI
-
-cd LocalAI/examples/langchain-huggingface
-
-docker-compose up -d
-```
-
-Note: Ensure you've set the `HUGGINGFACEHUB_API_TOKEN` environment variable; you can generate one
-on the [Settings / Access Tokens](https://huggingface.co/settings/tokens) page of the HuggingFace site.
-
-This is an example `.env` file for LocalAI:
-
-```ini
-MODELS_PATH=/models
-CONTEXT_SIZE=512
-HUGGINGFACEHUB_API_TOKEN=hg_123456
-```
-
-## Using remote models
-
-Now you can use any remote model available via the HuggingFace API. For example, let's enable the
-[gpt2](https://huggingface.co/gpt2) model in the `gpt-3.5-turbo.yaml` config:
-
-```yml
-name: gpt-3.5-turbo
-parameters:
- model: gpt2
- top_k: 80
- temperature: 0.2
- top_p: 0.7
-context_size: 1024
-backend: "langchain-huggingface"
-stopwords:
-- "HUMAN:"
-- "GPT:"
-roles:
- user: " "
- system: " "
-template:
- completion: completion
- chat: gpt4all
-```
-
-Here you can see that the `parameters.model` field is set to `gpt2` and the `backend` is set to `langchain-huggingface`.
-
-## How to use
-
-```shell
-# Now API is accessible at localhost:8080
-curl http://localhost:8080/v1/models
-# {"object":"list","data":[{"id":"gpt-3.5-turbo","object":"model"}]}
-
-curl http://localhost:8080/v1/completions -H "Content-Type: application/json" -d '{
- "model": "gpt-3.5-turbo",
- "prompt": "A long time ago in a galaxy far, far away",
- "temperature": 0.7
-}'
-```
\ No newline at end of file
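Since LocalAI exposes an OpenAI-compatible REST API, the same completion request can also be issued from Python. A minimal sketch using the `requests` library (not part of the original example):

```python
# Minimal sketch: the curl completion call above, issued from Python.
import requests

resp = requests.post(
    "http://localhost:8080/v1/completions",
    json={
        "model": "gpt-3.5-turbo",  # the name defined in gpt-3.5-turbo.yaml
        "prompt": "A long time ago in a galaxy far, far away",
        "temperature": 0.7,
    },
    timeout=60,
)
print(resp.json())
```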
diff --git a/spaces/chendl/compositional_test/transformers/examples/flax/language-modeling/run_t5_mlm_flax.py b/spaces/chendl/compositional_test/transformers/examples/flax/language-modeling/run_t5_mlm_flax.py
deleted file mode 100644
index 152760f4bf4bd437c517a640662d0fde2e2d3bd2..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/transformers/examples/flax/language-modeling/run_t5_mlm_flax.py
+++ /dev/null
@@ -1,988 +0,0 @@
-#!/usr/bin/env python
-# coding=utf-8
-# Copyright 2021 The HuggingFace Team All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""
-Pretraining the library models for T5-like span-masked language modeling on a text file or a dataset.
-
-Here is the full list of checkpoints on the hub that can be pretrained by this script:
-https://huggingface.co/models?filter=t5
-"""
-import json
-import logging
-import math
-import os
-import sys
-import time
-from dataclasses import asdict, dataclass, field
-
-# You can also adapt this script on your own masked language modeling task. Pointers for this are left as comments.
-from enum import Enum
-from itertools import chain
-from pathlib import Path
-from typing import Dict, List, Optional
-
-import flax
-import jax
-import jax.numpy as jnp
-import numpy as np
-import optax
-from datasets import load_dataset
-from flax import jax_utils, traverse_util
-from flax.jax_utils import pad_shard_unpad
-from flax.training import train_state
-from flax.training.common_utils import get_metrics, onehot, shard
-from huggingface_hub import Repository, create_repo
-from tqdm import tqdm
-
-from transformers import (
- CONFIG_MAPPING,
- FLAX_MODEL_FOR_MASKED_LM_MAPPING,
- AutoTokenizer,
- BatchEncoding,
- FlaxT5ForConditionalGeneration,
- HfArgumentParser,
- PreTrainedTokenizerBase,
- T5Config,
- is_tensorboard_available,
- set_seed,
-)
-from transformers.models.t5.modeling_flax_t5 import shift_tokens_right
-from transformers.utils import get_full_repo_name, send_example_telemetry
-
-
-MODEL_CONFIG_CLASSES = list(FLAX_MODEL_FOR_MASKED_LM_MAPPING.keys())
-MODEL_TYPES = tuple(conf.model_type for conf in MODEL_CONFIG_CLASSES)
-
-
-@dataclass
-class TrainingArguments:
- output_dir: str = field(
- metadata={"help": "The output directory where the model predictions and checkpoints will be written."},
- )
- overwrite_output_dir: bool = field(
- default=False,
- metadata={
- "help": (
- "Overwrite the content of the output directory. "
- "Use this to continue training if output_dir points to a checkpoint directory."
- )
- },
- )
- do_train: bool = field(default=False, metadata={"help": "Whether to run training."})
- do_eval: bool = field(default=False, metadata={"help": "Whether to run eval on the dev set."})
- per_device_train_batch_size: int = field(
- default=8, metadata={"help": "Batch size per GPU/TPU core/CPU for training."}
- )
- per_device_eval_batch_size: int = field(
- default=8, metadata={"help": "Batch size per GPU/TPU core/CPU for evaluation."}
- )
- learning_rate: float = field(default=5e-5, metadata={"help": "The initial learning rate for AdamW."})
- weight_decay: float = field(default=0.0, metadata={"help": "Weight decay for AdamW if we apply some."})
- adam_beta1: float = field(default=0.9, metadata={"help": "Beta1 for AdamW optimizer"})
- adam_beta2: float = field(default=0.999, metadata={"help": "Beta2 for AdamW optimizer"})
- adam_epsilon: float = field(default=1e-8, metadata={"help": "Epsilon for AdamW optimizer."})
- adafactor: bool = field(default=False, metadata={"help": "Whether or not to replace AdamW by Adafactor."})
- num_train_epochs: float = field(default=3.0, metadata={"help": "Total number of training epochs to perform."})
- warmup_steps: int = field(default=0, metadata={"help": "Linear warmup over warmup_steps."})
- logging_steps: int = field(default=500, metadata={"help": "Log every X updates steps."})
- save_steps: int = field(default=500, metadata={"help": "Save checkpoint every X updates steps."})
- eval_steps: int = field(default=None, metadata={"help": "Run an evaluation every X steps."})
- seed: int = field(default=42, metadata={"help": "Random seed that will be set at the beginning of training."})
- push_to_hub: bool = field(
- default=False, metadata={"help": "Whether or not to upload the trained model to the model hub after training."}
- )
- hub_model_id: str = field(
- default=None, metadata={"help": "The name of the repository to keep in sync with the local `output_dir`."}
- )
- hub_token: str = field(default=None, metadata={"help": "The token to use to push to the Model Hub."})
-
- def __post_init__(self):
- if self.output_dir is not None:
- self.output_dir = os.path.expanduser(self.output_dir)
-
- def to_dict(self):
- """
- Serializes this instance while replacing `Enum` members by their values (for JSON serialization support). It obfuscates
- the token values by removing their value.
- """
- d = asdict(self)
- for k, v in d.items():
- if isinstance(v, Enum):
- d[k] = v.value
- if isinstance(v, list) and len(v) > 0 and isinstance(v[0], Enum):
- d[k] = [x.value for x in v]
- if k.endswith("_token"):
- d[k] = f"<{k.upper()}>"
- return d
-
-
-@dataclass
-class ModelArguments:
- """
- Arguments pertaining to which model/config/tokenizer we are going to fine-tune, or train from scratch.
- """
-
- model_name_or_path: Optional[str] = field(
- default=None,
- metadata={
- "help": (
- "The model checkpoint for weights initialization.Don't set if you want to train a model from scratch."
- )
- },
- )
- model_type: Optional[str] = field(
- default=None,
- metadata={"help": "If training from scratch, pass a model type from the list: " + ", ".join(MODEL_TYPES)},
- )
- config_name: Optional[str] = field(
- default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"}
- )
- tokenizer_name: Optional[str] = field(
- default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"}
- )
- cache_dir: Optional[str] = field(
- default=None, metadata={"help": "Where do you want to store the pretrained models downloaded from s3"}
- )
- use_fast_tokenizer: bool = field(
- default=True,
- metadata={"help": "Whether to use one of the fast tokenizer (backed by the tokenizers library) or not."},
- )
- dtype: Optional[str] = field(
- default="float32",
- metadata={
- "help": (
- "Floating-point format in which the model weights should be initialized and trained. Choose one of"
- " `[float32, float16, bfloat16]`."
- )
- },
- )
- use_auth_token: bool = field(
- default=False,
- metadata={
- "help": (
- "Will use the token generated when running `huggingface-cli login` (necessary to use this script "
- "with private models)."
- )
- },
- )
-
-
-@dataclass
-class DataTrainingArguments:
- """
- Arguments pertaining to what data we are going to input our model for training and eval.
- """
-
- dataset_name: Optional[str] = field(
- default=None, metadata={"help": "The name of the dataset to use (via the datasets library)."}
- )
- dataset_config_name: Optional[str] = field(
- default=None, metadata={"help": "The configuration name of the dataset to use (via the datasets library)."}
- )
- train_file: Optional[str] = field(default=None, metadata={"help": "The input training data file (a text file)."})
- validation_file: Optional[str] = field(
- default=None,
- metadata={"help": "An optional input evaluation data file to evaluate the perplexity on (a text file)."},
- )
- train_ref_file: Optional[str] = field(
- default=None,
- metadata={"help": "An optional input train ref data file for whole word masking in Chinese."},
- )
- validation_ref_file: Optional[str] = field(
- default=None,
- metadata={"help": "An optional input validation ref data file for whole word masking in Chinese."},
- )
- overwrite_cache: bool = field(
- default=False, metadata={"help": "Overwrite the cached training and evaluation sets"}
- )
- validation_split_percentage: Optional[int] = field(
- default=5,
- metadata={
- "help": "The percentage of the train set used as validation set in case there's no validation split"
- },
- )
- max_seq_length: Optional[int] = field(
- default=None,
- metadata={
- "help": (
- "The maximum total input sequence length after tokenization and masking. Sequences longer than this"
- " will be truncated. Default to the max input length of the model."
- )
- },
- )
- preprocessing_num_workers: Optional[int] = field(
- default=None,
- metadata={"help": "The number of processes to use for the preprocessing."},
- )
- mlm_probability: float = field(
- default=0.15, metadata={"help": "Ratio of tokens to mask for span masked language modeling loss"}
- )
- mean_noise_span_length: float = field(
- default=3.0,
- metadata={"help": "Mean span length of masked tokens"},
- )
-
- def __post_init__(self):
- if self.dataset_name is None and self.train_file is None and self.validation_file is None:
- raise ValueError("Need either a dataset name or a training/validation file.")
- else:
- if self.train_file is not None:
- extension = self.train_file.split(".")[-1]
- assert extension in ["csv", "json", "txt"], "`train_file` should be a csv, a json or a txt file."
- if self.validation_file is not None:
- extension = self.validation_file.split(".")[-1]
- assert extension in ["csv", "json", "txt"], "`validation_file` should be a csv, a json or a txt file."
-
-
-def compute_input_and_target_lengths(inputs_length, noise_density, mean_noise_span_length):
- """This function is copy of `random_spans_helper `__ .
-
- Training parameters to avoid padding with random_spans_noise_mask.
- When training a model with random_spans_noise_mask, we would like to set the other
- training hyperparameters in a way that avoids padding.
- This function helps us compute these hyperparameters.
- We assume that each noise span in the input is replaced by extra_tokens_per_span_inputs sentinel tokens,
- and each non-noise span in the targets is replaced by extra_tokens_per_span_targets sentinel tokens.
- This function tells us the required number of tokens in the raw example (for split_tokens())
- as well as the length of the encoded targets. Note that this function assumes
- the inputs and targets will have EOS appended and includes that in the reported length.
-
- Args:
- inputs_length: an integer - desired length of the tokenized inputs sequence
- noise_density: a float
- mean_noise_span_length: a float
- Returns:
- tokens_length: length of original text in tokens
- targets_length: an integer - length in tokens of encoded targets sequence
- """
-
- def _tokens_length_to_inputs_length_targets_length(tokens_length):
- num_noise_tokens = int(round(tokens_length * noise_density))
- num_nonnoise_tokens = tokens_length - num_noise_tokens
- num_noise_spans = int(round(num_noise_tokens / mean_noise_span_length))
- # inputs contain all nonnoise tokens, sentinels for all noise spans
- # and one EOS token.
- _input_length = num_nonnoise_tokens + num_noise_spans + 1
- _output_length = num_noise_tokens + num_noise_spans + 1
- return _input_length, _output_length
-
- tokens_length = inputs_length
-
- while _tokens_length_to_inputs_length_targets_length(tokens_length + 1)[0] <= inputs_length:
- tokens_length += 1
-
- inputs_length, targets_length = _tokens_length_to_inputs_length_targets_length(tokens_length)
-
- # minor hack to get the targets length to be equal to inputs length
- # which is more likely to have been set to a nice round number.
- if noise_density == 0.5 and targets_length > inputs_length:
- tokens_length -= 1
- targets_length -= 1
- return tokens_length, targets_length
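As a hedged worked example (not part of the original script): with the defaults `max_seq_length=512`, `mlm_probability=0.15` and `mean_noise_span_length=3.0`, the helper above settles on raw chunks of 568 tokens, which after span corruption produce inputs of exactly 512 tokens and targets of 114 tokens.

```python
# Hedged worked example for compute_input_and_target_lengths (defined above).
expanded_inputs_length, targets_length = compute_input_and_target_lengths(
    inputs_length=512,           # desired encoder input length after masking
    noise_density=0.15,          # mlm_probability
    mean_noise_span_length=3.0,
)
print(expanded_inputs_length, targets_length)  # 568 114
```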
-
-
-@flax.struct.dataclass
-class FlaxDataCollatorForT5MLM:
- """
- Data collator used for T5 span-masked language modeling.
- It is made sure that after masking the inputs are of length `data_args.max_seq_length` and targets are also of fixed length.
- For more information on how T5 span-masked language modeling works, one can take a look
- at the `official paper `__
- or the `official code for preprocessing `__ .
-
- Args:
- tokenizer (:class:`~transformers.PreTrainedTokenizer` or :class:`~transformers.PreTrainedTokenizerFast`):
- The tokenizer used for encoding the data.
- noise_density (:obj:`float`):
- The probability with which to (randomly) mask tokens in the input.
- mean_noise_span_length (:obj:`float`):
- The average span length of the masked tokens.
- input_length (:obj:`int`):
- The expected input length after masking.
- target_length (:obj:`int`):
- The expected target length after masking.
- pad_token_id: (:obj:`int`):
- The pad token id of the model
- decoder_start_token_id: (:obj:`int):
- The decoder start token id of the model
- """
-
- tokenizer: PreTrainedTokenizerBase
- noise_density: float
- mean_noise_span_length: float
- input_length: int
- target_length: int
- pad_token_id: int
- decoder_start_token_id: int
-
- def __call__(self, examples: List[Dict[str, np.ndarray]]) -> BatchEncoding:
- # convert list to dict and tensorize input
- batch = BatchEncoding(
- {k: np.array([examples[i][k] for i in range(len(examples))]) for k, v in examples[0].items()}
- )
-
- input_ids = batch["input_ids"]
- batch_size, expanded_input_length = input_ids.shape
-
- mask_indices = np.asarray([self.random_spans_noise_mask(expanded_input_length) for i in range(batch_size)])
- labels_mask = ~mask_indices
-
- input_ids_sentinel = self.create_sentinel_ids(mask_indices.astype(np.int8))
- labels_sentinel = self.create_sentinel_ids(labels_mask.astype(np.int8))
-
- batch["input_ids"] = self.filter_input_ids(input_ids, input_ids_sentinel)
- batch["labels"] = self.filter_input_ids(input_ids, labels_sentinel)
-
- if batch["input_ids"].shape[-1] != self.input_length:
- raise ValueError(
- f"`input_ids` are incorrectly preprocessed. `input_ids` length is {batch['input_ids'].shape[-1]}, but"
- f" should be {self.input_length}."
- )
-
- if batch["labels"].shape[-1] != self.target_length:
- raise ValueError(
- f"`labels` are incorrectly preprocessed. `labels` length is {batch['labels'].shape[-1]}, but should be"
- f" {self.target_length}."
- )
-
- # to check that tokens are correctly preprocessed, one can run `self.tokenizer.batch_decode(input_ids)` and `self.tokenizer.batch_decode(labels)` here...
- batch["decoder_input_ids"] = shift_tokens_right(
- batch["labels"], self.pad_token_id, self.decoder_start_token_id
- )
-
- return batch
-
- def create_sentinel_ids(self, mask_indices):
- """
- Sentinel ids creation given the indices that should be masked.
- The start indices of each mask are replaced by the sentinel ids in increasing
- order. Consecutive mask indices to be deleted are replaced with `-1`.
- """
- start_indices = mask_indices - np.roll(mask_indices, 1, axis=-1) * mask_indices
- start_indices[:, 0] = mask_indices[:, 0]
-
- sentinel_ids = np.where(start_indices != 0, np.cumsum(start_indices, axis=-1), start_indices)
- sentinel_ids = np.where(sentinel_ids != 0, (len(self.tokenizer) - sentinel_ids), 0)
- sentinel_ids -= mask_indices - start_indices
-
- return sentinel_ids
-
- def filter_input_ids(self, input_ids, sentinel_ids):
- """
- Puts sentinel mask on `input_ids` and fuse consecutive mask tokens into a single mask token by deleting.
- This will reduce the sequence length from `expanded_inputs_length` to `input_length`.
- """
- batch_size = input_ids.shape[0]
-
- input_ids_full = np.where(sentinel_ids != 0, sentinel_ids, input_ids)
- # input_ids tokens and sentinel tokens are >= 0, tokens < 0 are
- # masked tokens coming after sentinel tokens and should be removed
- input_ids = input_ids_full[input_ids_full >= 0].reshape((batch_size, -1))
- input_ids = np.concatenate(
- [input_ids, np.full((batch_size, 1), self.tokenizer.eos_token_id, dtype=np.int32)], axis=-1
- )
- return input_ids
-
- def random_spans_noise_mask(self, length):
- """This function is copy of `random_spans_helper `__ .
-
- Noise mask consisting of random spans of noise tokens.
- The number of noise tokens and the number of noise spans and non-noise spans
- are determined deterministically as follows:
- num_noise_tokens = round(length * noise_density)
- num_nonnoise_spans = num_noise_spans = round(num_noise_tokens / mean_noise_span_length)
- Spans alternate between non-noise and noise, beginning with non-noise.
- Subject to the above restrictions, all masks are equally likely.
-
- Args:
- length: an int32 scalar (length of the incoming token sequence)
- noise_density: a float - approximate density of output mask
- mean_noise_span_length: a number
-
- Returns:
- a boolean tensor with shape [length]
- """
-
- orig_length = length
-
- num_noise_tokens = int(np.round(length * self.noise_density))
- # avoid degeneracy by ensuring positive numbers of noise and nonnoise tokens.
- num_noise_tokens = min(max(num_noise_tokens, 1), length - 1)
- num_noise_spans = int(np.round(num_noise_tokens / self.mean_noise_span_length))
-
- # avoid degeneracy by ensuring positive number of noise spans
- num_noise_spans = max(num_noise_spans, 1)
- num_nonnoise_tokens = length - num_noise_tokens
-
- # pick the lengths of the noise spans and the non-noise spans
- def _random_segmentation(num_items, num_segments):
- """Partition a sequence of items randomly into non-empty segments.
- Args:
- num_items: an integer scalar > 0
- num_segments: an integer scalar in [1, num_items]
- Returns:
- a Tensor with shape [num_segments] containing positive integers that add
- up to num_items
- """
- mask_indices = np.arange(num_items - 1) < (num_segments - 1)
- np.random.shuffle(mask_indices)
- first_in_segment = np.pad(mask_indices, [[1, 0]])
- segment_id = np.cumsum(first_in_segment)
- # count length of sub segments assuming that list is sorted
- _, segment_length = np.unique(segment_id, return_counts=True)
- return segment_length
-
- noise_span_lengths = _random_segmentation(num_noise_tokens, num_noise_spans)
- nonnoise_span_lengths = _random_segmentation(num_nonnoise_tokens, num_noise_spans)
-
- interleaved_span_lengths = np.reshape(
- np.stack([nonnoise_span_lengths, noise_span_lengths], axis=1), [num_noise_spans * 2]
- )
- span_starts = np.cumsum(interleaved_span_lengths)[:-1]
- span_start_indicator = np.zeros((length,), dtype=np.int8)
- span_start_indicator[span_starts] = True
- span_num = np.cumsum(span_start_indicator)
- is_noise = np.equal(span_num % 2, 1)
-
- return is_noise[:orig_length]
-
-
-def generate_batch_splits(samples_idx: np.ndarray, batch_size: int, drop_last=True) -> np.ndarray:
- """Generate batches of data for a specified batch size from sample indices. If the dataset size is not divisible by
- the batch size and `drop_last` is `True`, the last incomplete batch is dropped. Else, it is returned."""
- num_samples = len(samples_idx)
- if drop_last:
- samples_to_remove = num_samples % batch_size
- if samples_to_remove != 0:
- samples_idx = samples_idx[:-samples_to_remove]
- sections_split = num_samples // batch_size
- samples_idx = samples_idx.reshape((sections_split, batch_size))
- else:
- sections_split = math.ceil(num_samples / batch_size)
- samples_idx = np.array_split(samples_idx, sections_split)
- return samples_idx
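A small hedged example of what `generate_batch_splits` returns for ten sample indices and a batch size of four (assuming the function above is in scope):

```python
# Hedged example for generate_batch_splits (defined above).
import numpy as np

idx = np.arange(10)
print(generate_batch_splits(idx, 4))
# [[0 1 2 3]
#  [4 5 6 7]]   -- the trailing partial batch (8, 9) is dropped
print(generate_batch_splits(idx, 4, drop_last=False))
# [array([0, 1, 2, 3]), array([4, 5, 6]), array([7, 8, 9])]  -- uneven splits are kept
```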
-
-
-def write_train_metric(summary_writer, train_metrics, train_time, step):
- summary_writer.scalar("train_time", train_time, step)
-
- train_metrics = get_metrics(train_metrics)
- for key, vals in train_metrics.items():
- tag = f"train_{key}"
- for i, val in enumerate(vals):
- summary_writer.scalar(tag, val, step - len(vals) + i + 1)
-
-
-def write_eval_metric(summary_writer, eval_metrics, step):
- for metric_name, value in eval_metrics.items():
- summary_writer.scalar(f"eval_{metric_name}", value, step)
-
-
-def main():
- # See all possible arguments in src/transformers/training_args.py
- # or by passing the --help flag to this script.
- # We now keep distinct sets of args, for a cleaner separation of concerns.
-
- parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))
- if len(sys.argv) == 2 and sys.argv[1].endswith(".json"):
- # If we pass only one argument to the script and it's the path to a json file,
- # let's parse it to get our arguments.
- model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1]))
- else:
- model_args, data_args, training_args = parser.parse_args_into_dataclasses()
-
- # Sending telemetry. Tracking the example usage helps us better allocate resources to maintain them. The
- # information sent is the one passed as arguments along with your Python/PyTorch versions.
- send_example_telemetry("run_t5_mlm", model_args, data_args, framework="flax")
-
- if (
- os.path.exists(training_args.output_dir)
- and os.listdir(training_args.output_dir)
- and training_args.do_train
- and not training_args.overwrite_output_dir
- ):
- raise ValueError(
- f"Output directory ({training_args.output_dir}) already exists and is not empty."
- "Use --overwrite_output_dir to overcome."
- )
-
- # Setup logging
- logging.basicConfig(
- format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
- level=logging.INFO,
- datefmt="[%X]",
- )
-
- # Log on each process the small summary:
- logger = logging.getLogger(__name__)
-
- # Set the verbosity to info of the Transformers logger (on main process only):
- logger.info(f"Training/evaluation parameters {training_args}")
-
- # Set seed before initializing model.
- set_seed(training_args.seed)
-
- # Handle the repository creation
- if training_args.push_to_hub:
- if training_args.hub_model_id is None:
- repo_name = get_full_repo_name(
- Path(training_args.output_dir).absolute().name, token=training_args.hub_token
- )
- else:
- repo_name = training_args.hub_model_id
- create_repo(repo_name, exist_ok=True, token=training_args.hub_token)
- repo = Repository(training_args.output_dir, clone_from=repo_name, token=training_args.hub_token)
-
- # Get the datasets: you can either provide your own CSV/JSON/TXT training and evaluation files (see below)
- # or just provide the name of one of the public datasets available on the hub at https://huggingface.co/datasets/
- # (the dataset will be downloaded automatically from the datasets Hub).
- #
- # For CSV/JSON files, this script will use the column called 'text' or the first column if no column called
- # 'text' is found. You can easily tweak this behavior (see below).
- if data_args.dataset_name is not None:
- # Downloading and loading a dataset from the hub.
- datasets = load_dataset(
- data_args.dataset_name,
- data_args.dataset_config_name,
- cache_dir=model_args.cache_dir,
- use_auth_token=True if model_args.use_auth_token else None,
- )
-
- if "validation" not in datasets.keys():
- datasets["validation"] = load_dataset(
- data_args.dataset_name,
- data_args.dataset_config_name,
- split=f"train[:{data_args.validation_split_percentage}%]",
- cache_dir=model_args.cache_dir,
- use_auth_token=True if model_args.use_auth_token else None,
- )
- datasets["train"] = load_dataset(
- data_args.dataset_name,
- data_args.dataset_config_name,
- split=f"train[{data_args.validation_split_percentage}%:]",
- cache_dir=model_args.cache_dir,
- use_auth_token=True if model_args.use_auth_token else None,
- )
- else:
- data_files = {}
- if data_args.train_file is not None:
- data_files["train"] = data_args.train_file
- if data_args.validation_file is not None:
- data_files["validation"] = data_args.validation_file
- extension = data_args.train_file.split(".")[-1]
- if extension == "txt":
- extension = "text"
- datasets = load_dataset(
- extension,
- data_files=data_files,
- cache_dir=model_args.cache_dir,
- use_auth_token=True if model_args.use_auth_token else None,
- )
-
- if "validation" not in datasets.keys():
- datasets["validation"] = load_dataset(
- extension,
- data_files=data_files,
- split=f"train[:{data_args.validation_split_percentage}%]",
- cache_dir=model_args.cache_dir,
- use_auth_token=True if model_args.use_auth_token else None,
- )
- datasets["train"] = load_dataset(
- extension,
- data_files=data_files,
- split=f"train[{data_args.validation_split_percentage}%:]",
- cache_dir=model_args.cache_dir,
- use_auth_token=True if model_args.use_auth_token else None,
- )
- # See more about loading any type of standard or custom dataset (from files, python dict, pandas DataFrame, etc) at
- # https://huggingface.co/docs/datasets/loading_datasets.html.
-
- # Load pretrained model and tokenizer
-
- if model_args.tokenizer_name:
- tokenizer = AutoTokenizer.from_pretrained(
- model_args.tokenizer_name,
- cache_dir=model_args.cache_dir,
- use_fast=model_args.use_fast_tokenizer,
- use_auth_token=True if model_args.use_auth_token else None,
- )
- elif model_args.model_name_or_path:
- tokenizer = AutoTokenizer.from_pretrained(
- model_args.model_name_or_path,
- cache_dir=model_args.cache_dir,
- use_fast=model_args.use_fast_tokenizer,
- use_auth_token=True if model_args.use_auth_token else None,
- )
- else:
- raise ValueError(
- "You are instantiating a new tokenizer from scratch. This is not supported by this script."
- "You can do it from another script, save it, and load it from here, using --tokenizer_name."
- )
-
- if model_args.config_name:
- config = T5Config.from_pretrained(
- model_args.config_name,
- cache_dir=model_args.cache_dir,
- vocab_size=len(tokenizer),
- use_auth_token=True if model_args.use_auth_token else None,
- )
- elif model_args.model_name_or_path:
- config = T5Config.from_pretrained(
- model_args.model_name_or_path,
- cache_dir=model_args.cache_dir,
- use_auth_token=True if model_args.use_auth_token else None,
- )
- else:
- config = CONFIG_MAPPING[model_args.model_type]()
- logger.warning("You are instantiating a new config instance from scratch.")
-
- # Preprocessing the datasets.
- # First we tokenize all the texts.
- if training_args.do_train:
- column_names = datasets["train"].column_names
- else:
- column_names = datasets["validation"].column_names
- text_column_name = "text" if "text" in column_names else column_names[0]
-
- max_seq_length = min(data_args.max_seq_length, tokenizer.model_max_length)
-
- # Otherwise, we tokenize every text, then concatenate them together before splitting them in smaller parts.
- # Since we make sure that all sequences are of the same length, no attention_mask is needed.
- def tokenize_function(examples):
- return tokenizer(examples[text_column_name], return_attention_mask=False)
-
- tokenized_datasets = datasets.map(
- tokenize_function,
- batched=True,
- num_proc=data_args.preprocessing_num_workers,
- remove_columns=column_names,
- load_from_cache_file=not data_args.overwrite_cache,
- )
-
- # T5-like span masked language modeling will fuse consecutively masked tokens to a single sentinel token.
- # To ensure that the input length is `max_seq_length`, we need to increase the maximum length
- # according to `mlm_probability` and `mean_noise_span_length`. We can also define the label length accordingly.
- expanded_inputs_length, targets_length = compute_input_and_target_lengths(
- inputs_length=max_seq_length,
- noise_density=data_args.mlm_probability,
- mean_noise_span_length=data_args.mean_noise_span_length,
- )
-
- # Main data processing function that will concatenate all texts from our dataset and generate chunks of expanded_inputs_length.
- def group_texts(examples):
- # Concatenate all texts.
- concatenated_examples = {k: list(chain(*examples[k])) for k in examples.keys()}
- total_length = len(concatenated_examples[list(examples.keys())[0]])
- # We drop the small remainder, we could add padding if the model supported it instead of this drop, you can
- # customize this part to your needs.
- if total_length >= expanded_inputs_length:
- total_length = (total_length // expanded_inputs_length) * expanded_inputs_length
- # Split by chunks of max_len.
- result = {
- k: [t[i : i + expanded_inputs_length] for i in range(0, total_length, expanded_inputs_length)]
- for k, t in concatenated_examples.items()
- }
- return result
-
- # Note that with `batched=True`, this map processes 1,000 texts together, so group_texts throws away a
- # remainder for each of those groups of 1,000 texts. You can adjust that batch_size here but a higher value
- # might be slower to preprocess.
- #
- # To speed up this part, we use multiprocessing. See the documentation of the map method for more information:
- # https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.map
- tokenized_datasets = tokenized_datasets.map(
- group_texts,
- batched=True,
- num_proc=data_args.preprocessing_num_workers,
- load_from_cache_file=not data_args.overwrite_cache,
- )
-
- # Enable tensorboard only on the master node
- has_tensorboard = is_tensorboard_available()
- if has_tensorboard and jax.process_index() == 0:
- try:
- from flax.metrics.tensorboard import SummaryWriter
-
- summary_writer = SummaryWriter(log_dir=Path(training_args.output_dir))
- except ImportError as ie:
- has_tensorboard = False
- logger.warning(
- f"Unable to display metrics through TensorBoard because some package are not installed: {ie}"
- )
- else:
- logger.warning(
- "Unable to display metrics through TensorBoard because the package is not installed: "
- "Please run pip install tensorboard to enable."
- )
-
- # Initialize our training
- rng = jax.random.PRNGKey(training_args.seed)
- dropout_rngs = jax.random.split(rng, jax.local_device_count())
-
- if model_args.model_name_or_path:
- model = FlaxT5ForConditionalGeneration.from_pretrained(
- model_args.model_name_or_path,
- config=config,
- seed=training_args.seed,
- dtype=getattr(jnp, model_args.dtype),
- use_auth_token=True if model_args.use_auth_token else None,
- )
- else:
- config.vocab_size = len(tokenizer)
- model = FlaxT5ForConditionalGeneration(
- config,
- seed=training_args.seed,
- dtype=getattr(jnp, model_args.dtype),
- )
-
- # Data collator
- # This one will take care of randomly masking the tokens.
- data_collator = FlaxDataCollatorForT5MLM(
- tokenizer=tokenizer,
- noise_density=data_args.mlm_probability,
- mean_noise_span_length=data_args.mean_noise_span_length,
- input_length=max_seq_length,
- target_length=targets_length,
- pad_token_id=model.config.pad_token_id,
- decoder_start_token_id=model.config.decoder_start_token_id,
- )
-
- # Store some constant
- num_epochs = int(training_args.num_train_epochs)
- train_batch_size = int(training_args.per_device_train_batch_size) * jax.device_count()
- per_device_eval_batch_size = int(training_args.per_device_eval_batch_size)
- eval_batch_size = per_device_eval_batch_size * jax.device_count()
-
- num_train_steps = len(tokenized_datasets["train"]) // train_batch_size * num_epochs
-
- num_of_hosts = jax.process_count()
- current_host_idx = jax.process_index()
-
- # Create learning rate schedule
- warmup_fn = optax.linear_schedule(
- init_value=0.0, end_value=training_args.learning_rate, transition_steps=training_args.warmup_steps
- )
- decay_fn = optax.linear_schedule(
- init_value=training_args.learning_rate,
- end_value=0,
- transition_steps=num_train_steps - training_args.warmup_steps,
- )
- linear_decay_lr_schedule_fn = optax.join_schedules(
- schedules=[warmup_fn, decay_fn], boundaries=[training_args.warmup_steps]
- )
-
- # We use Optax's "masking" functionality to not apply weight decay
- # to bias and LayerNorm scale parameters. decay_mask_fn returns a
- # mask boolean with the same structure as the parameters.
- # The mask is True for parameters that should be decayed.
- def decay_mask_fn(params):
- flat_params = traverse_util.flatten_dict(params)
- # find out all LayerNorm parameters
- layer_norm_candidates = ["layernorm", "layer_norm", "ln"]
- layer_norm_named_params = {
- layer[-2:]
- for layer_norm_name in layer_norm_candidates
- for layer in flat_params.keys()
- if layer_norm_name in "".join(layer).lower()
- }
- flat_mask = {path: (path[-1] != "bias" and path[-2:] not in layer_norm_named_params) for path in flat_params}
- return traverse_util.unflatten_dict(flat_mask)
-
- # create adam optimizer
- if training_args.adafactor:
- # We use the default parameters here to initialize adafactor,
- # For more details about the parameters please check https://github.com/deepmind/optax/blob/ed02befef9bf81cbbf236be3d2b0e032e9ed4a40/optax/_src/alias.py#L74
- optimizer = optax.adafactor(
- learning_rate=linear_decay_lr_schedule_fn,
- )
- else:
- optimizer = optax.adamw(
- learning_rate=linear_decay_lr_schedule_fn,
- b1=training_args.adam_beta1,
- b2=training_args.adam_beta2,
- weight_decay=training_args.weight_decay,
- mask=decay_mask_fn,
- )
-
- # Setup train state
- state = train_state.TrainState.create(apply_fn=model.__call__, params=model.params, tx=optimizer)
-
- # Define gradient update step fn
- def train_step(state, batch, dropout_rng):
- dropout_rng, new_dropout_rng = jax.random.split(dropout_rng)
-
- def loss_fn(params):
- labels = batch.pop("labels")
-
- logits = state.apply_fn(**batch, params=params, dropout_rng=dropout_rng, train=True)[0]
-
- # compute loss
- loss = optax.softmax_cross_entropy(logits, onehot(labels, logits.shape[-1])).mean()
-
- return loss
-
- grad_fn = jax.value_and_grad(loss_fn)
- loss, grad = grad_fn(state.params)
- grad = jax.lax.pmean(grad, "batch")
- new_state = state.apply_gradients(grads=grad)
-
- metrics = jax.lax.pmean(
- {"loss": loss, "learning_rate": linear_decay_lr_schedule_fn(state.step)}, axis_name="batch"
- )
-
- return new_state, metrics, new_dropout_rng
-
- # Create parallel version of the train step
- p_train_step = jax.pmap(train_step, "batch", donate_argnums=(0,))
-
- # Define eval fn
- def eval_step(params, batch):
- labels = batch.pop("labels")
-
- logits = model(**batch, params=params, train=False)[0]
-
- # compute loss
- loss = optax.softmax_cross_entropy(logits, onehot(labels, logits.shape[-1]))
-
- # compute accuracy
- accuracy = jnp.equal(jnp.argmax(logits, axis=-1), labels)
-
- # summarize metrics
- metrics = {"loss": loss.mean(), "accuracy": accuracy.mean()}
- metrics = jax.lax.pmean(metrics, axis_name="batch")
-
- return metrics
-
- p_eval_step = jax.pmap(eval_step, "batch", donate_argnums=(0,))
-
- # Replicate the train state on each device
- state = jax_utils.replicate(state)
-
- train_time = 0
- epochs = tqdm(range(num_epochs), desc="Epoch ... ", position=0)
- for epoch in epochs:
- # ======================== Training ================================
- train_start = time.time()
- train_metrics = []
-
- # Create sampling rng
- rng, input_rng = jax.random.split(rng)
-
- # Generate an epoch by shuffling sampling indices from the train dataset
- num_train_samples = len(tokenized_datasets["train"])
- # Avoid using jax.numpy here in case of TPU training
- train_samples_idx = np.random.permutation(np.arange(num_train_samples))
- train_batch_idx = generate_batch_splits(train_samples_idx, train_batch_size)
-
- # Gather the indexes for creating the batch and do a training step
- for step, batch_idx in enumerate(tqdm(train_batch_idx, desc="Training...", position=1)):
- samples = [tokenized_datasets["train"][int(idx)] for idx in batch_idx]
- model_inputs = data_collator(samples)
-
- local_host_model_inputs = {
- key: np.split(model_inputs.data[key], num_of_hosts, axis=0)[current_host_idx]
- for key, value in model_inputs.data.items()
- }
-
- # Model forward
- model_inputs = shard(local_host_model_inputs)
- state, train_metric, dropout_rngs = p_train_step(state, model_inputs, dropout_rngs)
- train_metrics.append(train_metric)
-
- cur_step = epoch * (num_train_samples // train_batch_size) + step
-
- if cur_step % training_args.logging_steps == 0 and cur_step > 0:
- # Save metrics
- train_metric = jax_utils.unreplicate(train_metric)
- train_time += time.time() - train_start
- if has_tensorboard and jax.process_index() == 0:
- write_train_metric(summary_writer, train_metrics, train_time, cur_step)
-
- epochs.write(
- f"Step... ({cur_step} | Loss: {train_metric['loss'].mean()}, Learning Rate:"
- f" {train_metric['learning_rate'].mean()})"
- )
-
- train_metrics = []
-
- if cur_step % training_args.eval_steps == 0 and cur_step > 0:
- # ======================== Evaluating ==============================
- num_eval_samples = len(tokenized_datasets["validation"])
- # Avoid using jax.numpy here in case of TPU training
- eval_samples_idx = np.arange(num_eval_samples)
- eval_batch_idx = generate_batch_splits(eval_samples_idx, eval_batch_size, drop_last=False)
-
- eval_metrics = []
- for i, batch_idx in enumerate(tqdm(eval_batch_idx, desc="Evaluating ...", position=2)):
- samples = [tokenized_datasets["validation"][int(idx)] for idx in batch_idx]
- model_inputs = data_collator(samples)
-
- # Model forward
- metrics = pad_shard_unpad(p_eval_step, static_return=True)(
- state.params, model_inputs.data, min_device_batch=per_device_eval_batch_size
- )
- eval_metrics.append(metrics)
-
- # get eval metrics
- eval_metrics = get_metrics(eval_metrics)
- eval_metrics = jax.tree_util.tree_map(jnp.mean, eval_metrics)
-
- # Update progress bar
- epochs.write(f"Step... ({cur_step} | Loss: {eval_metrics['loss']}, Acc: {eval_metrics['accuracy']})")
-
- # Save metrics
- if has_tensorboard and jax.process_index() == 0:
- write_eval_metric(summary_writer, eval_metrics, cur_step)
-
- if cur_step % training_args.save_steps == 0 and cur_step > 0:
- # save checkpoint after each epoch and push checkpoint to the hub
- if jax.process_index() == 0:
- params = jax.device_get(jax.tree_util.tree_map(lambda x: x[0], state.params))
- model.save_pretrained(training_args.output_dir, params=params)
- tokenizer.save_pretrained(training_args.output_dir)
- if training_args.push_to_hub:
- repo.push_to_hub(commit_message=f"Saving weights and logs of step {cur_step}", blocking=False)
-
- # Eval after training
- if training_args.do_eval:
- num_eval_samples = len(tokenized_datasets["validation"])
- # Avoid using jax.numpy here in case of TPU training
- eval_samples_idx = np.arange(num_eval_samples)
- eval_batch_idx = generate_batch_splits(eval_samples_idx, eval_batch_size, drop_last=False)
-
- eval_metrics = []
- for i, batch_idx in enumerate(tqdm(eval_batch_idx, desc="Evaluating ...", position=2)):
- samples = [tokenized_datasets["validation"][int(idx)] for idx in batch_idx]
- model_inputs = data_collator(samples)
-
- # Model forward
- metrics = pad_shard_unpad(p_eval_step, static_return=True)(
- state.params, model_inputs.data, min_device_batch=per_device_eval_batch_size
- )
- eval_metrics.append(metrics)
-
- # get eval metrics
- eval_metrics = get_metrics(eval_metrics)
- eval_metrics = jax.tree_util.tree_map(lambda metric: jnp.mean(metric).item(), eval_metrics)
-
- if jax.process_index() == 0:
- eval_metrics = {f"eval_{metric_name}": value for metric_name, value in eval_metrics.items()}
- path = os.path.join(training_args.output_dir, "eval_results.json")
- with open(path, "w") as f:
- json.dump(eval_metrics, f, indent=4, sort_keys=True)
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/chenyangqi/FateZero/example.py b/spaces/chenyangqi/FateZero/example.py
deleted file mode 100644
index c307cd99fc7cba85919faf7c29e81034e4248cc8..0000000000000000000000000000000000000000
--- a/spaces/chenyangqi/FateZero/example.py
+++ /dev/null
@@ -1,85 +0,0 @@
-num_steps = 15
-style_example = [
- [
- 'CompVis/stable-diffusion-v1-4',
- 'FateZero/data/teaser_car-turn.mp4',
- 'a silver jeep driving down a curvy road in the countryside',
- 'watercolor painting of a silver jeep driving down a curvy road in the countryside',
- 0.8,
- 0.8,
- "watercolor",
- 10,
- num_steps,
- 7.5,
- # input video argument
- None, 0, 8, 1, 0,0,0,0
-
- ],
- [
- 'CompVis/stable-diffusion-v1-4',
- 'FateZero/data/style/sunflower.mp4',
- 'a yellow sunflower',
- 'van gogh style painting of a yellow sunflower',
- 0.5,
- 0.5,
- 'van gogh',
- 10,
- num_steps,
- 7.5,
- None, 0, 8, 1, 0,0,0,0
- ],
- [
- 'CompVis/stable-diffusion-v1-4',
- 'FateZero/data/style/surf.mp4',
- 'a man with round helmet surfing on a white wave in blue ocean with a rope',
- 'The Ukiyo-e style painting of a man with round helmet surfing on a white wave in blue ocean with a rope',
- 0.9,
- 0.9,
- 'Ukiyo-e',
- 10,
- num_steps,
- 7.5,
- None, 0, 8, 1, 0,0,0,0
- ],
- [
- 'CompVis/stable-diffusion-v1-4',
- 'FateZero/data/style/train.mp4',
- 'a train traveling down tracks next to a forest filled with trees and flowers and a man on the side of the track',
- 'a train traveling down tracks next to a forest filled with trees and flowers and a man on the side of the track Makoto Shinkai style',
- 0.9,
- 0.9,
- 'Makoto Shinkai',
- 10,
- num_steps,
- 7.5,
- None, 0, 8, 28, 0,0,0,0
- ],
-
- [
- 'CompVis/stable-diffusion-v1-4',
- 'FateZero/data/attribute/swan_swarov.mp4',
- 'a black swan with a red beak swimming in a river near a wall and bushes',
- 'a Swarovski crystal swan with a red beak swimming in a river near a wall and bushes',
- 0.8,
- 0.6,
- 'Swarovski crystal',
- 10,
- num_steps,
- 7.5,
- None, 0, 8, 1, 0,0,0,0
- ],
- [
- 'CompVis/stable-diffusion-v1-4',
- 'FateZero/data/attribute/squirrel_carrot.mp4',
- 'A squirrel is eating a carrot',
- 'A rabbit is eating a eggplant',
- 0.5,
- 0.5,
- 'rabbit eggplant',
- 10,
- num_steps,
- 7.5,
- None, 0, 8, 1, 0,0,0,0
- ],
-
-]
\ No newline at end of file
diff --git a/spaces/chilge/taoli/attentions.py b/spaces/chilge/taoli/attentions.py
deleted file mode 100644
index 4e0b0c1fd48c962e21e1fbe60b23fc574927435c..0000000000000000000000000000000000000000
--- a/spaces/chilge/taoli/attentions.py
+++ /dev/null
@@ -1,303 +0,0 @@
-import copy
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import commons
-import modules
-from modules import LayerNorm
-
-
-class Encoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.window_size = window_size
-
- self.drop = nn.Dropout(p_dropout)
- self.attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask):
- attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.attn_layers[i](x, x, attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class Decoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
-
- self.drop = nn.Dropout(p_dropout)
- self.self_attn_layers = nn.ModuleList()
- self.norm_layers_0 = nn.ModuleList()
- self.encdec_attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init))
- self.norm_layers_0.append(LayerNorm(hidden_channels))
- self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask, h, h_mask):
- """
- x: decoder input
- h: encoder output
- """
- self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype)
- encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.self_attn_layers[i](x, x, self_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_0[i](x + y)
-
- y = self.encdec_attn_layers[i](x, h, encdec_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class MultiHeadAttention(nn.Module):
- def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False):
- super().__init__()
- assert channels % n_heads == 0
-
- self.channels = channels
- self.out_channels = out_channels
- self.n_heads = n_heads
- self.p_dropout = p_dropout
- self.window_size = window_size
- self.heads_share = heads_share
- self.block_length = block_length
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
- self.attn = None
-
- self.k_channels = channels // n_heads
- self.conv_q = nn.Conv1d(channels, channels, 1)
- self.conv_k = nn.Conv1d(channels, channels, 1)
- self.conv_v = nn.Conv1d(channels, channels, 1)
- self.conv_o = nn.Conv1d(channels, out_channels, 1)
- self.drop = nn.Dropout(p_dropout)
-
- if window_size is not None:
- n_heads_rel = 1 if heads_share else n_heads
- rel_stddev = self.k_channels**-0.5
- self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
- self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
-
- nn.init.xavier_uniform_(self.conv_q.weight)
- nn.init.xavier_uniform_(self.conv_k.weight)
- nn.init.xavier_uniform_(self.conv_v.weight)
- if proximal_init:
- with torch.no_grad():
- self.conv_k.weight.copy_(self.conv_q.weight)
- self.conv_k.bias.copy_(self.conv_q.bias)
-
- def forward(self, x, c, attn_mask=None):
- q = self.conv_q(x)
- k = self.conv_k(c)
- v = self.conv_v(c)
-
- x, self.attn = self.attention(q, k, v, mask=attn_mask)
-
- x = self.conv_o(x)
- return x
-
- def attention(self, query, key, value, mask=None):
- # reshape [b, d, t] -> [b, n_h, t, d_k]
- b, d, t_s, t_t = (*key.size(), query.size(2))
- query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
- key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
- value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-
- scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
- if self.window_size is not None:
- assert t_s == t_t, "Relative attention is only available for self-attention."
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
-      rel_logits = self._matmul_with_relative_keys(query / math.sqrt(self.k_channels), key_relative_embeddings)
- scores_local = self._relative_position_to_absolute_position(rel_logits)
- scores = scores + scores_local
- if self.proximal_bias:
- assert t_s == t_t, "Proximal bias is only available for self-attention."
- scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype)
- if mask is not None:
- scores = scores.masked_fill(mask == 0, -1e4)
- if self.block_length is not None:
- assert t_s == t_t, "Local attention is only available for self-attention."
- block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length)
- scores = scores.masked_fill(block_mask == 0, -1e4)
- p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s]
- p_attn = self.drop(p_attn)
- output = torch.matmul(p_attn, value)
- if self.window_size is not None:
- relative_weights = self._absolute_position_to_relative_position(p_attn)
- value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s)
- output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings)
- output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t]
- return output, p_attn
-
- def _matmul_with_relative_values(self, x, y):
- """
- x: [b, h, l, m]
- y: [h or 1, m, d]
- ret: [b, h, l, d]
- """
- ret = torch.matmul(x, y.unsqueeze(0))
- return ret
-
- def _matmul_with_relative_keys(self, x, y):
- """
- x: [b, h, l, d]
- y: [h or 1, m, d]
- ret: [b, h, l, m]
- """
- ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
- return ret
-
- def _get_relative_embeddings(self, relative_embeddings, length):
- max_relative_position = 2 * self.window_size + 1
- # Pad first before slice to avoid using cond ops.
- pad_length = max(length - (self.window_size + 1), 0)
- slice_start_position = max((self.window_size + 1) - length, 0)
- slice_end_position = slice_start_position + 2 * length - 1
- if pad_length > 0:
- padded_relative_embeddings = F.pad(
- relative_embeddings,
- commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]))
- else:
- padded_relative_embeddings = relative_embeddings
- used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position]
- return used_relative_embeddings
-
- def _relative_position_to_absolute_position(self, x):
- """
- x: [b, h, l, 2*l-1]
- ret: [b, h, l, l]
- """
- batch, heads, length, _ = x.size()
- # Concat columns of pad to shift from relative to absolute indexing.
- x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]]))
-
-    # Concat extra elements so that the total adds up to shape (len+1, 2*len-1).
- x_flat = x.view([batch, heads, length * 2 * length])
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]]))
-
- # Reshape and slice out the padded elements.
- x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:]
- return x_final
-
- def _absolute_position_to_relative_position(self, x):
- """
- x: [b, h, l, l]
- ret: [b, h, l, 2*l-1]
- """
- batch, heads, length, _ = x.size()
-    # pad along the column dimension
- x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]]))
- x_flat = x.view([batch, heads, length**2 + length*(length -1)])
- # add 0's in the beginning that will skew the elements after reshape
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
- x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:]
- return x_final
-
- def _attention_bias_proximal(self, length):
- """Bias for self-attention to encourage attention to close positions.
- Args:
- length: an integer scalar.
- Returns:
- a Tensor with shape [1, 1, length, length]
- """
- r = torch.arange(length, dtype=torch.float32)
- diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
- return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
-
-
-class FFN(nn.Module):
- def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.activation = activation
- self.causal = causal
-
- if causal:
- self.padding = self._causal_padding
- else:
- self.padding = self._same_padding
-
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
- self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
- self.drop = nn.Dropout(p_dropout)
-
- def forward(self, x, x_mask):
- x = self.conv_1(self.padding(x * x_mask))
- if self.activation == "gelu":
- x = x * torch.sigmoid(1.702 * x)
- else:
- x = torch.relu(x)
- x = self.drop(x)
- x = self.conv_2(self.padding(x * x_mask))
- return x * x_mask
-
- def _causal_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = self.kernel_size - 1
- pad_r = 0
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
-
- def _same_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = (self.kernel_size - 1) // 2
- pad_r = self.kernel_size // 2
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
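The pad/reshape/slice trick used by `_relative_position_to_absolute_position` above is easy to verify numerically. The following self-contained sketch (plain PyTorch, independent of the `commons` helpers) maps relative-position logits of shape `[b, h, l, 2*l-1]` to absolute positions `[b, h, l, l]` and cross-checks the result against the naive index arithmetic `j - i + (l - 1)`:

```python
import torch
import torch.nn.functional as F

b, h, l = 1, 1, 4
rel = torch.arange(b * h * l * (2 * l - 1), dtype=torch.float32).view(b, h, l, 2 * l - 1)

x = F.pad(rel, (0, 1))                                       # pad one column on the right -> [b, h, l, 2*l]
x = x.view(b, h, l * 2 * l)                                  # flatten the last two dimensions
x = F.pad(x, (0, l - 1))                                     # append l-1 zeros
absolute = x.view(b, h, l + 1, 2 * l - 1)[:, :, :l, l - 1:]  # reshape and slice -> [b, h, l, l]

# Entry (i, j) of the absolute matrix should read relative offset j - i + (l - 1).
for i in range(l):
    for j in range(l):
        assert absolute[0, 0, i, j] == rel[0, 0, i, j - i + l - 1]
print(absolute[0, 0])
```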
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/coloredlogs/converter/colors.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/coloredlogs/converter/colors.py
deleted file mode 100644
index 6e6d8f1afa57b36f78f4a004b6522eb3a781c65e..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/coloredlogs/converter/colors.py
+++ /dev/null
@@ -1,310 +0,0 @@
-# Mapping of ANSI color codes to HTML/CSS colors.
-#
-# Author: Peter Odding
-# Last Change: January 14, 2018
-# URL: https://coloredlogs.readthedocs.io
-
-"""Mapping of ANSI color codes to HTML/CSS colors."""
-
-EIGHT_COLOR_PALETTE = (
- '#010101', # black
- '#DE382B', # red
- '#39B54A', # green
- '#FFC706', # yellow
- '#006FB8', # blue
- '#762671', # magenta
- '#2CB5E9', # cyan
- '#CCC', # white
-)
-"""
-A tuple of strings mapping basic color codes to CSS colors.
-
-The items in this tuple correspond to the eight basic color codes for black,
-red, green, yellow, blue, magenta, cyan and white as defined in the original
-standard for ANSI escape sequences. The CSS colors are based on the `Ubuntu
-color scheme`_ described on Wikipedia and they are encoded as hexadecimal
-values to get the shortest strings, which reduces the size (in bytes) of
-conversion output.
-
-.. _Ubuntu color scheme: https://en.wikipedia.org/wiki/ANSI_escape_code#Colors
-"""
-
-BRIGHT_COLOR_PALETTE = (
- '#808080', # black
- '#F00', # red
- '#0F0', # green
- '#FF0', # yellow
- '#00F', # blue
- '#F0F', # magenta
- '#0FF', # cyan
- '#FFF', # white
-)
-"""
-A tuple of strings mapping bright color codes to CSS colors.
-
-This tuple maps the bright color variants of :data:`EIGHT_COLOR_PALETTE`.
-"""
-
-EXTENDED_COLOR_PALETTE = (
- '#000000',
- '#800000',
- '#008000',
- '#808000',
- '#000080',
- '#800080',
- '#008080',
- '#C0C0C0',
- '#808080',
- '#FF0000',
- '#00FF00',
- '#FFFF00',
- '#0000FF',
- '#FF00FF',
- '#00FFFF',
- '#FFFFFF',
- '#000000',
- '#00005F',
- '#000087',
- '#0000AF',
- '#0000D7',
- '#0000FF',
- '#005F00',
- '#005F5F',
- '#005F87',
- '#005FAF',
- '#005FD7',
- '#005FFF',
- '#008700',
- '#00875F',
- '#008787',
- '#0087AF',
- '#0087D7',
- '#0087FF',
- '#00AF00',
- '#00AF5F',
- '#00AF87',
- '#00AFAF',
- '#00AFD7',
- '#00AFFF',
- '#00D700',
- '#00D75F',
- '#00D787',
- '#00D7AF',
- '#00D7D7',
- '#00D7FF',
- '#00FF00',
- '#00FF5F',
- '#00FF87',
- '#00FFAF',
- '#00FFD7',
- '#00FFFF',
- '#5F0000',
- '#5F005F',
- '#5F0087',
- '#5F00AF',
- '#5F00D7',
- '#5F00FF',
- '#5F5F00',
- '#5F5F5F',
- '#5F5F87',
- '#5F5FAF',
- '#5F5FD7',
- '#5F5FFF',
- '#5F8700',
- '#5F875F',
- '#5F8787',
- '#5F87AF',
- '#5F87D7',
- '#5F87FF',
- '#5FAF00',
- '#5FAF5F',
- '#5FAF87',
- '#5FAFAF',
- '#5FAFD7',
- '#5FAFFF',
- '#5FD700',
- '#5FD75F',
- '#5FD787',
- '#5FD7AF',
- '#5FD7D7',
- '#5FD7FF',
- '#5FFF00',
- '#5FFF5F',
- '#5FFF87',
- '#5FFFAF',
- '#5FFFD7',
- '#5FFFFF',
- '#870000',
- '#87005F',
- '#870087',
- '#8700AF',
- '#8700D7',
- '#8700FF',
- '#875F00',
- '#875F5F',
- '#875F87',
- '#875FAF',
- '#875FD7',
- '#875FFF',
- '#878700',
- '#87875F',
- '#878787',
- '#8787AF',
- '#8787D7',
- '#8787FF',
- '#87AF00',
- '#87AF5F',
- '#87AF87',
- '#87AFAF',
- '#87AFD7',
- '#87AFFF',
- '#87D700',
- '#87D75F',
- '#87D787',
- '#87D7AF',
- '#87D7D7',
- '#87D7FF',
- '#87FF00',
- '#87FF5F',
- '#87FF87',
- '#87FFAF',
- '#87FFD7',
- '#87FFFF',
- '#AF0000',
- '#AF005F',
- '#AF0087',
- '#AF00AF',
- '#AF00D7',
- '#AF00FF',
- '#AF5F00',
- '#AF5F5F',
- '#AF5F87',
- '#AF5FAF',
- '#AF5FD7',
- '#AF5FFF',
- '#AF8700',
- '#AF875F',
- '#AF8787',
- '#AF87AF',
- '#AF87D7',
- '#AF87FF',
- '#AFAF00',
- '#AFAF5F',
- '#AFAF87',
- '#AFAFAF',
- '#AFAFD7',
- '#AFAFFF',
- '#AFD700',
- '#AFD75F',
- '#AFD787',
- '#AFD7AF',
- '#AFD7D7',
- '#AFD7FF',
- '#AFFF00',
- '#AFFF5F',
- '#AFFF87',
- '#AFFFAF',
- '#AFFFD7',
- '#AFFFFF',
- '#D70000',
- '#D7005F',
- '#D70087',
- '#D700AF',
- '#D700D7',
- '#D700FF',
- '#D75F00',
- '#D75F5F',
- '#D75F87',
- '#D75FAF',
- '#D75FD7',
- '#D75FFF',
- '#D78700',
- '#D7875F',
- '#D78787',
- '#D787AF',
- '#D787D7',
- '#D787FF',
- '#D7AF00',
- '#D7AF5F',
- '#D7AF87',
- '#D7AFAF',
- '#D7AFD7',
- '#D7AFFF',
- '#D7D700',
- '#D7D75F',
- '#D7D787',
- '#D7D7AF',
- '#D7D7D7',
- '#D7D7FF',
- '#D7FF00',
- '#D7FF5F',
- '#D7FF87',
- '#D7FFAF',
- '#D7FFD7',
- '#D7FFFF',
- '#FF0000',
- '#FF005F',
- '#FF0087',
- '#FF00AF',
- '#FF00D7',
- '#FF00FF',
- '#FF5F00',
- '#FF5F5F',
- '#FF5F87',
- '#FF5FAF',
- '#FF5FD7',
- '#FF5FFF',
- '#FF8700',
- '#FF875F',
- '#FF8787',
- '#FF87AF',
- '#FF87D7',
- '#FF87FF',
- '#FFAF00',
- '#FFAF5F',
- '#FFAF87',
- '#FFAFAF',
- '#FFAFD7',
- '#FFAFFF',
- '#FFD700',
- '#FFD75F',
- '#FFD787',
- '#FFD7AF',
- '#FFD7D7',
- '#FFD7FF',
- '#FFFF00',
- '#FFFF5F',
- '#FFFF87',
- '#FFFFAF',
- '#FFFFD7',
- '#FFFFFF',
- '#080808',
- '#121212',
- '#1C1C1C',
- '#262626',
- '#303030',
- '#3A3A3A',
- '#444444',
- '#4E4E4E',
- '#585858',
- '#626262',
- '#6C6C6C',
- '#767676',
- '#808080',
- '#8A8A8A',
- '#949494',
- '#9E9E9E',
- '#A8A8A8',
- '#B2B2B2',
- '#BCBCBC',
- '#C6C6C6',
- '#D0D0D0',
- '#DADADA',
- '#E4E4E4',
- '#EEEEEE',
-)
-"""
-A tuple of strings mapping 256 color mode color codes to CSS colors.
-
-The items in this tuple correspond to the color codes in the 256 color mode palette.
-"""
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/confection/util.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/confection/util.py
deleted file mode 100644
index d2041186c1a07f2c94341a8a51b19ec03ac6bebf..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/confection/util.py
+++ /dev/null
@@ -1,136 +0,0 @@
-import functools
-import sys
-from typing import Any, Callable, Iterator, TypeVar
-
-if sys.version_info < (3, 8):
- # Ignoring type for mypy to avoid "Incompatible import" error (https://github.com/python/mypy/issues/4427).
- from typing_extensions import Protocol # type: ignore
-else:
- from typing import Protocol
-
-_DIn = TypeVar("_DIn")
-
-
-class Decorator(Protocol):
- """Protocol to mark a function as returning its child with identical signature."""
-
- def __call__(self, name: str) -> Callable[[_DIn], _DIn]:
- ...
-
-
-# This is how functools.partial seems to do it, too, to retain the return type
-PartialT = TypeVar("PartialT")
-
-
-def partial(
- func: Callable[..., PartialT], *args: Any, **kwargs: Any
-) -> Callable[..., PartialT]:
- """Wrapper around functools.partial that retains docstrings and can include
- other workarounds if needed.
- """
- partial_func = functools.partial(func, *args, **kwargs)
- partial_func.__doc__ = func.__doc__
- return partial_func
-
-
-class Generator(Iterator):
- """Custom generator type. Used to annotate function arguments that accept
- generators so they can be validated by pydantic (which doesn't support
- iterators/iterables otherwise).
- """
-
- @classmethod
- def __get_validators__(cls):
- yield cls.validate
-
- @classmethod
- def validate(cls, v):
- if not hasattr(v, "__iter__") and not hasattr(v, "__next__"):
- raise TypeError("not a valid iterator")
- return v
-
-
-DEFAULT_FROZEN_DICT_ERROR = (
- "Can't write to frozen dictionary. This is likely an internal "
- "error. Are you writing to a default function argument?"
-)
-
-DEFAULT_FROZEN_LIST_ERROR = (
- "Can't write to frozen list. Maybe you're trying to modify a computed "
- "property or default function argument?"
-)
-
-
-class SimpleFrozenDict(dict):
- """Simplified implementation of a frozen dict, mainly used as default
- function or method argument (for arguments that should default to empty
- dictionary). Will raise an error if the user attempts to add to dict.
- """
-
- def __init__(
- self,
- *args,
- error: str = DEFAULT_FROZEN_DICT_ERROR,
- **kwargs,
- ) -> None:
- """Initialize the frozen dict. Can be initialized with pre-defined
- values.
-
- error (str): The error message when user tries to assign to dict.
- """
- super().__init__(*args, **kwargs)
- self.error = error
-
- def __setitem__(self, key, value):
- raise NotImplementedError(self.error)
-
- def pop(self, key, default=None):
- raise NotImplementedError(self.error)
-
- def update(self, other):
- raise NotImplementedError(self.error)
-
-
-class SimpleFrozenList(list):
- """Wrapper class around a list that lets us raise custom errors if certain
- attributes/methods are accessed. Mostly used for properties that return an
-    immutable list (and that we don't want to convert to a tuple, so as not to break
-    too much backward compatibility). If a user accidentally calls
- frozen_list.append(), we can raise a more helpful error.
- """
-
- def __init__(
- self,
- *args,
- error: str = DEFAULT_FROZEN_LIST_ERROR,
- ) -> None:
- """Initialize the frozen list.
-
- error (str): The error message when user tries to mutate the list.
- """
- self.error = error
- super().__init__(*args)
-
- def append(self, *args, **kwargs):
- raise NotImplementedError(self.error)
-
- def clear(self, *args, **kwargs):
- raise NotImplementedError(self.error)
-
- def extend(self, *args, **kwargs):
- raise NotImplementedError(self.error)
-
- def insert(self, *args, **kwargs):
- raise NotImplementedError(self.error)
-
- def pop(self, *args, **kwargs):
- raise NotImplementedError(self.error)
-
- def remove(self, *args, **kwargs):
- raise NotImplementedError(self.error)
-
- def reverse(self, *args, **kwargs):
- raise NotImplementedError(self.error)
-
- def sort(self, *args, **kwargs):
- raise NotImplementedError(self.error)
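A minimal sketch of why a frozen mapping makes a safer default argument than a plain `dict`: writes to the shared default fail loudly instead of silently leaking state across calls. (`FrozenDict` below is a stripped-down stand-in for the `SimpleFrozenDict` defined above, so the snippet stays self-contained.)

```python
class FrozenDict(dict):
    """Stripped-down stand-in for SimpleFrozenDict: rejects writes."""

    def __setitem__(self, key, value):
        raise NotImplementedError("Can't write to frozen dictionary.")


def register(name, registry=FrozenDict()):
    # With a plain dict default this would silently mutate shared state;
    # with the frozen default it raises immediately.
    registry[name] = True
    return registry


try:
    register("tokenizer")
except NotImplementedError as err:
    print(err)  # Can't write to frozen dictionary.
```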
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/H_V_A_R_.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/H_V_A_R_.py
deleted file mode 100644
index 094aedaea5ebc5c88b33e448ea8f131563acd3c0..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/H_V_A_R_.py
+++ /dev/null
@@ -1,5 +0,0 @@
-from .otBase import BaseTTXConverter
-
-
-class table_H_V_A_R_(BaseTTXConverter):
- pass
diff --git a/spaces/cihyFjudo/fairness-paper-search/Dstorm Liquid Pack For Newtek Lightwave 32 And 64 Bit Create Realistic Liquids with This Free Tool (UPDATED).md b/spaces/cihyFjudo/fairness-paper-search/Dstorm Liquid Pack For Newtek Lightwave 32 And 64 Bit Create Realistic Liquids with This Free Tool (UPDATED).md
deleted file mode 100644
index 81e9f587d3ef417ed6c43decc5656870683c198d..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Dstorm Liquid Pack For Newtek Lightwave 32 And 64 Bit Create Realistic Liquids with This Free Tool (UPDATED).md
+++ /dev/null
@@ -1,6 +0,0 @@
-
Dstorm Liquid Pack For Newtek Lightwave 32 And 64 Bit Setup Free UPDATED
-
-
-
-
-
diff --git a/spaces/cihyFjudo/fairness-paper-search/Munni Metric Pass 2 720p Full Movie Download Discover the Secrets of Bowling and Englich Woll with Munni in this Comedy Hit.md b/spaces/cihyFjudo/fairness-paper-search/Munni Metric Pass 2 720p Full Movie Download Discover the Secrets of Bowling and Englich Woll with Munni in this Comedy Hit.md
deleted file mode 100644
index 799fb7a704e2371e29019a2514d605de7808fb1f..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Munni Metric Pass 2 720p Full Movie Download Discover the Secrets of Bowling and Englich Woll with Munni in this Comedy Hit.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
Munni Metric Pass 2 720p Full Movie Download bowling englich woll
Bitcoin is a digital currency that enables peer-to-peer transactions without intermediaries or central authorities. It is powered by a network of computers that run special software to validate and record transactions on a public ledger called the blockchain. To use bitcoin, you need to have some bitcoin software on your device. But what is bitcoin software and how do you choose, download, install, and use it? In this article, we will answer these questions and more.
-
Types of Bitcoin Software
-
There are different types of bitcoin software that serve different purposes and functions. Here are the main ones:
Wallets: These are applications that allow you to store, send, and receive bitcoins. They also provide you with a private key that proves your ownership of your bitcoins and a public address that you can share with others to receive payments. Wallets can be web-based, desktop-based, mobile-based, or hardware-based.
-
Miners: These are programs that use your computer's processing power to solve complex mathematical problems and earn bitcoins as a reward. They also help secure the network by verifying transactions and adding new blocks to the blockchain. Miners can be standalone software or part of a mining pool.
-
Nodes: These are computers that run a full copy of the bitcoin blockchain and enforce the rules of the network. They also relay transactions and blocks to other nodes. Nodes can be run by anyone who wants to support the network and have more control over their transactions.
-
-
How to Choose the Best Bitcoin Software for Your Needs
-
There is no one-size-fits-all solution when it comes to choosing bitcoin software. Depending on your goals, preferences, and resources, you may want to use different types of software or even multiple ones. Here are some factors to consider when making your choice:
-
-
Security: This is the most important factor when dealing with bitcoin. You want to make sure that your software is reliable, trustworthy, and protects your bitcoins from theft, loss, or hacking. Some features to look for are encryption, backup, recovery, multisig, cold storage, and open source.
-
Features: Depending on what you want to do with your bitcoins, you may need different features from your software. Some features to look for are transaction speed, fees, privacy, user interface, customer support, and extra services.
-
Compatibility: You want to make sure that your software is compatible with your device, operating system, and other software that you use. Some software may only work on certain platforms or devices, while others may require specific hardware or software requirements.
-
Ease of use: You want to make sure that your software is easy to download, install, set up, and use. Some software may have a steep learning curve or require technical skills, while others may be more user-friendly and intuitive.
-
-
How to Download and Install Bitcoin Software
-
The process of downloading and installing bitcoin software may vary depending on the type of software and the platform or device that you use. However, here are some general steps that you can follow:
-
-
Choose your software: Based on the factors mentioned above, choose the best bitcoin software for your needs. You can find various options on websites such as bitcoin.org, bitcoin.com, or bitcoincore.org.
-
Download your software: Go to the official website of your chosen software and click on the download link. Make sure that you download the latest version of the software from a trusted source, and avoid clicking on suspicious links or downloads from unofficial sources. (One way to verify a download is shown in the checksum sketch after this list.)
Install your software: Once you have downloaded your software, open the file and follow the instructions to install it on your device. You may need to agree to some terms and conditions, choose a location, and create a shortcut. Some software may also require you to verify your identity or create an account.
-
Set up your software: After you have installed your software, you need to set it up according to your preferences and needs. You may need to choose a password, a recovery phrase, a network, a fee level, or other options. Some software may also require you to sync with the blockchain, which can take some time and space.
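One concrete way to check that a downloaded installer has not been tampered with is to compare its SHA-256 hash against the checksum published on the project's official website. Below is a minimal Python sketch of that check; the file name and expected digest are placeholders, not real values.

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Compute the SHA-256 hex digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder values -- substitute the real installer name and the checksum
# published on the software's official download page.
downloaded_file = "wallet-installer.exe"
published_checksum = "0" * 64

if sha256_of(downloaded_file) == published_checksum:
    print("Checksum matches: the download is intact.")
else:
    print("Checksum mismatch: do not install this file.")
```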
-
-
How to Use Bitcoin Software
-
Once you have downloaded and installed your bitcoin software, you are ready to use it. Here are some basic tips and best practices for using bitcoin software:
-
-
Send and receive bitcoins: To send bitcoins, you need to enter the recipient's address, the amount, and the fee. You can also scan a QR code or use a contact list if your software supports it. To receive bitcoins, you need to share your address or QR code with the sender. You can also generate multiple addresses for different purposes or transactions.
-
Store your bitcoins: To store your bitcoins securely, you need to keep your private key safe and backup your wallet. You can also use a hardware wallet or a paper wallet for extra security. You should avoid storing large amounts of bitcoins on web-based or mobile-based wallets, as they are more vulnerable to hacking or theft.
-
Monitor your transactions: To monitor your transactions, you can use your software's transaction history or explorer. You can also use external services such as blockchain.com or blockexplorer.com. You can check the status, confirmation, and details of your transactions. You can also view the balance and value of your bitcoins.
-
-
Conclusion
-
Bitcoin software is essential for using bitcoin. It allows you to store, send, receive, and manage your bitcoins. There are different types of bitcoin software that serve different purposes and functions. You need to choose the best bitcoin software for your needs based on factors such as security, features, compatibility, and ease of use. You also need to download, install, and set up your bitcoin software properly. Finally, you need to use your bitcoin software wisely and safely by following some basic tips and best practices.
-
FAQ
-
What is the best bitcoin software?
-
There is no definitive answer to this question, as different users may have different preferences and needs. However, some of the most popular and reputable bitcoin software are:
-
-
Bitcoin Core: This is the original and official bitcoin software that runs a full node and supports the network. It is highly secure, feature-rich, and compatible with various platforms. However, it is also resource-intensive, complex, and slow.
-
Electrum: This is a lightweight and user-friendly bitcoin software that runs a client node and connects to external servers. It is fast, easy, and customizable. However, it is less secure, less private, and less reliable than running a full node.
-
Trezor: This is a hardware wallet that stores your private key offline and connects to your device via USB. It is very secure, convenient, and compatible with various software. However, it is expensive, limited in features, and dependent on external devices.
-
-
How do I update my bitcoin software?
-
To update your bitcoin software, you need to download the latest version of the software from the official website or source and install it on your device. You may need to uninstall the previous version first or overwrite it with the new one. You may also need to backup your wallet before updating.
-
How do I uninstall my bitcoin software?
-
To uninstall your bitcoin software, you need to delete the program files from your device. You may also need to delete the data files such as the blockchain or the wallet. However, before uninstalling your bitcoin software, you should make sure that you have backed up your wallet or transferred your bitcoins to another wallet.
-
How do I troubleshoot my bitcoin software?
-
To troubleshoot your bitcoin software, you need to identify the problem and find the possible solutions. Some common problems and solutions are:
-
-
-
Your software is not syncing with the network: This could be due to a slow internet connection, a firewall blocking the connection, or an outdated version of the software. You can try to restart your software, check your internet connection, disable your firewall, or update your software.
-
Your software is not sending or receiving bitcoins: This could be due to a low fee, a network congestion, a wrong address, or a corrupted wallet. You can try to increase your fee, wait for the network to clear, double-check your address, or restore your wallet.
-
Your software is not opening or crashing: This could be due to a virus, a malware, a hardware failure, or a software conflict. You can try to scan your device for viruses or malware, check your hardware for errors, or remove any conflicting software.
-
-
If none of these solutions work, you can also contact the customer support of your software or seek help from online forums or communities.
-
How do I secure my bitcoin software?
-
To secure your bitcoin software, you need to follow some basic security measures and precautions. Some of them are:
-
-
Use a strong password: You should use a password that is long, complex, and unique for your bitcoin software. You should also change it regularly and never share it with anyone.
-
Backup your wallet: You should backup your wallet regularly and store it in a safe and offline location. You should also encrypt it with a passphrase and test it for recovery.
-
Use a hardware wallet: You should use a hardware wallet to store your private key offline and connect it to your device only when you need to make a transaction. You should also keep it in a secure and physical location.
-
Update your software: You should update your software regularly to get the latest security patches and bug fixes. You should also download the updates only from the official website or source.
-
Be careful with phishing: You should be careful with any emails, messages, or websites that ask you for your password, private key, recovery phrase, or other sensitive information. You should also verify the sender's identity and the URL's authenticity before clicking on any links or attachments.
-
-
-
This is the end of the article. I hope you found it useful and informative. If you have any questions or feedback, please let me know. Thank you for reading!
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Free PDF Download of NCERT Class 12 Chemistry Book for 2020-21 Session.md b/spaces/congsaPfin/Manga-OCR/logs/Free PDF Download of NCERT Class 12 Chemistry Book for 2020-21 Session.md
deleted file mode 100644
index 701ce7282a6508de07508f97056a6be43d341776..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Free PDF Download of NCERT Class 12 Chemistry Book for 2020-21 Session.md
+++ /dev/null
@@ -1,127 +0,0 @@
-
-
Class 12 Chemistry Book PDF Download 2020-21
-
Are you looking for a Class 12 Chemistry book PDF for your board exams? If yes, then you have come to the right place. In this article, we will tell you everything you need to know about Class 12 Chemistry book PDF, including why you need it, how to download it for free, what are its benefits and features, and where to find the best sources. So, without further ado, let's get started.
-
Introduction
-
Chemistry is one of the most important subjects for Class 12 students who are preparing for various competitive exams like JEE, NEET, AIIMS, etc. It is also a fascinating subject that deals with the study of matter, its structure, properties, and reactions. However, to master Chemistry, you need a good book that can help you understand the concepts clearly and apply them in different situations.
A Class 12 Chemistry book is essential for your board exams as well as your entrance exams. It can help you in the following ways:
-
-
It can provide you with a comprehensive and systematic coverage of the entire syllabus.
-
It can help you clear your doubts and strengthen your fundamentals.
-
It can help you develop your analytical and problem-solving skills.
-
It can help you revise the topics quickly and effectively.
-
-
How to download Class 12 Chemistry book PDF for free?
-
If you want to download Class 12 Chemistry book PDF for free, you have two options:
-
-
You can visit the official website of NCERT and download the PDF files of the chapters or the entire book.
-
You can visit some other reliable websites that offer free PDF downloads of Class 12 Chemistry books from various publishers.
-
-
However, before downloading any PDF file, make sure that it is authentic, accurate, and updated. Also, check the file size and format before downloading it.
-
Benefits of Class 12 Chemistry book PDF
-
Downloading Class 12 Chemistry book PDF has many benefits over buying a hard copy of the book. Some of these benefits are:
-
Easy access and portability
-
You can access Class 12 Chemistry book PDF anytime and anywhere on your laptop, tablet, or smartphone. You don't have to carry a heavy book around with you or worry about losing or damaging it. You can also share it with your friends or classmates easily.
-
Saves time and money
-
You don't have to spend money on buying a new book or renting it from a library. You also don't have to waste time on searching for a book in a bookstore or waiting for it to be delivered. You can simply download Class 12 Chemistry book PDF for free from the internet and start studying right away.
-
NCERT class 12 chemistry textbook pdf free download 2020-21
-Download class 12 chemistry book pdf CBSE board 2020-21
-Class 12 chemistry book pdf download for NEET exam preparation 2020-21
-How to download class 12 chemistry book pdf online 2020-21
-Class 12 chemistry book pdf download latest edition 2020-21
-Class 12 chemistry book pdf download in Hindi medium 2020-21
-Class 12 chemistry book pdf download with solutions and answers 2020-21
-Class 12 chemistry book pdf download by Pradeep publication 2020-21
-Class 12 chemistry book pdf download by Nootan publication 2020-21
-Class 12 chemistry book pdf download by Arihant publication 2020-21
-Class 12 chemistry book pdf download by S Chand publication 2020-21
-Class 12 chemistry book pdf download by Balaji publication 2020-21
-Class 12 chemistry book pdf download by MTG publication 2020-21
-Class 12 chemistry book pdf download by Dinesh publication 2020-21
-Class 12 chemistry book pdf download by GRB publication 2020-21
-Class 12 chemistry book pdf download by OP Tandon publication 2020-21
-Class 12 chemistry book pdf download by JD Lee publication 2020-21
-Class 12 chemistry book pdf download by RC Mukherjee publication 2020-21
-Class 12 chemistry book pdf download by P Bahadur publication 2020-21
-Class 12 chemistry book pdf download by VK Jaiswal publication 2020-21
-Class 12 chemistry book pdf download by MS Chauhan publication 2020-21
-Class 12 chemistry book pdf download by Narendra Awasthi publication 2020-21
-Class 12 chemistry book pdf download by Himanshu Pandey publication 2020-21
-Class 12 chemistry book pdf download by SN Sanyal publication 2020-21
-Class 12 chemistry book pdf download by IL Finar publication 2020-21
-
Enhances learning and revision
-
You can use Class 12 Chemistry book PDF to enhance your learning and revision process. You can highlight important points, make notes, bookmark pages, zoom in or out, search for keywords, etc. You can also use online tools like dictionaries, calculators, converters, etc. to aid your learning. You can also print out specific pages or chapters if you want to study offline.
-
Features of Class 12 Chemistry book PDF
-
Class 12 Chemistry book PDF is not just a digital copy of a printed book. It has some unique features that make it more useful and effective for your exam preparation. Some of these features are:
-
Based on the latest CBSE syllabus and NCERT guidelines
-
Class 12 Chemistry book PDF is based on the latest CBSE syllabus and NCERT guidelines for the academic year 2020-21. It covers all the units and chapters that are prescribed by the board and follows the same sequence and structure. It also adheres to the marking scheme and question paper pattern of the board exams.
-
Covers all the topics and concepts in detail
-
Class 12 Chemistry book PDF covers all the topics and concepts in detail with clear explanations, examples, and illustrations. It helps you understand the theoretical and practical aspects of Chemistry and apply them in various situations. It also covers the latest developments and trends in the field of Chemistry and relates them to the syllabus.
-
Includes solved examples, exercises, diagrams, and tables
-
Class 12 Chemistry book PDF includes solved examples, exercises, diagrams, and tables to help you practice and reinforce your learning. The solved examples show you how to solve different types of problems step by step. The exercises test your knowledge and skills on various topics and concepts. The diagrams and tables help you visualize and summarize the information.
-
Best sources to download Class 12 Chemistry book PDF
-
There are many sources on the internet that offer free PDF downloads of Class 12 Chemistry books from various publishers. However, not all of them are reliable or updated. Therefore, you need to be careful while choosing a source to download Class 12 Chemistry book PDF. Here are some of the best sources that we recommend:
-
NCERT official website
-
The NCERT official website is the best source to download Class 12 Chemistry book PDF for free. It offers the original and authentic PDF files of the NCERT books that are prescribed by the CBSE board. You can download the entire book or individual chapters as per your convenience. You can also download other NCERT books, solutions, exemplars, etc. from this website.
-
Jagran Josh website
-
The Jagran Josh website is another good source to download Class 12 Chemistry book PDF for free. It offers the PDF files of Class 12 Chemistry books from various publishers like Pradeep, S Chand, Modern ABC, etc. You can choose the book that suits your needs and preferences. You can also download other study materials like sample papers, previous year papers, notes, etc. from this website.
-
Other reliable websites
-
There are some other reliable websites that offer free PDF downloads of Class 12 Chemistry books from various publishers. Some of them are:
-
-
Vedantu
-
BYJU'S
-
Tiwari Academy
-
Career Point
-
Etoos India
-
-
You can visit these websites and download Class 12 Chemistry book PDF as per your choice. However, make sure that you check the quality and accuracy of the PDF files before downloading them.
-
Conclusion
-
In conclusion, Class 12 Chemistry book PDF is a great resource for your exam preparation. It can help you study effectively, efficiently, and conveniently. It can also save you time and money and enhance your learning and revision process. However, you need to download Class 12 Chemistry book PDF from a reliable source that offers authentic, accurate, and updated PDF files. We hope that this article has helped you understand everything about Class 12 Chemistry book PDF download 2020-21.
-
FAQs
-
Here are some frequently asked questions about Class 12 Chemistry book PDF download 2020-21:
-
-
Is Class 12 Chemistry book PDF enough for board exams?
-
Class 12 Chemistry book PDF is enough for board exams if you study it thoroughly and practice it regularly. However, you should also refer to other sources like NCERT solutions, exemplars, sample papers, previous year papers, etc. to enhance your preparation.
-
How can I improve my marks in Class 12 Chemistry?
-
You can improve your marks in Class 12 Chemistry by following these tips:
-
-
Read the NCERT books carefully and understand the concepts clearly.
-
Solve the NCERT exercises and exemplars at the end of each chapter.
-
Revise the topics regularly and make
-
Revise the topics regularly and make notes of important points, formulas, reactions, etc.
-
Practice solving different types of questions from various sources like sample papers, previous year papers, mock tests, etc.
-
Clear your doubts and queries from your teachers, peers, or online platforms.
-
Focus on your weak areas and improve them.
-
-
Which is the best Class 12 Chemistry book PDF?
-
There is no definitive answer to this question as different Class 12 Chemistry books have different features, advantages, and disadvantages. However, some of the factors that you can consider while choosing a Class 12 Chemistry book PDF are:
-
-
The book should be based on the latest CBSE syllabus and NCERT guidelines.
-
The book should cover all the topics and concepts in detail and in a simple and lucid manner.
-
The book should include solved examples, exercises, diagrams, tables, etc. to help you practice and revise.
-
The book should be from a reputed publisher and author who have expertise and experience in the field of Chemistry.
-
-
Some of the popular Class 12 Chemistry books are:
-
-
NCERT Chemistry Textbook for Class 12
-
Pradeep's New Course Chemistry for Class 12
-
S Chand's Chemistry for Class 12
-
Modern ABC of Chemistry for Class 12
-
-
How can I download Class 12 Chemistry book PDF from NCERT website?
-
You can download Class 12 Chemistry book PDF from NCERT website by following these steps:
You will see two books: Part I and Part II. Click on the book that you want to download.
-
You will see the list of chapters in the book. You can either download the entire book or individual chapters as per your need.
-
Click on the "Download complete book" or "Download complete chapter" link as per your choice.
-
The PDF file will open in a new tab. You can save it on your device or print it out as per your convenience.
-
-
Is Class 12 Chemistry book PDF legal to download?
-
Class 12 Chemistry book PDF is legal to download if it is offered by the original publisher or author or by an authorized source. However, if it is offered by an unauthorized or pirated source, then it is illegal to download. Therefore, you should always check the source and authenticity of the PDF file before downloading it. You should also respect the intellectual property rights of the publisher and author and use the PDF file for personal and educational purposes only.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/How to Create Your Own Hangman Powerpoint Game with Custom Words.md b/spaces/congsaPfin/Manga-OCR/logs/How to Create Your Own Hangman Powerpoint Game with Custom Words.md
deleted file mode 100644
index 4ccf418f3d53e2752159cf5305bbbe023965505a..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/How to Create Your Own Hangman Powerpoint Game with Custom Words.md
+++ /dev/null
@@ -1,178 +0,0 @@
-
-
Hangman Powerpoint Game Free Download: How to Play and Where to Find It
-
Hangman is a classic word game that has been around for centuries. It is simple, fun, and challenging, and can be played by anyone who knows how to spell. In this article, you will learn how to play hangman on powerpoint, where to find free hangman powerpoint templates, and how to create your own hangman game from scratch. You will also discover some tips and tricks for making the game more enjoyable and educational.
-
What is Hangman and Why is it Fun?
-
Hangman is a word game where one player thinks of a word or phrase, and the other player tries to guess it by suggesting letters. The word or phrase is represented by a row of dashes, each representing a letter. If the guessing player suggests a letter that occurs in the word, the other player writes it in all its correct positions. If the guessing player suggests a letter that does not occur in the word, the other player draws one element of a hanged man stick figure as a tally mark. The guessing player has to guess the word before the hangman is completed.
Hangman is fun because it tests your vocabulary, spelling, and logic skills. It also stimulates your creativity and imagination, as you try to think of words that are hard to guess or guess words that are obscure or unusual. Hangman can be played with any language, theme, or topic, making it versatile and adaptable. You can play hangman with your friends, family, classmates, or colleagues, or even by yourself.
-
The Rules of Hangman
-
The rules of hangman are simple and easy to follow. Here are the basic steps:
-
-
One player thinks of a word or phrase and writes down the number of letters it has on a piece of paper or a board. For example, if the word is "hangman", the player writes "_ _ _ _ _ _ _".
-
The other player guesses a letter that they think might be in the word or phrase. For example, they might guess "A".
-
If the letter is in the word or phrase, the first player writes it in all its correct positions. For example, if the word is "hangman", the player writes "_ A _ _ _ A _".
-
If the letter is not in the word or phrase, the first player draws one element of a hangman stick figure on a separate piece of paper or board. The elements are usually drawn in this order: head, body, left arm, right arm, left leg, right leg.
-
The second player continues to guess letters until they either guess the word or phrase correctly, or the hangman is completed. If they guess the word or phrase correctly, they win. If they fail to guess the word or phrase before the hangman is completed, they lose.
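The rules above translate almost directly into code. Here is a minimal command-line sketch in Python for a single word (the six-part stick figure is tracked as a simple counter of wrong guesses):

```python
def play_hangman(secret, max_wrong=6):
    secret = secret.lower()
    guessed, wrong = set(), 0
    while wrong < max_wrong:
        display = " ".join(c if c in guessed else "_" for c in secret)
        if "_" not in display:
            print(f"You win! The word was '{secret}'.")
            return
        print(f"{display}   (wrong guesses: {wrong}/{max_wrong})")
        letter = input("Guess a letter: ").strip().lower()
        if not letter or letter in guessed:
            continue
        guessed.add(letter)
        if letter not in secret:
            wrong += 1  # one more element of the stick figure is drawn
    print(f"The hangman is complete. The word was '{secret}'.")

if __name__ == "__main__":
    play_hangman("hangman")
```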
-
-
The Benefits of Playing Hangman
-
Playing hangman can have many benefits for your brain and your mood. Here are some of them:
-
-
It improves your vocabulary and spelling skills. You can learn new words and their meanings, as well as how to spell them correctly.
-
It enhances your memory and concentration. You have to remember the letters you have already guessed and focus on finding the missing ones.
-
It develops your logical thinking and problem-solving abilities. You have to use clues and strategies to narrow down the possible words and eliminate the wrong ones.
-
It boosts your creativity and imagination. You can think of words that are related to a specific theme, topic, or category, or words that are uncommon or unusual.
-
It increases your confidence and self-esteem. You can feel proud of yourself when you guess a word correctly or stump your opponent with a difficult word.
-
It reduces your stress and anxiety. You can have fun and relax while playing hangman, as it distracts you from your worries and problems.
-
It strengthens your social and communication skills. You can play hangman with other people, either in person or online, and have a friendly and lively conversation with them.
-
-
How to Play Hangman on Powerpoint
-
Powerpoint is a popular presentation software that can also be used to create and play games, such as hangman. Playing hangman on powerpoint can be more convenient and fun than playing it on paper or board, as you can use animations, sounds, images, and other features to make the game more interactive and engaging. There are two ways to play hangman on powerpoint: download a ready-made template or create your own game from scratch.
-
Download a Ready-Made Template
-
One of the easiest ways to play hangman on powerpoint is to download a ready-made template that has all the elements and functions of the game already set up for you. All you have to do is open the template, choose a word or phrase, and start playing. There are many free hangman powerpoint templates available online that you can download and use for personal or educational purposes. Here are some of the websites where you can find them:
-
Where to Find Free Hangman Powerpoint Templates
-
-
PowerPoint Games: This website offers a variety of powerpoint games, including hangman, that you can download for free. The hangman template has 26 slides, each with a letter of the alphabet. When you click on a letter, it either reveals its position in the word or adds an element to the hangman figure. You can also customize the template by changing the background, font, color, sound, and word list.
-
Teachers Pay Teachers: This website is a marketplace where teachers can buy and sell educational resources, including powerpoint games. You can find several free hangman powerpoint templates here that are designed for different grade levels and subjects. Some of the templates have themes, such as animals, fruits, Halloween, or Christmas. You can also edit the templates to suit your needs and preferences.
-
Presentation Magazine: This website provides free powerpoint templates, backgrounds, tips, and tutorials for various purposes. It also has a section for powerpoint games, where you can download a free hangman template that has 10 slides. The template has a simple and clean design, with a white background and black letters. You can change the word or phrase by typing it in the notes section of each slide.
-
-
How to Customize Your Own Hangman Powerpoint Template
-
If you want to make your own hangman powerpoint template, you can use one of the free templates as a base and modify it according to your liking. Here are some of the steps you can follow to customize your own hangman powerpoint template:
-
-
Open the template in powerpoint and save it as a new file with a different name.
-
Change the background of the slides by right-clicking on them and selecting Format Background. You can choose a solid color, gradient fill, picture, or texture.
-
Change the font style, size, color, and alignment of the letters by selecting them and using the options in the Home tab.
-
Add sounds to the slides by clicking on Insert > Audio > Audio on My PC. You can choose sounds from your computer or online sources. You can also adjust the volume, start time, playback options, and animation effects of the sounds by using the options in the Audio Tools tab.
-
Add images to the slides by clicking on Insert > Pictures > Picture from File. You can choose images from your computer or online sources. You can also resize, crop, rotate, flip, align, group, and animate the images by using the options in the Picture Tools tab.
-
Add words or phrases to the slides by typing them in the notes section of each slide. You can also change the font style, size, color, and alignment of the words by selecting them and using the options in the Home tab.
-
Save your customized hangman powerpoint template as a new file with a different name.
-
Create Your Own Hangman Game from Scratch
-
If you want to create your own hangman game from scratch, you can use the basic features of powerpoint to set up the slides and animations. This way, you can have more control and flexibility over the design and functionality of your game. Here are some of the steps you can follow to create your own hangman game from scratch:
-
-
How to Set Up the Slides and Animations
-
-
Create a new blank presentation in powerpoint and save it as a new file with a name of your choice.
-
Insert a new slide by clicking on Home > New Slide > Blank. This will be your title slide, where you can write the name of your game and any other information you want to include.
-
Insert another new slide by clicking on Home > New Slide > Blank. This will be your game slide, where you will create the hangman figure and the word or phrase.
-
On the game slide, insert a text box by clicking on Insert > Text Box. Draw a text box on the top left corner of the slide and type in the number of letters in your word or phrase. For example, if your word is "hangman", type "_ _ _ _ _ _ _". You can change the font style, size, color, and alignment of the text by using the options in the Home tab.
-
On the game slide, insert another text box by clicking on Insert > Text Box. Draw a text box on the bottom left corner of the slide and type in "Guess a letter". You can change the font style, size, color, and alignment of the text by using the options in the Home tab.
-
On the game slide, insert a shape by clicking on Insert > Shapes > Line. Draw a horizontal line on the bottom right corner of the slide. This will be the base of your hangman figure. You can change the color, width, and style of the line by using the options in the Shape Format tab.
-
On the game slide, insert another shape by clicking on Insert > Shapes > Line. Draw a vertical line on the left end of the horizontal line. This will be the pole of your hangman figure. You can change the color, width, and style of the line by using the options in the Shape Format tab.
-
On the game slide, insert another shape by clicking on Insert > Shapes > Line. Draw a diagonal line on the top end of the vertical line. This will be the rope of your hangman figure. You can change the color, width, and style of the line by using the options in the Shape Format tab.
-
On the game slide, insert another shape by clicking on Insert > Shapes > Oval. Draw a small circle on the right end of the diagonal line. This will be the head of your hangman figure. You can change the color, fill, outline, and size of the circle by using the options in the Shape Format tab.
-
On the game slide, insert another shape by clicking on Insert > Shapes > Line. Draw a vertical line below the circle. This will be the body of your hangman figure. You can change the color, width, and style of the line by using the options in the Shape Format tab.
-
On the game slide, insert another shape by clicking on Insert > Shapes > Line. Draw a diagonal line from the middle of the vertical line to the left. This will be the left arm of your hangman figure. You can change the color, width, and style of the line by using the options in the Shape Format tab.
-
On the game slide, insert another shape by clicking on Insert > Shapes > Line. Draw a diagonal line from the middle of the vertical line to the right. This will be the right arm of your hangman figure. You can change the color, width, and style of the line by using the options in the Shape Format tab.
-
On the game slide, insert another shape by clicking on Insert > Shapes > Line. Draw a diagonal line from the bottom end of the vertical line to the left. This will be the left leg of your hangman figure. You can change the color, width, and style of the line by using the options in the Shape Format tab.
-
On the game slide, insert another shape by clicking on Insert > Shapes > Line. Draw a diagonal line from the bottom end of the vertical line to the right. This will be the right leg of your hangman figure. You can change the color, width, and style of the line by using the options in the Shape Format tab.
-
-
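If you prefer, the same gallows-and-figure layout can be drawn programmatically instead of shape by shape. The sketch below again assumes python-pptx; the positions in inches are illustrative guesses, not measurements from any particular design.

```python
# Sketch: add a blank slide and draw the base, pole, rope and head of the figure.
from pptx import Presentation
from pptx.util import Inches
from pptx.enum.shapes import MSO_SHAPE, MSO_CONNECTOR

prs = Presentation()
slide = prs.slides.add_slide(prs.slide_layouts[6])  # layout 6 is the blank layout
shapes = slide.shapes

# Base, pole and rope are straight connectors (the "lines" from the steps above).
shapes.add_connector(MSO_CONNECTOR.STRAIGHT, Inches(6.0), Inches(6.5), Inches(9.0), Inches(6.5))  # base
shapes.add_connector(MSO_CONNECTOR.STRAIGHT, Inches(6.5), Inches(2.0), Inches(6.5), Inches(6.5))  # pole
shapes.add_connector(MSO_CONNECTOR.STRAIGHT, Inches(6.5), Inches(2.0), Inches(7.5), Inches(2.5))  # rope

# The head is an oval; the body, arms and legs are added the same way.
shapes.add_shape(MSO_SHAPE.OVAL, Inches(7.25), Inches(2.5), Inches(0.5), Inches(0.5))

prs.save("hangman_from_scratch.pptx")
```

Note that python-pptx has no animation support, so the click-triggered Appear effects described next still need to be set up inside PowerPoint.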
Now you have created the hangman figure and the word or phrase on the game slide. The next step is to add animations to the elements so that they appear or disappear when you click on them. Here are some of the steps you can follow to add animations to the elements:
-
-
Select the circle that represents the head of the hangman figure. Click on Animations > Add Animation > Appear. This will make the circle appear when you click on the slide.
-
Select the vertical line that represents the body of the hangman figure. Click on Animations > Add Animation > Appear. This will make the line appear when you click on the slide.
-
Select the diagonal line that represents the left arm of the hangman figure. Click on Animations > Add Animation > Appear. This will make the line appear when you click on the slide.
-
Select the diagonal line that represents the right arm of the hangman figure. Click on Animations > Add Animation > Appear. This will make the line appear when you click on the slide.
-
Select the diagonal line that represents the left leg of the hangman figure. Click on Animations > Add Animation > Appear. This will make the line appear when you click on the slide.
-
Select the diagonal line that represents the right leg of the hangman figure. Click on Animations > Add Animation > Appear. This will make the line appear when you click on the slide.
-
Select all the letters in your word or phrase. Click on Animations > Add Animation > Wipe. This will make the letters appear from left to right when you click on them.
-
Click on Animations > Animation Pane to open a window that shows all the animations you have added. You can change the order, timing, duration, and trigger of each animation by using the options in this window.
-
-
Now you have added animations to all the elements on your game slide. The final step is to test your game and make sure it works as intended. Here are some of the steps you can follow to test your game (a plain-Python sketch of the same win/lose logic follows the list):
-
-
Click on Slide Show > From Current Slide to start your game from your game slide.
-
Click on a letter that is in your word or phrase. The letter should appear in its correct position and a sound should play.
-
Click on a letter that is not in your word or phrase. An element of the hangman figure should appear and a sound should play.
-
Continue to click on letters until you either guess your word or phrase correctly or complete your hangman figure.
-
If you guess your word or phrase correctly, a message should appear saying "You win!" and a sound should play.
-
If you complete your hangman figure before guessing your word or phrase, a message should appear saying "You lose!" and a sound should play.
-
-
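Before wiring the logic into slides, it can help to check the win/lose bookkeeping in plain Python. This is only an illustrative sketch of the rules described above (six wrong guesses, one per body part), not anything embedded in the presentation.

```python
# Sketch of the hangman rules used above: 6 wrong guesses (head, body,
# two arms, two legs) before the player loses.
secret = "hangman"
guessed = set()
wrong_left = 6

while wrong_left > 0:
    shown = " ".join(c if c in guessed else "_" for c in secret)
    print(shown)
    if "_" not in shown:
        print("You win!")
        break
    letter = input("Guess a letter: ").lower()
    if letter in guessed:
        continue
    guessed.add(letter)
    if letter not in secret:
        wrong_left -= 1  # one more piece of the hangman figure would appear
        print(f"Wrong! {wrong_left} guesses left.")
else:
    print(f"You lose! The word was {secret}.")
```

The `else` branch of the `while` loop runs only when the guesses run out, which mirrors the "You lose!" slide in the game.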
Tips and Tricks for Playing Hangman on PowerPoint
-
Playing hangman on powerpoint can be a lot of fun and learning, but it can also be challenging and frustrating at times. To make your game more enjoyable and educational, here are some tips and tricks you can use:
-
How to Make the Game More Challenging
-
If you want to make your game more difficult for yourself or your opponent, here are some things you can do (a small word-picking sketch follows the list):
-
-
Choose words or phrases that are long, uncommon, or have many repeated letters.
-
Choose words or phrases that belong to a specific category, such as animals, countries, movies, or sports.
-
Choose words or phrases that have homophones, such as "there", "their", and "they're".
-
Choose words or phrases that have silent letters, such as "knife", "knee", or "know".
-
Choose words or phrases that have contractions, such as "don't", "can't", or "won't".
-
-
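A quick way to act on these tips is to keep candidate words in categories and let a short script pick one for you. A minimal sketch (the word lists here are just examples, not from any template):

```python
# Sketch: pick a challenging secret word - long, and from a random category.
import random

words = {
    "animals":   ["hippopotamus", "chimpanzee", "porcupine"],
    "countries": ["kyrgyzstan", "liechtenstein", "madagascar"],
    "movies":    ["casablanca", "inception", "psycho"],
}

category = random.choice(list(words))
long_words = [w for w in words[category] if len(w) >= 9]  # prefer long words
secret = random.choice(long_words or words[category])
print(f"Category: {category} - your secret word is '{secret}'")
```

You can then type the chosen word into the notes section of the game slide as described earlier.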
How to Make the Game More Educational
-
If you want to make your game more informative and useful for yourself or your opponent, here are some things you can do:
-
-
Choose words or phrases that are related to a subject or topic that you want to learn more about, such as history, science, art, or literature.
-
Choose words or phrases that are in a different language than your native one, such as Spanish, French, or German.
-
Choose words or phrases that have synonyms, antonyms, or definitions, and explain them after the game.
-
Choose words or phrases that have spelling rules, such as "i before e except after c", and review them after the game.
-
Choose words or phrases that have prefixes, suffixes, or roots, and analyze them after the game.
-
-
How to Make the Game More Fun and Interactive
-
If you want to make your game more enjoyable and engaging for yourself or your opponent, here are some things you can do:
-
-
Add images, sounds, music, or videos to your slides to make them more appealing and attractive.
-
Add humor, jokes, puns, or riddles to your words or phrases to make them more amusing and witty.
-
Add feedback, praise, encouragement, or hints to your slides to make them more supportive and helpful.
-
Add challenges, rewards, penalties, or surprises to your game to make it more exciting and unpredictable.
-
Play with different settings, modes, levels, or variations of the game to make it more diverse and adaptable.
-
-
Conclusion
-
Hangman is a fun and educational word game that can be played on powerpoint. You can download a ready-made template or create your own game from scratch. You can also customize your game by changing the background, font, color, sound, image, word list, and animation of your slides. You can also make your game more challenging, informative, and enjoyable by choosing different words or phrases, categories, languages, rules, and features. Hangman is a great way to improve your vocabulary, spelling, logic, memory, concentration, creativity, imagination, confidence, self-esteem, stress relief, social skills, and communication skills. So what are you waiting for? Download or create your own hangman powerpoint game today and have fun playing with your friends!
-
FAQs
-
Here are some of the frequently asked questions about hangman powerpoint game:
-
-
Q: How many letters can I use in my word or phrase? A: You can use as many letters as you want in your word or phrase. However, it is recommended to use between 5 and 15 letters for optimal gameplay and difficulty.
-
Q: How many guesses do I have before I lose the game? A: You have as many guesses as the number of elements in your hangman figure. Usually, this is 6 guesses: head, body, left arm, right arm, left leg, and right leg. However, you can change this number by adding or removing elements from your hangman figure.
-
Q: How can I play hangman on powerpoint with multiple players? A: You can play hangman on powerpoint with multiple players by taking turns guessing letters or words. You can also divide the players into teams and compete against each other. Alternatively, you can use an online platform such as Kahoot, Quizizz, or Mentimeter to create and play hangman games with multiple players online.
-
Q: How can I play hangman on powerpoint without a computer? A: You can play hangman on powerpoint without a computer by printing out your slides and using them as cards. You can also use a projector, a smart board, a tablet, or a smartphone to display your slides on a screen.
-
Q: How can I make my own hangman PowerPoint template? A: You can make your own hangman PowerPoint template by following the steps in this article. You can also look for video tutorials online for more guidance.
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/How to Play Los Angeles Crimes on Android - APK Pure Guide.md b/spaces/congsaPfin/Manga-OCR/logs/How to Play Los Angeles Crimes on Android - APK Pure Guide.md
deleted file mode 100644
index fcc5b80d2947fec45c180738fce15911fa8a55eb..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/How to Play Los Angeles Crimes on Android - APK Pure Guide.md
+++ /dev/null
@@ -1,127 +0,0 @@
-
-
Los Angeles Crimes APK Pure: A Realistic and Action-Packed Game for Android
-
If you are looking for a game that will give you a taste of the life of a criminal in the city of angels, then you should try Los Angeles Crimes APK Pure. This is a game that will let you explore, fight, and survive in a realistic and open-world environment. You can download Los Angeles Crimes APK Pure for free on your android device and enjoy its amazing features. In this article, we will tell you everything you need to know about this game, how to play it, and why you should play it.
Los Angeles Crimes APK Pure is a modified version of the original Los Angeles Crimes game, which is also known as GTA V Android. This is a game that simulates the life of a criminal in Los Angeles, where you can do whatever you want, such as stealing cars, shooting people, robbing banks, and more. You can also customize your character, choose your weapons, and interact with other players online.
-
Some of the features of Los Angeles Crimes APK Pure are:
-
-
It has unlimited ammo, which means you can fire as much as you want without running out of bullets.
-
It has no ads, which means you can play without any interruptions or distractions.
-
It has improved graphics, which means you can enjoy a more realistic and detailed view of the city.
-
It has faster loading times, which means you can start playing sooner and smoother.
-
-
How to download and install Los Angeles Crimes APK Pure on your device
-
To download and install Los Angeles Crimes APK Pure on your device, you need to follow these simple steps:
-
-
Go to [FileHippo] and click on the download button.
-
Wait for the file to be downloaded on your device.
-
Go to your file manager and locate the downloaded file.
-
Tap on the file and allow unknown sources if prompted.
-
Follow the instructions on the screen and install the game.
-
Launch the game and enjoy!
-
-
The benefits of using Los Angeles Crimes APK Pure over other versions
-
There are many benefits of using Los Angeles Crimes APK Pure over other versions of the game, such as:
-
-
-
You can save storage space on your device, as Los Angeles Crimes APK Pure is only 200 MB in size, while other versions are over 1 GB.
-
You can play offline, as Los Angeles Crimes APK Pure does not require an internet connection to run, while other versions do.
-
You can avoid viruses and malware, as Los Angeles Crimes APK Pure is safe and secure to use, while other versions may contain harmful files or links.
-
You can get updates faster, as Los Angeles Crimes APK Pure is regularly updated with new features and bug fixes, while other versions may be outdated or abandoned.
-
-
How to play Los Angeles Crimes APK Pure
-
The game modes and maps available in Los Angeles Crimes APK Pure
-
Los Angeles Crimes APK Pure offers five different game modes that you can choose from:
-
Free Mode: This is the mode where you can roam around the city and do whatever you want, such as driving, shooting, fighting, and more. You can also join or create online servers and play with other players.
-
Team Deathmatch: This is the mode where you can join a team and compete with another team in a battle to the death. You can choose from different weapons and vehicles and try to eliminate as many enemies as possible.
-
Zombie Mode: This is the mode where you have to survive a zombie apocalypse in the city. You can use any weapon or vehicle you find and try to stay alive as long as possible.
-
Parkour Mode: This is the mode where you have to perform various stunts and tricks on the rooftops and streets of the city. You can use your skills and agility to jump, slide, roll, and more.
-
Soccer Mode: This is the mode where you can play soccer with other players in a stadium. You can use your feet, hands, or weapons to kick the ball and score goals.
-
-
Los Angeles Crimes APK Pure also offers six different maps that you can explore:
-
-
Los Angeles: This is the main map of the game, where you can see the iconic landmarks and locations of the city, such as Hollywood, Downtown, Beverly Hills, and more.
-
Desert: This is the map where you can experience the dry and sandy terrain of the desert, where you can find cacti, rocks, and abandoned buildings.
-
Snow: This is the map where you can enjoy the snowy and icy landscape of the mountains, where you can find trees, cabins, and ski slopes.
-
Island: This is the map where you can relax on the tropical and sunny island, where you can find palm trees, beaches, and boats.
-
Airport: This is the map where you can visit the busy and crowded airport, where you can find planes, helicopters, and luggage carts.
-
Prison: This is the map where you can escape from the dark and gloomy prison, where you can find cells, guards, and barbed wires.
-
-
The controls and settings of Los Angeles Crimes APK Pure
-
The controls of Los Angeles Crimes APK Pure are simple and intuitive. You can use the virtual joystick on the left side of the screen to move your character, and the buttons on the right side of the screen to perform actions such as shooting, jumping, crouching, aiming, reloading, changing weapons, entering vehicles, etc. You can also use gestures such as swiping or tapping on the screen to interact with objects or other players.
-
The settings of Los Angeles Crimes APK Pure are customizable and flexible. You can adjust various options such as graphics quality, sound volume, language, sensitivity, camera angle, etc. You can also enable or disable features such as auto-aiming, ragdoll physics, blood effects, etc. You can access the settings menu by tapping on the gear icon on the top right corner of the screen.
-
The tips and tricks to master Los Angeles Crimes APK Pure
-
If you want to master Los Angeles Crimes APK Pure and become a pro player, here are some tips and tricks that you should know:
-
-
Always be aware of your surroundings and watch out for enemies or dangers. Use cover or stealth when necessary.
-
Use different weapons and vehicles according to your situation and preference. Experiment with different combinations and strategies.
-
Collect ammo, health kits, armor vests, money bags, etc. whenever you see them. They will help you survive longer and buy more items.
-
Use online chat or voice chat to communicate with other players. You can make friends or enemies depending on your choice of words.
-
Have fun and enjoy the game. Don't take it too seriously or get frustrated if you lose or die. It's just a game after all.
-
-
Why you should play Los Angeles Crimes APK Pure
-
The graphics and sound quality of Los Angeles Crimes APK Pure
-
One of the reasons why you should play Los Angeles Crimes APK Pure is because of its graphics and sound quality. The game has stunning 3D graphics that will make you feel like you are in a real city. The game also has realistic sound effects that will enhance your immersion. You will hear gunshots, explosions, car engines, sirens, screams, etc. The game also has a dynamic weather system that will change according to time and location. You will see raindrops, snowflakes, sun rays, etc.
-
The realism and immersion of Los Angeles Crimes APK Pure
-
Another reason why you should play Los Angeles Crimes APK Pure is because of its realism and immersion. The game has a physics engine that will make you feel the impact of your actions. You will see bodies flying, cars crashing, buildings collapsing, etc. The game also has a ragdoll system that will make you laugh or scream at the hilarious or gruesome outcomes. You will see limbs twisting, heads rolling, blood splattering, etc. The game also has a damage system that will affect your performance and appearance. You will see bullet holes, bruises, scars, etc.
-
The fun and excitement of Los Angeles Crimes APK Pure
-
The final reason why you should play Los Angeles Crimes APK Pure is because of its fun and excitement. The game has a lot of content and variety that will keep you entertained for hours. You can play different game modes, explore different maps, use different weapons and vehicles, etc. You can also play online with other players and have a blast. You can team up or compete with them, chat or voice chat with them, make friends or enemies with them, etc. You can also create your own servers and invite your friends to join you. You can also customize your character and show off your style.
-
Conclusion
-
Los Angeles Crimes APK Pure is a game that you should not miss if you are a fan of action and adventure games. It is a game that will give you a realistic and action-packed experience of being a criminal in Los Angeles. You can download Los Angeles Crimes APK Pure for free on your android device and enjoy its amazing features. You can also learn how to play it and why you should play it from this article. So what are you waiting for? Download Los Angeles Crimes APK Pure now and have fun!
-
FAQs
-
Here are some frequently asked questions about Los Angeles Crimes APK Pure:
-
-
Q: Is Los Angeles Crimes APK Pure safe to use?
-
A: Yes, Los Angeles Crimes APK Pure is safe to use as long as you download it from a trusted source such as [FileHippo]. It does not contain any viruses or malware that can harm your device or data.
-
Q: Is Los Angeles Crimes APK Pure compatible with my device?
-
A: Los Angeles Crimes APK Pure is compatible with most android devices that have at least 1 GB of RAM and 200 MB of free storage space. However, some devices may experience lag or crashes due to their low specifications.
-
Q: How can I update Los Angeles Crimes APK Pure?
-
A: You can update Los Angeles Crimes APK Pure by visiting [FileHippo] and downloading the latest version of the game. You can also check for updates within the game by tapping on the update icon on the top left corner of the screen.
-
Q: How can I contact the developers of Los Angeles Crimes APK Pure?
-
A: You can contact the developers of Los Angeles Crimes APK Pure by visiting their official website at [LosAngelesCrimes.com] or their social media pages at [Facebook] or [Twitter]. You can also send them an email at [LosAngelesCrimes@gmail.com].
-
Q: How can I support the developers of Los Angeles Crimes APK Pure?
-
A: You can support the developers of Los Angeles Crimes APK Pure by rating and reviewing the game on [FileHippo] or other platforms. You can also share the game with your friends and family and invite them to play with you online.
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Kamen Rider ZI-O Flash Belt APK Travel Through Time and Space with Your Favorite Riders.md b/spaces/congsaPfin/Manga-OCR/logs/Kamen Rider ZI-O Flash Belt APK Travel Through Time and Space with Your Favorite Riders.md
deleted file mode 100644
index 5a7ad577fc85ba9c76b2086be26bbf7d234d36a3..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Kamen Rider ZI-O Flash Belt APK Travel Through Time and Space with Your Favorite Riders.md
+++ /dev/null
@@ -1,113 +0,0 @@
-
-
Kamen Rider ZI-O Flash Belt APK Download: How to Transform into a Time-Travelling Superhero
-
Do you love watching Kamen Rider, the Japanese tokusatsu series about masked heroes who fight evil using special devices and powers? Do you wish you could become one of them and experience the thrill of transforming and battling? If yes, then you are in luck, because there is an app that lets you do just that. It is called Kamen Rider ZI-O Flash Belt APK, and it is a fan-made simulation of the flash belt used by the main character of Kamen Rider ZI-O, the 20th and final series of the Heisei era.
-
Kamen Rider ZI-O is a story about a young man named Sougo Tokiwa, who dreams of becoming a king. He is visited by a mysterious girl named Tsukuyomi, who tells him that he is destined to become the demonic king of time, Ohma ZI-O, who will rule over all of history in the year 2068. She gives him a device called the Ziku-Driver, which allows him to transform into Kamen Rider ZI-O by using special items called Ridewatches, which contain the powers of past Kamen Riders. Sougo decides to use his new abilities to change his fate and protect the timeline from the Time Jackers, a group of villains who want to alter history for their own purposes.
Kamen Rider ZI-O Flash Belt APK is an app that recreates the Ziku-Driver and the Ridewatches on your smartphone. You can use it to transform into different forms of Kamen Rider ZI-O, as well as other Kamen Riders from previous series. You can also use various weapons and perform finishers with sound effects and animations. It is a fun and interactive way to immerse yourself in the world of Kamen Rider and unleash your inner hero.
-
What is Kamen Rider ZI-O Flash Belt?
-
Kamen Rider ZI-O Flash Belt is an unofficial app that simulates the flash belt used by Sougo Tokiwa, aka Kamen Rider ZI-O, in the TV show of the same name. It is developed by CometComics, a fan of Kamen Rider who has created several flash belts for other series as well. The app is not affiliated with Toei Company, the producer of Kamen Rider, or Bandai, the manufacturer of the official toys.
-
The app is designed to mimic the appearance and functionality of the real flash belt as closely as possible. You can select from various drivers, ridewatches, and weapons that appear in the show, and use them to transform and fight. The app also features realistic sound effects and voice clips from the show, as well as animations and graphics that match the style of the show. The app is updated regularly with new content based on the latest episodes and movies.
-
Features of Kamen Rider ZI-O Flash Belt
-
Kamen Rider ZI-O Flash Belt has many features that make it an enjoyable and authentic experience for fans of Kamen Rider. Some of the features are:
Ridewatches
-
Ridewatches are the main items that Kamen Rider ZI-O uses to transform and access the powers of past Kamen Riders. They are shaped like digital watches and have the face and name of a Kamen Rider on them. They can be inserted into the Ziku-Driver or other devices to activate different modes and abilities.
-
The app has over 100 ridewatches that you can choose from, including the ones used by Kamen Rider ZI-O and his allies, as well as the ones used by the Time Jackers and their minions. You can also create your own custom ridewatches by selecting a base color, a face image, and a name. You can save your custom ridewatches and use them in the app.
-
Drivers
-
Drivers are the devices that Kamen Rider ZI-O and other characters use to transform into Kamen Riders. They are usually worn around the waist or on the arm, and have slots for ridewatches or other items. They also have buttons, levers, or dials that trigger different functions and sounds.
-
The app has several drivers that you can use, such as the Ziku-Driver, the Beyondriver, the Miraidriver, and the Ohma Driver. Each driver has its own features and modes, such as Armor Time, Future Time, Another Time, and Ohma Time. You can switch between drivers by tapping on them on the screen.
-
Weapons
-
Weapons are the tools that Kamen Rider ZI-O and other characters use to fight their enemies. They are usually based on the theme or motif of a Kamen Rider or a historical figure. They can be used in conjunction with ridewatches or other items to enhance their power or perform finishers.
-
The app has many weapons that you can use, such as the Zikan Girade, the Zikan Zax, the Saikyo Girade, and the Ohma Zi-O Ridewatch. Each weapon has its own sound effects and animations, as well as special attacks that you can activate by swiping or tapping on the screen.
-
How to download and install Kamen Rider ZI-O Flash Belt APK?
-
If you want to download and install Kamen Rider ZI-O Flash Belt APK on your Android device, you need to follow these steps:
-
Step 1: Find a reliable source
-
Since Kamen Rider ZI-O Flash Belt APK is not available on Google Play Store or any official app store, you need to find a trustworthy website that offers it for download. You can search for it on Google or use a link provided by a friend or a fan community. However, be careful of fake or malicious websites that may harm your device or steal your data. Always check the reviews and ratings of the website before downloading anything from it.
-
Step 2: Download the APK file
-
Once you find a reliable source, you need to download the APK file of Kamen Rider ZI-O Flash Belt APK on your device. The APK file is a package that contains all the necessary files and data for installing and running an app. To download it, you need to tap on the download button or link on the website and wait for it to finish. The file size may vary depending on the version and content of the app.
-
Step 3: Enable unknown sources
-
Before you can install Kamen Rider ZI-O Flash Belt APK on your device, you need to enable unknown sources in your settings. This is because Android devices normally do not allow installing apps from sources other than Google Play Store or other official app stores. To enable unknown sources, you need to go to Settings > Security > Unknown Sources and toggle it on. This will allow you to install apps from any source.
-
Step 4: Install the APK file
-
After enabling unknown sources, you can install Kamen Rider ZI-O Flash Belt APK on your device. To do this, you need to locate the APK file in your downloads folder or wherever you saved it. Then, you need to tap on it and follow the instructions on the screen. The installation process may take a few seconds or minutes depending on your device and internet speed.
-
Step 5: Launch the app and enjoy
-
Once the installation is complete, you can launch Kamen Rider ZI-O Flash Belt APK on your device. You will see an icon of the app on your home screen or app drawer. Tap on it and start using it to transform into a time-travelling superhero.
How to use Kamen Rider ZI-O Flash Belt APK?
-
Using Kamen Rider ZI-O Flash Belt APK is very easy and fun. You just need to follow these steps:
-
Select a driver and a ridewatch
-
The first thing you need to do is to select a driver and a ridewatch that you want to use. You can do this by tapping on the icons on the bottom of the screen. You will see a list of available drivers and ridewatches that you can scroll through and select. You can also use the search bar to find a specific driver or ridewatch by typing its name.
-
Scan the ridewatch and press the button
-
After selecting a driver and a ridewatch, you need to scan the ridewatch and press the button on the driver. You can do this by dragging the ridewatch icon to the slot on the driver icon and releasing it. You will hear a sound effect and see an animation of the ridewatch being scanned. Then, you need to tap on the button icon on the driver to activate it. You will hear another sound effect and see an animation of the driver being activated.
-
Perform the henshin pose and sound effects
-
The final step is to perform the henshin pose and sound effects. Henshin is the Japanese word for transformation, and it is what Kamen Riders say when they transform. You can do this by holding your device in front of you and mimicking the pose of the Kamen Rider you want to transform into. You will hear a voice clip from the show saying "Henshin!" and see an animation of the transformation sequence. You can also make your own sound effects by saying "Henshin!" or anything else you like.
-
Congratulations, you have successfully transformed into a Kamen Rider using Kamen Rider ZI-O Flash Belt APK. You can now enjoy playing as your favorite hero and fighting evil with your awesome powers.
-
Alternatives to Kamen Rider ZI-O Flash Belt APK
-
If you like Kamen Rider ZI-O Flash Belt APK, you might also like some other flash belt apps that are based on other Kamen Rider series. Here are some of them:
-
Kamen Rider Build Flash Belt APK
-
Kamen Rider Build Flash Belt APK is an app that simulates the flash belt used by Sento Kiryu, aka Kamen Rider Build, in Kamen Rider Build, the 19th series of the Heisei era. It is developed by CometComics as well. The app allows you to transform into different forms of Kamen Rider Build by using special items called Fullbottles, which contain the essence of various substances and animals. You can also use different weapons and perform finishers with sound effects and animations.
-
Kamen Rider Ex-Aid Flash Belt APK
-
Kamen Rider Ex-Aid Flash Belt APK is an app that simulates the flash belt used by Emu Hojo, aka Kamen Rider Ex-Aid, in Kamen Rider Ex-Aid, the 18th series of the Heisei era. It is developed by CometComics as well. The app allows you to transform into different forms of Kamen Rider Ex-Aid by using special items called Gashats, which are based on video games. You can also use different weapons and perform finishers with sound effects and animations.
-
Kamen Rider Zero-One Flash Belt APK
-
Kamen Rider Zero-One Flash Belt APK is an app that simulates the flash belt used by Aruto Hiden, aka Kamen Rider Zero-One, in Kamen Rider Zero-One, the first series of the Reiwa era. It is developed by CometComics as well. The app allows you to transform into different forms of Kamen Rider Zero-One by using special items called Progrise Keys, which are based on animals and technology. You can also use different weapons and perform finishers with sound effects and animations.
-
Conclusion
-
Kamen Rider ZI-O Flash Belt APK is an amazing app that lets you transform into a time-travelling superhero using your smartphone. It is a fan-made simulation of the flash belt used by Sougo Tokiwa, aka Kamen Rider ZI-O, in the TV show of the same name. It has many features that make it an enjoyable and authentic experience for fans of Kamen Rider, such as ridewatches, drivers, weapons, sound effects, voice clips, animations, and graphics. It is easy to download, install, and use, and it is updated regularly with new content based on the latest episodes and movies.
-
If you love watching Kamen Rider and want to become one of them and experience the thrill of transforming and battling, then you should definitely try Kamen Rider ZI-O Flash Belt APK. It is a fun and interactive way to immerse yourself in the world of Kamen Rider and unleash your inner hero.
-
Here are some frequently asked questions about Kamen Rider ZI-O Flash Belt APK:
-
FAQs
-
Q: Is Kamen Rider ZI-O Flash Belt APK safe to use?
-
A: Yes, Kamen Rider ZI-O Flash Belt APK is safe to use as long as you download it from a reliable source and enable unknown sources in your settings. However, you should always be careful of fake or malicious websites that may harm your device or steal your data. Always check the reviews and ratings of the website before downloading anything from it.
-
Q: Is Kamen Rider ZI-O Flash Belt APK free to use?
-
A: Yes, Kamen Rider ZI-O Flash Belt APK is free to use and does not require any registration or subscription. However, you may see some ads or pop-ups on the app or the website that you download it from. You can support the developer by donating or sharing the app with your friends.
-
Q: Is Kamen Rider ZI-O Flash Belt APK compatible with my device?
-
A: Kamen Rider ZI-O Flash Belt APK is compatible with most Android devices that run on Android 4.4 or higher. However, some features or content may not work properly on some devices or versions. You can check the compatibility of your device by reading the description or the comments on the website that you download it from.
-
Q: How can I update Kamen Rider ZI-O Flash Belt APK?
-
A: Kamen Rider ZI-O Flash Belt APK is updated regularly with new content based on the latest episodes and movies. You can check for updates by visiting the website that you download it from or by following the developer on social media. You can also enable automatic updates in your settings if available.
-
Q: How can I contact the developer of Kamen Rider ZI-O Flash Belt APK?
-
A: You can contact the developer of Kamen Rider ZI-O Flash Belt APK by visiting their website or their social media accounts. You can also leave a comment or a review on the app or the website that you download it from. The developer is very responsive and appreciates feedback and suggestions from users.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Test Your Knowledge with Quiz of Kings Helper APK The Online Trivia Game with Chat and Groups.md b/spaces/congsaPfin/Manga-OCR/logs/Test Your Knowledge with Quiz of Kings Helper APK The Online Trivia Game with Chat and Groups.md
deleted file mode 100644
index f4772ef8ed2eaef93fa2a898701793e0ce20ddc3..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Test Your Knowledge with Quiz of Kings Helper APK The Online Trivia Game with Chat and Groups.md
+++ /dev/null
@@ -1,83 +0,0 @@
-
-
Quiz of Kings Helper APK: A Guide to Download and Play the Popular Trivia Game
-
If you are looking for a fun and engaging way to test your general knowledge, make new friends, and compete with others, you might want to try Quiz of Kings. Quiz of Kings is a popular trivia game designed for Persian speakers, with millions of players around the world. But what if you can't access the game from the Google Play Store, or you want to enjoy some extra features that are not available in the official version? In that case, you might be interested in Quiz of Kings Helper APK, a modified version of the game that you can download and install on your Android device. In this article, we will tell you everything you need to know about Quiz of Kings Helper APK, including what it is, how to download it, how to play it, and some tips and tricks to improve your performance.
-
What is Quiz of Kings?
-
Quiz of Kings is an online trivia game that challenges your knowledge on various topics, such as sports, entertainment, religion, cinema, music, math, football, and more. The game has over 1 million text and image questions that are updated regularly, so you will never run out of new things to learn. But Quiz of Kings is not just a trivia game; it is also a social and interactive platform where you can make friends, chat with other players, join or create groups, and compete with other teams. You can play Quiz of Kings in different modes, such as solo, duo, group, record, or daily quiz. You can also earn coins and gems by answering questions correctly, which you can use to buy hints, lifelines, avatars, or gifts. Quiz of Kings is a fun and addictive game that will keep you entertained for hours.
Quiz of Kings Helper APK is a modified version of the original game that has some extra features that are not available in the official version. For example, Quiz of Kings Helper APK allows you to see the correct answer before choosing your option, which can help you win more games. It also gives you unlimited coins and gems, which you can use to buy anything you want in the game. Moreover, Quiz of Kings Helper APK lets you access the game without using the Google Play Store, which can be useful if you live in a country where the game is not available or if you have problems with your Google account. However, Quiz of Kings Helper APK also has some drawbacks that you should be aware of. For instance, Quiz of Kings Helper APK is not authorized by the developers of the original game, which means that it may violate their terms and conditions. It may also contain malware or viruses that can harm your device or steal your personal information. Therefore, you should be careful when downloading and installing Quiz of Kings Helper APK from unknown sources.
-
How to Download and Install Quiz of Kings Helper APK?
-
If you want to try Quiz of Kings Helper APK on your Android device, here are the steps that you need to follow:
Find a reliable source for the APK file
One of the most important steps to download and install Quiz of Kings Helper APK is to find a trustworthy source for the APK file. You can search online for websites that offer Quiz of Kings Helper APK, but you should be careful and check the reviews and ratings of the site before downloading anything. You should also scan the APK file with an antivirus software before opening it. Some of the websites that claim to provide Quiz of Kings Helper APK are:
- -
-
[APKPure]
-
[APKCombo]
-
[APKHome]
-
Enable unknown sources on your device settings
Another important step to download and install Quiz of Kings Helper APK is to enable unknown sources on your device settings. This will allow you to install apps that are not from the Google Play Store. To do this, you need to go to your device settings, then security, then unknown sources, and toggle it on. You may also need to confirm your choice by tapping OK or Allow. You can always disable this option later if you want to.
Follow the installation instructions and launch the game
The final step to download and install Quiz of Kings Helper APK is to follow the installation instructions and launch the game. To do this, you need to locate the APK file that you downloaded on your device, then tap on it to start the installation process. You may need to grant some permissions to the app, such as access to your storage, contacts, or camera. After the installation is complete, you can open the game and enjoy playing Quiz of Kings Helper APK.
-
How to Play Quiz of Kings Helper APK?
-
Playing Quiz of Kings Helper APK is similar to playing the original game, but with some extra features that can make it easier or more fun. Here are some of the basic steps that you need to follow to play Quiz of Kings Helper APK:
Create an account or log in with your existing one
The first thing that you need to do to play Quiz of Kings Helper APK is to create an account or log in with your existing one. You can use your phone number, email address, or Facebook account to sign up or log in. You can also choose a username, a password, and an avatar for your profile. You can also edit your profile later if you want to change anything.
-
Choose a mode, a topic, and an opponent
The next thing that you need to do to play Quiz of Kings Helper APK is to choose a mode, a topic, and an opponent. You can play Quiz of Kings Helper APK in different modes, such as solo, duo, group, record, or daily quiz. You can also choose from various topics, such as sports, entertainment, religion, cinema, music, math, football, and more. You can also choose an opponent from your friends list, your group members, or a random player.
Answer the questions correctly and earn coins and gems
The last thing that you need to do to play Quiz of Kings Helper APK is to answer the questions correctly and earn coins and gems. You will have 10 seconds to answer each question, and you will see four options to choose from. You can also use hints or lifelines if you are not sure about the answer. If you answer correctly, you will earn coins and gems that you can use to buy more hints, lifelines, avatars, or gifts. If you answer incorrectly, you will lose some coins and gems.
-
Tips and Tricks for Quiz of Kings Helper APK
-
If you want to improve your performance and have more fun playing Quiz of Kings Helper APK, here are some tips and tricks that you can use:
Use the hints and lifelines wisely
One of the tips that you can use for Quiz of Kings Helper APK is to use the hints and lifelines wisely. Hints are clues that can help you narrow down the options or reveal the correct answer. Lifelines are special powers that can help you skip a question, eliminate two options, or double your score. However, hints and lifelines are limited and cost coins and gems, so you should use them sparingly and only when necessary.
Join or create a group and chat with other players
Another tip that you can use for Quiz of Kings Helper APK is to join or create a group and chat with other players. Groups are communities of players who share the same interests or goals in the game. You can join an existing group or create your own group and invite your friends or other players. You can chat with your group members, send them gifts, challenge them to games, or compete with other groups. Joining or creating a group can help you make new friends, learn new things, and have more fun in the game.
Challenge yourself with the record mode and the daily quiz
A final tip that you can use for Quiz of Kings Helper APK is to challenge yourself with the record mode and the daily quiz. Record mode is a mode where you can play as many questions as you can without any time limit or opponent. You can try to beat your own record or compare it with other players. Daily quiz is a mode where you can play a set of 10 questions every day and earn extra coins and gems. You can also see how you rank among other players. Playing record mode and daily quiz can help you improve your knowledge, skills, and confidence in the game.
-
Conclusion
-
Quiz of Kings Helper APK is a modified version of the popular trivia game Quiz of Kings that offers some extra features that are not available in the official version. Quiz of Kings Helper APK allows you to see the correct answer before choosing your option, gives you unlimited coins and gems, and lets you access the game without using the Google Play Store. However, Quiz of Kings Helper APK also has some drawbacks, such as violating the terms and conditions of the original game, containing malware or viruses, or stealing your personal information. Therefore, you should be careful when downloading and installing Quiz of Kings Helper APK from unknown sources. If you want to play Quiz of Kings Helper APK, you need to find a reliable source for the APK file, enable unknown sources on your device settings, follow the installation instructions and launch the game, create an account or log in with your existing one, choose a mode, a topic, and an opponent, answer the questions correctly and earn coins and gems, use the hints and lifelines wisely, join or create a group and chat with other players, and challenge yourself with the record mode and the daily quiz. Quiz of Kings Helper APK is a fun and engaging way to test your general knowledge, make new friends, and compete with others.
-
FAQs
-
Here are some of the frequently asked questions about Quiz of Kings Helper APK:
- -
Q: Is Quiz of Kings Helper APK safe to use?
- -
A: Quiz of Kings Helper APK is not safe to use because it is not authorized by the developers of the original game, it may contain malware or viruses that can harm your device or steal your personal information, and it may violate the terms and conditions of the original game. Therefore, you should be careful when downloading and installing Quiz of Kings Helper APK from unknown sources.
--
Q: How can I update Quiz of Kings Helper APK?
- -
A: You can update Quiz of Kings Helper APK by downloading and installing the latest version of the APK file from a reliable source. However, you should be aware that updating Quiz of Kings Helper APK may cause some issues or errors in the game.
--
Q: Can I play Quiz of Kings Helper APK offline?
- -
A: No, you cannot play Quiz of Kings Helper APK offline because it requires an internet connection to access the questions, chat with other players, or buy items in the game.
--
Q: Can I play Quiz of Kings Helper APK on other devices?
- -
A: Yes, you can play Quiz of Kings Helper APK on other devices that support Android operating system. However, you need to download and install the APK file on each device separately.
--
Q: Can I play Quiz of Kings Helper APK with non-Persian speakers?
- -
A: No, you cannot play Quiz of Kings Helper APK with non-Persian speakers because the game is designed for Persian speakers only. The questions and answers are in Persian language, and so are the chat messages and group names.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/The Ultimate Free Classic Solitaire Experience - Play Online Now.md b/spaces/congsaPfin/Manga-OCR/logs/The Ultimate Free Classic Solitaire Experience - Play Online Now.md
deleted file mode 100644
index ee7612a92aeaa4c78ac08d93acdf27471ccda031..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/The Ultimate Free Classic Solitaire Experience - Play Online Now.md
+++ /dev/null
@@ -1,126 +0,0 @@
-
-
Play Free Classic Solitaire No Download: How to Enjoy the Timeless Card Game Online
-
If you are looking for a relaxing and fun way to pass the time, you might want to try playing classic solitaire online. Solitaire is one of the most popular card games in the world, and you can play it for free without downloading anything on your computer or mobile device. In this article, we will explain what classic solitaire is, how to play it online, and what features and options you can customize to make your experience more enjoyable.
Classic solitaire, also known as Klondike solitaire, is a single-player card game that involves sorting a deck of cards into four piles according to suit and rank. The goal is to move all the cards from the tableau (the seven columns of cards on the table) to the foundations (the four empty spaces at the top) in ascending order, starting from the ace.
-
The history and rules of the game
-
The origin of solitaire is not clear, but some historians believe that it was invented in France or Germany in the 18th century. The game became popular in Europe and America in the 19th century, and was often played by Napoleon Bonaparte and Winston Churchill. The name "solitaire" comes from the French word for "alone", as the game is played by oneself.
-
The rules of classic solitaire are simple, but the game can be challenging and addictive. Here are the basic steps to play (a short Python sketch of the tableau-move rule follows the list):
-
-
Shuffle the deck and deal 28 cards face down into seven columns. The first column has one card, the second has two cards, and so on until the seventh column has seven cards. The top card of each column is turned face up.
-
The remaining 24 cards are placed face down in a pile called the stock. You can turn over one card at a time from the stock and place it on another pile called the waste.
-
You can move any face-up card from the tableau or the waste to another tableau column if it is one rank lower and of the opposite color (for example, you can move a black six onto a red seven). You can also move a group of cards in sequence from one tableau column to another, as long as they follow the same rule.
-
You can move any ace from the tableau or the waste to one of the four foundations. You can then build up each foundation in ascending order by suit (for example, you can place a two of hearts on an ace of hearts).
-
You can turn over a new card from the stock whenever you want, but you can only go through the stock once or three times, depending on your preference.
-
You win the game when you have moved all 52 cards to the foundations.
-
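To make steps 3 and 4 concrete, here is a minimal Python sketch of the two move-legality checks described above. It is purely illustrative; the function names, card encoding, and suit names are our own and are not taken from any particular solitaire implementation.

```python
from typing import Optional, Tuple

# A card is a (rank, suit) pair: rank 1 (ace) through 13 (king).
RED_SUITS = {"hearts", "diamonds"}
Card = Tuple[int, str]

def can_place_on_tableau(moving: Card, target: Card) -> bool:
    """A card may go on a tableau card one rank higher and of the opposite color."""
    moving_rank, moving_suit = moving
    target_rank, target_suit = target
    opposite_color = (moving_suit in RED_SUITS) != (target_suit in RED_SUITS)
    return moving_rank == target_rank - 1 and opposite_color

def can_place_on_foundation(moving: Card, foundation_top: Optional[Card]) -> bool:
    """An ace starts an empty foundation; after that, build up by one in the same suit."""
    if foundation_top is None:
        return moving[0] == 1
    top_rank, top_suit = foundation_top
    return moving[1] == top_suit and moving[0] == top_rank + 1

# Example from the rules above: a black six can be moved onto a red seven.
print(can_place_on_tableau((6, "spades"), (7, "hearts")))      # True
print(can_place_on_foundation((2, "hearts"), (1, "hearts")))   # True
```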
-
The benefits of playing solitaire
-
Playing solitaire is not only fun, but also good for your brain. Here are some of the benefits of playing solitaire regularly:
-
-
It improves your concentration and memory skills, as you have to keep track of the cards and plan your moves ahead.
-
It enhances your problem-solving and logical thinking abilities, as you have to find the best way to sort the cards and overcome obstacles.
-
It reduces your stress and anxiety levels, as you can focus on the game and forget about your worries for a while.
-
It boosts your mood and self-esteem, as you can feel a sense of accomplishment and satisfaction when you win or improve your score.
-
-
How to Play Free Classic Solitaire Online
-
You don't need to buy a deck of cards or download any software to play classic solitaire online. There are many websites that offer free solitaire games that you can play on your browser, whether you are using a computer, a tablet, or a smartphone. Here are some of the best websites to play solitaire without downloading anything:
-
Free online Solitaire
-
This website lets you play classic solitaire for free, with no ads or registration required. You can choose between one-card and three-card draw modes, and you can also undo your moves, restart the game, or get a hint if you are stuck. The website also keeps track of your time and moves, and shows you your best score and win percentage. You can also change the card design and the background color according to your preference.
-
-
World of Solitaire
-
This website offers more than 100 solitaire games, including classic solitaire, spider solitaire, freecell solitaire, and more. You can play any game for free, with no ads or registration required. You can also customize the game settings, such as the number of passes through the stock, the scoring system, the animation speed, and the sound effects. The website also records your statistics and achievements, and lets you create an account to save your progress.
-
Classic Solitaire
-
This website provides a simple and elegant interface to play classic solitaire online. You can play for free, with no ads or registration required. You can choose between one-card and three-card draw modes, and you can also undo your moves, restart the game, or get a hint if you are stuck. The website also shows you your time and moves, and gives you a star rating based on your performance. You can also change the card design and the background image according to your preference.
-
The features and options you can customize
-
Playing solitaire online can be more fun and challenging if you can customize the game features and options to suit your style and preference. Here are some of the features and options you can customize when playing solitaire online:
-
Difficulty levels and game modes
-
You can choose how difficult or easy you want the game to be by selecting the number of cards you draw from the stock each time. If you choose one-card draw mode, you will have more chances to move the cards around, so the game will be easier. If you choose three-card draw mode, you will have fewer chances to move the cards around, so the game will be harder.
-
You can also choose how many times you can go through the stock before the game is over. Some websites allow you to go through the stock only once, which makes the game more challenging. Other websites allow you to go through the stock three times or unlimited times, which makes the game easier.
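Taken together, these two choices amount to a tiny difficulty configuration. The sketch below is hypothetical (no solitaire site exposes exactly this object); it just shows how the draw count and the number of stock passes might be modeled.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SolitaireSettings:
    draw_count: int = 1              # 1 = one-card draw (easier), 3 = three-card draw (harder)
    stock_passes: Optional[int] = 3  # 1 = hardest, 3 = a common default, None = unlimited passes

easy_game = SolitaireSettings(draw_count=1, stock_passes=None)
hard_game = SolitaireSettings(draw_count=3, stock_passes=1)
```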
-
Card designs and backgrounds
-
You can make the game more visually appealing by changing the card design and the background of the game. Some websites offer different card designs, such as classic, modern, large print, or themed cards. You can also change the background color or image of the game, such as solid colors, gradients, patterns, or landscapes.
-
Statistics and achievements
-
You can keep track of your progress and performance by checking your statistics and achievements when playing solitaire online. Some websites show you your time and moves for each game, as well as your best score and win percentage. You can also see how many games you have played, won, or lost.
-
Some websites also reward you with achievements for completing certain goals or challenges in the game. For example, you might get an achievement for winning a game in less than a minute, or for clearing all the cards in one tableau column.
-
Conclusion
-
Classic solitaire is a timeless card game that you can play for free online without downloading anything. It is a great way to relax and have fun while improving your concentration, memory, problem-solving, and logical thinking skills. You can also customize the game features and options to make it more enjoyable and challenging for yourself.
-
We hope this article has helped you learn more about how to play free classic solitaire online. If you have any questions or comments, please feel free to share them below.
-
FAQs
-
-
Q: What is the difference between classic solitaire and spider solitaire?
-
A: Classic solitaire is a single-deck card game that involves sorting 52 cards into four piles according to suit and rank. Spider solitaire is a two-deck card game that involves sorting 104 cards into eight piles according to suit and rank, but only cards of the same suit can be moved together.
-
Q: How can I play solitaire offline?
-
A: If you want to play solitaire offline, you can either use a physical deck of cards or download a solitaire app on your device. There are many solitaire apps available for different platforms, such as Windows, Mac, iOS, Android, and more. Some of them are free, while others may require a fee or contain ads.
-
Q: How can I improve my solitaire skills?
-
A: There is no definitive strategy to win solitaire, as the game depends largely on luck and the cards you are dealt. However, there are some tips and tricks that can help you improve your solitaire skills, such as:
-
-
Always move an ace or a deuce to the foundation as soon as possible.
-
Try to expose the hidden cards in the tableau columns as quickly as possible.
-
Try to create empty tableau columns as soon as possible, as they can be used to store any card temporarily.
-
Try to avoid moving cards from the foundation back to the tableau, unless it is necessary.
-
Try to plan your moves ahead and anticipate the consequences of each move.
-
-
Q: What are some variations of solitaire?
-
A: There are many variations of solitaire, each with its own rules and challenges. Some of the most popular variations are:
-
-
Freecell solitaire: A solitaire game that involves using four free cells to temporarily store cards while sorting them into the foundations.
-
Golf solitaire: A solitaire game that involves removing cards from the tableau by placing them on a single waste pile, but only cards that are one rank higher or lower than the top card of the waste pile can be removed.
-
Pyramid solitaire: A solitaire game that involves removing cards from a pyramid-shaped tableau by pairing them up, but only cards that are fully exposed can be paired up.
-
-
Q: Where can I learn more about solitaire?
-
A: If you want to learn more about solitaire, you can visit some of these websites:
-
-
[Solitaire Central]: A website that offers information, resources, and links about solitaire games.
-
[Solitaire Network]: A website that offers free online solitaire games, tutorials, and tips.
-
[Solitaire City]: A website that offers free online and downloadable solitaire games, with high-quality graphics and sound effects.
-
-
-
\ No newline at end of file
diff --git a/spaces/contluForse/HuggingGPT/assets/Commercial Fonts - Avenir Next Pro (Font Family) A Comparison with Other Popular Fonts.md b/spaces/contluForse/HuggingGPT/assets/Commercial Fonts - Avenir Next Pro (Font Family) A Comparison with Other Popular Fonts.md
deleted file mode 100644
index d5eb7fff7e85af0b4769449db54c8cc983d87f36..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/Commercial Fonts - Avenir Next Pro (Font Family) A Comparison with Other Popular Fonts.md
+++ /dev/null
@@ -1,13 +0,0 @@
-
-
We hope you enjoy this collection of fonts similar to the Avenir Next Pro Rounded family. We searched the web and gathered the closest matches to Avenir Next Pro Rounded, and these fonts are completely free for personal use. If you think we missed a font similar to Avenir Next Pro Rounded, you can share it with us.
-
The Knockout font family offers a wide range of presentation styles not present in the majority of modern sans-serif families, providing the benefits of a well-designed collection and the visual appeal of individually designed fonts alike.
The word avenir is French for "future". As the name suggests, the family takes inspiration from the geometric style of sans-serif typeface developed in the 1920s that took the circle as a basis, such as Erbar and Futura. Frutiger intended Avenir to be a more organic interpretation of the geometric style, more even in colour and suitable for extended text, with details recalling more traditional typefaces such as the two-storey 'a' and 't' with a curl at the bottom, and letters such as the 'o' that are not exact, perfect circles but optically corrected.[1]
-
The initial release of the typeface family was increased to 24 fonts: six weights, each with a roman and italic version, in two widths (normal and condensed). Frutiger's numbering system was abandoned in favor of more conventional weight names. The glyph set was expanded to include small caps, text figures, subscript and superscripts, and ligatures.
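As a quick check of that count: six weights × two styles (roman and italic) × two widths (normal and condensed) = 24 fonts.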
-
The family includes 8 fonts in 4 weights (regular, medium, demi, and bold) and 1 width (based on normal width), with complementary italics. OpenType features include numerator and denominator, fractions, standard ligatures, lining and old-style figures, localized forms, scientific inferiors, subscript and superscript, and small caps.
-
Fontspec with LuaLaTeX works well for small font families, but appears cumbersome to use with super-families. A modern super-family can contain tens of fonts in a Weight/Width/Slope (WWS) matrix. As an example, Avenir Next has 32 fonts in one family.
-
In particular, we've separated the Avenir Next LT Pro fonts so you can get to know the style of each one and judge its suitability for your project. The family covers the regular and italic cuts plus Bold, Bold Condensed, Bold Condensed Italic, Condensed, Condensed Italic, Demi, Demi Condensed, Demi Condensed Italic, Demi Italic, Heavy Condensed, Heavy Condensed Italic, Medium Condensed, Medium Condensed Italic, Ultra Light Condensed, and Ultra Light Condensed Italic.
-
All of these fonts are made available for personal use only; resale or sharing is prohibited, and commercial use requires permission from the author. Avenir Next LT Pro Bold, for example, is credited to Adrian Frutiger and Akira Kobayashi and published by Linotype Library GmbH (Linotype is a trademark of Heidelberger Druckmaschinen AG). Installation is straightforward on Windows and macOS: download the compressed font file, extract the OTF or TTF files, and add them through the operating system's font manager.
")
- gr.Markdown(
- """ChatGPT based Insights from Decodem.ai for businesses.\nWhile ChatGPT has multiple use cases we have evolved specific use cases/ templates for businesses \n\n This template provides ideas on how a business can generate Advertisement ideas for a product. Enter a product area and get the results. Use examples as a guide. We use a equally powerful AI model bigscience/bloom."""
- )
- textbox = gr.Textbox(placeholder="Enter product name...", lines=1,label='Your product')
- btn = gr.Button("Generate")
- #output1 = gr.Textbox(lines=2,label='Market Sizing Framework')
- output_image = gr.components.Image(label="Your Advertisement")
-
-
- btn.click(getadvertisement,inputs=[textbox], outputs=[output_image])
- examples = gr.Examples(examples=['spectacles','rice cooker','smart watch','coffee mug',],
- inputs=[textbox])
-
-
-demo.launch()
\ No newline at end of file
diff --git a/spaces/decodemai/devils_advocate/app.py b/spaces/decodemai/devils_advocate/app.py
deleted file mode 100644
index ed1e0410e19fdd3653486f7f3cb1d2f446d07826..0000000000000000000000000000000000000000
--- a/spaces/decodemai/devils_advocate/app.py
+++ /dev/null
@@ -1,98 +0,0 @@
-import json
-import requests
-import gradio as gr
-import random
-import time
-import os
-import datetime
-from datetime import datetime
-
-API_TOKEN = os.getenv("API_TOKEN")
-from huggingface_hub import InferenceApi
-inference = InferenceApi("bigscience/bloom",token=API_TOKEN)
-
-DECODEM_TOKEN=os.getenv("DECODEM_TOKEN")
-
-headers = {'Content-type': 'application/json', 'Accept': 'text/plain'}
-url_decodemprompts='https://us-central1-createinsightsproject.cloudfunctions.net/getdecodemprompts'
-
-data={"prompt_type":'devils_advocate',"decodem_token":DECODEM_TOKEN}
-try:
- r = requests.post(url_decodemprompts, data=json.dumps(data), headers=headers)
-except requests.exceptions.ReadTimeout as e:
- print(e)
-#print(r.content)
-
-prompt=str(r.content, 'UTF-8')
-
-
-def infer(prompt,
- max_length = 250,
- top_k = 0,
- num_beams = 0,
- no_repeat_ngram_size = 2,
- top_p = 0.9,
- seed=42,
- temperature=0.7,
- greedy_decoding = False,
- return_full_text = False):
-
- print(seed)
- top_k = None if top_k == 0 else top_k
- do_sample = False if num_beams > 0 else not greedy_decoding
- num_beams = None if (greedy_decoding or num_beams == 0) else num_beams
- no_repeat_ngram_size = None if num_beams is None else no_repeat_ngram_size
- top_p = None if num_beams else top_p
- early_stopping = None if num_beams is None else num_beams > 0
-
- params = {
- "max_new_tokens": max_length,
- "top_k": top_k,
- "top_p": top_p,
- "temperature": temperature,
- "do_sample": do_sample,
- "seed": seed,
- "early_stopping":early_stopping,
- "no_repeat_ngram_size":no_repeat_ngram_size,
- "num_beams":num_beams,
- "return_full_text":return_full_text
- }
-
- s = time.time()
- response = inference(prompt, params=params)
- #print(response)
- proc_time = time.time()-s
- #print(f"Processing time was {proc_time} seconds")
- return response
-
-def getdevilsadvocate(text_inp):
- print(text_inp)
- print(datetime.today().strftime("%d-%m-%Y"))
- text = prompt+"\nInput:"+text_inp + "\nOutput:"
- resp = infer(text,seed=random.randint(0,100))
-
- generated_text=resp[0]['generated_text']
- result = generated_text.replace(text,'').strip()
- result = result.replace("Output:","")
- parts = result.split("###")
- topic = parts[0].strip()
- topic="\n".join(topic.split('\n')[:3])
- print(topic)
- return(topic)
-
-
-with gr.Blocks() as demo:
- gr.Markdown("
Devil's Advocate
")
- gr.Markdown(
- """ChatGPT based Insights from Decodem.ai for businesses.\nWhile ChatGPT has multiple use cases we have evolved specific use cases/ templates for businesses \n\n This template provides a devil's advocate view for your ideas. Enter a crisp idea (2-3 words) and get the results. Use examples to guide. We use a equally powerful AI model bigscience/bloom."""
- )
- textbox = gr.Textbox(placeholder="Enter the crisp idea here...", lines=1,label='The Idea')
- btn = gr.Button("Generate")
- output1 = gr.Textbox(lines=2,label="Devil's Advocate")
-
- btn.click(getdevilsadvocate,inputs=[textbox], outputs=[output1])
- examples = gr.Examples(examples=['paneer donuts','smart tee shirt','blockchain for EV chargers','autonomous cars'],
- inputs=[textbox])
-
-
-demo.launch()
\ No newline at end of file
diff --git a/spaces/deeplearning/audioldm-text-to-audio-generation/audioldm/clap/open_clip/utils.py b/spaces/deeplearning/audioldm-text-to-audio-generation/audioldm/clap/open_clip/utils.py
deleted file mode 100644
index de59fd2746a13742197ecdeac671d61ece3f79ba..0000000000000000000000000000000000000000
--- a/spaces/deeplearning/audioldm-text-to-audio-generation/audioldm/clap/open_clip/utils.py
+++ /dev/null
@@ -1,361 +0,0 @@
-import numpy as np
-import torch
-from torch import nn as nn
-from torchvision.ops.misc import FrozenBatchNorm2d
-import logging
-# import h5py
-from tqdm import tqdm
-import random
-import json
-import os
-import pathlib
-
-# TODO: (yusong) this not a good place to store those information and does not scale. Need to be fixed later.
-dataset_split = {
- "audiocaps": ["train", "valid", "test"],
- "audioset": ["balanced_train", "unbalanced_train", "eval"],
- "BBCSoundEffects": ["train", "test"],
- "Clotho": ["train", "test", "valid"],
- "free_to_use_sounds": ["train", "test"],
- "paramount_motion": ["train", "test"],
- "sonniss_game_effects": ["train", "test"],
- "wesoundeffects": ["train", "test"],
- "MACS": ["train", "test"],
- "freesound": ["train", "test"],
- "FSD50K": ["train", "test", "valid"],
- "fsd50k_class_label": ["train", "test", "valid"],
- "esc50": ["train", "test"],
- "audiostock": ["train", "test"],
- "freesound_no_overlap_noesc50": ["train", "test"],
- "epidemic_sound_effects": ["train", "test"],
- "VGGSound": ["train", "test"],
- "urbansound8k_class_label": ["train", "test"],
- "audioset_t5": ["balanced_train", "unbalanced_train", "eval"],
- "epidemic_sound_effects_t5": ["train", "test"],
- "WavText5K": ["train", "test"],
- "esc50_no_overlap": ["train", "test"],
- "usd8k_no_overlap": ["train", "test"],
- "fsd50k_200_class_label": ["train", "test", "valid"],
-}
-
-
-def freeze_batch_norm_2d(module, module_match={}, name=""):
- """
- Converts all `BatchNorm2d` and `SyncBatchNorm` layers of provided module into `FrozenBatchNorm2d`. If `module` is
- itself an instance of either `BatchNorm2d` or `SyncBatchNorm`, it is converted into `FrozenBatchNorm2d` and
- returned. Otherwise, the module is walked recursively and submodules are converted in place.
-
- Args:
- module (torch.nn.Module): Any PyTorch module.
- module_match (dict): Dictionary of full module names to freeze (all if empty)
- name (str): Full module name (prefix)
-
- Returns:
- torch.nn.Module: Resulting module
-
- Inspired by https://github.com/pytorch/pytorch/blob/a5895f85be0f10212791145bfedc0261d364f103/torch/nn/modules/batchnorm.py#L762
- """
- res = module
- is_match = True
- if module_match:
- is_match = name in module_match
- if is_match and isinstance(
- module, (nn.modules.batchnorm.BatchNorm2d, nn.modules.batchnorm.SyncBatchNorm)
- ):
- res = FrozenBatchNorm2d(module.num_features)
- res.num_features = module.num_features
- res.affine = module.affine
- if module.affine:
- res.weight.data = module.weight.data.clone().detach()
- res.bias.data = module.bias.data.clone().detach()
- res.running_mean.data = module.running_mean.data
- res.running_var.data = module.running_var.data
- res.eps = module.eps
- else:
- for child_name, child in module.named_children():
- full_child_name = ".".join([name, child_name]) if name else child_name
- new_child = freeze_batch_norm_2d(child, module_match, full_child_name)
- if new_child is not child:
- res.add_module(child_name, new_child)
- return res
-
-
-def exist(dataset_name, dataset_type):
- """
- Check if dataset exists
- """
- if dataset_type in dataset_split[dataset_name]:
- return True
- else:
- return False
-
-
-def get_tar_path_from_dataset_name(
- dataset_names, dataset_types, islocal, dataset_path, proportion=1, full_dataset=None
-):
- """
- Get tar path from dataset name and type
- """
- output = []
- for n in dataset_names:
- if full_dataset is not None and n in full_dataset:
- current_dataset_types = dataset_split[n]
- else:
- current_dataset_types = dataset_types
- for s in current_dataset_types:
- tmp = []
- if islocal:
- sizefilepath_ = f"{dataset_path}/{n}/{s}/sizes.json"
- if not os.path.exists(sizefilepath_):
- sizefilepath_ = f"./json_files/{n}/{s}/sizes.json"
- else:
- sizefilepath_ = f"./json_files/{n}/{s}/sizes.json"
- if not os.path.exists(sizefilepath_):
- continue
- sizes = json.load(open(sizefilepath_, "r"))
- for k in sizes.keys():
- if islocal:
- tmp.append(f"{dataset_path}/{n}/{s}/{k}")
- else:
- tmp.append(
- f"pipe:aws s3 --cli-connect-timeout 0 cp s3://s-laion-audio/webdataset_tar/{n}/{s}/{k} -"
- )
- if proportion != 1:
- tmp = random.sample(tmp, int(proportion * len(tmp)))
- output.append(tmp)
- return sum(output, [])
-
-
-def get_tar_path_from_txts(txt_path, islocal, proportion=1):
- """
- Get tar path from txt path
- """
- if isinstance(txt_path, (list, tuple)):
- return sum(
- [
- get_tar_path_from_txts(
- txt_path[i], islocal=islocal, proportion=proportion
- )
- for i in range(len(txt_path))
- ],
- [],
- )
- if isinstance(txt_path, str):
- with open(txt_path) as f:
- lines = f.readlines()
- if islocal:
- lines = [
- lines[i]
- .split("\n")[0]
- .replace("pipe:aws s3 cp s3://s-laion-audio/", "/mnt/audio_clip/")
- for i in range(len(lines))
- ]
- else:
- lines = [
- lines[i].split("\n")[0].replace(".tar", ".tar -")
- for i in range(len(lines))
- ]
- if proportion != 1:
- print("Sampling tars with proportion of {}".format(proportion))
- lines = random.sample(lines, int(proportion * len(lines)))
- return lines
-
-
-def get_mix_lambda(mixup_alpha, batch_size):
- mixup_lambdas = [
- np.random.beta(mixup_alpha, mixup_alpha, 1)[0] for _ in range(batch_size)
- ]
- return np.array(mixup_lambdas).astype(np.float32)
-
-
-def do_mixup(x, mixup_lambda):
- """
- Args:
- x: (batch_size , ...)
- mixup_lambda: (batch_size,)
- Returns:
- out: (batch_size, ...)
- """
- out = (
- x.transpose(0, -1) * mixup_lambda
- + torch.flip(x, dims=[0]).transpose(0, -1) * (1 - mixup_lambda)
- ).transpose(0, -1)
- return out
-
-
-def interpolate(x, ratio):
- """Interpolate data in time domain. This is used to compensate the
- resolution reduction in downsampling of a CNN.
-
- Args:
- x: (batch_size, time_steps, classes_num)
- ratio: int, ratio to interpolate
- Returns:
- upsampled: (batch_size, time_steps * ratio, classes_num)
- """
- (batch_size, time_steps, classes_num) = x.shape
- upsampled = x[:, :, None, :].repeat(1, 1, ratio, 1)
- upsampled = upsampled.reshape(batch_size, time_steps * ratio, classes_num)
- return upsampled
-
-
-def pad_framewise_output(framewise_output, frames_num):
- """Pad framewise_output to the same length as input frames. The pad value
- is the same as the value of the last frame.
- Args:
- framewise_output: (batch_size, frames_num, classes_num)
- frames_num: int, number of frames to pad
- Outputs:
- output: (batch_size, frames_num, classes_num)
- """
- pad = framewise_output[:, -1:, :].repeat(
- 1, frames_num - framewise_output.shape[1], 1
- )
- """tensor for padding"""
-
- output = torch.cat((framewise_output, pad), dim=1)
- """(batch_size, frames_num, classes_num)"""
-
-
-# def process_ipc(index_path, classes_num, filename):
-# # load data
-# logging.info("Load Data...............")
-# ipc = [[] for _ in range(classes_num)]
-# with h5py.File(index_path, "r") as f:
-# for i in tqdm(range(len(f["target"]))):
-# t_class = np.where(f["target"][i])[0]
-# for t in t_class:
-# ipc[t].append(i)
-# print(ipc)
-# np.save(filename, ipc)
-# logging.info("Load Data Succeed...............")
-
-
-def save_to_dict(s, o_={}):
- sp = s.split(": ")
- o_.update({sp[0]: float(sp[1])})
- return o_
-
-
-def get_data_from_log(txt_path):
- """
- Output dictionary from out.txt log file
- """
- with open(txt_path) as f:
- lines = f.readlines()
- val_data = {}
- train_data = {}
- train_losses = []
- train_losses_epoch = []
- for i in range(len(lines)):
- if "| INFO |" in lines[i]:
- if "Eval Epoch" in lines[i]:
- if "val_loss" in lines[i]:
- # float(regex.sub("", lines[310].split(" ")[-1]).replace(" ", ""))
- line = lines[i].split("Eval Epoch: ")[-1]
- num_epoch = int(line.split(" ")[0].split(" ")[0])
- d = {
- line.split(" ")[0]
- .split(" ")[1]
- .replace(":", ""): float(line.split(" ")[0].split(" ")[-1])
- }
- for i in range(1, len(line.split(" "))):
- d = save_to_dict(line.split(" ")[i], d)
- val_data[num_epoch] = d
- elif "Train Epoch" in lines[i]:
- num_epoch = int(lines[i].split("Train Epoch: ")[1][0])
- loss = float(lines[i].split("Loss: ")[-1].split(" (")[0])
- train_losses.append(loss)
- train_losses_epoch.append(num_epoch)
- for i in range(len(train_losses)):
- train_data[i] = {
- "num_epoch": train_losses_epoch[i],
- "train_loss": train_losses[i],
- }
- return train_data, val_data
-
-
-def save_p(obj, filename):
- import pickle
-
- try:
- from deepdiff import DeepDiff
- except:
- os.system("pip install deepdiff")
- from deepdiff import DeepDiff
- with open(filename, "wb") as file:
- pickle.dump(obj, file, protocol=pickle.HIGHEST_PROTOCOL) # highest protocol
- with open(filename, "rb") as file:
- z = pickle.load(file)
- assert (
- DeepDiff(obj, z, ignore_string_case=True) == {}
- ), "there is something wrong with the saving process"
- return
-
-
-def load_p(filename):
- import pickle
-
- with open(filename, "rb") as file:
- z = pickle.load(file)
- return z
-
-
-def save_json(data, name="data.json"):
- import json
-
- with open(name, "w") as fp:
- json.dump(data, fp)
- return
-
-
-def load_json(name):
- import json
-
- with open(name, "r") as fp:
- data = json.load(fp)
- return data
-
-
-from multiprocessing import Process, Manager
-from multiprocessing import Process, Value, Array
-from ctypes import c_wchar
-
-
-def load_class_label(path):
- # https://stackoverflow.com/questions/48004243/how-to-share-large-read-only-dictionary-list-across-processes-in-multiprocessing
- # https://stackoverflow.com/questions/45693949/storing-strings-in-a-multiprocessing-sharedctypes-array
- out = None
- if path is not None:
- if pathlib.Path(path).suffix in [".pkl", ".pickle"]:
- out = load_p(path)
- elif pathlib.Path(path).suffix in [".json", ".txt"]:
- out = load_json(path)
- elif pathlib.Path(path).suffix in [".npy", ".npz"]:
- out = np.load(path)
- elif pathlib.Path(path).suffix in [".csv"]:
- import pandas as pd
-
- out = pd.read_csv(path)
- return out
- # if out is None:
- # return None
- # else:
- # key = Array(c_wchar, '\n'.join(list(out.keys())), lock=False)
- # val = Array('i', out.values(), lock=False)
- # return (key, val)
-
-
-from torch import optim
-
-
-def get_optimizer(params, lr, betas, eps, momentum, optimizer_name):
- if optimizer_name.lower() == "adamw":
- optimizer = optim.AdamW(params, lr=lr, betas=betas, eps=eps)
- elif optimizer_name.lower() == "sgd":
- optimizer = optim.SGD(params, lr=lr, momentum=momentum)
- elif optimizer_name.lower() == "adam":
- optimizer = optim.Adam(params, lr=lr, betas=betas, eps=eps)
- else:
- raise ValueError("optimizer name is not correct")
- return optimizer
diff --git a/spaces/demo-org/doccano/Dockerfile b/spaces/demo-org/doccano/Dockerfile
deleted file mode 100644
index 3e39deb70c6ef8f005ba4c702e136a6743bf049b..0000000000000000000000000000000000000000
--- a/spaces/demo-org/doccano/Dockerfile
+++ /dev/null
@@ -1,12 +0,0 @@
-FROM doccano/doccano
-
-ENV ADMIN_USERNAME=admin
-ENV ADMIN_EMAIL=admin@admin.com
-ENV ADMIN_PASSWORD=password
-
-# Otherwise it gets blocked by X-FRAME DENY
-# https://github.com/doccano/doccano/blob/a2918f792e2a1d076c5f3abbbc7af7e3b2c11d0b/backend/config/settings/base.py#L85
-RUN echo "X_FRAME_OPTIONS = 'SAMEORIGIN'" > /doccano/local_settings.py
-RUN sed -i 's/"django.middleware.clickjacking.XFrameOptionsMiddleware",//g' /doccano/backend/config/settings/base.py
-
-CMD ["/doccano/tools/run.sh"]
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/Lakshmi Narayana Hrudayam Stotram In Tamil Pdf 67.md b/spaces/diacanFperku/AutoGPT/Lakshmi Narayana Hrudayam Stotram In Tamil Pdf 67.md
deleted file mode 100644
index 469b6b1fb8bfab593ae7ce049626441bda980b2e..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Lakshmi Narayana Hrudayam Stotram In Tamil Pdf 67.md
+++ /dev/null
@@ -1,160 +0,0 @@
-
-
Sri Lakshmi Narayana Hrudayam Stotram in Tamil PDF 67 - A Divine Prayer for Wealth and Prosperity
-
Sri Lakshmi Narayana Hrudayam Stotram is a powerful mantra that invokes the blessings of Lord Vishnu and Goddess Lakshmi, the preservers and providers of the universe. This stotra is composed by Sage Parashara, the father of Vyasa, and consists of 16 verses that describe the glory and attributes of Lord Narayana and Goddess Lakshmi. The stotra also contains a meditation, a nyasa, and a prarthana (prayer) for attaining various benefits such as dharma (righteousness), artha (wealth), kama (desire), and moksha (liberation).
-
In this article, we will provide you with a link to download Sri Lakshmi Narayana Hrudayam Stotram in Tamil PDF 67, which is a scanned copy of the original text in Tamil script. We will also explain the meaning and significance of this stotra, and how to chant it for maximum benefits.
Meaning and Significance of Sri Lakshmi Narayana Hrudayam Stotram
-
Sri Lakshmi Narayana Hrudayam Stotram begins with an invocation to Lord Narayana, who is the supreme light, the supreme self, the supreme brahman, the supreme lord, the supreme father, the supreme knowledge, the supreme witness, and the supreme creator of all beings. The stotra then praises Lord Narayana as the source of all auspiciousness, happiness, purity, strength, wisdom, and grace. The stotra also describes Lord Narayana as the one who resides in all the worlds, who is worshipped by all the gods, who is the sun, the moon, the fire, the guru, and the savior from the ocean of samsara (cycle of birth and death).
-
The stotra then shifts its focus to Goddess Lakshmi, who is the consort of Lord Narayana and who resides in his heart. The stotra praises Goddess Lakshmi as the mother of all creation, who bestows wealth, prosperity, beauty, fertility, and abundance. The stotra also describes Goddess Lakshmi as the one who grants boons, who removes obstacles, who fulfills desires, who protects from enemies, who dispels poverty, disease, and sorrow.
-
The stotra then requests Lord Narayana and Goddess Lakshmi to bless the devotee with their grace and mercy. The stotra asks them to grant dharma (righteousness), artha (wealth), kama (desire), and moksha (liberation) to the devotee. The stotra also asks them to remove all sins, faults, afflictions, and fears from the devotee. The stotra concludes with a salutation to Lord Narayana and Goddess Lakshmi.
-
-
How to Chant Sri Lakshmi Narayana Hrudayam Stotram
-
To chant Sri Lakshmi Narayana Hrudayam Stotram effectively, you need to follow some guidelines and procedures. Here are some tips for chanting this stotra:
-
-
Chant this stotra in the morning or evening after taking a bath and wearing clean clothes.
-
Chant this stotra in front of an image or idol of Lord Vishnu and Goddess Lakshmi.
-
Chant this stotra with devotion and concentration.
-
Chant this stotra 11 times daily for 48 days or 108 times daily for 21 days.
-
Chant this stotra on Fridays or on full moon days for more benefits.
-
Chant this stotra during festivals such as Diwali or Varalakshmi Vratam for more blessings.
-
-
-
Download Sri Lakshmi Narayana Hrudayam Stotram in Tamil PDF 67
-
If you want to download Sri Lakshmi Narayana Hrudayam Stotram in Tamil PDF 67, you can click on this link: https://archive.org/details/SriLakshmiNarayanaHrudayam. This link will take you to a page where you can view or download a scanned copy of the original text in Tamil script. You can also print or save this PDF file for your personal use.
-
-
Conclusion
-
Sri Lakshmi Narayana Hrudayam Stotram is a divine prayer that can help you attain wealth and prosperity in your life. By chanting this stotra regularly with faith and devotion, you can invoke the grace and mercy of Lord Vishnu and Goddess Lakshmi. You can also overcome all your problems and difficulties with their help. You can also achieve dharma (righteousness), artha (wealth), kama (desire), and moksha (liberation) with their blessings.
-
-
We hope that this article has been helpful and informative for you. If you have any questions or comments, please feel free to leave them below. Thank you for reading.
-
Benefits of Chanting Sri Lakshmi Narayana Hrudayam Stotram
-
Chanting Sri Lakshmi Narayana Hrudayam Stotram can bring many benefits to your life. Here are some of the benefits that you can experience by chanting this stotra:
-
-
You can attract wealth and prosperity in your life. You can also overcome poverty and debt with the help of Goddess Lakshmi.
-
You can attain peace and happiness in your life. You can also enjoy good health and longevity with the help of Lord Vishnu.
-
You can fulfill your desires and wishes with the grace of Lord Narayana and Goddess Lakshmi. You can also achieve success and fame in your endeavors.
-
You can remove all obstacles and enemies from your path. You can also protect yourself from evil and negative forces with the power of Lord Narayana and Goddess Lakshmi.
-
You can purify your mind and heart from all sins and faults. You can also develop devotion and wisdom with the guidance of Lord Narayana and Goddess Lakshmi.
-
You can attain liberation from the cycle of birth and death. You can also reach the abode of Lord Narayana and Goddess Lakshmi with their mercy.
-
-
-
Meaning of Sri Lakshmi Narayana Hrudayam Stotram in Tamil
-
If you want to understand the meaning of Sri Lakshmi Narayana Hrudayam Stotram in Tamil, you can refer to this translation. It is based on the original text in Tamil script and tries to convey the essence and spirit of the stotra. However, it is not a word-for-word literal rendering and may not capture all the nuances and subtleties of the stotra. Therefore, we recommend reading this translation alongside the original text for a better understanding.
-
-
Here is the meaning of Sri Lakshmi Narayana Hrudayam Stotram in Tamil:
-
-
-ஸ்ரீ லட்சுமி நாராயண ஹ்ருதயம்
-
-ஹரி: ஓம் ||
-
-அஸ்ய ஸ்ரீ நாராயண ஹ்ருதய ஸ்தோத்ர மஹாமந்த்ரஸ்ய |
-பார்கவ ருஷி: | அனுஷ்டுப் சந்த: | லக்ஷ்மி நாராயனோ தேவதா |
-நாராயண ப்ரீத்யர்தே ஜபே விநியோக: ||
-
-This is the great mantra of Lord Narayana's heart composed by Sage Parashara.
-The sage is Parashara, the meter is Anushtup, the deity is Lakshmi Narayana.
-This mantra is chanted for pleasing Lord Narayana.
-
-॥ கரந்யாஸ: ॥
-
-With the thumb, I salute Lord Narayana who is the supreme light.
-With the index finger, I salute Lord Narayana who is the supreme brahman.
-With the middle finger, I salute Lord Narayana who is the supreme lord.
-With the ring finger, I salute Lord Narayana who is the supreme darkness.
-With the little finger, I salute Lord Narayana who is the supreme dharma.
-With both palms, I salute Lord Narayana who is everything.
-
-॥ அங்கந்யாஸ: ॥
-
-With my heart, I salute Lord Narayana who is the supreme light.
-With my head, I offer oblations to Lord Narayana who is the supreme brahman.
-With my tuft, I propitiate Lord Narayana who is the supreme lord.
-With my armor, I invoke Lord Narayana who is the supreme darkness.
-With my eyes, I worship Lord Narayana who is the supreme dharma.
-With my weapon, I strike Lord Narayana who is everything.
-
-॥ அত ত্যানম் ॥
-
-I meditate on Lord Hari who shines like the rising sun,
-who wears yellow clothes and has four arms,
-who holds a conch, a discus, a mace and a lotus,
-who is the lord of Lakshmi.
-
-I meditate on Lord Hari who has a wheel that supports all the three worlds,
-who has a crown above that wheel,
-who has a lotus stalk that holds a lotus bud,
-who has a mountain that bears a golden lotus,
-who has three peaceful forms,
-who has a gem-studded crown,
-who has earrings that shine,
-who is known as Lakshmi Narayana,
-who has lotus-like eyes,
-who is always present in my mind.
-
-॥ அস্য শ্রী নারায়ণ হ্রু ʼ দয স্তোত্র মহামন্ত্রস্য |
-প্রহ্মা রু ʼ ষি: | অনুষ্টুপ্ চন্দ: | নারায়ণো দেবতা |
-নারায়ণ-প্রীত্যর্থে জপে বিনিযোগ: ||
-
-This is another great mantra of Lord Narayana's heart composed by Brahma.
-The sage is Brahma, the meter is Anushtup, the deity is Narayana.
-This mantra is chanted for pleasing Lord Narayana.
-
-ௐ ॥
-
-Narayana is the supreme light, the supreme self, salutations to him.
-Narayana is
-
-
-the supreme brahman, salutations to him.
-Narayana is the supreme lord, the supreme father, salutations to him.
-Narayana is the supreme darkness, the supreme silence, salutations to him.
-Narayana is the supreme dharma, the supreme law, salutations to him.
-Narayana is the supreme knowledge, the supreme teacher, salutations to him.
-Narayana is everything, the supreme witness, salutations to him.
-
-॥ 1 ॥
-
-Narayana is the source of all creation, from him Brahma was born.
-From Narayana came Shiva, from Narayana came Indra.
-From Narayana came the sun, the moon, and the fire.
-From Narayana came the guru, the savior from samsara.
-
-॥ 2 ॥
-
-Narayana is the one who is worshipped by all beings, he is the lord of all worlds.
-He is the one who grants boons, he is the one who removes obstacles.
-He is the one who fulfills desires, he is the one who protects from enemies.
-He is the one who dispels poverty, disease, and sorrow.
-
-॥ 3 ॥
-
-Narayana is the one who is pure and holy, he is the one who purifies all sins.
-He is the one who is blissful and joyful, he is the one who bestows happiness.
-He is the one who is wise and compassionate, he is the one who imparts wisdom.
-He is the one who is gracious and merciful, he is the one who showers grace.
-
-॥ 4 ॥
-
-Narayana is the one who is eternal and infinite, he is the one who transcends time and space.
-He is the one who is omnipotent and omniscient, he is the one who knows and does everything.
-He is the one who is omnipresent and immanent, he is the one who pervades and sustains everything.
-He is the one who is beyond description and comprehension, he is the one who can only be experienced.
-
-॥ 5 ॥
-
-Salutations to Narayana, who is the supreme light, self, brahman, lord, father,
-darkness, silence, dharma, law,
-knowledge, teacher,
-everything,
-witness.
-
-Salutations to Narayana again and again.
-
-॥ 6 ॥
-
-
Conclusion
-
Sri Lakshmi Narayana Hrudayam Stotram is a divine prayer that can help you attain wealth and prosperity in your life. By chanting this stotra regularly with faith and devotion, you can invoke the grace and mercy of Lord Narayana and Goddess Lakshmi. You can also overcome all your problems and difficulties with their help. You can also achieve dharma (righteousness), artha (wealth), kama (desire), and moksha (liberation) with their blessings.
-
We hope that this article has been helpful and informative for you. If you have any questions or comments, please feel free to leave them below. Thank you for reading.
-
-
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/Mahouka-Koukou-No-Rettousei-1080p-Torrentl.md b/spaces/diacanFperku/AutoGPT/Mahouka-Koukou-No-Rettousei-1080p-Torrentl.md
deleted file mode 100644
index 303b9900c0f7dc555902c6235b002d4796bc6809..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Mahouka-Koukou-No-Rettousei-1080p-Torrentl.md
+++ /dev/null
@@ -1,64 +0,0 @@
-## Mahouka Koukou No Rettousei 1080p Torrentl
-
-
-
-
-
-
-
-
-
-**CLICK HERE ››› [https://urluso.com/2txxxs](https://urluso.com/2txxxs)**
-
-
-
-
-
-
-
-
-
-
-
-
-
-# Mahouka Koukou No Rettousei: A Review of the Anime Series and How to Download It in High Quality
-
-
-
-Mahouka Koukou No Rettousei, also known as The Irregular at Magic High School, is a popular anime series based on the light novel series by Tsutomu Satou. The story follows Tatsuya Shiba, a student at a prestigious magic high school who is considered an irregular because of his low aptitude for magic. However, he has a secret talent that makes him a formidable fighter and a genius engineer. Along with his sister Miyuki, who is a top student and his guardian, he gets involved in various conflicts and mysteries involving magic and technology.
-
-
-
-The anime series consists of two seasons, a movie, and a special episode. The first season aired in 2014 and covered the first seven volumes of the light novel. The movie, titled The Irregular at Magic High School: The Girl Who Summons the Stars, was released in 2017 and was an original story set between the first and second season. The second season, titled The Irregular at Magic High School: Visitor Arc, aired in 2020 and adapted volumes 9 to 11 of the light novel. The special episode, titled The Irregular at Magic High School: Reminiscence Arc, was released in 2021 and adapted volume 8 of the light novel.
-
-
-
-The anime series has received positive reviews from fans and critics for its action-packed scenes, intriguing plot, and complex magic system. The animation quality is also praised for its smooth and detailed visuals. The voice acting, music, and sound effects are also well-done and enhance the atmosphere of the show.
-
-
-
-If you are interested in watching Mahouka Koukou No Rettousei in high quality, you can download it from various torrent sites that offer 1080p resolution. However, you should be careful of the legal and ethical issues involved in downloading copyrighted content without permission. You should also be aware of the potential risks of malware, viruses, and phishing scams that may come with torrent files.
-
-
-
-Some of the torrent sites that offer Mahouka Koukou No Rettousei in 1080p are:
-
-
-
-- Nyaa[^1^]: This is a popular site for anime torrents that has a large collection of Mahouka Koukou No Rettousei episodes in different formats and languages. You can choose from HEVC x265, FLAC, Dual-Audio, SubsPlease, Erai-raws, EMBER, sam, Beatrice-Raws, and more. You can also find the movie and the special episode here.
-
- Reddit[^3^]: This is a social media platform that has various communities for anime fans. You can find posts that share links to Mahouka Koukou No Rettousei torrents in 1080p on subreddits like r/Mahouka or r/animepiracy. However, you should check the rules of each subreddit, and of Reddit as a whole, before downloading anything.
-
-- SoundCloud[^5^]: This is an online audio platform that allows users to upload and share music and podcasts. You can find some tracks that have links to Mahouka Koukou No Rettousei torrents in 1080p on SoundCloud by searching for the keyword. However, you should be wary of the quality and legitimacy of these links as they may not be verified or safe.
-
-
-
-In conclusion, Mahouka Koukou No Rettousei is an anime series that you can enjoy watching in high quality if you download it from torrent sites. However, you should be mindful of the legal and ethical implications of doing so as well as the possible dangers of malware and scams. You should also respect the original creators and support them by buying their official products if you can.
-
-
-
-
-
-
diff --git a/spaces/diacanFperku/AutoGPT/Neoragex 5.2a Official [BEST] Fullset All Roms (neo-geo 188 Games).rar.md b/spaces/diacanFperku/AutoGPT/Neoragex 5.2a Official [BEST] Fullset All Roms (neo-geo 188 Games).rar.md
deleted file mode 100644
index 00e393a70d0b059e7e1e78c26ffb6766898a8d33..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Neoragex 5.2a Official [BEST] Fullset All Roms (neo-geo 188 Games).rar.md
+++ /dev/null
@@ -1,24 +0,0 @@
-
-
How to Download and Play Neo-Geo Games on Your PC with NeoRAGEx 5.2a
-
If you are a fan of classic arcade games, you may be interested in playing some of the titles from the Neo-Geo system, which was a popular arcade and home console platform in the 1990s. The Neo-Geo had a library of 188 games, ranging from fighting games like The King of Fighters and Samurai Shodown, to shoot 'em ups like Metal Slug and Blazing Star, to sports games like Super Sidekicks and Neo Turf Masters.
-
However, finding and buying a working Neo-Geo console and cartridges can be expensive and difficult nowadays. Fortunately, there is a way to enjoy these games on your PC using an emulator called NeoRAGEx. An emulator is a software that mimics the hardware and software of another system, allowing you to run its games on your computer.
-
neoragex 5.2a official fullset all roms (neo-geo 188 games).rar
In this article, we will show you how to download and play Neo-Geo games on your PC with NeoRAGEx 5.2a, which is an updated version of the original NeoRAGEx emulator that supports all 188 games. You will need to download two files: the emulator itself, and a compressed file that contains all the ROMs (the game files) for the Neo-Geo system.
-
Step 1: Download NeoRAGEx 5.2a
-
The first thing you need to do is to download the NeoRAGEx 5.2a emulator from one of these sources:
The file size is about 1.6 GB, so it may take some time to download depending on your internet speed. Once you have downloaded the file, you need to extract it using a program like WinRAR or 7-Zip. You should get a folder called "NeoRAGEx 5.2a" that contains the emulator executable and other files.
-
Step 2: Download neoragex 5.2a official fullset all roms (neo-geo 188 games).rar
-
The next thing you need to do is to download the compressed file that contains all the ROMs for the Neo-Geo system. The file name is "neoragex 5.2a official fullset all roms (neo-geo 188 games).rar" and you can find it from one of these sources:
The file size is about 1.8 GB, so again it may take some time to download depending on your internet speed. Once you have downloaded the file, you need to extract it using a program like WinRAR or 7-Zip. You should get a folder called "ROMS" that contains all the Neo-Geo game ROM files.
-
-
\ No newline at end of file
diff --git a/spaces/digitalxingtong/Azusa-Bert-VITS2/preprocess_text.py b/spaces/digitalxingtong/Azusa-Bert-VITS2/preprocess_text.py
deleted file mode 100644
index 44c35fecd9b7f21016e80e9597d6055254cba3f7..0000000000000000000000000000000000000000
--- a/spaces/digitalxingtong/Azusa-Bert-VITS2/preprocess_text.py
+++ /dev/null
@@ -1,69 +0,0 @@
-import json
-from random import shuffle
-
-import tqdm
-from text.cleaner import clean_text
-from collections import defaultdict
-import shutil
-stage = [1,2,3]
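-# Stage 1: clean the raw transcription list; stage 2: build per-speaker train/val splits and a speaker-ID map; stage 3: write n_speakers and spk2id into config.json.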
-
-transcription_path = 'filelists/short_character_anno.list'
-train_path = 'filelists/train.list'
-val_path = 'filelists/val.list'
-config_path = "configs/config.json"
-val_per_spk = 4
-max_val_total = 8
-
-if 1 in stage:
- with open( transcription_path+'.cleaned', 'w', encoding='utf-8') as f:
- for line in tqdm.tqdm(open(transcription_path, encoding='utf-8').readlines()):
- try:
- utt, spk, language, text = line.strip().split('|')
- #language = "ZH"
- norm_text, phones, tones, word2ph = clean_text(text, language)
- f.write('{}|{}|{}|{}|{}|{}|{}\n'.format(utt, spk, language, norm_text, ' '.join(phones),
- " ".join([str(i) for i in tones]),
- " ".join([str(i) for i in word2ph])))
-            except Exception:
-                print("err!", line.strip())
-
-if 2 in stage:
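-    # Group utterances by speaker and assign each speaker an integer ID, then split each speaker's utterances into train/val below.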
- spk_utt_map = defaultdict(list)
- spk_id_map = {}
- current_sid = 0
-
- with open( transcription_path+'.cleaned', encoding='utf-8') as f:
- for line in f.readlines():
- utt, spk, language, text, phones, tones, word2ph = line.strip().split('|')
- spk_utt_map[spk].append(line)
- if spk not in spk_id_map.keys():
- spk_id_map[spk] = current_sid
- current_sid += 1
- train_list = []
- val_list = []
- for spk, utts in spk_utt_map.items():
- shuffle(utts)
- val_list+=utts[:val_per_spk]
- train_list+=utts[val_per_spk:]
- if len(val_list) > max_val_total:
- train_list+=val_list[max_val_total:]
- val_list = val_list[:max_val_total]
-
- with open( train_path,"w", encoding='utf-8') as f:
- for line in train_list:
- f.write(line)
-
- file_path = transcription_path+'.cleaned'
- shutil.copy(file_path,'./filelists/train.list')
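-    # Note: the copy above overwrites train.list with the full cleaned transcript list, replacing the per-speaker split written earlier.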
-
- with open(val_path, "w", encoding='utf-8') as f:
- for line in val_list:
- f.write(line)
-
-if 3 in stage:
- assert 2 in stage
- config = json.load(open(config_path))
- config['data']["n_speakers"] = current_sid #
- config["data"]['spk2id'] = spk_id_map
- with open(config_path, 'w', encoding='utf-8') as f:
- json.dump(config, f, indent=2, ensure_ascii=False)
diff --git a/spaces/digitalxingtong/Kino-Bert-VITS2/text/__init__.py b/spaces/digitalxingtong/Kino-Bert-VITS2/text/__init__.py
deleted file mode 100644
index 7566bf351ca9b95af9cdc6d729557a9da083800f..0000000000000000000000000000000000000000
--- a/spaces/digitalxingtong/Kino-Bert-VITS2/text/__init__.py
+++ /dev/null
@@ -1,28 +0,0 @@
-from text.symbols import *
-
-
-_symbol_to_id = {s: i for i, s in enumerate(symbols)}
-
-def cleaned_text_to_sequence(cleaned_text, tones, language):
-  '''Converts cleaned phoneme text, tones and a language tag to sequences of IDs.
-    Args:
-      cleaned_text: string of phoneme symbols to convert to a sequence
-      tones: list of per-phone tone indices
-      language: language code used to offset tones and assign language IDs
-    Returns:
-      Lists of phone IDs, tone IDs and language IDs
- '''
- phones = [_symbol_to_id[symbol] for symbol in cleaned_text]
- tone_start = language_tone_start_map[language]
- tones = [i + tone_start for i in tones]
- lang_id = language_id_map[language]
- lang_ids = [lang_id for i in phones]
- return phones, tones, lang_ids
-
-def get_bert(norm_text, word2ph, language):
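-    # Dispatch to the language-specific BERT feature extractor ('ZH' or 'EN') based on `language`.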
- from .chinese_bert import get_bert_feature as zh_bert
- from .english_bert_mock import get_bert_feature as en_bert
- lang_bert_func_map = {
- 'ZH': zh_bert,
- 'EN': en_bert
- }
- bert = lang_bert_func_map[language](norm_text, word2ph)
- return bert
diff --git a/spaces/digitalxingtong/Lixiang-Bert-Vits2/start.bat b/spaces/digitalxingtong/Lixiang-Bert-Vits2/start.bat
deleted file mode 100644
index 418d21233dbf720b0dd09821904d9d6a31b123a2..0000000000000000000000000000000000000000
--- a/spaces/digitalxingtong/Lixiang-Bert-Vits2/start.bat
+++ /dev/null
@@ -1,2 +0,0 @@
-set PYTHON=venv\python.exe
-start cmd /k "set PYTHON=%PYTHON%"
\ No newline at end of file
diff --git a/spaces/dineshreddy/WALT/mmdet/models/roi_heads/roi_extractors/base_roi_extractor.py b/spaces/dineshreddy/WALT/mmdet/models/roi_heads/roi_extractors/base_roi_extractor.py
deleted file mode 100644
index 847932547c6c309ae38b45dc43ac0ef8ca66d347..0000000000000000000000000000000000000000
--- a/spaces/dineshreddy/WALT/mmdet/models/roi_heads/roi_extractors/base_roi_extractor.py
+++ /dev/null
@@ -1,83 +0,0 @@
-from abc import ABCMeta, abstractmethod
-
-import torch
-import torch.nn as nn
-from mmcv import ops
-
-
-class BaseRoIExtractor(nn.Module, metaclass=ABCMeta):
- """Base class for RoI extractor.
-
- Args:
- roi_layer (dict): Specify RoI layer type and arguments.
- out_channels (int): Output channels of RoI layers.
- featmap_strides (List[int]): Strides of input feature maps.
- """
-
- def __init__(self, roi_layer, out_channels, featmap_strides):
- super(BaseRoIExtractor, self).__init__()
- self.roi_layers = self.build_roi_layers(roi_layer, featmap_strides)
- self.out_channels = out_channels
- self.featmap_strides = featmap_strides
- self.fp16_enabled = False
-
- @property
- def num_inputs(self):
- """int: Number of input feature maps."""
- return len(self.featmap_strides)
-
- def init_weights(self):
- pass
-
- def build_roi_layers(self, layer_cfg, featmap_strides):
- """Build RoI operator to extract feature from each level feature map.
-
- Args:
- layer_cfg (dict): Dictionary to construct and config RoI layer
- operation. Options are modules under ``mmcv/ops`` such as
- ``RoIAlign``.
- featmap_strides (List[int]): The stride of input feature map w.r.t
- to the original image size, which would be used to scale RoI
- coordinate (original image coordinate system) to feature
- coordinate system.
-
- Returns:
- nn.ModuleList: The RoI extractor modules for each level feature
- map.
- """
-
- cfg = layer_cfg.copy()
- layer_type = cfg.pop('type')
- assert hasattr(ops, layer_type)
- layer_cls = getattr(ops, layer_type)
- roi_layers = nn.ModuleList(
- [layer_cls(spatial_scale=1 / s, **cfg) for s in featmap_strides])
- return roi_layers
-
- def roi_rescale(self, rois, scale_factor):
- """Scale RoI coordinates by scale factor.
-
- Args:
- rois (torch.Tensor): RoI (Region of Interest), shape (n, 5)
- scale_factor (float): Scale factor that RoI will be multiplied by.
-
- Returns:
- torch.Tensor: Scaled RoI.
- """
-
- cx = (rois[:, 1] + rois[:, 3]) * 0.5
- cy = (rois[:, 2] + rois[:, 4]) * 0.5
- w = rois[:, 3] - rois[:, 1]
- h = rois[:, 4] - rois[:, 2]
- new_w = w * scale_factor
- new_h = h * scale_factor
- x1 = cx - new_w * 0.5
- x2 = cx + new_w * 0.5
- y1 = cy - new_h * 0.5
- y2 = cy + new_h * 0.5
- new_rois = torch.stack((rois[:, 0], x1, y1, x2, y2), dim=-1)
- return new_rois
-
- @abstractmethod
- def forward(self, feats, rois, roi_scale_factor=None):
- pass
diff --git a/spaces/docs-demos/dpr-question_encoder-bert-base-multilingual/README.md b/spaces/docs-demos/dpr-question_encoder-bert-base-multilingual/README.md
deleted file mode 100644
index d40498c242a3832f1eaba82499abdaadaa8cec26..0000000000000000000000000000000000000000
--- a/spaces/docs-demos/dpr-question_encoder-bert-base-multilingual/README.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: DPR
-emoji: 🌖
-colorFrom: blue
-colorTo: pink
-sdk: gradio
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio`, `streamlit`, or `static`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/docs-demos/mt5-small-finetuned-arxiv-cs-finetuned-arxiv-cs-full/README.md b/spaces/docs-demos/mt5-small-finetuned-arxiv-cs-finetuned-arxiv-cs-full/README.md
deleted file mode 100644
index d64183f306094a1b5bc02d1d408da33ea6c51a7f..0000000000000000000000000000000000000000
--- a/spaces/docs-demos/mt5-small-finetuned-arxiv-cs-finetuned-arxiv-cs-full/README.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: MT5
-emoji: 🦀
-colorFrom: blue
-colorTo: pink
-sdk: gradio
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/dongyi/MMFS/utils/face_parsing/resnet.py b/spaces/dongyi/MMFS/utils/face_parsing/resnet.py
deleted file mode 100644
index 6730d7fafab7b6cce74ca879d8d0c5a13e4cbfed..0000000000000000000000000000000000000000
--- a/spaces/dongyi/MMFS/utils/face_parsing/resnet.py
+++ /dev/null
@@ -1,97 +0,0 @@
-#!/usr/bin/python
-# -*- encoding: utf-8 -*-
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import torch.utils.model_zoo as modelzoo
-
-resnet18_url = 'https://download.pytorch.org/models/resnet18-5c106cde.pth'
-
-
-def conv3x3(in_planes, out_planes, stride=1):
- """3x3 convolution with padding"""
- return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
- padding=1, bias=False)
-
-
-class BasicBlock(nn.Module):
- def __init__(self, in_chan, out_chan, stride=1):
- super(BasicBlock, self).__init__()
- self.conv1 = conv3x3(in_chan, out_chan, stride)
- self.bn1 = nn.BatchNorm2d(out_chan)
- self.conv2 = conv3x3(out_chan, out_chan)
- self.bn2 = nn.BatchNorm2d(out_chan)
- self.relu = nn.ReLU(inplace=True)
- self.downsample = None
- if in_chan != out_chan or stride != 1:
- self.downsample = nn.Sequential(
- nn.Conv2d(in_chan, out_chan,
- kernel_size=1, stride=stride, bias=False),
- nn.BatchNorm2d(out_chan),
- )
-
- def forward(self, x):
- residual = self.conv1(x)
- residual = F.relu(self.bn1(residual))
- residual = self.conv2(residual)
- residual = self.bn2(residual)
-
- shortcut = x
- if self.downsample is not None:
- shortcut = self.downsample(x)
-
- out = shortcut + residual
- out = self.relu(out)
- return out
-
-
-def create_layer_basic(in_chan, out_chan, bnum, stride=1):
- layers = [BasicBlock(in_chan, out_chan, stride=stride)]
- for i in range(bnum-1):
- layers.append(BasicBlock(out_chan, out_chan, stride=1))
- return nn.Sequential(*layers)
-
-
-class Resnet18(nn.Module):
- def __init__(self):
- super(Resnet18, self).__init__()
- self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3,
- bias=False)
- self.bn1 = nn.BatchNorm2d(64)
- self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
- self.layer1 = create_layer_basic(64, 64, bnum=2, stride=1)
- self.layer2 = create_layer_basic(64, 128, bnum=2, stride=2)
- self.layer3 = create_layer_basic(128, 256, bnum=2, stride=2)
- self.layer4 = create_layer_basic(256, 512, bnum=2, stride=2)
- self.init_weight()
-
- def forward(self, x):
- x = self.conv1(x)
- x = F.relu(self.bn1(x))
- x = self.maxpool(x)
-
- x = self.layer1(x)
- feat8 = self.layer2(x) # 1/8
- feat16 = self.layer3(feat8) # 1/16
- feat32 = self.layer4(feat16) # 1/32
- return feat8, feat16, feat32
-
- def init_weight(self):
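-        # Load ImageNet-pretrained ResNet-18 weights from the PyTorch model zoo URL above, skipping the final fc layer.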
- state_dict = modelzoo.load_url(resnet18_url)
- self_state_dict = self.state_dict()
- for k, v in state_dict.items():
- if 'fc' in k: continue
- self_state_dict.update({k: v})
- self.load_state_dict(self_state_dict)
-
- def get_params(self):
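-        # Split parameters into weight-decay (conv/linear weights) and no-weight-decay (biases and BatchNorm) groups.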
- wd_params, nowd_params = [], []
- for _, module in self.named_modules():
- if isinstance(module, (nn.Linear, nn.Conv2d)):
- wd_params.append(module.weight)
-                if module.bias is not None:
- nowd_params.append(module.bias)
- elif isinstance(module, nn.BatchNorm2d):
- nowd_params += list(module.parameters())
- return wd_params, nowd_params
diff --git a/spaces/dorkai/text-generation-webui-main/api-example.py b/spaces/dorkai/text-generation-webui-main/api-example.py
deleted file mode 100644
index f35ea1db76f291bf1cae90a1a7801d2d19be3acc..0000000000000000000000000000000000000000
--- a/spaces/dorkai/text-generation-webui-main/api-example.py
+++ /dev/null
@@ -1,44 +0,0 @@
-import requests
-
-# For local streaming, the websockets are hosted without ssl - http://
-HOST = 'localhost:5000'
-URI = f'http://{HOST}/api/v1/generate'
-
-# For reverse-proxied streaming, the remote will likely host with ssl - https://
-# URI = 'https://your-uri-here.trycloudflare.com/api/v1/generate'
-
-
-def run(prompt):
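-    # Generation request for the /api/v1/generate endpoint defined above; adjust these parameters as needed.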
- request = {
- 'prompt': prompt,
- 'max_new_tokens': 250,
- 'do_sample': True,
- 'temperature': 1.3,
- 'top_p': 0.1,
- 'typical_p': 1,
- 'repetition_penalty': 1.18,
- 'top_k': 40,
- 'min_length': 0,
- 'no_repeat_ngram_size': 0,
- 'num_beams': 1,
- 'penalty_alpha': 0,
- 'length_penalty': 1,
- 'early_stopping': False,
- 'seed': -1,
- 'add_bos_token': True,
- 'truncation_length': 2048,
- 'ban_eos_token': False,
- 'skip_special_tokens': True,
- 'stopping_strings': []
- }
-
- response = requests.post(URI, json=request)
-
- if response.status_code == 200:
- result = response.json()['results'][0]['text']
- print(prompt + result)
-
-
-if __name__ == '__main__':
- prompt = "In order to make homemade bread, follow these steps:\n1)"
- run(prompt)
diff --git a/spaces/dorkai/text-generation-webui-main/docs/Training-LoRAs.md b/spaces/dorkai/text-generation-webui-main/docs/Training-LoRAs.md
deleted file mode 100644
index 406ec1e4a135288867dc5c876594426aa827d568..0000000000000000000000000000000000000000
--- a/spaces/dorkai/text-generation-webui-main/docs/Training-LoRAs.md
+++ /dev/null
@@ -1,167 +0,0 @@
-## Training Your Own LoRAs
-
-The WebUI seeks to make training your own LoRAs as easy as possible. It comes down to just a few simple steps:
-
-### **Step 1**: Make a plan.
-- What base model do you want to use? The LoRA you make has to be matched up to a single architecture (eg LLaMA-13B) and cannot be transferred to others (eg LLaMA-7B, StableLM, etc. would all be different). Derivatives of the same model (eg Alpaca finetune of LLaMA-13B) might be transferrable, but even then it's best to train exactly on what you plan to use.
-- What model format do you want? At time of writing, 8-bit models are most stable, and 4-bit are supported but experimental. In the near future it is likely that 4-bit will be the best option for most users.
-- What are you training it on? Do you want it to learn real information, a simple format, ...?
-
-### **Step 2**: Gather a dataset.
-- If you use a dataset similar to the [Alpaca](https://github.com/gururise/AlpacaDataCleaned/blob/main/alpaca_data_cleaned.json) format, that is natively supported by the `Formatted Dataset` input in the WebUI, with premade formatter options.
-- If you use a dataset that isn't matched to Alpaca's format, but uses the same basic JSON structure, you can make your own format file by copying `training/formats/alpaca-format.json` to a new file and [editing its content](#format-files).
-- If you can get the dataset into a simple text file, that works too! You can train using the `Raw text file` input option.
- - This means you can for example just copy/paste a chatlog/documentation page/whatever you want, shove it in a plain text file, and train on it.
-- If you use a structured dataset not in this format, you may have to find an external way to convert it - or open an issue to request native support.
-
-### **Step 3**: Do the training.
-- **3.1**: Load the WebUI, and your model.
- - Make sure you don't have any LoRAs already loaded (unless you want to train for multi-LoRA usage).
-- **3.2**: Open the `Training` tab at the top, `Train LoRA` sub-tab.
-- **3.3**: Fill in the name of the LoRA, select your dataset in the dataset options.
-- **3.4**: Select other parameters to your preference. See [parameters below](#parameters).
-- **3.5**: click `Start LoRA Training`, and wait.
-  - It can take a few hours for a large dataset, or just a few minutes if doing a small run.
- - You may want to monitor your [loss value](#loss) while it goes.
-
-### **Step 4**: Evaluate your results.
-- Load the LoRA under the Models Tab.
-- You can go test-drive it on the `Text generation` tab, or you can use the `Perplexity evaluation` sub-tab of the `Training` tab.
-- If you used the `Save every n steps` option, you can grab prior copies of the model from sub-folders within the LoRA model's folder and try them instead.
-
-### **Step 5**: Re-run if you're unhappy.
-- Make sure to unload the LoRA before training it.
-- You can simply resume a prior run - use `Copy parameters from` to select your LoRA, and edit parameters. Note that you cannot change the `Rank` of an already created LoRA.
- - If you want to resume from a checkpoint saved along the way, simply copy the contents of the checkpoint folder into the LoRA's folder.
- - (Note: `adapter_model.bin` is the important file that holds the actual LoRA content).
-  - This resets the Learning Rate and Steps back to the start. If you want to resume as if you were midway through, adjust your Learning Rate to the last reported LR in the logs and reduce your epochs.
-- Or, you can start over entirely if you prefer.
-- If your model is producing corrupted outputs, you probably need to start over and use a lower Learning Rate.
-- If your model isn't learning detailed information but you want it to, you might need to just run more epochs, or you might need a higher Rank.
-- If your model is enforcing a format you didn't want, you may need to tweak your dataset, or start over and not train as far.
-
-## Format Files
-
-If using JSON formatted datasets, they are presumed to be in the following approximate format:
-
-```json
-[
- {
- "somekey": "somevalue",
- "key2": "value2"
- },
- {
- // etc
- }
-]
-```
-
-Where the keys (eg `somekey`, `key2` above) are standardized, and relatively consistent across the dataset, and the values (eg `somevalue`, `value2`) contain the content actually intended to be trained.
-
-For Alpaca, the keys are `instruction`, `input`, and `output`, wherein `input` is sometimes blank.
-
-A simple format file for Alpaca to be used as a chat bot is:
-
-```json
-{
- "instruction,output": "User: %instruction%\nAssistant: %output%",
- "instruction,input,output": "User: %instruction%: %input%\nAssistant: %output%"
-}
-```
-
-Note that the keys (eg `instruction,output`) are a comma-separated list of dataset keys, and the values are simple strings that use those keys wrapped in `%%` markers.
-
-So for example if a dataset has `"instruction": "answer my question"`, then the format file's `User: %instruction%\n` will be automatically filled in as `User: answer my question\n`.
-
-If you have different sets of key inputs, you can make your own format file to match it. This format-file is designed to be as simple as possible to enable easy editing to match your needs.
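-
-As a rough illustration only (this is not the WebUI's actual code), the substitution described above amounts to something like the following Python sketch, where `format_spec` is a parsed format file and `row` is one dataset entry; both names are made up for the example:
-
-```python
-def apply_format(format_spec: dict, row: dict) -> str:
-    # Prefer the most specific template (the one with the most keys) whose comma-separated
-    # keys are all present and non-empty in the row, then fill in each %key% placeholder.
-    templates = sorted(format_spec.items(), key=lambda kv: -len(kv[0].split(",")))
-    for key_list, template in templates:
-        keys = key_list.split(",")
-        if all(row.get(k) for k in keys):
-            text = template
-            for k in keys:
-                text = text.replace(f"%{k}%", row[k])
-            return text
-    raise ValueError("no template matches this row")
-
-spec = {
-    "instruction,output": "User: %instruction%\nAssistant: %output%",
-    "instruction,input,output": "User: %instruction%: %input%\nAssistant: %output%",
-}
-row = {"instruction": "answer my question", "input": "", "output": "Sure."}
-print(apply_format(spec, row))  # -> User: answer my question\nAssistant: Sure.
-```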
-
-## Parameters
-
-The basic purpose and function of each parameter is documented on-page in the WebUI, so read through them in the UI to understand your options.
-
-That said, here's a guide to the most important parameter choices you should consider:
-
-### VRAM
-
-- First, you must consider your VRAM availability.
- - Generally, under default settings, VRAM usage for training with default parameters is very close to when generating text (with 1000+ tokens of context) (ie, if you can generate text, you can train LoRAs).
- - Note: worse by default in the 4-bit monkeypatch currently. Reduce `Micro Batch Size` to `1` to restore this to expectations.
- - If you have VRAM to spare, setting higher batch sizes will use more VRAM and get you better quality training in exchange.
- - If you have large data, setting a higher cutoff length may be beneficial, but will cost significant VRAM. If you can spare some, set your batch size to `1` and see how high you can push your cutoff length.
- - If you're low on VRAM, reducing batch size or cutoff length will of course improve that.
- - Don't be afraid to just try it and see what happens. If it's too much, it will just error out, and you can lower settings and try again.
-
-### Rank
-
-- Second, you want to consider the amount of learning you want.
- - For example, you may wish to just learn a dialogue format (as in the case of Alpaca) in which case setting a low `Rank` value (32 or lower) works great.
- - Or, you might be training on project documentation you want the bot to understand and be able to understand questions about, in which case the higher the rank, the better.
- - Generally, higher Rank = more precise learning = more total content learned = more VRAM usage while training.
-
-### Learning Rate and Epochs
-
-- Third, how carefully you want it to be learned.
- - In other words, how okay or not you are with the model losing unrelated understandings.
- - You can control this with 3 key settings: the Learning Rate, its scheduler, and your total epochs.
- - The learning rate controls how much change is made to the model by each token it sees.
- - It's in scientific notation normally, so for example `3e-4` means `3 * 10^-4` which is `0.0003`. The number after `e-` controls how many `0`s are in the number.
- - Higher values let training run faster, but also are more likely to corrupt prior data in the model.
- - You essentially have two variables to balance: the LR, and Epochs.
- - If you make LR higher, you can set Epochs equally lower to match. High LR + low epochs = very fast, low quality training.
- - If you make LR low, set epochs high. Low LR + high epochs = slow but high-quality training.
- - The scheduler controls change-over-time as you train - it starts high, and then goes low. This helps balance getting data in, and having decent quality, at the same time.
- - You can see graphs of the different scheduler options [in the HuggingFace docs here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_1/en/main_classes/optimizer_schedules#transformers.SchedulerType)
-
-## Loss
-
-When you're running training, the WebUI's console window will log reports that include, among other things, a numeric value named `Loss`. It will start as a high number, and gradually get lower and lower as it goes.
-
-"Loss" in the world of AI training theoretically means "how close is the model to perfect", with `0` meaning "absolutely perfect". This is calculated by measuring the difference between the model outputting exactly the text you're training it to output, and what it actually outputs.
-
-In practice, a good LLM should have a very complex variable range of ideas running in its artificial head, so a loss of `0` would indicate that the model has broken and forgotten how to think about anything other than what you trained it on.
-
-So, in effect, Loss is a balancing game: you want to get it low enough that it understands your data, but high enough that it isn't forgetting everything else. Generally, if it goes below `1.0`, it's going to start forgetting its prior memories, and you should stop training. In some cases you may prefer to take it as low as `0.5` (if you want it to be very very predictable). Different goals have different needs, so don't be afraid to experiment and see what works best for you.
-
-Note: if you see Loss start at or suddenly jump to exactly `0`, it is likely something has gone wrong in your training process (eg model corruption).
-
-## Note: 4-Bit Monkeypatch
-
-The [4-bit LoRA monkeypatch](GPTQ-models-(4-bit-mode).md#using-loras-in-4-bit-mode) works for training, but has side effects:
-- VRAM usage is higher currently. You can reduce the `Micro Batch Size` to `1` to compensate.
-- Models do funky things. LoRAs apply themselves, or refuse to apply, or spontaneously error out, or etc. It can be helpful to reload base model or restart the WebUI between training/usage to minimize chances of anything going haywire.
-- Loading or working with multiple LoRAs at the same time doesn't currently work.
-- Generally, recognize and treat the monkeypatch as the dirty temporary hack it is - it works, but isn't very stable. It will get better in time when everything is merged upstream for full official support.
-
-## Legacy notes
-
-LoRA training was contributed by [mcmonkey4eva](https://github.com/mcmonkey4eva) in PR [#570](https://github.com/oobabooga/text-generation-webui/pull/570).
-
-### Using the original alpaca-lora code
-
-Kept here for reference. The Training tab has much more features than this method.
-
-```
-conda activate textgen
-git clone https://github.com/tloen/alpaca-lora
-```
-
-Edit those two lines in `alpaca-lora/finetune.py` to use your existing model folder instead of downloading everything from decapoda:
-
-```
-model = LlamaForCausalLM.from_pretrained(
- "models/llama-7b",
- load_in_8bit=True,
- device_map="auto",
-)
-tokenizer = LlamaTokenizer.from_pretrained(
- "models/llama-7b", add_eos_token=True
-)
-```
-
-Run the script with:
-
-```
-python finetune.py
-```
-
-It just works. It runs at 22.32s/it, with 1170 iterations in total, so about 7 hours and a half for training a LoRA. RTX 3090, 18153MiB VRAM used, drawing maximum power (350W, room heater mode).
diff --git a/spaces/dragonSwing/LangChain-ChatGPT-plugins/style.css b/spaces/dragonSwing/LangChain-ChatGPT-plugins/style.css
deleted file mode 100644
index 81aa848aff7f3fc0f9989b46c220d56d686ac5c0..0000000000000000000000000000000000000000
--- a/spaces/dragonSwing/LangChain-ChatGPT-plugins/style.css
+++ /dev/null
@@ -1,11 +0,0 @@
-#col-container {max-width: 440px; margin-left: auto; margin-right: auto;}
-
-a, a:hover, a:visited {
- text-decoration-line: underline;
- font-weight: 600;
- color: #1f2937 !important;
-}
-
-.dark a, .dark a:hover, .dark a:visited {
- color: #f3f4f6 !important;
-}
diff --git a/spaces/drift-ai/emoji-predictor/Makefile b/spaces/drift-ai/emoji-predictor/Makefile
deleted file mode 100644
index 30500ef74a38a2b9f4bff78bfc53f1f5ccf70b48..0000000000000000000000000000000000000000
--- a/spaces/drift-ai/emoji-predictor/Makefile
+++ /dev/null
@@ -1,3 +0,0 @@
-install:
- poetry install
- poetry run pip list --format=freeze > requirements.txt
\ No newline at end of file
diff --git a/spaces/eson/tokenizer-arena/utils/lang_util.py b/spaces/eson/tokenizer-arena/utils/lang_util.py
deleted file mode 100644
index be3d08cc57a4d3cf870d40a98f9701afefdc226a..0000000000000000000000000000000000000000
--- a/spaces/eson/tokenizer-arena/utils/lang_util.py
+++ /dev/null
@@ -1,3 +0,0 @@
-"""
-Japanese, Korean, etc.
-"""
\ No newline at end of file
diff --git a/spaces/evaluate-metric/mase/app.py b/spaces/evaluate-metric/mase/app.py
deleted file mode 100644
index ac47c0d679868dd50c8b5476a1e53d721c429b90..0000000000000000000000000000000000000000
--- a/spaces/evaluate-metric/mase/app.py
+++ /dev/null
@@ -1,6 +0,0 @@
-import evaluate
-from evaluate.utils import launch_gradio_widget
-
-
-module = evaluate.load("mase")
-launch_gradio_widget(module)
diff --git a/spaces/falterWliame/Face_Mask_Detection/Crack KeygenMaya 2018 Download [EXCLUSIVE].md b/spaces/falterWliame/Face_Mask_Detection/Crack KeygenMaya 2018 Download [EXCLUSIVE].md
deleted file mode 100644
index 9c790a6711e9541e65ae139574182edf89e3dfed..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/Crack KeygenMaya 2018 Download [EXCLUSIVE].md
+++ /dev/null
@@ -1,11 +0,0 @@
-
-
-Dec 3, 2019 - Autodesk Maya Crack 2022 is a free 3D animation program. Photolemur 3.5 Crack + Keygen Full Version Free Download.
-
-
-
diff --git a/spaces/falterWliame/Face_Mask_Detection/Patchman Soundbank.rar.md b/spaces/falterWliame/Face_Mask_Detection/Patchman Soundbank.rar.md
deleted file mode 100644
index 511c5f5666945c66fa8690e087a220a7dab88466..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/Patchman Soundbank.rar.md
+++ /dev/null
@@ -1,18 +0,0 @@
-
-
How to Download and Install Patchman Soundbank for Akai EWI4000s
-
If you are looking for a way to enhance your Akai EWI4000s wind controller with professional quality sounds, you might want to check out the Patchman Soundbank. This soundbank contains 100 all-new, super expressive, breath controlled patches designed especially for the EWI4000s by wind controller expert Matt Traum. You will find a wide variety of synthetic and emulative sounds that will take your EWI4000s to a new level of playability and fun.
In this article, we will show you how to download and install the Patchman Soundbank for your Akai EWI4000s. You will need a computer with a MIDI interface, a USB cable, and the EWI4000s Editor software. You will also need to purchase the Patchman Soundbank from Patchman Music, which will be emailed to you in four different formats: Standard MIDIfile (.MID), Sysex (.SYX), and the EWI4000s Editor (.BNK and .SQS) formats.
-
Step 1: Download the Patchman Soundbank
-
After you purchase the Patchman Soundbank from Patchman Music, you will receive an email with a link to download a zip file containing the soundbank files. Save the zip file to your computer and unzip it to a folder of your choice. You should see four files with the extension .MID, .SYX, .BNK, and .SQS. These are the different formats of the soundbank that you can use depending on your preference.
-
Step 2: Connect your EWI4000s to your computer
-
Before you can load the Patchman Soundbank into your EWI4000s, you need to connect it to your computer using a USB cable. Make sure your EWI4000s is turned on and set to MIDI mode (press and hold SETUP until MIDI appears on the display). Plug one end of the USB cable into the USB port on the back of your EWI4000s and the other end into a free USB port on your computer. Your computer should recognize your EWI4000s as a MIDI device.
-
-
Step 3: Load the Patchman Soundbank using the EWI4000s Editor
-
The easiest way to load the Patchman Soundbank into your EWI4000s is using the EWI4000s Editor software. This software allows you to edit and manage the patches on your EWI4000s using a graphical interface. You can download the EWI4000s Editor software for free from Akai's website. Install and launch the software on your computer.
-
Once you open the EWI4000s Editor, you should see a window with two panels: one showing the patches on your computer (PC/Mac) and one showing the patches on your EWI4000s (EWI). To load the Patchman Soundbank into your EWI4000s, you need to drag and drop the .BNK file from the PC/Mac panel to the EWI panel. You can also use the File menu to open and save banks.
-
The .BNK file contains all 100 patches of the Patchman Soundbank in one bank. If you want to load individual patches instead of the whole bank, you can use the .SQS files instead. These files contain single patches that you can drag and drop from the PC/Mac panel to any slot on the EWI panel. You can also use the Edit menu to copy and paste patches between panels.
-
After you load the Patchman Soundbank into your EWI4000s, you need to save it to your internal memory. To do this, click on Write Bank or Write Single in the Tools menu. A dialog box will appear asking you to confirm that you want to overwrite your existing patches. Click Yes to proceed. The Patchman Soundbank will be saved to your EWI4000s and ready to play.
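-
If you would rather not use the EWI4000s Editor, the Sysex (.SYX) version of the bank can usually be sent with any generic sysex utility instead. Below is only a rough Python sketch using the third-party mido library (with the python-rtmidi backend installed); the port name and file name are placeholders you would need to replace with your own, and this is not an official Patchman or Akai procedure.
-
```python
import time
import mido  # pip install mido python-rtmidi

SYX_FILE = "patchman_ewi4000s_bank.syx"  # placeholder: use the .SYX file you received
print(mido.get_output_names())           # list MIDI outputs; find the one your EWI4000s is on
PORT_NAME = "USB MIDI Interface 1"       # placeholder: replace with your port's name

messages = mido.read_syx_file(SYX_FILE)  # parse the sysex dump into MIDI messages
with mido.open_output(PORT_NAME) as port:
    for msg in messages:
        port.send(msg)
        time.sleep(0.05)                 # brief pause so the EWI's receive buffer is not overrun
print("Soundbank sysex sent.")
```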
-
Step 4: Enjoy
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Download Naruto Senki Mod Legendary Shinobi War V4 APK for Android - The Ultimate Naruto Game Experience.md b/spaces/fatiXbelha/sd/Download Naruto Senki Mod Legendary Shinobi War V4 APK for Android - The Ultimate Naruto Game Experience.md
deleted file mode 100644
index 4f5e260c3505de630fc170ece40844a0446bf2d7..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Download Naruto Senki Mod Legendary Shinobi War V4 APK for Android - The Ultimate Naruto Game Experience.md
+++ /dev/null
@@ -1,97 +0,0 @@
-
-
Download Naruto Senki Mod Legendary Shinobi War V4: A Guide for Naruto Fans
-
If you are a fan of the Naruto anime series, you might have heard of Naruto Senki, a 2D action adventure game that lets you play as your favorite characters from the show. But did you know that there is a modded version of the game that adds more features and challenges? In this article, we will tell you everything you need to know about Naruto Senki Mod Legendary Shinobi War V4, how to download and install it, how to play it, and some tips and tricks to make your gaming experience more enjoyable.
-
Naruto Senki is a game developed by Zakume for Android devices. It is based on the Naruto anime series, which follows the adventures of Naruto Uzumaki, a young ninja who dreams of becoming the Hokage, the leader of his village. The game covers the first 70 episodes of the show, mainly the Prologue — Land of Waves and Chūnin Exams arcs. You can choose from many characters from the show, such as Naruto, Sasuke, Sakura, Kakashi, Gaara, Orochimaru, Zabuza, and more. Each character has their own unique techniques and skills that you can use in battle. You can also equip items such as chakra pills, healing items, and ranged weapons such as kunai and shuriken.
-
The game has various modes that you can play, such as story mode, challenge mode, quiz mode, and multiplayer mode. In story mode, you can follow the plot of the anime and complete missions. In challenge mode, you can test your skills against different enemies and bosses. In quiz mode, you can answer questions about the anime and test your knowledge. In multiplayer mode, you can play with or against other players online.
-
What is Naruto Senki Mod Legendary Shinobi War V4?
-
Naruto Senki Mod Legendary Shinobi War V4 is a modded version of the original game by Zam Zam. It adds new features and content to the game that make it more fun and exciting. Some of the new features are:
-
-
New characters such as Jiraiya, Tsunade, Itachi, Kisame, Deidara, Sasori, Pain, Konan, Madara, Obito, Minato, Kushina, Shisui, and more. You can play as these characters and use their special abilities and techniques.
-
New maps such as the Hidden Leaf Village, the Hidden Sand Village, the Akatsuki Hideout, the Valley of the End, and more. You can explore these locations and fight in different environments.
-
New menus, towers, and tiles that make the game look more appealing and realistic. You can see the changes in the graphics and the interface of the game.
-
-
Naruto Senki Mod Legendary Shinobi War V4 is not an official version of the game, so you need to download it from a reliable source. You also need a password to unlock the game and access all the features. The password is "Zam Zam".
-
How to download and install Naruto Senki Mod Legendary Shinobi War V4?
-
To download and install Naruto Senki Mod Legendary Shinobi War V4, you need to follow these steps:
-
-
Find a trustworthy website that provides the APK file of the game. You can search for it on Google or use this link: . Make sure you have enough storage space on your device before downloading.
-
Enable unknown sources in your device settings. This will allow you to install apps that are not from the Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on.
-
Install the APK file and open the game. You will see a screen that asks you to enter a password. Type "Zam Zam" and press OK.
-
Enjoy the game. You can now play Naruto Senki Mod Legendary Shinobi War V4 with all the new features and content.
-
-
How to play Naruto Senki Mod Legendary Shinobi War V4?
-
To play Naruto Senki Mod Legendary Shinobi War V4, you need to know the basics of the game. Here are some tips on how to play:
-
-
Choose your main character and start a mission. You can select from many characters from the anime, each with their own stats, skills, and items. You can also customize your character by changing their outfit, hairstyle, and accessories.
-
Defeat enemies and bosses using various techniques and items. You can use different attacks such as taijutsu, ninjutsu, genjutsu, and senjutsu. You can also use items such as chakra pills, healing items, and ranged weapons such as kunai and shuriken. To use an attack or an item, tap on the corresponding button on the screen.
-
Collect money and gold to upgrade your character and unlock new ones. You can earn money and gold by completing missions, defeating enemies, and finding hidden chests. You can use money to buy items and gold to upgrade your skills and unlock new characters.
-
Use the transformation technique to disguise yourself as another character. This is a unique feature of Naruto Senki Mod Legendary Shinobi War V4 that allows you to change your appearance and abilities temporarily. To use this technique, tap on the transformation button on the screen and select a character from the list. You can then use their skills and items for a limited time.
-
-
Tips and tricks for Naruto Senki Mod Legendary Shinobi War V4
-
To make your gaming experience more enjoyable, here are some tips and tricks for Naruto Senki Mod Legendary Shinobi War V4:
-
-
Understand the strengths and weaknesses of each character. Some characters are better at close-range combat, while others are better at long-range combat. Some characters have more chakra, while others have more health. Some characters have more speed, while others have more power. Knowing these differences will help you choose the best character for each situation.
-
Learn the best combinations of attacks and skills. Some attacks and skills work better together than others. For example, using Sasuke's Chidori after Kakashi's Lightning Blade will deal more damage than using them separately. Experiment with different combinations and find out what works best for you.
-
Use clones and full-bodied clones to distract and damage enemies. Clones are illusions that look like you but have no substance. Full-bodied clones are solid copies of you that can fight independently. You can create clones by using Naruto's Shadow Clone Technique or Itachi's Crow Clone Technique. You can create full-bodied clones by using Naruto's Multi Shadow Clone Technique or Pain's Six Paths of Pain Technique. Clones and full-bodied clones can help you confuse your enemies, avoid their attacks, or attack them from multiple directions.
-
Avoid attacks that are easy to read and counter. Some attacks are very obvious and predictable, such as Naruto's Rasengan or Gaara's Sand Coffin. These attacks can be easily dodged or blocked by your enemies, or even turned against you. For example, if you use Naruto's Rasengan against Sasuke, he can use his Sharingan to see through it and counter with his Chidori. To avoid this, you should use attacks that are more subtle and surprising, such as Naruto's Sexy Technique or Gaara's Sand Shower.
-
-
Conclusion
-
Naruto Senki Mod Legendary Shinobi War V4 is a great game for Naruto fans who want to experience the thrill of the anime in a different way. It offers a lot of features and content that make the game more fun and challenging. You can download and install the game easily by following the steps in this article. You can also play the game better by following the tips and tricks in this article. If you are a Naruto fan, you should definitely try this game and see for yourself how awesome it is.
-
-
FAQs
-
Here are some frequently asked questions about Naruto Senki Mod Legendary Shinobi War V4:
-
-
What is the size of the game?
-
The game is about 100 MB in size. You need to have enough storage space on your device to download and install it.
-
Is the game safe to download and play?
-
The game is safe to download and play as long as you get it from a reliable source. You should also scan the APK file with an antivirus app before installing it. However, since the game is not an official version of Naruto Senki, it may have some bugs or glitches that could affect your device or gameplay. You should play the game at your own risk and responsibility.
-
Can I play the game offline?
-
You can play the game offline without an internet connection. However, some features such as multiplayer mode and quiz mode require an internet connection to work.
-
Can I play the game on PC or iOS devices?
-
The game is only compatible with Android devices. You cannot play it on PC or iOS devices unless you use an emulator or a simulator. However, this may not guarantee a smooth and stable performance of the game.
-
How can I contact the developer of the game?
-
You can contact the developer of the game by visiting their Facebook page: . You can also leave a comment or a review on their YouTube channel: . You can give them feedback, suggestions, or report any issues with the game.
-
-
\ No newline at end of file
diff --git a/spaces/fb700/chatglm-fitness-RLHF/src/utils/safetensor_helper.py b/spaces/fb700/chatglm-fitness-RLHF/src/utils/safetensor_helper.py
deleted file mode 100644
index 3cdbdd21e4ed656dfe2d31a57360afb3e96480b3..0000000000000000000000000000000000000000
--- a/spaces/fb700/chatglm-fitness-RLHF/src/utils/safetensor_helper.py
+++ /dev/null
@@ -1,8 +0,0 @@
-
-
-def load_x_from_safetensor(checkpoint, key):
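-    # Collect every checkpoint entry whose name contains `key`, stripping the '<key>.' prefix from the returned dict keys.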
- x_generator = {}
- for k,v in checkpoint.items():
- if key in k:
- x_generator[k.replace(key+'.', '')] = v
- return x_generator
\ No newline at end of file
diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Free Fire MAX 2.90.1 and join millions of players in the most immersive Battle Royale game.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Free Fire MAX 2.90.1 and join millions of players in the most immersive Battle Royale game.md
deleted file mode 100644
index d9cceaaac5238b6feabd28767a68bdab64d260bd..0000000000000000000000000000000000000000
--- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Free Fire MAX 2.90.1 and join millions of players in the most immersive Battle Royale game.md
+++ /dev/null
@@ -1,121 +0,0 @@
-
-
How to Download Free Fire MAX 2.90.1 for Android
-
Free Fire MAX is a popular battle royale game that offers a premium gameplay experience with ultra HD graphics, enhanced effects, and immersive sound. If you are a fan of Free Fire and want to try out the latest version of Free Fire MAX, you are in luck. In this article, we will show you how to download and install Free Fire MAX 2.90.1 for Android devices.
-
What is Free Fire MAX?
-
Free Fire MAX is a standalone application that is designed exclusively for delivering a premium gameplay experience in a battle royale. It is developed by Garena International, the same company that created Free Fire, one of the most downloaded mobile games in the world.
Free Fire MAX is compatible with all Free Fire players via exclusive Firelink technology, which means you can play with your friends and millions of other players across different devices and platforms. You can also use your existing Free Fire account to log in to Free Fire MAX without any hassle.
-
Features of Free Fire MAX
-
Free Fire MAX has many features that make it stand out from other battle royale games. Some of them are:
-
-
Ultra HD graphics and breathtaking effects: Free Fire MAX delivers stunning visuals and realistic animations that will make you feel like you are in the middle of a real battlefield. You can enjoy the details of the environment, the weapons, the characters, and the vehicles with high-resolution textures and dynamic lighting.
-
Fast-paced, deeply immersive gameplay: Free Fire MAX offers a variety of exciting game modes that will keep you on your toes. You can choose from classic mode, clash squad mode, ranked mode, and more. You can also explore different maps that have unique terrains, weather conditions, and loot spots. The game also features a smooth and responsive control system that will let you aim, shoot, and move with ease.
-
4-man squad, with in-game voice chat: Free Fire MAX allows you to create squads of up to 4 players and communicate with them via voice chat right from the start. You can coordinate your strategies, share your loot, and support each other in combat. You can also invite your friends from Free Fire or other social media platforms to join your squad.
-
-
Requirements for Free Fire MAX
-
Free Fire MAX is a high-end game that requires a powerful device to run smoothly. According to the official Google Play Store page, these are the minimum requirements for playing Free Fire MAX on Android devices:
-
-
Android version: 4.4 or higher
-
RAM: 2 GB or higher
-
Storage space: 1.5 GB or higher
-
Internet connection: Stable and fast
-
-
If your device meets these requirements, you can proceed to download and install Free Fire MAX 2.90.1 on your Android device.
-
How to Download and Install Free Fire MAX 2.90.1
-
To download and install Free Fire MAX 2.90.1 on your Android device, you need to follow these steps:
-
Step 1: Enable Unknown Sources
-
Since Free Fire MAX 2.90.1 is not available on the Google Play Store, you need to enable the installation of apps from unknown sources on your device. To do this, go to your device settings and look for the security or privacy option. Then, find the option that says "Allow installation of apps from unknown sources" and toggle it on. You may see a warning message, but you can ignore it and proceed.
-
Step 2: Download the APK and OBB files
-
Next, you need to download the APK and OBB files of Free Fire MAX 2.90.1 from a trusted source. You can use the link below to download them from our website. The APK file is about 47 MB in size, while the OBB file is about 1.4 GB in size. Make sure you have enough storage space and a stable internet connection before downloading them.
-
Step 3: Install the APK file
-
After downloading the APK file, you need to install it on your device. To do this, locate the file in your device's file manager and tap on it. You may see a prompt asking you to confirm the installation. Tap on "Install" and wait for the process to complete.
-
Step 4: Copy the OBB file to the Android/OBB folder
-
After installing the APK file, you need to copy the OBB file to the Android/OBB folder on your device's internal storage. To do this, locate the OBB file in your device's file manager and long-press on it. Then, select "Copy" from the menu that appears. Next, navigate to the Android/OBB folder and paste the OBB file there. If you don't see an OBB folder, you can create one by tapping on the "+" icon and naming it "OBB".
-
Step 5: Launch the game and log in with your Free Fire account
-
Finally, you can launch Free Fire MAX from your app drawer and enjoy the game. You will see a splash screen with the Free Fire MAX logo and then a loading screen with some tips and tricks. After that, you will be asked to log in with your Free Fire account. You can use your existing account or create a new one if you don't have one. You can also link your account with Facebook, Google, or VK for easy access.
-
How to Play Free Fire MAX with Free Fire Players
-
One of the best features of Free Fire MAX is that it allows you to play with Free Fire players across different devices and platforms. This means you can team up with your friends who are playing Free Fire on their smartphones or tablets, or even on their PCs using emulators. How is this possible? The answer is Firelink technology.
-
What is Firelink Technology?
-
Firelink technology is a proprietary technology developed by Garena that enables cross-play and cross-progression between Free Fire MAX and Free Fire. It allows players to use their same Free Fire account to log in to both games and sync their data, such as their level, rank, inventory, friends list, etc. It also allows players to join the same lobby and match with each other regardless of which game they are playing.
-
How to Use Firelink Technology to Connect with Free Fire Players
-
To use Firelink technology to connect with Free Fire players, you need to follow these steps:
-
-
Step 1: Launch Free Fire MAX and log in with your Free Fire account.
-
Step 2: Tap on the "Friends" icon at the top right corner of the screen.
-
Step 3: Tap on the "Add Friends" icon at the bottom right corner of the screen.
-
Step 4: Enter the nickname or ID of your friend who is playing Free Fire and tap on "Search".
-
Step 5: Tap on the "Add" button next to your friend's name and wait for them to accept your request.
-
Step 6: Once they accept your request, tap on their name and then tap on "Invite" to invite them to your squad.
-
Step 7: Wait for them to join your squad and then tap on "Start" to begin the match.
-
-
You can also use voice chat or text chat to communicate with your squad members during the match. You can also see their game status, such as their health, kills, and location, on the mini-map.
-
Tips and Tricks for Playing Free Fire MAX
-
Free Fire MAX is a fun and challenging game that requires skill, strategy, and luck to win. Here are some tips and tricks that can help you improve your gameplay and increase your chances of survival:
-
Adjust the Graphics Settings According to Your Device Performance
-
Free Fire MAX has a lot of graphics options that you can customize according to your preference and device performance. You can access them by tapping on the "Settings" icon at the top right corner of the screen and then tapping on "Graphics". You can adjust the resolution, frame rate, shadow quality, anti-aliasing, texture quality, and more. You can also enable or disable features such as HDR mode, bloom effect, depth of field, etc.
-
It is recommended that you choose the graphics settings that suit your device's capabilities and ensure a smooth and stable gameplay. If you experience lag, stuttering, or overheating, you may want to lower some of the graphics settings or turn off some of the features.
-
Use the In-game Voice Chat to Communicate with Your Squad
-
Communication is key in a battle royale game like Free Fire MAX. You need to coordinate with your squad members, share information, and plan your moves. The best way to do this is by using the in-game voice chat feature that allows you to talk to your squad members in real time.
-
You can enable or disable the voice chat by tapping on the "Voice" icon at the top left corner of the screen. You can also adjust the volume and mute or unmute yourself or your squad members by tapping on their icons. You can also use the quick chat feature that lets you send predefined messages to your squad members by tapping on the "Chat" icon at the bottom left corner of the screen.
-
Explore the Different Game Modes and Maps in Free Fire MAX
-
Free Fire MAX offers a variety of game modes and maps that will keep you entertained and challenged. You can choose from classic mode, clash squad mode, ranked mode, and more. Each game mode has its own rules, objectives, and rewards. You can also explore different maps that have unique terrains, weather conditions, and loot spots. Some of the maps are Bermuda, Kalahari, Purgatory, etc.
-
It is advisable that you try out different game modes and maps to find out which ones suit your play style and preferences. You can also learn more about the map layout, the best landing spots, the hot zones, the safe zones, etc. by playing more matches and observing your surroundings.
-
Conclusion
-
Free Fire MAX is a great game for anyone who loves battle royale games with high-quality graphics and immersive gameplay. It is easy to download and install on Android devices using the APK and OBB files. It is also compatible with Free Fire players via Firelink technology that allows cross-play and cross-progression. If you follow the tips and tricks we shared in this article, you will have a better gaming experience and more fun playing Free Fire MAX.
-
FAQs
-
-
Q: Is Free Fire MAX free to play?
-
A: Yes, Free Fire MAX is free to play. However, it may contain in-app purchases that allow you to buy items such as diamonds, skins, characters, etc.
-
Q: Can I play Free Fire MAX on PC?
-
A: Yes, you can play Free Fire MAX on PC using an Android emulator such as BlueStacks or NoxPlayer. However, you may need a powerful PC to run Free Fire MAX smoothly.
-
Q: How can I update Free Fire MAX to the latest version?
-
A: You can update Free Fire MAX to the latest version by downloading and installing the latest APK and OBB files from our website or other trusted sources.
-
Q: How can I report a bug or a problem in Free Fire MAX?
-
A: You can report a bug or a problem in Free Fire MAX by tapping on the "Settings" icon at the top right corner of the screen and then tapping on "Customer Service". You can also contact Garena through their official website or social media platforms.
-
Q: How can I get more diamonds in Free Fire MAX?
-
A: You can get more diamonds in Free Fire MAX by buying them with real money through in-app purchases or by completing tasks and offers from third-party providers.
-
-
-
\ No newline at end of file
diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download New Super Mario Bros. 2 for 3DS and Citra - Free and Fast.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download New Super Mario Bros. 2 for 3DS and Citra - Free and Fast.md
deleted file mode 100644
index b220c6f4a193f98c95f6f2ec03d9ded048ee2372..0000000000000000000000000000000000000000
--- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download New Super Mario Bros. 2 for 3DS and Citra - Free and Fast.md
+++ /dev/null
@@ -1,161 +0,0 @@
-
-
Download New Super Mario Bros 2
-
If you are a fan of Mario games, you might want to download New Super Mario Bros 2, a fun and exciting platformer for the Nintendo 3DS. This game is the sequel to New Super Mario Bros, and it features more of the classic side-scrolling action that you love, with some new twists and challenges. In this article, we will tell you everything you need to know about New Super Mario Bros 2, including its features, how to download it, and some tips to make the most of it.
-
Features of New Super Mario Bros 2
-
New Super Mario Bros 2 is a game that offers a lot of variety and replay value. Here are some of the features that make it stand out:
-
Gameplay
-
The gameplay of New Super Mario Bros 2 is similar to the previous games in the series, but with some new elements. You control Mario or Luigi as they run, jump, and stomp on enemies across different levels. The goal is to reach the flagpole at the end of each level, while collecting coins, stars, and other items along the way. You can also find secret exits that lead to hidden areas and bonus stages.
-
The game has nine worlds that consist of six main worlds and three special worlds. Each world has a different theme and environment, such as grasslands, deserts, jungles, mountains, castles, and more. You will face various obstacles and enemies, such as Goombas, Koopas, Piranha Plants, Boos, Hammer Bros, and more. You will also encounter boss battles against Bowser and his minions, the Koopalings.
-
Coin Rush
-
One of the main features of New Super Mario Bros 2 is its emphasis on coin collecting. The game has many new items and mechanics that help you collect more coins than ever before. For example, there is the Gold Flower, which turns Mario into Gold Mario and allows him to shoot gold fireballs that turn enemies and blocks into coins. There is also the Gold Ring, which turns all enemies into gold versions that drop coins when defeated.
-
The game also has a special mode called Coin Rush, which challenges you to collect as many coins as possible in a series of three randomly selected levels. You have one life and a limited amount of time to complete the levels. You can also use StreetPass to share your coin records with other players and compete with them.
-
Power-ups
-
New Super Mario Bros 2 has many power-ups that can help you in your adventure. Some of them are returning from previous games, such as the Super Mushroom, which makes you bigger; the Fire Flower, which lets you throw fireballs; the Starman, which makes you invincible; and the Mini Mushroom, which makes you smaller and able to access narrow spaces.
-
Some power-ups are new or have been modified from previous games. For example, there is the Super Leaf, which gives you a raccoon tail that can be used to fly or whip enemies; the Mega Mushroom, which makes you giant and able to destroy everything in your path; and the Invincibility Leaf, which appears after you die five times in a row and gives you both invincibility and flight.
-
Worlds
-
New Super Mario Bros 2 has nine worlds that you can explore, each with its own theme and challenges. Here is a brief overview of each world:
-
World 1: A grassy world with hills, pipes, and mushrooms. The boss is Roy Koopa.
World 2: A desert world with pyramids, quicksand, and cacti. The boss is Morton Koopa Jr.
World 3: A water world with oceans, beaches, and coral reefs. The boss is Wendy O. Koopa.
World 4: A jungle world with vines, trees, and swamps. The boss is Iggy Koopa.
World 5: A mountain world with cliffs, caves, and waterfalls. The boss is Lemmy Koopa.
World 6: A lava world with volcanoes, fireballs, and lava pits. The boss is Ludwig von Koopa.
Mushroom World: A special world that can be accessed by finding the secret exit in World 1-3. It has mushroom-themed levels and the boss is Boom Boom.
Flower World: A special world that can be accessed by finding the secret exit in World 3-4. It has flower-themed levels and the boss is Pom Pom.
Star World: A special world that can be accessed by finding the secret exit in World 6-5. It has star-themed levels and the boss is Bowser Jr.
-
Downloadable content
-
New Super Mario Bros 2 also has downloadable content (DLC) that you can purchase and download from the Nintendo eShop. The DLC consists of three packs of three Coin Rush levels each, with different themes and difficulties. The packs are:
-
-
Gold Rush Pack: Easy levels that have a lot of coins and gold items.
-
Coin Challenge Pack A: Medium levels that have a high score challenge and online rankings.
-
Nerve-Wrack Pack: Hard levels that have a lot of enemies and obstacles.
-
-
You can also download free DLC packs that are released periodically by Nintendo, such as the Gold Classics Pack, which has levels inspired by classic Mario games.
-
How to download New Super Mario Bros 2
-
If you want to download New Super Mario Bros 2, you will need the following:
-
-
A Nintendo 3DS system with an internet connection.
-
A Nintendo Network ID that is linked to your system.
-
Enough space on your system memory or SD card to store the game data (about 2.9 GB).
-
Enough funds on your Nintendo eShop account to purchase the game (about $29.99).
-
-
Once you have everything ready, you can follow these steps to download the game:
-
-
Turn on your Nintendo 3DS system and tap the Nintendo eShop icon on the home menu.
-
Select New Super Mario Bros 2 from the list of games or search for it using the search function.
-
Select Download or Purchase to start the download process.
-
Wait for the download to complete. You can check the progress on the home menu or on the upper screen of your system.
-
Once the download is finished, you can start playing the game by tapping its icon on the home menu.
-
-
Tips for downloading New Super Mario Bros 2
-
To make sure you have a smooth and enjoyable experience when downloading New Super Mario Bros 2, here are some tips to keep in mind:
-
-
Make sure your system is fully charged or plugged into a power outlet before downloading the game. Downloading large files can drain the battery quickly.
-
Make sure you have a stable and fast internet connection when downloading the game. Downloading large files can take a long time or fail if your connection is weak or interrupted.
-
Make sure you have enough space on your system memory or SD card to store the game data. You can check how much space you have by going to System Settings > Data Management > Nintendo 3DS > Software. You can also delete or move data from other games or applications if you need more space.
-
If you want to download DLC packs for New Super Mario Bros 2, you will need to repeat the same steps as above, but select the DLC option instead of the game option. You can also access the DLC menu from within the game by selecting Coin Rush and then Shop.
-
If you want to share your coin records with other players via StreetPass, you will need to enable StreetPass for New Super Mario Bros 2. You can do this by going to System Settings > Data Management > StreetPass Management and selecting New Super Mario Bros 2. You can also customize your StreetPass settings from within the game by selecting Coin Rush and then Settings.
-
-
Conclusion
-
New Super Mario Bros 2 is a great game that you can download and enjoy on your Nintendo 3DS system. It has many features that make it fun and challenging, such as coin collecting, the power-ups, the nine worlds, and the DLC packs. It is easy to download and play, as long as you meet the requirements and follow the steps. It is also a game you can share and compete in with other players via StreetPass and online rankings.
-
If you are looking for a game that will keep you entertained and engaged for hours, you should download New Super Mario Bros 2 today. It is a game that will make you feel like a kid again, as you jump, run, and collect coins in the colorful and vibrant worlds of Mario. It is a game that will make you smile, laugh, and cheer as you overcome the obstacles and enemies in your way. It is a game that will make you happy, as you experience the joy and excitement of playing a classic Mario game.
-
So what are you waiting for? Download New Super Mario Bros 2 now and join Mario and Luigi in their latest adventure!
-
FAQs
-
Here are some frequently asked questions about New Super Mario Bros 2:
-
-
Q: How many coins can I collect in New Super Mario Bros 2?
-
A: There is no limit to how many coins you can collect in New Super Mario Bros 2. The game keeps track of your total coin count across all modes and saves it to your profile. You can also see how many coins you have collected in each level and world. The game also has a special goal of collecting one million coins, which unlocks a special reward.
-
Q: How do I unlock the special worlds in New Super Mario Bros 2?
-
A: To unlock the special worlds in New Super Mario Bros 2, you need to find the secret exits in some of the levels in the main worlds. The secret exits are usually hidden behind fake walls or pipes, or require a certain power-up or item to access. They lead to warp cannons that take you to the special worlds. You can tell if a level has a secret exit by looking at the map screen. If a level has two paths leading from it, it means it has a secret exit.
-
Q: How do I play with a friend in New Super Mario Bros 2?
-
A: To play with a friend in New Super Mario Bros 2, you need to have two Nintendo 3DS systems and two copies of the game. You can then use the local wireless or download play options to play together. You can choose to play cooperatively or competitively in any of the levels or modes in the game. You can also use voice chat to communicate with your friend while playing.
-
Q: How do I get more lives in New Super Mario Bros 2?
-
A: There are many ways to get more lives in New Super Mario Bros 2. Some of them are:
-
-
Collecting 100 coins gives you one extra life.
-
Collecting three star coins in a level gives you one extra life.
-
Collecting a green mushroom gives you one extra life.
-
Collecting three green mushrooms in a row gives you three extra lives.
-
Collecting three gold mushrooms in a row gives you five extra lives.
-
Finding a hidden 1-Up Toad house gives you three extra lives.
-
Finding a hidden Star Toad house gives you five extra lives.
-
Finding a hidden Moon Toad house gives you ten extra lives.
-
-
Q: How do I save my progress in New Super Mario Bros 2?
-
A: The game automatically saves your progress after completing each level or mode. You can also manually save your progress by selecting Save from the pause menu or from the map screen. You can have up to three save files for different profiles.
-
-
\ No newline at end of file
diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/cors/README.md b/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/cors/README.md
deleted file mode 100644
index 56f269ede2d151ea6bafb05b8132d29bf410f904..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/cors/README.md
+++ /dev/null
@@ -1,80 +0,0 @@
-# Installation
-> `npm install --save @types/cors`
-
-# Summary
-This package contains type definitions for cors (https://github.com/expressjs/cors/).
-
-# Details
-Files were exported from https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/cors.
-## [index.d.ts](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/cors/index.d.ts)
-````ts
-// Type definitions for cors 2.8
-// Project: https://github.com/expressjs/cors/
-// Definitions by: Alan Plum <https://github.com/pluma>
-// Gaurav Sharma <https://github.com/gtpan77>
-// Definitions: https://github.com/DefinitelyTyped/DefinitelyTyped
-// TypeScript Version: 2.3
-
-/// <reference types="node" />
-
-import { IncomingHttpHeaders } from 'http';
-
-type StaticOrigin = boolean | string | RegExp | (boolean | string | RegExp)[];
-
-type CustomOrigin = (requestOrigin: string | undefined, callback: (err: Error | null, origin?: StaticOrigin) => void) => void;
-
-declare namespace e {
- interface CorsRequest {
- method?: string | undefined;
- headers: IncomingHttpHeaders;
- }
- interface CorsOptions {
- /**
- * @default '*'
- */
- origin?: StaticOrigin | CustomOrigin | undefined;
- /**
- * @default 'GET,HEAD,PUT,PATCH,POST,DELETE'
- */
- methods?: string | string[] | undefined;
- allowedHeaders?: string | string[] | undefined;
- exposedHeaders?: string | string[] | undefined;
- credentials?: boolean | undefined;
- maxAge?: number | undefined;
- /**
- * @default false
- */
- preflightContinue?: boolean | undefined;
- /**
- * @default 204
- */
- optionsSuccessStatus?: number | undefined;
- }
- type CorsOptionsDelegate<T extends CorsRequest = CorsRequest> = (
- req: T,
- callback: (err: Error | null, options?: CorsOptions) => void,
- ) => void;
-}
-
-declare function e<T extends e.CorsRequest = e.CorsRequest>(
- options?: e.CorsOptions | e.CorsOptionsDelegate<T>,
-): (
- req: T,
- res: {
- statusCode?: number | undefined;
- setHeader(key: string, value: string): any;
- end(): any;
- },
- next: (err?: any) => any,
-) => void;
-export = e;
-
-````
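-
-For reference, a minimal usage sketch of the typed middleware with Express is shown below. It is illustrative only: it assumes `express` and `@types/express` are also installed, and the origin, route, and port values are placeholders.
-
-```ts
-import cors = require('cors');
-import express = require('express');
-
-const app = express();
-
-// Illustrative options: allow one cross-origin site and send credentials (cookies).
-const corsOptions: cors.CorsOptions = {
-    origin: 'https://example.com',
-    credentials: true,
-};
-
-app.use(cors(corsOptions));
-
-app.get('/ping', (_req, res) => {
-    res.json({ ok: true });
-});
-
-app.listen(3000);
-```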
-
-### Additional Details
- * Last updated: Mon, 05 Dec 2022 07:33:01 GMT
- * Dependencies: [@types/node](https://npmjs.com/package/@types/node)
- * Global values: none
-
-# Credits
-These definitions were written by [Alan Plum](https://github.com/pluma), and [Gaurav Sharma](https://github.com/gtpan77).
diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/ms/license.md b/spaces/fffiloni/controlnet-animation-doodle/node_modules/ms/license.md
deleted file mode 100644
index 69b61253a38926757b7de1d4df4880fc2105c2c9..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/ms/license.md
+++ /dev/null
@@ -1,21 +0,0 @@
-The MIT License (MIT)
-
-Copyright (c) 2016 Zeit, Inc.
-
-Permission is hereby granted, free of charge, to any person obtaining a copy
-of this software and associated documentation files (the "Software"), to deal
-in the Software without restriction, including without limitation the rights
-to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-copies of the Software, and to permit persons to whom the Software is
-furnished to do so, subject to the following conditions:
-
-The above copyright notice and this permission notice shall be included in all
-copies or substantial portions of the Software.
-
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-SOFTWARE.
diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/qs/CHANGELOG.md b/spaces/fffiloni/controlnet-animation-doodle/node_modules/qs/CHANGELOG.md
deleted file mode 100644
index 37b1d3f04e97c31a1066f85ec4873080841e9781..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/qs/CHANGELOG.md
+++ /dev/null
@@ -1,546 +0,0 @@
-## **6.11.0**
-- [New] [Fix] `stringify`: revert 0e903c0; add `commaRoundTrip` option (#442)
-- [readme] fix version badge
-
-## **6.10.5**
-- [Fix] `stringify`: with `arrayFormat: comma`, properly include an explicit `[]` on a single-item array (#434)
-
-## **6.10.4**
-- [Fix] `stringify`: with `arrayFormat: comma`, include an explicit `[]` on a single-item array (#441)
-- [meta] use `npmignore` to autogenerate an npmignore file
-- [Dev Deps] update `eslint`, `@ljharb/eslint-config`, `aud`, `has-symbol`, `object-inspect`, `tape`
-
-## **6.10.3**
-- [Fix] `parse`: ignore `__proto__` keys (#428)
-- [Robustness] `stringify`: avoid relying on a global `undefined` (#427)
-- [actions] reuse common workflows
-- [Dev Deps] update `eslint`, `@ljharb/eslint-config`, `object-inspect`, `tape`
-
-## **6.10.2**
-- [Fix] `stringify`: actually fix cyclic references (#426)
-- [Fix] `stringify`: avoid encoding arrayformat comma when `encodeValuesOnly = true` (#424)
-- [readme] remove travis badge; add github actions/codecov badges; update URLs
-- [Docs] add note and links for coercing primitive values (#408)
-- [actions] update codecov uploader
-- [actions] update workflows
-- [Tests] clean up stringify tests slightly
-- [Dev Deps] update `eslint`, `@ljharb/eslint-config`, `aud`, `object-inspect`, `safe-publish-latest`, `tape`
-
-## **6.10.1**
-- [Fix] `stringify`: avoid exception on repeated object values (#402)
-
-## **6.10.0**
-- [New] `stringify`: throw on cycles, instead of an infinite loop (#395, #394, #393)
-- [New] `parse`: add `allowSparse` option for collapsing arrays with missing indices (#312)
-- [meta] fix README.md (#399)
-- [meta] only run `npm run dist` in publish, not install
-- [Dev Deps] update `eslint`, `@ljharb/eslint-config`, `aud`, `has-symbols`, `tape`
-- [Tests] fix tests on node v0.6
-- [Tests] use `ljharb/actions/node/install` instead of `ljharb/actions/node/run`
-- [Tests] Revert "[meta] ignore eclint transitive audit warning"
-
-## **6.9.7**
-- [Fix] `parse`: ignore `__proto__` keys (#428)
-- [Fix] `stringify`: avoid encoding arrayformat comma when `encodeValuesOnly = true` (#424)
-- [Robustness] `stringify`: avoid relying on a global `undefined` (#427)
-- [readme] remove travis badge; add github actions/codecov badges; update URLs
-- [Docs] add note and links for coercing primitive values (#408)
-- [Tests] clean up stringify tests slightly
-- [meta] fix README.md (#399)
-- Revert "[meta] ignore eclint transitive audit warning"
-- [actions] backport actions from main
-- [Dev Deps] backport updates from main
-
-## **6.9.6**
-- [Fix] restore `dist` dir; mistakenly removed in d4f6c32
-
-## **6.9.5**
-- [Fix] `stringify`: do not encode parens for RFC1738
-- [Fix] `stringify`: fix arrayFormat comma with empty array/objects (#350)
-- [Refactor] `format`: remove `util.assign` call
-- [meta] add "Allow Edits" workflow; update rebase workflow
-- [actions] switch Automatic Rebase workflow to `pull_request_target` event
-- [Tests] `stringify`: add tests for #378
-- [Tests] migrate tests to Github Actions
-- [Tests] run `nyc` on all tests; use `tape` runner
-- [Dev Deps] update `eslint`, `@ljharb/eslint-config`, `browserify`, `mkdirp`, `object-inspect`, `tape`; add `aud`
-
-## **6.9.4**
-- [Fix] `stringify`: when `arrayFormat` is `comma`, respect `serializeDate` (#364)
-- [Refactor] `stringify`: reduce branching (part of #350)
-- [Refactor] move `maybeMap` to `utils`
-- [Dev Deps] update `browserify`, `tape`
-
-## **6.9.3**
-- [Fix] proper comma parsing of URL-encoded commas (#361)
-- [Fix] parses comma delimited array while having percent-encoded comma treated as normal text (#336)
-
-## **6.9.2**
-- [Fix] `parse`: Fix parsing array from object with `comma` true (#359)
-- [Fix] `parse`: throw a TypeError instead of an Error for bad charset (#349)
-- [meta] ignore eclint transitive audit warning
-- [meta] fix indentation in package.json
-- [meta] add tidelift marketing copy
-- [Dev Deps] update `eslint`, `@ljharb/eslint-config`, `object-inspect`, `has-symbols`, `tape`, `mkdirp`, `iconv-lite`
-- [actions] add automatic rebasing / merge commit blocking
-
-## **6.9.1**
-- [Fix] `parse`: with comma true, handle field that holds an array of arrays (#335)
-- [Fix] `parse`: with comma true, do not split non-string values (#334)
-- [meta] add `funding` field
-- [Dev Deps] update `eslint`, `@ljharb/eslint-config`
-- [Tests] use shared travis-ci config
-
-## **6.9.0**
-- [New] `parse`/`stringify`: Pass extra key/value argument to `decoder` (#333)
-- [Dev Deps] update `eslint`, `@ljharb/eslint-config`, `evalmd`
-- [Tests] `parse`: add passing `arrayFormat` tests
-- [Tests] add `posttest` using `npx aud` to run `npm audit` without a lockfile
-- [Tests] up to `node` `v12.10`, `v11.15`, `v10.16`, `v8.16`
-- [Tests] `Buffer.from` in node v5.0-v5.9 and v4.0-v4.4 requires a TypedArray
-
-## **6.8.3**
-- [Fix] `parse`: ignore `__proto__` keys (#428)
-- [Robustness] `stringify`: avoid relying on a global `undefined` (#427)
-- [Fix] `stringify`: avoid encoding arrayformat comma when `encodeValuesOnly = true` (#424)
-- [readme] remove travis badge; add github actions/codecov badges; update URLs
-- [Tests] clean up stringify tests slightly
-- [Docs] add note and links for coercing primitive values (#408)
-- [meta] fix README.md (#399)
-- [actions] backport actions from main
-- [Dev Deps] backport updates from main
-- [Refactor] `stringify`: reduce branching
-- [meta] do not publish workflow files
-
-## **6.8.2**
-- [Fix] proper comma parsing of URL-encoded commas (#361)
-- [Fix] parses comma delimited array while having percent-encoded comma treated as normal text (#336)
-
-## **6.8.1**
-- [Fix] `parse`: Fix parsing array from object with `comma` true (#359)
-- [Fix] `parse`: throw a TypeError instead of an Error for bad charset (#349)
-- [Fix] `parse`: with comma true, handle field that holds an array of arrays (#335)
-- [fix] `parse`: with comma true, do not split non-string values (#334)
-- [meta] add tidelift marketing copy
-- [meta] add `funding` field
-- [Dev Deps] update `eslint`, `@ljharb/eslint-config`, `tape`, `safe-publish-latest`, `evalmd`, `has-symbols`, `iconv-lite`, `mkdirp`, `object-inspect`
-- [Tests] `parse`: add passing `arrayFormat` tests
-- [Tests] use shared travis-ci configs
-- [Tests] `Buffer.from` in node v5.0-v5.9 and v4.0-v4.4 requires a TypedArray
-- [actions] add automatic rebasing / merge commit blocking
-
-## **6.8.0**
-- [New] add `depth=false` to preserve the original key; [Fix] `depth=0` should preserve the original key (#326)
-- [New] [Fix] stringify symbols and bigints
-- [Fix] ensure node 0.12 can stringify Symbols
-- [Fix] fix for an impossible situation: when the formatter is called with a non-string value
-- [Refactor] `formats`: tiny bit of cleanup.
-- [Dev Deps] update `eslint`, `@ljharb/eslint-config`, `browserify`, `safe-publish-latest`, `iconv-lite`, `tape`
-- [Tests] add tests for `depth=0` and `depth=false` behavior, both current and intuitive/intended (#326)
-- [Tests] use `eclint` instead of `editorconfig-tools`
-- [docs] readme: add security note
-- [meta] add github sponsorship
-- [meta] add FUNDING.yml
-- [meta] Clean up license text so it’s properly detected as BSD-3-Clause
-
-## **6.7.3**
-- [Fix] `parse`: ignore `__proto__` keys (#428)
-- [Fix] `stringify`: avoid encoding arrayformat comma when `encodeValuesOnly = true` (#424)
-- [Robustness] `stringify`: avoid relying on a global `undefined` (#427)
-- [readme] remove travis badge; add github actions/codecov badges; update URLs
-- [Docs] add note and links for coercing primitive values (#408)
-- [meta] fix README.md (#399)
-- [meta] do not publish workflow files
-- [actions] backport actions from main
-- [Dev Deps] backport updates from main
-- [Tests] use `nyc` for coverage
-- [Tests] clean up stringify tests slightly
-
-## **6.7.2**
-- [Fix] proper comma parsing of URL-encoded commas (#361)
-- [Fix] parses comma delimited array while having percent-encoded comma treated as normal text (#336)
-
-## **6.7.1**
-- [Fix] `parse`: Fix parsing array from object with `comma` true (#359)
-- [Fix] `parse`: with comma true, handle field that holds an array of arrays (#335)
-- [fix] `parse`: with comma true, do not split non-string values (#334)
-- [Fix] `parse`: throw a TypeError instead of an Error for bad charset (#349)
-- [Fix] fix for an impossible situation: when the formatter is called with a non-string value
-- [Refactor] `formats`: tiny bit of cleanup.
-- readme: add security note
-- [meta] add tidelift marketing copy
-- [meta] add `funding` field
-- [meta] add FUNDING.yml
-- [meta] Clean up license text so it’s properly detected as BSD-3-Clause
-- [Dev Deps] update `eslint`, `@ljharb/eslint-config`, `tape`, `safe-publish-latest`, `evalmd`, `iconv-lite`, `mkdirp`, `object-inspect`, `browserify`
-- [Tests] `parse`: add passing `arrayFormat` tests
-- [Tests] use shared travis-ci configs
-- [Tests] `Buffer.from` in node v5.0-v5.9 and v4.0-v4.4 requires a TypedArray
-- [Tests] add tests for `depth=0` and `depth=false` behavior, both current and intuitive/intended
-- [Tests] use `eclint` instead of `editorconfig-tools`
-- [actions] add automatic rebasing / merge commit blocking
-
-## **6.7.0**
-- [New] `stringify`/`parse`: add `comma` as an `arrayFormat` option (#276, #219)
-- [Fix] correctly parse nested arrays (#212)
-- [Fix] `utils.merge`: avoid a crash with a null target and a truthy non-array source, also with an array source
-- [Robustness] `stringify`: cache `Object.prototype.hasOwnProperty`
-- [Refactor] `utils`: `isBuffer`: small tweak; add tests
-- [Refactor] use cached `Array.isArray`
-- [Refactor] `parse`/`stringify`: make a function to normalize the options
-- [Refactor] `utils`: reduce observable [[Get]]s
-- [Refactor] `stringify`/`utils`: cache `Array.isArray`
-- [Tests] always use `String(x)` over `x.toString()`
-- [Tests] fix Buffer tests to work in node < 4.5 and node < 5.10
-- [Tests] temporarily allow coverage to fail
-
-## **6.6.1**
-- [Fix] `parse`: ignore `__proto__` keys (#428)
-- [Fix] fix for an impossible situation: when the formatter is called with a non-string value
-- [Fix] `utils.merge`: avoid a crash with a null target and an array source
-- [Fix] `utils.merge`: avoid a crash with a null target and a truthy non-array source
-- [Fix] correctly parse nested arrays
-- [Robustness] `stringify`: avoid relying on a global `undefined` (#427)
-- [Robustness] `stringify`: cache `Object.prototype.hasOwnProperty`
-- [Refactor] `formats`: tiny bit of cleanup.
-- [Refactor] `utils`: `isBuffer`: small tweak; add tests
-- [Refactor]: `stringify`/`utils`: cache `Array.isArray`
-- [Refactor] `utils`: reduce observable [[Get]]s
-- [Refactor] use cached `Array.isArray`
-- [Refactor] `parse`/`stringify`: make a function to normalize the options
-- [readme] remove travis badge; add github actions/codecov badges; update URLs
-- [Docs] Clarify the need for "arrayLimit" option
-- [meta] fix README.md (#399)
-- [meta] do not publish workflow files
-- [meta] Clean up license text so it’s properly detected as BSD-3-Clause
-- [meta] add FUNDING.yml
-- [meta] Fixes typo in CHANGELOG.md
-- [actions] backport actions from main
-- [Tests] fix Buffer tests to work in node < 4.5 and node < 5.10
-- [Tests] always use `String(x)` over `x.toString()`
-- [Dev Deps] backport from main
-
-## **6.6.0**
-- [New] Add support for iso-8859-1, utf8 "sentinel" and numeric entities (#268)
-- [New] move two-value combine to a `utils` function (#189)
-- [Fix] `stringify`: fix a crash with `strictNullHandling` and a custom `filter`/`serializeDate` (#279)
-- [Fix] when `parseArrays` is false, properly handle keys ending in `[]` (#260)
-- [Fix] `stringify`: do not crash in an obscure combo of `interpretNumericEntities`, a bad custom `decoder`, & `iso-8859-1`
-- [Fix] `utils`: `merge`: fix crash when `source` is a truthy primitive & no options are provided
-- [refactor] `stringify`: Avoid arr = arr.concat(...), push to the existing instance (#269)
-- [Refactor] `parse`: only need to reassign the var once
-- [Refactor] `parse`/`stringify`: clean up `charset` options checking; fix defaults
-- [Refactor] add missing defaults
-- [Refactor] `parse`: one less `concat` call
-- [Refactor] `utils`: `compactQueue`: make it explicitly side-effecting
-- [Dev Deps] update `browserify`, `eslint`, `@ljharb/eslint-config`, `iconv-lite`, `safe-publish-latest`, `tape`
-- [Tests] up to `node` `v10.10`, `v9.11`, `v8.12`, `v6.14`, `v4.9`; pin included builds to LTS
-
-## **6.5.3**
-- [Fix] `parse`: ignore `__proto__` keys (#428)
-- [Fix] `utils.merge`: avoid a crash with a null target and a truthy non-array source
-- [Fix] correctly parse nested arrays
-- [Fix] `stringify`: fix a crash with `strictNullHandling` and a custom `filter`/`serializeDate` (#279)
-- [Fix] `utils`: `merge`: fix crash when `source` is a truthy primitive & no options are provided
-- [Fix] when `parseArrays` is false, properly handle keys ending in `[]`
-- [Fix] fix for an impossible situation: when the formatter is called with a non-string value
-- [Fix] `utils.merge`: avoid a crash with a null target and an array source
-- [Refactor] `utils`: reduce observable [[Get]]s
-- [Refactor] use cached `Array.isArray`
-- [Refactor] `stringify`: Avoid arr = arr.concat(...), push to the existing instance (#269)
-- [Refactor] `parse`: only need to reassign the var once
-- [Robustness] `stringify`: avoid relying on a global `undefined` (#427)
-- [readme] remove travis badge; add github actions/codecov badges; update URLs
-- [Docs] Clean up license text so it’s properly detected as BSD-3-Clause
-- [Docs] Clarify the need for "arrayLimit" option
-- [meta] fix README.md (#399)
-- [meta] add FUNDING.yml
-- [actions] backport actions from main
-- [Tests] always use `String(x)` over `x.toString()`
-- [Tests] remove nonexistent tape option
-- [Dev Deps] backport from main
-
-## **6.5.2**
-- [Fix] use `safer-buffer` instead of `Buffer` constructor
-- [Refactor] utils: `module.exports` one thing, instead of mutating `exports` (#230)
-- [Dev Deps] update `browserify`, `eslint`, `iconv-lite`, `safer-buffer`, `tape`, `browserify`
-
-## **6.5.1**
-- [Fix] Fix parsing & compacting very deep objects (#224)
-- [Refactor] name utils functions
-- [Dev Deps] update `eslint`, `@ljharb/eslint-config`, `tape`
-- [Tests] up to `node` `v8.4`; use `nvm install-latest-npm` so newer npm doesn’t break older node
-- [Tests] Use precise dist for Node.js 0.6 runtime (#225)
-- [Tests] make 0.6 required, now that it’s passing
-- [Tests] on `node` `v8.2`; fix npm on node 0.6
-
-## **6.5.0**
-- [New] add `utils.assign`
-- [New] pass default encoder/decoder to custom encoder/decoder functions (#206)
-- [New] `parse`/`stringify`: add `ignoreQueryPrefix`/`addQueryPrefix` options, respectively (#213)
-- [Fix] Handle stringifying empty objects with addQueryPrefix (#217)
-- [Fix] do not mutate `options` argument (#207)
-- [Refactor] `parse`: cache index to reuse in else statement (#182)
-- [Docs] add various badges to readme (#208)
-- [Dev Deps] update `eslint`, `browserify`, `iconv-lite`, `tape`
-- [Tests] up to `node` `v8.1`, `v7.10`, `v6.11`; npm v4.6 breaks on node < v1; npm v5+ breaks on node < v4
-- [Tests] add `editorconfig-tools`
-
-## **6.4.1**
-- [Fix] `parse`: ignore `__proto__` keys (#428)
-- [Fix] fix for an impossible situation: when the formatter is called with a non-string value
-- [Fix] use `safer-buffer` instead of `Buffer` constructor
-- [Fix] `utils.merge`: avoid a crash with a null target and an array source
-- [Fix] `utils.merge`: avoid a crash with a null target and a truthy non-array source
-- [Fix] `stringify`: fix a crash with `strictNullHandling` and a custom `filter`/`serializeDate` (#279)
-- [Fix] `utils`: `merge`: fix crash when `source` is a truthy primitive & no options are provided
-- [Fix] when `parseArrays` is false, properly handle keys ending in `[]`
-- [Robustness] `stringify`: avoid relying on a global `undefined` (#427)
-- [Refactor] use cached `Array.isArray`
-- [Refactor] `stringify`: Avoid arr = arr.concat(...), push to the existing instance (#269)
-- [readme] remove travis badge; add github actions/codecov badges; update URLs
-- [Docs] Clarify the need for "arrayLimit" option
-- [meta] fix README.md (#399)
-- [meta] Clean up license text so it’s properly detected as BSD-3-Clause
-- [meta] add FUNDING.yml
-- [actions] backport actions from main
-- [Tests] remove nonexistent tape option
-- [Dev Deps] backport from main
-
-## **6.4.0**
-- [New] `qs.stringify`: add `encodeValuesOnly` option
-- [Fix] follow `allowPrototypes` option during merge (#201, #201)
-- [Fix] support keys starting with brackets (#202, #200)
-- [Fix] chmod a-x
-- [Dev Deps] update `eslint`
-- [Tests] up to `node` `v7.7`, `v6.10`,` v4.8`; disable osx builds since they block linux builds
-- [eslint] reduce warnings
-
-## **6.3.3**
-- [Fix] `parse`: ignore `__proto__` keys (#428)
-- [Fix] fix for an impossible situation: when the formatter is called with a non-string value
-- [Fix] `utils.merge`: avoid a crash with a null target and an array source
-- [Fix] `utils.merge`: avoid a crash with a null target and a truthy non-array source
-- [Fix] `stringify`: fix a crash with `strictNullHandling` and a custom `filter`/`serializeDate` (#279)
-- [Fix] `utils`: `merge`: fix crash when `source` is a truthy primitive & no options are provided
-- [Fix] when `parseArrays` is false, properly handle keys ending in `[]`
-- [Robustness] `stringify`: avoid relying on a global `undefined` (#427)
-- [Refactor] use cached `Array.isArray`
-- [Refactor] `stringify`: Avoid arr = arr.concat(...), push to the existing instance (#269)
-- [Docs] Clarify the need for "arrayLimit" option
-- [meta] fix README.md (#399)
-- [meta] Clean up license text so it’s properly detected as BSD-3-Clause
-- [meta] add FUNDING.yml
-- [actions] backport actions from main
-- [Tests] use `safer-buffer` instead of `Buffer` constructor
-- [Tests] remove nonexistent tape option
-- [Dev Deps] backport from main
-
-## **6.3.2**
-- [Fix] follow `allowPrototypes` option during merge (#201, #200)
-- [Dev Deps] update `eslint`
-- [Fix] chmod a-x
-- [Fix] support keys starting with brackets (#202, #200)
-- [Tests] up to `node` `v7.7`, `v6.10`,` v4.8`; disable osx builds since they block linux builds
-
-## **6.3.1**
-- [Fix] ensure that `allowPrototypes: false` does not ever shadow Object.prototype properties (thanks, @snyk!)
-- [Dev Deps] update `eslint`, `@ljharb/eslint-config`, `browserify`, `iconv-lite`, `qs-iconv`, `tape`
-- [Tests] on all node minors; improve test matrix
-- [Docs] document stringify option `allowDots` (#195)
-- [Docs] add empty object and array values example (#195)
-- [Docs] Fix minor inconsistency/typo (#192)
-- [Docs] document stringify option `sort` (#191)
-- [Refactor] `stringify`: throw faster with an invalid encoder
-- [Refactor] remove unnecessary escapes (#184)
-- Remove contributing.md, since `qs` is no longer part of `hapi` (#183)
-
-## **6.3.0**
-- [New] Add support for RFC 1738 (#174, #173)
-- [New] `stringify`: Add `serializeDate` option to customize Date serialization (#159)
-- [Fix] ensure `utils.merge` handles merging two arrays
-- [Refactor] only constructors should be capitalized
-- [Refactor] capitalized var names are for constructors only
-- [Refactor] avoid using a sparse array
-- [Robustness] `formats`: cache `String#replace`
-- [Dev Deps] update `browserify`, `eslint`, `@ljharb/eslint-config`; add `safe-publish-latest`
-- [Tests] up to `node` `v6.8`, `v4.6`; improve test matrix
-- [Tests] flesh out arrayLimit/arrayFormat tests (#107)
-- [Tests] skip Object.create tests when null objects are not available
-- [Tests] Turn on eslint for test files (#175)
-
-## **6.2.4**
-- [Fix] `parse`: ignore `__proto__` keys (#428)
-- [Fix] `utils.merge`: avoid a crash with a null target and an array source
-- [Fix] `utils.merge`: avoid a crash with a null target and a truthy non-array source
-- [Fix] `utils`: `merge`: fix crash when `source` is a truthy primitive & no options are provided
-- [Fix] when `parseArrays` is false, properly handle keys ending in `[]`
-- [Robustness] `stringify`: avoid relying on a global `undefined` (#427)
-- [Refactor] use cached `Array.isArray`
-- [Docs] Clarify the need for "arrayLimit" option
-- [meta] fix README.md (#399)
-- [meta] Clean up license text so it’s properly detected as BSD-3-Clause
-- [meta] add FUNDING.yml
-- [actions] backport actions from main
-- [Tests] use `safer-buffer` instead of `Buffer` constructor
-- [Tests] remove nonexistent tape option
-- [Dev Deps] backport from main
-
-## **6.2.3**
-- [Fix] follow `allowPrototypes` option during merge (#201, #200)
-- [Fix] chmod a-x
-- [Fix] support keys starting with brackets (#202, #200)
-- [Tests] up to `node` `v7.7`, `v6.10`,` v4.8`; disable osx builds since they block linux builds
-
-## **6.2.2**
-- [Fix] ensure that `allowPrototypes: false` does not ever shadow Object.prototype properties
-
-## **6.2.1**
-- [Fix] ensure `key[]=x&key[]&key[]=y` results in 3, not 2, values
-- [Refactor] Be explicit and use `Object.prototype.hasOwnProperty.call`
-- [Tests] remove `parallelshell` since it does not reliably report failures
-- [Tests] up to `node` `v6.3`, `v5.12`
-- [Dev Deps] update `tape`, `eslint`, `@ljharb/eslint-config`, `qs-iconv`
-
-## [**6.2.0**](https://github.com/ljharb/qs/issues?milestone=36&state=closed)
-- [New] pass Buffers to the encoder/decoder directly (#161)
-- [New] add "encoder" and "decoder" options, for custom param encoding/decoding (#160)
-- [Fix] fix compacting of nested sparse arrays (#150)
-
-## **6.1.2**
-- [Fix] follow `allowPrototypes` option during merge (#201, #200)
-- [Fix] chmod a-x
-- [Fix] support keys starting with brackets (#202, #200)
-- [Tests] up to `node` `v7.7`, `v6.10`,` v4.8`; disable osx builds since they block linux builds
-
-## **6.1.1**
-- [Fix] ensure that `allowPrototypes: false` does not ever shadow Object.prototype properties
-
-## [**6.1.0**](https://github.com/ljharb/qs/issues?milestone=35&state=closed)
-- [New] allowDots option for `stringify` (#151)
-- [Fix] "sort" option should work at a depth of 3 or more (#151)
-- [Fix] Restore `dist` directory; will be removed in v7 (#148)
-
-## **6.0.4**
-- [Fix] follow `allowPrototypes` option during merge (#201, #200)
-- [Fix] chmod a-x
-- [Fix] support keys starting with brackets (#202, #200)
-- [Tests] up to `node` `v7.7`, `v6.10`,` v4.8`; disable osx builds since they block linux builds
-
-## **6.0.3**
-- [Fix] ensure that `allowPrototypes: false` does not ever shadow Object.prototype properties
-- [Fix] Restore `dist` directory; will be removed in v7 (#148)
-
-## [**6.0.2**](https://github.com/ljharb/qs/issues?milestone=33&state=closed)
-- Revert ES6 requirement and restore support for node down to v0.8.
-
-## [**6.0.1**](https://github.com/ljharb/qs/issues?milestone=32&state=closed)
-- [**#127**](https://github.com/ljharb/qs/pull/127) Fix engines definition in package.json
-
-## [**6.0.0**](https://github.com/ljharb/qs/issues?milestone=31&state=closed)
-- [**#124**](https://github.com/ljharb/qs/issues/124) Use ES6 and drop support for node < v4
-
-## **5.2.1**
-- [Fix] ensure `key[]=x&key[]&key[]=y` results in 3, not 2, values
-
-## [**5.2.0**](https://github.com/ljharb/qs/issues?milestone=30&state=closed)
-- [**#64**](https://github.com/ljharb/qs/issues/64) Add option to sort object keys in the query string
-
-## [**5.1.0**](https://github.com/ljharb/qs/issues?milestone=29&state=closed)
-- [**#117**](https://github.com/ljharb/qs/issues/117) make URI encoding stringified results optional
-- [**#106**](https://github.com/ljharb/qs/issues/106) Add flag `skipNulls` to optionally skip null values in stringify
-
-## [**5.0.0**](https://github.com/ljharb/qs/issues?milestone=28&state=closed)
-- [**#114**](https://github.com/ljharb/qs/issues/114) default allowDots to false
-- [**#100**](https://github.com/ljharb/qs/issues/100) include dist to npm
-
-## [**4.0.0**](https://github.com/ljharb/qs/issues?milestone=26&state=closed)
-- [**#98**](https://github.com/ljharb/qs/issues/98) make returning plain objects and allowing prototype overwriting properties optional
-
-## [**3.1.0**](https://github.com/ljharb/qs/issues?milestone=24&state=closed)
-- [**#89**](https://github.com/ljharb/qs/issues/89) Add option to disable "Transform dot notation to bracket notation"
-
-## [**3.0.0**](https://github.com/ljharb/qs/issues?milestone=23&state=closed)
-- [**#80**](https://github.com/ljharb/qs/issues/80) qs.parse silently drops properties
-- [**#77**](https://github.com/ljharb/qs/issues/77) Perf boost
-- [**#60**](https://github.com/ljharb/qs/issues/60) Add explicit option to disable array parsing
-- [**#74**](https://github.com/ljharb/qs/issues/74) Bad parse when turning array into object
-- [**#81**](https://github.com/ljharb/qs/issues/81) Add a `filter` option
-- [**#68**](https://github.com/ljharb/qs/issues/68) Fixed issue with recursion and passing strings into objects.
-- [**#66**](https://github.com/ljharb/qs/issues/66) Add mixed array and object dot notation support Closes: #47
-- [**#76**](https://github.com/ljharb/qs/issues/76) RFC 3986
-- [**#85**](https://github.com/ljharb/qs/issues/85) No equal sign
-- [**#84**](https://github.com/ljharb/qs/issues/84) update license attribute
-
-## [**2.4.1**](https://github.com/ljharb/qs/issues?milestone=20&state=closed)
-- [**#73**](https://github.com/ljharb/qs/issues/73) Property 'hasOwnProperty' of object #