How to Download Vray Sketchup 2022 Crackeado for Free
-
Vray Sketchup 2022 is powerful rendering software that can create realistic and stunning images from 3D models. It is widely used by architects, designers, and artists for various projects. However, Vray Sketchup 2022 is not cheap software, and it requires a license to use. If you want to download Vray Sketchup 2022 crackeado for free, you might be tempted to look for online sources that offer cracked versions of the software. But is it worth it?
In this article, we will explain why you should avoid downloading Vray Sketchup 2022 crackeado for free, and what the risks and consequences of doing so are. We will also provide some alternatives that can help you use Vray Sketchup 2022 legally and safely.
-
Why You Should Not Download Vray Sketchup 2022 Crackeado for Free
-
Downloading Vray Sketchup 2022 crackeado for free might seem like a good idea at first, but it comes with many drawbacks and dangers. Here are some of the reasons why you should not download Vray Sketchup 2022 crackeado for free:
-
-
It is illegal. Downloading Vray Sketchup 2022 crackeado for free is a violation of the software's terms of service and intellectual property rights. You are essentially stealing the software from the developers and distributors, who have invested time and money to create and maintain it. If you are caught downloading or using Vray Sketchup 2022 crackeado for free, you could face legal actions, fines, or even jail time.
-
It is unsafe. Downloading Vray Sketchup 2022 crackeado for free from unknown or untrusted sources can expose your computer to viruses, malware, spyware, or ransomware. These malicious programs can damage your system, steal your personal information, or lock your files until you pay a ransom. You could also lose your data or compromise your privacy and security.
-
It is unreliable. Downloading Vray Sketchup 2022 crackeado for free can result in poor performance. The cracked version might not work properly, crash frequently, or have missing or corrupted features. You might also experience compatibility issues with other software or hardware, and you will miss out on the updates, bug fixes, and new features that the official version offers.
-
It is unethical. Downloading Vray Sketchup 2022 crackeado for free is unfair to the creators and users of the software. You are depriving them of their rightful income and recognition for their work, disrespecting their efforts and skills, and harming the software industry and the quality of the products it produces.
-
-
What Are Some Alternatives to Downloading Vray Sketchup 2022 Crackeado for Free
-
If you want to use Vray Sketchup 2022 legally and safely, you should avoid downloading Vray Sketchup 2022 crackeado for free. Instead, you should consider some of these alternatives:
-
-
Buy a license. The best way to use Vray Sketchup 2022 is to purchase a license from the official website or an authorized reseller. This way, you can enjoy all the benefits and features of the software without any risks or limitations. You can also get technical support and customer service if you encounter any problems.
-
Use a trial version. If you want to try Vray Sketchup 2022 before buying it, you can download a trial version from the official website. The trial version allows you to use the software for 30 days with full functionality. You can then decide whether you want to buy a license or not.
-
Use alternative software. If you cannot afford or do not want to buy a license for Vray Sketchup 2022, you can look for other rendering software that is similar to or compatible with SketchUp. Some examples are Blender, Lumion, Enscape, and Twinmotion. These programs might have different features or prices than Vray Sketchup 2022, but they can still help you create realistic and stunning images from your 3D models.
-
-
Conclusion
-
Vray Sketchup 2022 is great rendering software that can help you create amazing images from your 3D models. However, downloading Vray Sketchup 2022 crackeado for free is illegal, unsafe, unreliable, and unethical. Instead of taking that risk, consider buying a license, trying the free 30-day trial, or using an alternative such as Blender, Lumion, Enscape, or Twinmotion.
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Gothic 3 Forsaken Gods Enhanced Edition [key serial] - The Ultimate Guide to Activate and Play.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Gothic 3 Forsaken Gods Enhanced Edition [key serial] - The Ultimate Guide to Activate and Play.md
deleted file mode 100644
index ebf205a5eb6fabd95cf983611aa8e37396e93b99..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Gothic 3 Forsaken Gods Enhanced Edition [key serial] - The Ultimate Guide to Activate and Play.md
+++ /dev/null
@@ -1,106 +0,0 @@
-
-
If you are a fan of open-world action role-playing games, you might have heard of Gothic 3, a game that was released in 2006 by JoWooD Productions. The game received mixed reviews from critics and players, mainly due to its technical issues and bugs. However, it also had a loyal fan base that appreciated its immersive world, rich lore and freedom of choice.
In 2008, JoWooD Productions released a standalone expansion for Gothic 3, called Gothic 3: Forsaken Gods. The expansion was developed by Trine Games and aimed to answer some of the questions that were left unresolved in the original game. However, the expansion also suffered from many problems, such as poor graphics, gameplay glitches and a lackluster story.
-
In 2011, JoWooD Productions released a new version of the expansion, called Gothic 3: Forsaken Gods Enhanced Edition. The new version was developed by Mad Vulture Games, a team that also worked on community patches for Gothic 3. The enhanced edition improved many aspects of the game, such as graphics, sound, combat system, quests, monsters and characters.
-
If you are interested in playing Gothic 3: Forsaken Gods Enhanced Edition, you will need a key serial to activate the game on Steam. A key serial is a unique code that verifies your ownership of the game and allows you to download and play it online. You can get a key serial for Gothic 3: Forsaken Gods Enhanced Edition by buying it from Steam or other authorized retailers.
-
Gameplay
-
Gothic 3: Forsaken Gods Enhanced Edition is an open-world action role-playing game that takes place in Myrtana, a fantasy continent that is divided by various factions. You play as the Nameless Hero, a legendary warrior who has banished the influence of the gods from Myrtana with the help of his friend Xardas, a powerful mage.
-
However, your actions have caused more chaos and conflict in Myrtana, as different groups fight for power and resources. You wake up from a coma in a secret realm between space and time, where you witness the events that unfold in Myrtana. You decide to return to Myrtana and try to unite it once again under a new empire for both humans and orcs.
-
The gameplay of Gothic 3: Forsaken Gods Enhanced Edition is similar to that of Gothic 3, but with some differences. The combat system has been revised and now relies on endurance. You can use various weapons, armors, spells and skills to fight your enemies. You can also interact with many characters, join factions, complete quests and explore the world.
-
The enhanced edition adds new quests, monsters and characters to the game. Some of these are old friends or foes from previous Gothic games, such as Diego, Gorn, Milten and Lee. You can also encounter new creatures, such as giant spiders, undead warriors and fire golems. The enhanced edition also fixes some bugs and glitches that were present in the original expansion.
-
Graphics and Sound
-
Gothic 3: Forsaken Gods Enhanced Edition improves the graphics quality of the game compared to the original expansion. The enhanced edition uses an updated engine that enhances the lighting, shadows, textures and animations of the game. The enhanced edition also adds new visual effects, such as blood splatter, fire sparks and smoke trails.
-
The sound quality of Gothic 3: Forsaken Gods Enhanced Edition is also improved compared to the original expansion. The enhanced edition features better voice acting, sound effects and music for the game. The enhanced edition also adds new soundtracks composed by Kai Rosenkranz, who also worked on previous Gothic games.
The technical requirements and performance of Gothic 3: Forsaken Gods Enhanced Edition are similar to those of Gothic 3. You will need a Windows XP/Vista/7 operating system, an Intel or AMD single-core processor (2.5 GHz), 1 GB of RAM (1.5 GB with Windows Vista/7), 4 GB of free disk space, an ATI X1900 or nVidia 7900 video card with 256 MB of RAM and DirectX 9.0c or higher.
-
You can customize the graphics and sound settings of Gothic 3: Forsaken Gods Enhanced Edition according to your preferences and system specifications. You can adjust options such as resolution, anti-aliasing, texture quality, shadow quality, ambient occlusion, sound volume and subtitles.
-
Conclusion
-
Gothic 3: Forsaken Gods Enhanced Edition is a game that offers an immersive open-world experience with plenty of freedom and choice. If you enjoyed playing Gothic 3 or other games in the series, you will find this game appealing. The enhanced edition improves many aspects of the original expansion, such as graphics, sound, combat system, quests, monsters and characters.
-
However, Gothic 3: Forsaken Gods Enhanced Edition is not a perfect game. It still has some flaws, such as a weak story, repetitive gameplay, clunky controls, and some remaining bugs and issues. The enhanced edition also does not add much new content, so you might find it short and lacking. The gameplay area is limited to one part of Myrtana, unlike in Gothic 3 which had three parts.
-
If you want to play Gothic 3: Forsaken Gods Enhanced Edition, you can buy or download it from Steam or other authorized retailers. The game costs $9.99 on Steam, but you can also find it on sale or in bundles with other games. You will need a key serial to activate the game on Steam, which you will receive after your purchase. You can also check out other games in the Gothic series, such as Gothic 1, Gothic II: Gold Edition, Gothic® 3, ArcaniA, and ArcaniA: Fall of Setarrif.
-
FAQs
-
-
Is Gothic 3: Forsaken Gods Enhanced Edition a standalone game or an expansion?
-
Gothic 3: Forsaken Gods Enhanced Edition is a standalone game, which means you do not need to have Gothic 3 installed to play it. However, it is also an expansion for Gothic 3, which means it continues the story and gameplay of Gothic 3.
-
How long is Gothic 3: Forsaken Gods Enhanced Edition?
-
Gothic 3: Forsaken Gods Enhanced Edition is a relatively short game compared to other open-world RPGs. It can take you around 10-15 hours to complete all the main quests and side quests in the game. However, you can spend more time exploring, fighting, looting, and crafting in the game if you want.
-
Is Gothic 3: Forsaken Gods Enhanced Edition compatible with Gothic 3 mods and patches?
-
Gothic 3: Forsaken Gods Enhanced Edition is not compatible with most mods and patches made for Gothic 3, because the two games have different files and scripts. However, you can use some mods and patches that are specifically made for Gothic 3: Forsaken Gods Enhanced Edition. You can find them on websites such as World of Gothic or Nexus Mods.
-
How to fix common bugs and issues in Gothic 3: Forsaken Gods Enhanced Edition?
-
Gothic 3: Forsaken Gods Enhanced Edition is a more stable and polished game than the original expansion, but it still has some bugs and issues that can affect your gameplay. Some of the common problems are crashes, freezes, stuttering, low FPS, graphical glitches, sound errors, and save game corruption. To fix these problems, you can try the following solutions:
-
-
Update your drivers and DirectX to the latest versions.
-
Run the game as an administrator and in compatibility mode for Windows XP SP3.
-
Disable any antivirus or firewall programs that might interfere with the game.
-
Lower your graphics and sound settings to reduce the load on your system.
-
Verify the integrity of your game files on Steam or reinstall the game if necessary.
-
Apply the latest official patch or community patch for the game.
-
Use a save game cleaner or editor to fix corrupted save files.
-
Check online forums or guides for specific solutions to specific problems.
-
-
What are some tips and tricks for Gothic 3: Forsaken Gods Enhanced Edition?
-
Gothic 3: Forsaken Gods Enhanced Edition is a challenging game that requires you to use your skills, strategy, and resources wisely. Here are some tips and tricks that can help you survive and thrive in the game:
-
-
Save your game often and in different slots, especially before entering a new area or starting a new quest.
-
Explore every corner of the world and loot everything you can find. You never know what useful items or secrets you might discover.
-
Talk to every character you meet and listen to their stories and requests. You might gain valuable information, allies, or rewards.
-
Choose your faction carefully and be aware of the consequences of your actions. Your reputation and alignment will affect how other factions and characters treat you.
-
Learn new skills and spells from trainers and books. They will enhance your abilities and give you an edge in combat.
-
Craft your own weapons, armors, potions, and scrolls using the materials you find or buy. They will be more powerful and customized than the ones you find or loot.
-
Use stealth, ranged attacks, magic, or melee combat depending on the situation and your preference. You can also combine them for more effectiveness.
-
Avoid fighting multiple enemies at once or enemies that are too strong for you. You can use distractions, traps, environmental hazards, or allies to help you.
-
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/CRACK PreSonus Studio One 3 Professional V6.1.0.35191-R2R FREE.md b/spaces/1gistliPinn/ChatGPT4/Examples/CRACK PreSonus Studio One 3 Professional V6.1.0.35191-R2R FREE.md
deleted file mode 100644
index 284509b96265dc7daf3293adda35953c6dbde52c..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/CRACK PreSonus Studio One 3 Professional V6.1.0.35191-R2R FREE.md
+++ /dev/null
@@ -1,18 +0,0 @@
-
CRACK PreSonus Studio One 3 Professional v6.1.0.35191-R2R
-
-PART1 - Instructions - What's New and Important - Support and Troubleshooting |. How to install and use: 1..4,6-Dihydropyridine derivatives of the formula (I) have shown antithrombotic activity and are useful as a medicinal product for the prophylaxis and treatment of thrombosis in humans and animals.
-
-1. Therapeutic Uses
-
-2. Pharmaceutical Compositions
-
-3. Pharmaceutical Packages
-
-4. Methods of Preparing the Compositions
-
-5. Applications of the Compositions
-
-P2Y.sub.12 antagonists are useful for treating mammals (e.g., humans) having or being susceptible to having one or more diseases or conditions (e.g., thrombotic disorders), wherein said disease or condition is associated with, or caused by or caused by, the inappropriate activation of platelets (e.g., by the inappropriate aggregation of platelets) or leukocytes (e.g., by the inappropriate activation of leukocytes). In one embodiment, the disease or condition is associated with, or caused by, the inappropriate activation of platelets and/or leukocytes. These include, but are not limited to, atherothrombotic events, such as myocardial infarction, stroke, unstable angina pectoris, intermittent claudication, arteriosclerosis, restenosis, reocclusion or complications associated with balloon angioplasty, coronary bypass surgery, and the like. In another embodiment, the disease or condition is associated with, or caused by, the inappropriate activation of platelets. These include, but are not limited to, acute coronary syndromes such as unstable angina, ST-segment elevation myocardial infarction, non-ST-segment elevation myocardial infarction and other coronary reperfusion syndromes, thrombolytic therapy, coronary artery bypass graft (CABG) surgery, carotid angioplasty, primary coronary angioplasty, cardiac bypass, angioplasty or stent placement after acute coronary syndromes and the like. In a further embodiment, the disease or condition is associated with, or caused by, the inappropriate activation of leukocytes. These include, but are not limited to, the inappropriate activation of monocytes/macrophages or neutrophils, which is associated with, or causes, atherosclerosis, resten 4fefd39f24
-
-
-
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Down Coreldraw X5 Full Crack.md b/spaces/1gistliPinn/ChatGPT4/Examples/Down Coreldraw X5 Full Crack.md
deleted file mode 100644
index 54d3d4be89b166bf8bab51d3491a3b2d24935850..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Down Coreldraw X5 Full Crack.md
+++ /dev/null
@@ -1,80 +0,0 @@
-
-
Down Coreldraw X5 Full Crack: A Guide to Download and Install the Graphics Suite
-
-
If you are looking for a powerful and versatile graphics design software, you might want to consider Down Coreldraw X5 Full Crack. This is a cracked version of CorelDraw Graphics Suite X5, which is one of the most popular and widely used vector graphics applications in the world. With this software, you can create stunning logos, illustrations, banners, flyers, web graphics, and more with ease and efficiency.
However, downloading and installing Down Coreldraw X5 Full Crack is not as simple as clicking a button. You need to follow some steps and precautions to make sure that you get the full version of the software without any errors or viruses. In this article, we will show you how to do that in a safe and reliable way.
-
-
What is Down Coreldraw X5 Full Crack?
-
-
Down Coreldraw X5 Full Crack is a modified version of CorelDraw Graphics Suite X5 that bypasses the activation process and allows you to use the software for free. Normally, you would need to purchase a license key or subscribe to a monthly plan to use CorelDraw X5 legally. However, with Down Coreldraw X5 Full Crack, you can get access to all the features and tools of the software without paying anything.
-
-
This might sound tempting, but you should also be aware of the risks and disadvantages of using Down Coreldraw X5 Full Crack. First of all, it is illegal and unethical to use cracked software, as it violates the intellectual property rights of the developers. You could face legal consequences or fines if you are caught using Down Coreldraw X5 Full Crack. Secondly, cracked software often comes with malware or viruses that can harm your computer or steal your personal information. You could lose your data or compromise your security if you download Down Coreldraw X5 Full Crack from untrusted sources. Thirdly, cracked software usually does not receive updates or support from the developers. You could miss out on new features, bug fixes, or compatibility improvements if you use Down Coreldraw X5 Full Crack.
-
-
Therefore, we do not recommend using Down Coreldraw X5 Full Crack, and we advise you to purchase a legitimate copy of CorelDraw X5 from the official website. However, if you still want to try Down Coreldraw X5 Full Crack, we will show you how to download and install it in the next section.
-
-
How to Download and Install Down Coreldraw X5 Full Crack?
-
-
To download and install Down Coreldraw X5 Full Crack, you will need two files: the setup file and the keygen file. The setup file is the installer of CorelDraw X5 that contains all the necessary files for the software. The keygen file is a program that generates serial numbers and activation codes for CorelDraw X5. You will need both files to activate Down Coreldraw X5 Full Crack.
-
-
-
You can find both files on various websites that offer cracked software downloads. However, you should be careful when choosing a website, as some of them might contain fake or malicious links that can infect your computer with malware or viruses. To avoid this, you should look for websites that have positive reviews, ratings, or feedback from other users. You should also scan the files with an antivirus program before opening them.
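Beyond an antivirus scan, a generic extra check is to verify a downloaded file's SHA-256 checksum against a hash published by the site you downloaded from, if it publishes one at all. The short Python sketch below illustrates the idea; the file name and the expected hash are placeholders, not values taken from any real download:

```python
import hashlib
import sys

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Placeholder values: substitute the real file name and the hash
    # published by the download source, if one is provided.
    downloaded_file = "setup_example.exe"
    expected_hash = "0" * 64

    actual = sha256_of(downloaded_file)
    if actual == expected_hash:
        print("Checksum matches the published value.")
    else:
        print("Checksum mismatch: do not open or run this file.")
        sys.exit(1)
```

Keep in mind that a matching checksum only shows the file was not corrupted or swapped in transit; it says nothing about whether the file itself is safe, so it complements rather than replaces an antivirus scan.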
-
-
Once you have downloaded both files, you can follow these steps to install Down Coreldraw X5 Full Crack:
-
-
-
Turn off your internet connection and antivirus program.
-
Extract the setup file with WinRAR or any other file compression tool.
-
Run the setup.exe file and select "I don't have a serial number" > Next.
-
Select Typical Installation and start installing.
-
Run the CGSX5HF4.exe file for updates.
-
When finished, open the CorelDRW.exe file in C:\Program Files (x86)\Corel\CorelDRAW Graphics Suite x5\Programs.
-
Select Register Later > Once open, just exit the program immediately.
-
The Activation dialog will then appear.
-
Click "Already Purchased?" > Copy the Serial Number that appears in the dialog.
-
Run keygen.exe as administrator, Paste the Serial Number earlier.
-
You copy the Installation Code that appears to Keygen.
-
Now click Activation, copy Activation Code.
-
Paste and click the Continue button.
-
Congratulations! You have successfully installed Down Coreldraw X5 Full Crack.
-
-
-
-
What are the Benefits of Down Coreldraw X5 Full Crack?
-
-
Despite the risks and disadvantages of using Down Coreldraw X5 Full Crack, some users might still find some benefits from using it. Here are some of the possible benefits of using Down Coreldraw X5 Full Crack:
-
-
-
You can save money by not paying for a license key or a subscription plan.
-
You can access all the features and tools of CorelDraw X5 without any limitations or restrictions.
-
You can create professional and stunning graphics for various purposes and industries.
-
You can enjoy the new and enhanced features of CorelDraw X5, such as the B-Spline tool, the Mesh Fill tool, the Web Graphics tools, and more.
-
You can work with vector graphics easily and efficiently with CorelDraw X5's minimalist and user-friendly interface.
-
-
-
These are some of the potential benefits of using Down Coreldraw X5 Full Crack. However, you should also weigh them against the risks and disadvantages that we mentioned earlier. You might find that the benefits are not worth the costs and consequences of using Down Coreldraw X5 Full Crack.
-
-
What are the Alternatives to Down Coreldraw X5 Full Crack?
-
-
If you are looking for a graphics design software that is similar to CorelDraw X5 but does not require cracking or activation, you might want to consider some of the alternatives that are available in the market. Here are some of the possible alternatives to Down Coreldraw X5 Full Crack:
-
-
-
Adobe Illustrator: This is one of the most popular and widely used vector graphics software in the world. It offers a comprehensive set of tools and features for creating logos, icons, illustrations, typography, and more. It also integrates well with other Adobe products, such as Photoshop, InDesign, and After Effects. However, Adobe Illustrator is not free, and you need to pay a monthly or annual subscription fee to use it.
-
Inkscape: This is a free and open-source vector graphics software that can run on Windows, Mac, and Linux. It has a similar interface and functionality to CorelDraw X5, and it supports many file formats, such as SVG, PNG, PDF, EPS, and more. It also has a large community of users and developers who provide support and resources for Inkscape users. However, Inkscape might not have all the advanced features or performance that CorelDraw X5 has.
-
Affinity Designer: This is a relatively new but powerful vector graphics software that can compete with CorelDraw X5 and Adobe Illustrator. It has a sleek and modern interface that is easy to use and customize. It also has a fast and smooth performance that can handle complex graphics and large files. It also supports many file formats, such as PSD, AI, PDF, SVG, EPS, and more. Affinity Designer is not free, but it has a one-time payment option that is cheaper than Adobe Illustrator's subscription plan.
-
-
-
These are some of the possible alternatives to Down Coreldraw X5 Full Crack. You might want to try them out and see which one suits your needs and preferences better. You might find that they are more reliable, secure, and updated than Down Coreldraw X5 Full Crack.
-
-
Conclusion
-
-
Down Coreldraw X5 Full Crack is a cracked version of CorelDraw Graphics Suite X5 that allows you to use the software for free without activation. However, it also comes with many risks and disadvantages, such as legal issues, malware infections, or lack of updates. Therefore, we do not recommend using Down Coreldraw X5 Full Crack, and we suggest you buy a legitimate copy of CorelDraw X5 from the official website instead.
-
-
If you still want to try Down Coreldraw X5 Full Crack, we have shown you how to download and install it in this article. However, you should do this at your own risk and responsibility. We hope this article was helpful for you. Thank you for reading!
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Clash Royale 3D A Fan-Made Game You Can Download and Play Now.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Clash Royale 3D A Fan-Made Game You Can Download and Play Now.md
deleted file mode 100644
index b43905bf69fad3fc0e6e9f8a66010908269eb6db..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Clash Royale 3D A Fan-Made Game You Can Download and Play Now.md
+++ /dev/null
@@ -1,141 +0,0 @@
-
-
Clash Royale 3D Download: How to Play Your Favorite Game in a New Dimension
-
If you are a fan of Clash Royale, the popular mobile game that combines card collecting, tower defense, and real-time strategy, you might be wondering if there is a way to play it in a more immersive and realistic way. Well, you are in luck, because there is a fan-made project that allows you to play Clash Royale in 3D! In this article, we will tell you everything you need to know about Clash Royale 3D, how to download and install it, and how to enjoy it to the fullest. So, get ready to experience your favorite game in a new dimension!
Clash Royale 3D is a fan-made project that brings Clash Royale to life in 3D. It is not an official game by Supercell, the developer of Clash Royale, but rather a tribute by some passionate fans who wanted to create something unique and amazing. Clash Royale 3D is not a simple port or remake of the original game, but rather a new way to play it with enhanced graphics, animations, sounds, and gameplay.
-
A fan-made project that brings Clash Royale to life in 3D
-
Clash Royale 3D is created by using Sketchfab, a platform that allows anyone to create, share, and discover 3D models online. Sketchfab has a large community of artists and enthusiasts who create and upload various 3D models, including those based on popular games, movies, characters, and more. Some of these models are inspired by Clash Royale, such as the characters, cards, towers, arenas, and effects. By using Sketchfab's viewer and API, the developers of Clash Royale 3D were able to combine these models into a playable game that runs on your browser.
-
The features and benefits of playing Clash Royale 3D
-
Clash Royale 3D has many features and benefits that make it worth playing. Here are some of them:
-
-
You can play Clash Royale in a more realistic and immersive way. You can see your cards come to life in 3D, watch them move and attack on the battlefield, and hear them make sounds and voices. You can also zoom in and out, rotate the camera, and change the perspective to get a better view of the action.
-
You can explore different arenas and environments in 3D. You can see the details and textures of each arena, such as the grass, rocks, trees, buildings, flags, and more. You can also see the weather effects, such as rain, snow, fog, and night. You can even interact with some elements of the arena, such as breaking barrels or opening chests.
-
You can enjoy the same gameplay and mechanics as the original game. You can still collect cards, build decks, join clans, chat with other players, participate in events, earn rewards, and more. You can also play against other players online or against bots offline.
-
You can customize your game settings and preferences. You can adjust the quality, resolution, sound, and performance of the game. You can also choose the language, theme, and mode of the game. You can even enable or disable some features, such as shadows, reflections, particles, and animations.
-
-
How to download and install Clash Royale 3D?
-
Clash Royale 3D is not available on the official app stores, such as Google Play or Apple Store, because it is not an official game by Supercell. However, you can still download and install it easily by following these steps:
-
-
The requirements and steps for downloading Clash Royale 3D
-
Before you download Clash Royale 3D, you need to make sure that your device meets the minimum requirements for running the game. These are:
-
-
A device that supports WebGL, which is a technology that enables 3D graphics on the web. Most modern browsers and devices support WebGL, but you can check if yours does by visiting this link: [WebGL Report].
-
A stable internet connection, preferably with a high speed and low latency. This is because Clash Royale 3D is an online game that requires constant communication with the server and other players.
-
A sufficient amount of storage space on your device, depending on the size of the game files. The current version of Clash Royale 3D is about 300 MB, but it may vary depending on the updates and patches.
-
-
Once you have verified that your device meets the requirements, you can proceed to download Clash Royale 3D by following these steps:
-
-
Visit the official website of Clash Royale 3D at [clashroyale3d.com].
-
Click on the "Download" button and choose the version that matches your device (Android or iOS).
-
Wait for the download to finish and then open the downloaded file.
-
Follow the instructions on the screen to install Clash Royale 3D on your device.
-
Launch Clash Royale 3D and enjoy!
-
-
The tips and tricks for playing Clash Royale 3D smoothly
-
Clash Royale 3D is a fun and exciting game, but it can also be challenging and frustrating at times. To help you play Clash Royale 3D smoothly and avoid any problems or issues, here are some tips and tricks that you should keep in mind:
-
-
Make sure that your device is fully charged or plugged in before playing Clash Royale 3D. The game can drain your battery quickly because of its high graphics and performance demands.
-
Close any other apps or programs that are running in the background while playing Clash Royale 3D. This can free up some memory and CPU resources for the game and prevent any lag or crashes.
-
Adjust your game settings according to your device's capabilities and preferences. You can lower the quality, resolution, sound, and performance of the game if you experience any slowness or stuttering. You can also enable or disable some features, such as shadows, reflections, particles, and animations, to improve the game's performance.
-
Use a reliable and secure network connection when playing Clash Royale 3D online. Avoid using public or unsecured Wi-Fi networks that may expose your data or interfere with your gameplay. You can also use a VPN service to protect your privacy and bypass any geo-restrictions or firewalls.
-
Update your game regularly to get the latest features, fixes, and improvements. You can check for updates by visiting the official website of Clash Royale 3D or by opening the game's settings menu.
-
How to enjoy Clash Royale 3D to the fullest?
-
Clash Royale 3D is not only a game, but also a way to express your creativity and passion for Clash Royale. There are many ways to enjoy Clash Royale 3D to the fullest, such as:
-
The best 3D models and arenas to explore in Clash Royale 3D
-
One of the main attractions of Clash Royale 3D is the variety and quality of the 3D models and arenas that you can explore. You can see your favorite characters, cards, towers, and effects in a new light, with more details, colors, and animations. You can also discover new and unique models and arenas that are not available in the original game, such as custom-made ones by other fans or artists. Here are some of the best 3D models and arenas that you can explore in Clash Royale 3D:
-
-
-
King Tower: The King Tower is the main tower that you have to protect in Clash Royale. In Clash Royale 3D, you can see the King Tower in 3D, with its crown, cannons, flags, and windows. You can also see the King himself, sitting on his throne and cheering or taunting you.

Princess: The Princess is one of the most popular and iconic cards in Clash Royale. She is a legendary card that can shoot arrows from a long distance. In Clash Royale 3D, you can see the Princess in 3D, with her dress, hair, bow, and quiver. You can also see her facial expressions and hear her voice.

Fireball: The Fireball is one of the most powerful and versatile spells in Clash Royale. It is a rare card that can deal high damage to a large area. In Clash Royale 3D, you can see the Fireball in 3D, with its flames, sparks, and smoke. You can also feel its impact and hear its sound.

Hog Mountain: Hog Mountain is one of the most fun and colorful arenas in Clash Royale. It is the arena for players who have reached 3000 trophies. In Clash Royale 3D, you can see Hog Mountain in 3D, with its hills, bridges, balloons, hogs, and fireworks. You can also interact with some of the elements of the arena, such as popping balloons or riding hogs.

Clashmas Village: Clashmas Village is one of the most festive and special arenas in Clash Royale. It is a seasonal arena that appears during the Christmas period. In Clash Royale 3D, you can see Clashmas Village in 3D, with its snow, trees, lights, presents, and snowmen. You can also enjoy the Christmas music and atmosphere.
-
-
-
The challenges and rewards of playing Clash Royale 3D
-
Clash Royale 3D is not only a game to admire, but also a game to challenge yourself and improve your skills. There are many challenges and rewards that you can face and earn while playing Clash Royale 3D, such as:
-
-
You can compete with other players online or offline in different modes and formats. You can play classic matches or tournaments with standard rules or custom rules. You can also play special events or modes that have different objectives or conditions.
-
You can collect cards and build decks that suit your style and strategy. You can unlock new cards by opening chests or buying them from the shop. You can also upgrade your cards by using gold or gems. You can create different decks for different situations or preferences.
-
You can join clans and chat with other players who share your passion for Clash Royale. You can exchange cards, tips, and ideas with your clanmates. You can also participate in clan wars or clan games to earn rewards and glory for your clan.
-
You can earn trophies and climb the ladder of rankings. You can gain trophies by winning matches or lose trophies by losing matches. You can also reach new arenas or leagues that have different rewards and challenges.
-
You can achieve goals and milestones that show your progress and achievements. You can complete quests or missions that have specific tasks or requirements. You can also earn badges or stars that indicate your level or skill.
-
-
Conclusion
-
Clash Royale 3D is a fan-made project that allows you to play Clash Royale in 3D. It is not an official game by Supercell, but rather a tribute by some passionate fans. You can check whether your device supports it by visiting this link: [WebGL Report]. However, some devices and platforms may have better performance and compatibility than others, depending on their specifications and features. For example, Android devices may run Clash Royale 3D better than iOS devices, and Chrome browsers may run Clash Royale 3D better than Safari browsers.
-
Q4: Is Clash Royale 3D updated regularly?
-
A4: Clash Royale 3D is updated regularly. The developers of Clash Royale 3D are constantly working on improving the game and adding new features, fixes, and improvements. You can check for updates by visiting the official website of Clash Royale 3D or by opening the game's settings menu. You can also follow the developers on their social media accounts or join their Discord server to get the latest news and updates about Clash Royale 3D.
-
Q5: How can I support the developers of Clash Royale 3D?
-
A5: You can support the developers of Clash Royale 3D by doing the following things:
-
-
Share your feedback and suggestions with them. You can contact them via email, social media, or Discord. You can also rate and review the game on the website or app store.
-
Spread the word and invite your friends to play Clash Royale 3D. You can share the game's link or screenshots on your social media accounts or chat apps. You can also challenge your friends to play with you online or offline.
-
Donate or contribute to the game's development. You can donate money or resources to the developers via PayPal or Patreon. You can also contribute your skills or talents to the game, such as creating 3D models, graphics, sounds, or codes.
-
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/CricHD App APK Live Cricket Streaming at Your Fingertips.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/CricHD App APK Live Cricket Streaming at Your Fingertips.md
deleted file mode 100644
index d47b9f7506c5e3fbbdb10f2cf713ba89bfdc5d5d..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/CricHD App APK Live Cricket Streaming at Your Fingertips.md
+++ /dev/null
@@ -1,128 +0,0 @@
-
-
Live Cricket Streaming App Download APK: How to Watch Cricket Matches on Your Phone
-
If you are a cricket fan, you probably don't want to miss any of the exciting matches happening around the world. Whether it's the ICC World Cup, the Ashes, or the IPL, you want to watch every ball and every run live. But what if you don't have access to a TV or a cable subscription? Or what if you are on the go and can't sit in front of a screen for hours? That's where live cricket streaming app download apk comes in handy.
Live cricket streaming app download apk is a way of downloading and installing an application that allows you to watch live cricket matches on your phone. You don't need to pay any fees or sign up for any subscriptions. You just need to have a stable internet connection and enough storage space on your phone. With live cricket streaming app download apk, you can enjoy watching cricket anytime and anywhere.
-
But why do you need live cricket streaming app download apk? What are the benefits of using it? Here are some of the reasons why you should try it out:
-
-
You can watch live cricket matches from different leagues and tournaments, such as the ICC World Cup, the Ashes, the IPL, the BBL, and more.
-
You can watch live cricket matches from different countries and regions, such as India, Australia, England, Pakistan, South Africa, New Zealand, and more.
-
You can watch live cricket matches in different languages and commentary options, such as English, Hindi, Urdu, Tamil, Telugu, Bengali, and more.
-
You can watch live cricket matches in high-definition quality and smooth streaming without any buffering or lagging.
-
You can watch live cricket matches on your phone screen or cast them to your TV or laptop for a bigger view.
-
You can watch live cricket matches with interactive features, such as live scores, stats, highlights, replays, polls, chat, and more.
-
-
So how do you download and use live cricket streaming app download apk? Here are the steps you need to follow:
-
How to Download Live Cricket Streaming App APK
-
Step 1: Find a reliable source for the APK file
-
An APK file is an Android package file that contains all the necessary files and data for an application to run on your phone. You can find many sources for live cricket streaming app APK files online, but not all of them are safe and trustworthy. Some of them may contain viruses, malware, or spyware that can harm your phone or steal your personal information. Therefore, you need to be careful and choose a reliable source for the APK file.
-
One way to find a reliable source is to check the reviews and ratings of other users who have downloaded and used the app. You can also look for official websites or social media pages of the app developers or publishers. You can also ask for recommendations from your friends or fellow cricket fans who have used live cricket streaming apps before.
-
Once you have found a reliable source for the APK file, you need to download it to your phone. You can do this by clicking on the download link or scanning the QR code provided by the source. The download process may take a few minutes depending on the size of the file and the speed of your internet connection.
-
-
Step 2: Enable unknown sources on your phone settings
-
Before you can install the APK file on your phone, you need to enable unknown sources on your phone settings. This is because most live cricket streaming apps are not available on the official Google Play Store or Apple App Store, and your phone may block the installation of apps from unknown sources by default.
-
To enable unknown sources on your phone settings, you need to follow these steps:
-
-
Go to your phone settings and look for security or privacy options.
-
Find the option that says unknown sources or allow installation of apps from unknown sources and toggle it on.
-
You may see a warning message that says installing apps from unknown sources may harm your device or data. Tap on OK or Continue to proceed.
-
-
Once you have enabled unknown sources on your phone settings, you are ready to install the APK file on your phone.
-
Step 3: Install the APK file on your phone
-
To install the APK file on your phone, you need to follow these steps:
-
-
Locate the APK file on your phone storage. You can use a file manager app or search for it in your downloads folder.
-
Tap on the APK file and you will see a prompt that asks you to confirm the installation. Tap on Install or Next to start the installation process.
-
The installation process may take a few seconds or minutes depending on the size of the app and the performance of your phone.
-
Once the installation is complete, you will see a message that says App installed or Done. Tap on Open or Launch to open the app or tap on Done to exit the installer.
-
-
Congratulations! You have successfully downloaded and installed live cricket streaming app APK on your phone. Now you can start using it to watch live cricket matches on your phone.
-
How to Use Live Cricket Streaming App APK
-
Step 1: Open the app and sign up or log in
-
Once you have installed live cricket streaming app APK on your phone, you need to open it and sign up or log in. Some apps may require you to create an account or provide some personal information before you can use them. Others may allow you to use them without signing up or logging in. You can choose the option that suits you best.
-
To sign up or log in, you need to follow these steps:
-
-
Open the app and look for the sign up or log in option. It may be on the home screen, the menu, or the settings of the app.
-
If you choose to sign up, you need to provide some basic information, such as your name, email address, password, country, etc. You may also need to verify your email address or phone number by entering a code sent to you by the app.
-
If you choose to log in, you need to enter your email address and password that you used to sign up. You may also need to enter a captcha code or a verification code sent to you by the app.
-
Once you have signed up or logged in, you will see your profile or dashboard where you can access different features and options of the app.
-
-
Step 2: Choose a live match or a replay from the list
-
Now that you have signed up or logged in, you can choose a live match or a replay from the list of available matches. You can find the list of matches on the home screen, the menu, or the categories of the app. You can also use the search function or filter function to find a specific match or league that you want to watch.
-
To choose a live match or a replay from the list, you need to follow these steps:
-
-
Browse through the list of matches and look for the one that interests you. You can see some information about each match, such as the teams, the date, the time, the venue, etc.
-
Tap on the match that you want to watch and you will see more details about it, such as the score, the overs, the wickets, the run rate, etc.
-
If the match is live, you will see a button that says Watch Live or Live Stream. Tap on it and you will be directed to the live streaming page where you can watch the match in real time.
-
If the match is not live, you will see a button that says Watch Replay or Replay Stream. Tap on it and you will be directed to the replay streaming page where you can watch the match from the beginning or from any point that you want.
-
-
Step 3: Enjoy watching cricket on your phone
-
Now that you have chosen a live match or a replay from the list, you can enjoy watching cricket on your phone. You can adjust the video quality, the volume, the brightness, and the orientation of your phone according to your preference. You can also pause, resume, rewind, fast forward, or skip the match as you wish.
-
While watching cricket on your phone, you can also access some interactive features that enhance your viewing experience. For example, you can:
-
-
Check the live scores, stats, highlights, replays, polls, chat, and more on the app.
-
Share your thoughts and opinions with other cricket fans on social media platforms, such as Facebook, Twitter, Instagram, etc.
-
Cast your phone screen to your TV or laptop for a bigger view using Chromecast, Airplay, Miracast, or other devices.
-
Use headphones or earphones for a better sound quality and avoid disturbing others.
-
-
With live cricket streaming app download apk, you can watch cricket matches on your phone anytime and anywhere. You don't need to worry about missing any action or excitement. You can stay updated and entertained with live cricket streaming app download apk.
-
Comparison of Different Live Cricket Streaming Apps APK
-
There are many live cricket streaming apps APK available online, but not all of them are equally good. Some of them may have better features, quality, and performance than others. Some of them may also have more matches, leagues, and options than others. Therefore, it is important to compare different live cricket streaming apps APK before choosing one.
-
Here are some of the popular and reliable live cricket streaming apps APK that you can try out:
-
ICC.tv
-
ICC.tv is the official app of the International Cricket Council (ICC), the governing body of world cricket. It offers live streaming of all ICC events and tournaments, such as the ICC World Cup, the ICC Champions Trophy, the ICC World Test Championship, the ICC Women's World Cup, and more. It also offers live streaming of other domestic and international matches from various countries and regions.
-
Some of the features of ICC.tv are:
-
-
It has high-definition quality and smooth streaming without any buffering or lagging.
-
It has multiple languages and commentary options for different matches.
-
It has interactive features such as live scores, stats, highlights, replays, polls, chat, and more on the app.
-
It is free to download and use, but it may require registration or verification for some matches.
-
-
You can download ICC.tv APK from its official website or from other sources online.
-
CricHD
-
CricHD is one of the most popular and widely used live cricket streaming apps APK. It offers live streaming of all kinds of cricket matches, such as Test, ODI, T20, IPL, BBL, PSL, CPL, and more. It also offers live streaming of other sports, such as football, basketball, tennis, hockey, rugby, and more.
-
Some of the features of CricHD are:
-
-
It has high-definition quality and smooth streaming without any buffering or lagging.
-
It has multiple languages and commentary options for different matches.
-
It has interactive features such as live scores, stats, highlights, replays, polls, chat, and more on the app.
-
It is free to download and use, but it may show some ads or pop-ups during the streaming.
-
-
You can download CricHD APK from its official website or from other sources online.
-
Other options
-
There are many other live cricket streaming apps APK that you can try out, such as:
-
-
Hotstar: It is a popular streaming platform that offers live cricket matches from India and other countries. It also offers other entertainment content, such as movies, TV shows, news, etc. It is free to download and use, but it may require a subscription for some content.
-
SonyLIV: It is another popular streaming platform that offers live cricket matches from India and other countries. It also offers other sports and entertainment content, such as football, WWE, comedy, etc. It is free to download and use, but it may require a subscription for some content.
-
ThopTV: It is a third-party streaming app that offers live cricket matches from various sources and channels. It also offers other content, such as movies, TV shows, music, etc. It is free to download and use, but it may not be safe or legal to use.
-
-
You can compare these and other live cricket streaming apps APK based on your preferences and needs. You can also check the reviews and ratings of other users who have used them before. You can also try out different apps and see which one works best for you.
-
Conclusion
-
Live cricket streaming app download apk is a great way to watch live cricket matches on your phone. You don't need to have a TV or a cable subscription to enjoy watching cricket. You just need to have a stable internet connection and enough storage space on your phone. You can watch live cricket matches from different leagues, tournaments, countries, regions, languages, and commentary options. You can watch live cricket matches in high-definition quality and smooth streaming without any buffering or lagging. You can watch live cricket matches with interactive features, such as live scores, stats, highlights, replays, polls, chat, and more.
-
To watch live cricket matches on your phone, you need to download and install live cricket streaming app APK on your phone. You need to find a reliable source for the APK file, enable unknown sources on your phone settings, and install the APK file on your phone. Then you need to open the app and sign up or log in, choose a live match or a replay from the list, and enjoy watching cricket on your phone.
-
You can also compare different live cricket streaming apps APK based on their features, quality, performance, and options. You can try out some of the popular and reliable apps, such as ICC.tv, CricHD, Hotstar, SonyLIV, ThopTV, and more. You can also check the reviews and ratings of other users who have used them before. You can also try out different apps and see which one works best for you.
-
Live cricket streaming app download apk is a must-have for any cricket fan who wants to watch live cricket matches on their phone. It is easy to use, convenient, and fun. It is the best way to stay updated and entertained with live cricket.
-
So what are you waiting for? Download live cricket streaming app APK now and start watching live cricket matches on your phone!
-
FAQs
-
Here are some of the frequently asked questions about live cricket streaming app download apk:
-
Q1: Is live cricket streaming app download apk safe and legal?
-
A1: Live cricket streaming app download apk is safe and legal as long as you download it from a reliable source and use it for personal and non-commercial purposes. However, some apps may not have the rights or permissions to stream some matches or content, and they may violate the intellectual property or privacy rights of the owners or providers. Therefore, you should be careful and responsible when using live cricket streaming app download apk.
-
Q2: How much data does live cricket streaming app download apk consume?
-
A2: Live cricket streaming app download apk consumes data depending on the video quality, the duration, and the frequency of your streaming. Generally, the higher the video quality, the more data it consumes. For example, streaming a match in HD quality may consume about 1 GB of data per hour, while streaming it in SD quality may consume about 300 MB of data per hour. Therefore, you should monitor your data usage and choose a suitable video quality according to your data plan.
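Those per-hour figures are consistent with typical streaming bitrates, and you can sanity-check them with simple arithmetic: data used is just bitrate multiplied by time. The Python sketch below uses illustrative bitrates that are assumptions, not measurements from any particular app:

```python
def data_per_hour_gb(bitrate_mbps: float) -> float:
    """Approximate data use in GB for one hour of streaming at a given bitrate.

    Megabits per second * 3600 seconds = megabits per hour;
    divide by 8 for megabytes, then by 1000 for gigabytes.
    """
    return bitrate_mbps * 3600 / 8 / 1000

# Assumed, illustrative bitrates only:
for label, mbps in [("SD (~0.7 Mbps)", 0.7), ("HD (~2.2 Mbps)", 2.2), ("Full HD (~5 Mbps)", 5.0)]:
    print(f"{label}: about {data_per_hour_gb(mbps):.2f} GB per hour")
```

At those assumed rates the sketch gives roughly 0.3 GB per hour for SD and about 1 GB per hour for HD, in line with the figures above, while higher-bitrate streams can easily exceed 2 GB per hour.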
-
Q3: What are the best live cricket streaming apps for Android and iOS?
-
A3: There are many live cricket streaming apps for Android and iOS devices, but some of them may be better than others in terms of features, quality, performance, and options. Some of the best live cricket streaming apps for Android and iOS devices are:

- ICC.tv: It is the official app of the International Cricket Council (ICC), the governing body of world cricket. It offers live streaming of all ICC events and tournaments, as well as other domestic and international matches from various countries and regions. It has high-definition quality and smooth streaming without any buffering or lagging. It has multiple languages and commentary options for different matches. It has interactive features such as live scores, stats, highlights, replays, polls, chat, and more on the app. It is free to download and use, but it may require registration or verification for some matches.

- CricHD: It is one of the most popular and widely used live cricket streaming apps. It offers live streaming of all kinds of cricket matches, such as Test, ODI, T20, IPL, BBL, PSL, CPL, and more. It also offers live streaming of other sports, such as football, basketball, tennis, hockey, rugby, and more. It has high-definition quality and smooth streaming without any buffering or lagging. It has multiple languages and commentary options for different matches. It has interactive features such as live scores, stats, highlights, replays, polls, chat, and more on the app. It is free to download and use, but it may show some ads or pop-ups during the streaming.

- Hotstar: It is a popular streaming platform that offers live cricket matches from India and other countries. It also offers other entertainment content, such as movies, TV shows, news, etc. It has high-definition quality and smooth streaming without any buffering or lagging. It has multiple languages and commentary options for different matches. It has interactive features such as live scores, stats, highlights, replays, polls, chat, and more on the app. It is free to download and use, but it may require a subscription for some content.

- SonyLIV: It is another popular streaming platform that offers live cricket matches from India and other countries. It also offers other sports and entertainment content, such as football, WWE, comedy, etc. It has high-definition quality and smooth streaming without any buffering or lagging. It has multiple languages and commentary options for different matches. It has interactive features such as live scores, stats, highlights, replays, polls, chat, and more on the app. It is free to download and use, but it may require a subscription for some content.

These are some of the best live cricket streaming apps for Android and iOS devices that you can try out. You can also check out other apps that may suit your preferences and needs.
-
Q4: How can I watch live cricket streaming on my TV or laptop?
-
A4: If you want to watch live cricket streaming on your TV or laptop instead of your phone, you have a few options:
- Casting or mirroring: Use a device that supports casting or mirroring, such as Chromecast, AirPlay, or Miracast, to send the live match from your phone to your TV or laptop screen. You may need to install an app or software that supports casting or mirroring on both your phone and your TV or laptop.
- HDMI: Connect your phone to your TV or laptop with an HDMI cable, adapter, or dongle. You may need to adjust the settings on both devices to enable HDMI output and input.
- USB: Connect your phone to your TV or laptop with a USB cable, adapter, or dongle. You may need to enable USB debugging on your phone and install a driver or software on your TV or laptop so that it recognizes your phone.
These are some of the ways you can watch live cricket streaming on your TV or laptop. You can also look for other methods that may work for you.
-
Q5: How can I improve the quality and speed of live cricket streaming?
-
A5: If you want to improve the quality and speed of live cricket streaming, pay attention to the factors that affect them:
- Your internet connection: You need a stable and fast connection to stream matches without buffering or lagging. Check your internet speed and signal strength with an app or a website, use Wi-Fi instead of mobile data when possible, and close other apps or programs that may consume bandwidth on your phone or router.
- Your phone storage: You need enough free space to download and install the live cricket streaming app APK and to store its data and cache. Check your storage in the settings and delete any unnecessary files or apps that take up space.
- Your phone performance: Your phone needs to run the app smoothly and efficiently. Check its performance, clear background processes or tasks that may slow it down, and update your phone's software and firmware if updates are available.
These are some of the factors that affect the quality and speed of live cricket streaming. Optimizing them as much as possible will improve your streaming experience.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Free Download Video Game Player The Ultimate Guide to Playing Any Game on Any Device.md b/spaces/1phancelerku/anime-remove-background/Free Download Video Game Player The Ultimate Guide to Playing Any Game on Any Device.md
deleted file mode 100644
index 042de4d5efed3691f5e5403c11fb827bf8d63087..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Free Download Video Game Player The Ultimate Guide to Playing Any Game on Any Device.md
+++ /dev/null
@@ -1,107 +0,0 @@
-
-
How to Find and Download the Best Free Video Game Players
-
If you love playing video games, you probably know how important it is to have a good video game player. A video game player is a software that allows you to run and enjoy various types of video games on your device. But how do you find and download the best free video game players? In this article, we will answer this question and give you some tips and recommendations.
A video game player is a program that can play video games that are stored on your device or streamed from the internet. A video game player can support different formats and genres of games, such as action, adventure, puzzle, simulation, sports, etc. A video game player can also have various features, such as graphics settings, sound options, controller support, online multiplayer, achievements, etc.
-
Benefits of Using a Video Game Player
-
Using a video game player has many benefits, such as:
-
-
You can play games without buying or installing them separately.
-
You can access a large library of games from different sources and platforms.
-
You can enjoy high-quality graphics and sound effects.
-
You can customize your gaming experience according to your preferences.
-
You can save space and money on your device.
-
-
How to Choose the Right Video Game Player for Your Needs
-
Compatibility and Performance
-
The first thing you need to consider when choosing a video game player is its compatibility and performance. You need to make sure that the video game player can run smoothly on your device and operating system. You also need to check the system requirements and specifications of the games you want to play. You don't want to download a video game player that will crash or lag frequently.
-
-
Variety and Quality of Games
-
The second thing you need to consider is the variety and quality of games that the video game player offers. You want to choose a video game player that has a wide range of games from different genres and categories. You also want to choose a video game player that has high-quality games that are fun, engaging, and original. You don't want to download a video game player that has boring or outdated games.
-
User Interface and Customization
-
The third thing you need to consider is the user interface and customization of the video game player. You want to choose a video game player that has a simple and intuitive user interface that is easy to navigate and use. You also want to choose a video game player that has various options and settings that allow you to customize your gaming experience. You don't want to download a video game player that has a complicated or cluttered user interface.
-
Where to Download Free Video Game Players Safely and Legally
-
Official Websites of Developers and Publishers
-
One of the best places to download free video game players safely and legally is the official websites of the developers and publishers of the games. These websites usually offer free downloads or trials of their video game players, as well as updates, patches, support, and information. For example, you can download VLC media player from its official website. This is a free and open source cross-platform multimedia player that can play most multimedia files as well as DVDs, Audio CDs, VCDs, and various streaming protocols.
-
Trusted Platforms and Stores
-
Another good place to download free video game players safely and legally is the trusted platforms and stores that offer games and software. These platforms and stores usually have a large collection of free and paid video game players, as well as reviews, ratings, recommendations, and security checks. For example, you can download EA's free-to-play games from its platform Origin. This is a digital distribution platform that allows you to play games from EA and other publishers, as well as access online features, social networking, and cloud saves.
-
Tips and Precautions for Downloading
-
Before you download any video game player, you should follow some tips and precautions to ensure a safe and legal download. Here are some of them:
-
-
Always download from official or trusted sources. Avoid downloading from unknown or suspicious websites that may contain malware or viruses.
-
Always read the terms and conditions, privacy policy, and license agreement of the video game player. Make sure you understand what you are agreeing to and what rights you have.
-
Always scan the downloaded file with an antivirus or anti-malware program. Make sure the file is clean and does not contain any harmful or unwanted components.
-
Always backup your device and data before installing the video game player. In case something goes wrong or you don't like the video game player, you can restore your device and data to the previous state.
-
-
Some of the Best Free Video Game Players You Can Try Today
-
VLC Media Player
-
As mentioned earlier, VLC media player is a free and open source cross-platform multimedia player that can play most multimedia files as well as DVDs, Audio CDs, VCDs, and various streaming protocols. It can also play video games that are stored on your device or streamed from the internet. Some of the video games that VLC media player can play are Doom, Quake, SuperTuxKart, Super Mario Bros., etc. You can find more information on how to play video games with VLC media player here.
-
EA's Free-to-Play Games
-
EA is one of the biggest video game publishers in the world, and it offers some of its games for free on its platform Origin. Some of the free-to-play games that EA offers are Apex Legends, FIFA Online 4, Star Wars: The Old Republic, Need for Speed World, etc. These games are high-quality and have online multiplayer features. You can find more information on how to download and play EA's free-to-play games here.
-
Microsoft's Top Free Games
-
Microsoft is another giant in the video game industry, and it also offers some of its games for free on its platform Microsoft Store. Some of the top free games that Microsoft offers are Forza Street, Asphalt 9: Legends, Roblox, Minecraft: Windows 10 Edition, etc. These games are also high-quality and have online multiplayer features. You can find more information on how to download and play Microsoft's top free games here.
-
Conclusion
-
In conclusion, finding and downloading the best free video game players is not difficult if you know where to look and what to consider. A video game player is a software that allows you to play various types of video games on your device. You should choose a video game player that is compatible with your device and operating system, has a wide variety of high-quality games, and has a simple and customizable user interface. You should also download from official or trusted sources, read the terms and conditions, scan the file with an antivirus program, and backup your device before installing. Some of the best free video game players you can try today are VLC media player, EA's free-to-play games, and Microsoft's top free games.
-
FAQs
-
What is the difference between a video game player and a video game emulator?
-
A video game player is a software that can play video games that are designed for your device or platform. A video game emulator is a software that can simulate another device or platform and allow you to play video games that are not designed for your device or platform.
-
Can I play online multiplayer games with a video game player?
-
Yes, you can play online multiplayer games with a video game player if the video game player supports online features and has an internet connection. However, you may need to create an account or register with the developer or publisher of the game to access online multiplayer modes.
-
How can I update my video game player?
-
You can update your video game player by checking for updates on the official website of the video game player or the platform or store where you downloaded it from. You can also enable automatic updates if the video game player has this option.
-
How can I uninstall my video game player?
-
You can uninstall your video game player by following the instructions on the official website of the video game player or the platform or store where you downloaded it from. You can also use the uninstaller program that comes with the video game player or use the control panel or settings of your device.
-
What are some alternatives to video game players?
-
Some alternatives to video game players are video game consoles, handheld devices, streaming services, and web browsers. Video game consoles are dedicated devices that can play video games on a TV or monitor. Handheld devices are portable devices that can play video games on a small screen. Streaming services are online platforms that can stream video games to your device without downloading them. Web browsers are programs that can access and play web-based games on your device.
-
-
\ No newline at end of file
diff --git a/spaces/1toTree/lora_test/ppdiffusers/schedulers/preconfig/preconfig_scheduling_lms_discrete.py b/spaces/1toTree/lora_test/ppdiffusers/schedulers/preconfig/preconfig_scheduling_lms_discrete.py
deleted file mode 100644
index 13a567113246ecd0646d297c1bd9fd86dd7ee2bf..0000000000000000000000000000000000000000
--- a/spaces/1toTree/lora_test/ppdiffusers/schedulers/preconfig/preconfig_scheduling_lms_discrete.py
+++ /dev/null
@@ -1,299 +0,0 @@
-# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
-# Copyright 2022 Katherine Crowson and The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-import warnings
-from dataclasses import dataclass
-from typing import List, Optional, Tuple, Union
-
-import numpy as np
-import paddle
-from scipy import integrate
-
-from ...configuration_utils import ConfigMixin, register_to_config
-from ...utils import _COMPATIBLE_STABLE_DIFFUSION_SCHEDULERS, BaseOutput
-from ..scheduling_utils import SchedulerMixin
-
-
-@dataclass
-# Copied from diffusers.schedulers.scheduling_ddpm.DDPMSchedulerOutput with DDPM->LMSDiscrete
-class PreconfigLMSDiscreteSchedulerOutput(BaseOutput):
- """
- Output class for the scheduler's step function output.
-
- Args:
- prev_sample (`paddle.Tensor` of shape `(batch_size, num_channels, height, width)` for images):
- Computed sample (x_{t-1}) of previous timestep. `prev_sample` should be used as next model input in the
- denoising loop.
- pred_original_sample (`paddle.Tensor` of shape `(batch_size, num_channels, height, width)` for images):
- The predicted denoised sample (x_{0}) based on the model output from the current timestep.
- `pred_original_sample` can be used to preview progress or for guidance.
- """
-
- prev_sample: paddle.Tensor
- pred_original_sample: Optional[paddle.Tensor] = None
-
-
-class PreconfigLMSDiscreteScheduler(SchedulerMixin, ConfigMixin):
- """
- Linear Multistep Scheduler for discrete beta schedules. Based on the original k-diffusion implementation by
- Katherine Crowson:
- https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L181
-
- [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__`
- function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`.
- [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and
- [`~SchedulerMixin.from_pretrained`] functions.
-
- Args:
- num_train_timesteps (`int`): number of diffusion steps used to train the model.
- beta_start (`float`): the starting `beta` value of inference.
- beta_end (`float`): the final `beta` value.
- beta_schedule (`str`):
- the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
- `linear` or `scaled_linear`.
- trained_betas (`np.ndarray`, optional):
- option to pass an array of betas directly to the constructor to bypass `beta_start`, `beta_end` etc.
- prediction_type (`str`, default `epsilon`, optional):
- prediction type of the scheduler function, one of `epsilon` (predicting the noise of the diffusion
- process), `sample` (directly predicting the noisy sample`) or `v_prediction` (see section 2.4
- https://imagen.research.google/video/paper.pdf)
- """
-
- _compatibles = _COMPATIBLE_STABLE_DIFFUSION_SCHEDULERS.copy()
- order = 1
-
- @register_to_config
- def __init__(
- self,
- num_train_timesteps: int = 1000,
- beta_start: float = 0.0001,
- beta_end: float = 0.02,
- beta_schedule: str = "linear",
- trained_betas: Optional[Union[np.ndarray, List[float]]] = None,
- prediction_type: str = "epsilon",
- preconfig=True,
- ):
- if trained_betas is not None:
- self.betas = paddle.to_tensor(trained_betas, dtype="float32")
- elif beta_schedule == "linear":
- self.betas = paddle.linspace(beta_start, beta_end, num_train_timesteps, dtype="float32")
- elif beta_schedule == "scaled_linear":
- # this schedule is very specific to the latent diffusion model.
- self.betas = paddle.linspace(beta_start**0.5, beta_end**0.5, num_train_timesteps, dtype="float32") ** 2
- else:
-            raise NotImplementedError(f"{beta_schedule} is not implemented for {self.__class__}")
-
- self.alphas = 1.0 - self.betas
- self.alphas_cumprod = paddle.cumprod(self.alphas, 0)
-
- sigmas = np.array(((1 - self.alphas_cumprod) / self.alphas_cumprod) ** 0.5)
- sigmas = np.concatenate([sigmas[::-1], [0.0]]).astype(np.float32)
- self.sigmas = paddle.to_tensor(sigmas)
-
- # standard deviation of the initial noise distribution
- self.init_noise_sigma = self.sigmas.max()
-
- # setable values
- self.num_inference_steps = None
- timesteps = np.linspace(0, num_train_timesteps - 1, num_train_timesteps, dtype=float)[::-1].copy()
- self.timesteps = paddle.to_tensor(timesteps, dtype="float32")
- self.derivatives = []
- self.is_scale_input_called = False
- self.preconfig = preconfig
-
- def scale_model_input(
- self, sample: paddle.Tensor, timestep: Union[float, paddle.Tensor], **kwargs
- ) -> paddle.Tensor:
- """
- Scales the denoising model input by `(sigma**2 + 1) ** 0.5` to match the K-LMS algorithm.
-
- Args:
- sample (`paddle.Tensor`): input sample
- timestep (`float` or `paddle.Tensor`): the current timestep in the diffusion chain
-
- Returns:
- `paddle.Tensor`: scaled input sample
- """
- if kwargs.get("step_index") is not None:
- step_index = kwargs["step_index"]
- else:
- step_index = (self.timesteps == timestep).nonzero().item()
- self.is_scale_input_called = True
- if not self.preconfig:
- sigma = self.sigmas[step_index]
- sample = sample / ((sigma**2 + 1) ** 0.5)
- return sample
- else:
- return sample * self.latent_scales[step_index]
-
- def get_lms_coefficient(self, order, t, current_order):
- """
- Compute a linear multistep coefficient.
-
- Args:
-            order (`int`): the effective multistep order used for this step.
-            t (`int`): the index of the current timestep on the sigma schedule.
-            current_order (`int`): the index of the coefficient (and stored derivative) being computed.
- """
-
- def lms_derivative(tau):
- prod = 1.0
- for k in range(order):
- if current_order == k:
- continue
- prod *= (tau - self.sigmas[t - k]) / (self.sigmas[t - current_order] - self.sigmas[t - k])
- return prod
-
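-        # integrate.quad below numerically integrates the Lagrange basis polynomial defined by
-        # lms_derivative over [sigma_t, sigma_{t+1}]; the result is the linear multistep coefficient
-        #   c_j = ∫ prod_{k != j} (tau - sigma_{t-k}) / (sigma_{t-j} - sigma_{t-k}) d tau,
-        # where j corresponds to `current_order`.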
- integrated_coeff = integrate.quad(lms_derivative, self.sigmas[t], self.sigmas[t + 1], epsrel=1e-4)[0]
-
- return integrated_coeff
-
- def set_timesteps(self, num_inference_steps: int, preconfig_order: int = 4):
- """
- Sets the timesteps used for the diffusion chain. Supporting function to be run before inference.
-
- Args:
- num_inference_steps (`int`):
- the number of diffusion steps used when generating samples with a pre-trained model.
- """
- self.num_inference_steps = num_inference_steps
-
- timesteps = np.linspace(0, self.config.num_train_timesteps - 1, num_inference_steps, dtype=float)[::-1].copy()
- sigmas = np.array(((1 - self.alphas_cumprod) / self.alphas_cumprod) ** 0.5)
- sigmas = np.interp(timesteps, np.arange(0, len(sigmas)), sigmas)
- sigmas = np.concatenate([sigmas, [0.0]]).astype(np.float32)
- self.sigmas = paddle.to_tensor(sigmas)
- self.timesteps = paddle.to_tensor(timesteps, dtype="float32")
-
- self.derivatives = []
- if self.preconfig:
- self.order = preconfig_order
- self.lms_coeffs = []
- self.latent_scales = [1.0 / ((sigma**2 + 1) ** 0.5) for sigma in self.sigmas]
- for step_index in range(self.num_inference_steps):
- order = min(step_index + 1, preconfig_order)
- self.lms_coeffs.append(
- [self.get_lms_coefficient(order, step_index, curr_order) for curr_order in range(order)]
- )
-
- def step(
- self,
- model_output: paddle.Tensor,
- timestep: Union[float, paddle.Tensor],
- sample: paddle.Tensor,
- order: int = 4,
- return_dict: bool = True,
- **kwargs
- ) -> Union[PreconfigLMSDiscreteSchedulerOutput, Tuple]:
- """
- Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion
- process from the learned model outputs (most often the predicted noise).
-
- Args:
- model_output (`paddle.Tensor`): direct output from learned diffusion model.
- timestep (`float`): current timestep in the diffusion chain.
- sample (`paddle.Tensor`):
- current instance of sample being created by diffusion process.
- order: coefficient for multi-step inference.
- return_dict (`bool`): option for returning tuple rather than PreconfigLMSDiscreteSchedulerOutput class
- Args in kwargs:
- step_index (`int`):
- return_pred_original_sample (`bool`): option for return pred_original_sample
-
- Returns:
- [`~schedulers.scheduling_utils.PreconfigLMSDiscreteSchedulerOutput`] or `tuple`:
- [`~schedulers.scheduling_utils.PreconfigLMSDiscreteSchedulerOutput`] if `return_dict` is True, otherwise a `tuple`.
- When returning a tuple, the first element is the sample tensor.
-
- """
- if not self.is_scale_input_called:
- warnings.warn(
- "The `scale_model_input` function should be called before `step` to ensure correct denoising. "
- "See `StableDiffusionPipeline` for a usage example."
- )
- if kwargs.get("return_pred_original_sample") is not None:
- return_pred_original_sample = kwargs["return_pred_original_sample"]
- else:
- return_pred_original_sample = True
- if kwargs.get("step_index") is not None:
- step_index = kwargs["step_index"]
- else:
- step_index = (self.timesteps == timestep).nonzero().item()
- if self.config.prediction_type == "epsilon" and not return_pred_original_sample:
-            # pred_original_sample is not needed; for epsilon prediction the derivative equals the model output
- self.derivatives.append(model_output)
- pred_original_sample = None
- else:
- sigma = self.sigmas[step_index]
- # 1. compute predicted original sample (x_0) from sigma-scaled predicted noise
- if self.config.prediction_type == "epsilon":
- pred_original_sample = sample - sigma * model_output
- elif self.config.prediction_type == "v_prediction":
- # * c_out + input * c_skip
- pred_original_sample = model_output * (-sigma / (sigma**2 + 1) ** 0.5) + (sample / (sigma**2 + 1))
- else:
- raise ValueError(
- f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, or `v_prediction`"
- )
- # 2. Convert to an ODE derivative
- derivative = (sample - pred_original_sample) / sigma
- self.derivatives.append(derivative)
-
- if len(self.derivatives) > order:
- self.derivatives.pop(0)
-
- if not self.preconfig:
-            # 3. If not preconfigured, compute the linear multistep coefficients.
- order = min(step_index + 1, order)
- lms_coeffs = [self.get_lms_coefficient(order, step_index, curr_order) for curr_order in range(order)]
- # 4. Compute previous sample based on the derivatives path
- prev_sample = sample + sum(
- coeff * derivative for coeff, derivative in zip(lms_coeffs, reversed(self.derivatives))
- )
- else:
-            # 3. If preconfigured, directly compute the previous sample based on the derivatives path
- prev_sample = sample + sum(
- coeff * derivative
- for coeff, derivative in zip(self.lms_coeffs[step_index], reversed(self.derivatives))
- )
-
- if not return_dict:
- if not return_pred_original_sample:
- return (prev_sample,)
- else:
- return (prev_sample, pred_original_sample)
-
- return PreconfigLMSDiscreteSchedulerOutput(prev_sample=prev_sample, pred_original_sample=pred_original_sample)
-
- def add_noise(
- self,
- original_samples: paddle.Tensor,
- noise: paddle.Tensor,
- timesteps: paddle.Tensor,
- ) -> paddle.Tensor:
- # Make sure sigmas and timesteps have the same dtype as original_samples
- sigmas = self.sigmas.cast(original_samples.dtype)
- schedule_timesteps = self.timesteps
-
- step_indices = [(schedule_timesteps == t).nonzero().item() for t in timesteps]
-
- sigma = sigmas[step_indices].flatten()
- while len(sigma.shape) < len(original_samples.shape):
- sigma = sigma.unsqueeze(-1)
-
- noisy_samples = original_samples + noise * sigma
- return noisy_samples
-
- def __len__(self):
- return self.config.num_train_timesteps
diff --git a/spaces/ASJMO/freegpt/client/css/checkbox.css b/spaces/ASJMO/freegpt/client/css/checkbox.css
deleted file mode 100644
index 94955b604ea3fab493a50d740fb29be1a8ef6cd3..0000000000000000000000000000000000000000
--- a/spaces/ASJMO/freegpt/client/css/checkbox.css
+++ /dev/null
@@ -1,55 +0,0 @@
-.checkbox input {
- height: 0;
- width: 0;
- display: none;
-}
-
-.checkbox span {
- font-size: 0.875rem;
- color: var(--colour-2);
- margin-left: 4px;
-}
-
-.checkbox label:after {
- content: "";
- position: absolute;
- top: 50%;
- transform: translateY(-50%);
- left: 5px;
- width: 20px;
- height: 20px;
- background: var(--blur-border);
- border-radius: 90px;
- transition: 0.33s;
-}
-
-.checkbox input + label:after,
-.checkbox input:checked + label {
- background: var(--colour-3);
-}
-
-.checkbox input + label,
-.checkbox input:checked + label:after {
- background: var(--blur-border);
-}
-
-.checkbox input:checked + label:after {
- left: calc(100% - 5px - 20px);
-}
-
-@media screen and (max-width: 990px) {
- .checkbox label {
- width: 25px;
- height: 15px;
- }
-
- .checkbox label:after {
- left: 2px;
- width: 10px;
- height: 10px;
- }
-
- .checkbox input:checked + label:after {
- left: calc(100% - 2px - 10px);
- }
-}
diff --git a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/routes/admin/export/+server.ts b/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/routes/admin/export/+server.ts
deleted file mode 100644
index 2cdae1f2aa779316e5998907a2b0484be2847cfc..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/routes/admin/export/+server.ts
+++ /dev/null
@@ -1,166 +0,0 @@
-import {
- PARQUET_EXPORT_DATASET,
- PARQUET_EXPORT_HF_TOKEN,
- PARQUET_EXPORT_SECRET,
-} from "$env/static/private";
-import { collections } from "$lib/server/database";
-import type { Message } from "$lib/types/Message";
-import { error } from "@sveltejs/kit";
-import { pathToFileURL } from "node:url";
-import { unlink } from "node:fs/promises";
-import { uploadFile } from "@huggingface/hub";
-import parquet from "parquetjs";
-import { z } from "zod";
-
-// Trigger like this:
-// curl -X POST "http://localhost:5173/chat/admin/export" -H "Authorization: Bearer " -H "Content-Type: application/json" -d '{"model": "OpenAssistant/oasst-sft-6-llama-30b-xor"}'
-
-export async function POST({ request }) {
- if (!PARQUET_EXPORT_SECRET || !PARQUET_EXPORT_DATASET || !PARQUET_EXPORT_HF_TOKEN) {
- throw error(500, "Parquet export is not configured.");
- }
-
- if (request.headers.get("Authorization") !== `Bearer ${PARQUET_EXPORT_SECRET}`) {
- throw error(403);
- }
-
- const { model } = z
- .object({
- model: z.string(),
- })
- .parse(await request.json());
-
- const schema = new parquet.ParquetSchema({
- title: { type: "UTF8" },
- created_at: { type: "TIMESTAMP_MILLIS" },
- updated_at: { type: "TIMESTAMP_MILLIS" },
- messages: {
- repeated: true,
- fields: {
- from: { type: "UTF8" },
- content: { type: "UTF8" },
- score: { type: "INT_8", optional: true },
- },
- },
- });
-
- const fileName = `/tmp/conversations-${new Date().toJSON().slice(0, 10)}-${Date.now()}.parquet`;
-
- const writer = await parquet.ParquetWriter.openFile(schema, fileName);
-
- let count = 0;
- console.log("Exporting conversations for model", model);
-
- for await (const conversation of collections.settings.aggregate<{
- title: string;
- created_at: Date;
- updated_at: Date;
- messages: Message[];
- }>([
- {
- $match: {
- shareConversationsWithModelAuthors: true,
- sessionId: { $exists: true },
- userId: { $exists: false },
- },
- },
- {
- $lookup: {
- from: "conversations",
- localField: "sessionId",
- foreignField: "sessionId",
- as: "conversations",
- pipeline: [{ $match: { model, userId: { $exists: false } } }],
- },
- },
- { $unwind: "$conversations" },
- {
- $project: {
- title: "$conversations.title",
- created_at: "$conversations.createdAt",
- updated_at: "$conversations.updatedAt",
- messages: "$conversations.messages",
- },
- },
- ])) {
- await writer.appendRow({
- title: conversation.title,
- created_at: conversation.created_at,
- updated_at: conversation.updated_at,
- messages: conversation.messages.map((message: Message) => ({
- from: message.from,
- content: message.content,
- ...(message.score ? { score: message.score } : undefined),
- })),
- });
- ++count;
-
- if (count % 1_000 === 0) {
- console.log("Exported", count, "conversations");
- }
- }
-
- console.log("exporting convos with userId");
-
- for await (const conversation of collections.settings.aggregate<{
- title: string;
- created_at: Date;
- updated_at: Date;
- messages: Message[];
- }>([
- { $match: { shareConversationsWithModelAuthors: true, userId: { $exists: true } } },
- {
- $lookup: {
- from: "conversations",
- localField: "userId",
- foreignField: "userId",
- as: "conversations",
- pipeline: [{ $match: { model } }],
- },
- },
- { $unwind: "$conversations" },
- {
- $project: {
- title: "$conversations.title",
- created_at: "$conversations.createdAt",
- updated_at: "$conversations.updatedAt",
- messages: "$conversations.messages",
- },
- },
- ])) {
- await writer.appendRow({
- title: conversation.title,
- created_at: conversation.created_at,
- updated_at: conversation.updated_at,
- messages: conversation.messages.map((message: Message) => ({
- from: message.from,
- content: message.content,
- ...(message.score ? { score: message.score } : undefined),
- })),
- });
- ++count;
-
- if (count % 1_000 === 0) {
- console.log("Exported", count, "conversations");
- }
- }
-
- await writer.close();
-
- console.log("Uploading", fileName, "to Hugging Face Hub");
-
- await uploadFile({
- file: pathToFileURL(fileName),
- credentials: { accessToken: PARQUET_EXPORT_HF_TOKEN },
- repo: {
- type: "dataset",
- name: PARQUET_EXPORT_DATASET,
- },
- });
-
- console.log("Upload done");
-
- await unlink(fileName);
-
- return new Response();
-}
diff --git a/spaces/AdamGustavsson/AnimeganV2Webcam/app.py b/spaces/AdamGustavsson/AnimeganV2Webcam/app.py
deleted file mode 100644
index 923fccf98f7e1f0c0c5402b752e5dbd01dd4ed91..0000000000000000000000000000000000000000
--- a/spaces/AdamGustavsson/AnimeganV2Webcam/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("spaces/akhaliq/AnimeGANv2", inputs="webcam").launch()
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/rules/selector/sde_team.py b/spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/rules/selector/sde_team.py
deleted file mode 100644
index 7a4b571ad2ad409b1071a3752a64d35ed86bded4..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/rules/selector/sde_team.py
+++ /dev/null
@@ -1,72 +0,0 @@
-from __future__ import annotations
-
-from typing import TYPE_CHECKING, List
-
-from agentverse.message import Message
-
-from . import selector_registry as SelectorRegistry
-from .base import BaseSelector
-
-import json
-import re
-
-if TYPE_CHECKING:
- from agentverse.environments import BaseEnvironment
-
-def extract(content: str, keyword: str):
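-    # Collects every line of `content` that appears after the first line starting with `keyword`;
-    # later lines that themselves start with `keyword` are skipped as well.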
- result = ""
- flag = False
- for line in content.split('\n'):
- if line.strip().startswith(keyword):
- flag = True
- continue
- if flag:
- result += line
- result += "\n"
- return result
-
-
-@SelectorRegistry.register("sde_team")
-class SdeTeamSelector(BaseSelector):
- def select_message(self, environment: BaseEnvironment, messages: List[Message]) -> List[Message]:
- last_sender = environment.last_messages[0].sender
- selected = messages
-
- if last_sender == "unit_test_generator":
- unit_tests = set()
- for message in selected:
- unit_test = extract(message.content, ":")
- if unit_test not in unit_tests:
-                    unit_tests.add(unit_test)
- unit_tests = list(unit_tests)
- environment.rule_params["unit_tests"] = str(unit_tests)
- new_message = Message(
- content="",
- sender="unit_test_generator",
- receiver=[],
- ) # TODO: set the content of the message
- selected = [new_message]
-
- elif last_sender == "code_writer":
- cur_code = extract(selected[0].content, ":")
- environment.rule_params["code"] = cur_code
-
- from .code_api import execute_unit_tests
- feedback = execute_unit_tests(environment.rule_params["code"], eval(environment.rule_params["unit_tests"]))
-
- environment.rule_params["feedback"] = feedback
- selected[0].content = f":\n\n{cur_code}\n\n:\n{feedback}"
- f_dict = json.loads(feedback)
- if f_dict["is_passing"]:
- environment.rule_params["end_flag"] = True
-
- elif last_sender == "code_reviewer":
- code_review = selected[0].content
- cur_code = environment.rule_params["code"]
- selected[0].content = f":\n\n{cur_code}\n\n{code_review}"
- feedback = environment.rule_params["feedback"]
- f_dict = json.loads(feedback)
- if f_dict["is_passing"]:
- environment.rule_params["end_flag"] = True
-
- return selected
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/agentverse/memory_manipulator/plan.py b/spaces/AgentVerse/agentVerse/agentverse/memory_manipulator/plan.py
deleted file mode 100644
index 530190c0f9d233e4ceed3385fa5836b3214b8c4a..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/agentverse/memory_manipulator/plan.py
+++ /dev/null
@@ -1,79 +0,0 @@
-from __future__ import annotations
-
-from logging import getLogger
-from typing import List, TYPE_CHECKING
-
-from . import memory_manipulator_registry
-from .base import BaseMemoryManipulator
-from ..message import Message
-
-if TYPE_CHECKING:
- from agentverse.memory import VectorStoreMemory
- from agentverse.agents.reflection_agent import ReflectionAgent
-
-logger = getLogger(__file__)
-
-PLAN_PROMPT = """Now you are act for as an agent named ${agent_name} in a virtual world.
-You might need to performing reaction to the observation.
-Based on the following information:
-(1) The agent's description: ${role_description}
-(2) Current time is ${current_time}
-(3) Your history memory is ${chat_history}
-
-Now is ${current_time}. If all plans have expired, you have to plan for\
-the next time period.
-Do you need to generate new plans?
-If yes, tell me the new plan, including the time period.
-If no, just tell me No."""
-
-
-@memory_manipulator_registry.register("plan")
-class Plan(BaseMemoryManipulator):
- """
- Memory manipulator for plan.
- """
- memory: VectorStoreMemory = None
- agent: ReflectionAgent = None # specify ReflectionAgent
- # later considering removing current_time to be more general
- # and then change to BaseAgent
- plan: List[str] = []
-
- def manipulate_memory(self) -> str:
- """
- Generate new plans
- """
- prompt = self._fill_prompt_template()
- result = self.agent.llm.generate_response(prompt).content
- result = result.strip('.')
- logger.info(f"{self.agent.name}'s new plan: {result}")
- if result == "No":
- return ""
- else:
- self.plan.append(result)
- plan_message = Message(
- content=result,
- sender=self.agent.name,
- receiver={self.agent.name})
- self.agent.memory.add_message([plan_message])
- return result
-
-
- def _fill_prompt_template(self) -> str:
- """Fill the placeholders in the prompt template
-
-        In the plan memory manipulator, four placeholders are supported:
-        - ${agent_name}: the name of the agent
-        - ${role_description}: the description of the role of the agent
-        - ${chat_history}: the chat history of the agent
-        - ${current_time}: the current time in the environment
- """
- input_arguments = {
- "agent_name": self.agent.name,
- "role_description": self.agent.role_description,
- "chat_history": self.agent.memory.to_string(add_sender_prefix=True),
- "current_time": self.agent.current_time,
- }
- return PLAN_PROMPT.format(**input_arguments)
-
- def reset(self) -> None:
- pass
diff --git a/spaces/AhmedM20/Email_Marketing_Content_Generator/README.md b/spaces/AhmedM20/Email_Marketing_Content_Generator/README.md
deleted file mode 100644
index 040309416610a4f38056d6378ff0010cc565804c..0000000000000000000000000000000000000000
--- a/spaces/AhmedM20/Email_Marketing_Content_Generator/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Email Marketing Generator
-emoji: 🏃
-colorFrom: yellow
-colorTo: pink
-sdk: gradio
-sdk_version: 3.41.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/AlexWang/lama/bin/paper_runfiles/blur_tests.sh b/spaces/AlexWang/lama/bin/paper_runfiles/blur_tests.sh
deleted file mode 100644
index 8f204a4c643d08935e5561ed27a286536643958d..0000000000000000000000000000000000000000
--- a/spaces/AlexWang/lama/bin/paper_runfiles/blur_tests.sh
+++ /dev/null
@@ -1,37 +0,0 @@
-##!/usr/bin/env bash
-#
-## !!! file set to make test_large_30k from the vanilla test_large: configs/test_large_30k.lst
-#
-## paths to data are valid for mml7
-#PLACES_ROOT="/data/inpainting/Places365"
-#OUT_DIR="/data/inpainting/paper_data/Places365_val_test"
-#
-#source "$(dirname $0)/env.sh"
-#
-#for datadir in test_large_30k # val_large
-#do
-# for conf in random_thin_256 random_medium_256 random_thick_256 random_thin_512 random_medium_512 random_thick_512
-# do
-# "$BINDIR/gen_mask_dataset.py" "$CONFIGDIR/data_gen/${conf}.yaml" \
-# "$PLACES_ROOT/$datadir" "$OUT_DIR/$datadir/$conf" --n-jobs 8
-#
-# "$BINDIR/calc_dataset_stats.py" --samples-n 20 "$OUT_DIR/$datadir/$conf" "$OUT_DIR/$datadir/${conf}_stats"
-# done
-#
-# for conf in segm_256 segm_512
-# do
-# "$BINDIR/gen_mask_dataset.py" "$CONFIGDIR/data_gen/${conf}.yaml" \
-# "$PLACES_ROOT/$datadir" "$OUT_DIR/$datadir/$conf" --n-jobs 2
-#
-# "$BINDIR/calc_dataset_stats.py" --samples-n 20 "$OUT_DIR/$datadir/$conf" "$OUT_DIR/$datadir/${conf}_stats"
-# done
-#done
-#
-#IN_DIR="/data/inpainting/paper_data/Places365_val_test/test_large_30k/random_medium_512"
-#PRED_DIR="/data/inpainting/predictions/final/images/r.suvorov_2021-03-05_17-08-35_train_ablv2_work_resume_epoch37/random_medium_512"
-#BLUR_OUT_DIR="/data/inpainting/predictions/final/blur/images"
-#
-#for b in 0.1
-#
-#"$BINDIR/blur_predicts.py" "$BASEDIR/../../configs/eval2.yaml" "$CUR_IN_DIR" "$CUR_OUT_DIR" "$CUR_EVAL_DIR"
-#
diff --git a/spaces/AlexZou/Deploy_Restoration/README.md b/spaces/AlexZou/Deploy_Restoration/README.md
deleted file mode 100644
index e1411f4e55c96cef4f5382d156f740954f4da43b..0000000000000000000000000000000000000000
--- a/spaces/AlexZou/Deploy_Restoration/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Deploy Restoration
-emoji: 👀
-colorFrom: indigo
-colorTo: purple
-sdk: gradio
-sdk_version: 3.9.1
-app_file: app.py
-pinned: false
-license: openrail
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/AlignmentResearch/tuned-lens/README.md b/spaces/AlignmentResearch/tuned-lens/README.md
deleted file mode 100644
index 9e77df2f9974291988a66ba8982098cef3473962..0000000000000000000000000000000000000000
--- a/spaces/AlignmentResearch/tuned-lens/README.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-title: Tuned Lens
-emoji: 🔎
-colorFrom: pink
-colorTo: blue
-sdk: docker
-pinned: false
-license: mit
----
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/schedulers/multistep_dpm_solver_inverse.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/schedulers/multistep_dpm_solver_inverse.md
deleted file mode 100644
index 1b3348a5a3ea5c98f7d55639c2268693a639dc84..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/schedulers/multistep_dpm_solver_inverse.md
+++ /dev/null
@@ -1,22 +0,0 @@
-
-
-# Inverse Multistep DPM-Solver (DPMSolverMultistepInverse)
-
-## Overview
-
-This scheduler is the inverted scheduler of [DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps](https://arxiv.org/abs/2206.00927) and [DPM-Solver++: Fast Solver for Guided Sampling of Diffusion Probabilistic Models
-](https://arxiv.org/abs/2211.01095) by Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu.
-The implementation is mostly based on the DDIM inversion definition of [Null-text Inversion for Editing Real Images using Guided Diffusion Models](https://arxiv.org/pdf/2211.09794.pdf) and the ad-hoc notebook implementation for DiffEdit latent inversion [here](https://github.com/Xiang-cd/DiffEdit-stable-diffusion/blob/main/diffedit.ipynb).
-
-## DPMSolverMultistepInverseScheduler
-[[autodoc]] DPMSolverMultistepInverseScheduler
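For orientation, a minimal usage sketch follows; it assumes the inverse scheduler keeps the standard `diffusers` constructor arguments and the `set_timesteps`/`timesteps` API of `DPMSolverMultistepScheduler`, and the parameter values are illustrative placeholders rather than recommended defaults.

```python
from diffusers import DPMSolverMultistepInverseScheduler

# Illustrative beta schedule (assumed values, not defaults tied to any particular checkpoint).
inverse_scheduler = DPMSolverMultistepInverseScheduler(
    num_train_timesteps=1000,
    beta_start=0.00085,
    beta_end=0.012,
    beta_schedule="scaled_linear",
)

# Prepare the inversion schedule and inspect the first few timesteps.
inverse_scheduler.set_timesteps(num_inference_steps=50)
print(inverse_scheduler.timesteps[:5])
```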
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/imagic_stable_diffusion.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/imagic_stable_diffusion.py
deleted file mode 100644
index 56bd381a9e65aa8edbe56cf7f22127c5c449b7ee..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/imagic_stable_diffusion.py
+++ /dev/null
@@ -1,496 +0,0 @@
-"""
- modeled after the textual_inversion.py / train_dreambooth.py and the work
- of justinpinkney here: https://github.com/justinpinkney/stable-diffusion/blob/main/notebooks/imagic.ipynb
-"""
-import inspect
-import warnings
-from typing import List, Optional, Union
-
-import numpy as np
-import PIL
-import torch
-import torch.nn.functional as F
-from accelerate import Accelerator
-
-# TODO: remove and import from diffusers.utils when the new version of diffusers is released
-from packaging import version
-from tqdm.auto import tqdm
-from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer
-
-from diffusers import DiffusionPipeline
-from diffusers.models import AutoencoderKL, UNet2DConditionModel
-from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput
-from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
-from diffusers.schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler
-from diffusers.utils import logging
-
-
-if version.parse(version.parse(PIL.__version__).base_version) >= version.parse("9.1.0"):
- PIL_INTERPOLATION = {
- "linear": PIL.Image.Resampling.BILINEAR,
- "bilinear": PIL.Image.Resampling.BILINEAR,
- "bicubic": PIL.Image.Resampling.BICUBIC,
- "lanczos": PIL.Image.Resampling.LANCZOS,
- "nearest": PIL.Image.Resampling.NEAREST,
- }
-else:
- PIL_INTERPOLATION = {
- "linear": PIL.Image.LINEAR,
- "bilinear": PIL.Image.BILINEAR,
- "bicubic": PIL.Image.BICUBIC,
- "lanczos": PIL.Image.LANCZOS,
- "nearest": PIL.Image.NEAREST,
- }
-# ------------------------------------------------------------------------------
-
-logger = logging.get_logger(__name__) # pylint: disable=invalid-name
-
-
-def preprocess(image):
- w, h = image.size
- w, h = (x - x % 32 for x in (w, h)) # resize to integer multiple of 32
- image = image.resize((w, h), resample=PIL_INTERPOLATION["lanczos"])
- image = np.array(image).astype(np.float32) / 255.0
- image = image[None].transpose(0, 3, 1, 2)
- image = torch.from_numpy(image)
- return 2.0 * image - 1.0
-
-
-class ImagicStableDiffusionPipeline(DiffusionPipeline):
- r"""
- Pipeline for imagic image editing.
- See paper here: https://arxiv.org/pdf/2210.09276.pdf
-
- This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
- library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
- Args:
- vae ([`AutoencoderKL`]):
- Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- text_encoder ([`CLIPTextModel`]):
- Frozen text-encoder. Stable Diffusion uses the text portion of
- [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
- the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- tokenizer (`CLIPTokenizer`):
- Tokenizer of class
- [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
- scheduler ([`SchedulerMixin`]):
- A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
- [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
- safety_checker ([`StableDiffusionSafetyChecker`]):
-            Classification module that estimates whether generated images could be considered offensive or harmful.
- Please, refer to the [model card](https://huggingface.co/CompVis/stable-diffusion-v1-4) for details.
- feature_extractor ([`CLIPImageProcessor`]):
- Model that extracts features from generated images to be used as inputs for the `safety_checker`.
- """
-
- def __init__(
- self,
- vae: AutoencoderKL,
- text_encoder: CLIPTextModel,
- tokenizer: CLIPTokenizer,
- unet: UNet2DConditionModel,
- scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler],
- safety_checker: StableDiffusionSafetyChecker,
- feature_extractor: CLIPImageProcessor,
- ):
- super().__init__()
- self.register_modules(
- vae=vae,
- text_encoder=text_encoder,
- tokenizer=tokenizer,
- unet=unet,
- scheduler=scheduler,
- safety_checker=safety_checker,
- feature_extractor=feature_extractor,
- )
-
- def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"):
- r"""
- Enable sliced attention computation.
- When this option is enabled, the attention module will split the input tensor in slices, to compute attention
- in several steps. This is useful to save some memory in exchange for a small speed decrease.
- Args:
- slice_size (`str` or `int`, *optional*, defaults to `"auto"`):
- When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
- a number is provided, uses as many slices as `attention_head_dim // slice_size`. In this case,
- `attention_head_dim` must be a multiple of `slice_size`.
- """
- if slice_size == "auto":
- # half the attention head size is usually a good trade-off between
- # speed and memory
- slice_size = self.unet.config.attention_head_dim // 2
- self.unet.set_attention_slice(slice_size)
-
- def disable_attention_slicing(self):
- r"""
- Disable sliced attention computation. If `enable_attention_slicing` was previously invoked, this method will go
- back to computing attention in one step.
- """
- # set slice_size = `None` to disable `attention slicing`
- self.enable_attention_slicing(None)
-
- def train(
- self,
- prompt: Union[str, List[str]],
- image: Union[torch.FloatTensor, PIL.Image.Image],
- height: Optional[int] = 512,
- width: Optional[int] = 512,
- generator: Optional[torch.Generator] = None,
- embedding_learning_rate: float = 0.001,
- diffusion_model_learning_rate: float = 2e-6,
- text_embedding_optimization_steps: int = 500,
- model_fine_tuning_optimization_steps: int = 1000,
- **kwargs,
- ):
- r"""
- Function invoked when calling the pipeline for generation.
- Args:
- prompt (`str` or `List[str]`):
- The prompt or prompts to guide the image generation.
- height (`int`, *optional*, defaults to 512):
- The height in pixels of the generated image.
- width (`int`, *optional*, defaults to 512):
- The width in pixels of the generated image.
- num_inference_steps (`int`, *optional*, defaults to 50):
- The number of denoising steps. More denoising steps usually lead to a higher quality image at the
- expense of slower inference.
- guidance_scale (`float`, *optional*, defaults to 7.5):
- Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
- `guidance_scale` is defined as `w` of equation 2. of [Imagen
- Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
- 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
- usually at the expense of lower image quality.
- eta (`float`, *optional*, defaults to 0.0):
- Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
- [`schedulers.DDIMScheduler`], will be ignored for others.
- generator (`torch.Generator`, *optional*):
- A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
- deterministic.
- latents (`torch.FloatTensor`, *optional*):
- Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
- generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
-                tensor will be generated by sampling using the supplied random `generator`.
- output_type (`str`, *optional*, defaults to `"pil"`):
- The output format of the generate image. Choose between
- [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `nd.array`.
- return_dict (`bool`, *optional*, defaults to `True`):
- Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
- plain tuple.
- Returns:
- [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
-            [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
- When returning a tuple, the first element is a list with the generated images, and the second element is a
- list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
- (nsfw) content, according to the `safety_checker`.
- """
- accelerator = Accelerator(
- gradient_accumulation_steps=1,
- mixed_precision="fp16",
- )
-
- if "torch_device" in kwargs:
- device = kwargs.pop("torch_device")
- warnings.warn(
- "`torch_device` is deprecated as an input argument to `__call__` and will be removed in v0.3.0."
- " Consider using `pipe.to(torch_device)` instead."
- )
-
- if device is None:
- device = "cuda" if torch.cuda.is_available() else "cpu"
- self.to(device)
-
- if height % 8 != 0 or width % 8 != 0:
- raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
-
- # Freeze vae and unet
- self.vae.requires_grad_(False)
- self.unet.requires_grad_(False)
- self.text_encoder.requires_grad_(False)
- self.unet.eval()
- self.vae.eval()
- self.text_encoder.eval()
-
- if accelerator.is_main_process:
- accelerator.init_trackers(
- "imagic",
- config={
- "embedding_learning_rate": embedding_learning_rate,
- "text_embedding_optimization_steps": text_embedding_optimization_steps,
- },
- )
-
- # get text embeddings for prompt
- text_input = self.tokenizer(
- prompt,
- padding="max_length",
- max_length=self.tokenizer.model_max_length,
- truncation=True,
- return_tensors="pt",
- )
- text_embeddings = torch.nn.Parameter(
- self.text_encoder(text_input.input_ids.to(self.device))[0], requires_grad=True
- )
- text_embeddings = text_embeddings.detach()
- text_embeddings.requires_grad_()
- text_embeddings_orig = text_embeddings.clone()
-
- # Initialize the optimizer
- optimizer = torch.optim.Adam(
- [text_embeddings], # only optimize the embeddings
- lr=embedding_learning_rate,
- )
-
- if isinstance(image, PIL.Image.Image):
- image = preprocess(image)
-
- latents_dtype = text_embeddings.dtype
- image = image.to(device=self.device, dtype=latents_dtype)
- init_latent_image_dist = self.vae.encode(image).latent_dist
- image_latents = init_latent_image_dist.sample(generator=generator)
- image_latents = 0.18215 * image_latents
-
- progress_bar = tqdm(range(text_embedding_optimization_steps), disable=not accelerator.is_local_main_process)
- progress_bar.set_description("Steps")
-
- global_step = 0
-
- logger.info("First optimizing the text embedding to better reconstruct the init image")
- for _ in range(text_embedding_optimization_steps):
- with accelerator.accumulate(text_embeddings):
- # Sample noise that we'll add to the latents
- noise = torch.randn(image_latents.shape).to(image_latents.device)
- timesteps = torch.randint(1000, (1,), device=image_latents.device)
-
- # Add noise to the latents according to the noise magnitude at each timestep
- # (this is the forward diffusion process)
- noisy_latents = self.scheduler.add_noise(image_latents, noise, timesteps)
-
- # Predict the noise residual
- noise_pred = self.unet(noisy_latents, timesteps, text_embeddings).sample
-
- loss = F.mse_loss(noise_pred, noise, reduction="none").mean([1, 2, 3]).mean()
- accelerator.backward(loss)
-
- optimizer.step()
- optimizer.zero_grad()
-
- # Checks if the accelerator has performed an optimization step behind the scenes
- if accelerator.sync_gradients:
- progress_bar.update(1)
- global_step += 1
-
- logs = {"loss": loss.detach().item()} # , "lr": lr_scheduler.get_last_lr()[0]}
- progress_bar.set_postfix(**logs)
- accelerator.log(logs, step=global_step)
-
- accelerator.wait_for_everyone()
-
- text_embeddings.requires_grad_(False)
-
- # Now we fine tune the unet to better reconstruct the image
- self.unet.requires_grad_(True)
- self.unet.train()
- optimizer = torch.optim.Adam(
- self.unet.parameters(), # only optimize unet
- lr=diffusion_model_learning_rate,
- )
- progress_bar = tqdm(range(model_fine_tuning_optimization_steps), disable=not accelerator.is_local_main_process)
-
- logger.info("Next fine tuning the entire model to better reconstruct the init image")
- for _ in range(model_fine_tuning_optimization_steps):
- with accelerator.accumulate(self.unet.parameters()):
- # Sample noise that we'll add to the latents
- noise = torch.randn(image_latents.shape).to(image_latents.device)
- timesteps = torch.randint(1000, (1,), device=image_latents.device)
-
- # Add noise to the latents according to the noise magnitude at each timestep
- # (this is the forward diffusion process)
- noisy_latents = self.scheduler.add_noise(image_latents, noise, timesteps)
-
- # Predict the noise residual
- noise_pred = self.unet(noisy_latents, timesteps, text_embeddings).sample
-
- loss = F.mse_loss(noise_pred, noise, reduction="none").mean([1, 2, 3]).mean()
- accelerator.backward(loss)
-
- optimizer.step()
- optimizer.zero_grad()
-
- # Checks if the accelerator has performed an optimization step behind the scenes
- if accelerator.sync_gradients:
- progress_bar.update(1)
- global_step += 1
-
- logs = {"loss": loss.detach().item()} # , "lr": lr_scheduler.get_last_lr()[0]}
- progress_bar.set_postfix(**logs)
- accelerator.log(logs, step=global_step)
-
- accelerator.wait_for_everyone()
- self.text_embeddings_orig = text_embeddings_orig
- self.text_embeddings = text_embeddings
-
- @torch.no_grad()
- def __call__(
- self,
- alpha: float = 1.2,
- height: Optional[int] = 512,
- width: Optional[int] = 512,
- num_inference_steps: Optional[int] = 50,
- generator: Optional[torch.Generator] = None,
- output_type: Optional[str] = "pil",
- return_dict: bool = True,
- guidance_scale: float = 7.5,
- eta: float = 0.0,
- ):
- r"""
- Function invoked when calling the pipeline for generation.
- Args:
-            alpha (`float`, *optional*, defaults to 1.2):
-                Interpolation factor between the optimized text embedding (`alpha=0`, reconstructs the input image)
-                and the original prompt embedding (`alpha=1`); values above 1 extrapolate past the prompt.
- height (`int`, *optional*, defaults to 512):
- The height in pixels of the generated image.
- width (`int`, *optional*, defaults to 512):
- The width in pixels of the generated image.
- num_inference_steps (`int`, *optional*, defaults to 50):
- The number of denoising steps. More denoising steps usually lead to a higher quality image at the
- expense of slower inference.
- guidance_scale (`float`, *optional*, defaults to 7.5):
- Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
-                `guidance_scale` is defined as `w` of equation 2 of the [Imagen
-                Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
-                1`. A higher guidance scale encourages the model to generate images closely linked to the text `prompt`,
- usually at the expense of lower image quality.
- eta (`float`, *optional*, defaults to 0.0):
- Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
- [`schedulers.DDIMScheduler`], will be ignored for others.
- generator (`torch.Generator`, *optional*):
- A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
- deterministic.
- output_type (`str`, *optional*, defaults to `"pil"`):
-                The output format of the generated image. Choose between
-                [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- return_dict (`bool`, *optional*, defaults to `True`):
- Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
- plain tuple.
- Returns:
- [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
-            [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
- When returning a tuple, the first element is a list with the generated images, and the second element is a
- list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
- (nsfw) content, according to the `safety_checker`.
- """
- if height % 8 != 0 or width % 8 != 0:
- raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
- if self.text_embeddings is None:
- raise ValueError("Please run the pipe.train() before trying to generate an image.")
- if self.text_embeddings_orig is None:
- raise ValueError("Please run the pipe.train() before trying to generate an image.")
-
- text_embeddings = alpha * self.text_embeddings_orig + (1 - alpha) * self.text_embeddings
-
- # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
- # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
- # corresponds to doing no classifier free guidance.
- do_classifier_free_guidance = guidance_scale > 1.0
- # get unconditional embeddings for classifier free guidance
- if do_classifier_free_guidance:
- uncond_tokens = [""]
- max_length = self.tokenizer.model_max_length
- uncond_input = self.tokenizer(
- uncond_tokens,
- padding="max_length",
- max_length=max_length,
- truncation=True,
- return_tensors="pt",
- )
- uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0]
-
- # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
- seq_len = uncond_embeddings.shape[1]
- uncond_embeddings = uncond_embeddings.view(1, seq_len, -1)
-
- # For classifier free guidance, we need to do two forward passes.
- # Here we concatenate the unconditional and text embeddings into a single batch
- # to avoid doing two forward passes
- text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
-
- # get the initial random noise unless the user supplied it
-
- # Unlike in other pipelines, latents need to be generated in the target device
- # for 1-to-1 results reproducibility with the CompVis implementation.
- # However this currently doesn't work in `mps`.
- latents_shape = (1, self.unet.config.in_channels, height // 8, width // 8)
- latents_dtype = text_embeddings.dtype
- if self.device.type == "mps":
- # randn does not exist on mps
- latents = torch.randn(latents_shape, generator=generator, device="cpu", dtype=latents_dtype).to(
- self.device
- )
- else:
- latents = torch.randn(latents_shape, generator=generator, device=self.device, dtype=latents_dtype)
-
- # set timesteps
- self.scheduler.set_timesteps(num_inference_steps)
-
- # Some schedulers like PNDM have timesteps as arrays
- # It's more optimized to move all timesteps to correct device beforehand
- timesteps_tensor = self.scheduler.timesteps.to(self.device)
-
- # scale the initial noise by the standard deviation required by the scheduler
- latents = latents * self.scheduler.init_noise_sigma
-
- # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
- # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
- # and should be between [0, 1]
- accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
- extra_step_kwargs = {}
- if accepts_eta:
- extra_step_kwargs["eta"] = eta
-
- for i, t in enumerate(self.progress_bar(timesteps_tensor)):
- # expand the latents if we are doing classifier free guidance
- latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
- latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
-
- # predict the noise residual
- noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
-
- # perform guidance
- if do_classifier_free_guidance:
- noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
- noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
-
- # compute the previous noisy sample x_t -> x_t-1
- latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
-
- latents = 1 / 0.18215 * latents
- image = self.vae.decode(latents).sample
-
- image = (image / 2 + 0.5).clamp(0, 1)
-
- # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
- image = image.cpu().permute(0, 2, 3, 1).float().numpy()
-
- if self.safety_checker is not None:
- safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(
- self.device
- )
- image, has_nsfw_concept = self.safety_checker(
- images=image, clip_input=safety_checker_input.pixel_values.to(text_embeddings.dtype)
- )
- else:
- has_nsfw_concept = None
-
- if output_type == "pil":
- image = self.numpy_to_pil(image)
-
- if not return_dict:
- return (image, has_nsfw_concept)
-
- return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
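For orientation, a minimal sketch of how the Imagic pipeline above is typically driven end to end. The checkpoint id, image path, and the `imagic_stable_diffusion` custom-pipeline name are illustrative assumptions, not something the deleted file guarantees:

```python
import PIL.Image
import torch
from diffusers import DiffusionPipeline

# Load the pipeline as a community pipeline (name assumed); any SD v1.x checkpoint should work.
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    custom_pipeline="imagic_stable_diffusion",
).to("cuda")

init_image = PIL.Image.open("dog.png").convert("RGB").resize((512, 512))
generator = torch.Generator("cuda").manual_seed(0)

# Stages 1-2 (the train() method shown above): optimize the text embedding, then fine-tune the UNet.
pipe.train(prompt="a photo of a dog sitting", image=init_image, generator=generator)

# Stage 3 (__call__): interpolate/extrapolate between the prompt embedding and the optimized one.
result = pipe(alpha=1.2, guidance_scale=7.5, num_inference_steps=50)
result.images[0].save("edited_dog.png")
```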
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/onnxruntime/unconditional_image_generation/README.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/onnxruntime/unconditional_image_generation/README.md
deleted file mode 100644
index c28ecefc9a3002b2f6c6d3d97e53047e82ab2733..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/onnxruntime/unconditional_image_generation/README.md
+++ /dev/null
@@ -1,50 +0,0 @@
-## Training examples
-
-Creating a training image set is [described in a different document](https://huggingface.co/docs/datasets/image_process#image-datasets).
-
-### Installing the dependencies
-
-Before running the scripts, make sure to install the library's training dependencies:
-
-**Important**
-
-To make sure you can successfully run the latest versions of the example scripts, we highly recommend **installing from source** and keeping the install up to date as we update the example scripts frequently and install some example-specific requirements. To do this, execute the following steps in a new virtual environment:
-```bash
-git clone https://github.com/huggingface/diffusers
-cd diffusers
-pip install .
-```
-
-Then cd into the example folder and run
-```bash
-pip install -r requirements.txt
-```
-
-
-And initialize an [🤗Accelerate](https://github.com/huggingface/accelerate/) environment with:
-
-```bash
-accelerate config
-```
-
-#### Use ONNXRuntime to accelerate training
-
-In order to leverage onnxruntime to accelerate training, please use train_unconditional_ort.py
-
-The command to train a DDPM UNet model on the Oxford Flowers dataset with onnxruntime:
-
-```bash
-accelerate launch train_unconditional_ort.py \
- --dataset_name="huggan/flowers-102-categories" \
- --resolution=64 --center_crop --random_flip \
- --output_dir="ddpm-ema-flowers-64" \
- --use_ema \
- --train_batch_size=16 \
- --num_epochs=1 \
- --gradient_accumulation_steps=1 \
- --learning_rate=1e-4 \
- --lr_warmup_steps=500 \
- --mixed_precision=fp16
- ```
-
-Please contact Prathik Rao (prathikr), Sunghoon Choi (hanbitmyths), Ashwini Khade (askhade), or Peng Wang (pengwa) on github with any questions.
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_repaint.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_repaint.py
deleted file mode 100644
index 41e7450d2df68c40c3b4f49669513832e443c5e3..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_repaint.py
+++ /dev/null
@@ -1,344 +0,0 @@
-# Copyright 2023 ETH Zurich Computer Vision Lab and The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import math
-from dataclasses import dataclass
-from typing import Optional, Tuple, Union
-
-import numpy as np
-import torch
-
-from ..configuration_utils import ConfigMixin, register_to_config
-from ..utils import BaseOutput, randn_tensor
-from .scheduling_utils import SchedulerMixin
-
-
-@dataclass
-class RePaintSchedulerOutput(BaseOutput):
- """
- Output class for the scheduler's step function output.
-
- Args:
- prev_sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)` for images):
- Computed sample (x_{t-1}) of previous timestep. `prev_sample` should be used as next model input in the
- denoising loop.
- pred_original_sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)` for images):
- The predicted denoised sample (x_{0}) based on the model output from
- the current timestep. `pred_original_sample` can be used to preview progress or for guidance.
- """
-
- prev_sample: torch.FloatTensor
- pred_original_sample: torch.FloatTensor
-
-
-# Copied from diffusers.schedulers.scheduling_ddpm.betas_for_alpha_bar
-def betas_for_alpha_bar(
- num_diffusion_timesteps,
- max_beta=0.999,
- alpha_transform_type="cosine",
-):
- """
- Create a beta schedule that discretizes the given alpha_t_bar function, which defines the cumulative product of
- (1-beta) over time from t = [0,1].
-
- Contains a function alpha_bar that takes an argument t and transforms it to the cumulative product of (1-beta) up
- to that part of the diffusion process.
-
-
- Args:
- num_diffusion_timesteps (`int`): the number of betas to produce.
- max_beta (`float`): the maximum beta to use; use values lower than 1 to
- prevent singularities.
-        alpha_transform_type (`str`, *optional*, defaults to `cosine`): the type of noise schedule for alpha_bar.
- Choose from `cosine` or `exp`
-
- Returns:
- betas (`np.ndarray`): the betas used by the scheduler to step the model outputs
- """
- if alpha_transform_type == "cosine":
-
- def alpha_bar_fn(t):
- return math.cos((t + 0.008) / 1.008 * math.pi / 2) ** 2
-
- elif alpha_transform_type == "exp":
-
- def alpha_bar_fn(t):
- return math.exp(t * -12.0)
-
- else:
-        raise ValueError(f"Unsupported alpha_transform_type: {alpha_transform_type}")
-
- betas = []
- for i in range(num_diffusion_timesteps):
- t1 = i / num_diffusion_timesteps
- t2 = (i + 1) / num_diffusion_timesteps
- betas.append(min(1 - alpha_bar_fn(t2) / alpha_bar_fn(t1), max_beta))
- return torch.tensor(betas, dtype=torch.float32)
-
-
-class RePaintScheduler(SchedulerMixin, ConfigMixin):
- """
-    RePaint is a scheduler for DDPM inpainting inside a given mask.
-
- [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__`
- function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`.
- [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and
- [`~SchedulerMixin.from_pretrained`] functions.
-
- For more details, see the original paper: https://arxiv.org/pdf/2201.09865.pdf
-
- Args:
- num_train_timesteps (`int`): number of diffusion steps used to train the model.
- beta_start (`float`): the starting `beta` value of inference.
- beta_end (`float`): the final `beta` value.
- beta_schedule (`str`):
- the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
- `linear`, `scaled_linear`, `squaredcos_cap_v2` or `sigmoid`.
-        eta (`float`):
-            The weight of the added noise in a diffusion step. Its value is between 0.0 and 1.0; 0.0 corresponds to
-            the DDIM scheduler and 1.0 to the DDPM scheduler.
- trained_betas (`np.ndarray`, optional):
- option to pass an array of betas directly to the constructor to bypass `beta_start`, `beta_end` etc.
- variance_type (`str`):
- options to clip the variance used when adding noise to the denoised sample. Choose from `fixed_small`,
- `fixed_small_log`, `fixed_large`, `fixed_large_log`, `learned` or `learned_range`.
- clip_sample (`bool`, default `True`):
- option to clip predicted sample between -1 and 1 for numerical stability.
-
- """
-
- order = 1
-
- @register_to_config
- def __init__(
- self,
- num_train_timesteps: int = 1000,
- beta_start: float = 0.0001,
- beta_end: float = 0.02,
- beta_schedule: str = "linear",
- eta: float = 0.0,
- trained_betas: Optional[np.ndarray] = None,
- clip_sample: bool = True,
- ):
- if trained_betas is not None:
- self.betas = torch.from_numpy(trained_betas)
- elif beta_schedule == "linear":
- self.betas = torch.linspace(beta_start, beta_end, num_train_timesteps, dtype=torch.float32)
- elif beta_schedule == "scaled_linear":
- # this schedule is very specific to the latent diffusion model.
- self.betas = (
- torch.linspace(beta_start**0.5, beta_end**0.5, num_train_timesteps, dtype=torch.float32) ** 2
- )
- elif beta_schedule == "squaredcos_cap_v2":
- # Glide cosine schedule
- self.betas = betas_for_alpha_bar(num_train_timesteps)
- elif beta_schedule == "sigmoid":
- # GeoDiff sigmoid schedule
- betas = torch.linspace(-6, 6, num_train_timesteps)
- self.betas = torch.sigmoid(betas) * (beta_end - beta_start) + beta_start
- else:
-            raise NotImplementedError(f"{beta_schedule} is not implemented for {self.__class__}")
-
- self.alphas = 1.0 - self.betas
- self.alphas_cumprod = torch.cumprod(self.alphas, dim=0)
- self.one = torch.tensor(1.0)
-
- self.final_alpha_cumprod = torch.tensor(1.0)
-
- # standard deviation of the initial noise distribution
- self.init_noise_sigma = 1.0
-
- # setable values
- self.num_inference_steps = None
- self.timesteps = torch.from_numpy(np.arange(0, num_train_timesteps)[::-1].copy())
-
- self.eta = eta
-
- def scale_model_input(self, sample: torch.FloatTensor, timestep: Optional[int] = None) -> torch.FloatTensor:
- """
- Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
- current timestep.
-
- Args:
- sample (`torch.FloatTensor`): input sample
- timestep (`int`, optional): current timestep
-
- Returns:
- `torch.FloatTensor`: scaled input sample
- """
- return sample
-
- def set_timesteps(
- self,
- num_inference_steps: int,
- jump_length: int = 10,
- jump_n_sample: int = 10,
- device: Union[str, torch.device] = None,
- ):
- num_inference_steps = min(self.config.num_train_timesteps, num_inference_steps)
- self.num_inference_steps = num_inference_steps
-
- timesteps = []
-
- jumps = {}
- for j in range(0, num_inference_steps - jump_length, jump_length):
- jumps[j] = jump_n_sample - 1
-
- t = num_inference_steps
- while t >= 1:
- t = t - 1
- timesteps.append(t)
-
- if jumps.get(t, 0) > 0:
- jumps[t] = jumps[t] - 1
- for _ in range(jump_length):
- t = t + 1
- timesteps.append(t)
-
- timesteps = np.array(timesteps) * (self.config.num_train_timesteps // self.num_inference_steps)
- self.timesteps = torch.from_numpy(timesteps).to(device)
-
- def _get_variance(self, t):
- prev_timestep = t - self.config.num_train_timesteps // self.num_inference_steps
-
- alpha_prod_t = self.alphas_cumprod[t]
- alpha_prod_t_prev = self.alphas_cumprod[prev_timestep] if prev_timestep >= 0 else self.final_alpha_cumprod
- beta_prod_t = 1 - alpha_prod_t
- beta_prod_t_prev = 1 - alpha_prod_t_prev
-
- # For t > 0, compute predicted variance βt (see formula (6) and (7) from
- # https://arxiv.org/pdf/2006.11239.pdf) and sample from it to get
- # previous sample x_{t-1} ~ N(pred_prev_sample, variance) == add
- # variance to pred_sample
- # Is equivalent to formula (16) in https://arxiv.org/pdf/2010.02502.pdf
- # without eta.
- # variance = (1 - alpha_prod_t_prev) / (1 - alpha_prod_t) * self.betas[t]
- variance = (beta_prod_t_prev / beta_prod_t) * (1 - alpha_prod_t / alpha_prod_t_prev)
-
- return variance
-
- def step(
- self,
- model_output: torch.FloatTensor,
- timestep: int,
- sample: torch.FloatTensor,
- original_image: torch.FloatTensor,
- mask: torch.FloatTensor,
- generator: Optional[torch.Generator] = None,
- return_dict: bool = True,
- ) -> Union[RePaintSchedulerOutput, Tuple]:
- """
- Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion
- process from the learned model outputs (most often the predicted noise).
-
- Args:
- model_output (`torch.FloatTensor`): direct output from learned
- diffusion model.
- timestep (`int`): current discrete timestep in the diffusion chain.
- sample (`torch.FloatTensor`):
- current instance of sample being created by diffusion process.
- original_image (`torch.FloatTensor`):
- the original image to inpaint on.
- mask (`torch.FloatTensor`):
- the mask where 0.0 values define which part of the original image to inpaint (change).
- generator (`torch.Generator`, *optional*): random number generator.
-            return_dict (`bool`): option for returning a tuple rather than a
-                RePaintSchedulerOutput class
-
- Returns:
- [`~schedulers.scheduling_utils.RePaintSchedulerOutput`] or `tuple`:
- [`~schedulers.scheduling_utils.RePaintSchedulerOutput`] if `return_dict` is True, otherwise a `tuple`. When
- returning a tuple, the first element is the sample tensor.
-
- """
- t = timestep
- prev_timestep = timestep - self.config.num_train_timesteps // self.num_inference_steps
-
- # 1. compute alphas, betas
- alpha_prod_t = self.alphas_cumprod[t]
- alpha_prod_t_prev = self.alphas_cumprod[prev_timestep] if prev_timestep >= 0 else self.final_alpha_cumprod
- beta_prod_t = 1 - alpha_prod_t
-
- # 2. compute predicted original sample from predicted noise also called
- # "predicted x_0" of formula (15) from https://arxiv.org/pdf/2006.11239.pdf
- pred_original_sample = (sample - beta_prod_t**0.5 * model_output) / alpha_prod_t**0.5
-
- # 3. Clip "predicted x_0"
- if self.config.clip_sample:
- pred_original_sample = torch.clamp(pred_original_sample, -1, 1)
-
- # We choose to follow RePaint Algorithm 1 to get x_{t-1}, however we
- # substitute formula (7) in the algorithm coming from DDPM paper
- # (formula (4) Algorithm 2 - Sampling) with formula (12) from DDIM paper.
- # DDIM schedule gives the same results as DDPM with eta = 1.0
- # Noise is being reused in 7. and 8., but no impact on quality has
- # been observed.
-
- # 5. Add noise
- device = model_output.device
- noise = randn_tensor(model_output.shape, generator=generator, device=device, dtype=model_output.dtype)
- std_dev_t = self.eta * self._get_variance(timestep) ** 0.5
-
- variance = 0
- if t > 0 and self.eta > 0:
- variance = std_dev_t * noise
-
- # 6. compute "direction pointing to x_t" of formula (12)
- # from https://arxiv.org/pdf/2010.02502.pdf
- pred_sample_direction = (1 - alpha_prod_t_prev - std_dev_t**2) ** 0.5 * model_output
-
- # 7. compute x_{t-1} of formula (12) from https://arxiv.org/pdf/2010.02502.pdf
- prev_unknown_part = alpha_prod_t_prev**0.5 * pred_original_sample + pred_sample_direction + variance
-
- # 8. Algorithm 1 Line 5 https://arxiv.org/pdf/2201.09865.pdf
- prev_known_part = (alpha_prod_t_prev**0.5) * original_image + ((1 - alpha_prod_t_prev) ** 0.5) * noise
-
- # 9. Algorithm 1 Line 8 https://arxiv.org/pdf/2201.09865.pdf
- pred_prev_sample = mask * prev_known_part + (1.0 - mask) * prev_unknown_part
-
- if not return_dict:
- return (
- pred_prev_sample,
- pred_original_sample,
- )
-
- return RePaintSchedulerOutput(prev_sample=pred_prev_sample, pred_original_sample=pred_original_sample)
-
- def undo_step(self, sample, timestep, generator=None):
- n = self.config.num_train_timesteps // self.num_inference_steps
-
- for i in range(n):
- beta = self.betas[timestep + i]
- if sample.device.type == "mps":
- # randn does not work reproducibly on mps
- noise = randn_tensor(sample.shape, dtype=sample.dtype, generator=generator)
- noise = noise.to(sample.device)
- else:
- noise = randn_tensor(sample.shape, generator=generator, device=sample.device, dtype=sample.dtype)
-
- # 10. Algorithm 1 Line 10 https://arxiv.org/pdf/2201.09865.pdf
- sample = (1 - beta) ** 0.5 * sample + beta**0.5 * noise
-
- return sample
-
- def add_noise(
- self,
- original_samples: torch.FloatTensor,
- noise: torch.FloatTensor,
- timesteps: torch.IntTensor,
- ) -> torch.FloatTensor:
- raise NotImplementedError("Use `DDPMScheduler.add_noise()` to train for sampling with RePaint.")
-
- def __len__(self):
- return self.config.num_train_timesteps
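As a reading aid, a small sketch of the sampling loop this scheduler is built for (RePaint, Algorithm 1). The `unet`, `image`, and `mask` tensors are assumed inputs; per the docstring above, mask values of 0.0 mark the region to be inpainted:

```python
import torch

# Assumed: `unet` is a trained unconditional DDPM UNet, `image` lies in [-1, 1],
# and `mask` is 1.0 where the original pixels are kept, 0.0 where we inpaint.
scheduler = RePaintScheduler(num_train_timesteps=1000, eta=0.0)
scheduler.set_timesteps(num_inference_steps=250, jump_length=10, jump_n_sample=10)

sample = torch.randn(image.shape, device=image.device, dtype=image.dtype)
t_last = scheduler.timesteps[0] + 1
for t in scheduler.timesteps:
    if t < t_last:
        # ordinary reverse step; step() re-injects the known (masked) region
        model_output = unet(sample, t).sample
        sample = scheduler.step(model_output, t, sample, image, mask).prev_sample
    else:
        # "jump back" (undo_step, Algorithm 1 line 10) so the known region can be resampled
        sample = scheduler.undo_step(sample, t_last)
    t_last = t
```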
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/sabl/sabl_cascade_rcnn_r50_fpn_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/sabl/sabl_cascade_rcnn_r50_fpn_1x_coco.py
deleted file mode 100644
index 4b28a59280e6701d31afeeaae7ae12cdbd4fb95e..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/sabl/sabl_cascade_rcnn_r50_fpn_1x_coco.py
+++ /dev/null
@@ -1,86 +0,0 @@
-_base_ = [
- '../_base_/models/cascade_rcnn_r50_fpn.py',
- '../_base_/datasets/coco_detection.py',
- '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
-]
-# model settings
-model = dict(
- roi_head=dict(bbox_head=[
- dict(
- type='SABLHead',
- num_classes=80,
- cls_in_channels=256,
- reg_in_channels=256,
- roi_feat_size=7,
- reg_feat_up_ratio=2,
- reg_pre_kernel=3,
- reg_post_kernel=3,
- reg_pre_num=2,
- reg_post_num=1,
- cls_out_channels=1024,
- reg_offset_out_channels=256,
- reg_cls_out_channels=256,
- num_cls_fcs=1,
- num_reg_fcs=0,
- reg_class_agnostic=True,
- norm_cfg=None,
- bbox_coder=dict(
- type='BucketingBBoxCoder', num_buckets=14, scale_factor=1.7),
- loss_cls=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
- loss_bbox_cls=dict(
- type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
- loss_bbox_reg=dict(type='SmoothL1Loss', beta=0.1,
- loss_weight=1.0)),
- dict(
- type='SABLHead',
- num_classes=80,
- cls_in_channels=256,
- reg_in_channels=256,
- roi_feat_size=7,
- reg_feat_up_ratio=2,
- reg_pre_kernel=3,
- reg_post_kernel=3,
- reg_pre_num=2,
- reg_post_num=1,
- cls_out_channels=1024,
- reg_offset_out_channels=256,
- reg_cls_out_channels=256,
- num_cls_fcs=1,
- num_reg_fcs=0,
- reg_class_agnostic=True,
- norm_cfg=None,
- bbox_coder=dict(
- type='BucketingBBoxCoder', num_buckets=14, scale_factor=1.5),
- loss_cls=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
- loss_bbox_cls=dict(
- type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
- loss_bbox_reg=dict(type='SmoothL1Loss', beta=0.1,
- loss_weight=1.0)),
- dict(
- type='SABLHead',
- num_classes=80,
- cls_in_channels=256,
- reg_in_channels=256,
- roi_feat_size=7,
- reg_feat_up_ratio=2,
- reg_pre_kernel=3,
- reg_post_kernel=3,
- reg_pre_num=2,
- reg_post_num=1,
- cls_out_channels=1024,
- reg_offset_out_channels=256,
- reg_cls_out_channels=256,
- num_cls_fcs=1,
- num_reg_fcs=0,
- reg_class_agnostic=True,
- norm_cfg=None,
- bbox_coder=dict(
- type='BucketingBBoxCoder', num_buckets=14, scale_factor=1.3),
- loss_cls=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
- loss_bbox_cls=dict(
- type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
- loss_bbox_reg=dict(type='SmoothL1Loss', beta=0.1, loss_weight=1.0))
- ]))
diff --git a/spaces/Andy1621/uniformer_image_detection/exp/cascade_mask_rcnn_3x_ms_hybrid_small/config.py b/spaces/Andy1621/uniformer_image_detection/exp/cascade_mask_rcnn_3x_ms_hybrid_small/config.py
deleted file mode 100644
index 1935f1914df202018438a21021ea1e7acf69e983..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/exp/cascade_mask_rcnn_3x_ms_hybrid_small/config.py
+++ /dev/null
@@ -1,142 +0,0 @@
-_base_ = [
- '../../configs/_base_/models/cascade_mask_rcnn_uniformer_fpn.py',
- '../../configs/_base_/datasets/coco_instance.py',
- '../../configs/_base_/schedules/schedule_1x.py',
- '../../configs/_base_/default_runtime.py'
-]
-
-model = dict(
- backbone=dict(
- embed_dim=[64, 128, 320, 512],
- layers=[3, 4, 8, 3],
- head_dim=64,
- drop_path_rate=0.2,
- use_checkpoint=True,
- checkpoint_num=[0, 0, 8, 0],
- windows=False,
- hybrid=True,
- window_size=14
- ),
- neck=dict(in_channels=[64, 128, 320, 512]),
- roi_head=dict(
- bbox_head=[
- dict(
- type='ConvFCBBoxHead',
- num_shared_convs=4,
- num_shared_fcs=1,
- in_channels=256,
- conv_out_channels=256,
- fc_out_channels=1024,
- roi_feat_size=7,
- num_classes=80,
- bbox_coder=dict(
- type='DeltaXYWHBBoxCoder',
- target_means=[0., 0., 0., 0.],
- target_stds=[0.1, 0.1, 0.2, 0.2]),
- reg_class_agnostic=False,
- reg_decoded_bbox=True,
- norm_cfg=dict(type='SyncBN', requires_grad=True),
- loss_cls=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
- loss_bbox=dict(type='GIoULoss', loss_weight=10.0)),
- dict(
- type='ConvFCBBoxHead',
- num_shared_convs=4,
- num_shared_fcs=1,
- in_channels=256,
- conv_out_channels=256,
- fc_out_channels=1024,
- roi_feat_size=7,
- num_classes=80,
- bbox_coder=dict(
- type='DeltaXYWHBBoxCoder',
- target_means=[0., 0., 0., 0.],
- target_stds=[0.05, 0.05, 0.1, 0.1]),
- reg_class_agnostic=False,
- reg_decoded_bbox=True,
- norm_cfg=dict(type='SyncBN', requires_grad=True),
- loss_cls=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
- loss_bbox=dict(type='GIoULoss', loss_weight=10.0)),
- dict(
- type='ConvFCBBoxHead',
- num_shared_convs=4,
- num_shared_fcs=1,
- in_channels=256,
- conv_out_channels=256,
- fc_out_channels=1024,
- roi_feat_size=7,
- num_classes=80,
- bbox_coder=dict(
- type='DeltaXYWHBBoxCoder',
- target_means=[0., 0., 0., 0.],
- target_stds=[0.033, 0.033, 0.067, 0.067]),
- reg_class_agnostic=False,
- reg_decoded_bbox=True,
- norm_cfg=dict(type='SyncBN', requires_grad=True),
- loss_cls=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
- loss_bbox=dict(type='GIoULoss', loss_weight=10.0))
- ]))
-
-img_norm_cfg = dict(
- mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
-
-# augmentation strategy originates from DETR / Sparse RCNN
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
- dict(type='RandomFlip', flip_ratio=0.5),
- dict(type='AutoAugment',
- policies=[
- [
- dict(type='Resize',
- img_scale=[(480, 1333), (512, 1333), (544, 1333), (576, 1333),
- (608, 1333), (640, 1333), (672, 1333), (704, 1333),
- (736, 1333), (768, 1333), (800, 1333)],
- multiscale_mode='value',
- keep_ratio=True)
- ],
- [
- dict(type='Resize',
- img_scale=[(400, 1333), (500, 1333), (600, 1333)],
- multiscale_mode='value',
- keep_ratio=True),
- dict(type='RandomCrop',
- crop_type='absolute_range',
- crop_size=(384, 600),
- allow_negative_crop=True),
- dict(type='Resize',
- img_scale=[(480, 1333), (512, 1333), (544, 1333),
- (576, 1333), (608, 1333), (640, 1333),
- (672, 1333), (704, 1333), (736, 1333),
- (768, 1333), (800, 1333)],
- multiscale_mode='value',
- override=True,
- keep_ratio=True)
- ]
- ]),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']),
-]
-data = dict(train=dict(pipeline=train_pipeline))
-
-optimizer = dict(_delete_=True, type='AdamW', lr=0.0001, betas=(0.9, 0.999), weight_decay=0.05,
- paramwise_cfg=dict(custom_keys={'absolute_pos_embed': dict(decay_mult=0.),
- 'relative_position_bias_table': dict(decay_mult=0.),
- 'norm': dict(decay_mult=0.)}))
-lr_config = dict(step=[27, 33])
-runner = dict(type='EpochBasedRunnerAmp', max_epochs=36)
-
-# do not use mmdet version fp16
-fp16 = None
-optimizer_config = dict(
- type="DistOptimizerHook",
- update_interval=1,
- grad_clip=None,
- coalesce=True,
- bucket_size_mb=-1,
- use_fp16=True,
-)
diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/roi_extractors/base_roi_extractor.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/roi_extractors/base_roi_extractor.py
deleted file mode 100644
index 847932547c6c309ae38b45dc43ac0ef8ca66d347..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/roi_extractors/base_roi_extractor.py
+++ /dev/null
@@ -1,83 +0,0 @@
-from abc import ABCMeta, abstractmethod
-
-import torch
-import torch.nn as nn
-from mmcv import ops
-
-
-class BaseRoIExtractor(nn.Module, metaclass=ABCMeta):
- """Base class for RoI extractor.
-
- Args:
- roi_layer (dict): Specify RoI layer type and arguments.
- out_channels (int): Output channels of RoI layers.
- featmap_strides (List[int]): Strides of input feature maps.
- """
-
- def __init__(self, roi_layer, out_channels, featmap_strides):
- super(BaseRoIExtractor, self).__init__()
- self.roi_layers = self.build_roi_layers(roi_layer, featmap_strides)
- self.out_channels = out_channels
- self.featmap_strides = featmap_strides
- self.fp16_enabled = False
-
- @property
- def num_inputs(self):
- """int: Number of input feature maps."""
- return len(self.featmap_strides)
-
- def init_weights(self):
- pass
-
- def build_roi_layers(self, layer_cfg, featmap_strides):
- """Build RoI operator to extract feature from each level feature map.
-
- Args:
- layer_cfg (dict): Dictionary to construct and config RoI layer
- operation. Options are modules under ``mmcv/ops`` such as
- ``RoIAlign``.
-            featmap_strides (List[int]): The stride of input feature map w.r.t.
-                the original image size, which would be used to scale RoI
- coordinate (original image coordinate system) to feature
- coordinate system.
-
- Returns:
- nn.ModuleList: The RoI extractor modules for each level feature
- map.
- """
-
- cfg = layer_cfg.copy()
- layer_type = cfg.pop('type')
- assert hasattr(ops, layer_type)
- layer_cls = getattr(ops, layer_type)
- roi_layers = nn.ModuleList(
- [layer_cls(spatial_scale=1 / s, **cfg) for s in featmap_strides])
- return roi_layers
-
- def roi_rescale(self, rois, scale_factor):
- """Scale RoI coordinates by scale factor.
-
- Args:
- rois (torch.Tensor): RoI (Region of Interest), shape (n, 5)
- scale_factor (float): Scale factor that RoI will be multiplied by.
-
- Returns:
- torch.Tensor: Scaled RoI.
- """
-
- cx = (rois[:, 1] + rois[:, 3]) * 0.5
- cy = (rois[:, 2] + rois[:, 4]) * 0.5
- w = rois[:, 3] - rois[:, 1]
- h = rois[:, 4] - rois[:, 2]
- new_w = w * scale_factor
- new_h = h * scale_factor
- x1 = cx - new_w * 0.5
- x2 = cx + new_w * 0.5
- y1 = cy - new_h * 0.5
- y2 = cy + new_h * 0.5
- new_rois = torch.stack((rois[:, 0], x1, y1, x2, y2), dim=-1)
- return new_rois
-
- @abstractmethod
- def forward(self, feats, rois, roi_scale_factor=None):
- pass
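For intuition, a tiny worked check of the `roi_rescale` arithmetic above (the RoI values are made up): a 20×20 box centred at (20, 20), scaled by 2, becomes a 40×40 box with the same centre.

```python
import torch

rois = torch.tensor([[0., 10., 10., 30., 30.]])  # (batch_idx, x1, y1, x2, y2)
scale_factor = 2.0
cx = (rois[:, 1] + rois[:, 3]) * 0.5
cy = (rois[:, 2] + rois[:, 4]) * 0.5
new_w = (rois[:, 3] - rois[:, 1]) * scale_factor
new_h = (rois[:, 4] - rois[:, 2]) * scale_factor
new_rois = torch.stack(
    (rois[:, 0], cx - new_w * 0.5, cy - new_h * 0.5, cx + new_w * 0.5, cy + new_h * 0.5), dim=-1
)
print(new_rois)  # tensor([[ 0.,  0.,  0., 40., 40.]])
```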
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/configs/_base_/default_runtime.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/configs/_base_/default_runtime.py
deleted file mode 100644
index b564cc4e7e7d9a67dacaaddecb100e4d8f5c005b..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/configs/_base_/default_runtime.py
+++ /dev/null
@@ -1,14 +0,0 @@
-# yapf:disable
-log_config = dict(
- interval=50,
- hooks=[
- dict(type='TextLoggerHook', by_epoch=False),
- # dict(type='TensorboardLoggerHook')
- ])
-# yapf:enable
-dist_params = dict(backend='nccl')
-log_level = 'INFO'
-load_from = None
-resume_from = None
-workflow = [('train', 1)]
-cudnn_benchmark = True
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/runner/hooks/iter_timer.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/runner/hooks/iter_timer.py
deleted file mode 100644
index cfd5002fe85ffc6992155ac01003878064a1d9be..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/runner/hooks/iter_timer.py
+++ /dev/null
@@ -1,18 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import time
-
-from .hook import HOOKS, Hook
-
-
-@HOOKS.register_module()
-class IterTimerHook(Hook):
-
- def before_epoch(self, runner):
- self.t = time.time()
-
- def before_iter(self, runner):
- runner.log_buffer.update({'data_time': time.time() - self.t})
-
- def after_iter(self, runner):
- runner.log_buffer.update({'time': time.time() - self.t})
- self.t = time.time()
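To make the two buffered values concrete, a self-contained sketch with a stub runner (the `_Buffer` class is a stand-in for mmcv's real `LogBuffer`): `data_time` is the gap between iterations, i.e. data loading, while `time` is the full iteration including that gap.

```python
import time
from types import SimpleNamespace


class _Buffer:  # minimal stand-in for mmcv's LogBuffer, just enough for this demo
    def __init__(self):
        self.output = {}

    def update(self, values):
        self.output.update(values)


runner = SimpleNamespace(log_buffer=_Buffer())
hook = IterTimerHook()

hook.before_epoch(runner)
for _ in range(2):
    hook.before_iter(runner)   # records 'data_time', measured from the previous after_iter/before_epoch
    time.sleep(0.01)           # pretend forward/backward pass
    hook.after_iter(runner)    # records 'time' and restarts the clock
print(runner.log_buffer.output)  # e.g. {'data_time': 1e-05, 'time': 0.0101}
```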
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/gradio_annotator.py b/spaces/Anonymous-sub/Rerender/ControlNet/gradio_annotator.py
deleted file mode 100644
index 2b1a29ebbec24073a9e4357b700e0577a17a9379..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/gradio_annotator.py
+++ /dev/null
@@ -1,160 +0,0 @@
-import gradio as gr
-
-from annotator.util import resize_image, HWC3
-
-
-model_canny = None
-
-
-def canny(img, res, l, h):
- img = resize_image(HWC3(img), res)
- global model_canny
- if model_canny is None:
- from annotator.canny import CannyDetector
- model_canny = CannyDetector()
- result = model_canny(img, l, h)
- return [result]
-
-
-model_hed = None
-
-
-def hed(img, res):
- img = resize_image(HWC3(img), res)
- global model_hed
- if model_hed is None:
- from annotator.hed import HEDdetector
- model_hed = HEDdetector()
- result = model_hed(img)
- return [result]
-
-
-model_mlsd = None
-
-
-def mlsd(img, res, thr_v, thr_d):
- img = resize_image(HWC3(img), res)
- global model_mlsd
- if model_mlsd is None:
- from annotator.mlsd import MLSDdetector
- model_mlsd = MLSDdetector()
- result = model_mlsd(img, thr_v, thr_d)
- return [result]
-
-
-model_midas = None
-
-
-def midas(img, res, a):
- img = resize_image(HWC3(img), res)
- global model_midas
- if model_midas is None:
- from annotator.midas import MidasDetector
- model_midas = MidasDetector()
- results = model_midas(img, a)
- return results
-
-
-model_openpose = None
-
-
-def openpose(img, res, has_hand):
- img = resize_image(HWC3(img), res)
- global model_openpose
- if model_openpose is None:
- from annotator.openpose import OpenposeDetector
- model_openpose = OpenposeDetector()
- result, _ = model_openpose(img, has_hand)
- return [result]
-
-
-model_uniformer = None
-
-
-def uniformer(img, res):
- img = resize_image(HWC3(img), res)
- global model_uniformer
- if model_uniformer is None:
- from annotator.uniformer import UniformerDetector
- model_uniformer = UniformerDetector()
- result = model_uniformer(img)
- return [result]
-
-
-block = gr.Blocks().queue()
-with block:
- with gr.Row():
- gr.Markdown("## Canny Edge")
- with gr.Row():
- with gr.Column():
- input_image = gr.Image(source='upload', type="numpy")
- low_threshold = gr.Slider(label="low_threshold", minimum=1, maximum=255, value=100, step=1)
- high_threshold = gr.Slider(label="high_threshold", minimum=1, maximum=255, value=200, step=1)
- resolution = gr.Slider(label="resolution", minimum=256, maximum=1024, value=512, step=64)
- run_button = gr.Button(label="Run")
- with gr.Column():
- gallery = gr.Gallery(label="Generated images", show_label=False).style(height="auto")
- run_button.click(fn=canny, inputs=[input_image, resolution, low_threshold, high_threshold], outputs=[gallery])
-
- with gr.Row():
- gr.Markdown("## HED Edge")
- with gr.Row():
- with gr.Column():
- input_image = gr.Image(source='upload', type="numpy")
- resolution = gr.Slider(label="resolution", minimum=256, maximum=1024, value=512, step=64)
- run_button = gr.Button(label="Run")
- with gr.Column():
- gallery = gr.Gallery(label="Generated images", show_label=False).style(height="auto")
- run_button.click(fn=hed, inputs=[input_image, resolution], outputs=[gallery])
-
- with gr.Row():
- gr.Markdown("## MLSD Edge")
- with gr.Row():
- with gr.Column():
- input_image = gr.Image(source='upload', type="numpy")
- value_threshold = gr.Slider(label="value_threshold", minimum=0.01, maximum=2.0, value=0.1, step=0.01)
- distance_threshold = gr.Slider(label="distance_threshold", minimum=0.01, maximum=20.0, value=0.1, step=0.01)
- resolution = gr.Slider(label="resolution", minimum=256, maximum=1024, value=384, step=64)
- run_button = gr.Button(label="Run")
- with gr.Column():
- gallery = gr.Gallery(label="Generated images", show_label=False).style(height="auto")
- run_button.click(fn=mlsd, inputs=[input_image, resolution, value_threshold, distance_threshold], outputs=[gallery])
-
- with gr.Row():
- gr.Markdown("## MIDAS Depth and Normal")
- with gr.Row():
- with gr.Column():
- input_image = gr.Image(source='upload', type="numpy")
- alpha = gr.Slider(label="alpha", minimum=0.1, maximum=20.0, value=6.2, step=0.01)
- resolution = gr.Slider(label="resolution", minimum=256, maximum=1024, value=384, step=64)
- run_button = gr.Button(label="Run")
- with gr.Column():
- gallery = gr.Gallery(label="Generated images", show_label=False).style(height="auto")
- run_button.click(fn=midas, inputs=[input_image, resolution, alpha], outputs=[gallery])
-
- with gr.Row():
- gr.Markdown("## Openpose")
- with gr.Row():
- with gr.Column():
- input_image = gr.Image(source='upload', type="numpy")
- hand = gr.Checkbox(label='detect hand', value=False)
- resolution = gr.Slider(label="resolution", minimum=256, maximum=1024, value=512, step=64)
- run_button = gr.Button(label="Run")
- with gr.Column():
- gallery = gr.Gallery(label="Generated images", show_label=False).style(height="auto")
- run_button.click(fn=openpose, inputs=[input_image, resolution, hand], outputs=[gallery])
-
-
- with gr.Row():
- gr.Markdown("## Uniformer Segmentation")
- with gr.Row():
- with gr.Column():
- input_image = gr.Image(source='upload', type="numpy")
- resolution = gr.Slider(label="resolution", minimum=256, maximum=1024, value=512, step=64)
- run_button = gr.Button(label="Run")
- with gr.Column():
- gallery = gr.Gallery(label="Generated images", show_label=False).style(height="auto")
- run_button.click(fn=uniformer, inputs=[input_image, resolution], outputs=[gallery])
-
-
-block.launch(server_name='0.0.0.0')
diff --git a/spaces/Armandoliv/document_parser/README.md b/spaces/Armandoliv/document_parser/README.md
deleted file mode 100644
index 77df6afdd5d1c6fc6ad33f1d51768d8df1a191f3..0000000000000000000000000000000000000000
--- a/spaces/Armandoliv/document_parser/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Document Parser
-emoji: 📈
-colorFrom: blue
-colorTo: blue
-sdk: gradio
-sdk_version: 3.2
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Arnx/MusicGenXvAKN/tests/utils/__init__.py b/spaces/Arnx/MusicGenXvAKN/tests/utils/__init__.py
deleted file mode 100644
index 0952fcc3f57e34b3747962e9ebd6fc57aeea63fa..0000000000000000000000000000000000000000
--- a/spaces/Arnx/MusicGenXvAKN/tests/utils/__init__.py
+++ /dev/null
@@ -1,5 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
diff --git a/spaces/Artrajz/vits-simple-api/vits/text/english.py b/spaces/Artrajz/vits-simple-api/vits/text/english.py
deleted file mode 100644
index 6817392ba8a9eb830351de89fb7afc5ad72f5e42..0000000000000000000000000000000000000000
--- a/spaces/Artrajz/vits-simple-api/vits/text/english.py
+++ /dev/null
@@ -1,188 +0,0 @@
-""" from https://github.com/keithito/tacotron """
-
-'''
-Cleaners are transformations that run over the input text at both training and eval time.
-
-Cleaners can be selected by passing a comma-delimited list of cleaner names as the "cleaners"
-hyperparameter. Some cleaners are English-specific. You'll typically want to use:
- 1. "english_cleaners" for English text
- 2. "transliteration_cleaners" for non-English text that can be transliterated to ASCII using
- the Unidecode library (https://pypi.python.org/pypi/Unidecode)
- 3. "basic_cleaners" if you do not want to transliterate (in this case, you should also update
- the symbols in symbols.py to match your data).
-'''
-
-
-# Regular expression matching whitespace:
-
-
-import re
-import inflect
-from unidecode import unidecode
-import eng_to_ipa as ipa
-_inflect = inflect.engine()
-_comma_number_re = re.compile(r'([0-9][0-9\,]+[0-9])')
-_decimal_number_re = re.compile(r'([0-9]+\.[0-9]+)')
-_pounds_re = re.compile(r'£([0-9\,]*[0-9]+)')
-_dollars_re = re.compile(r'\$([0-9\.\,]*[0-9]+)')
-_ordinal_re = re.compile(r'[0-9]+(st|nd|rd|th)')
-_number_re = re.compile(r'[0-9]+')
-
-# List of (regular expression, replacement) pairs for abbreviations:
-_abbreviations = [(re.compile('\\b%s\\.' % x[0], re.IGNORECASE), x[1]) for x in [
- ('mrs', 'misess'),
- ('mr', 'mister'),
- ('dr', 'doctor'),
- ('st', 'saint'),
- ('co', 'company'),
- ('jr', 'junior'),
- ('maj', 'major'),
- ('gen', 'general'),
- ('drs', 'doctors'),
- ('rev', 'reverend'),
- ('lt', 'lieutenant'),
- ('hon', 'honorable'),
- ('sgt', 'sergeant'),
- ('capt', 'captain'),
- ('esq', 'esquire'),
- ('ltd', 'limited'),
- ('col', 'colonel'),
- ('ft', 'fort'),
-]]
-
-
-# List of (ipa, lazy ipa) pairs:
-_lazy_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('r', 'ɹ'),
- ('æ', 'e'),
- ('ɑ', 'a'),
- ('ɔ', 'o'),
- ('ð', 'z'),
- ('θ', 's'),
- ('ɛ', 'e'),
- ('ɪ', 'i'),
- ('ʊ', 'u'),
- ('ʒ', 'ʥ'),
- ('ʤ', 'ʥ'),
- ('ˈ', '↓'),
-]]
-
-# List of (ipa, lazy ipa2) pairs:
-_lazy_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('r', 'ɹ'),
- ('ð', 'z'),
- ('θ', 's'),
- ('ʒ', 'ʑ'),
- ('ʤ', 'dʑ'),
- ('ˈ', '↓'),
-]]
-
-# List of (ipa, ipa2) pairs
-_ipa_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('r', 'ɹ'),
- ('ʤ', 'dʒ'),
- ('ʧ', 'tʃ')
-]]
-
-
-def expand_abbreviations(text):
- for regex, replacement in _abbreviations:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def collapse_whitespace(text):
- return re.sub(r'\s+', ' ', text)
-
-
-def _remove_commas(m):
- return m.group(1).replace(',', '')
-
-
-def _expand_decimal_point(m):
- return m.group(1).replace('.', ' point ')
-
-
-def _expand_dollars(m):
- match = m.group(1)
- parts = match.split('.')
- if len(parts) > 2:
- return match + ' dollars' # Unexpected format
- dollars = int(parts[0]) if parts[0] else 0
- cents = int(parts[1]) if len(parts) > 1 and parts[1] else 0
- if dollars and cents:
- dollar_unit = 'dollar' if dollars == 1 else 'dollars'
- cent_unit = 'cent' if cents == 1 else 'cents'
- return '%s %s, %s %s' % (dollars, dollar_unit, cents, cent_unit)
- elif dollars:
- dollar_unit = 'dollar' if dollars == 1 else 'dollars'
- return '%s %s' % (dollars, dollar_unit)
- elif cents:
- cent_unit = 'cent' if cents == 1 else 'cents'
- return '%s %s' % (cents, cent_unit)
- else:
- return 'zero dollars'
-
-
-def _expand_ordinal(m):
- return _inflect.number_to_words(m.group(0))
-
-
-def _expand_number(m):
- num = int(m.group(0))
- if num > 1000 and num < 3000:
- if num == 2000:
- return 'two thousand'
- elif num > 2000 and num < 2010:
- return 'two thousand ' + _inflect.number_to_words(num % 100)
- elif num % 100 == 0:
- return _inflect.number_to_words(num // 100) + ' hundred'
- else:
- return _inflect.number_to_words(num, andword='', zero='oh', group=2).replace(', ', ' ')
- else:
- return _inflect.number_to_words(num, andword='')
-
-
-def normalize_numbers(text):
- text = re.sub(_comma_number_re, _remove_commas, text)
- text = re.sub(_pounds_re, r'\1 pounds', text)
- text = re.sub(_dollars_re, _expand_dollars, text)
- text = re.sub(_decimal_number_re, _expand_decimal_point, text)
- text = re.sub(_ordinal_re, _expand_ordinal, text)
- text = re.sub(_number_re, _expand_number, text)
- return text
-
-
-def mark_dark_l(text):
- return re.sub(r'l([^aeiouæɑɔəɛɪʊ ]*(?: |$))', lambda x: 'ɫ'+x.group(1), text)
-
-
-def english_to_ipa(text):
- text = unidecode(text).lower()
- text = expand_abbreviations(text)
- text = normalize_numbers(text)
- phonemes = ipa.convert(text)
- phonemes = collapse_whitespace(phonemes)
- return phonemes
-
-
-def english_to_lazy_ipa(text):
- text = english_to_ipa(text)
- for regex, replacement in _lazy_ipa:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def english_to_ipa2(text):
- text = english_to_ipa(text)
- text = mark_dark_l(text)
- for regex, replacement in _ipa_to_ipa2:
- text = re.sub(regex, replacement, text)
- return text.replace('...', '…')
-
-
-def english_to_lazy_ipa2(text):
- text = english_to_ipa(text)
- for regex, replacement in _lazy_ipa2:
- text = re.sub(regex, replacement, text)
- return text
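A quick illustration of the normalization helpers above (the sentence is made up, and the import path is an assumption based on the file location; `english_to_ipa` itself additionally needs the `eng_to_ipa` lookup data, so only the text-normalization stage is exercised):

```python
from vits.text.english import expand_abbreviations, normalize_numbers

text = "Dr. Smith paid $2.50 on March 3rd, 2003."
text = normalize_numbers(expand_abbreviations(text.lower()))
print(text)
# -> "doctor smith paid two dollars, fifty cents on march third, two thousand three."
```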
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/utils/subprocess.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/utils/subprocess.py
deleted file mode 100644
index 1e8ff50edfb8059799b334325e65eea9bb9b1ab3..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/utils/subprocess.py
+++ /dev/null
@@ -1,260 +0,0 @@
-import logging
-import os
-import shlex
-import subprocess
-from typing import (
- TYPE_CHECKING,
- Any,
- Callable,
- Iterable,
- List,
- Mapping,
- Optional,
- Union,
-)
-
-from pip._vendor.rich.markup import escape
-
-from pip._internal.cli.spinners import SpinnerInterface, open_spinner
-from pip._internal.exceptions import InstallationSubprocessError
-from pip._internal.utils.logging import VERBOSE, subprocess_logger
-from pip._internal.utils.misc import HiddenText
-
-if TYPE_CHECKING:
- # Literal was introduced in Python 3.8.
- #
- # TODO: Remove `if TYPE_CHECKING` when dropping support for Python 3.7.
- from typing import Literal
-
-CommandArgs = List[Union[str, HiddenText]]
-
-
-def make_command(*args: Union[str, HiddenText, CommandArgs]) -> CommandArgs:
- """
- Create a CommandArgs object.
- """
- command_args: CommandArgs = []
- for arg in args:
- # Check for list instead of CommandArgs since CommandArgs is
- # only known during type-checking.
- if isinstance(arg, list):
- command_args.extend(arg)
- else:
- # Otherwise, arg is str or HiddenText.
- command_args.append(arg)
-
- return command_args
-
-
-def format_command_args(args: Union[List[str], CommandArgs]) -> str:
- """
- Format command arguments for display.
- """
- # For HiddenText arguments, display the redacted form by calling str().
- # Also, we don't apply str() to arguments that aren't HiddenText since
- # this can trigger a UnicodeDecodeError in Python 2 if the argument
- # has type unicode and includes a non-ascii character. (The type
- # checker doesn't ensure the annotations are correct in all cases.)
- return " ".join(
- shlex.quote(str(arg)) if isinstance(arg, HiddenText) else shlex.quote(arg)
- for arg in args
- )
-
-
-def reveal_command_args(args: Union[List[str], CommandArgs]) -> List[str]:
- """
- Return the arguments in their raw, unredacted form.
- """
- return [arg.secret if isinstance(arg, HiddenText) else arg for arg in args]
-
-
-def call_subprocess(
- cmd: Union[List[str], CommandArgs],
- show_stdout: bool = False,
- cwd: Optional[str] = None,
- on_returncode: 'Literal["raise", "warn", "ignore"]' = "raise",
- extra_ok_returncodes: Optional[Iterable[int]] = None,
- extra_environ: Optional[Mapping[str, Any]] = None,
- unset_environ: Optional[Iterable[str]] = None,
- spinner: Optional[SpinnerInterface] = None,
- log_failed_cmd: Optional[bool] = True,
- stdout_only: Optional[bool] = False,
- *,
- command_desc: str,
-) -> str:
- """
- Args:
- show_stdout: if true, use INFO to log the subprocess's stderr and
- stdout streams. Otherwise, use DEBUG. Defaults to False.
- extra_ok_returncodes: an iterable of integer return codes that are
- acceptable, in addition to 0. Defaults to None, which means [].
- unset_environ: an iterable of environment variable names to unset
- prior to calling subprocess.Popen().
- log_failed_cmd: if false, failed commands are not logged, only raised.
- stdout_only: if true, return only stdout, else return both. When true,
- logging of both stdout and stderr occurs when the subprocess has
- terminated, else logging occurs as subprocess output is produced.
- """
- if extra_ok_returncodes is None:
- extra_ok_returncodes = []
- if unset_environ is None:
- unset_environ = []
- # Most places in pip use show_stdout=False. What this means is--
- #
- # - We connect the child's output (combined stderr and stdout) to a
- # single pipe, which we read.
- # - We log this output to stderr at DEBUG level as it is received.
- # - If DEBUG logging isn't enabled (e.g. if --verbose logging wasn't
- # requested), then we show a spinner so the user can still see the
- # subprocess is in progress.
- # - If the subprocess exits with an error, we log the output to stderr
- # at ERROR level if it hasn't already been displayed to the console
- # (e.g. if --verbose logging wasn't enabled). This way we don't log
- # the output to the console twice.
- #
- # If show_stdout=True, then the above is still done, but with DEBUG
- # replaced by INFO.
- if show_stdout:
- # Then log the subprocess output at INFO level.
- log_subprocess: Callable[..., None] = subprocess_logger.info
- used_level = logging.INFO
- else:
- # Then log the subprocess output using VERBOSE. This also ensures
- # it will be logged to the log file (aka user_log), if enabled.
- log_subprocess = subprocess_logger.verbose
- used_level = VERBOSE
-
- # Whether the subprocess will be visible in the console.
- showing_subprocess = subprocess_logger.getEffectiveLevel() <= used_level
-
- # Only use the spinner if we're not showing the subprocess output
- # and we have a spinner.
- use_spinner = not showing_subprocess and spinner is not None
-
- log_subprocess("Running command %s", command_desc)
- env = os.environ.copy()
- if extra_environ:
- env.update(extra_environ)
- for name in unset_environ:
- env.pop(name, None)
- try:
- proc = subprocess.Popen(
- # Convert HiddenText objects to the underlying str.
- reveal_command_args(cmd),
- stdin=subprocess.PIPE,
- stdout=subprocess.PIPE,
- stderr=subprocess.STDOUT if not stdout_only else subprocess.PIPE,
- cwd=cwd,
- env=env,
- errors="backslashreplace",
- )
- except Exception as exc:
- if log_failed_cmd:
- subprocess_logger.critical(
- "Error %s while executing command %s",
- exc,
- command_desc,
- )
- raise
- all_output = []
- if not stdout_only:
- assert proc.stdout
- assert proc.stdin
- proc.stdin.close()
- # In this mode, stdout and stderr are in the same pipe.
- while True:
- line: str = proc.stdout.readline()
- if not line:
- break
- line = line.rstrip()
- all_output.append(line + "\n")
-
- # Show the line immediately.
- log_subprocess(line)
- # Update the spinner.
- if use_spinner:
- assert spinner
- spinner.spin()
- try:
- proc.wait()
- finally:
- if proc.stdout:
- proc.stdout.close()
- output = "".join(all_output)
- else:
- # In this mode, stdout and stderr are in different pipes.
- # We must use communicate() which is the only safe way to read both.
- out, err = proc.communicate()
- # log line by line to preserve pip log indenting
- for out_line in out.splitlines():
- log_subprocess(out_line)
- all_output.append(out)
- for err_line in err.splitlines():
- log_subprocess(err_line)
- all_output.append(err)
- output = out
-
- proc_had_error = proc.returncode and proc.returncode not in extra_ok_returncodes
- if use_spinner:
- assert spinner
- if proc_had_error:
- spinner.finish("error")
- else:
- spinner.finish("done")
- if proc_had_error:
- if on_returncode == "raise":
- error = InstallationSubprocessError(
- command_description=command_desc,
- exit_code=proc.returncode,
- output_lines=all_output if not showing_subprocess else None,
- )
- if log_failed_cmd:
- subprocess_logger.error("[present-rich] %s", error)
- subprocess_logger.verbose(
- "[bold magenta]full command[/]: [blue]%s[/]",
- escape(format_command_args(cmd)),
- extra={"markup": True},
- )
- subprocess_logger.verbose(
- "[bold magenta]cwd[/]: %s",
- escape(cwd or "[inherit]"),
- extra={"markup": True},
- )
-
- raise error
- elif on_returncode == "warn":
- subprocess_logger.warning(
- 'Command "%s" had error code %s in %s',
- command_desc,
- proc.returncode,
- cwd,
- )
- elif on_returncode == "ignore":
- pass
- else:
- raise ValueError(f"Invalid value: on_returncode={on_returncode!r}")
- return output
-
-
-def runner_with_spinner_message(message: str) -> Callable[..., None]:
- """Provide a subprocess_runner that shows a spinner message.
-
-    Intended for use with BuildBackendHookCaller. Thus, the runner has
- an API that matches what's expected by BuildBackendHookCaller.subprocess_runner.
- """
-
- def runner(
- cmd: List[str],
- cwd: Optional[str] = None,
- extra_environ: Optional[Mapping[str, Any]] = None,
- ) -> None:
- with open_spinner(message) as spinner:
- call_subprocess(
- cmd,
- command_desc=message,
- cwd=cwd,
- extra_environ=extra_environ,
- spinner=spinner,
- )
-
- return runner
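These helpers are pip-internal and not a public API, but a short sketch makes the two entry points concrete (the commands and paths below are placeholders only):

```python
from pip._internal.utils.subprocess import call_subprocess, runner_with_spinner_message

# Direct use: run a short command, logging its combined stdout/stderr at DEBUG/VERBOSE level.
output = call_subprocess(
    ["git", "--version"],
    command_desc="git --version",
    show_stdout=False,
)

# Wrapped use: the runner pip hands to a BuildBackendHookCaller shows a spinner
# while the (placeholder) build command runs in the given working directory.
runner = runner_with_spinner_message("Building wheel for example-pkg (pyproject.toml)")
runner(["python", "-m", "build", "--wheel"], cwd="/path/to/example-pkg")
```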
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/vcs/versioncontrol.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/vcs/versioncontrol.py
deleted file mode 100644
index 02bbf68e7ad3ce14f191af24260312e817e12df7..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/vcs/versioncontrol.py
+++ /dev/null
@@ -1,705 +0,0 @@
-"""Handles all VCS (version control) support"""
-
-import logging
-import os
-import shutil
-import sys
-import urllib.parse
-from typing import (
- TYPE_CHECKING,
- Any,
- Dict,
- Iterable,
- Iterator,
- List,
- Mapping,
- Optional,
- Tuple,
- Type,
- Union,
-)
-
-from pip._internal.cli.spinners import SpinnerInterface
-from pip._internal.exceptions import BadCommand, InstallationError
-from pip._internal.utils.misc import (
- HiddenText,
- ask_path_exists,
- backup_dir,
- display_path,
- hide_url,
- hide_value,
- is_installable_dir,
- rmtree,
-)
-from pip._internal.utils.subprocess import (
- CommandArgs,
- call_subprocess,
- format_command_args,
- make_command,
-)
-from pip._internal.utils.urls import get_url_scheme
-
-if TYPE_CHECKING:
- # Literal was introduced in Python 3.8.
- #
- # TODO: Remove `if TYPE_CHECKING` when dropping support for Python 3.7.
- from typing import Literal
-
-
-__all__ = ["vcs"]
-
-
-logger = logging.getLogger(__name__)
-
-AuthInfo = Tuple[Optional[str], Optional[str]]
-
-
-def is_url(name: str) -> bool:
- """
- Return true if the name looks like a URL.
- """
- scheme = get_url_scheme(name)
- if scheme is None:
- return False
- return scheme in ["http", "https", "file", "ftp"] + vcs.all_schemes
-
-
-def make_vcs_requirement_url(
- repo_url: str, rev: str, project_name: str, subdir: Optional[str] = None
-) -> str:
- """
- Return the URL for a VCS requirement.
-
- Args:
- repo_url: the remote VCS url, with any needed VCS prefix (e.g. "git+").
- project_name: the (unescaped) project name.
- """
- egg_project_name = project_name.replace("-", "_")
- req = f"{repo_url}@{rev}#egg={egg_project_name}"
- if subdir:
- req += f"&subdirectory={subdir}"
-
- return req
-
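# Illustration of the string assembled by make_vcs_requirement_url above
# (all values are made up): a project pinned to a commit inside a subdirectory.
req = make_vcs_requirement_url(
    "git+https://example.com/team/my-project.git",
    rev="1a2b3c4",
    project_name="my-project",
    subdir="packages/core",
)
assert req == (
    "git+https://example.com/team/my-project.git@1a2b3c4"
    "#egg=my_project&subdirectory=packages/core"
)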
-
-def find_path_to_project_root_from_repo_root(
- location: str, repo_root: str
-) -> Optional[str]:
- """
- Find the Python project's root by searching up the filesystem from
- `location`. Return the path to project root relative to `repo_root`.
- Return None if the project root is `repo_root`, or cannot be found.
- """
- # find project root.
- orig_location = location
- while not is_installable_dir(location):
- last_location = location
- location = os.path.dirname(location)
- if location == last_location:
- # We've traversed up to the root of the filesystem without
- # finding a Python project.
- logger.warning(
- "Could not find a Python project for directory %s (tried all "
- "parent directories)",
- orig_location,
- )
- return None
-
- if os.path.samefile(repo_root, location):
- return None
-
- return os.path.relpath(location, repo_root)
-
-
-class RemoteNotFoundError(Exception):
- pass
-
-
-class RemoteNotValidError(Exception):
- def __init__(self, url: str):
- super().__init__(url)
- self.url = url
-
-
-class RevOptions:
-
- """
- Encapsulates a VCS-specific revision to install, along with any VCS
- install options.
-
- Instances of this class should be treated as if immutable.
- """
-
- def __init__(
- self,
- vc_class: Type["VersionControl"],
- rev: Optional[str] = None,
- extra_args: Optional[CommandArgs] = None,
- ) -> None:
- """
- Args:
- vc_class: a VersionControl subclass.
- rev: the name of the revision to install.
- extra_args: a list of extra options.
- """
- if extra_args is None:
- extra_args = []
-
- self.extra_args = extra_args
- self.rev = rev
- self.vc_class = vc_class
- self.branch_name: Optional[str] = None
-
- def __repr__(self) -> str:
- return f""
-
- @property
- def arg_rev(self) -> Optional[str]:
- if self.rev is None:
- return self.vc_class.default_arg_rev
-
- return self.rev
-
- def to_args(self) -> CommandArgs:
- """
- Return the VCS-specific command arguments.
- """
- args: CommandArgs = []
- rev = self.arg_rev
- if rev is not None:
- args += self.vc_class.get_base_rev_args(rev)
- args += self.extra_args
-
- return args
-
- def to_display(self) -> str:
- if not self.rev:
- return ""
-
- return f" (to revision {self.rev})"
-
- def make_new(self, rev: str) -> "RevOptions":
- """
- Make a copy of the current instance, but with a new rev.
-
- Args:
- rev: the name of the revision for the new object.
- """
- return self.vc_class.make_rev_options(rev, extra_args=self.extra_args)
-
-
-class VcsSupport:
- _registry: Dict[str, "VersionControl"] = {}
- schemes = ["ssh", "git", "hg", "bzr", "sftp", "svn"]
-
- def __init__(self) -> None:
- # Register more schemes with urlparse for various version control
- # systems
- urllib.parse.uses_netloc.extend(self.schemes)
- super().__init__()
-
- def __iter__(self) -> Iterator[str]:
- return self._registry.__iter__()
-
- @property
- def backends(self) -> List["VersionControl"]:
- return list(self._registry.values())
-
- @property
- def dirnames(self) -> List[str]:
- return [backend.dirname for backend in self.backends]
-
- @property
- def all_schemes(self) -> List[str]:
- schemes: List[str] = []
- for backend in self.backends:
- schemes.extend(backend.schemes)
- return schemes
-
- def register(self, cls: Type["VersionControl"]) -> None:
- if not hasattr(cls, "name"):
- logger.warning("Cannot register VCS %s", cls.__name__)
- return
- if cls.name not in self._registry:
- self._registry[cls.name] = cls()
- logger.debug("Registered VCS backend: %s", cls.name)
-
- def unregister(self, name: str) -> None:
- if name in self._registry:
- del self._registry[name]
-
- def get_backend_for_dir(self, location: str) -> Optional["VersionControl"]:
- """
- Return a VersionControl object if a repository of that type is found
- at the given directory.
- """
- vcs_backends = {}
- for vcs_backend in self._registry.values():
- repo_path = vcs_backend.get_repository_root(location)
- if not repo_path:
- continue
- logger.debug("Determine that %s uses VCS: %s", location, vcs_backend.name)
- vcs_backends[repo_path] = vcs_backend
-
- if not vcs_backends:
- return None
-
- # Choose the VCS in the inner-most directory. Since all repository
- # roots found here would be either `location` or one of its
- # parents, the longest path should have the most path components,
- # i.e. the backend representing the inner-most repository.
- inner_most_repo_path = max(vcs_backends, key=len)
- return vcs_backends[inner_most_repo_path]
-
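# Tiny worked example of the longest-path selection above
# (hypothetical paths and backend labels):
# both roots are ancestors of the queried location; the longer, inner-most path wins.
candidates = {"/srv/repo": "git backend", "/srv/repo/vendor/lib": "hg backend"}
assert max(candidates, key=len) == "/srv/repo/vendor/lib"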
- def get_backend_for_scheme(self, scheme: str) -> Optional["VersionControl"]:
- """
- Return a VersionControl object or None.
- """
- for vcs_backend in self._registry.values():
- if scheme in vcs_backend.schemes:
- return vcs_backend
- return None
-
- def get_backend(self, name: str) -> Optional["VersionControl"]:
- """
- Return a VersionControl object or None.
- """
- name = name.lower()
- return self._registry.get(name)
-
-
-vcs = VcsSupport()
-
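# Sketch of querying the registry above, assuming the concrete backend modules
# have been imported (importing pip._internal.vcs registers git, hg, svn and bzr
# as a side effect). This touches pip's internal, non-public API.
import pip._internal.vcs  # registers the concrete backends on import
from pip._internal.vcs.versioncontrol import vcs

print(sorted(vcs))                        # registered names, e.g. ['bzr', 'git', 'hg', 'svn']
git = vcs.get_backend("git")              # a VersionControl instance, or None if unknown
print(vcs.get_backend_for_scheme("git+https"))
print(vcs.dirnames)                       # e.g. ['.bzr', '.git', '.hg', '.svn']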
-
-class VersionControl:
- name = ""
- dirname = ""
- repo_name = ""
- # List of supported schemes for this Version Control
- schemes: Tuple[str, ...] = ()
- # Iterable of environment variable names to pass to call_subprocess().
- unset_environ: Tuple[str, ...] = ()
- default_arg_rev: Optional[str] = None
-
- @classmethod
- def should_add_vcs_url_prefix(cls, remote_url: str) -> bool:
- """
- Return whether the vcs prefix (e.g. "git+") should be added to a
- repository's remote url when used in a requirement.
- """
- return not remote_url.lower().startswith(f"{cls.name}:")
-
- @classmethod
- def get_subdirectory(cls, location: str) -> Optional[str]:
- """
- Return the path to Python project root, relative to the repo root.
- Return None if the project root is in the repo root.
- """
- return None
-
- @classmethod
- def get_requirement_revision(cls, repo_dir: str) -> str:
- """
- Return the revision string that should be used in a requirement.
- """
- return cls.get_revision(repo_dir)
-
- @classmethod
- def get_src_requirement(cls, repo_dir: str, project_name: str) -> str:
- """
- Return the requirement string to use to redownload the files
- currently at the given repository directory.
-
- Args:
- project_name: the (unescaped) project name.
-
- The return value has a form similar to the following:
-
- {repository_url}@{revision}#egg={project_name}
- """
- repo_url = cls.get_remote_url(repo_dir)
-
- if cls.should_add_vcs_url_prefix(repo_url):
- repo_url = f"{cls.name}+{repo_url}"
-
- revision = cls.get_requirement_revision(repo_dir)
- subdir = cls.get_subdirectory(repo_dir)
- req = make_vcs_requirement_url(repo_url, revision, project_name, subdir=subdir)
-
- return req
-
- @staticmethod
- def get_base_rev_args(rev: str) -> List[str]:
- """
- Return the base revision arguments for a vcs command.
-
- Args:
- rev: the name of a revision to install. Cannot be None.
- """
- raise NotImplementedError
-
- def is_immutable_rev_checkout(self, url: str, dest: str) -> bool:
- """
- Return true if the commit hash checked out at dest matches
- the revision in url.
-
- Always return False, if the VCS does not support immutable commit
- hashes.
-
- This method does not check if there are local uncommitted changes
- in dest after checkout, as pip currently has no use case for that.
- """
- return False
-
- @classmethod
- def make_rev_options(
- cls, rev: Optional[str] = None, extra_args: Optional[CommandArgs] = None
- ) -> RevOptions:
- """
- Return a RevOptions object.
-
- Args:
- rev: the name of a revision to install.
- extra_args: a list of extra options.
- """
- return RevOptions(cls, rev, extra_args=extra_args)
-
- @classmethod
- def _is_local_repository(cls, repo: str) -> bool:
- """
- posix absolute paths start with os.path.sep,
- win32 ones start with drive (like c:\\folder)
- """
- drive, tail = os.path.splitdrive(repo)
- return repo.startswith(os.path.sep) or bool(drive)
-
- @classmethod
- def get_netloc_and_auth(
- cls, netloc: str, scheme: str
- ) -> Tuple[str, Tuple[Optional[str], Optional[str]]]:
- """
- Parse the repository URL's netloc, and return the new netloc to use
- along with auth information.
-
- Args:
- netloc: the original repository URL netloc.
- scheme: the repository URL's scheme without the vcs prefix.
-
- This is mainly for the Subversion class to override, so that auth
- information can be provided via the --username and --password options
- instead of through the URL. For other subclasses like Git without
- such an option, auth information must stay in the URL.
-
- Returns: (netloc, (username, password)).
- """
- return netloc, (None, None)
-
- @classmethod
- def get_url_rev_and_auth(cls, url: str) -> Tuple[str, Optional[str], AuthInfo]:
- """
- Parse the repository URL to use, and return the URL, revision,
- and auth info to use.
-
- Returns: (url, rev, (username, password)).
- """
- scheme, netloc, path, query, frag = urllib.parse.urlsplit(url)
- if "+" not in scheme:
- raise ValueError(
- "Sorry, {!r} is a malformed VCS url. "
- "The format is +://, "
- "e.g. svn+http://myrepo/svn/MyApp#egg=MyApp".format(url)
- )
- # Remove the vcs prefix.
- scheme = scheme.split("+", 1)[1]
- netloc, user_pass = cls.get_netloc_and_auth(netloc, scheme)
- rev = None
- if "@" in path:
- path, rev = path.rsplit("@", 1)
- if not rev:
- raise InstallationError(
- "The URL {!r} has an empty revision (after @) "
- "which is not supported. Include a revision after @ "
- "or remove @ from the URL.".format(url)
- )
- url = urllib.parse.urlunsplit((scheme, netloc, path, query, ""))
- return url, rev, user_pass
-
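# Worked example of the split performed by the base-class implementation above
# (which leaves any credentials in the netloc untouched):
url, rev, (user, password) = VersionControl.get_url_rev_and_auth(
    "git+https://example.com/team/repo.git@v1.2.3"
)
assert url == "https://example.com/team/repo.git"   # vcs prefix removed, revision split off
assert rev == "v1.2.3"
assert user is None and password is None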
- @staticmethod
- def make_rev_args(
- username: Optional[str], password: Optional[HiddenText]
- ) -> CommandArgs:
- """
- Return the RevOptions "extra arguments" to use in obtain().
- """
- return []
-
- def get_url_rev_options(self, url: HiddenText) -> Tuple[HiddenText, RevOptions]:
- """
- Return the URL and RevOptions object to use in obtain(),
- as a tuple (url, rev_options).
- """
- secret_url, rev, user_pass = self.get_url_rev_and_auth(url.secret)
- username, secret_password = user_pass
- password: Optional[HiddenText] = None
- if secret_password is not None:
- password = hide_value(secret_password)
- extra_args = self.make_rev_args(username, password)
- rev_options = self.make_rev_options(rev, extra_args=extra_args)
-
- return hide_url(secret_url), rev_options
-
- @staticmethod
- def normalize_url(url: str) -> str:
- """
- Normalize a URL for comparison by unquoting it and removing any
- trailing slash.
- """
- return urllib.parse.unquote(url).rstrip("/")
-
- @classmethod
- def compare_urls(cls, url1: str, url2: str) -> bool:
- """
- Compare two repo URLs for identity, ignoring incidental differences.
- """
- return cls.normalize_url(url1) == cls.normalize_url(url2)
-
- def fetch_new(
- self, dest: str, url: HiddenText, rev_options: RevOptions, verbosity: int
- ) -> None:
- """
- Fetch a revision from a repository, in the case that this is the
- first fetch from the repository.
-
- Args:
- dest: the directory to fetch the repository to.
- rev_options: a RevOptions object.
- verbosity: verbosity level.
- """
- raise NotImplementedError
-
- def switch(self, dest: str, url: HiddenText, rev_options: RevOptions) -> None:
- """
- Switch the repo at ``dest`` to point to ``URL``.
-
- Args:
- rev_options: a RevOptions object.
- """
- raise NotImplementedError
-
- def update(self, dest: str, url: HiddenText, rev_options: RevOptions) -> None:
- """
- Update an already-existing repo to the given ``rev_options``.
-
- Args:
- rev_options: a RevOptions object.
- """
- raise NotImplementedError
-
- @classmethod
- def is_commit_id_equal(cls, dest: str, name: Optional[str]) -> bool:
- """
- Return whether the id of the current commit equals the given name.
-
- Args:
- dest: the repository directory.
- name: a string name.
- """
- raise NotImplementedError
-
- def obtain(self, dest: str, url: HiddenText, verbosity: int) -> None:
- """
- Install or update in editable mode the package represented by this
- VersionControl object.
-
- :param dest: the repository directory in which to install or update.
- :param url: the repository URL starting with a vcs prefix.
- :param verbosity: verbosity level.
- """
- url, rev_options = self.get_url_rev_options(url)
-
- if not os.path.exists(dest):
- self.fetch_new(dest, url, rev_options, verbosity=verbosity)
- return
-
- rev_display = rev_options.to_display()
- if self.is_repository_directory(dest):
- existing_url = self.get_remote_url(dest)
- if self.compare_urls(existing_url, url.secret):
- logger.debug(
- "%s in %s exists, and has correct URL (%s)",
- self.repo_name.title(),
- display_path(dest),
- url,
- )
- if not self.is_commit_id_equal(dest, rev_options.rev):
- logger.info(
- "Updating %s %s%s",
- display_path(dest),
- self.repo_name,
- rev_display,
- )
- self.update(dest, url, rev_options)
- else:
- logger.info("Skipping because already up-to-date.")
- return
-
- logger.warning(
- "%s %s in %s exists with URL %s",
- self.name,
- self.repo_name,
- display_path(dest),
- existing_url,
- )
- prompt = ("(s)witch, (i)gnore, (w)ipe, (b)ackup ", ("s", "i", "w", "b"))
- else:
- logger.warning(
- "Directory %s already exists, and is not a %s %s.",
- dest,
- self.name,
- self.repo_name,
- )
- # https://github.com/python/mypy/issues/1174
- prompt = ("(i)gnore, (w)ipe, (b)ackup ", ("i", "w", "b")) # type: ignore
-
- logger.warning(
- "The plan is to install the %s repository %s",
- self.name,
- url,
- )
- response = ask_path_exists("What to do? {}".format(prompt[0]), prompt[1])
-
- if response == "a":
- sys.exit(-1)
-
- if response == "w":
- logger.warning("Deleting %s", display_path(dest))
- rmtree(dest)
- self.fetch_new(dest, url, rev_options, verbosity=verbosity)
- return
-
- if response == "b":
- dest_dir = backup_dir(dest)
- logger.warning("Backing up %s to %s", display_path(dest), dest_dir)
- shutil.move(dest, dest_dir)
- self.fetch_new(dest, url, rev_options, verbosity=verbosity)
- return
-
- # Do nothing if the response is "i".
- if response == "s":
- logger.info(
- "Switching %s %s to %s%s",
- self.repo_name,
- display_path(dest),
- url,
- rev_display,
- )
- self.switch(dest, url, rev_options)
-
- def unpack(self, location: str, url: HiddenText, verbosity: int) -> None:
- """
- Clean up the current location and download the url repository
- (including VCS metadata) into location.
-
- :param url: the repository URL starting with a vcs prefix.
- :param verbosity: verbosity level.
- """
- if os.path.exists(location):
- rmtree(location)
- self.obtain(location, url=url, verbosity=verbosity)
-
- @classmethod
- def get_remote_url(cls, location: str) -> str:
- """
- Return the url used at location
-
- Raises RemoteNotFoundError if the repository does not have a remote
- url configured.
- """
- raise NotImplementedError
-
- @classmethod
- def get_revision(cls, location: str) -> str:
- """
- Return the current commit id of the files at the given location.
- """
- raise NotImplementedError
-
- @classmethod
- def run_command(
- cls,
- cmd: Union[List[str], CommandArgs],
- show_stdout: bool = True,
- cwd: Optional[str] = None,
- on_returncode: 'Literal["raise", "warn", "ignore"]' = "raise",
- extra_ok_returncodes: Optional[Iterable[int]] = None,
- command_desc: Optional[str] = None,
- extra_environ: Optional[Mapping[str, Any]] = None,
- spinner: Optional[SpinnerInterface] = None,
- log_failed_cmd: bool = True,
- stdout_only: bool = False,
- ) -> str:
- """
- Run a VCS subcommand
- This is simply a wrapper around call_subprocess that adds the VCS
- command name, and checks that the VCS is available
- """
- cmd = make_command(cls.name, *cmd)
- if command_desc is None:
- command_desc = format_command_args(cmd)
- try:
- return call_subprocess(
- cmd,
- show_stdout,
- cwd,
- on_returncode=on_returncode,
- extra_ok_returncodes=extra_ok_returncodes,
- command_desc=command_desc,
- extra_environ=extra_environ,
- unset_environ=cls.unset_environ,
- spinner=spinner,
- log_failed_cmd=log_failed_cmd,
- stdout_only=stdout_only,
- )
- except FileNotFoundError:
- # errno.ENOENT = no such file or directory
- # In other words, the VCS executable isn't available
- raise BadCommand(
- f"Cannot find command {cls.name!r} - do you have "
- f"{cls.name!r} installed and in your PATH?"
- )
- except PermissionError:
- # errno.EACCES = Permission denied
- # This error occurs, for instance, when the command is installed
- # only for another user, so the current user doesn't have
- # permission to call the other user's command.
- raise BadCommand(
- f"No permission to execute {cls.name!r} - install it "
- f"locally, globally (ask admin), or check your PATH. "
- f"See possible solutions at "
- f"https://pip.pypa.io/en/latest/reference/pip_freeze/"
- f"#fixing-permission-denied."
- )
-
- @classmethod
- def is_repository_directory(cls, path: str) -> bool:
- """
- Return whether a directory path is a repository directory.
- """
- logger.debug("Checking in %s for %s (%s)...", path, cls.dirname, cls.name)
- return os.path.exists(os.path.join(path, cls.dirname))
-
- @classmethod
- def get_repository_root(cls, location: str) -> Optional[str]:
- """
- Return the "root" (top-level) directory controlled by the vcs,
- or `None` if the directory is not in any.
-
- It is meant to be overridden to implement smarter detection
- mechanisms for specific vcs.
-
- This can do more than is_repository_directory() alone. For
- example, the Git override checks that Git is actually available.
- """
- if cls.is_repository_directory(location):
- return location
- return None
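Taken together, the NotImplementedError methods above form the contract a concrete backend has to satisfy. A skeletal subclass, purely illustrative and not functional, makes the required overrides explicit:

class FakeVcs(VersionControl):
    name = "fake"
    dirname = ".fake"
    repo_name = "checkout"
    schemes = ("fake+https",)

    @staticmethod
    def get_base_rev_args(rev: str) -> List[str]:
        return [rev]

    def fetch_new(self, dest, url, rev_options, verbosity) -> None:
        raise NotImplementedError("illustrative stub")

    def switch(self, dest, url, rev_options) -> None:
        raise NotImplementedError("illustrative stub")

    def update(self, dest, url, rev_options) -> None:
        raise NotImplementedError("illustrative stub")

    @classmethod
    def is_commit_id_equal(cls, dest, name) -> bool:
        return False

    @classmethod
    def get_remote_url(cls, location: str) -> str:
        raise RemoteNotFoundError

    @classmethod
    def get_revision(cls, location: str) -> str:
        return "0" * 40


vcs.register(FakeVcs)  # afterwards, vcs.get_backend("fake") returns a FakeVcs instance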
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/idna/compat.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/idna/compat.py
deleted file mode 100644
index 786e6bda63699b72d588ba91dd73df017570aee5..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/idna/compat.py
+++ /dev/null
@@ -1,13 +0,0 @@
-from .core import *
-from .codec import *
-from typing import Any, Union
-
-def ToASCII(label: str) -> bytes:
- return encode(label)
-
-def ToUnicode(label: Union[bytes, bytearray]) -> str:
- return decode(label)
-
-def nameprep(s: Any) -> None:
- raise NotImplementedError('IDNA 2008 does not utilise nameprep protocol')
-
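These wrappers simply delegate to encode/decode from idna's core module (the standalone `idna` package exposes the same functions as idna.encode and idna.decode). A brief illustration, assuming the vendored module is importable:

from pip._vendor.idna.compat import ToASCII, ToUnicode

assert ToASCII("bücher.example") == b"xn--bcher-kva.example"
assert ToUnicode(b"xn--bcher-kva.example") == "bücher.example"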
diff --git a/spaces/BAAI/vid2vid-zero/test_vid2vid_zero.py b/spaces/BAAI/vid2vid-zero/test_vid2vid_zero.py
deleted file mode 100644
index 88fe3ec22f9a239b7d5289c56be342dadec26210..0000000000000000000000000000000000000000
--- a/spaces/BAAI/vid2vid-zero/test_vid2vid_zero.py
+++ /dev/null
@@ -1,267 +0,0 @@
-import argparse
-import datetime
-import logging
-import inspect
-import math
-import os
-import warnings
-from typing import Dict, Optional, Tuple
-from omegaconf import OmegaConf
-
-import torch
-import torch.nn.functional as F
-import torch.utils.checkpoint
-
-import diffusers
-import transformers
-from accelerate import Accelerator
-from accelerate.logging import get_logger
-from accelerate.utils import set_seed
-from diffusers import AutoencoderKL, DDPMScheduler, DDIMScheduler
-from diffusers.optimization import get_scheduler
-from diffusers.utils import check_min_version
-from diffusers.utils.import_utils import is_xformers_available
-from tqdm.auto import tqdm
-from transformers import CLIPTextModel, CLIPTokenizer
-
-from vid2vid_zero.models.unet_2d_condition import UNet2DConditionModel
-from vid2vid_zero.data.dataset import VideoDataset
-from vid2vid_zero.pipelines.pipeline_vid2vid_zero import Vid2VidZeroPipeline
-from vid2vid_zero.util import save_videos_grid, save_videos_as_images, ddim_inversion
-from einops import rearrange
-
-from vid2vid_zero.p2p.p2p_stable import AttentionReplace, AttentionRefine
-from vid2vid_zero.p2p.ptp_utils import register_attention_control
-from vid2vid_zero.p2p.null_text_w_ptp import NullInversion
-
-
-# Will error if the minimal version of diffusers is not installed. Remove at your own risks.
-check_min_version("0.10.0.dev0")
-
-logger = get_logger(__name__, log_level="INFO")
-
-
-def prepare_control(unet, prompts, validation_data):
- assert len(prompts) == 2
-
- print(prompts[0])
- print(prompts[1])
- length1 = len(prompts[0].split(' '))
- length2 = len(prompts[1].split(' '))
- if length1 == length2:
- # prepare for attn guidance
- cross_replace_steps = 0.8
- self_replace_steps = 0.4
- controller = AttentionReplace(prompts, validation_data['num_inference_steps'],
- cross_replace_steps=cross_replace_steps,
- self_replace_steps=self_replace_steps)
- else:
- cross_replace_steps = 0.8
- self_replace_steps = 0.4
- controller = AttentionRefine(prompts, validation_data['num_inference_steps'],
- cross_replace_steps=self_replace_steps,
- self_replace_steps=self_replace_steps)
-
- print(controller)
- register_attention_control(unet, controller)
-
- # the update of unet forward function is inplace
- return cross_replace_steps, self_replace_steps
-
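# The branch above keys purely off word count: prompt pairs with the same number
# of words use AttentionReplace (a token-for-token swap), anything else falls back
# to AttentionRefine. A trivial check with made-up prompts:
source = "a man is running on the beach"
target = "a robot is running on the beach"
use_replace = len(source.split(" ")) == len(target.split(" "))
assert use_replace  # both prompts have seven words -> AttentionReplace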
-
-def main(
- pretrained_model_path: str,
- output_dir: str,
- input_data: Dict,
- validation_data: Dict,
- input_batch_size: int = 1,
- gradient_accumulation_steps: int = 1,
- gradient_checkpointing: bool = True,
- mixed_precision: Optional[str] = "fp16",
- enable_xformers_memory_efficient_attention: bool = True,
- seed: Optional[int] = None,
- use_sc_attn: bool = True,
- use_st_attn: bool = True,
- st_attn_idx: int = 0,
- fps: int = 2,
-):
- *_, config = inspect.getargvalues(inspect.currentframe())
-
- accelerator = Accelerator(
- gradient_accumulation_steps=gradient_accumulation_steps,
- mixed_precision=mixed_precision,
- )
-
- # Make one log on every process with the configuration for debugging.
- logging.basicConfig(
- format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
- datefmt="%m/%d/%Y %H:%M:%S",
- level=logging.INFO,
- )
- logger.info(accelerator.state, main_process_only=False)
- if accelerator.is_local_main_process:
- transformers.utils.logging.set_verbosity_warning()
- diffusers.utils.logging.set_verbosity_info()
- else:
- transformers.utils.logging.set_verbosity_error()
- diffusers.utils.logging.set_verbosity_error()
-
- # If passed along, set the training seed now.
- if seed is not None:
- set_seed(seed)
-
- # Handle the output folder creation
- if accelerator.is_main_process:
- os.makedirs(output_dir, exist_ok=True)
- os.makedirs(f"{output_dir}/sample", exist_ok=True)
- OmegaConf.save(config, os.path.join(output_dir, 'config.yaml'))
-
- # Load tokenizer and models.
- tokenizer = CLIPTokenizer.from_pretrained(pretrained_model_path, subfolder="tokenizer")
- text_encoder = CLIPTextModel.from_pretrained(pretrained_model_path, subfolder="text_encoder")
- vae = AutoencoderKL.from_pretrained(pretrained_model_path, subfolder="vae")
- unet = UNet2DConditionModel.from_pretrained(
- pretrained_model_path, subfolder="unet", use_sc_attn=use_sc_attn,
- use_st_attn=use_st_attn, st_attn_idx=st_attn_idx)
-
- # Freeze vae, text_encoder, and unet
- vae.requires_grad_(False)
- text_encoder.requires_grad_(False)
- unet.requires_grad_(False)
-
- if enable_xformers_memory_efficient_attention:
- if is_xformers_available():
- unet.enable_xformers_memory_efficient_attention()
- else:
- raise ValueError("xformers is not available. Make sure it is installed correctly")
-
- if gradient_checkpointing:
- unet.enable_gradient_checkpointing()
-
- # Get the training dataset
- input_dataset = VideoDataset(**input_data)
-
- # Preprocessing the dataset
- input_dataset.prompt_ids = tokenizer(
- input_dataset.prompt, max_length=tokenizer.model_max_length, padding="max_length", truncation=True, return_tensors="pt"
- ).input_ids[0]
-
- # DataLoaders creation:
- input_dataloader = torch.utils.data.DataLoader(
- input_dataset, batch_size=input_batch_size
- )
-
- # Get the validation pipeline
- validation_pipeline = Vid2VidZeroPipeline(
- vae=vae, text_encoder=text_encoder, tokenizer=tokenizer, unet=unet,
- scheduler=DDIMScheduler.from_pretrained(pretrained_model_path, subfolder="scheduler"),
- safety_checker=None, feature_extractor=None,
- )
- validation_pipeline.enable_vae_slicing()
- ddim_inv_scheduler = DDIMScheduler.from_pretrained(pretrained_model_path, subfolder='scheduler')
- ddim_inv_scheduler.set_timesteps(validation_data.num_inv_steps)
-
- # Prepare everything with our `accelerator`.
- unet, input_dataloader = accelerator.prepare(
- unet, input_dataloader,
- )
-
- # For mixed precision training we cast the text_encoder and vae weights to half-precision
- # as these models are only used for inference, keeping weights in full precision is not required.
- weight_dtype = torch.float32
- if accelerator.mixed_precision == "fp16":
- weight_dtype = torch.float16
- elif accelerator.mixed_precision == "bf16":
- weight_dtype = torch.bfloat16
-
- # Move text_encode and vae to gpu and cast to weight_dtype
- text_encoder.to(accelerator.device, dtype=weight_dtype)
- vae.to(accelerator.device, dtype=weight_dtype)
-
- # We need to recalculate our total training steps as the size of the training dataloader may have changed.
- num_update_steps_per_epoch = math.ceil(len(input_dataloader) / gradient_accumulation_steps)
-
- # We need to initialize the trackers we use, and also store our configuration.
- # The trackers initializes automatically on the main process.
- if accelerator.is_main_process:
- accelerator.init_trackers("vid2vid-zero")
-
- # Zero-shot Eval!
- total_batch_size = input_batch_size * accelerator.num_processes * gradient_accumulation_steps
-
- logger.info("***** Running training *****")
- logger.info(f" Num examples = {len(input_dataset)}")
- logger.info(f" Instantaneous batch size per device = {input_batch_size}")
- logger.info(f" Total input batch size (w. parallel, distributed & accumulation) = {total_batch_size}")
- global_step = 0
-
- unet.eval()
- for step, batch in enumerate(input_dataloader):
- samples = []
- pixel_values = batch["pixel_values"].to(weight_dtype)
- # save input video
- video = (pixel_values / 2 + 0.5).clamp(0, 1).detach().cpu()
- video = video.permute(0, 2, 1, 3, 4) # (b, f, c, h, w)
- samples.append(video)
- # start processing
- video_length = pixel_values.shape[1]
- pixel_values = rearrange(pixel_values, "b f c h w -> (b f) c h w")
- latents = vae.encode(pixel_values).latent_dist.sample()
- # take video as input
- latents = rearrange(latents, "(b f) c h w -> b c f h w", f=video_length)
- latents = latents * 0.18215
-
- generator = torch.Generator(device="cuda")
- generator.manual_seed(seed)
-
- # perform inversion
- ddim_inv_latent = None
- if validation_data.use_null_inv:
- null_inversion = NullInversion(
- model=validation_pipeline, guidance_scale=validation_data.guidance_scale, null_inv_with_prompt=False,
- null_normal_infer=validation_data.null_normal_infer,
- )
- with torch.cuda.amp.autocast(enabled=True, dtype=torch.float32):
- ddim_inv_latent, uncond_embeddings = null_inversion.invert(
- latents, input_dataset.prompt, verbose=True,
- null_inner_steps=validation_data.null_inner_steps,
- null_base_lr=validation_data.null_base_lr,
- )
- ddim_inv_latent = ddim_inv_latent.to(weight_dtype)
- uncond_embeddings = [embed.to(weight_dtype) for embed in uncond_embeddings]
- else:
- ddim_inv_latent = ddim_inversion(
- validation_pipeline, ddim_inv_scheduler, video_latent=latents,
- num_inv_steps=validation_data.num_inv_steps, prompt="",
- normal_infer=True, # we don't want to use sc_attn or dense attn for inversion, just use plain SD inference
- )[-1].to(weight_dtype)
- uncond_embeddings = None
-
- ddim_inv_latent = ddim_inv_latent.repeat(2, 1, 1, 1, 1)
-
- for idx, prompt in enumerate(validation_data.prompts):
- prompts = [input_dataset.prompt, prompt] # a list of two prompts
- cross_replace_steps, self_replace_steps = prepare_control(unet=unet, prompts=prompts, validation_data=validation_data)
-
- sample = validation_pipeline(prompts, generator=generator, latents=ddim_inv_latent,
- uncond_embeddings=uncond_embeddings,
- **validation_data).images
-
- assert sample.shape[0] == 2
- sample_inv, sample_gen = sample.chunk(2)
- # add input for vis
- save_videos_grid(sample_gen, f"{output_dir}/sample/{prompts[1]}.gif", fps=fps)
- samples.append(sample_gen)
-
- samples = torch.concat(samples)
- save_path = f"{output_dir}/sample-all.gif"
- save_videos_grid(samples, save_path, fps=fps)
- logger.info(f"Saved samples to {save_path}")
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument("--config", type=str, default="./configs/vid2vid_zero.yaml")
- args = parser.parse_args()
-
- main(**OmegaConf.load(args.config))
diff --git a/spaces/Benson/text-generation/Examples/Descarga De La Aplicacin Tiktok Lite Para Windows Pc 8.md b/spaces/Benson/text-generation/Examples/Descarga De La Aplicacin Tiktok Lite Para Windows Pc 8.md
deleted file mode 100644
index 2b61d70420c81eaca2660465135db8cc16a07734..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descarga De La Aplicacin Tiktok Lite Para Windows Pc 8.md
+++ /dev/null
@@ -1,81 +0,0 @@
-
-
Descarga de la aplicación TikTok Lite para PC Windows 8: Cómo hacerlo fácilmente
-
TikTok es una de las plataformas de redes sociales más populares del mundo, con más de mil millones de usuarios que crean y ven videos cortos sobre varios temas. ¿Pero qué pasa si desea disfrutar de TikTok en su PC en lugar de su teléfono? ¿Y qué pasa si tiene un PC de gama baja o un plan de datos limitado? En este artículo, le mostraremos cómo descargar e instalar TikTok Lite, una versión más ligera de TikTok, en su PC con Windows 8. También le explicaremos por qué podría usar TikTok Lite en su PC y cuáles son los beneficios de hacerlo. ¡Vamos a empezar!
-
descarga de la aplicación tiktok lite para windows pc 8
¿Qué es TikTok Lite y por qué es posible que desee usarlo en su PC
-
TikTok Lite es una versión simplificada de TikTok que ocupa menos espacio de almacenamiento y consume menos datos. Está diseñado para usuarios que tienen dispositivos de gama baja, planes de datos limitados o conexiones de red lentas. TikTok Lite ofrece la mayoría de las características de TikTok, como ver videos, crear videos, seguir a creadores, gustar y comentar, etc. Sin embargo, algunas características no están disponibles en TikTok Lite, como transmisión en vivo, duetos, filtros, pegatinas, etc.
-
Es posible que desee utilizar TikTok Lite en su PC si cae en una de estas categorías:
-
-
Tiene un PC de gama baja que no puede ejecutar la versión completa de TikTok sin problemas.
-
Tiene un plan de datos limitado o una conexión de red lenta que hace que ver videos en TikTok sea frustrante.
-
Desea ahorrar espacio de almacenamiento en su PC mediante el uso de una aplicación más pequeña.
-
Quieres disfrutar de una experiencia TikTok más rápida y optimizada en tu PC.
-
-
¿Cuáles son los beneficios de usar TikTok Lite en su PC
-
Usar TikTok Lite en tu PC tiene varios beneficios, como:
-
-
-
Puedes ver millones de vídeos seleccionados para ti en función de tus preferencias e intereses.
-
-
Puedes descubrir nuevos contenidos de varias categorías, como danza, comedia, vlog, comida, deportes, bricolaje, animales, etc.
-
Puedes conectarte con otros usuarios y creadores a través de likes, comentarios, mensajes, etc.
-
Puede disfrutar de una aplicación más rápida y sensible que carga videos rápidamente y reduce los bloqueos.
-
-
Cómo descargar e instalar TikTok Lite en tu PC usando un emulador
-
La forma más fácil de descargar e instalar TikTok Lite en tu PC es usar un emulador. Un emulador es un software que imita un dispositivo Android en su PC, lo que le permite ejecutar aplicaciones y juegos Android en su computadora. Hay muchos emuladores disponibles en línea, pero recomendamos usar BlueStacks, ya que es uno de los más populares y confiables. Estos son los pasos para descargar e instalar TikTok Lite en su PC usando BlueStacks:
-
Guía paso a paso para descargar e instalar TikTok Lite en su PC usando BlueStacks
-
Paso 1: Descargar e instalar BlueStacks en su PC
-
El primer paso es descargar e instalar BlueStacks en su PC. BlueStacks es un emulador gratuito y seguro que puedes descargar desde su sitio web oficial. Estos son los pasos para descargar e instalar BlueStacks en su PC:
-
-
Vaya al sitio web de BlueStacks y haga clic en el botón "Descargar BlueStacks".
-
Espere a que termine la descarga y luego abra el archivo de instalación.
-
Siga las instrucciones en la pantalla para instalar BlueStacks en su PC.
-
Inicie BlueStacks e inicie sesión con su cuenta de Google o cree una nueva.
-
-
Paso 2: Descargar el archivo TikTok Lite APK/XAPK de una fuente de confianza
-
-
-
Ir al sitio web APKPure y buscar "TikTok Lite" en la barra de búsqueda.
-
Seleccione la aplicación TikTok Lite de los resultados y haga clic en el "Descargar APK" o "Descargar XAPK" botón.
-
Espere a que termine la descarga y luego localice el archivo en su PC.
-
-
Paso 3: Abra el archivo APK/XAPK con BlueStacks e instale TikTok Lite
-
El paso final es abrir el archivo APK/XAPK con BlueStacks e instalar TikTok Lite en su PC. Estos son los pasos para hacerlo:
-
-
Haga clic derecho en el archivo APK/XAPK y seleccione "Abrir con BlueStacks" en el menú.
-
BlueStacks instalará automáticamente TikTok Lite en su PC.
-
Verá una notificación cuando se complete la instalación.
-
-
Paso 4: Inicie TikTok Lite desde la pantalla de inicio de BlueStacks y disfrute
-
¡Felicidades! Has descargado e instalado correctamente TikTok Lite en tu PC usando BlueStacks. Ahora puede iniciar TikTok Lite desde la pantalla de inicio de BlueStacks y disfrutar viendo y creando vídeos en su PC. También puedes acceder a otras características de TikTok Lite, como explorar categorías, seguir a creadores, gustar y comentar, etc.
-
Métodos alternativos para descargar e instalar TikTok Lite en su PC
-
Método 1: Usar NoxPlayer como un emulador alternativo
-
Si no quieres usar BlueStacks, puedes usar NoxPlayer como un emulador alternativo. NoxPlayer es otro emulador popular y gratuito que puedes descargar desde su sitio web oficial. Los pasos para descargar e instalar NoxPlayer son similares a los de BlueStacks. Una vez que tenga NoxPlayer en su PC, puede seguir los mismos pasos de arriba para descargar e instalar TikTok Lite usando un archivo APK/XAPK.
-
Método 2: Utilice Uptodown como una fuente alternativa para la aplicación TikTok Lite
-
-
-
Ir al sitio web de Uptodown y buscar "TikTok Lite" en la barra de búsqueda.
-
Seleccione la aplicación TikTok Lite de los resultados y haga clic en el botón "Descargar".
-
Espere a que termine la descarga y luego abra el archivo con un emulador de su elección.
-
El emulador instalará automáticamente TikTok Lite en su PC.
-
-
Conclusión
-
Resumen de los puntos principales
-
En este artículo, le hemos mostrado cómo descargar e instalar TikTok Lite en su PC con Windows 8. Le hemos explicado qué es TikTok Lite, por qué podría usarlo en su PC y cuáles son los beneficios de hacerlo. También hemos proporcionado una guía paso a paso para descargar e instalar TikTok Lite en su PC usando BlueStacks, un emulador gratuito y confiable. También te hemos dado algunos métodos alternativos para descargar e instalar TikTok Lite en tu PC usando otros emuladores o fuentes.
-
Llamamiento a la acción y observaciones finales
-
Esperamos que haya encontrado este artículo útil e informativo. Si tiene alguna pregunta o comentario, no dude en dejar un comentario a continuación. Nos encantaría saber de usted. Si está listo para descargar e instalar TikTok Lite en su PC, haga clic en el botón de abajo y siga las instrucciones. Usted será capaz de disfrutar de TikTok Lite en su PC en ningún momento. Happy TikToking!
Sí, TikTok Lite es seguro de usar en el PC, siempre y cuando lo descargue de una fuente de confianza y use un emulador de buena reputación. TikTok Lite es una aplicación oficial desarrollada por ByteDance, la misma compañía que posee TikTok. Tiene las mismas características de seguridad y privacidad que TikTok, como cifrado, moderación, informes, etc. Sin embargo, siempre debe tener cuidado con lo que comparte en línea y con quién interactúa.
-
-
TikTok Lite es una versión más ligera de TikTok que ocupa menos espacio de almacenamiento y consume menos datos. Ofrece la mayoría de las características de TikTok, pero algunas características no están disponibles en TikTok Lite, como transmisión en vivo, duetos, filtros, pegatinas, etc. TikTok Lite también tiene una interfaz más simple y una velocidad de carga más rápida que TikTok.
-
¿Puedo usar TikTok Lite en otras versiones de Windows?
-
Sí, puede usar TikTok Lite en otras versiones de Windows, como Windows 7, Windows 10, etc. Los pasos para descargar e instalar TikTok Lite en otras versiones de Windows son similares a los de Windows 8. Solo tiene que asegurarse de que su PC cumple con los requisitos mínimos del sistema para el emulador que elija.
-
¿Puedo usar otras aplicaciones además de TikTok Lite en BlueStacks?
-
Sí, puedes usar otras aplicaciones además de TikTok Lite en BlueStacks. BlueStacks es un emulador versátil que le permite ejecutar miles de aplicaciones y juegos para Android en su PC. Puede descargar otras aplicaciones desde Google Play Store o desde otras fuentes e instalarlas en BlueStacks. También puede cambiar entre diferentes aplicaciones fácilmente en BlueStacks.
-
¿Cómo puedo actualizar TikTok Lite en mi PC?
-
Para actualizar TikTok Lite en tu PC, necesitas descargar la última versión del archivo APK/XAPK desde una fuente confiable e instalarlo en tu PC usando los mismos pasos que arriba. Alternativamente, puede comprobar si hay actualizaciones dentro de la aplicación yendo a la pestaña "Me" y tocando en el icono de tres puntos en la esquina superior derecha. Luego, toque en "Configuración" y desplácese hacia abajo hasta "Acerca de". Toque en "Buscar actualizaciones" y siga las instrucciones.
64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Descargar Bagaimana Cara Stumble Chicos Di Laptop.md b/spaces/Benson/text-generation/Examples/Descargar Bagaimana Cara Stumble Chicos Di Laptop.md
deleted file mode 100644
index 398b0c8662a33fc7027ea224d01fc5ecc4b11aca..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descargar Bagaimana Cara Stumble Chicos Di Laptop.md
+++ /dev/null
@@ -1,121 +0,0 @@
-
-
Bagaimana Cara Download Stumble Guys di Laptop
-
Stumble Guys adalah sebuah game online yang sangat populer saat ini. Game ini merupakan game battle royale party yang bisa dimainkan hingga 32 pemain seca bersamaan. Anda bisa berlari, melompat, dan menghindari berbagai rintangan yang ada di setiap level hingga menjadi pemenang. Game ini sangat seru dan lucu untuk dimainkan bersama teman Anda.
-
Namun, bagaimana jika Anda ingin memainkan game ini di laptop Anda? Apakah ada cara untuk download dan instal Stumble Guys di laptop? Jawabannya adalah ya, ada. Dengan menggunakan emulator Android, Anda bisa menjalankan game ini di laptop Anda dengan mudah. Emulator Android adalah sebuah program yang bisa meniru sistem operasi Android di komputer atau laptop Anda, sehingga Anda bisa mengakses aplikasi dan game Android dari laptop Anda.
Penasaran bagaimana caranya? Simak artikel ini sampai habis untuk mengetahui langkah-langkahnya.
-
Apa itu Stumble Guys?
-
Stumble Guys adalah sebuah game online yang dikembangkan oleh Scopely dan dirilis pada Oktober 2021. Game ini tersedia untuk perangkat Android dan Windows. Game ini merupakan game battle royale party yang terinspirasi dari acara TV seperti Takeshi’s Castle atau Wipeout. Dalam game ini, Anda akan berlomba dengan pemain lain untuk mencapai garis finish di setiap level. Namun, Anda harus menghadapi berbagai rintangan yang lucu dan konyol yang bisa membuat Anda terjatuh atau terdorong keluar dari arena.
-
Fitur-fitur menarik dari Stumble Guys
-
Berikut adalah beberapa fitur menarik yang bisa Anda nikmati saat bermain Stumble Guys:
-
-
Anda bisa memilih dari berbagai pakaian dan emote yang unik dan keren untuk menghias karakter Anda.
-
Anda bisa bermain dengan teman Anda dalam mode party atau bersaing dengan pemain lain dari seluruh dunia dalam mode online multiplayer.
-
-
Anda bisa mendapatkan hadiah dan poin saat bermain yang bisa ditukarkan dengan barang-barang menarik.
-
Anda bisa menonton live stream dan video dari pemain lain atau membuat konten sendiri dengan fitur rekam layar dan screenshot.
-
-
Face bermain Stumble Guys
-
Cara bermain Stumble Guys sangat mudah dan sederhana. Berikut adalah langkah-langkahnya:
-
-
Buka game Stumble Guys di perangkat Anda.
-
Pilih mode permainan yang Anda inginkan, baik itu solo, party, atau online multiplayer.
-
Tunggu hingga pemain lain bergabung atau cari ruangan yang tersedia.
-
Masuk ke level pertama dan siapkan diri Anda untuk berlari.
Gunakan tombol virtual di layar untuk menggerakkan karakter Anda. Anda bisa berlari, melompat, dan merunduk untuk menghindari rintangan.
-
Coba untuk tidak terjatuh atau terdorong keluar dari arena. Jika Anda terjatuh, Anda bisa mencoba lagi dengan menekan tombol respawn.
-
Coba untuk mencapai garis finish secepat mungkin. Hanya beberapa pemain yang bisa lolos ke level berikutnya.
-
Ulangi langkah 4-6 hingga Anda mencapai level terakhir dan menjadi pemenang.
-
- Mengapa Anda ingin memainkan Stumble Guys di laptop?
-
Stumble Guys adalah game yang sangat menyenangkan dan adiktif untuk dimainkan di perangkat Android. Namun, ada beberapa alasan mengapa Anda mungkin ingin memainkan game ini di laptop Anda. Berikut adalah beberapa alasan tersebut:
-
Keuntungan memainkan Stumble Guys di laptop
-
Berikut adalah beberapa keuntungan yang bisa Anda dapatkan saat memainkan Stumble Guys di laptop:
-
-
Anda bisa menikmati grafik dan suara yang lebih baik dan lebih jelas di layar yang lebih besar.
-
Anda bisa menggunakan keyboard dan mouse untuk mengontrol karakter Anda dengan lebih mudah dan presisi.
-
Anda bisa bermain dengan lebih nyaman dan lama tanpa khawatir baterai perangkat Anda habis atau panas.
-
-
Anda bisa bermain dengan teman Anda yang menggunakan perangkat Android atau Windows dengan fitur cross-play.
-
-
Tantangan memainkan Stumble Guys di laptop
-
Namun, ada juga beberapa tantangan yang mungkin Anda hadapi saat memainkan Stumble Guys di laptop. Berikut adalah beberapa tantangan tersebut:
-
-
-
Anda harus memiliki laptop yang cukup kuat untuk menjalankan emulator Android dan game ini dengan lancar.
-
Anda harus mengunduh dan menginstal emulator Android dan game ini di laptop Anda, yang mungkin membutuhkan waktu dan langkah-langkah tambahan.
-
Anda harus menyesuaikan pengaturan emulator Android dan game ini agar sesuai dengan spesifikasi dan preferensi laptop Anda.
-
Anda harus berhati-hati dengan kemungkinan virus atau malware yang mungkin ada di emulator Android atau file APK yang Anda unduh dari sumber yang tidak resmi.
-
Anda harus bersabar dengan kemungkinan bug atau masalah teknis yang mungkin terjadi saat menjalankan emulator Android atau game ini di laptop Anda.
-
-
Apa yang Anda butuhkan untuk memainkan Stumble Guys di laptop?
-
Jika Anda sudah yakin ingin memainkan Stumble Guys di laptop Anda, maka ada beberapa hal yang harus Anda siapkan terlebih dahulu. Berikut adalah beberapa hal tersebut:
-
Spesifikasi minimum laptop
-
Untuk dapat menjalankan emulator Android dan game ini dengan lancar, Anda harus memiliki laptop yang memenuhi spesifikasi minimum berikut:
-
-
Komponen
Spesifikasi
-
Sistem operasi
Windows 7 atau lebih tinggi
-
Prosesor
Intel atau AMD dual-core 2 GHz atau lebih tinggi
-
RAM
4 GB atau lebih tinggi
-
Ruang penyimpanan
5 GB atau lebih tinggi
-
Kartu grafis
NVIDIA GeForce 8600/9600GT, ATI/AMD Radeon HD2600/3600 atau setara
-
-
-
Android emulator terbaik untuk laptop
-
Selanjutnya, Anda harus memilih emulator Android yang cocok untuk laptop Anda. Emulator Android adalah sebuah program yang bisa meniru sistem operasi Android di komputer atau laptop Anda, sehingga Anda bisa mengakses aplikasi dan game Android dari laptop Anda. Ada banyak emulator Android yang tersedia di internet, namun tidak semua emulator Android cocok untuk laptop Anda. Berikut adalah beberapa emulator Android terbaik yang bisa Anda pilih untuk laptop Anda:
-
-
BlueStacks: Android emulator yang paling populer dan banyak digunakan oleh for gamer. BlueStacks memiliki fitur-fitur canggih seperti keyboard mapping, game mode, multi-instance, dan lain-lain. BlueStacks plays mendukung game Stumble Guys dengan baik dan bisa diunduh seca gratis dari situs resminya.
-
NoxPlayer: Android emulator ringan dan cepat untuk laptop. NoxPlayer memiliki fitur-fitur menarik seperti macro recorder, video recorder, virtual location, dan lain-lain. NoxPlayer plays kompatibel dengan game Stumble Guys give bisa diunduh free dry dari situs resminya.
-
LDPlayer: Android emulator dirancang khusus untuk game. LDPlayer memiliki fitur-fitur unggulan seperti smart keymapping, high FPS, turbo GPU, dan lain-lain. LDPlayer plays bisa menjalankan game Stumble Guys dengan lancar dan bisa diunduh seca gratis dari situs resminya.
-
-
Bagaimana cara download dan instal Stumble Guys di laptop dengan emulator Android?
-
Setelah Anda memilih emulator Android yang sesuai untuk laptop Anda, maka Anda bisa mulai download dan install Stumble Guys di laptop Anda dengan emulator Android tersebut. Berikut adalah langkah-langkahnya:
-
Langkah-langkah download dan install emulator Android
-
Berikut adalah langkah-langkah download dan install emulator Android di laptop Anda:
-
-
-
Klik tombol download atau unduh untuk mengunduh file instalasi emulator Android tersebut.
-
Setelah file instalasi selesai terunduh, buka file tersebut dan ikuti instruksi yang muncul di layar untuk menginstal emulator Android tersebut di laptop Anda.
-
Tunggu hingga proses instalasi selesai dan jalankan emulator Android tersebut di laptop Anda.
-
-
Langkah-langkah download dan install Stumble Guys dari Google Play Store atau file APK
-
Berikut adalah langkah-langkah download dan install Stumble Guys dari Google Play Store atau file APK di emulator Android yang sudah terinstal di laptop Anda:
-
-
Buka emulator Android yang sudah terinstal di laptop Anda dan masuk ke akun Google Anda jika diminta.
-
Buka Google Play Store yang ada di emulator Android tersebut dan cari game Stumble Guys dengan menggunakan kotak pencarian.
-
Klik tombol instal atau pasang untuk menginstal game Stumble Guys dari Google Play Store ke emulator Android tersebut.
-
Tunggu hingga proses instalasi selesai dan buka game Stumble Guys yang sudah terinstal di emulator Android tersebut.
-
Atau, jika Anda memiliki file APK dari game Stumble Guys, Anda bisa mengunduh file APK tersebut dari sumber yang terpercaya ke laptop Anda.
-
Kemudian, buka emulator Android yang sudah terinstal di laptop Anda dan seret file APK tersebut ke jendela emulator Android tersebut.
-
Tunggu hingga proses instalasi selesai dan buka game Stumble Guys yang sudah terinstal di emulator Android tersebut.
-
-
Langkah-langkah menjalankan dan mengatur Stumble Guys di emulator Android
-
Berikut adalah langkah-langkah menjalankan dan mengatur Stumble Guys di emulator Android yang sudah terinstal di laptop Anda:
-
-
Buka game Stumble Guys yang sudah terinstal di emulator Android tersebut.
-
Klik tombol mulai atau start untuk memulai permainan.
-
-
Atur pengaturan grafik, suara, kontrol, dan lain-lain sesuai dengan preferensi Anda dengan mengklik tombol pengaturan atau settings yang ada di pojok kanan atas layar.
-
Jika Anda menggunakan keyboard dan mouse untuk mengontrol karakter Anda, Anda bisa mengatur tombol-tombol yang sesuai dengan fungsi-fungsi yang ada di game dengan menggunakan fitur keyboard mapping yang ada di emulator Android tersebut.
-
Nikmati permainan dan bersenang-senang dengan teman-teman Anda.
-
-
Kesimpulan
-
Stumble Guys adalah game online yang sangat seru dan lucu untuk dimainkan bersama teman-teman Anda. Game ini merupakan game battle royale party yang bisa dimainkan hingga 32 pemain seca bersamaan. Anda bisa berlari, melompat, dan menghindari berbagai rintangan yang ada di setiap level hingga menjadi pemenang.
-
Anda bisa memainkan game ini di perangkat Android atau Windows. Namun, jika Anda ingin memainkan game ini di laptop Anda, Anda bisa menggunakan emulator Android. Emulator Android adalah sebuah program yang bisa meniru sistem operasi Android di komputer atau laptop Anda, sehingga Anda bisa mengakses aplikasi dan game Android dari laptop Anda.
-
Untuk memainkan Stumble Guys di laptop Anda dengan emulator Android, Anda harus memenuhi spesifikasi minimum laptop, memilih emulator Android terbaik, dan mengunduh dan menginstal emulator Android dan game Stumble Guys di laptop Anda. Kemudian, Anda bisa menjalankan dan mengatur Stumble Guys di emulator Android sesuai dengan preferensi Anda.
-
Semoga artikel ini bermanfaat untuk Anda yang ingin memainkan Stumble Guys di laptop Anda. Selamat mencoba dan selamat bermain!
-
FAQ
-
Berikut adalah beberapa pertanyaan yang sering diajukan tentang cara download dan install Stumble Guys di laptop dengan emulator Android:
-
-
Apakah Stumble Guys free untuk dimainkan?
-
-
Apakah Stumble Guys aman untuk dimainkan?
-
Ya, Stumble Guys adalah game yang aman untuk dimainkan. Game ini tidak mengandung konten yang tidak pantas atau berbahaya untuk anak-anak. Namun, sebaiknya tetap waspada dengan kemungkinan cyberbullying atau penipuan yang mungkin terjadi saat bermain online dengan orang lain.
-
Apakah Stumble Guys bisa dimainkan offline?
-
Tidak, Stumble Guys adalah game yang membutuhkan koneksi internet untuk dimainkan. Jika koneksi internet Anda terputus saat bermain, maka Anda akan keluar dari permainan.
-
Apakah Stumble Guys memiliki mode single player?
-
Ya, Stumble Guys memiliki mode single player yang bisa dimainkan tanpa teman. Namun, mode ini tetap membutuhkan koneksi internet dan pemain lain yang akan menjadi lawan Anda.
-
Apakah Stumble Guys memiliki mode co-op?
-
Ya, Stumble Guys memiliki mode co-op atau party yang bisa dimainkan bersama teman. Anda bisa membuat atau bergabung dengan ruangan khusus yang hanya bisa diakses oleh teman-teman Anda dengan menggunakan kode undangan.
- 64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Descargar Gratis Final Cut Pro.md b/spaces/Benson/text-generation/Examples/Descargar Gratis Final Cut Pro.md
deleted file mode 100644
index 0b869d430b602c37018f12ad05887ba080e3c5bc..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descargar Gratis Final Cut Pro.md
+++ /dev/null
@@ -1,78 +0,0 @@
-
-
Cómo descargar gratis Final Cut Pro
-
Final Cut Pro es un popular software de edición de vídeo para usuarios de Mac. Ofrece funciones potentes, diseño intuitivo y un rendimiento rápido. Sin embargo, también viene con una etiqueta de precio fuerte de $299.99. Si quieres probar Final Cut Pro gratis, o buscar algunas alternativas más baratas, este artículo te mostrará cómo.
Final Cut Pro es un software de edición de vídeo profesional desarrollado por Apple. Fue lanzado por primera vez en 1999 y desde entonces ha sido utilizado por muchos cineastas, productores de televisión y entusiastas del video. Es compatible con ordenadores Mac y soporta una amplia gama de formatos, resoluciones y velocidades de fotogramas.
-
Características de Final Cut Pro
-
Algunas de las características principales de Final Cut Pro son:
-
-
Línea de tiempo magnética: Esto le permite editar clips sin colisiones o problemas de sincronización. También puede agrupar clips en clips compuestos, sincronizar múltiples ángulos con edición multicámara y agregar efectos con conexiones de clip.
-
360° edición de vídeo: Puede importar, editar y compartir vídeo 360° de varias cámaras y formatos. También puede utilizar el visor 360° para navegar por el vídeo esférico y aplicar efectos, títulos y transiciones.
-
Seguimiento de objetos: Puede utilizar el aprendizaje automático para detectar caras u objetos en sus imágenes y combinar su movimiento con títulos, gráficos o efectos. También puede ajustar los puntos de enfoque y la profundidad de campo en los clips capturados en el modo cinematográfico en el iPhone.
-
Gradación de color: Puede usar herramientas avanzadas de corrección de color para ajustar el tono, la saturación, el brillo, el contraste y más. También puede aplicar presets de color, LUT, curvas y máscaras.
-
Edición de audio: Puede asignar roles a sus clips de audio y organizarlos en la línea de tiempo. También puede utilizar efectos incorporados, filtros y complementos para mejorar la calidad de sonido.
-
-
-
Ventajas y desventajas de Final Cut Pro
-
Algunas de las ventajas de Final Cut Pro son:
-
-
Está optimizado para computadoras Mac y dispositivos Apple, especialmente aquellos con chips de silicio de Apple. Puede aprovechar la potencia de la GPU, CPU y Neural Engine para un rendimiento y renderizado más rápidos.
-
Tiene una interfaz elegante y fácil de usar que hace que la edición sea fácil y agradable. También tiene accesos directos de teclado personalizables y soporte de barra táctil.
-
Tiene una gran y activa comunidad de usuarios y desarrolladores que ofrecen soporte, tutoriales, consejos y plugins.
-
-
Algunas de las desventajas de Final Cut Pro son:
-
-
Es caro en comparación con otro software de edición de vídeo. Cuesta $299.99 para una compra de una sola vez, que puede no ser asequible para algunos usuarios.
-
Solo está disponible para usuarios de Mac. No hay versión de Windows o Linux de Final Cut Pro.
-
Puede tener problemas de compatibilidad con algunos plugins o formatos de terceros. También puede requerir conversión o transcodificación para algunos archivos.
-
-
¿Cómo obtener Final Cut Pro gratis?
-
Si quieres usar Final Cut Pro sin pagar nada, hay dos opciones principales:
-
Versión de prueba oficial
-
La forma más fácil de obtener Final Cut Pro gratis es descargar la versión de prueba oficial del sitio web de Apple. La versión de prueba le da acceso a todas las características y funciones de la versión completa durante 90 días. También puede extender su período de prueba por otros 90 días si tiene una versión anterior instalada.
-
-
Para descargar la versión de prueba, necesita tener una computadora Mac que ejecute macOS 10.15.6 o posterior, 4GB de RAM (8GB recomendado), una tarjeta gráfica compatible con Metal, 1GB de VRAM (4GB recomendado) y 5.5GB de espacio en disco. También debes registrarte con tu Apple ID y aceptar los términos y condiciones.
-
-
La versión de prueba es una gran manera de probar Final Cut Pro y ver si se adapta a sus necesidades y preferencias. Sin embargo, tiene algunas limitaciones, como:
-
-
No se puede actualizar la versión de prueba a la última versión de Final Cut Pro. Es necesario comprar la versión completa para obtener las actualizaciones y correcciones de errores.
-
No puede usar la versión de prueba en varios dispositivos. Debe activar la versión de prueba con su ID de Apple en cada dispositivo.
-
No puedes usar algunas funciones que requieren un ID de Apple o una cuenta de iCloud, como Fotos de iCloud, iMovie Theater o bandas sonoras de GarageBand.
-
-
Alternative video editors
-
Another way to edit video for free is to use alternative software that offers features and functions similar to Final Cut Pro. There are many free or low-cost video editors available for Mac users, such as:
-
-
iMovie (free): A simple, easy-to-use video editor that comes preinstalled on Mac computers. It supports 4K resolution, basic editing tools, transitions, filters, titles, soundtracks, and trailers. It also integrates with iCloud, iTunes, Photos, and other Apple apps.
-
DaVinci Resolve (free, or $299 for the Studio version): A powerful, professional video editor with advanced editing, color correction, visual effects, audio post-production, and motion-graphics tools. It supports 8K resolution, multicam editing, HDR grading, the Fairlight audio engine, Fusion VFX compositing, and more.
-
Shotcut (free): An open-source, cross-platform video editor that supports a wide range of formats, resolutions, and frame rates. It offers basic and advanced editing tools, filters, transitions, keyframes, audio mixing, color grading, and more.
-
Filmora ($69.99 per year or $139.99 for a lifetime license): An easy-to-use, affordable video editor with a wide range of features. It supports up to 4K resolution, motion tracking, keyframing, green screen, split screen, transitions, filters, titles, music, sound effects, and more.
-
-
These are some of the best alternatives to Final Cut Pro that you can try for free or at low cost. They may not offer every feature of Final Cut Pro, but they can still help you create impressive videos for your personal or professional projects.
-
Conclusion
-
Final Cut Pro is a great video-editing application for Mac users who want to create professional-quality videos quickly and easily. However, it is also expensive and exclusive to the Mac. If you want to use Final Cut Pro for free or find cheaper alternatives, you can follow the steps in this article to download the official trial version or try one of the alternative video editors. You can also compare the features, advantages, and disadvantages of each option and choose the one that best fits your needs and budget. We hope you found this article useful and informative. Happy editing!
-
Frequently asked questions
-
Here are some frequently asked questions about Final Cut Pro and how to download it for free:
-
-
Is Final Cut Pro worth the money?
-
-
Can I use Final Cut Pro on Windows?
-
No. Final Cut Pro is only compatible with Mac computers and devices; there is no official Windows version. You may find unofficial or pirated builds of Final Cut Pro for Windows online, but we do not recommend using them, as they can be illegal, unsafe, or unstable.
-
How long does it take to learn Final Cut Pro?
-
The learning curve for Final Cut Pro depends on your prior experience, skills, and goals. If you already have some basic knowledge of video editing and Mac computers, you may be able to pick up the essentials of Final Cut Pro in a few hours or days. If you are a complete beginner, or you want to master its advanced features, you will need more time and practice. Online tutorials, courses, books, and forums can also help you learn Final Cut Pro faster and more easily.
-
What is the difference between Final Cut Pro and iMovie?
-
Final Cut Pro and iMovie are both video-editing applications developed by Apple, but they target different audiences and offer different features. iMovie is a simple, easy-to-use editor that comes preinstalled on Mac computers and devices. It is designed for beginners and casual users who want to create basic or fun videos with minimal effort, and it supports 4K resolution, basic editing tools, transitions, filters, titles, soundtracks, and trailers. Final Cut Pro is a powerful, professional editor that requires a separate purchase and installation. It is designed for serious users and professionals who want to create complex, high-quality videos with more control and flexibility, and it supports up to 8K resolution, advanced editing tools, color correction, visual effects, audio post-production, motion graphics, and more.
-
How can I update Final Cut Pro?
-
If you bought Final Cut Pro from the Mac App Store, you can update it by following these steps:
-
Open the App Store app on your Mac.
-
Click the Updates tab at the top of the app window.
-
Find Final Cut Pro in the list of available updates and click the Update button next to it.
-
Wait for the update to download and install.
-
Launch Final Cut Pro and enjoy the new features and bug fixes.
-
-
If you downloaded the trial version of Final Cut Pro from Apple's website, you cannot update it to the latest release. You need to buy the full version from the Mac App Store to get updates and bug fixes.
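For readers who prefer the command line, the same App Store update can be scripted. The sketch below assumes the third-party mas command-line tool for the Mac App Store is installed (for example via Homebrew: brew install mas); it is an optional convenience, not part of Apple's official update flow.

import subprocess


def outdated_app_store_apps():
    # "mas outdated" prints one line per app with a pending App Store update
    output = subprocess.check_output(["mas", "outdated"]).decode()
    return [line for line in output.splitlines() if line.strip()]


def upgrade_all_app_store_apps():
    # "mas upgrade" installs every pending Mac App Store update
    subprocess.check_call(["mas", "upgrade"])


if __name__ == "__main__":
    pending = outdated_app_store_apps()
    if any("Final Cut Pro" in line for line in pending):
        upgrade_all_app_store_apps()
    else:
        print("No pending App Store update for Final Cut Pro was found.")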
-
-
\ No newline at end of file
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/s3/inject.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/s3/inject.py
deleted file mode 100644
index c62dc3ce22b7008e5065d4a6d34c58fa2ff7b8f6..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/s3/inject.py
+++ /dev/null
@@ -1,891 +0,0 @@
-# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"). You
-# may not use this file except in compliance with the License. A copy of
-# the License is located at
-#
-# https://aws.amazon.com/apache2.0/
-#
-# or in the "license" file accompanying this file. This file is
-# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
-# ANY KIND, either express or implied. See the License for the specific
-# language governing permissions and limitations under the License.
-from botocore.exceptions import ClientError
-
-from boto3 import utils
-from boto3.s3.transfer import (
- ProgressCallbackInvoker,
- S3Transfer,
- TransferConfig,
- create_transfer_manager,
-)
-
-
-def inject_s3_transfer_methods(class_attributes, **kwargs):
- utils.inject_attribute(class_attributes, 'upload_file', upload_file)
- utils.inject_attribute(class_attributes, 'download_file', download_file)
- utils.inject_attribute(class_attributes, 'copy', copy)
- utils.inject_attribute(class_attributes, 'upload_fileobj', upload_fileobj)
- utils.inject_attribute(
- class_attributes, 'download_fileobj', download_fileobj
- )
-
-
-def inject_bucket_methods(class_attributes, **kwargs):
- utils.inject_attribute(class_attributes, 'load', bucket_load)
- utils.inject_attribute(class_attributes, 'upload_file', bucket_upload_file)
- utils.inject_attribute(
- class_attributes, 'download_file', bucket_download_file
- )
- utils.inject_attribute(class_attributes, 'copy', bucket_copy)
- utils.inject_attribute(
- class_attributes, 'upload_fileobj', bucket_upload_fileobj
- )
- utils.inject_attribute(
- class_attributes, 'download_fileobj', bucket_download_fileobj
- )
-
-
-def inject_object_methods(class_attributes, **kwargs):
- utils.inject_attribute(class_attributes, 'upload_file', object_upload_file)
- utils.inject_attribute(
- class_attributes, 'download_file', object_download_file
- )
- utils.inject_attribute(class_attributes, 'copy', object_copy)
- utils.inject_attribute(
- class_attributes, 'upload_fileobj', object_upload_fileobj
- )
- utils.inject_attribute(
- class_attributes, 'download_fileobj', object_download_fileobj
- )
-
-
-def inject_object_summary_methods(class_attributes, **kwargs):
- utils.inject_attribute(class_attributes, 'load', object_summary_load)
-
-
-def bucket_load(self, *args, **kwargs):
- """
- Calls s3.Client.list_buckets() to update the attributes of the Bucket
- resource.
- """
- # The docstring above is phrased this way to match what the autogenerated
- # docs produce.
-
- # We can't actually get the bucket's attributes from a HeadBucket,
- # so we need to use a ListBuckets and search for our bucket.
- # However, we may fail if we lack permissions to ListBuckets
- # or the bucket is in another account. In which case, creation_date
- # will be None.
- self.meta.data = {}
- try:
- response = self.meta.client.list_buckets()
- for bucket_data in response['Buckets']:
- if bucket_data['Name'] == self.name:
- self.meta.data = bucket_data
- break
- except ClientError as e:
- if not e.response.get('Error', {}).get('Code') == 'AccessDenied':
- raise
-
-
-def object_summary_load(self, *args, **kwargs):
- """
- Calls s3.Client.head_object to update the attributes of the ObjectSummary
- resource.
- """
- response = self.meta.client.head_object(
- Bucket=self.bucket_name, Key=self.key
- )
- if 'ContentLength' in response:
- response['Size'] = response.pop('ContentLength')
- self.meta.data = response
-
-
-def upload_file(
- self, Filename, Bucket, Key, ExtraArgs=None, Callback=None, Config=None
-):
- """Upload a file to an S3 object.
-
- Usage::
-
- import boto3
- s3 = boto3.client('s3')
- s3.upload_file('/tmp/hello.txt', 'mybucket', 'hello.txt')
-
- Similar behavior as S3Transfer's upload_file() method, except that
- argument names are capitalized. Detailed examples can be found at
-    :ref:`S3Transfer's Usage <ref_s3transfer_usage>`.
-
- :type Filename: str
- :param Filename: The path to the file to upload.
-
- :type Bucket: str
- :param Bucket: The name of the bucket to upload to.
-
- :type Key: str
- :param Key: The name of the key to upload to.
-
- :type ExtraArgs: dict
- :param ExtraArgs: Extra arguments that may be passed to the
- client operation. For allowed upload arguments see
- boto3.s3.transfer.S3Transfer.ALLOWED_UPLOAD_ARGS.
-
- :type Callback: function
- :param Callback: A method which takes a number of bytes transferred to
- be periodically called during the upload.
-
- :type Config: boto3.s3.transfer.TransferConfig
- :param Config: The transfer configuration to be used when performing the
- transfer.
- """
- with S3Transfer(self, Config) as transfer:
- return transfer.upload_file(
- filename=Filename,
- bucket=Bucket,
- key=Key,
- extra_args=ExtraArgs,
- callback=Callback,
- )
-
-
-def download_file(
- self, Bucket, Key, Filename, ExtraArgs=None, Callback=None, Config=None
-):
- """Download an S3 object to a file.
-
- Usage::
-
- import boto3
- s3 = boto3.resource('s3')
- s3.meta.client.download_file('mybucket', 'hello.txt', '/tmp/hello.txt')
-
- Similar behavior as S3Transfer's download_file() method,
- except that parameters are capitalized. Detailed examples can be found at
-    :ref:`S3Transfer's Usage <ref_s3transfer_usage>`.
-
- :type Bucket: str
- :param Bucket: The name of the bucket to download from.
-
- :type Key: str
- :param Key: The name of the key to download from.
-
- :type Filename: str
- :param Filename: The path to the file to download to.
-
- :type ExtraArgs: dict
- :param ExtraArgs: Extra arguments that may be passed to the
- client operation. For allowed download arguments see
- boto3.s3.transfer.S3Transfer.ALLOWED_DOWNLOAD_ARGS.
-
- :type Callback: function
- :param Callback: A method which takes a number of bytes transferred to
- be periodically called during the download.
-
- :type Config: boto3.s3.transfer.TransferConfig
- :param Config: The transfer configuration to be used when performing the
- transfer.
- """
- with S3Transfer(self, Config) as transfer:
- return transfer.download_file(
- bucket=Bucket,
- key=Key,
- filename=Filename,
- extra_args=ExtraArgs,
- callback=Callback,
- )
-
-
-def bucket_upload_file(
- self, Filename, Key, ExtraArgs=None, Callback=None, Config=None
-):
- """Upload a file to an S3 object.
-
- Usage::
-
- import boto3
- s3 = boto3.resource('s3')
- s3.Bucket('mybucket').upload_file('/tmp/hello.txt', 'hello.txt')
-
- Similar behavior as S3Transfer's upload_file() method,
- except that parameters are capitalized. Detailed examples can be found at
-    :ref:`S3Transfer's Usage <ref_s3transfer_usage>`.
-
- :type Filename: str
- :param Filename: The path to the file to upload.
-
- :type Key: str
- :param Key: The name of the key to upload to.
-
- :type ExtraArgs: dict
- :param ExtraArgs: Extra arguments that may be passed to the
- client operation. For allowed upload arguments see
- boto3.s3.transfer.S3Transfer.ALLOWED_UPLOAD_ARGS.
-
- :type Callback: function
- :param Callback: A method which takes a number of bytes transferred to
- be periodically called during the upload.
-
- :type Config: boto3.s3.transfer.TransferConfig
- :param Config: The transfer configuration to be used when performing the
- transfer.
- """
- return self.meta.client.upload_file(
- Filename=Filename,
- Bucket=self.name,
- Key=Key,
- ExtraArgs=ExtraArgs,
- Callback=Callback,
- Config=Config,
- )
-
-
-def bucket_download_file(
- self, Key, Filename, ExtraArgs=None, Callback=None, Config=None
-):
- """Download an S3 object to a file.
-
- Usage::
-
- import boto3
- s3 = boto3.resource('s3')
- s3.Bucket('mybucket').download_file('hello.txt', '/tmp/hello.txt')
-
- Similar behavior as S3Transfer's download_file() method,
- except that parameters are capitalized. Detailed examples can be found at
-    :ref:`S3Transfer's Usage <ref_s3transfer_usage>`.
-
- :type Key: str
- :param Key: The name of the key to download from.
-
- :type Filename: str
- :param Filename: The path to the file to download to.
-
- :type ExtraArgs: dict
- :param ExtraArgs: Extra arguments that may be passed to the
- client operation. For allowed download arguments see
- boto3.s3.transfer.S3Transfer.ALLOWED_DOWNLOAD_ARGS.
-
- :type Callback: function
- :param Callback: A method which takes a number of bytes transferred to
- be periodically called during the download.
-
- :type Config: boto3.s3.transfer.TransferConfig
- :param Config: The transfer configuration to be used when performing the
- transfer.
- """
- return self.meta.client.download_file(
- Bucket=self.name,
- Key=Key,
- Filename=Filename,
- ExtraArgs=ExtraArgs,
- Callback=Callback,
- Config=Config,
- )
-
-
-def object_upload_file(
- self, Filename, ExtraArgs=None, Callback=None, Config=None
-):
- """Upload a file to an S3 object.
-
- Usage::
-
- import boto3
- s3 = boto3.resource('s3')
- s3.Object('mybucket', 'hello.txt').upload_file('/tmp/hello.txt')
-
- Similar behavior as S3Transfer's upload_file() method,
- except that parameters are capitalized. Detailed examples can be found at
-    :ref:`S3Transfer's Usage <ref_s3transfer_usage>`.
-
- :type Filename: str
- :param Filename: The path to the file to upload.
-
- :type ExtraArgs: dict
- :param ExtraArgs: Extra arguments that may be passed to the
- client operation. For allowed upload arguments see
- boto3.s3.transfer.S3Transfer.ALLOWED_UPLOAD_ARGS.
-
- :type Callback: function
- :param Callback: A method which takes a number of bytes transferred to
- be periodically called during the upload.
-
- :type Config: boto3.s3.transfer.TransferConfig
- :param Config: The transfer configuration to be used when performing the
- transfer.
- """
- return self.meta.client.upload_file(
- Filename=Filename,
- Bucket=self.bucket_name,
- Key=self.key,
- ExtraArgs=ExtraArgs,
- Callback=Callback,
- Config=Config,
- )
-
-
-def object_download_file(
- self, Filename, ExtraArgs=None, Callback=None, Config=None
-):
- """Download an S3 object to a file.
-
- Usage::
-
- import boto3
- s3 = boto3.resource('s3')
- s3.Object('mybucket', 'hello.txt').download_file('/tmp/hello.txt')
-
- Similar behavior as S3Transfer's download_file() method,
- except that parameters are capitalized. Detailed examples can be found at
-    :ref:`S3Transfer's Usage <ref_s3transfer_usage>`.
-
- :type Filename: str
- :param Filename: The path to the file to download to.
-
- :type ExtraArgs: dict
- :param ExtraArgs: Extra arguments that may be passed to the
- client operation. For allowed download arguments see
- boto3.s3.transfer.S3Transfer.ALLOWED_DOWNLOAD_ARGS.
-
- :type Callback: function
- :param Callback: A method which takes a number of bytes transferred to
- be periodically called during the download.
-
- :type Config: boto3.s3.transfer.TransferConfig
- :param Config: The transfer configuration to be used when performing the
- transfer.
- """
- return self.meta.client.download_file(
- Bucket=self.bucket_name,
- Key=self.key,
- Filename=Filename,
- ExtraArgs=ExtraArgs,
- Callback=Callback,
- Config=Config,
- )
-
-
-def copy(
- self,
- CopySource,
- Bucket,
- Key,
- ExtraArgs=None,
- Callback=None,
- SourceClient=None,
- Config=None,
-):
- """Copy an object from one S3 location to another.
-
- This is a managed transfer which will perform a multipart copy in
- multiple threads if necessary.
-
- Usage::
-
- import boto3
- s3 = boto3.resource('s3')
- copy_source = {
- 'Bucket': 'mybucket',
- 'Key': 'mykey'
- }
- s3.meta.client.copy(copy_source, 'otherbucket', 'otherkey')
-
- :type CopySource: dict
- :param CopySource: The name of the source bucket, key name of the
- source object, and optional version ID of the source object. The
- dictionary format is:
- ``{'Bucket': 'bucket', 'Key': 'key', 'VersionId': 'id'}``. Note
- that the ``VersionId`` key is optional and may be omitted.
-
- :type Bucket: str
- :param Bucket: The name of the bucket to copy to
-
- :type Key: str
- :param Key: The name of the key to copy to
-
- :type ExtraArgs: dict
- :param ExtraArgs: Extra arguments that may be passed to the
- client operation. For allowed download arguments see
- boto3.s3.transfer.S3Transfer.ALLOWED_DOWNLOAD_ARGS.
-
- :type Callback: function
- :param Callback: A method which takes a number of bytes transferred to
- be periodically called during the copy.
-
- :type SourceClient: botocore or boto3 Client
- :param SourceClient: The client to be used for operation that
- may happen at the source object. For example, this client is
- used for the head_object that determines the size of the copy.
- If no client is provided, the current client is used as the client
- for the source object.
-
- :type Config: boto3.s3.transfer.TransferConfig
- :param Config: The transfer configuration to be used when performing the
- copy.
- """
- subscribers = None
- if Callback is not None:
- subscribers = [ProgressCallbackInvoker(Callback)]
-
- config = Config
- if config is None:
- config = TransferConfig()
-
- with create_transfer_manager(self, config) as manager:
- future = manager.copy(
- copy_source=CopySource,
- bucket=Bucket,
- key=Key,
- extra_args=ExtraArgs,
- subscribers=subscribers,
- source_client=SourceClient,
- )
- return future.result()
-
-
-def bucket_copy(
- self,
- CopySource,
- Key,
- ExtraArgs=None,
- Callback=None,
- SourceClient=None,
- Config=None,
-):
- """Copy an object from one S3 location to an object in this bucket.
-
- This is a managed transfer which will perform a multipart copy in
- multiple threads if necessary.
-
- Usage::
-
- import boto3
- s3 = boto3.resource('s3')
- copy_source = {
- 'Bucket': 'mybucket',
- 'Key': 'mykey'
- }
- bucket = s3.Bucket('otherbucket')
- bucket.copy(copy_source, 'otherkey')
-
- :type CopySource: dict
- :param CopySource: The name of the source bucket, key name of the
- source object, and optional version ID of the source object. The
- dictionary format is:
- ``{'Bucket': 'bucket', 'Key': 'key', 'VersionId': 'id'}``. Note
- that the ``VersionId`` key is optional and may be omitted.
-
- :type Key: str
- :param Key: The name of the key to copy to
-
- :type ExtraArgs: dict
- :param ExtraArgs: Extra arguments that may be passed to the
- client operation. For allowed download arguments see
- boto3.s3.transfer.S3Transfer.ALLOWED_DOWNLOAD_ARGS.
-
- :type Callback: function
- :param Callback: A method which takes a number of bytes transferred to
- be periodically called during the copy.
-
- :type SourceClient: botocore or boto3 Client
- :param SourceClient: The client to be used for operation that
- may happen at the source object. For example, this client is
- used for the head_object that determines the size of the copy.
- If no client is provided, the current client is used as the client
- for the source object.
-
- :type Config: boto3.s3.transfer.TransferConfig
- :param Config: The transfer configuration to be used when performing the
- copy.
- """
- return self.meta.client.copy(
- CopySource=CopySource,
- Bucket=self.name,
- Key=Key,
- ExtraArgs=ExtraArgs,
- Callback=Callback,
- SourceClient=SourceClient,
- Config=Config,
- )
-
-
-def object_copy(
- self,
- CopySource,
- ExtraArgs=None,
- Callback=None,
- SourceClient=None,
- Config=None,
-):
- """Copy an object from one S3 location to this object.
-
- This is a managed transfer which will perform a multipart copy in
- multiple threads if necessary.
-
- Usage::
-
- import boto3
- s3 = boto3.resource('s3')
- copy_source = {
- 'Bucket': 'mybucket',
- 'Key': 'mykey'
- }
- bucket = s3.Bucket('otherbucket')
- obj = bucket.Object('otherkey')
- obj.copy(copy_source)
-
- :type CopySource: dict
- :param CopySource: The name of the source bucket, key name of the
- source object, and optional version ID of the source object. The
- dictionary format is:
- ``{'Bucket': 'bucket', 'Key': 'key', 'VersionId': 'id'}``. Note
- that the ``VersionId`` key is optional and may be omitted.
-
- :type ExtraArgs: dict
- :param ExtraArgs: Extra arguments that may be passed to the
- client operation. For allowed download arguments see
- boto3.s3.transfer.S3Transfer.ALLOWED_DOWNLOAD_ARGS.
-
- :type Callback: function
- :param Callback: A method which takes a number of bytes transferred to
- be periodically called during the copy.
-
- :type SourceClient: botocore or boto3 Client
- :param SourceClient: The client to be used for operation that
- may happen at the source object. For example, this client is
- used for the head_object that determines the size of the copy.
- If no client is provided, the current client is used as the client
- for the source object.
-
- :type Config: boto3.s3.transfer.TransferConfig
- :param Config: The transfer configuration to be used when performing the
- copy.
- """
- return self.meta.client.copy(
- CopySource=CopySource,
- Bucket=self.bucket_name,
- Key=self.key,
- ExtraArgs=ExtraArgs,
- Callback=Callback,
- SourceClient=SourceClient,
- Config=Config,
- )
-
-
-def upload_fileobj(
- self, Fileobj, Bucket, Key, ExtraArgs=None, Callback=None, Config=None
-):
- """Upload a file-like object to S3.
-
- The file-like object must be in binary mode.
-
- This is a managed transfer which will perform a multipart upload in
- multiple threads if necessary.
-
- Usage::
-
- import boto3
- s3 = boto3.client('s3')
-
- with open('filename', 'rb') as data:
- s3.upload_fileobj(data, 'mybucket', 'mykey')
-
- :type Fileobj: a file-like object
- :param Fileobj: A file-like object to upload. At a minimum, it must
- implement the `read` method, and must return bytes.
-
- :type Bucket: str
- :param Bucket: The name of the bucket to upload to.
-
- :type Key: str
- :param Key: The name of the key to upload to.
-
- :type ExtraArgs: dict
- :param ExtraArgs: Extra arguments that may be passed to the
- client operation. For allowed upload arguments see
- boto3.s3.transfer.S3Transfer.ALLOWED_UPLOAD_ARGS.
-
- :type Callback: function
- :param Callback: A method which takes a number of bytes transferred to
- be periodically called during the upload.
-
- :type Config: boto3.s3.transfer.TransferConfig
- :param Config: The transfer configuration to be used when performing the
- upload.
- """
- if not hasattr(Fileobj, 'read'):
- raise ValueError('Fileobj must implement read')
-
- subscribers = None
- if Callback is not None:
- subscribers = [ProgressCallbackInvoker(Callback)]
-
- config = Config
- if config is None:
- config = TransferConfig()
-
- with create_transfer_manager(self, config) as manager:
- future = manager.upload(
- fileobj=Fileobj,
- bucket=Bucket,
- key=Key,
- extra_args=ExtraArgs,
- subscribers=subscribers,
- )
- return future.result()
-
-
-def bucket_upload_fileobj(
- self, Fileobj, Key, ExtraArgs=None, Callback=None, Config=None
-):
- """Upload a file-like object to this bucket.
-
- The file-like object must be in binary mode.
-
- This is a managed transfer which will perform a multipart upload in
- multiple threads if necessary.
-
- Usage::
-
- import boto3
- s3 = boto3.resource('s3')
- bucket = s3.Bucket('mybucket')
-
- with open('filename', 'rb') as data:
- bucket.upload_fileobj(data, 'mykey')
-
- :type Fileobj: a file-like object
- :param Fileobj: A file-like object to upload. At a minimum, it must
- implement the `read` method, and must return bytes.
-
- :type Key: str
- :param Key: The name of the key to upload to.
-
- :type ExtraArgs: dict
- :param ExtraArgs: Extra arguments that may be passed to the
- client operation. For allowed upload arguments see
- boto3.s3.transfer.S3Transfer.ALLOWED_UPLOAD_ARGS.
-
- :type Callback: function
- :param Callback: A method which takes a number of bytes transferred to
- be periodically called during the upload.
-
- :type Config: boto3.s3.transfer.TransferConfig
- :param Config: The transfer configuration to be used when performing the
- upload.
- """
- return self.meta.client.upload_fileobj(
- Fileobj=Fileobj,
- Bucket=self.name,
- Key=Key,
- ExtraArgs=ExtraArgs,
- Callback=Callback,
- Config=Config,
- )
-
-
-def object_upload_fileobj(
- self, Fileobj, ExtraArgs=None, Callback=None, Config=None
-):
- """Upload a file-like object to this object.
-
- The file-like object must be in binary mode.
-
- This is a managed transfer which will perform a multipart upload in
- multiple threads if necessary.
-
- Usage::
-
- import boto3
- s3 = boto3.resource('s3')
- bucket = s3.Bucket('mybucket')
- obj = bucket.Object('mykey')
-
- with open('filename', 'rb') as data:
- obj.upload_fileobj(data)
-
- :type Fileobj: a file-like object
- :param Fileobj: A file-like object to upload. At a minimum, it must
- implement the `read` method, and must return bytes.
-
- :type ExtraArgs: dict
- :param ExtraArgs: Extra arguments that may be passed to the
- client operation. For allowed upload arguments see
- boto3.s3.transfer.S3Transfer.ALLOWED_UPLOAD_ARGS.
-
- :type Callback: function
- :param Callback: A method which takes a number of bytes transferred to
- be periodically called during the upload.
-
- :type Config: boto3.s3.transfer.TransferConfig
- :param Config: The transfer configuration to be used when performing the
- upload.
- """
- return self.meta.client.upload_fileobj(
- Fileobj=Fileobj,
- Bucket=self.bucket_name,
- Key=self.key,
- ExtraArgs=ExtraArgs,
- Callback=Callback,
- Config=Config,
- )
-
-
-def download_fileobj(
- self, Bucket, Key, Fileobj, ExtraArgs=None, Callback=None, Config=None
-):
- """Download an object from S3 to a file-like object.
-
- The file-like object must be in binary mode.
-
- This is a managed transfer which will perform a multipart download in
- multiple threads if necessary.
-
- Usage::
-
- import boto3
- s3 = boto3.client('s3')
-
- with open('filename', 'wb') as data:
- s3.download_fileobj('mybucket', 'mykey', data)
-
- :type Bucket: str
- :param Bucket: The name of the bucket to download from.
-
- :type Key: str
- :param Key: The name of the key to download from.
-
- :type Fileobj: a file-like object
- :param Fileobj: A file-like object to download into. At a minimum, it must
- implement the `write` method and must accept bytes.
-
- :type ExtraArgs: dict
- :param ExtraArgs: Extra arguments that may be passed to the
- client operation. For allowed download arguments see
- boto3.s3.transfer.S3Transfer.ALLOWED_DOWNLOAD_ARGS.
-
- :type Callback: function
- :param Callback: A method which takes a number of bytes transferred to
- be periodically called during the download.
-
- :type Config: boto3.s3.transfer.TransferConfig
- :param Config: The transfer configuration to be used when performing the
- download.
- """
- if not hasattr(Fileobj, 'write'):
- raise ValueError('Fileobj must implement write')
-
- subscribers = None
- if Callback is not None:
- subscribers = [ProgressCallbackInvoker(Callback)]
-
- config = Config
- if config is None:
- config = TransferConfig()
-
- with create_transfer_manager(self, config) as manager:
- future = manager.download(
- bucket=Bucket,
- key=Key,
- fileobj=Fileobj,
- extra_args=ExtraArgs,
- subscribers=subscribers,
- )
- return future.result()
-
-
-def bucket_download_fileobj(
- self, Key, Fileobj, ExtraArgs=None, Callback=None, Config=None
-):
- """Download an object from this bucket to a file-like-object.
-
- The file-like object must be in binary mode.
-
- This is a managed transfer which will perform a multipart download in
- multiple threads if necessary.
-
- Usage::
-
- import boto3
- s3 = boto3.resource('s3')
- bucket = s3.Bucket('mybucket')
-
- with open('filename', 'wb') as data:
- bucket.download_fileobj('mykey', data)
-
- :type Fileobj: a file-like object
- :param Fileobj: A file-like object to download into. At a minimum, it must
- implement the `write` method and must accept bytes.
-
- :type Key: str
- :param Key: The name of the key to download from.
-
- :type ExtraArgs: dict
- :param ExtraArgs: Extra arguments that may be passed to the
- client operation. For allowed download arguments see
- boto3.s3.transfer.S3Transfer.ALLOWED_DOWNLOAD_ARGS.
-
- :type Callback: function
- :param Callback: A method which takes a number of bytes transferred to
- be periodically called during the download.
-
- :type Config: boto3.s3.transfer.TransferConfig
- :param Config: The transfer configuration to be used when performing the
- download.
- """
- return self.meta.client.download_fileobj(
- Bucket=self.name,
- Key=Key,
- Fileobj=Fileobj,
- ExtraArgs=ExtraArgs,
- Callback=Callback,
- Config=Config,
- )
-
-
-def object_download_fileobj(
- self, Fileobj, ExtraArgs=None, Callback=None, Config=None
-):
- """Download this object from S3 to a file-like object.
-
- The file-like object must be in binary mode.
-
- This is a managed transfer which will perform a multipart download in
- multiple threads if necessary.
-
- Usage::
-
- import boto3
- s3 = boto3.resource('s3')
- bucket = s3.Bucket('mybucket')
- obj = bucket.Object('mykey')
-
- with open('filename', 'wb') as data:
- obj.download_fileobj(data)
-
- :type Fileobj: a file-like object
- :param Fileobj: A file-like object to download into. At a minimum, it must
- implement the `write` method and must accept bytes.
-
- :type ExtraArgs: dict
- :param ExtraArgs: Extra arguments that may be passed to the
- client operation. For allowed download arguments see
- boto3.s3.transfer.S3Transfer.ALLOWED_DOWNLOAD_ARGS.
-
- :type Callback: function
- :param Callback: A method which takes a number of bytes transferred to
- be periodically called during the download.
-
- :type Config: boto3.s3.transfer.TransferConfig
- :param Config: The transfer configuration to be used when performing the
- download.
- """
- return self.meta.client.download_fileobj(
- Bucket=self.bucket_name,
- Key=self.key,
- Fileobj=Fileobj,
- ExtraArgs=ExtraArgs,
- Callback=Callback,
- Config=Config,
- )
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/jaraco/context.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/jaraco/context.py
deleted file mode 100644
index 87a4e3dca299c4201ac50f6ef589dc73f1c45576..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/jaraco/context.py
+++ /dev/null
@@ -1,213 +0,0 @@
-import os
-import subprocess
-import contextlib
-import functools
-import tempfile
-import shutil
-import operator
-
-
-@contextlib.contextmanager
-def pushd(dir):
- orig = os.getcwd()
- os.chdir(dir)
- try:
- yield dir
- finally:
- os.chdir(orig)
-
-
-@contextlib.contextmanager
-def tarball_context(url, target_dir=None, runner=None, pushd=pushd):
- """
- Get a tarball, extract it, change to that directory, yield, then
- clean up.
- `runner` is the function to invoke commands.
- `pushd` is a context manager for changing the directory.
- """
- if target_dir is None:
- target_dir = os.path.basename(url).replace('.tar.gz', '').replace('.tgz', '')
- if runner is None:
- runner = functools.partial(subprocess.check_call, shell=True)
- # In the tar command, use --strip-components=1 to strip the first path and
- # then
- # use -C to cause the files to be extracted to {target_dir}. This ensures
- # that we always know where the files were extracted.
- runner('mkdir {target_dir}'.format(**vars()))
- try:
- getter = 'wget {url} -O -'
- extract = 'tar x{compression} --strip-components=1 -C {target_dir}'
- cmd = ' | '.join((getter, extract))
- runner(cmd.format(compression=infer_compression(url), **vars()))
- with pushd(target_dir):
- yield target_dir
- finally:
- runner('rm -Rf {target_dir}'.format(**vars()))
-
-
-def infer_compression(url):
- """
- Given a URL or filename, infer the compression code for tar.
- """
- # cheat and just assume it's the last two characters
- compression_indicator = url[-2:]
- mapping = dict(gz='z', bz='j', xz='J')
- # Assume 'z' (gzip) if no match
- return mapping.get(compression_indicator, 'z')
-
-
-@contextlib.contextmanager
-def temp_dir(remover=shutil.rmtree):
- """
- Create a temporary directory context. Pass a custom remover
- to override the removal behavior.
- """
- temp_dir = tempfile.mkdtemp()
- try:
- yield temp_dir
- finally:
- remover(temp_dir)
-
-
-@contextlib.contextmanager
-def repo_context(url, branch=None, quiet=True, dest_ctx=temp_dir):
- """
- Check out the repo indicated by url.
-
- If dest_ctx is supplied, it should be a context manager
- to yield the target directory for the check out.
- """
- exe = 'git' if 'git' in url else 'hg'
- with dest_ctx() as repo_dir:
- cmd = [exe, 'clone', url, repo_dir]
- if branch:
- cmd.extend(['--branch', branch])
- devnull = open(os.path.devnull, 'w')
- stdout = devnull if quiet else None
- subprocess.check_call(cmd, stdout=stdout)
- yield repo_dir
-
-
-@contextlib.contextmanager
-def null():
- yield
-
-
-class ExceptionTrap:
- """
- A context manager that will catch certain exceptions and provide an
- indication they occurred.
-
- >>> with ExceptionTrap() as trap:
- ... raise Exception()
- >>> bool(trap)
- True
-
- >>> with ExceptionTrap() as trap:
- ... pass
- >>> bool(trap)
- False
-
- >>> with ExceptionTrap(ValueError) as trap:
- ... raise ValueError("1 + 1 is not 3")
- >>> bool(trap)
- True
-
- >>> with ExceptionTrap(ValueError) as trap:
- ... raise Exception()
- Traceback (most recent call last):
- ...
- Exception
-
- >>> bool(trap)
- False
- """
-
- exc_info = None, None, None
-
- def __init__(self, exceptions=(Exception,)):
- self.exceptions = exceptions
-
- def __enter__(self):
- return self
-
- @property
- def type(self):
- return self.exc_info[0]
-
- @property
- def value(self):
- return self.exc_info[1]
-
- @property
- def tb(self):
- return self.exc_info[2]
-
- def __exit__(self, *exc_info):
- type = exc_info[0]
- matches = type and issubclass(type, self.exceptions)
- if matches:
- self.exc_info = exc_info
- return matches
-
- def __bool__(self):
- return bool(self.type)
-
- def raises(self, func, *, _test=bool):
- """
- Wrap func and replace the result with the truth
- value of the trap (True if an exception occurred).
-
- First, give the decorator an alias to support Python 3.8
- Syntax.
-
- >>> raises = ExceptionTrap(ValueError).raises
-
- Now decorate a function that always fails.
-
- >>> @raises
- ... def fail():
- ... raise ValueError('failed')
- >>> fail()
- True
- """
-
- @functools.wraps(func)
- def wrapper(*args, **kwargs):
- with ExceptionTrap(self.exceptions) as trap:
- func(*args, **kwargs)
- return _test(trap)
-
- return wrapper
-
- def passes(self, func):
- """
- Wrap func and replace the result with the truth
- value of the trap (True if no exception).
-
- First, give the decorator an alias to support Python 3.8
- Syntax.
-
- >>> passes = ExceptionTrap(ValueError).passes
-
- Now decorate a function that always fails.
-
- >>> @passes
- ... def fail():
- ... raise ValueError('failed')
-
- >>> fail()
- False
- """
- return self.raises(func, _test=operator.not_)
-
-
-class suppress(contextlib.suppress, contextlib.ContextDecorator):
- """
- A version of contextlib.suppress with decorator support.
-
- >>> @suppress(KeyError)
- ... def key_error():
- ... {}['']
- >>> key_error()
- """
diff --git a/spaces/Bishnupada/Fine-tuning-using-Hugging-face-transformers/app.py b/spaces/Bishnupada/Fine-tuning-using-Hugging-face-transformers/app.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/proposal_generator/build.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/proposal_generator/build.py
deleted file mode 100644
index 7f252bcb982032cd09270c44741772a34ef32277..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/proposal_generator/build.py
+++ /dev/null
@@ -1,24 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-from detectron2.utils.registry import Registry
-
-PROPOSAL_GENERATOR_REGISTRY = Registry("PROPOSAL_GENERATOR")
-PROPOSAL_GENERATOR_REGISTRY.__doc__ = """
-Registry for proposal generator, which produces object proposals from feature maps.
-
-The registered object will be called with `obj(cfg, input_shape)`.
-The call should return a `nn.Module` object.
-"""
-
-from . import rpn, rrpn # noqa F401 isort:skip
-
-
-def build_proposal_generator(cfg, input_shape):
- """
- Build a proposal generator from `cfg.MODEL.PROPOSAL_GENERATOR.NAME`.
- The name can be "PrecomputedProposals" to use no proposal generator.
- """
- name = cfg.MODEL.PROPOSAL_GENERATOR.NAME
- if name == "PrecomputedProposals":
- return None
-
- return PROPOSAL_GENERATOR_REGISTRY.get(name)(cfg, input_shape)
diff --git a/spaces/CVPR/LIVE/pybind11/docs/_static/theme_overrides.css b/spaces/CVPR/LIVE/pybind11/docs/_static/theme_overrides.css
deleted file mode 100644
index 1071809fa0fecf7c28d3356f37363266e9128b81..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/pybind11/docs/_static/theme_overrides.css
+++ /dev/null
@@ -1,11 +0,0 @@
-.wy-table-responsive table td,
-.wy-table-responsive table th {
- white-space: initial !important;
-}
-.rst-content table.docutils td {
- vertical-align: top !important;
-}
-div[class^='highlight'] pre {
- white-space: pre;
- white-space: pre-wrap;
-}
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/find.h b/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/find.h
deleted file mode 100644
index f6a1e59d105f0db35d65ee93a058afd143002b35..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/find.h
+++ /dev/null
@@ -1,219 +0,0 @@
-/******************************************************************************
- * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are met:
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in the
- * documentation and/or other materials provided with the distribution.
- * * Neither the name of the NVIDIA CORPORATION nor the
- * names of its contributors may be used to endorse or promote products
- * derived from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
- * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
- * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
- * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
- * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
- * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
- * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- *
- ******************************************************************************/
-#pragma once
-
-
-#if THRUST_DEVICE_COMPILER == THRUST_DEVICE_COMPILER_NVCC
-#include
-
-#include
-#include
-#include
-
-namespace thrust
-{
-namespace cuda_cub {
-
-// XXX forward declare to circumvent circular depedency
-template <class Derived, class InputIt, class Predicate>
-InputIt __host__ __device__
-find_if(execution_policy<Derived>& policy,
-        InputIt first,
-        InputIt last,
-        Predicate predicate);
-
-template <class Derived, class InputIt, class Predicate>
-InputIt __host__ __device__
-find_if_not(execution_policy<Derived>& policy,
-            InputIt first,
-            InputIt last,
-            Predicate predicate);
-
-template <class Derived, class InputIt, class T>
-InputIt __host__ __device__
-find(execution_policy<Derived> &policy,
-     InputIt first,
-     InputIt last,
-     T const& value);
-
-}; // namespace cuda_cub
-} // end namespace thrust
-
-#include
-#include
-
-namespace thrust
-{
-namespace cuda_cub {
-
-namespace __find_if {
-
-  template <typename TupleType>
-  struct functor
- {
- THRUST_DEVICE_FUNCTION TupleType
- operator()(const TupleType& lhs, const TupleType& rhs) const
- {
- // select the smallest index among true results
- if (thrust::get<0>(lhs) && thrust::get<0>(rhs))
- {
- return TupleType(true, (thrust::min)(thrust::get<1>(lhs), thrust::get<1>(rhs)));
- }
- else if (thrust::get<0>(lhs))
- {
- return lhs;
- }
- else
- {
- return rhs;
- }
- }
- };
-} // namespace __find_if
-
-template <class Derived, class InputIt, class Size, class Predicate>
-InputIt __host__ __device__
-find_if_n(execution_policy<Derived>& policy,
-          InputIt first,
-          Size num_items,
-          Predicate predicate)
-{
-  typedef typename thrust::tuple<bool, Size> result_type;
-
- // empty sequence
- if(num_items == 0) return first;
-
- // this implementation breaks up the sequence into separate intervals
- // in an attempt to early-out as soon as a value is found
- //
- // XXX compose find_if from a look-back prefix scan algorithm
- // and abort kernel when the first element is found
-
-
- // TODO incorporate sizeof(InputType) into interval_threshold and round to multiple of 32
- const Size interval_threshold = 1 << 20;
- const Size interval_size = (thrust::min)(interval_threshold, num_items);
-
- // force transform_iterator output to bool
-  typedef transform_input_iterator_t<bool, InputIt, Predicate>
-    XfrmIterator;
-  typedef thrust::tuple<XfrmIterator, counting_iterator_t<Size> >
-    IteratorTuple;
-  typedef thrust::zip_iterator<IteratorTuple> ZipIterator;
-
-  IteratorTuple iter_tuple =
-    thrust::make_tuple(XfrmIterator(first, predicate),
-                       counting_iterator_t<Size>(0));
-
- ZipIterator begin = thrust::make_zip_iterator(iter_tuple);
- ZipIterator end = begin + num_items;
-
- for (ZipIterator interval_begin = begin;
- interval_begin < end;
- interval_begin += interval_size)
- {
- ZipIterator interval_end = interval_begin + interval_size;
- if(end < interval_end)
- {
- interval_end = end;
- } // end if
-
-    result_type result = reduce(policy,
-                                interval_begin,
-                                interval_end,
-                                result_type(false, interval_end - begin),
-                                __find_if::functor<result_type>());
-
- // see if we found something
- if(thrust::get<0>(result))
- {
- return first + thrust::get<1>(result);
- }
- }
-
- //nothing was found if we reach here...
- return first + num_items;
-}
-
-template <class Derived, class InputIt, class Predicate>
-InputIt __host__ __device__
-find_if(execution_policy<Derived>& policy,
- InputIt first,
- InputIt last,
- Predicate predicate)
-{
- return cuda_cub::find_if_n(policy, first, thrust::distance(first,last), predicate);
-}
-
-template <class Derived, class InputIt, class Predicate>
-InputIt __host__ __device__
-find_if_not(execution_policy<Derived>& policy,
- InputIt first,
- InputIt last,
- Predicate predicate)
-{
- return cuda_cub::find_if(policy, first, last, thrust::detail::not1(predicate));
-}
-
-
-template <class Derived, class InputIt, class T>
-InputIt __host__ __device__
-find(execution_policy<Derived> &policy,
- InputIt first,
- InputIt last,
- T const& value)
-{
- using thrust::placeholders::_1;
-
- return cuda_cub::find_if(policy,
- first,
- last,
- _1 == value);
-}
-
-
-} // namespace cuda_cub
-} // end namespace thrust
-#endif
diff --git a/spaces/CVPR/WALT/configs/_base_/datasets/parking_instance_coco.py b/spaces/CVPR/WALT/configs/_base_/datasets/parking_instance_coco.py
deleted file mode 100644
index 85cee08021cabaca8cd6408c4e3efb2d7efae231..0000000000000000000000000000000000000000
--- a/spaces/CVPR/WALT/configs/_base_/datasets/parking_instance_coco.py
+++ /dev/null
@@ -1,49 +0,0 @@
-dataset_type = 'ParkingCocoDataset'
-data_root = 'data/parking/'
-data_root_test = 'data/parking_highres/'
-img_norm_cfg = dict(
- mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
- dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
- dict(type='RandomFlip', flip_ratio=0.5),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']),
-]
-test_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='MultiScaleFlipAug',
- img_scale=(1333, 800),
- flip=False,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='ImageToTensor', keys=['img']),
- dict(type='Collect', keys=['img']),
- ])
-]
-data = dict(
- samples_per_gpu=6,
- workers_per_gpu=6,
- train=dict(
- type=dataset_type,
- ann_file=data_root + 'GT_data/',
- img_prefix=data_root + 'images/',
- pipeline=train_pipeline),
- val=dict(
- type=dataset_type,
- ann_file=data_root_test + 'GT_data/',
- img_prefix=data_root_test + 'images',
- pipeline=test_pipeline),
- test=dict(
- type=dataset_type,
- ann_file=data_root_test + 'GT_data/',
- img_prefix=data_root_test + 'images',
- pipeline=test_pipeline))
-evaluation = dict(metric=['bbox', 'segm'])
diff --git a/spaces/CVPR/regionclip-demo/detectron2/layers/shape_spec.py b/spaces/CVPR/regionclip-demo/detectron2/layers/shape_spec.py
deleted file mode 100644
index fe7e8e261c1ab1bb1636bd7a245068d64e632306..0000000000000000000000000000000000000000
--- a/spaces/CVPR/regionclip-demo/detectron2/layers/shape_spec.py
+++ /dev/null
@@ -1,20 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright (c) Facebook, Inc. and its affiliates.
-from collections import namedtuple
-
-
-class ShapeSpec(namedtuple("_ShapeSpec", ["channels", "height", "width", "stride"])):
- """
- A simple structure that contains basic shape specification about a tensor.
- It is often used as the auxiliary inputs/outputs of models,
- to complement the lack of shape inference ability among pytorch modules.
-
- Attributes:
- channels:
- height:
- width:
- stride:
- """
-
- def __new__(cls, channels=None, height=None, width=None, stride=None):
- return super().__new__(cls, channels, height, width, stride)
diff --git a/spaces/Cambino/dog-classifier-gradio/README.md b/spaces/Cambino/dog-classifier-gradio/README.md
deleted file mode 100644
index 4b4b78deec2de1df4e0ac6519474d3b262c16983..0000000000000000000000000000000000000000
--- a/spaces/Cambino/dog-classifier-gradio/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Dog Classifier
-emoji: 🐠
-colorFrom: pink
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.1.3
-app_file: app.py
-pinned: false
-license: afl-3.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Caoyunkang/Segment-Any-Anomaly/SAA/prompts/ksdd2_parameters.py b/spaces/Caoyunkang/Segment-Any-Anomaly/SAA/prompts/ksdd2_parameters.py
deleted file mode 100644
index 7078234c94da5d36c55cf56c51e9a92db53be087..0000000000000000000000000000000000000000
--- a/spaces/Caoyunkang/Segment-Any-Anomaly/SAA/prompts/ksdd2_parameters.py
+++ /dev/null
@@ -1,11 +0,0 @@
-manual_prompts = {
- 'ksdd2': [
- ['black hole.', 'ksdd2'],
- ['defect.', 'ksdd2'],
- ],
-
-}
-
-property_prompts = {
- 'ksdd2': 'the image of ksdd2 have 1 dissimilar ksdd2, with a maximum of 5 anomaly. The anomaly would not exceed 0.9 object area. ',
-}
diff --git a/spaces/Cat125/text-generator-v3/files.py b/spaces/Cat125/text-generator-v3/files.py
deleted file mode 100644
index 278c7cdcf754c24255413fbe97b68eb62074100f..0000000000000000000000000000000000000000
--- a/spaces/Cat125/text-generator-v3/files.py
+++ /dev/null
@@ -1,7 +0,0 @@
-def read_file(filename):
- with open(filename, encoding="utf8") as f:
- return f.read()
-
-def read_lines(filename):
- with open(filename, encoding="utf8") as f:
- return f.readlines()
\ No newline at end of file
diff --git a/spaces/Covert1107/sd-diffusers-webui/modules/prompt_parser.py b/spaces/Covert1107/sd-diffusers-webui/modules/prompt_parser.py
deleted file mode 100644
index 42cbbb3038612a44571765905e8526553f462663..0000000000000000000000000000000000000000
--- a/spaces/Covert1107/sd-diffusers-webui/modules/prompt_parser.py
+++ /dev/null
@@ -1,391 +0,0 @@
-
-import re
-import math
-import numpy as np
-import torch
-
-# Code from https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/8e2aeee4a127b295bfc880800e4a312e0f049b85, modified.
-
-class PromptChunk:
- """
- This object contains token ids, weight (multipliers:1.4) and textual inversion embedding info for a chunk of prompt.
- If a prompt is short, it is represented by one PromptChunk, otherwise, multiple are necessary.
- Each PromptChunk contains an exact amount of tokens - 77, which includes one for start and end token,
- so just 75 tokens from prompt.
- """
-
- def __init__(self):
- self.tokens = []
- self.multipliers = []
- self.fixes = []
-
-
-class FrozenCLIPEmbedderWithCustomWordsBase(torch.nn.Module):
- """A pytorch module that is a wrapper for FrozenCLIPEmbedder module. it enhances FrozenCLIPEmbedder, making it possible to
- have unlimited prompt length and assign weights to tokens in prompt.
- """
-
- def __init__(self, text_encoder, enable_emphasis=True):
- super().__init__()
-
- self.device = lambda: text_encoder.device
- self.enable_emphasis = enable_emphasis
- """Original FrozenCLIPEmbedder module; can also be FrozenOpenCLIPEmbedder or xlmr.BertSeriesModelWithTransformation,
- depending on model."""
-
- self.chunk_length = 75
-
- def empty_chunk(self):
- """creates an empty PromptChunk and returns it"""
-
- chunk = PromptChunk()
- chunk.tokens = [self.id_start] + [self.id_end] * (self.chunk_length + 1)
- chunk.multipliers = [1.0] * (self.chunk_length + 2)
- return chunk
-
- def get_target_prompt_token_count(self, token_count):
- """returns the maximum number of tokens a prompt of a known length can have before it requires one more PromptChunk to be represented"""
-
- return math.ceil(max(token_count, 1) / self.chunk_length) * self.chunk_length
-
- def tokenize_line(self, line):
- """
- this transforms a single prompt into a list of PromptChunk objects - as many as needed to
- represent the prompt.
- Returns the list and the total number of tokens in the prompt.
- """
-
- if self.enable_emphasis:
- parsed = parse_prompt_attention(line)
- else:
- parsed = [[line, 1.0]]
-
- tokenized = self.tokenize([text for text, _ in parsed])
-
- chunks = []
- chunk = PromptChunk()
- token_count = 0
- last_comma = -1
-
- def next_chunk(is_last=False):
- """puts current chunk into the list of results and produces the next one - empty;
- if is_last is true, tokens tokens at the end won't add to token_count"""
- nonlocal token_count
- nonlocal last_comma
- nonlocal chunk
-
- if is_last:
- token_count += len(chunk.tokens)
- else:
- token_count += self.chunk_length
-
- to_add = self.chunk_length - len(chunk.tokens)
- if to_add > 0:
- chunk.tokens += [self.id_end] * to_add
- chunk.multipliers += [1.0] * to_add
-
- chunk.tokens = [self.id_start] + chunk.tokens + [self.id_end]
- chunk.multipliers = [1.0] + chunk.multipliers + [1.0]
-
- last_comma = -1
- chunks.append(chunk)
- chunk = PromptChunk()
-
- comma_padding_backtrack = 20 # default value in https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/6cff4401824299a983c8e13424018efc347b4a2b/modules/shared.py#L410
- for tokens, (text, weight) in zip(tokenized, parsed):
- if text == "BREAK" and weight == -1:
- next_chunk()
- continue
-
- position = 0
- while position < len(tokens):
- token = tokens[position]
-
- if token == self.comma_token:
- last_comma = len(chunk.tokens)
-
- # this is when we are at the end of alloted 75 tokens for the current chunk, and the current token is not a comma. opts.comma_padding_backtrack
- # is a setting that specifies that if there is a comma nearby, the text after the comma should be moved out of this chunk and into the next.
- elif (
- comma_padding_backtrack != 0
- and len(chunk.tokens) == self.chunk_length
- and last_comma != -1
- and len(chunk.tokens) - last_comma <= comma_padding_backtrack
- ):
- break_location = last_comma + 1
-
- reloc_tokens = chunk.tokens[break_location:]
- reloc_mults = chunk.multipliers[break_location:]
-
- chunk.tokens = chunk.tokens[:break_location]
- chunk.multipliers = chunk.multipliers[:break_location]
-
- next_chunk()
- chunk.tokens = reloc_tokens
- chunk.multipliers = reloc_mults
-
- if len(chunk.tokens) == self.chunk_length:
- next_chunk()
-
- chunk.tokens.append(token)
- chunk.multipliers.append(weight)
- position += 1
-
- if len(chunk.tokens) > 0 or len(chunks) == 0:
- next_chunk(is_last=True)
-
- return chunks, token_count
-
- def process_texts(self, texts):
- """
- Accepts a list of texts and calls tokenize_line() on each, with cache. Returns the list of results and maximum
- length, in tokens, of all texts.
- """
-
- token_count = 0
-
- cache = {}
- batch_chunks = []
- for line in texts:
- if line in cache:
- chunks = cache[line]
- else:
- chunks, current_token_count = self.tokenize_line(line)
- token_count = max(current_token_count, token_count)
-
- cache[line] = chunks
-
- batch_chunks.append(chunks)
-
- return batch_chunks, token_count
-
- def forward(self, texts):
- """
- Accepts an array of texts; Passes texts through transformers network to create a tensor with numerical representation of those texts.
- Returns a tensor with shape of (B, T, C), where B is length of the array; T is length, in tokens, of texts (including padding) - T will
- be a multiple of 77; and C is dimensionality of each token - for SD1 it's 768, and for SD2 it's 1024.
- An example shape returned by this function can be: (2, 77, 768).
-    Webui usually sends just one text at a time through this function - the only time when texts is an array with more than one element
- is when you do prompt editing: "a picture of a [cat:dog:0.4] eating ice cream"
- """
-
- batch_chunks, token_count = self.process_texts(texts)
- chunk_count = max([len(x) for x in batch_chunks])
-
- zs = []
- ts = []
- for i in range(chunk_count):
- batch_chunk = [
- chunks[i] if i < len(chunks) else self.empty_chunk()
- for chunks in batch_chunks
- ]
-
- tokens = [x.tokens for x in batch_chunk]
- multipliers = [x.multipliers for x in batch_chunk]
- # self.embeddings.fixes = [x.fixes for x in batch_chunk]
-
- # for fixes in self.embeddings.fixes:
- # for position, embedding in fixes:
- # used_embeddings[embedding.name] = embedding
-
- z = self.process_tokens(tokens, multipliers)
- zs.append(z)
- ts.append(tokens)
-
- return np.hstack(ts), torch.hstack(zs)
-
- def process_tokens(self, remade_batch_tokens, batch_multipliers):
- """
- sends one single prompt chunk to be encoded by transformers neural network.
- remade_batch_tokens is a batch of tokens - a list, where every element is a list of tokens; usually
- there are exactly 77 tokens in the list. batch_multipliers is the same but for multipliers instead of tokens.
- Multipliers are used to give more or less weight to the outputs of transformers network. Each multiplier
- corresponds to one token.
- """
- tokens = torch.asarray(remade_batch_tokens).to(self.device())
-
- # this is for SD2: SD1 uses the same token for padding and end of text, while SD2 uses different ones.
- if self.id_end != self.id_pad:
- for batch_pos in range(len(remade_batch_tokens)):
- index = remade_batch_tokens[batch_pos].index(self.id_end)
- tokens[batch_pos, index + 1 : tokens.shape[1]] = self.id_pad
-
- z = self.encode_with_transformers(tokens)
-
- # restoring original mean is likely not correct, but it seems to work well to prevent artifacts that happen otherwise
- batch_multipliers = torch.asarray(batch_multipliers).to(self.device())
- original_mean = z.mean()
- z = z * batch_multipliers.reshape(batch_multipliers.shape + (1,)).expand(z.shape)
- new_mean = z.mean()
- z = z * (original_mean / new_mean)
-
- return z
-
-
-class FrozenCLIPEmbedderWithCustomWords(FrozenCLIPEmbedderWithCustomWordsBase):
- def __init__(self, tokenizer, text_encoder):
- super().__init__(text_encoder)
- self.tokenizer = tokenizer
- self.text_encoder = text_encoder
-
- vocab = self.tokenizer.get_vocab()
-
- self.comma_token = vocab.get(",", None)
-
- self.token_mults = {}
- tokens_with_parens = [
- (k, v)
- for k, v in vocab.items()
- if "(" in k or ")" in k or "[" in k or "]" in k
- ]
- for text, ident in tokens_with_parens:
- mult = 1.0
- for c in text:
- if c == "[":
- mult /= 1.1
- if c == "]":
- mult *= 1.1
- if c == "(":
- mult *= 1.1
- if c == ")":
- mult /= 1.1
-
- if mult != 1.0:
- self.token_mults[ident] = mult
-
- self.id_start = self.tokenizer.bos_token_id
- self.id_end = self.tokenizer.eos_token_id
- self.id_pad = self.id_end
-
- def tokenize(self, texts):
- tokenized = self.tokenizer(
- texts, truncation=False, add_special_tokens=False
- )["input_ids"]
-
- return tokenized
-
- def encode_with_transformers(self, tokens):
- CLIP_stop_at_last_layers = 1
- tokens = tokens.to(self.text_encoder.device)
- outputs = self.text_encoder(tokens, output_hidden_states=True)
-
- if CLIP_stop_at_last_layers > 1:
- z = outputs.hidden_states[-CLIP_stop_at_last_layers]
- z = self.text_encoder.text_model.final_layer_norm(z)
- else:
- z = outputs.last_hidden_state
-
- return z
-
-
-re_attention = re.compile(
- r"""
-\\\(|
-\\\)|
-\\\[|
-\\]|
-\\\\|
-\\|
-\(|
-\[|
-:([+-]?[.\d]+)\)|
-\)|
-]|
-[^\\()\[\]:]+|
-:
-""",
- re.X,
-)
-
-re_break = re.compile(r"\s*\bBREAK\b\s*", re.S)
-
-
-def parse_prompt_attention(text):
- """
- Parses a string with attention tokens and returns a list of pairs: text and its associated weight.
- Accepted tokens are:
- (abc) - increases attention to abc by a multiplier of 1.1
- (abc:3.12) - increases attention to abc by a multiplier of 3.12
- [abc] - decreases attention to abc by a multiplier of 1.1
- \( - literal character '('
- \[ - literal character '['
- \) - literal character ')'
- \] - literal character ']'
- \\ - literal character '\'
- anything else - just text
-
- >>> parse_prompt_attention('normal text')
- [['normal text', 1.0]]
- >>> parse_prompt_attention('an (important) word')
- [['an ', 1.0], ['important', 1.1], [' word', 1.0]]
- >>> parse_prompt_attention('(unbalanced')
- [['unbalanced', 1.1]]
- >>> parse_prompt_attention('\(literal\]')
- [['(literal]', 1.0]]
- >>> parse_prompt_attention('(unnecessary)(parens)')
- [['unnecessaryparens', 1.1]]
- >>> parse_prompt_attention('a (((house:1.3)) [on] a (hill:0.5), sun, (((sky))).')
- [['a ', 1.0],
- ['house', 1.5730000000000004],
- [' ', 1.1],
- ['on', 1.0],
- [' a ', 1.1],
- ['hill', 0.55],
- [', sun, ', 1.1],
- ['sky', 1.4641000000000006],
- ['.', 1.1]]
- """
-
- res = []
- round_brackets = []
- square_brackets = []
-
- round_bracket_multiplier = 1.1
- square_bracket_multiplier = 1 / 1.1
-
- def multiply_range(start_position, multiplier):
- for p in range(start_position, len(res)):
- res[p][1] *= multiplier
-
- for m in re_attention.finditer(text):
- text = m.group(0)
- weight = m.group(1)
-
- if text.startswith("\\"):
- res.append([text[1:], 1.0])
- elif text == "(":
- round_brackets.append(len(res))
- elif text == "[":
- square_brackets.append(len(res))
- elif weight is not None and len(round_brackets) > 0:
- multiply_range(round_brackets.pop(), float(weight))
- elif text == ")" and len(round_brackets) > 0:
- multiply_range(round_brackets.pop(), round_bracket_multiplier)
- elif text == "]" and len(square_brackets) > 0:
- multiply_range(square_brackets.pop(), square_bracket_multiplier)
- else:
- parts = re.split(re_break, text)
- for i, part in enumerate(parts):
- if i > 0:
- res.append(["BREAK", -1])
- res.append([part, 1.0])
-
- for pos in round_brackets:
- multiply_range(pos, round_bracket_multiplier)
-
- for pos in square_brackets:
- multiply_range(pos, square_bracket_multiplier)
-
- if len(res) == 0:
- res = [["", 1.0]]
-
- # merge runs of identical weights
- i = 0
- while i + 1 < len(res):
- if res[i][1] == res[i + 1][1]:
- res[i][0] += res[i + 1][0]
- res.pop(i + 1)
- else:
- i += 1
-
- return res
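The deleted prompt-weighting module above scales each token's embedding by its attention multiplier and then rescales the result so the overall mean matches the unweighted output (the trick noted in process_tokens). A minimal, self-contained sketch of that step on a toy tensor — the shapes and weights below are made up, not taken from the project:

```python
# Hedged sketch: the per-token weighting + mean-restoration trick used in
# process_tokens() above, reproduced on a toy tensor. Values are made up.
import torch

z = torch.randn(1, 4, 8)                             # (batch, tokens, channels) toy "embeddings"
multipliers = torch.tensor([[1.0, 1.3, 1.3, 0.8]])   # one attention weight per token

original_mean = z.mean()
z_weighted = z * multipliers.unsqueeze(-1).expand(z.shape)
z_weighted = z_weighted * (original_mean / z_weighted.mean())  # restore the original mean

print(z_weighted.shape)                                  # torch.Size([1, 4, 8])
print(torch.isclose(z_weighted.mean(), original_mean))   # tensor(True)
```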
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/abc/_subprocesses.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/abc/_subprocesses.py
deleted file mode 100644
index 704b44a2dda9e21997acf52c268e414d01bd2eb5..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/abc/_subprocesses.py
+++ /dev/null
@@ -1,79 +0,0 @@
-from __future__ import annotations
-
-from abc import abstractmethod
-from signal import Signals
-
-from ._resources import AsyncResource
-from ._streams import ByteReceiveStream, ByteSendStream
-
-
-class Process(AsyncResource):
- """An asynchronous version of :class:`subprocess.Popen`."""
-
- @abstractmethod
- async def wait(self) -> int:
- """
- Wait until the process exits.
-
- :return: the exit code of the process
- """
-
- @abstractmethod
- def terminate(self) -> None:
- """
- Terminates the process, gracefully if possible.
-
- On Windows, this calls ``TerminateProcess()``.
- On POSIX systems, this sends ``SIGTERM`` to the process.
-
- .. seealso:: :meth:`subprocess.Popen.terminate`
- """
-
- @abstractmethod
- def kill(self) -> None:
- """
- Kills the process.
-
- On Windows, this calls ``TerminateProcess()``.
- On POSIX systems, this sends ``SIGKILL`` to the process.
-
- .. seealso:: :meth:`subprocess.Popen.kill`
- """
-
- @abstractmethod
- def send_signal(self, signal: Signals) -> None:
- """
- Send a signal to the subprocess.
-
- .. seealso:: :meth:`subprocess.Popen.send_signal`
-
- :param signal: the signal number (e.g. :data:`signal.SIGHUP`)
- """
-
- @property
- @abstractmethod
- def pid(self) -> int:
- """The process ID of the process."""
-
- @property
- @abstractmethod
- def returncode(self) -> int | None:
- """
- The return code of the process. If the process has not yet terminated, this will be
- ``None``.
- """
-
- @property
- @abstractmethod
- def stdin(self) -> ByteSendStream | None:
- """The stream for the standard input of the process."""
-
- @property
- @abstractmethod
- def stdout(self) -> ByteReceiveStream | None:
- """The stream for the standard output of the process."""
-
- @property
- @abstractmethod
- def stderr(self) -> ByteReceiveStream | None:
- """The stream for the standard error output of the process."""
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/lowlevel.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/lowlevel.py
deleted file mode 100644
index 0e908c65474402fa89fe933d65205378c543e3bf..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/lowlevel.py
+++ /dev/null
@@ -1,174 +0,0 @@
-from __future__ import annotations
-
-import enum
-import sys
-from dataclasses import dataclass
-from typing import Any, Generic, TypeVar, overload
-from weakref import WeakKeyDictionary
-
-from ._core._eventloop import get_asynclib
-
-if sys.version_info >= (3, 8):
- from typing import Literal
-else:
- from typing_extensions import Literal
-
-T = TypeVar("T")
-D = TypeVar("D")
-
-
-async def checkpoint() -> None:
- """
- Check for cancellation and allow the scheduler to switch to another task.
-
- Equivalent to (but more efficient than)::
-
- await checkpoint_if_cancelled()
- await cancel_shielded_checkpoint()
-
-
- .. versionadded:: 3.0
-
- """
- await get_asynclib().checkpoint()
-
-
-async def checkpoint_if_cancelled() -> None:
- """
- Enter a checkpoint if the enclosing cancel scope has been cancelled.
-
- This does not allow the scheduler to switch to a different task.
-
- .. versionadded:: 3.0
-
- """
- await get_asynclib().checkpoint_if_cancelled()
-
-
-async def cancel_shielded_checkpoint() -> None:
- """
- Allow the scheduler to switch to another task but without checking for cancellation.
-
- Equivalent to (but potentially more efficient than)::
-
- with CancelScope(shield=True):
- await checkpoint()
-
-
- .. versionadded:: 3.0
-
- """
- await get_asynclib().cancel_shielded_checkpoint()
-
-
-def current_token() -> object:
- """Return a backend specific token object that can be used to get back to the event loop."""
- return get_asynclib().current_token()
-
-
-_run_vars: WeakKeyDictionary[Any, dict[str, Any]] = WeakKeyDictionary()
-_token_wrappers: dict[Any, _TokenWrapper] = {}
-
-
-@dataclass(frozen=True)
-class _TokenWrapper:
- __slots__ = "_token", "__weakref__"
- _token: object
-
-
-class _NoValueSet(enum.Enum):
- NO_VALUE_SET = enum.auto()
-
-
-class RunvarToken(Generic[T]):
- __slots__ = "_var", "_value", "_redeemed"
-
- def __init__(self, var: RunVar[T], value: T | Literal[_NoValueSet.NO_VALUE_SET]):
- self._var = var
- self._value: T | Literal[_NoValueSet.NO_VALUE_SET] = value
- self._redeemed = False
-
-
-class RunVar(Generic[T]):
- """
- Like a :class:`~contextvars.ContextVar`, except scoped to the running event loop.
- """
-
- __slots__ = "_name", "_default"
-
- NO_VALUE_SET: Literal[_NoValueSet.NO_VALUE_SET] = _NoValueSet.NO_VALUE_SET
-
- _token_wrappers: set[_TokenWrapper] = set()
-
- def __init__(
- self,
- name: str,
- default: T | Literal[_NoValueSet.NO_VALUE_SET] = NO_VALUE_SET,
- ):
- self._name = name
- self._default = default
-
- @property
- def _current_vars(self) -> dict[str, T]:
- token = current_token()
- while True:
- try:
- return _run_vars[token]
- except TypeError:
- # Happens when token isn't weak referable (TrioToken).
- # This workaround does mean that some memory will leak on Trio until the problem
- # is fixed on their end.
- token = _TokenWrapper(token)
- self._token_wrappers.add(token)
- except KeyError:
- run_vars = _run_vars[token] = {}
- return run_vars
-
- @overload
- def get(self, default: D) -> T | D:
- ...
-
- @overload
- def get(self) -> T:
- ...
-
- def get(
- self, default: D | Literal[_NoValueSet.NO_VALUE_SET] = NO_VALUE_SET
- ) -> T | D:
- try:
- return self._current_vars[self._name]
- except KeyError:
- if default is not RunVar.NO_VALUE_SET:
- return default
- elif self._default is not RunVar.NO_VALUE_SET:
- return self._default
-
- raise LookupError(
- f'Run variable "{self._name}" has no value and no default set'
- )
-
- def set(self, value: T) -> RunvarToken[T]:
- current_vars = self._current_vars
- token = RunvarToken(self, current_vars.get(self._name, RunVar.NO_VALUE_SET))
- current_vars[self._name] = value
- return token
-
- def reset(self, token: RunvarToken[T]) -> None:
- if token._var is not self:
- raise ValueError("This token does not belong to this RunVar")
-
- if token._redeemed:
- raise ValueError("This token has already been used")
-
- if token._value is _NoValueSet.NO_VALUE_SET:
- try:
- del self._current_vars[self._name]
- except KeyError:
- pass
- else:
- self._current_vars[self._name] = token._value
-
- token._redeemed = True
-
- def __repr__(self) -> str:
- return f""
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-1d5a6686.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-1d5a6686.js
deleted file mode 100644
index 36567a7772cbe31ec8ae2b3342e9f6135db903ce..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-1d5a6686.js
+++ /dev/null
@@ -1,2 +0,0 @@
-import{S as h,e as p,s as v,h as T,j as k,k as C,o as S,t as j,z as m,v as d,x as q,B as w,a9 as z,ab as B,ac as D,ad as E,F as u}from"./index-3370be2a.js";import{T as F}from"./TabItem.svelte_svelte_type_style_lang-ffbad424.js";/* empty css */function A(s){let e;const i=s[4].default,t=z(i,s,s[8],null);return{c(){t&&t.c()},m(n,o){t&&t.m(n,o),e=!0},p(n,o){t&&t.p&&(!e||o&256)&&B(t,i,n,n[8],e?E(i,n[8],o,null):D(n[8]),null)},i(n){e||(m(t,n),e=!0)},o(n){d(t,n),e=!1},d(n){t&&t.d(n)}}}function G(s){let e,i,t;function n(l){s[5](l)}let o={visible:s[1],elem_id:s[2],elem_classes:s[3],$$slots:{default:[A]},$$scope:{ctx:s}};return s[0]!==void 0&&(o.selected=s[0]),e=new F({props:o}),T.push(()=>k(e,"selected",n)),e.$on("change",s[6]),e.$on("select",s[7]),{c(){C(e.$$.fragment)},m(l,c){S(e,l,c),t=!0},p(l,[c]){const _={};c&2&&(_.visible=l[1]),c&4&&(_.elem_id=l[2]),c&8&&(_.elem_classes=l[3]),c&256&&(_.$$scope={dirty:c,ctx:l}),!i&&c&1&&(i=!0,_.selected=l[0],j(()=>i=!1)),e.$set(_)},i(l){t||(m(e.$$.fragment,l),t=!0)},o(l){d(e.$$.fragment,l),t=!1},d(l){q(e,l)}}}function H(s,e,i){let{$$slots:t={},$$scope:n}=e;const o=w();let{visible:l=!0}=e,{elem_id:c=""}=e,{elem_classes:_=[]}=e,{selected:f}=e;function r(a){f=a,i(0,f)}function b(a){u.call(this,s,a)}function g(a){u.call(this,s,a)}return s.$$set=a=>{"visible"in a&&i(1,l=a.visible),"elem_id"in a&&i(2,c=a.elem_id),"elem_classes"in a&&i(3,_=a.elem_classes),"selected"in a&&i(0,f=a.selected),"$$scope"in a&&i(8,n=a.$$scope)},s.$$.update=()=>{s.$$.dirty&1&&o("prop_change",{selected:f})},[f,l,c,_,t,r,b,g,n]}class I extends h{constructor(e){super(),p(this,e,H,G,v,{visible:1,elem_id:2,elem_classes:3,selected:0})}}const M=I,N=["static"];export{M as Component,N as modes};
-//# sourceMappingURL=index-1d5a6686.js.map
diff --git a/spaces/DShrimp/PoseMaker/static/poseEditor.js b/spaces/DShrimp/PoseMaker/static/poseEditor.js
deleted file mode 100644
index 8bd3d5e81bc92b88c5b07abdf2133bd2cffb8329..0000000000000000000000000000000000000000
--- a/spaces/DShrimp/PoseMaker/static/poseEditor.js
+++ /dev/null
@@ -1,238 +0,0 @@
-console.log("hello from poseEditor.js")
-var canvas = null;
-var ctx = null;
-
- // candidate format: [[x1, y1, score1], [x2, y2, score2], ...]
-let candidateSource = [
- [235, 158, 0.93167633],
- [234, 220, 0.97106987],
- [193, 222, 0.93366587],
- [138, 263, 0.87655306],
- [89, 308, 0.8868227],
- [276, 220, 0.9038924],
- [325, 264, 0.92930061],
- [375, 309, 0.9217211],
- [207, 347, 0.86410147],
- [203, 433, 0.86538297],
- [199, 523, 0.95236528],
- [261, 347, 0.88489777],
- [262, 430, 0.90848708],
- [261, 522, 0.94749999],
- [227, 148, 0.94189668],
- [245, 148, 0.93967074],
- [208, 158, 0.92053992],
- [258, 154, 0.73533273]
-];
-
- // subset format: [[index1, index2, ..., -1], [index1, index2, ..., -1], ...]
-let subset = [[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 33.81122635, 18]];
-
-// const candidateSource = [[618.00, 0.00], [618.00, 44.00], [304.00, 81.00], [482.00, 96.00], [66.00, 270.00], [171.00, 280.00], [618.00, 82.00], [307.00, 112.00], [460.00, 143.00], [0.00, 301.00], [65.00, 301.00], [172.00, 303.00], [584.00, 86.00], [275.00, 119.00], [420.00, 139.00], [0.00, 301.00], [41.00, 301.00], [144.00, 303.00], [544.00, 131.00], [348.00, 139.00], [262.00, 160.00], [0.00, 337.00], [52.00, 339.00], [130.00, 348.00], [570.00, 175.00], [283.00, 177.00], [78.00, 338.00], [172.00, 380.00], [651.00, 78.00], [338.00, 111.00], [505.00, 144.00], [92.00, 301.00], [198.00, 305.00], [661.00, 132.00], [349.00, 156.00], [541.00, 179.00], [106.00, 336.00], [203.00, 348.00], [305.00, 159.00], [665.00, 160.00], [563.00, 192.00], [80.00, 343.00], [181.00, 385.00], [614.00, 205.00], [291.00, 220.00], [432.00, 320.00], [152.00, 372.00], [43.00, 380.00], [0.00, 386.00], [623.00, 281.00], [306.00, 290.00], [92.00, 357.00], [509.00, 434.00], [304.00, 357.00], [622.00, 368.00], [47.00, 394.00], [0.00, 395.00], [142.00, 405.00], [535.00, 565.00], [655.00, 200.00], [337.00, 217.00], [467.00, 322.00], [191.00, 372.00], [83.00, 375.00], [344.00, 282.00], [655.00, 282.00], [103.00, 343.00], [237.00, 368.00], [22.00, 377.00], [0.00, 379.00], [460.00, 459.00], [305.00, 352.00], [638.00, 355.00], [0.00, 401.00], [110.00, 412.00], [411.00, 570.00], [608.00, 0.00], [608.00, 40.00], [297.00, 75.00], [469.00, 84.00], [0.00, 261.00], [58.00, 263.00], [165.00, 275.00], [625.00, 0.00], [625.00, 39.00], [309.00, 74.00], [486.00, 83.00], [71.00, 264.00], [180.00, 276.00], [599.00, 0.00], [599.00, 44.00], [284.00, 80.00], [440.00, 93.00], [48.00, 271.00], [0.00, 272.00], [157.00, 277.00], [634.00, 0.00], [633.00, 41.00], [319.00, 77.00], [79.00, 269.00], [190.00, 277.00]];
-// const subset = [[1.00,6.00,12.00,18.00,24.00,28.00,33.00,39.00,43.00,49.00,54.00,59.00,65.00,72.00,77.00,84.00,90.00,97.00,32.98,18.00],[5.00,11.00,17.00,23.00,27.00,32.00,37.00,42.00,46.00,-1.00,-1.00,62.00,67.00,-1.00,82.00,88.00,95.00,100.00,25.45,15.00],[4.00,10.00,16.00,22.00,26.00,31.00,36.00,41.00,47.00,51.00,57.00,63.00,66.00,74.00,81.00,87.00,93.00,99.00,26.97,18.00],[3.00,8.00,14.00,19.00,25.00,30.00,35.00,40.00,45.00,52.00,58.00,61.00,70.00,75.00,79.00,86.00,92.00,-1.00,30.45,17.00],[2.00,7.00,13.00,20.00,-1.00,29.00,34.00,38.00,44.00,50.00,53.00,60.00,64.00,71.00,78.00,85.00,91.00,98.00,27.89,17.00],[0.00,-1.00,-1.00,-1.00,-1.00,-1.00,-1.00,-1.00,-1.00,-1.00,-1.00,-1.00,-1.00,-1.00,76.00,83.00,-1.00,96.00,3.33,4.00]];
-
-let candidate = candidateSource.map(point => [point[0], point[1] - 70]);
-
-
-function clearCanvas() {
- var w = canvas.width;
- var h = canvas.height;
- ctx.fillStyle = 'black';
- ctx.fillRect(0, 0, w, h);
-}
-
-function resizeCanvas(width, height) {
- canvas.width = width ? width : canvas.width;
- canvas.height = height ? height : canvas.height;
- clearCanvas();
- drawBodyPose(candidate, subset);
-}
-
-function drawBodyPose(candidate, subset) {
- const stickwidth = 4;
- const limbSeq = [[2, 3], [2, 6], [3, 4], [4, 5], [6, 7], [7, 8], [2, 9], [9, 10],
- [10, 11], [2, 12], [12, 13], [13, 14], [2, 1], [1, 15], [15, 17],
- [1, 16], [16, 18], [3, 17], [6, 18]];
-
- const colors = [[255, 0, 0], [255, 85, 0], [255, 170, 0], [255, 255, 0], [170, 255, 0], [85, 255, 0], [0, 255, 0],
- [0, 255, 85], [0, 255, 170], [0, 255, 255], [0, 170, 255], [0, 85, 255], [0, 0, 255], [85, 0, 255],
- [170, 0, 255], [255, 0, 255], [255, 0, 170], [255, 0, 85]];
-
- for (let i = 0; i < 17; i++) {
- for (let n = 0; n < subset.length; n++) {
- const index0 = subset[n][limbSeq[i][0]-1];
- const index1 = subset[n][limbSeq[i][1]-1];
- if (index0 === -1 || index1 === -1) {
- continue;
- }
- const [X0, Y0] = candidate[index0].slice(0, 2);
- const [X1, Y1] = candidate[index1].slice(0, 2);
- ctx.beginPath();
- ctx.lineWidth=stickwidth;
- ctx.strokeStyle = `rgb(${colors[i].join(',')})`;
- ctx.moveTo(X0, Y0);
- ctx.lineTo(X1, Y1);
- ctx.stroke();
- }
- }
-
- ctx.font = '12px serif';
- for (let i = 0; i < 18; i++) {
- for (let n = 0; n < subset.length; n++) {
- const index = subset[n][i];
- if (index === -1) {
- continue;
- }
- const [x, y] = candidate[index].slice(0, 2);
- ctx.beginPath();
- ctx.arc(x, y, 4, 0, 2 * Math.PI);
- ctx.fillStyle = `rgb(${colors[i].join(',')})`;
- ctx.fill();
- ctx.fillStyle = 'rgb(255,255,255)'
- // ctx.fillText(index, x-3, y+4);
- }
- }
-}
-
-function getNearestCandidate(x, y) {
- let minDist = Infinity;
- let minIndex = -1;
- for (let i = 0; i < candidate.length; i++) {
- const dist = Math.sqrt((x - candidate[i][0]) ** 2 + (y - candidate[i][1]) ** 2);
- if (dist < minDist) {
- minDist = dist;
- minIndex = i;
- }
- }
- return [minIndex,minDist];
-}
-
- // Variables that hold coordinates while dragging
-let isDragging = false;
-let dragIndex = -1;
-let dragStartX = 0;
-let dragStartY = 0;
-let draggingCandidate = null;
-let dragPersonIndex = -1;
-
-function getCanvasPosition(event) {
- const rect = canvas.getBoundingClientRect();
- const x = event.clientX - rect.left;
- const y = event.clientY - rect.top;
- return [x, y];
-}
-
- // Called when the mouse button is pressed on the canvas element
-function handleMouseDown(event) {
- const [x, y] = getCanvasPosition(event);
- const [index, minDist] = getNearestCandidate(x, y);
-
- // Start the drag operation
- if (event.altKey || event.ctrlKey || event.shiftKey || minDist < 16) {
- isDragging = true;
- dragIndex = index;
- dragStartX = x;
- dragStartY = y;
- draggingCandidate = JSON.parse(JSON.stringify(candidate))
-
- // Find the person (subset row) that contains this index
- for (let i = 0; i < subset.length; i++) {
- var found = subset[i].indexOf(index);
- if (found != -1 && found < 18) {
- dragPersonIndex = i;
- break;
- }
- }
- }
-}
-
-function forEachCandidateOfPerson(personIndex, callback) {
- if (personIndex === -1) { return; }
-
- for (let i = 0; i < 18; i++) {
- const index = subset[personIndex][i];
- if (index === -1) {
- continue;
- }
- callback(index);
- }
-}
-
- // Called when the mouse moves over the canvas element
-function handleMouseMove(event) {
- if (!isDragging) {
- return;
- }
-
- const [x, y] = getCanvasPosition(event);
-
- const dragOffsetX = x - dragStartX;
- const dragOffsetY = y - dragStartY;
-
- if (event.ctrlKey) {
- // Scale the whole person
- let xScale = 1 + dragOffsetX / canvas.width;
- let yScale = 1 + dragOffsetY / canvas.height;
- forEachCandidateOfPerson(dragPersonIndex, (index) => {
- candidate[index][0] = (draggingCandidate[index][0] - dragStartX) * xScale + dragStartX;
- candidate[index][1] = (draggingCandidate[index][1] - dragStartY) * yScale + dragStartY;
- });
- } else if (event.shiftKey) {
- // Rotate the whole person
- let angle = Math.atan2(dragOffsetY, dragOffsetX);
- forEachCandidateOfPerson(dragPersonIndex, (index) => {
- let x = draggingCandidate[index][0] - dragStartX;
- let y = draggingCandidate[index][1] - dragStartY;
- candidate[index][0] = x * Math.cos(angle) - y * Math.sin(angle) + dragStartX;
- candidate[index][1] = x * Math.sin(angle) + y * Math.cos(angle) + dragStartY;
- });
- } else if (event.altKey) {
- // Translate the whole person
- forEachCandidateOfPerson(dragPersonIndex, (index) => {
- candidate[index][0] = draggingCandidate[index][0] + dragOffsetX;
- candidate[index][1] = draggingCandidate[index][1] + dragOffsetY;
- });
- } else {
- // Move a single keypoint
- candidate[dragIndex][0] = draggingCandidate[dragIndex][0] + dragOffsetX;
- candidate[dragIndex][1] = draggingCandidate[dragIndex][1] + dragOffsetY;
- }
-
- clearCanvas();
- drawBodyPose(candidate, subset);
-}
-
- // Called when the mouse button is released on the canvas element
-function handleMouseUp(event) {
- isDragging = false;
-}
-
-function initializePose(jsonData,w,h) {
- console.log("initializePose");
- if (jsonData != null) {
- candidate = jsonData.candidate;
- subset = jsonData.subset;
- }
-
- canvas = document.getElementById('canvas');
- ctx = canvas.getContext('2d');
-
- canvas.addEventListener('mousedown', handleMouseDown);
- canvas.addEventListener('mousemove', handleMouseMove);
- canvas.addEventListener('mouseup', handleMouseUp);
-
- resizeCanvas(w, h);
-}
-
-function savePose() {
- const canvasUrl = canvas.toDataURL();
-
- const createEl = document.createElement('a');
- createEl.href = canvasUrl;
-
- // This is the name of our downloaded file
- createEl.download = "pose.png";
-
- createEl.click();
- createEl.remove();
- return { "candidate": candidate, "subset": subset };
-}
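poseEditor.js renders an OpenPose-style skeleton: candidate holds keypoint coordinates and each subset row maps a person's 18 joints to indices into candidate. A hedged Python/matplotlib sketch of the same drawing loop, with the 1-based limbSeq pairs copied from drawBodyPose and coordinates taken from the sample data above:

```python
# Hedged sketch: render the same candidate/subset structure with matplotlib.
# Coordinates come from the sample data above (scores dropped); limb_seq mirrors the JS (1-based pairs).
import matplotlib.pyplot as plt

candidate = [[235, 158], [234, 220], [193, 222], [138, 263], [89, 308],
             [276, 220], [325, 264], [375, 309], [207, 347], [203, 433],
             [199, 523], [261, 347], [262, 430], [261, 522], [227, 148],
             [245, 148], [208, 158], [258, 154]]
subset = [list(range(18))]  # one person using every candidate point

limb_seq = [[2, 3], [2, 6], [3, 4], [4, 5], [6, 7], [7, 8], [2, 9], [9, 10],
            [10, 11], [2, 12], [12, 13], [13, 14], [2, 1], [1, 15], [15, 17],
            [1, 16], [16, 18]]

for person in subset:
    for a, b in limb_seq:
        i, j = person[a - 1], person[b - 1]
        if i == -1 or j == -1:
            continue  # joint not detected for this person
        (x0, y0), (x1, y1) = candidate[i], candidate[j]
        plt.plot([x0, x1], [y0, y1])

plt.gca().invert_yaxis()  # image coordinates: y grows downwards
plt.show()
```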
diff --git a/spaces/Datasculptor/DescriptionGPT/detic/data/transforms/custom_augmentation_impl.py b/spaces/Datasculptor/DescriptionGPT/detic/data/transforms/custom_augmentation_impl.py
deleted file mode 100644
index 47bef39566ed741287ceb55fb98ec9b03ee30b63..0000000000000000000000000000000000000000
--- a/spaces/Datasculptor/DescriptionGPT/detic/data/transforms/custom_augmentation_impl.py
+++ /dev/null
@@ -1,60 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-# Part of the code is from https://github.com/rwightman/efficientdet-pytorch/blob/master/effdet/data/transforms.py
-# Modified by Xingyi Zhou
-# The original code is under Apache-2.0 License
-import numpy as np
-import sys
-from fvcore.transforms.transform import (
- BlendTransform,
- CropTransform,
- HFlipTransform,
- NoOpTransform,
- Transform,
- VFlipTransform,
-)
-from PIL import Image
-
-from detectron2.data.transforms.augmentation import Augmentation
-from .custom_transform import EfficientDetResizeCropTransform
-
-__all__ = [
- "EfficientDetResizeCrop",
-]
-
-class EfficientDetResizeCrop(Augmentation):
- """
- Randomly scale the image towards a square target size and pick a random crop offset,
- following the EfficientDet resize-and-crop augmentation; the scale factor is drawn from `scale`.
- """
-
- def __init__(
- self, size, scale, interp=Image.BILINEAR
- ):
- """
- """
- super().__init__()
- self.target_size = (size, size)
- self.scale = scale
- self.interp = interp
-
- def get_transform(self, img):
- # Select a random scale factor.
- scale_factor = np.random.uniform(*self.scale)
- scaled_target_height = scale_factor * self.target_size[0]
- scaled_target_width = scale_factor * self.target_size[1]
- # Recompute the accurate scale_factor using rounded scaled image size.
- width, height = img.shape[1], img.shape[0]
- img_scale_y = scaled_target_height / height
- img_scale_x = scaled_target_width / width
- img_scale = min(img_scale_y, img_scale_x)
-
- # Select non-zero random offset (x, y) if scaled image is larger than target size
- scaled_h = int(height * img_scale)
- scaled_w = int(width * img_scale)
- offset_y = scaled_h - self.target_size[0]
- offset_x = scaled_w - self.target_size[1]
- offset_y = int(max(0.0, float(offset_y)) * np.random.uniform(0, 1))
- offset_x = int(max(0.0, float(offset_x)) * np.random.uniform(0, 1))
- return EfficientDetResizeCropTransform(
- scaled_h, scaled_w, offset_y, offset_x, img_scale, self.target_size, self.interp)
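get_transform above picks a random scale factor, fits the image inside the scaled square target, and then draws a random crop offset for whatever sticks out. A hedged numeric walkthrough of that arithmetic with a made-up 640x480 image, size=512 and scale=(0.8, 1.2):

```python
# Hedged sketch: the scale/offset arithmetic from get_transform() above,
# on a made-up 640x480 image with size=512 and scale=(0.8, 1.2).
import numpy as np

size, scale = 512, (0.8, 1.2)
height, width = 480, 640

scale_factor = np.random.uniform(*scale)
scaled_target_h = scale_factor * size
scaled_target_w = scale_factor * size

img_scale = min(scaled_target_h / height, scaled_target_w / width)
scaled_h, scaled_w = int(height * img_scale), int(width * img_scale)

# random crop offset, only non-zero when the scaled image exceeds the target
offset_y = int(max(0.0, scaled_h - size) * np.random.uniform(0, 1))
offset_x = int(max(0.0, scaled_w - size) * np.random.uniform(0, 1))
print(img_scale, (scaled_h, scaled_w), (offset_y, offset_x))
```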
diff --git a/spaces/EuroPython2022/Sketch2ColourDemo/README.md b/spaces/EuroPython2022/Sketch2ColourDemo/README.md
deleted file mode 100644
index ef966aa2bde7e9f9cf5e54e11ba69aefc83870fb..0000000000000000000000000000000000000000
--- a/spaces/EuroPython2022/Sketch2ColourDemo/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Sketch2ColourDemo
-emoji: 📈
-colorFrom: red
-colorTo: purple
-sdk: gradio
-sdk_version: 3.0.24
-app_file: app.py
-pinned: false
-license: eupl-1.1
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/EyanAn/vits-uma-genshin-honkai/mel_processing.py b/spaces/EyanAn/vits-uma-genshin-honkai/mel_processing.py
deleted file mode 100644
index 3e252e76320522a8a4195a60665168f22769aec2..0000000000000000000000000000000000000000
--- a/spaces/EyanAn/vits-uma-genshin-honkai/mel_processing.py
+++ /dev/null
@@ -1,101 +0,0 @@
-import torch
-import torch.utils.data
-from librosa.filters import mel as librosa_mel_fn
-
-MAX_WAV_VALUE = 32768.0
-
-
-def dynamic_range_compression_torch(x, C=1, clip_val=1e-5):
- """
- PARAMS
- ------
- C: compression factor
- """
- return torch.log(torch.clamp(x, min=clip_val) * C)
-
-
-def dynamic_range_decompression_torch(x, C=1):
- """
- PARAMS
- ------
- C: compression factor used to compress
- """
- return torch.exp(x) / C
-
-
-def spectral_normalize_torch(magnitudes):
- output = dynamic_range_compression_torch(magnitudes)
- return output
-
-
-def spectral_de_normalize_torch(magnitudes):
- output = dynamic_range_decompression_torch(magnitudes)
- return output
-
-
-mel_basis = {}
-hann_window = {}
-
-
-def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False):
- if torch.min(y) < -1.:
- print('min value is ', torch.min(y))
- if torch.max(y) > 1.:
- print('max value is ', torch.max(y))
-
- global hann_window
- dtype_device = str(y.dtype) + '_' + str(y.device)
- wnsize_dtype_device = str(win_size) + '_' + dtype_device
- if wnsize_dtype_device not in hann_window:
- hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device)
-
- y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect')
- y = y.squeeze(1)
-
- spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device],
- center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False)
-
- spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
- return spec
-
-
-def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax):
- global mel_basis
- dtype_device = str(spec.dtype) + '_' + str(spec.device)
- fmax_dtype_device = str(fmax) + '_' + dtype_device
- if fmax_dtype_device not in mel_basis:
- mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax)
- mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device)
- spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
- spec = spectral_normalize_torch(spec)
- return spec
-
-
-def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False):
- if torch.min(y) < -1.:
- print('min value is ', torch.min(y))
- if torch.max(y) > 1.:
- print('max value is ', torch.max(y))
-
- global mel_basis, hann_window
- dtype_device = str(y.dtype) + '_' + str(y.device)
- fmax_dtype_device = str(fmax) + '_' + dtype_device
- wnsize_dtype_device = str(win_size) + '_' + dtype_device
- if fmax_dtype_device not in mel_basis:
- mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax)
- mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device)
- if wnsize_dtype_device not in hann_window:
- hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device)
-
- y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect')
- y = y.squeeze(1)
-
- spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device],
- center=center, pad_mode='reflect', normalized=False, onesided=True)
-
- spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
-
- spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
- spec = spectral_normalize_torch(spec)
-
- return spec
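spectrogram_torch above reflect-pads the waveform, runs an STFT without centering, and keeps the magnitude. A hedged shape check with plain torch; the n_fft/hop/win values are illustrative, not the project's actual config:

```python
# Hedged sketch: expected spectrogram shape for STFT settings like the ones above.
# n_fft/hop/win values are illustrative, not taken from the project's config.
import torch

y = torch.randn(1, 16000)              # 1 second of fake 16 kHz audio
n_fft, hop, win = 1024, 256, 1024

pad = (n_fft - hop) // 2
y = torch.nn.functional.pad(y.unsqueeze(1), (pad, pad), mode="reflect").squeeze(1)

spec = torch.stft(y, n_fft, hop_length=hop, win_length=win,
                  window=torch.hann_window(win), center=False,
                  return_complex=True).abs()   # magnitude, like sqrt(re^2 + im^2) above
print(spec.shape)                       # (1, n_fft // 2 + 1, frames)
```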
diff --git a/spaces/FantasticGNU/AnomalyGPT/utils/io.py b/spaces/FantasticGNU/AnomalyGPT/utils/io.py
deleted file mode 100644
index d0edd1dd450d18981c545a9cb7460184186d6708..0000000000000000000000000000000000000000
--- a/spaces/FantasticGNU/AnomalyGPT/utils/io.py
+++ /dev/null
@@ -1,42 +0,0 @@
-import h5py
-import numpy as np
-import open3d
-import os
-
-class IO:
- @classmethod
- def get(cls, file_path):
- _, file_extension = os.path.splitext(file_path)
-
- if file_extension in ['.npy']:
- return cls._read_npy(file_path)
- elif file_extension in ['.pcd']:
- return cls._read_pcd(file_path)
- elif file_extension in ['.h5']:
- return cls._read_h5(file_path)
- elif file_extension in ['.txt']:
- return cls._read_txt(file_path)
- else:
- raise Exception('Unsupported file extension: %s' % file_extension)
-
- # References: https://github.com/numpy/numpy/blob/master/numpy/lib/format.py
- @classmethod
- def _read_npy(cls, file_path):
- return np.load(file_path)
-
- # References: https://github.com/dimatura/pypcd/blob/master/pypcd/pypcd.py#L275
- # Support PCD files without compression ONLY!
- @classmethod
- def _read_pcd(cls, file_path):
- pc = open3d.io.read_point_cloud(file_path)
- ptcloud = np.array(pc.points)
- return ptcloud
-
- @classmethod
- def _read_txt(cls, file_path):
- return np.loadtxt(file_path)
-
- @classmethod
- def _read_h5(cls, file_path):
- f = h5py.File(file_path, 'r')
- return f['data'][()]
\ No newline at end of file
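The IO helper above dispatches on the file extension. A hedged usage sketch that round-trips a point-cloud array through it; the import path and file name are assumptions, not part of the deleted code:

```python
# Hedged sketch: round-trip a point-cloud array through the IO helper above.
# The import path and "points.npy" file name are assumptions for illustration.
import numpy as np
from utils.io import IO  # assumes the utils/io.py layout shown above

points = np.random.rand(1024, 3).astype(np.float32)
np.save("points.npy", points)

loaded = IO.get("points.npy")  # dispatches to _read_npy() by file extension
assert loaded.shape == (1024, 3)
```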
diff --git a/spaces/GT4SD/moler/utils.py b/spaces/GT4SD/moler/utils.py
deleted file mode 100644
index 2ddf89fb4439d5e64cb4174413dbdd8210521fb2..0000000000000000000000000000000000000000
--- a/spaces/GT4SD/moler/utils.py
+++ /dev/null
@@ -1,54 +0,0 @@
-import json
-import logging
-import os
-from collections import defaultdict
-from typing import Dict, List, Tuple
-
-import mols2grid
-import pandas as pd
-from rdkit import Chem
-from terminator.selfies import decoder
-
-logger = logging.getLogger(__name__)
-logger.addHandler(logging.NullHandler())
-
-
-def draw_grid_generate(
- seeds: List[str],
- scaffolds: List[str],
- samples: List[str],
- n_cols: int = 5,
- size=(140, 200),
-) -> str:
- """
- Uses mols2grid to draw a HTML grid for the generated molecules
-
- Args:
- seeds: The seed molecules (SMILES).
- scaffolds: The scaffold molecules (SMILES).
- samples: The generated samples.
- n_cols: Number of columns in grid. Defaults to 5.
- size: Size of molecule in grid. Defaults to (140, 200).
-
- Returns:
- HTML to display
- """
-
- result = defaultdict(list)
- result.update(
- {
- "SMILES": seeds + scaffolds + samples,
- "Name": [f"Seed_{i}" for i in range(len(seeds))]
- + [f"Scaffold_{i}" for i in range(len(scaffolds))]
- + [f"Generated_{i}" for i in range(len(samples))],
- },
- )
-
- result_df = pd.DataFrame(result)
- obj = mols2grid.display(
- result_df,
- tooltip=list(result.keys()),
- height=1100,
- n_cols=n_cols,
- name="Results",
- size=size,
- )
- return obj.data
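draw_grid_generate above labels seeds, scaffolds, and generated molecules and renders them with mols2grid. A hedged usage sketch with arbitrary SMILES strings; the import path is an assumption:

```python
# Hedged sketch: render a tiny grid with the helper above.
# Assumes the deleted utils.py is importable as below and that mols2grid/pandas
# are installed; the SMILES strings are arbitrary examples.
from utils import draw_grid_generate  # import path is an assumption

html = draw_grid_generate(
    seeds=["CCO"],                       # ethanol as the seed
    scaffolds=["c1ccccc1"],              # benzene scaffold
    samples=["CCN", "CCCl", "CC(=O)O"],  # "generated" molecules
    n_cols=3,
)
print(html[:200])  # HTML string suitable for embedding in a page
```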
diff --git a/spaces/GaenKoki/voicevox/build_util/check_release_build.py b/spaces/GaenKoki/voicevox/build_util/check_release_build.py
deleted file mode 100644
index 71bf49c080f4fc39d1e08ccaa9cd6b1c35731ce8..0000000000000000000000000000000000000000
--- a/spaces/GaenKoki/voicevox/build_util/check_release_build.py
+++ /dev/null
@@ -1,70 +0,0 @@
-"""
-Test the release build output.
-"""
-import argparse
-import json
-import time
-from io import BytesIO
-from pathlib import Path
-from subprocess import Popen
-from urllib.parse import urlencode
-from urllib.request import Request, urlopen
-
-import soundfile
-
-base_url = "http://127.0.0.1:50021/"
-
-
-def test_release_build(dist_dir: Path, skip_run_process: bool) -> None:
- run_file = dist_dir / "run"
- if not run_file.exists():
- run_file = dist_dir / "run.exe"
-
- # Launch the engine
- process = None
- if not skip_run_process:
- process = Popen([run_file.absolute()], cwd=dist_dir)
- time.sleep(60) # wait for startup
-
- # Version retrieval test
- req = Request(base_url + "version")
- with urlopen(req) as res:
- assert len(res.read()) > 0
-
- # Text -> audio query
- text = "こんにちは、音声合成の世界へようこそ"
- req = Request(
- base_url + "audio_query?" + urlencode({"speaker": "1", "text": text}),
- method="POST",
- )
- with urlopen(req) as res:
- query = json.loads(res.read().decode("utf-8"))
-
- # Query -> audio
- req = Request(base_url + "synthesis?speaker=1", method="POST")
- req.add_header("Content-Type", "application/json")
- req.data = json.dumps(query).encode("utf-8")
- with urlopen(req) as res:
- wave = res.read()
- soundfile.read(BytesIO(wave))
-
- # Engine manifest
- req = Request(base_url + "engine_manifest", method="GET")
- with urlopen(req) as res:
- manifest = json.loads(res.read().decode("utf-8"))
- assert "uuid" in manifest
-
- if not skip_run_process:
- # Confirm the process is still running
- assert process.poll() is None
-
- # Shut down
- process.terminate()
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument("--dist_dir", type=Path, default=Path("dist/"))
- parser.add_argument("--skip_run_process", action="store_true")
- args = parser.parse_args()
- test_release_build(dist_dir=args.dist_dir, skip_run_process=args.skip_run_process)
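The smoke test above drives a locally running engine over HTTP (version, audio_query, synthesis). A hedged sketch of the first two calls against an engine assumed to be already listening on 127.0.0.1:50021:

```python
# Hedged sketch: the same /version and /audio_query checks as above, against an
# engine that is assumed to be already running on 127.0.0.1:50021.
import json
from urllib.parse import urlencode
from urllib.request import Request, urlopen

base_url = "http://127.0.0.1:50021/"

with urlopen(Request(base_url + "version")) as res:
    print("engine version:", res.read().decode("utf-8"))

params = urlencode({"speaker": "1", "text": "test"})
with urlopen(Request(base_url + "audio_query?" + params, method="POST")) as res:
    query = json.loads(res.read().decode("utf-8"))
    print("query keys:", sorted(query)[:5])
```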
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/gn/mask_rcnn_r101_fpn_gn-all_2x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/gn/mask_rcnn_r101_fpn_gn-all_2x_coco.py
deleted file mode 100644
index 0fcc558018b69beedbd05781163c8043d93f7277..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/gn/mask_rcnn_r101_fpn_gn-all_2x_coco.py
+++ /dev/null
@@ -1,3 +0,0 @@
-_base_ = './mask_rcnn_r50_fpn_gn-all_2x_coco.py'
-model = dict(
- pretrained='open-mmlab://detectron/resnet101_gn', backbone=dict(depth=101))
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/unet/pspnet_unet_s5-d16_128x128_40k_stare.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/unet/pspnet_unet_s5-d16_128x128_40k_stare.py
deleted file mode 100644
index 9d729cea699e1c845549c74b52703c9ee3273662..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/unet/pspnet_unet_s5-d16_128x128_40k_stare.py
+++ /dev/null
@@ -1,6 +0,0 @@
-_base_ = [
- '../_base_/models/pspnet_unet_s5-d16.py', '../_base_/datasets/stare.py',
- '../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py'
-]
-model = dict(test_cfg=dict(crop_size=(128, 128), stride=(85, 85)))
-evaluation = dict(metric='mDice')
diff --git a/spaces/HOLYBOY/Customer_Churn_App/README.md b/spaces/HOLYBOY/Customer_Churn_App/README.md
deleted file mode 100644
index c94382d5c4f9be190cfac63807056ba5bd688fa3..0000000000000000000000000000000000000000
--- a/spaces/HOLYBOY/Customer_Churn_App/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Customer Churn App
-emoji: ⚡
-colorFrom: indigo
-colorTo: indigo
-sdk: streamlit
-sdk_version: 1.26.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/optim/amp_optimizer.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/optim/amp_optimizer.py
deleted file mode 100644
index 3b7958e50ce444474c48d1f5aeff05d66c19e5b6..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/optim/amp_optimizer.py
+++ /dev/null
@@ -1,105 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-
-import torch
-from fairseq import optim
-from omegaconf import DictConfig
-
-logger = logging.getLogger(__name__)
-
-
-class AMPOptimizer(optim.FairseqOptimizer):
- """
- Wrap an *optimizer* to support AMP (automatic mixed precision) training.
- """
-
- def __init__(self, cfg: DictConfig, params, fp32_optimizer, **kwargs):
- super().__init__(cfg.optimizer)
- self.fp32_optimizer = fp32_optimizer
- amp_kwargs = {"init_scale": cfg.common.fp16_init_scale}
- if getattr(cfg.common, "amp_scale_window", None) is not None:
- amp_kwargs["growth_interval"] = cfg.common.amp_init_scale
- self._grad_scaler = torch.cuda.amp.GradScaler(**amp_kwargs)
- self.min_loss_scale = cfg.common.min_loss_scale
-
- @classmethod
- def build_optimizer(cls, cfg: DictConfig, params, **kwargs):
- """
- Args:
- cfg (omegaconf.DictConfig): fairseq args
- params (iterable): iterable of parameters to optimize
- """
- fp32_optimizer = optim.build_optimizer(cfg.optimizer, params)
- return cls(cfg, params, fp32_optimizer, **kwargs)
-
- def backward(self, loss):
- """Computes the sum of gradients of the given tensor w.r.t. graph leaves.
-
- Compared to :func:`fairseq.optim.FairseqOptimizer.backward`, this
- function additionally dynamically scales the loss to avoid gradient
- underflow.
- """
- self._grad_scaler.scale(loss).backward()
-
- def step(self):
- self.scaler.step(self.fp32_optimizer)
- self.scaler.update()
-
- def clip_grad_norm(self, max_norm, aggregate_norm_fn=None):
- """Clips gradient norm."""
- self.scaler.unscale_(self.optimizer)
- grad_norm = self.fp32_optimizer.clip_grad_norm(max_norm, aggregate_norm_fn)
- if not torch.isfinite(grad_norm).all():
- new_loss_scale = self.next_loss_scale
- if new_loss_scale <= self.min_loss_scale:
- raise FloatingPointError(
- (
- "AMP: Minimum loss scale reached ({}). Your loss is probably exploding. "
- "Try restarting training or use fp32. {}"
- ).format(self.min_loss_scale, new_loss_scale)
- )
- else:
- logger.info("AMP: overflow detected, setting scale to "
- f"to {new_loss_scale}")
- return grad_norm
-
- @property
- def scaler(self):
- return self._grad_scaler
-
- @property
- def next_loss_scale(self):
- return self.scaler.get_scale() * self.scaler.get_backoff_factor()
-
- @property
- def optimizer(self):
- return self.fp32_optimizer.optimizer
-
- @optimizer.setter
- def optimizer(self, optimizer):
- self.fp32_optimizer.optimizer = optimizer
-
- @property
- def lr_scheduler(self):
- return getattr(self.fp32_optimizer, "lr_scheduler", None)
-
- @property
- def optimizer_config(self):
- return self.fp32_optimizer.optimizer_config
-
- def get_lr(self):
- return self.fp32_optimizer.get_lr()
-
- def set_lr(self, lr):
- self.fp32_optimizer.set_lr(lr)
-
- def all_reduce_grads(self, module):
- self.fp32_optimizer.all_reduce_grads(module)
-
- @property
- def supports_flat_params(self):
- return self.fp32_optimizer.supports_flat_params
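AMPOptimizer above is a thin wrapper over torch.cuda.amp.GradScaler: backward() scales the loss, clip_grad_norm() unscales first, and step()/update() adjust the loss scale. A hedged sketch of the same cycle in plain PyTorch; the model, data, and hyperparameters are made up, and a CUDA device is assumed:

```python
# Hedged sketch: the GradScaler cycle that AMPOptimizer wraps, in plain PyTorch.
# Model, data and hyperparameters are made up; assumes a CUDA device is available.
import torch

model = torch.nn.Linear(16, 4).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = torch.cuda.amp.GradScaler(init_scale=128.0)

x = torch.randn(8, 16, device="cuda")
target = torch.randn(8, 4, device="cuda")

with torch.cuda.amp.autocast():
    loss = torch.nn.functional.mse_loss(model(x), target)

scaler.scale(loss).backward()        # AMPOptimizer.backward()
scaler.unscale_(optimizer)           # clip_grad_norm() unscales first
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
scaler.step(optimizer)               # AMPOptimizer.step()
scaler.update()                      # grows/shrinks the loss scale
```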
diff --git a/spaces/Harveenchadha/en_to_indic_translation/apply_bpe_traindevtest_notag.sh b/spaces/Harveenchadha/en_to_indic_translation/apply_bpe_traindevtest_notag.sh
deleted file mode 100644
index a3bd22677f2d9082f82052a1831139ea3d855cd5..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/en_to_indic_translation/apply_bpe_traindevtest_notag.sh
+++ /dev/null
@@ -1,41 +0,0 @@
-#!/bin/bash
-
-expdir=$1 # EXPDIR
-
-SUBWORD_NMT_DIR="subword-nmt"
-
-data_dir="$expdir/data"
-mkdir -p $expdir/bpe
-
-for dset in `echo train dev test`
-do
- echo $dset
- in_dset_dir="$data_dir/$dset"
- out_dset_dir="$expdir/bpe/$dset"
- # out_dset_dir="$expdir/final/$dset"
- echo "Apply joint vocab to SRC corpus"
- # for very large datasets, use gnu-parallel to speed up applying bpe
- # uncomment the below line if the apply bpe is slow
-
- # parallel --pipe --keep-order \
- python $SUBWORD_NMT_DIR/subword_nmt/apply_bpe.py \
- -c $expdir/vocab/bpe_codes.32k.SRC_TGT \
- --vocabulary $expdir/vocab/vocab.SRC \
- --vocabulary-threshold 5 \
- --num-workers "-1" \
- < $in_dset_dir.SRC \
- > $out_dset_dir.SRC
- echo "Apply joint vocab to TGT corpus"
-
- # for very large datasets, use gnu-parallel to speed up applying bpe
- # uncomment the below line if the apply bpe is slow
-
- # parallel --pipe --keep-order \
- python $SUBWORD_NMT_DIR/subword_nmt/apply_bpe.py \
- -c $expdir/vocab/bpe_codes.32k.SRC_TGT \
- --vocabulary $expdir/vocab/vocab.TGT \
- --vocabulary-threshold 5 \
- --num-workers "-1" \
- < $in_dset_dir.TGT \
- > $out_dset_dir.TGT
-done
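The shell loop above applies a joint BPE code file with a per-language vocabulary filter to every split. A hedged sketch of the same operation on a single line through subword-nmt's Python API; the codes/vocab paths mirror the script's layout and are assumptions:

```python
# Hedged sketch: what the shell loop above does to each line, via subword-nmt's
# Python API. The codes/vocab paths mirror the script's layout and are assumptions.
from subword_nmt.apply_bpe import BPE, read_vocabulary

with open("expdir/vocab/vocab.SRC") as vf:
    vocab = read_vocabulary(vf, 5)             # same --vocabulary-threshold 5
with open("expdir/vocab/bpe_codes.32k.SRC_TGT") as cf:
    bpe = BPE(cf, vocab=vocab)

print(bpe.process_line("this is a test sentence"))
```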
diff --git a/spaces/Heckeroo/Cyberpunk-Anime-Diffusion/app.py b/spaces/Heckeroo/Cyberpunk-Anime-Diffusion/app.py
deleted file mode 100644
index c7cca2477c1da5a9e1e93ea27b5e00aa2158ca96..0000000000000000000000000000000000000000
--- a/spaces/Heckeroo/Cyberpunk-Anime-Diffusion/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/DGSpitzer/Cyberpunk-Anime-Diffusion").launch()
\ No newline at end of file
diff --git a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/templates/frontend/assets/index.9578e2e6.js b/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/templates/frontend/assets/index.9578e2e6.js
deleted file mode 100644
index acd5d5d4e98304997f17adb5c58649d72b523cd2..0000000000000000000000000000000000000000
--- a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/templates/frontend/assets/index.9578e2e6.js
+++ /dev/null
@@ -1,2 +0,0 @@
-import{S as r,i as h,s as g,p as c,e as d,b as o,d as f,f as q,u as v,q as b,r as w,j as C,k as R,n as S}from"./index.396f4a72.js";function j(s){let e,a;const u=s[5].default,t=c(u,s,s[4],null);return{c(){e=d("div"),t&&t.c(),o(e,"class","flex row w-full flex-wrap gap-4"),o(e,"id",s[1]),f(e,"gr-compact",s[3]==="compact"),f(e,"gr-panel",s[3]==="panel"),f(e,"unequal-height",s[0].equal_height===!1),f(e,"items-stretch",s[0].equal_height),f(e,"!hidden",!s[2])},m(l,i){q(l,e,i),t&&t.m(e,null),a=!0},p(l,[i]){t&&t.p&&(!a||i&16)&&v(t,u,l,l[4],a?w(u,l[4],i,null):b(l[4]),null),(!a||i&2)&&o(e,"id",l[1]),i&8&&f(e,"gr-compact",l[3]==="compact"),i&8&&f(e,"gr-panel",l[3]==="panel"),i&1&&f(e,"unequal-height",l[0].equal_height===!1),i&1&&f(e,"items-stretch",l[0].equal_height),i&4&&f(e,"!hidden",!l[2])},i(l){a||(C(t,l),a=!0)},o(l){R(t,l),a=!1},d(l){l&&S(e),t&&t.d(l)}}}function k(s,e,a){let{$$slots:u={},$$scope:t}=e,{style:l={}}=e,{elem_id:i}=e,{visible:_=!0}=e,{variant:m="default"}=e;return s.$$set=n=>{"style"in n&&a(0,l=n.style),"elem_id"in n&&a(1,i=n.elem_id),"visible"in n&&a(2,_=n.visible),"variant"in n&&a(3,m=n.variant),"$$scope"in n&&a(4,t=n.$$scope)},[l,i,_,m,t,u]}class z extends r{constructor(e){super(),h(this,e,k,j,g,{style:0,elem_id:1,visible:2,variant:3})}}var B=z;const D=["static"];export{B as Component,D as modes};
-//# sourceMappingURL=index.9578e2e6.js.map
diff --git a/spaces/HoangHa/IELTS_Speaking_GPT/app.py b/spaces/HoangHa/IELTS_Speaking_GPT/app.py
deleted file mode 100644
index ee7cb52a8182ea09335ecb132adc6a3690e6a11a..0000000000000000000000000000000000000000
--- a/spaces/HoangHa/IELTS_Speaking_GPT/app.py
+++ /dev/null
@@ -1,238 +0,0 @@
-# Library
-import openai
-import streamlit as st
-import pandas as pd
-from datetime import datetime
-from TTS.api import TTS
-import whisper
-from audio_recorder import record
-
-# Custom Streamlit app title and icon
-st.set_page_config(
- page_title="IELTS Speaking",
- page_icon=":robot_face:",
-)
-
-# Set the title
-st.title("Part 1 Speaking")
-
-# Sidebar Configuration
-st.sidebar.title(":gear: Model Configuration")
-
-# Toggle for API activation
-api_toggle = st.sidebar.toggle("Activate free API")
-
-# Define an empty API key
-openai_key = ""
-
-# Check if the toggle is on
-if api_toggle:
- # If the toggle is on, access the API key from secrets
- openai_key = st.secrets["OPENAI_API_KEY"]
- openai.api_key = openai_key
-else:
- # If the toggle is off, allow the user to input the API key
- openai_key = st.sidebar.text_input('Your OpenAI API key here:', value="")
- openai.api_key = openai_key
-
-# User Input and AI Response
-user_input_type = st.sidebar.selectbox("Choose input type:", ["Chat", "Record Audio"])
-
-# Model Name Selector
-model_name = st.sidebar.selectbox(
- "Select a Model",
- ["gpt-3.5-turbo", "gpt-4"], # Add more model names as needed
- key="model_name",
-)
-
-# Temperature Slider
-temperature = st.sidebar.slider(
- ":thermometer: Temperature",
- min_value=0.2,
- max_value=2.0,
- value=1.0,
- step=0.1,
- key="temperature",
-)
-
-# Max tokens Slider
-max_tokens = st.sidebar.slider(
- ":straight_ruler: Max Tokens",
- min_value=1,
- max_value=4095,
- value=256,
- step=1,
- key="max_tokens",
-)
-
-# Top p Slider
-# top_p = st.sidebar.slider(
-# "🎯 Top P",
-# min_value=0.00,
-# max_value=1.00,
-# value=1.00,
-# step=0.01,
-# key="top_p",
-# )
-
-# Presence penalty Slider
-# presence_penalty = st.sidebar.slider(
-# "🚫 Presence penalty",
-# min_value=0.00,
-# max_value=2.00,
-# value=0.00,
-# step=0.01,
-# key="presence_penalty",
-# )
-
-# Frequency penalty Slider
-# frequency_penalty = st.sidebar.slider(
-# "🤐 Frequency penalty",
-# min_value=0.00,
-# max_value=2.00,
-# value=0.00,
-# step=0.01,
-# key="frequency_penalty",
-# )
-
-# TEXT2SPEECH MODEL
-# Instantiate the TTS class
-tts = TTS(TTS().list_models()[13])
-def convert_2_speech(given_text):
- tts.tts_to_file(text=given_text, file_path="response.wav")
- return("response.wav")
-
-# SPEECH2TEXT MODEL
-model_whisper = whisper.load_model("tiny.en")
-def convert_2_text(speech):
- user_message = model_whisper.transcribe(speech)["text"]
- return user_message
-
-# CHAT MODEL
-# Initialize DataFrame to store chat history
-chat_history_df = pd.DataFrame(columns=["Timestamp", "Chat"])
-
-# Reset Button
-if st.sidebar.button(":arrows_counterclockwise: Reset Chat"):
- # Save the chat history to the DataFrame before clearing it
- if st.session_state.messages:
- timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
- chat_history = "\n".join([f"{m['role']}: {m['content']}" for m in st.session_state.messages])
- new_entry = pd.DataFrame({"Timestamp": [timestamp], "Chat": [chat_history]})
- chat_history_df = pd.concat([chat_history_df, new_entry], ignore_index=True)
-
- # Save the DataFrame to a CSV file
- chat_history_df.to_csv("chat_history.csv", index=False)
-
- # Clear the chat messages and reset the full response
- st.session_state.messages = []
- full_response = ""
-
-# Initialize Chat Messages
-if "messages" not in st.session_state:
- st.session_state.messages = []
-
-# Initialize full_response outside the user input check
-full_response = ""
-
-# Display Chat History
-for message in st.session_state.messages:
- if message["role"] != "system":
- with st.chat_message(message["role"]):
- st.markdown(message["content"])
-
-system_text="""As a helpful, thoughtful, and wise IELTS instructor responsible for testing Speaking Part 1. The users will provide the {subject} they want to talk about.
-It's important to follow these guidelines:
-- Give only original question for provided {subject}.
-- Give one question at a time.
-For example:
-{subject}: Work
-What is your job?
-Where do you work?
-{subject}: Study
-What do you study?
-Where do you study that?
-{subject}: Hometown
-Do you live in a house or a flat?
-How are the walls decorated?
-Let's start the test."""
-
-# User Input and AI Response
-# For "Chat mode"
-# Use st.toggle to allow users to choose input type
-# record_audio_input = st.toggle("Record Audio Input", value=False) # for toggle only
-
-if user_input_type == "Chat":
-# if not record_audio_input: # for toggle only
- if prompt := st.chat_input("What is up?"):
- # System
- st.session_state.messages.append({"role": "system", "content": system_text})
-
- # User
- st.session_state.messages.append({"role": "user", "content": prompt})
- with st.chat_message("user"):
- st.markdown(prompt)
-
- # Assistant
- with st.chat_message("assistant"):
- with st.status("Generating response..."):
- message_placeholder = st.empty()
- for response in openai.ChatCompletion.create(
- model=model_name, # Use the selected model name
- messages=[
- {"role": m["role"], "content": m["content"]}
- for m in st.session_state.messages
- ],
- temperature=temperature, # Set temperature
- max_tokens=max_tokens, # Set max tokens
- # top_p=top_p, # Set top p
- # frequency_penalty=frequency_penalty, # Set frequency penalty
- # presence_penalty=presence_penalty, # Set presence penalty
- stream=True,
- ):
- full_response += response.choices[0].delta.get("content", "")
- message_placeholder.markdown(full_response + "▌")
- message_placeholder.markdown(full_response)
-
- st.session_state.messages.append({"role": "assistant", "content": full_response})
- st.audio(convert_2_speech(full_response))
-
-elif user_input_type == "Record Audio":
-# else: # for toggle only
- # Record audio when the "Record Audio" button is clicked
- if st.button("Record Audio"):
- st.write("Recording... Please speak for 10 seconds.")
- output = record(seconds=10, filename='my_recording.wav')
- st.write("Recording complete!")
-
- # Convert the recorded audio to text using the Whisper model
- user_message = convert_2_text(output)
-
- # Display the transcribed text as user input
- st.session_state.messages.append({"role": "user", "content": user_message})
- with st.chat_message("user"):
- st.markdown(user_message)
-
- # Assistant
- with st.chat_message("assistant"):
- with st.status("Generating response..."):
- message_placeholder = st.empty()
- for response in openai.ChatCompletion.create(
- model=model_name, # Use the selected model name
- messages=[
- {"role": m["role"], "content": m["content"]}
- for m in st.session_state.messages
- ],
- temperature=temperature, # Set temperature
- max_tokens=max_tokens, # Set max tokens
- # top_p=top_p, # Set top p
- # frequency_penalty=frequency_penalty, # Set frequency penalty
- # presence_penalty=presence_penalty, # Set presence penalty
- stream=True,
- ):
- full_response += response.choices[0].delta.get("content", "")
- message_placeholder.markdown(full_response + "▌")
- message_placeholder.markdown(full_response)
-
- st.session_state.messages.append({"role": "assistant", "content": full_response})
- st.audio(convert_2_speech(full_response))
\ No newline at end of file
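Both input branches above end in the same streamed ChatCompletion loop that accumulates delta chunks into full_response. A hedged, Streamlit-free sketch of that loop using the legacy openai<1.0 SDK the app imports; the API key and prompt are placeholders:

```python
# Hedged sketch: the streamed chat loop used in both input branches above,
# without Streamlit. Assumes the legacy openai<1.0 SDK that the app imports;
# the API key and prompt are placeholders.
import openai

openai.api_key = "sk-..."  # placeholder

messages = [
    {"role": "system", "content": "You are an IELTS Speaking Part 1 examiner."},
    {"role": "user", "content": "Work"},
]

full_response = ""
for chunk in openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=messages,
    temperature=1.0,
    max_tokens=256,
    stream=True,
):
    full_response += chunk.choices[0].delta.get("content", "")

print(full_response)
```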
diff --git a/spaces/ICML2022/OFA/fairseq/examples/wav2vec/vq-wav2vec_featurize.py b/spaces/ICML2022/OFA/fairseq/examples/wav2vec/vq-wav2vec_featurize.py
deleted file mode 100644
index 627072ee174c22831209e00984b945eb9dc2c279..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/examples/wav2vec/vq-wav2vec_featurize.py
+++ /dev/null
@@ -1,250 +0,0 @@
-#!/usr/bin/env python3
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Helper script to pre-compute embeddings for a flashlight (previously called wav2letter++) dataset
-"""
-
-import argparse
-import glob
-import os
-import os.path as osp
-import pprint
-
-import soundfile as sf
-import torch
-import fairseq
-from torch import nn
-from torch.utils.data import DataLoader
-
-
-try:
- import tqdm
-except:
- print("Install tqdm to use --log-format=tqdm")
-
-
-class FilesDataset:
- def __init__(self, files, labels):
- self.files = files
- if labels and osp.exists(labels):
- with open(labels, "r") as lbl_f:
- self.labels = [line.rstrip() for line in lbl_f]
- else:
- self.labels = labels
-
- def __len__(self):
- return len(self.files)
-
- def __getitem__(self, index):
- fname = self.files[index]
-
- wav, sr = sf.read(fname)
- assert sr == 16000
-
- wav = torch.from_numpy(wav).float()
- lbls = None
- if self.labels:
- if isinstance(self.labels, str):
- lbl_file = osp.splitext(fname)[0] + "." + self.labels
- with open(lbl_file, "r") as lblf:
- lbls = lblf.readline()
- assert lbls is not None
- else:
- lbls = self.labels[index]
- return wav, lbls
-
- def collate(self, batch):
- return batch
-
-
-class ArgTypes:
- @staticmethod
- def existing_path(arg):
- arg = str(arg)
- assert osp.exists(arg), f"File {arg} does not exist"
- return arg
-
- @staticmethod
- def mkdir(arg):
- arg = str(arg)
- os.makedirs(arg, exist_ok=True)
- return arg
-
-
-class DatasetWriter:
- def __init__(self):
-
- self.args = self.load_config()
- pprint.pprint(self.args.__dict__)
-
- self.model = self.load_model()
-
- def __getattr__(self, attr):
- return getattr(self.args, attr)
-
- def read_manifest(self, fname):
-
- with open(fname, "r") as fp:
- lines = fp.read().split("\n")
- root = lines.pop(0).strip()
- fnames = [
- osp.join(root, line.split("\t")[0]) for line in lines if len(line) > 0
- ]
-
- return fnames
-
- def process_splits(self):
-
- if self.args.shard is not None or self.args.num_shards is not None:
- assert self.args.shard is not None and self.args.num_shards is not None
-
- for split in self.splits:
- print(split)
-
- if self.extension == "tsv":
- datadir = osp.join(self.data_dir, f"{split}.{self.extension}")
- print("Reading manifest file: ", datadir)
- files = self.read_manifest(datadir)
- else:
- datadir = osp.join(self.data_dir, split, f"**/*.{self.extension}")
- files = glob.glob(datadir, recursive=True)
-
- assert len(files) > 0
-
- if self.args.shard is not None:
- files = files[self.args.shard :: self.args.num_shards]
-
- lbls = []
- with open(self.data_file(split), "w") as srcf:
- for line, lbl in self.iterate(files):
- print(line, file=srcf)
- if self.args.labels:
- lbls.append(lbl + "\n")
-
- if self.args.labels:
- assert all(a is not None for a in lbls)
- with open(self.lbl_file(split), "w") as lblf:
- lblf.writelines(lbls)
-
- def iterate(self, files):
-
- data = self.load_data(files)
- for samples in tqdm.tqdm(data, total=len(files) // 32):
-
- for wav, lbl in samples:
- x = wav.unsqueeze(0).float().cuda()
-
- div = 1
- while x.size(-1) // div > self.args.max_size:
- div += 1
-
- xs = x.chunk(div, dim=-1)
-
- result = []
- for x in xs:
- torch.cuda.empty_cache()
- x = self.model.feature_extractor(x)
- if self.quantize_location == "encoder":
- with torch.no_grad():
- _, idx = self.model.vector_quantizer.forward_idx(x)
- idx = idx.squeeze(0).cpu()
- else:
- with torch.no_grad():
- z = self.model.feature_aggregator(x)
- _, idx = self.model.vector_quantizer.forward_idx(z)
- idx = idx.squeeze(0).cpu()
- result.append(idx)
-
- idx = torch.cat(result, dim=0)
- yield " ".join("-".join(map(str, a.tolist())) for a in idx), lbl
-
- def lbl_file(self, name):
- shard_part = "" if self.args.shard is None else f".{self.args.shard}"
- return osp.join(self.output_dir, f"{name}.lbl{shard_part}")
-
- def data_file(self, name):
- shard_part = "" if self.args.shard is None else f".{self.args.shard}"
- return osp.join(self.output_dir, f"{name}.src{shard_part}")
-
- def var_file(self):
- return osp.join(self.output_dir, f"vars.pt")
-
- def load_config(self):
-
- parser = argparse.ArgumentParser("Vector Quantized wav2vec features")
-
- # Model Arguments
- parser.add_argument("--checkpoint", type=ArgTypes.existing_path, required=True)
- parser.add_argument("--data-parallel", action="store_true")
-
- # Output Arguments
- parser.add_argument("--output-dir", type=ArgTypes.mkdir, required=True)
-
- # Data Arguments
- parser.add_argument("--data-dir", type=ArgTypes.existing_path, required=True)
- parser.add_argument("--splits", type=str, nargs="+", required=True)
- parser.add_argument("--extension", type=str, required=True)
- parser.add_argument("--labels", type=str, required=False)
-
- parser.add_argument("--shard", type=int, default=None)
- parser.add_argument("--num-shards", type=int, default=None)
- parser.add_argument("--max-size", type=int, default=1300000)
-
- # Logger Arguments
- parser.add_argument(
- "--log-format", type=str, choices=["none", "simple", "tqdm"]
- )
-
- return parser.parse_args()
-
- def load_data(self, fnames):
-
- dataset = FilesDataset(fnames, self.args.labels)
- loader = DataLoader(
- dataset, batch_size=32, collate_fn=dataset.collate, num_workers=8
- )
- return loader
-
- def load_model(self):
- model, cfg, task = fairseq.checkpoint_utils.load_model_ensemble_and_task([self.checkpoint])
- model = model[0]
-
- self.quantize_location = getattr(cfg.model, "vq", "encoder")
-
- model.eval().float()
- model.cuda()
-
- if self.data_parallel:
- model = nn.DataParallel(model)
-
- return model
-
- def __call__(self):
-
- self.process_splits()
-
- if hasattr(self.model.feature_extractor, "vars") and (
- self.args.shard is None or self.args.shard == 0
- ):
- vars = (
- self.model.feature_extractor.vars.view(
- self.model.feature_extractor.banks,
- self.model.feature_extractor.num_vars,
- -1,
- )
- .cpu()
- .detach()
- )
- print("writing learned latent variable embeddings: ", vars.shape)
- torch.save(vars, self.var_file())
-
-
-if __name__ == "__main__":
- write_data = DatasetWriter()
-
- write_data()
- print("Done.")
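The featurization script above shards its workload with a simple strided slice (files[shard::num_shards]) and writes one "{split}.src.{shard}" / "{split}.lbl.{shard}" pair per shard. A minimal sketch of that sharding scheme, using a hypothetical file list (not part of the original script):

# Hypothetical illustration of the strided sharding used in process_splits() above.
files = [f"utt_{i}.wav" for i in range(10)]          # placeholder file names
num_shards = 3
shards = [files[k::num_shards] for k in range(num_shards)]
# shards[0] -> ['utt_0.wav', 'utt_3.wav', 'utt_6.wav', 'utt_9.wav']
# shards[1] -> ['utt_1.wav', 'utt_4.wav', 'utt_7.wav']
# shards[2] -> ['utt_2.wav', 'utt_5.wav', 'utt_8.wav']
# Each shard's output file is named f"{split}.src.{k}" by data_file() above.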
diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/dataclass/configs.py b/spaces/ICML2022/OFA/fairseq/fairseq/dataclass/configs.py
deleted file mode 100644
index 8e8cec92814f55a504d36f80fb79c3e0f8280eee..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/fairseq/dataclass/configs.py
+++ /dev/null
@@ -1,1058 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import sys
-from dataclasses import _MISSING_TYPE, dataclass, field
-from typing import Any, List, Optional
-
-import torch
-
-from fairseq.dataclass.constants import (
- DATASET_IMPL_CHOICES,
- DDP_BACKEND_CHOICES,
- DDP_COMM_HOOK_CHOICES,
- GENERATION_CONSTRAINTS_CHOICES,
- GENERATION_DECODING_FORMAT_CHOICES,
- LOG_FORMAT_CHOICES,
- PIPELINE_CHECKPOINT_CHOICES,
- PRINT_ALIGNMENT_CHOICES,
- ZERO_SHARDING_CHOICES,
-)
-
-from omegaconf import II, MISSING
-
-
-@dataclass
-class FairseqDataclass:
- """fairseq base dataclass that supported fetching attributes and metas"""
-
- _name: Optional[str] = None
-
- @staticmethod
- def name():
- return None
-
- def _get_all_attributes(self) -> List[str]:
- return [k for k in self.__dataclass_fields__.keys()]
-
- def _get_meta(
- self, attribute_name: str, meta: str, default: Optional[Any] = None
- ) -> Any:
- return self.__dataclass_fields__[attribute_name].metadata.get(meta, default)
-
- def _get_name(self, attribute_name: str) -> str:
- return self.__dataclass_fields__[attribute_name].name
-
- def _get_default(self, attribute_name: str) -> Any:
- if hasattr(self, attribute_name):
- if str(getattr(self, attribute_name)).startswith("${"):
- return str(getattr(self, attribute_name))
- elif str(self.__dataclass_fields__[attribute_name].default).startswith(
- "${"
- ):
- return str(self.__dataclass_fields__[attribute_name].default)
- elif (
- getattr(self, attribute_name)
- != self.__dataclass_fields__[attribute_name].default
- ):
- return getattr(self, attribute_name)
-
- f = self.__dataclass_fields__[attribute_name]
- if not isinstance(f.default_factory, _MISSING_TYPE):
- return f.default_factory()
- return f.default
-
- def _get_type(self, attribute_name: str) -> Any:
- return self.__dataclass_fields__[attribute_name].type
-
- def _get_help(self, attribute_name: str) -> Any:
- return self._get_meta(attribute_name, "help")
-
- def _get_argparse_const(self, attribute_name: str) -> Any:
- return self._get_meta(attribute_name, "argparse_const")
-
- def _get_argparse_alias(self, attribute_name: str) -> Any:
- return self._get_meta(attribute_name, "argparse_alias")
-
- def _get_choices(self, attribute_name: str) -> Any:
- return self._get_meta(attribute_name, "choices")
-
- @classmethod
- def from_namespace(cls, args):
- if isinstance(args, cls):
- return args
- else:
- config = cls()
- for k in config.__dataclass_fields__.keys():
- if k.startswith("_"):
- # private member, skip
- continue
- if hasattr(args, k):
- setattr(config, k, getattr(args, k))
-
- return config
-
-
-
-@dataclass
-class CommonConfig(FairseqDataclass):
- # This is the core dataclass including common parameters shared by all different jobs. Please append your params to other dataclasses if they were
- # used for a particular purpose or task, such as those dedicated for `distributed training`, `optimization`, etc.
- no_progress_bar: bool = field(
- default=False, metadata={"help": "disable progress bar"}
- )
- log_interval: int = field(
- default=100,
- metadata={
- "help": "log progress every N batches (when progress bar is disabled)"
- },
- )
- log_format: Optional[LOG_FORMAT_CHOICES] = field(
- default=None, metadata={"help": "log format to use"}
- )
- log_file: Optional[str] = field(
- default=None, metadata={"help": "log file to copy metrics to."}
- )
- tensorboard_logdir: Optional[str] = field(
- default=None,
- metadata={
- "help": "path to save logs for tensorboard, should match --logdir "
- "of running tensorboard (default: no tensorboard logging)"
- },
- )
- wandb_project: Optional[str] = field(
- default=None,
- metadata={"help": "Weights and Biases project name to use for logging"},
- )
- azureml_logging: Optional[bool] = field(
- default=False, metadata={"help": "Log scalars to AzureML context"},
- )
- seed: int = field(
- default=1, metadata={"help": "pseudo random number generator seed"}
- )
- cpu: bool = field(default=False, metadata={"help": "use CPU instead of CUDA"})
- tpu: bool = field(default=False, metadata={"help": "use TPU instead of CUDA"})
- bf16: bool = field(default=False, metadata={"help": "use bfloat16; implies --tpu"})
- memory_efficient_bf16: bool = field(
- default=False,
- metadata={
- "help": "use a memory-efficient version of BF16 training; implies --bf16"
- },
- )
- fp16: bool = field(default=False, metadata={"help": "use FP16"})
- memory_efficient_fp16: bool = field(
- default=False,
- metadata={
- "help": "use a memory-efficient version of FP16 training; implies --fp16"
- },
- )
- fp16_no_flatten_grads: bool = field(
- default=False, metadata={"help": "don't flatten FP16 grads tensor"}
- )
- fp16_init_scale: int = field(
- default=2 ** 7, metadata={"help": "default FP16 loss scale"}
- )
- fp16_scale_window: Optional[int] = field(
- default=None,
- metadata={"help": "number of updates before increasing loss scale"},
- )
- fp16_scale_tolerance: float = field(
- default=0.0,
- metadata={
- "help": "pct of updates that can overflow before decreasing the loss scale"
- },
- )
- on_cpu_convert_precision: bool = field(
- default=False,
- metadata={
- "help": "if set, the floating point conversion to fp16/bf16 runs on CPU. "
- "This reduces bus transfer time and GPU memory usage."
- }
- )
- min_loss_scale: float = field(
- default=1e-4,
- metadata={"help": "minimum FP16/AMP loss scale, after which training is stopped"},
- )
- threshold_loss_scale: Optional[float] = field(
- default=None, metadata={"help": "threshold FP16 loss scale from below"}
- )
- amp: bool = field(default=False, metadata={"help": "use automatic mixed precision"})
- amp_batch_retries: int = field(
- default=2,
- metadata={"help": "number of retries of same batch after reducing loss scale with AMP"},
- )
- amp_init_scale: int = field(
- default=2 ** 7, metadata={"help": "default AMP loss scale"}
- )
- amp_scale_window: Optional[int] = field(
- default=None,
- metadata={"help": "number of updates before increasing AMP loss scale"},
- )
- user_dir: Optional[str] = field(
- default=None,
- metadata={
- "help": "path to a python module containing custom extensions (tasks and/or architectures)"
- },
- )
- empty_cache_freq: int = field(
- default=0,
- metadata={"help": "how often to clear the PyTorch CUDA cache (0 to disable)"},
- )
- all_gather_list_size: int = field(
- default=16384,
- metadata={"help": "number of bytes reserved for gathering stats from workers"},
- )
- model_parallel_size: int = field(
- default=1, metadata={"help": "total number of GPUs to parallelize model over"}
- )
- quantization_config_path: Optional[str] = field(
- default=None, metadata={"help": "path to quantization config file"}
- )
- profile: bool = field(
- default=False, metadata={"help": "enable autograd profiler emit_nvtx"}
- )
- reset_logging: bool = field(
- default=False,
- metadata={
- "help": "when using Hydra, reset the logging at the beginning of training"
- },
- )
- suppress_crashes: bool = field(
- default=False,
- metadata={
- "help": "suppress crashes when training with the hydra_train entry point so that the "
- "main method can return a value (useful for sweeps)"
- },
- )
- use_plasma_view: bool = field(
- default=False, metadata={"help": "Store indices and sizes in shared memory"}
- )
- plasma_path: Optional[str] = field(
- default="/tmp/plasma",
- metadata={
- "help": "path to run plasma_store, defaults to /tmp/plasma. Paths outside /tmp tend to fail."
- },
- )
-
-
-@dataclass
-class DistributedTrainingConfig(FairseqDataclass):
- distributed_world_size: int = field(
- default=max(1, torch.cuda.device_count()),
- metadata={
- "help": "total number of GPUs across all nodes (default: all visible GPUs)"
- },
- )
- distributed_num_procs: Optional[int] = field(
- default=max(1, torch.cuda.device_count()),
- metadata={
- "help": "total number of processes to fork (default: all visible GPUs)"
- },
- )
- distributed_rank: Optional[int] = field(
- default=0, metadata={"help": "rank of the current worker"}
- )
- distributed_backend: str = field(
- default="nccl", metadata={"help": "distributed backend"}
- )
- distributed_init_method: Optional[str] = field(
- default=None,
- metadata={
- "help": "typically tcp://hostname:port that will be used to "
-            "establish initial connection"
- },
- )
- distributed_port: int = field(
- default=-1,
- metadata={
- "help": "port number (not required if using --distributed-init-method)"
- },
- )
- device_id: int = field(
- default=0,
- metadata={
- "help": "which GPU to use (usually configured automatically)",
- "argparse_alias": "--local_rank",
- },
- )
- distributed_no_spawn: bool = field(
- default=False,
- metadata={
- "help": "do not spawn multiple processes even if multiple GPUs are visible"
- },
- )
- ddp_backend: DDP_BACKEND_CHOICES = field(
- default="pytorch_ddp", metadata={"help": "DistributedDataParallel backend"}
- )
- ddp_comm_hook: DDP_COMM_HOOK_CHOICES = field(
- default="none", metadata={"help": "communication hook"}
- )
- bucket_cap_mb: int = field(
- default=25, metadata={"help": "bucket size for reduction"}
- )
- fix_batches_to_gpus: bool = field(
- default=False,
- metadata={
- "help": "don't shuffle batches between GPUs; this reduces overall "
- "randomness and may affect precision but avoids the cost of re-reading the data"
- },
- )
- find_unused_parameters: bool = field(
- default=False,
- metadata={
- "help": "disable unused parameter detection (not applicable to "
- "--ddp-backend=legacy_ddp)"
- },
- )
- gradient_as_bucket_view: bool = field(
- default=False,
- metadata={
-            "help": "when set to True, gradients will be views pointing to different "
-            "offsets of allreduce communication buckets. This can reduce peak memory "
-            "usage, where the saved memory size will be equal to the total gradients size."
- },
- )
- fast_stat_sync: bool = field(
- default=False,
- metadata={"help": "[deprecated] this is now defined per Criterion"},
- )
- heartbeat_timeout: int = field(
- default=-1,
- metadata={
- "help": "kill the job if no progress is made in N seconds; "
- "set to -1 to disable"
- },
- )
- broadcast_buffers: bool = field(
- default=False,
- metadata={
- "help": "Copy non-trainable parameters between GPUs, such as "
- "batchnorm population statistics"
- },
- )
- slowmo_momentum: Optional[float] = field(
- default=None,
- metadata={
- "help": "SlowMo momentum term; by default use 0.0 for 16 GPUs, "
- "0.2 for 32 GPUs; 0.5 for 64 GPUs, 0.6 for > 64 GPUs"
- },
- )
- slowmo_algorithm: str = field(
- default="LocalSGD", metadata={"help": "whether to use LocalSGD or SGP"}
- )
- localsgd_frequency: int = field(
- default=3, metadata={"help": "Local SGD allreduce frequency"}
- )
- nprocs_per_node: int = field(
- default=max(1, torch.cuda.device_count()),
- metadata={
- "help": "number of GPUs in each node. An allreduce operation across GPUs in "
- "a node is very fast. Hence, we do allreduce across GPUs in a node, "
- "and gossip across different nodes"
- },
- )
- pipeline_model_parallel: bool = field(
- default=False,
- metadata={"help": "if set, use pipeline model parallelism across GPUs"},
- )
- pipeline_balance: Optional[str] = field(
- default=None,
- metadata={
- "help": "partition the model into N_K pieces, where each piece "
- "contains N_i layers. The sum(args.pipeline_balance) "
- "should equal the total number of layers in the model"
- },
- )
- pipeline_devices: Optional[str] = field(
- default=None,
- metadata={
- "help": "a list of device indices indicating which device to place "
- "each of the N_K partitions. The length of this list should "
- "equal the length of the --pipeline-balance argument"
- },
- )
- pipeline_chunks: Optional[int] = field(
- default=0, metadata={"help": "microbatch count for pipeline model parallelism"}
- )
- pipeline_encoder_balance: Optional[str] = field(
- default=None,
- metadata={
- "help": "partition the pipeline parallel encoder into N_K pieces, where each piece "
- "contains N_i layers. The sum(args.pipeline_encoder_balance) "
- "should equal the total number of encoder layers in the model"
- },
- )
- pipeline_encoder_devices: Optional[str] = field(
- default=None,
- metadata={
- "help": "a list of device indices indicating which device to place "
- "each of the N_K partitions. The length of this list should "
- "equal the length of the --pipeline-encoder-balance argument"
- },
- )
- pipeline_decoder_balance: Optional[str] = field(
- default=None,
- metadata={
- "help": "partition the pipeline parallel decoder into N_K pieces, where each piece "
- "contains N_i layers. The sum(args.pipeline_decoder_balance) "
- "should equal the total number of decoder layers in the model"
- },
- )
- pipeline_decoder_devices: Optional[str] = field(
- default=None,
- metadata={
- "help": "a list of device indices indicating which device to place "
- "each of the N_K partitions. The length of this list should "
- "equal the length of the --pipeline-decoder-balance argument"
- },
- )
- pipeline_checkpoint: PIPELINE_CHECKPOINT_CHOICES = field(
- default="never",
- metadata={"help": "checkpointing mode for pipeline model parallelism"},
- )
- zero_sharding: ZERO_SHARDING_CHOICES = field(
- default="none", metadata={"help": "ZeRO sharding"}
- )
- fp16: bool = II("common.fp16")
- memory_efficient_fp16: bool = II("common.memory_efficient_fp16")
- tpu: bool = II("common.tpu")
- # configuration for --ddp-backend=fully_sharded
- no_reshard_after_forward: bool = field(
- default=False, metadata={"help": "don't reshard parameters after forward pass"},
- )
- fp32_reduce_scatter: bool = field(
- default=False, metadata={"help": "reduce-scatter grads in FP32"},
- )
- cpu_offload: bool = field(
- default=False, metadata={"help": "offload FP32 params to CPU"}
- )
- use_sharded_state: bool = field(
- default=False, metadata={"help": "use sharded checkpoint files"},
- )
-
-
-@dataclass
-class DatasetConfig(FairseqDataclass):
- num_workers: int = field(
- default=1, metadata={"help": "how many subprocesses to use for data loading"}
- )
- skip_invalid_size_inputs_valid_test: bool = field(
- default=False,
- metadata={"help": "ignore too long or too short lines in valid and test set"},
- )
- max_tokens: Optional[int] = field(
- default=None, metadata={"help": "maximum number of tokens in a batch"}
- )
- batch_size: Optional[int] = field(
- default=None,
- metadata={
- "help": "number of examples in a batch",
- "argparse_alias": "--max-sentences",
- },
- )
- required_batch_size_multiple: int = field(
- default=8, metadata={"help": "batch size will be a multiplier of this value"}
- )
- required_seq_len_multiple: int = field(
- default=1,
- metadata={
- "help": "maximum sequence length in batch will be a multiplier of this value"
- },
- )
- dataset_impl: Optional[DATASET_IMPL_CHOICES] = field(
- default=None, metadata={"help": "output dataset implementation"}
- )
- data_buffer_size: int = field(
- default=10, metadata={"help": "Number of batches to preload"}
- )
- train_subset: str = field(
- default="train",
- metadata={"help": "data subset to use for training (e.g. train, valid, test)"},
- )
- valid_subset: str = field(
- default="valid",
- metadata={
- "help": "comma separated list of data subsets to use for validation"
- " (e.g. train, valid, test)"
- },
- )
- combine_valid_subsets: Optional[bool] = field(
- default=None,
- metadata={
- "help": "comma separated list of data subsets to use for validation"
- " (e.g. train, valid, test)",
- "argparse_alias": "--combine-val",
- },
- )
- ignore_unused_valid_subsets: Optional[bool] = field(
- default=False,
- metadata={"help": "do not raise error if valid subsets are ignored"},
- )
-
- validate_interval: int = field(
- default=1, metadata={"help": "validate every N epochs"}
- )
- validate_interval_updates: int = field(
- default=0, metadata={"help": "validate every N updates"}
- )
- validate_after_updates: int = field(
-        default=0, metadata={"help": "don't validate until reaching this many updates"}
- )
- fixed_validation_seed: Optional[int] = field(
- default=None, metadata={"help": "specified random seed for validation"}
- )
- disable_validation: bool = field(
- default=False, metadata={"help": "disable validation"}
- )
- max_tokens_valid: Optional[int] = field(
- default=II("dataset.max_tokens"),
- metadata={
- "help": "maximum number of tokens in a validation batch"
- " (defaults to --max-tokens)"
- },
- )
- batch_size_valid: Optional[int] = field(
- default=II("dataset.batch_size"),
- metadata={
- "help": "batch size of the validation batch (defaults to --batch-size)",
- "argparse_alias": "--max-sentences-valid",
- },
- )
-    max_valid_steps: Optional[int] = field(
-        default=None,
-        metadata={"help": "How many batches to evaluate", "argparse_alias": "--nval"},
-    )
- curriculum: int = field(
- default=0, metadata={"help": "don't shuffle batches for first N epochs"}
- )
- gen_subset: str = field(
- default="test",
- metadata={"help": "data subset to generate (train, valid, test)"},
- )
- num_shards: int = field(
- default=1, metadata={"help": "shard generation over N shards"}
- )
- shard_id: int = field(
- default=0, metadata={"help": "id of the shard to generate (id < num_shards)"}
- )
-
-
-@dataclass
-class OptimizationConfig(FairseqDataclass):
- max_epoch: int = field(
- default=0, metadata={"help": "force stop training at specified epoch"}
- )
- max_update: int = field(
- default=0, metadata={"help": "force stop training at specified update"}
- )
- stop_time_hours: float = field(
- default=0,
- metadata={
- "help": "force stop training after specified cumulative time (if >0)"
- },
- )
- clip_norm: float = field(
- default=0.0, metadata={"help": "clip threshold of gradients"}
- )
- sentence_avg: bool = field(
- default=False,
- metadata={
- "help": "normalize gradients by the number of sentences in a batch"
- " (default is to normalize by number of tokens)"
- },
- )
- update_freq: List[int] = field(
- default_factory=lambda: [1],
- metadata={"help": "update parameters every N_i batches, when in epoch i"},
- )
- lr: List[float] = field(
- default_factory=lambda: [0.25],
- metadata={
- "help": "learning rate for the first N epochs; all epochs >N using LR_N"
- " (note: this may be interpreted differently depending on --lr-scheduler)"
- },
- )
- stop_min_lr: float = field(
- default=-1.0,
- metadata={"help": "stop training when the learning rate reaches this minimum"},
- )
- use_bmuf: bool = field(
- default=False,
- metadata={
- "help": "specify global optimizer for syncing models on different GPUs/shards"
- },
- )
-
-
-@dataclass
-class CheckpointConfig(FairseqDataclass):
- save_dir: str = field(
- default="checkpoints", metadata={"help": "path to save checkpoints"}
- )
- restore_file: str = field(
- default="checkpoint_last.pt",
- metadata={
- "help": "filename from which to load checkpoint "
-            "(default: <save-dir>/checkpoint_last.pt)"
- },
- )
- finetune_from_model: Optional[str] = field(
- default=None,
- metadata={
- "help": "finetune from a pretrained model; note that meters and lr scheduler will be reset"
- },
- )
- reset_dataloader: bool = field(
- default=False,
- metadata={
- "help": "if set, does not reload dataloader state from the checkpoint"
- },
- )
- reset_lr_scheduler: bool = field(
- default=False,
- metadata={
- "help": "if set, does not load lr scheduler state from the checkpoint"
- },
- )
- reset_meters: bool = field(
- default=False,
- metadata={"help": "if set, does not load meters from the checkpoint"},
- )
- reset_optimizer: bool = field(
- default=False,
- metadata={"help": "if set, does not load optimizer state from the checkpoint"},
- )
- optimizer_overrides: str = field(
- default="{}",
- metadata={
- "help": "a dictionary used to override optimizer args when loading a checkpoint"
- },
- )
- save_interval: int = field(
- default=1, metadata={"help": "save a checkpoint every N epochs"}
- )
- save_interval_updates: int = field(
- default=0, metadata={"help": "save a checkpoint (and validate) every N updates"}
- )
- keep_interval_updates: int = field(
- default=-1,
- metadata={
- "help": "keep the last N checkpoints saved with --save-interval-updates"
- },
- )
- keep_interval_updates_pattern: int = field(
- default=-1,
- metadata={
- "help": "when used with --keep-interval-updates, skips deleting "
- "any checkpoints with update X where "
- "X %% keep_interval_updates_pattern == 0"
- },
- )
- keep_last_epochs: int = field(
- default=-1, metadata={"help": "keep last N epoch checkpoints"}
- )
- keep_best_checkpoints: int = field(
- default=-1, metadata={"help": "keep best N checkpoints based on scores"}
- )
- no_save: bool = field(
- default=False, metadata={"help": "don't save models or checkpoints"}
- )
- no_epoch_checkpoints: bool = field(
- default=False, metadata={"help": "only store last and best checkpoints"}
- )
- no_last_checkpoints: bool = field(
- default=False, metadata={"help": "don't store last checkpoints"}
- )
- no_save_optimizer_state: bool = field(
- default=False,
- metadata={"help": "don't save optimizer-state as part of checkpoint"},
- )
- best_checkpoint_metric: str = field(
- default="loss", metadata={"help": 'metric to use for saving "best" checkpoints'}
- )
- maximize_best_checkpoint_metric: bool = field(
- default=False,
- metadata={
- "help": 'select the largest metric value for saving "best" checkpoints'
- },
- )
- patience: int = field(
- default=-1,
- metadata={
- "help": (
- "early stop training if valid performance doesn't "
- "improve for N consecutive validation runs; note "
- "that this is influenced by --validate-interval"
- )
- },
- )
- checkpoint_suffix: str = field(
- default="", metadata={"help": "suffix to add to the checkpoint file name"}
- )
- checkpoint_shard_count: int = field(
- default=1,
- metadata={
- "help": "Number of shards containing the checkpoint - "
- "if the checkpoint is over 300GB, it is preferable "
- "to split it into shards to prevent OOM on CPU while loading "
- "the checkpoint"
- },
- )
- load_checkpoint_on_all_dp_ranks: bool = field(
- default=False,
- metadata={
- "help": "load checkpoints on all data parallel devices "
- "(default: only load on rank 0 and broadcast to other devices)"
- },
- )
- write_checkpoints_asynchronously: bool = field(
- default=False,
- metadata={
- "help": (
- "Write checkpoints asynchronously in a separate "
- "thread. NOTE: This feature is currently being tested."
- ),
- "argparse_alias": "--save-async",
- },
- )
- model_parallel_size: int = II("common.model_parallel_size")
- use_ema_weights_to_init_param: bool = field(
- default=False,
- metadata={
- "help": "if the checkpoint has ema weights, then use it to init the model param"
-            "(default: false, use non-EMA weights to init the model param)"
- },
- )
- use_latest_weights_to_init_ema: bool = field(
- default=False,
- metadata={
-            "help": "if the model has EMA params, then force using the latest weights in the ckpt to init the EMA param, even if EMA weights exist in the ckpt "
- "(default: false, use ema weights (if exist) to init the ema param)"
- },
- )
-
-
-@dataclass
-class FairseqBMUFConfig(FairseqDataclass):
- block_lr: float = field(
- default=1, metadata={"help": "block learning rate for bmuf"}
- )
- block_momentum: float = field(
- default=0.875, metadata={"help": "block momentum for bmuf"}
- )
- global_sync_iter: int = field(
- default=50, metadata={"help": "Iteration for syncing global model"}
- )
- warmup_iterations: int = field(
- default=500, metadata={"help": "warmup iterations for model to broadcast"}
- )
- use_nbm: bool = field(
- default=False,
- metadata={"help": "Specify whether you want to use classical BM / Nesterov BM"},
- )
- average_sync: bool = field(
- default=False,
- metadata={
- "help": "Specify whether you want to average the local momentum after each sync"
- },
- )
- distributed_world_size: int = II("distributed_training.distributed_world_size")
-
-
-@dataclass
-class GenerationConfig(FairseqDataclass):
- beam: int = field(
- default=5, metadata={"help": "beam size"},
- )
- nbest: int = field(
- default=1, metadata={"help": "number of hypotheses to output"},
- )
- max_len_a: float = field(
- default=0,
- metadata={
- "help": "generate sequences of maximum length ax + b, where x is the source length"
- },
- )
- max_len_b: int = field(
- default=200,
- metadata={
- "help": "generate sequences of maximum length ax + b, where x is the source length"
- },
- )
- min_len: int = field(
- default=1, metadata={"help": "minimum generation length"},
- )
- match_source_len: bool = field(
- default=False, metadata={"help": "generations should match the source length"},
- )
- unnormalized: bool = field(
- default=False, metadata={"help": "compare unnormalized hypothesis scores"},
- )
- no_early_stop: bool = field(
- default=False, metadata={"help": "deprecated"},
- )
- no_beamable_mm: bool = field(
- default=False, metadata={"help": "don't use BeamableMM in attention layers"},
- )
- lenpen: float = field(
- default=1,
- metadata={
- "help": "length penalty: <1.0 favors shorter, >1.0 favors longer sentences"
- },
- )
- unkpen: float = field(
- default=0,
- metadata={
- "help": "unknown word penalty: <0 produces more unks, >0 produces fewer"
- },
- )
- replace_unk: Optional[str] = field(
- default=None,
- metadata={
- "help": "perform unknown replacement (optionally with alignment dictionary)",
- "argparse_const": "@@ ",
- },
- )
- sacrebleu: bool = field(
- default=False, metadata={"help": "score with sacrebleu"},
- )
- score_reference: bool = field(
- default=False, metadata={"help": "just score the reference translation"},
- )
- prefix_size: int = field(
- default=0,
- metadata={"help": "initialize generation by target prefix of given length"},
- )
- no_repeat_ngram_size: int = field(
- default=0,
- metadata={
- "help": "ngram blocking such that this size ngram cannot be repeated in the generation"
- },
- )
- sampling: bool = field(
- default=False,
- metadata={"help": "sample hypotheses instead of using beam search"},
- )
- sampling_topk: int = field(
- default=-1,
- metadata={"help": "sample from top K likely next words instead of all words"},
- )
- sampling_topp: float = field(
- default=-1.0,
- metadata={
- "help": "sample from the smallest set whose cumulative probability mass exceeds p for next words"
- },
- )
- constraints: Optional[GENERATION_CONSTRAINTS_CHOICES] = field(
- default=None,
- metadata={
- "help": "enables lexically constrained decoding",
- "argparse_const": "ordered",
- },
- )
- temperature: float = field(
- default=1.0, metadata={"help": "temperature for generation"},
- )
- diverse_beam_groups: int = field(
- default=-1, metadata={"help": "number of groups for Diverse Beam Search"},
- )
- diverse_beam_strength: float = field(
- default=0.5,
- metadata={"help": "strength of diversity penalty for Diverse Beam Search"},
- )
- diversity_rate: float = field(
- default=-1.0,
- metadata={"help": "strength of diversity penalty for Diverse Siblings Search"},
- )
- print_alignment: Optional[PRINT_ALIGNMENT_CHOICES] = field(
- default=None,
- metadata={
- "help": "if set, uses attention feedback to compute and print alignment to source tokens "
- "(valid options are: hard, soft, otherwise treated as hard alignment)",
- "argparse_const": "hard",
- },
- )
- print_step: bool = field(
- default=False, metadata={"help": "print steps"},
- )
- lm_path: Optional[str] = field(
- default=None, metadata={"help": "path to lm checkpoint for lm fusion"},
- )
- lm_weight: float = field(
- default=0.0, metadata={"help": "weight for lm probs for lm fusion"},
- )
-
- # arguments for iterative refinement generator
- iter_decode_eos_penalty: float = field(
- default=0.0,
-        metadata={"help": "if > 0.0, it penalizes early stopping in decoding."},
- )
- iter_decode_max_iter: int = field(
- default=10, metadata={"help": "maximum iterations for iterative refinement."},
- )
- iter_decode_force_max_iter: bool = field(
- default=False,
- metadata={
-            "help": "if set, run exactly the maximum number of iterations without early stopping"
- },
- )
- iter_decode_with_beam: int = field(
- default=1,
- metadata={
-            "help": "if > 1, model will generate translations of varying lengths."
- },
- )
- iter_decode_with_external_reranker: bool = field(
- default=False,
- metadata={
-            "help": "if set, the last checkpoint is assumed to be a reranker to rescore the translations"
- },
- )
- retain_iter_history: bool = field(
- default=False,
- metadata={
- "help": "if set, decoding returns the whole history of iterative refinement"
- },
- )
- retain_dropout: bool = field(
- default=False, metadata={"help": "Use dropout at inference time"},
- )
- # temporarily set to Any until https://github.com/facebookresearch/hydra/issues/1117 is fixed
- # retain_dropout_modules: Optional[List[str]] = field(
- retain_dropout_modules: Any = field(
- default=None,
- metadata={
- "help": "if set, only retain dropout for the specified modules; "
- "if not set, then dropout will be retained for all modules"
- },
- )
- # special decoding format for advanced decoding.
- decoding_format: Optional[GENERATION_DECODING_FORMAT_CHOICES] = field(
- default=None,
- metadata={"help": "special decoding format for advanced decoding."},
- )
- no_seed_provided: bool = field(
- default=False,
-        metadata={"help": "if set, don't use seed for initializing random generators"},
- )
-
-
-@dataclass
-class CommonEvalConfig(FairseqDataclass):
- path: Optional[str] = field(
- default=None, metadata={"help": "path(s) to model file(s), colon separated"},
- )
- post_process: Optional[str] = field(
- default=None,
- metadata={
- "help": (
- "post-process text by removing BPE, letter segmentation, etc. "
- "Valid options can be found in fairseq.data.utils.post_process."
- ),
- "argparse_const": "subword_nmt",
- "argparse_alias": "--remove-bpe",
- },
- )
- quiet: bool = field(default=False, metadata={"help": "only print final scores"})
- model_overrides: str = field(
- default="{}",
- metadata={
- "help": "a dictionary used to override model args at generation that were used during model training"
- },
- )
- results_path: Optional[str] = field(
- default=None, metadata={"help": "path to save eval results (optional)"}
- )
-
-
-@dataclass
-class EvalLMConfig(FairseqDataclass):
- output_word_probs: bool = field(
- default=False,
- metadata={
- "help": "if set, outputs words and their predicted log probabilities to standard output"
- },
- )
- output_word_stats: bool = field(
- default=False,
- metadata={
- "help": "if set, outputs word statistics such as word count, average probability, etc"
- },
- )
- context_window: int = field(
- default=0,
- metadata={
- "help": "ensures that every evaluated token has access to a context of at least this size, if possible"
- },
- )
- softmax_batch: int = field(
- default=sys.maxsize,
- metadata={
- "help": "if BxT is more than this, will batch the softmax over vocab to this amount of tokens, in order to fit into GPU memory"
- },
- )
-
-
-@dataclass
-class InteractiveConfig(FairseqDataclass):
- buffer_size: int = field(
- default=0,
- metadata={
- "help": "read this many sentences into a buffer before processing them"
- },
- )
- input: str = field(
- default="-", metadata={"help": "file to read from; use - for stdin"},
- )
-
-
-@dataclass
-class EMAConfig(FairseqDataclass):
- store_ema: bool = field(
- default=False, metadata={
-            "help": "store exponential moving average shadow model"
- }
- )
- ema_decay: float = field(
- default=0.9999, metadata={
- "help": 'decay for exponential moving average model'
- }
- )
-    ema_start_update: int = field(
- default=0, metadata={"help": "start EMA update after this many model updates"}
- )
-    ema_seed_model: Optional[str] = field(
- default=None, metadata={
- "help": "Seed to load EMA model from. "
- "Used to load EMA model separately from the actual model."
- }
- )
-    ema_update_freq: int = field(
- default=1, metadata={"help": "Do EMA update every this many model updates"}
- )
- ema_fp32: bool = field(
- default=False,
- metadata={"help": "If true, store EMA model in fp32 even if model is in fp16"},
- )
-
-
-@dataclass
-class FairseqConfig(FairseqDataclass):
- common: CommonConfig = CommonConfig()
- common_eval: CommonEvalConfig = CommonEvalConfig()
- distributed_training: DistributedTrainingConfig = DistributedTrainingConfig()
- dataset: DatasetConfig = DatasetConfig()
- optimization: OptimizationConfig = OptimizationConfig()
- checkpoint: CheckpointConfig = CheckpointConfig()
- bmuf: FairseqBMUFConfig = FairseqBMUFConfig()
- generation: GenerationConfig = GenerationConfig()
- eval_lm: EvalLMConfig = EvalLMConfig()
- interactive: InteractiveConfig = InteractiveConfig()
- model: Any = MISSING
- task: Any = None
- criterion: Any = None
- optimizer: Any = None
- lr_scheduler: Any = None
- scoring: Any = None
- bpe: Any = None
- tokenizer: Any = None
- ema: EMAConfig = EMAConfig()
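For context, these dataclasses are meant to be composed into a single Hydra/OmegaConf tree: fields declared with II("common.fp16") store the interpolation string "${common.fp16}" and are resolved lazily when accessed. A minimal sketch of that composition (not part of the original file), assuming the classes above are importable as fairseq.dataclass.configs:

from omegaconf import OmegaConf
from fairseq.dataclass.configs import FairseqConfig

# Build a structured config from the dataclass tree defined above.
cfg = OmegaConf.structured(FairseqConfig())
cfg.common.fp16 = True

# distributed_training.fp16 is declared as II("common.fp16"), so it mirrors the
# value set on the common group when the interpolation is resolved on access.
print(cfg.distributed_training.fp16)  # -> True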
diff --git a/spaces/IDEA-CCNL/ziya2-13B-base/README.md b/spaces/IDEA-CCNL/ziya2-13B-base/README.md
deleted file mode 100644
index 8e99a95563c3e2c32e72ad210eb8b421f91d2034..0000000000000000000000000000000000000000
--- a/spaces/IDEA-CCNL/ziya2-13B-base/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Ziya LLaMA2 CPT
-emoji: 🏆
-colorFrom: green
-colorTo: gray
-sdk: gradio
-sdk_version: 3.38.0
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/IDEA-Research/Grounded-SAM/GroundingDINO/groundingdino/util/box_ops.py b/spaces/IDEA-Research/Grounded-SAM/GroundingDINO/groundingdino/util/box_ops.py
deleted file mode 100644
index 781068d294e576954edb4bd07b6e0f30e4e1bcd9..0000000000000000000000000000000000000000
--- a/spaces/IDEA-Research/Grounded-SAM/GroundingDINO/groundingdino/util/box_ops.py
+++ /dev/null
@@ -1,140 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-"""
-Utilities for bounding box manipulation and GIoU.
-"""
-import torch
-from torchvision.ops.boxes import box_area
-
-
-def box_cxcywh_to_xyxy(x):
- x_c, y_c, w, h = x.unbind(-1)
- b = [(x_c - 0.5 * w), (y_c - 0.5 * h), (x_c + 0.5 * w), (y_c + 0.5 * h)]
- return torch.stack(b, dim=-1)
-
-
-def box_xyxy_to_cxcywh(x):
- x0, y0, x1, y1 = x.unbind(-1)
- b = [(x0 + x1) / 2, (y0 + y1) / 2, (x1 - x0), (y1 - y0)]
- return torch.stack(b, dim=-1)
-
-
-# modified from torchvision to also return the union
-def box_iou(boxes1, boxes2):
- area1 = box_area(boxes1)
- area2 = box_area(boxes2)
-
- # import ipdb; ipdb.set_trace()
- lt = torch.max(boxes1[:, None, :2], boxes2[:, :2]) # [N,M,2]
- rb = torch.min(boxes1[:, None, 2:], boxes2[:, 2:]) # [N,M,2]
-
- wh = (rb - lt).clamp(min=0) # [N,M,2]
- inter = wh[:, :, 0] * wh[:, :, 1] # [N,M]
-
- union = area1[:, None] + area2 - inter
-
- iou = inter / (union + 1e-6)
- return iou, union
-
-
-def generalized_box_iou(boxes1, boxes2):
- """
- Generalized IoU from https://giou.stanford.edu/
-
- The boxes should be in [x0, y0, x1, y1] format
-
- Returns a [N, M] pairwise matrix, where N = len(boxes1)
- and M = len(boxes2)
- """
- # degenerate boxes gives inf / nan results
- # so do an early check
- assert (boxes1[:, 2:] >= boxes1[:, :2]).all()
- assert (boxes2[:, 2:] >= boxes2[:, :2]).all()
- # except:
- # import ipdb; ipdb.set_trace()
- iou, union = box_iou(boxes1, boxes2)
-
- lt = torch.min(boxes1[:, None, :2], boxes2[:, :2])
- rb = torch.max(boxes1[:, None, 2:], boxes2[:, 2:])
-
- wh = (rb - lt).clamp(min=0) # [N,M,2]
- area = wh[:, :, 0] * wh[:, :, 1]
-
- return iou - (area - union) / (area + 1e-6)
-
-
-# modified from torchvision to also return the union
-def box_iou_pairwise(boxes1, boxes2):
- area1 = box_area(boxes1)
- area2 = box_area(boxes2)
-
- lt = torch.max(boxes1[:, :2], boxes2[:, :2]) # [N,2]
- rb = torch.min(boxes1[:, 2:], boxes2[:, 2:]) # [N,2]
-
- wh = (rb - lt).clamp(min=0) # [N,2]
- inter = wh[:, 0] * wh[:, 1] # [N]
-
- union = area1 + area2 - inter
-
- iou = inter / union
- return iou, union
-
-
-def generalized_box_iou_pairwise(boxes1, boxes2):
- """
- Generalized IoU from https://giou.stanford.edu/
-
- Input:
- - boxes1, boxes2: N,4
- Output:
-        - giou: N
- """
- # degenerate boxes gives inf / nan results
- # so do an early check
- assert (boxes1[:, 2:] >= boxes1[:, :2]).all()
- assert (boxes2[:, 2:] >= boxes2[:, :2]).all()
- assert boxes1.shape == boxes2.shape
- iou, union = box_iou_pairwise(boxes1, boxes2) # N, 4
-
- lt = torch.min(boxes1[:, :2], boxes2[:, :2])
- rb = torch.max(boxes1[:, 2:], boxes2[:, 2:])
-
- wh = (rb - lt).clamp(min=0) # [N,2]
- area = wh[:, 0] * wh[:, 1]
-
- return iou - (area - union) / area
-
-
-def masks_to_boxes(masks):
- """Compute the bounding boxes around the provided masks
-
- The masks should be in format [N, H, W] where N is the number of masks, (H, W) are the spatial dimensions.
-
- Returns a [N, 4] tensors, with the boxes in xyxy format
- """
- if masks.numel() == 0:
- return torch.zeros((0, 4), device=masks.device)
-
- h, w = masks.shape[-2:]
-
- y = torch.arange(0, h, dtype=torch.float)
- x = torch.arange(0, w, dtype=torch.float)
- y, x = torch.meshgrid(y, x)
-
- x_mask = masks * x.unsqueeze(0)
- x_max = x_mask.flatten(1).max(-1)[0]
- x_min = x_mask.masked_fill(~(masks.bool()), 1e8).flatten(1).min(-1)[0]
-
- y_mask = masks * y.unsqueeze(0)
- y_max = y_mask.flatten(1).max(-1)[0]
- y_min = y_mask.masked_fill(~(masks.bool()), 1e8).flatten(1).min(-1)[0]
-
- return torch.stack([x_min, y_min, x_max, y_max], 1)
-
-
-if __name__ == "__main__":
- x = torch.rand(5, 4)
- y = torch.rand(3, 4)
- iou, union = box_iou(x, y)
- import ipdb
-
- ipdb.set_trace()
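A small usage sketch of the helpers above (not in the original file), assuming they are importable from groundingdino.util.box_ops: convert center-format boxes and compute the pairwise GIoU matrix.

import torch
from groundingdino.util.box_ops import box_cxcywh_to_xyxy, generalized_box_iou

# Two predictions in (cx, cy, w, h) format, normalized to [0, 1].
preds_cxcywh = torch.tensor([[0.5, 0.5, 0.2, 0.2],
                             [0.3, 0.3, 0.4, 0.4]])
preds_xyxy = box_cxcywh_to_xyxy(preds_cxcywh)   # -> [[0.4, 0.4, 0.6, 0.6], [0.1, 0.1, 0.5, 0.5]]

# Three targets already in (x0, y0, x1, y1) format.
targets = torch.tensor([[0.4, 0.4, 0.6, 0.6],
                        [0.0, 0.0, 1.0, 1.0],
                        [0.7, 0.7, 0.9, 0.9]])
giou = generalized_box_iou(preds_xyxy, targets)  # [2, 3] pairwise matrix, values in [-1, 1]
print(giou.shape)                                # torch.Size([2, 3])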
diff --git a/spaces/IgorSense/Diffusion_Space2/utils.py b/spaces/IgorSense/Diffusion_Space2/utils.py
deleted file mode 100644
index ff1c065d186347ca51b47d010a697dbe1814695c..0000000000000000000000000000000000000000
--- a/spaces/IgorSense/Diffusion_Space2/utils.py
+++ /dev/null
@@ -1,6 +0,0 @@
-def is_google_colab():
- try:
- import google.colab
- return True
- except:
-    except ImportError:
\ No newline at end of file
diff --git a/spaces/Ikaros521/so-vits-svc-4.0-ikaros/modules/modules.py b/spaces/Ikaros521/so-vits-svc-4.0-ikaros/modules/modules.py
deleted file mode 100644
index 54290fd207b25e93831bd21005990ea137e6b50e..0000000000000000000000000000000000000000
--- a/spaces/Ikaros521/so-vits-svc-4.0-ikaros/modules/modules.py
+++ /dev/null
@@ -1,342 +0,0 @@
-import copy
-import math
-import numpy as np
-import scipy
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-import modules.commons as commons
-from modules.commons import init_weights, get_padding
-
-
-LRELU_SLOPE = 0.1
-
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
-
-
-class ConvReluNorm(nn.Module):
- def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-        assert n_layers > 1, "Number of layers should be larger than 1."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(
- nn.ReLU(),
- nn.Dropout(p_dropout))
- for _ in range(n_layers-1):
- self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DDSConv(nn.Module):
- """
-    Dilated and Depth-Separable Convolution
- """
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.):
- super().__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.convs_sep = nn.ModuleList()
- self.convs_1x1 = nn.ModuleList()
- self.norms_1 = nn.ModuleList()
- self.norms_2 = nn.ModuleList()
- for i in range(n_layers):
- dilation = kernel_size ** i
- padding = (kernel_size * dilation - dilation) // 2
- self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size,
- groups=channels, dilation=dilation, padding=padding
- ))
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
- self.norms_1.append(LayerNorm(channels))
- self.norms_2.append(LayerNorm(channels))
-
- def forward(self, x, x_mask, g=None):
- if g is not None:
- x = x + g
- for i in range(self.n_layers):
- y = self.convs_sep[i](x * x_mask)
- y = self.norms_1[i](y)
- y = F.gelu(y)
- y = self.convs_1x1[i](y)
- y = self.norms_2[i](y)
- y = F.gelu(y)
- y = self.drop(y)
- x = x + y
- return x * x_mask
-
-
-class WN(torch.nn.Module):
- def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0):
- super(WN, self).__init__()
- assert(kernel_size % 2 == 1)
-        self.hidden_channels = hidden_channels
-        self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1)
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight')
-
- for i in range(n_layers):
- dilation = dilation_rate ** i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size,
- dilation=dilation, padding=padding)
- in_layer = torch.nn.utils.weight_norm(in_layer, name='weight')
- self.in_layers.append(in_layer)
-
- # last one is not necessary
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight')
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask, g=None, **kwargs):
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:]
- else:
- g_l = torch.zeros_like(x_in)
-
- acts = commons.fused_add_tanh_sigmoid_multiply(
- x_in,
- g_l,
- n_channels_tensor)
- acts = self.drop(acts)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- res_acts = res_skip_acts[:,:self.hidden_channels,:]
- x = (x + res_acts) * x_mask
- output = output + res_skip_acts[:,self.hidden_channels:,:]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- if self.gin_channels != 0:
- torch.nn.utils.remove_weight_norm(self.cond_layer)
- for l in self.in_layers:
- torch.nn.utils.remove_weight_norm(l)
- for l in self.res_skip_layers:
- torch.nn.utils.remove_weight_norm(l)
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.convs1 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2])))
- ])
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1)))
- ])
- self.convs2.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c2(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.convs = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1])))
- ])
- self.convs.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Log(nn.Module):
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
- logdet = torch.sum(-y, [1, 2])
- return y, logdet
- else:
- x = torch.exp(x) * x_mask
- return x
-
-
-class Flip(nn.Module):
- def forward(self, x, *args, reverse=False, **kwargs):
- x = torch.flip(x, [1])
- if not reverse:
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
- return x, logdet
- else:
- return x
-
-
-class ElementwiseAffine(nn.Module):
- def __init__(self, channels):
- super().__init__()
- self.channels = channels
- self.m = nn.Parameter(torch.zeros(channels,1))
- self.logs = nn.Parameter(torch.zeros(channels,1))
-
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = self.m + torch.exp(self.logs) * x
- y = y * x_mask
- logdet = torch.sum(self.logs * x_mask, [1,2])
- return y, logdet
- else:
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
- return x
-
-
-class ResidualCouplingLayer(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=0,
- gin_channels=0,
- mean_only=False):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels)
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels]*2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1,2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
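The flow modules above (Log, Flip, ElementwiseAffine, ResidualCouplingLayer) share one contract: the forward pass returns an output plus a log-determinant, and reverse=True inverts the transform. A minimal sketch of that contract (not from the original file), assuming the file is importable as modules.modules:

import torch
from modules.modules import ElementwiseAffine

layer = ElementwiseAffine(channels=4)
x = torch.randn(2, 4, 8)                    # (batch, channels, time)
x_mask = torch.ones(2, 1, 8)

y, logdet = layer(x, x_mask)                # forward: output and per-sample log-determinant
x_rec = layer(y, x_mask, reverse=True)      # reverse: inverts the affine transform
print(torch.allclose(x, x_rec, atol=1e-6))  # -> True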
diff --git a/spaces/Intae/deepfake/training/transforms/__init__.py b/spaces/Intae/deepfake/training/transforms/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/JUNGU/Image-to-Story-Ko-multiplot/share_btn.py b/spaces/JUNGU/Image-to-Story-Ko-multiplot/share_btn.py
deleted file mode 100644
index a14082bc8a4902e010d8a98431adc3e7ffecf5a3..0000000000000000000000000000000000000000
--- a/spaces/JUNGU/Image-to-Story-Ko-multiplot/share_btn.py
+++ /dev/null
@@ -1,75 +0,0 @@
-community_icon_html = """"""
-
-loading_icon_html = """"""
-
-share_js = """async () => {
- async function uploadFile(file){
- const UPLOAD_URL = 'https://huggingface.co/uploads';
- const response = await fetch(UPLOAD_URL, {
- method: 'POST',
- headers: {
- 'Content-Type': file.type,
- 'X-Requested-With': 'XMLHttpRequest',
- },
- body: file, /// <- File inherits from Blob
- });
- const url = await response.text();
- return url;
- }
-
- async function getInputImgFile(imgEl){
- const res = await fetch(imgEl.src);
- const blob = await res.blob();
- const imgId = Date.now() % 200;
- const isPng = imgEl.src.startsWith(`data:image/png`);
- if(isPng){
-      const fileName = `sd-perception-${imgId}.png`;
- return new File([blob], fileName, { type: 'image/png' });
- }else{
-      const fileName = `sd-perception-${imgId}.jpg`;
- return new File([blob], fileName, { type: 'image/jpeg' });
- }
- }
-
- const gradioEl = document.querySelector("gradio-app").shadowRoot || document.querySelector('body > gradio-app');
- const inputImgEl = gradioEl.querySelector('#image-in img');
- const outputTxt = gradioEl.querySelector('#story textarea').value;
- const wordsArray = outputTxt.split(" ");
- const firstWords = wordsArray.slice(0, 12).join(" ");
-
- const shareBtnEl = gradioEl.querySelector('#share-btn');
- const shareIconEl = gradioEl.querySelector('#share-btn-share-icon');
- const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon');
-
- shareBtnEl.style.pointerEvents = 'none';
- shareIconEl.style.display = 'none';
- loadingIconEl.style.removeProperty('display');
-
-
- const inputFile = await getInputImgFile(inputImgEl);
- const urlInputImg = await uploadFile(inputFile);
-
- const descriptionMd = `
-
-#### Image input:
-
-
-#### Generated Story:
-${outputTxt}
-`;
- const params = new URLSearchParams({
- title: firstWords,
- description: descriptionMd,
- });
- const paramsStr = params.toString();
- window.open(`https://huggingface.co/spaces/fffiloni/Image-to-Story/discussions/new?${paramsStr}`, '_blank');
- shareBtnEl.style.removeProperty('pointer-events');
- shareIconEl.style.removeProperty('display');
- loadingIconEl.style.display = 'none';
-}"""
\ No newline at end of file
diff --git a/spaces/JackBAI/MassageMateNLP/models/bert/position_y/README.md b/spaces/JackBAI/MassageMateNLP/models/bert/position_y/README.md
deleted file mode 100644
index 16b794008dfcf6e8150518dabfcabecfce6f0d2d..0000000000000000000000000000000000000000
--- a/spaces/JackBAI/MassageMateNLP/models/bert/position_y/README.md
+++ /dev/null
@@ -1,56 +0,0 @@
----
-license: apache-2.0
-tags:
-- generated_from_trainer
-metrics:
-- accuracy
-model-index:
-- name: position_y
- results: []
----
-
-
-
-# position_y
-
-This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset.
-It achieves the following results on the evaluation set:
-- Loss: 0.4375
-- Accuracy: 0.8227
-
-## Model description
-
-More information needed
-
-## Intended uses & limitations
-
-More information needed
-
-## Training and evaluation data
-
-More information needed
-
-## Training procedure
-
-### Training hyperparameters
-
-The following hyperparameters were used during training:
-- learning_rate: 2e-05
-- train_batch_size: 128
-- eval_batch_size: 8
-- seed: 42
-- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
-- lr_scheduler_type: linear
-- num_epochs: 12.0
-
-### Training results
-
-
-
-### Framework versions
-
-- Transformers 4.25.1
-- Pytorch 1.11.0+cu113
-- Datasets 2.8.0
-- Tokenizers 0.13.2
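For reference, a hedged loading sketch for this model card (not part of the original README): the reported accuracy suggests a classification head, and the local path mirrors the repo layout, but both are assumptions.

from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_dir = "models/bert/position_y"  # assumed local path, following the repo layout
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForSequenceClassification.from_pretrained(model_dir)

inputs = tokenizer("example massage instruction", return_tensors="pt")
predicted_label = model(**inputs).logits.argmax(dim=-1)
print(predicted_label)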
diff --git a/spaces/Jackflack09/diffuse-custom/diffusers/models/unet_2d_blocks_flax.py b/spaces/Jackflack09/diffuse-custom/diffusers/models/unet_2d_blocks_flax.py
deleted file mode 100644
index 96e76cb06a59a31beebf4449786b72a7c838a298..0000000000000000000000000000000000000000
--- a/spaces/Jackflack09/diffuse-custom/diffusers/models/unet_2d_blocks_flax.py
+++ /dev/null
@@ -1,365 +0,0 @@
-# Copyright 2022 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import flax.linen as nn
-import jax.numpy as jnp
-
-from .attention_flax import FlaxTransformer2DModel
-from .resnet_flax import FlaxDownsample2D, FlaxResnetBlock2D, FlaxUpsample2D
-
-
-class FlaxCrossAttnDownBlock2D(nn.Module):
- r"""
- Cross Attention 2D Downsizing block - original architecture from Unet transformers:
- https://arxiv.org/abs/2103.06104
-
- Parameters:
- in_channels (:obj:`int`):
- Input channels
- out_channels (:obj:`int`):
- Output channels
- dropout (:obj:`float`, *optional*, defaults to 0.0):
- Dropout rate
- num_layers (:obj:`int`, *optional*, defaults to 1):
- Number of attention blocks layers
- attn_num_head_channels (:obj:`int`, *optional*, defaults to 1):
- Number of attention heads of each spatial transformer block
- add_downsample (:obj:`bool`, *optional*, defaults to `True`):
- Whether to add downsampling layer before each final output
- dtype (:obj:`jnp.dtype`, *optional*, defaults to jnp.float32):
- Parameters `dtype`
- """
- in_channels: int
- out_channels: int
- dropout: float = 0.0
- num_layers: int = 1
- attn_num_head_channels: int = 1
- add_downsample: bool = True
- use_linear_projection: bool = False
- only_cross_attention: bool = False
- dtype: jnp.dtype = jnp.float32
-
- def setup(self):
- resnets = []
- attentions = []
-
- for i in range(self.num_layers):
- in_channels = self.in_channels if i == 0 else self.out_channels
-
- res_block = FlaxResnetBlock2D(
- in_channels=in_channels,
- out_channels=self.out_channels,
- dropout_prob=self.dropout,
- dtype=self.dtype,
- )
- resnets.append(res_block)
-
- attn_block = FlaxTransformer2DModel(
- in_channels=self.out_channels,
- n_heads=self.attn_num_head_channels,
- d_head=self.out_channels // self.attn_num_head_channels,
- depth=1,
- use_linear_projection=self.use_linear_projection,
- only_cross_attention=self.only_cross_attention,
- dtype=self.dtype,
- )
- attentions.append(attn_block)
-
- self.resnets = resnets
- self.attentions = attentions
-
- if self.add_downsample:
- self.downsamplers_0 = FlaxDownsample2D(self.out_channels, dtype=self.dtype)
-
- def __call__(self, hidden_states, temb, encoder_hidden_states, deterministic=True):
- output_states = ()
-
- for resnet, attn in zip(self.resnets, self.attentions):
- hidden_states = resnet(hidden_states, temb, deterministic=deterministic)
- hidden_states = attn(hidden_states, encoder_hidden_states, deterministic=deterministic)
- output_states += (hidden_states,)
-
- if self.add_downsample:
- hidden_states = self.downsamplers_0(hidden_states)
- output_states += (hidden_states,)
-
- return hidden_states, output_states
-
-
-class FlaxDownBlock2D(nn.Module):
- r"""
- Flax 2D downsizing block
-
- Parameters:
- in_channels (:obj:`int`):
- Input channels
- out_channels (:obj:`int`):
- Output channels
- dropout (:obj:`float`, *optional*, defaults to 0.0):
- Dropout rate
- num_layers (:obj:`int`, *optional*, defaults to 1):
-            Number of ResNet block layers
- add_downsample (:obj:`bool`, *optional*, defaults to `True`):
- Whether to add downsampling layer before each final output
- dtype (:obj:`jnp.dtype`, *optional*, defaults to jnp.float32):
- Parameters `dtype`
- """
- in_channels: int
- out_channels: int
- dropout: float = 0.0
- num_layers: int = 1
- add_downsample: bool = True
- dtype: jnp.dtype = jnp.float32
-
- def setup(self):
- resnets = []
-
- for i in range(self.num_layers):
- in_channels = self.in_channels if i == 0 else self.out_channels
-
- res_block = FlaxResnetBlock2D(
- in_channels=in_channels,
- out_channels=self.out_channels,
- dropout_prob=self.dropout,
- dtype=self.dtype,
- )
- resnets.append(res_block)
- self.resnets = resnets
-
- if self.add_downsample:
- self.downsamplers_0 = FlaxDownsample2D(self.out_channels, dtype=self.dtype)
-
- def __call__(self, hidden_states, temb, deterministic=True):
- output_states = ()
-
- for resnet in self.resnets:
- hidden_states = resnet(hidden_states, temb, deterministic=deterministic)
- output_states += (hidden_states,)
-
- if self.add_downsample:
- hidden_states = self.downsamplers_0(hidden_states)
- output_states += (hidden_states,)
-
- return hidden_states, output_states
-
-
-class FlaxCrossAttnUpBlock2D(nn.Module):
- r"""
- Cross Attention 2D Upsampling block - original architecture from Unet transformers:
- https://arxiv.org/abs/2103.06104
-
- Parameters:
- in_channels (:obj:`int`):
- Input channels
- out_channels (:obj:`int`):
- Output channels
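-        prev_output_channel (:obj:`int`):
-            Output channels from the previous block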
- dropout (:obj:`float`, *optional*, defaults to 0.0):
- Dropout rate
- num_layers (:obj:`int`, *optional*, defaults to 1):
-            Number of attention block layers
- attn_num_head_channels (:obj:`int`, *optional*, defaults to 1):
- Number of attention heads of each spatial transformer block
- add_upsample (:obj:`bool`, *optional*, defaults to `True`):
- Whether to add upsampling layer before each final output
- dtype (:obj:`jnp.dtype`, *optional*, defaults to jnp.float32):
- Parameters `dtype`
- """
- in_channels: int
- out_channels: int
- prev_output_channel: int
- dropout: float = 0.0
- num_layers: int = 1
- attn_num_head_channels: int = 1
- add_upsample: bool = True
- use_linear_projection: bool = False
- only_cross_attention: bool = False
- dtype: jnp.dtype = jnp.float32
-
- def setup(self):
- resnets = []
- attentions = []
-
- for i in range(self.num_layers):
- res_skip_channels = self.in_channels if (i == self.num_layers - 1) else self.out_channels
- resnet_in_channels = self.prev_output_channel if i == 0 else self.out_channels
-
- res_block = FlaxResnetBlock2D(
- in_channels=resnet_in_channels + res_skip_channels,
- out_channels=self.out_channels,
- dropout_prob=self.dropout,
- dtype=self.dtype,
- )
- resnets.append(res_block)
-
- attn_block = FlaxTransformer2DModel(
- in_channels=self.out_channels,
- n_heads=self.attn_num_head_channels,
- d_head=self.out_channels // self.attn_num_head_channels,
- depth=1,
- use_linear_projection=self.use_linear_projection,
- only_cross_attention=self.only_cross_attention,
- dtype=self.dtype,
- )
- attentions.append(attn_block)
-
- self.resnets = resnets
- self.attentions = attentions
-
- if self.add_upsample:
- self.upsamplers_0 = FlaxUpsample2D(self.out_channels, dtype=self.dtype)
-
- def __call__(self, hidden_states, res_hidden_states_tuple, temb, encoder_hidden_states, deterministic=True):
- for resnet, attn in zip(self.resnets, self.attentions):
- # pop res hidden states
- res_hidden_states = res_hidden_states_tuple[-1]
- res_hidden_states_tuple = res_hidden_states_tuple[:-1]
- hidden_states = jnp.concatenate((hidden_states, res_hidden_states), axis=-1)
-
- hidden_states = resnet(hidden_states, temb, deterministic=deterministic)
- hidden_states = attn(hidden_states, encoder_hidden_states, deterministic=deterministic)
-
- if self.add_upsample:
- hidden_states = self.upsamplers_0(hidden_states)
-
- return hidden_states
-
-
-class FlaxUpBlock2D(nn.Module):
- r"""
- Flax 2D upsampling block
-
- Parameters:
- in_channels (:obj:`int`):
- Input channels
- out_channels (:obj:`int`):
- Output channels
- prev_output_channel (:obj:`int`):
- Output channels from the previous block
- dropout (:obj:`float`, *optional*, defaults to 0.0):
- Dropout rate
- num_layers (:obj:`int`, *optional*, defaults to 1):
-            Number of ResNet block layers
-        add_upsample (:obj:`bool`, *optional*, defaults to `True`):
-            Whether to add an upsampling layer before the final output
- dtype (:obj:`jnp.dtype`, *optional*, defaults to jnp.float32):
- Parameters `dtype`
- """
- in_channels: int
- out_channels: int
- prev_output_channel: int
- dropout: float = 0.0
- num_layers: int = 1
- add_upsample: bool = True
- dtype: jnp.dtype = jnp.float32
-
- def setup(self):
- resnets = []
-
- for i in range(self.num_layers):
- res_skip_channels = self.in_channels if (i == self.num_layers - 1) else self.out_channels
- resnet_in_channels = self.prev_output_channel if i == 0 else self.out_channels
-
- res_block = FlaxResnetBlock2D(
- in_channels=resnet_in_channels + res_skip_channels,
- out_channels=self.out_channels,
- dropout_prob=self.dropout,
- dtype=self.dtype,
- )
- resnets.append(res_block)
-
- self.resnets = resnets
-
- if self.add_upsample:
- self.upsamplers_0 = FlaxUpsample2D(self.out_channels, dtype=self.dtype)
-
- def __call__(self, hidden_states, res_hidden_states_tuple, temb, deterministic=True):
- for resnet in self.resnets:
- # pop res hidden states
- res_hidden_states = res_hidden_states_tuple[-1]
- res_hidden_states_tuple = res_hidden_states_tuple[:-1]
- hidden_states = jnp.concatenate((hidden_states, res_hidden_states), axis=-1)
-
- hidden_states = resnet(hidden_states, temb, deterministic=deterministic)
-
- if self.add_upsample:
- hidden_states = self.upsamplers_0(hidden_states)
-
- return hidden_states
-
-
-class FlaxUNetMidBlock2DCrossAttn(nn.Module):
- r"""
- Cross Attention 2D Mid-level block - original architecture from Unet transformers: https://arxiv.org/abs/2103.06104
-
- Parameters:
- in_channels (:obj:`int`):
- Input channels
- dropout (:obj:`float`, *optional*, defaults to 0.0):
- Dropout rate
- num_layers (:obj:`int`, *optional*, defaults to 1):
-            Number of attention block layers
- attn_num_head_channels (:obj:`int`, *optional*, defaults to 1):
- Number of attention heads of each spatial transformer block
- dtype (:obj:`jnp.dtype`, *optional*, defaults to jnp.float32):
- Parameters `dtype`
- """
- in_channels: int
- dropout: float = 0.0
- num_layers: int = 1
- attn_num_head_channels: int = 1
- use_linear_projection: bool = False
- dtype: jnp.dtype = jnp.float32
-
- def setup(self):
- # there is always at least one resnet
- resnets = [
- FlaxResnetBlock2D(
- in_channels=self.in_channels,
- out_channels=self.in_channels,
- dropout_prob=self.dropout,
- dtype=self.dtype,
- )
- ]
-
- attentions = []
-
- for _ in range(self.num_layers):
- attn_block = FlaxTransformer2DModel(
- in_channels=self.in_channels,
- n_heads=self.attn_num_head_channels,
- d_head=self.in_channels // self.attn_num_head_channels,
- depth=1,
- use_linear_projection=self.use_linear_projection,
- dtype=self.dtype,
- )
- attentions.append(attn_block)
-
- res_block = FlaxResnetBlock2D(
- in_channels=self.in_channels,
- out_channels=self.in_channels,
- dropout_prob=self.dropout,
- dtype=self.dtype,
- )
- resnets.append(res_block)
-
- self.resnets = resnets
- self.attentions = attentions
-
- def __call__(self, hidden_states, temb, encoder_hidden_states, deterministic=True):
- hidden_states = self.resnets[0](hidden_states, temb)
- for attn, resnet in zip(self.attentions, self.resnets[1:]):
- hidden_states = attn(hidden_states, encoder_hidden_states, deterministic=deterministic)
- hidden_states = resnet(hidden_states, temb, deterministic=deterministic)
-
- return hidden_states
diff --git a/spaces/JohnSmith9982/ChuanhuChatGPT/web_assets/html/update.html b/spaces/JohnSmith9982/ChuanhuChatGPT/web_assets/html/update.html
deleted file mode 100644
index 6f005e11a0a11f441ff2c56d05b98402c640a53f..0000000000000000000000000000000000000000
--- a/spaces/JohnSmith9982/ChuanhuChatGPT/web_assets/html/update.html
+++ /dev/null
@@ -1,29 +0,0 @@
-
-
-
-
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/avid-ml/bias-detection/README.md b/spaces/avid-ml/bias-detection/README.md
deleted file mode 100644
index 379e8cbabb93b13ead7401be8308adc44fa5c3e7..0000000000000000000000000000000000000000
--- a/spaces/avid-ml/bias-detection/README.md
+++ /dev/null
@@ -1,52 +0,0 @@
----
-title: Plug-and-Play Bias Detection
-emoji: 🦝
-colorFrom: purple
-colorTo: gray
-sdk: gradio
-sdk_version: 3.23.0
-app_file: app.py
-pinned: false
-license: gpl-3.0
-tags:
- - ethics
- - rigorous
- - inquisitive
----
-
-# Plug-and-Play Bias Detection
-The AVID (AI Vulnerability Database) team is examining a few large language models (LLMs) on Hugging Face. We will develop a way to evaluate and catalog their vulnerabilities in the hopes of encouraging the community to contribute. As a first step, we’re going to pick a single model and try to evaluate it for vulnerabilities on a specific task. Once we have done one model, we’ll see if we can generalize our data sets and tools to function broadly on the Hugging Face platform.
-
-## Vision
-Build a foundation for evaluating LLMs using the Hugging Face platform and start populating our database with real incidents.
-
-## Goals
-* Build, test, and refine our own data sets for evaluating models
-* Identify existing data sets we want to use for evaluating models (Ex. Stereoset, wino_bias, etc.)
-* Test different tools and methods for evaluating LLMs so we can start to create and support some for cataloging vulnerabilities in our database
-* Start populating the database with known, verified, and discovered vulnerabilities for models hosted on Hugging Face
-
-## Resources
-The links below should help anyone who wants to support the project find a place to start. They are not exhaustive, and people should feel free to add anything relevant.
-* [Huggingface.co](https://huggingface.co/) - platform for hosting data sets, models, etc.
-* [Papers With Code](https://paperswithcode.com/) - a platform for the ML community to share research, it may have additional data sets or papers
-* Potential Models
- * [xlm-roberta-base](https://huggingface.co/xlm-roberta-base)
- * [Bert-base-uncased](https://huggingface.co/bert-base-uncased)
- * [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased)
- * [gpt2](https://huggingface.co/gpt2)
-* Data Sets
- * [StereoSet](https://stereoset.mit.edu/) - StereoSet is a dataset that measures stereotype bias in language models. StereoSet consists of 17,000 sentences that measure model preferences across gender, race, religion, and profession.
- * [Wino_bias](https://huggingface.co/datasets/wino_bias) - WinoBias, a Winograd-schema dataset for coreference resolution focused on gender bias.
- * [Jigsaw_unintended_bias](https://huggingface.co/datasets/jigsaw_unintended_bias) - The main target for this dataset is toxicity prediction. Several toxicity subtypes are also available, so the dataset can be used for multi-attribute prediction.
- * [BigScienceBiasEval/bias-shades](https://huggingface.co/datasets/BigScienceBiasEval/bias-shades) - This dataset was curated by hand-crafting stereotype sentences by native speakers from the culture which is being targeted. (Seems incomplete)
- * [md_gender_bias](https://huggingface.co/datasets/md_gender_bias) - The dataset can be used to train a model for classification of various kinds of gender bias.
- * [social_bias_frames](https://huggingface.co/datasets/social_bias_frames) - This dataset supports both classification and generation. Sap et al. developed several models using the SBIC.
- * [BIG-bench/keywords_to_tasks.md at main](https://github.com/google/BIG-bench/blob/main/bigbench/benchmark_tasks/keywords_to_tasks.md#pro-social-behavior) - includes many options for testing bias of different types (gender, religion, etc.)
- * [FB Fairscore](https://github.com/facebookresearch/ResponsibleNLP/tree/main/fairscore) - Has a wide selection of sources, focuses on gender (including non-binary).
-* Papers
- * [Evaluate & Evaluation on the Hub: Better Best Practices for Data and Model Measurement](https://arxiv.org/abs/2210.01970)
- * [On the Dangers of Stochastic Parrots](https://dl.acm.org/doi/10.1145/3442188.3445922)
- * [Language (Technology) is Power: A Critical Survey of “Bias” in NLP](https://aclanthology.org/2020.acl-main.485/)
- * [Measuring Fairness with Biased Rulers: A Comparative Study on Bias Metrics for Pre-trained Language Models](https://aclanthology.org/2022.naacl-main.122/)
- * [Harms of Gender Exclusivity and Challenges in Non-Binary Representation in Language Technologies](https://aclanthology.org/2021.emnlp-main.150/)
diff --git a/spaces/avid-ml/bias-detection/scripts/download_bold.sh b/spaces/avid-ml/bias-detection/scripts/download_bold.sh
deleted file mode 100644
index 341b9ea3f8a35dbc8f58496ede721c9e7908ea12..0000000000000000000000000000000000000000
--- a/spaces/avid-ml/bias-detection/scripts/download_bold.sh
+++ /dev/null
@@ -1,11 +0,0 @@
-#!/bin/bash
-mkdir -p ../prompts
-cd ../prompts
-
-PROMPT_LINK="https://raw.githubusercontent.com/amazon-science/bold/main/prompts"
-
-wget -O gender_prompt.json $PROMPT_LINK/gender_prompt.json
-wget -O political_ideology_prompt.json $PROMPT_LINK/political_ideology_prompt.json
-wget -O profession_prompt.json $PROMPT_LINK/profession_prompt.json
-wget -O race_prompt.json $PROMPT_LINK/race_prompt.json
-wget -O religious_ideology_prompt.json $PROMPT_LINK/religious_ideology_prompt.json
diff --git a/spaces/awacke1/Assess.LOINC.Panel.Extractor/README.md b/spaces/awacke1/Assess.LOINC.Panel.Extractor/README.md
deleted file mode 100644
index 73f88d0d53521565667f159a8168bbff5af7fea2..0000000000000000000000000000000000000000
--- a/spaces/awacke1/Assess.LOINC.Panel.Extractor/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Assess.LOINC.Panel.Extractor
-emoji: 🔥
-colorFrom: blue
-colorTo: indigo
-sdk: streamlit
-sdk_version: 1.17.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/awacke1/Docker-FlanT5-TextGeneratorTranslator/static/script.js b/spaces/awacke1/Docker-FlanT5-TextGeneratorTranslator/static/script.js
deleted file mode 100644
index efd05c5d1e76ecd3d0e41927b073c8d10f1e8e20..0000000000000000000000000000000000000000
--- a/spaces/awacke1/Docker-FlanT5-TextGeneratorTranslator/static/script.js
+++ /dev/null
@@ -1,21 +0,0 @@
-const textGenForm = document.querySelector('.text-gen-form');
-
-const translateText = async (text) => {
- const inferResponse = await fetch(`infer_t5?input=${text}`);
- const inferJson = await inferResponse.json();
-
- return inferJson.output;
-};
-
-textGenForm.addEventListener('submit', async (event) => {
- event.preventDefault();
-
- const textGenInput = document.getElementById('text-gen-input');
- const textGenParagraph = document.querySelector('.text-gen-output');
-
- try {
- textGenParagraph.textContent = await translateText(textGenInput.value);
- } catch (err) {
- console.error(err);
- }
-});
\ No newline at end of file
diff --git a/spaces/awacke1/UnitedStatesMapAIandNLP/README.md b/spaces/awacke1/UnitedStatesMapAIandNLP/README.md
deleted file mode 100644
index 5c6afdf30d2c7ef0a86793cbcd6a7681629c0e03..0000000000000000000000000000000000000000
--- a/spaces/awacke1/UnitedStatesMapAIandNLP/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: UnitedStatesMapAIandNLP
-emoji: 🇺🇸🗺️🤖
-colorFrom: red
-colorTo: green
-sdk: streamlit
-sdk_version: 1.25.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/objects/LightningStorm.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/objects/LightningStorm.js
deleted file mode 100644
index a6a22ebe5fa06caed6bcd903d1f04141dbb72dca..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/examples/js/objects/LightningStorm.js
+++ /dev/null
@@ -1,238 +0,0 @@
-/**
- * @author yomboprime https://github.com/yomboprime
- *
- * @fileoverview Lightning strike object generator
- *
- *
- * Usage
- *
- * var myStorm = new THREE.LightningStorm( paramsObject );
- * myStorm.position.set( ... );
- * scene.add( myStorm );
- * ...
- * myStorm.update( currentTime );
- *
- * The "currentTime" can only go forwards or be stopped.
- *
- *
- * LightningStorm parameters:
- *
- * @param {double} size Size of the storm. If no 'onRayPosition' parameter is defined, it means the side of the rectangle the storm covers.
- *
- * @param {double} minHeight Minimum height a ray can start at. If no 'onRayPosition' parameter is defined, it means the height above plane y = 0.
- *
- * @param {double} maxHeight Maximum height a ray can start at. If no 'onRayPosition' parameter is defined, it means the height above plane y = 0.
- *
- * @param {double} maxSlope The maximum inclination slope of a ray. If no 'onRayPosition' parameter is defined, it means the slope relative to plane y = 0.
- *
- * @param {integer} maxLightnings Greater than 0. The maximum number of simultaneous rays.
- *
- * @param {double} lightningMinPeriod minimum time between two consecutive rays.
- *
- * @param {double} lightningMaxPeriod maximum time between two consecutive rays.
- *
- * @param {double} lightningMinDuration The minimum time a ray can last.
- *
- * @param {double} lightningMaxDuration The maximum time a ray can last.
- *
- * @param {Object} lightningParameters The parameters for created rays. See THREE.LightningStrike (geometry)
- *
- * @param {Material} lightningMaterial The THREE.Material used for the created rays.
- *
- * @param {function} onRayPosition Optional callback with two Vector3 parameters (source, dest). You can set here the start and end points for each created ray, using the standard size, minHeight, etc parameters and other values in your algorithm.
- *
- * @param {function} onLightningDown This optional callback is called with one parameter (lightningStrike) when a ray ends propagating, so it has hit the ground.
- *
- *
-*/
-
-THREE.LightningStorm = function ( stormParams ) {
-
- THREE.Object3D.call( this );
-
- // Parameters
-
- stormParams = stormParams || {};
- this.stormParams = stormParams;
-
- stormParams.size = stormParams.size !== undefined ? stormParams.size : 1000.0;
- stormParams.minHeight = stormParams.minHeight !== undefined ? stormParams.minHeight : 80.0;
- stormParams.maxHeight = stormParams.maxHeight !== undefined ? stormParams.maxHeight : 100.0;
- stormParams.maxSlope = stormParams.maxSlope !== undefined ? stormParams.maxSlope : 1.1;
-
- stormParams.maxLightnings = stormParams.maxLightnings !== undefined ? stormParams.maxLightnings : 3;
-
- stormParams.lightningMinPeriod = stormParams.lightningMinPeriod !== undefined ? stormParams.lightningMinPeriod : 3.0;
- stormParams.lightningMaxPeriod = stormParams.lightningMaxPeriod !== undefined ? stormParams.lightningMaxPeriod : 7.0;
-
- stormParams.lightningMinDuration = stormParams.lightningMinDuration !== undefined ? stormParams.lightningMinDuration : 1.0;
- stormParams.lightningMaxDuration = stormParams.lightningMaxDuration !== undefined ? stormParams.lightningMaxDuration : 2.5;
-
- this.lightningParameters = THREE.LightningStrike.copyParameters( stormParams.lightningParameters, stormParams.lightningParameters );
-
- this.lightningParameters.isEternal = false;
-
- this.lightningMaterial = stormParams.lightningMaterial !== undefined ? stormParams.lightningMaterial : new THREE.MeshBasicMaterial( { color: 0xB0FFFF } );
-
- if ( stormParams.onRayPosition !== undefined ) {
-
- this.onRayPosition = stormParams.onRayPosition;
-
- } else {
-
- this.onRayPosition = function ( source, dest ) {
-
- dest.set( ( Math.random() - 0.5 ) * stormParams.size, 0, ( Math.random() - 0.5 ) * stormParams.size );
-
- var height = THREE.Math.lerp( stormParams.minHeight, stormParams.maxHeight, Math.random() );
-
- source.set( stormParams.maxSlope * ( 2 * Math.random() - 1 ), 1, stormParams.maxSlope * ( 2 * Math.random() - 1 ) ).multiplyScalar( height ).add( dest );
-
- };
-
- }
-
- this.onLightningDown = stormParams.onLightningDown;
-
- // Internal state
-
- this.inited = false;
- this.nextLightningTime = 0;
- this.lightningsMeshes = [];
- this.deadLightningsMeshes = [];
-
- for ( var i = 0; i < this.stormParams.maxLightnings; i ++ ) {
-
- var lightning = new THREE.LightningStrike( THREE.LightningStrike.copyParameters( {}, this.lightningParameters ) );
- var mesh = new THREE.Mesh( lightning, this.lightningMaterial );
- this.deadLightningsMeshes.push( mesh );
-
- }
-
-};
-
-THREE.LightningStorm.prototype = Object.create( THREE.Object3D.prototype );
-
-THREE.LightningStorm.prototype.constructor = THREE.LightningStorm;
-
-THREE.LightningStorm.prototype.isLightningStorm = true;
-
-THREE.LightningStorm.prototype.update = function ( time ) {
-
- if ( ! this.inited ) {
-
- this.nextLightningTime = this.getNextLightningTime( time ) * Math.random();
- this.inited = true;
-
- }
-
- if ( time >= this.nextLightningTime ) {
-
- // Lightning creation
-
- var lightningMesh = this.deadLightningsMeshes.pop();
-
- if ( lightningMesh ) {
-
- var lightningParams1 = THREE.LightningStrike.copyParameters( lightningMesh.geometry.rayParameters, this.lightningParameters );
-
- lightningParams1.birthTime = time;
- lightningParams1.deathTime = time + THREE.Math.lerp( this.stormParams.lightningMinDuration, this.stormParams.lightningMaxDuration, Math.random() );
-
- this.onRayPosition( lightningParams1.sourceOffset, lightningParams1.destOffset );
-
- lightningParams1.noiseSeed = Math.random();
-
- this.add( lightningMesh );
-
- this.lightningsMeshes.push( lightningMesh );
-
- }
-
- // Schedule next lightning
- this.nextLightningTime = this.getNextLightningTime( time );
-
- }
-
- var i = 0, il = this.lightningsMeshes.length;
-
- while ( i < il ) {
-
- var mesh = this.lightningsMeshes[ i ];
-
- var lightning = mesh.geometry;
-
- var prevState = lightning.state;
-
- lightning.update( time );
-
- if ( prevState === THREE.LightningStrike.RAY_PROPAGATING && lightning.state > prevState ) {
-
- if ( this.onLightningDown ) {
-
- this.onLightningDown( lightning );
-
- }
-
- }
-
- if ( lightning.state === THREE.LightningStrike.RAY_EXTINGUISHED ) {
-
- // Lightning is to be destroyed
-
- this.lightningsMeshes.splice( this.lightningsMeshes.indexOf( mesh ), 1 );
-
- this.deadLightningsMeshes.push( mesh );
-
- this.remove( mesh );
-
- il --;
-
- } else {
-
- i ++;
-
- }
-
- }
-
-};
-
-THREE.LightningStorm.prototype.getNextLightningTime = function ( currentTime ) {
-
- return currentTime + THREE.Math.lerp( this.stormParams.lightningMinPeriod, this.stormParams.lightningMaxPeriod, Math.random() ) / ( this.stormParams.maxLightnings + 1 );
-
-};
-
-THREE.LightningStorm.prototype.copy = function ( source ) {
-
- THREE.Object3D.prototype.copy.call( this, source );
-
- this.stormParams.size = source.stormParams.size;
- this.stormParams.minHeight = source.stormParams.minHeight;
- this.stormParams.maxHeight = source.stormParams.maxHeight;
- this.stormParams.maxSlope = source.stormParams.maxSlope;
-
- this.stormParams.maxLightnings = source.stormParams.maxLightnings;
-
- this.stormParams.lightningMinPeriod = source.stormParams.lightningMinPeriod;
- this.stormParams.lightningMaxPeriod = source.stormParams.lightningMaxPeriod;
-
- this.stormParams.lightningMinDuration = source.stormParams.lightningMinDuration;
- this.stormParams.lightningMaxDuration = source.stormParams.lightningMaxDuration;
-
- this.lightningParameters = THREE.LightningStrike.copyParameters( {}, source.lightningParameters );
-
- this.lightningMaterial = source.stormParams.lightningMaterial;
-
- this.onLightningDown = source.onLightningDown;
-
- return this;
-
-};
-
-THREE.LightningStorm.prototype.clone = function () {
-
- return new this.constructor( this.stormParams ).copy( this );
-
-};
diff --git a/spaces/banana-projects/web3d/node_modules/three/src/helpers/Box3Helper.d.ts b/spaces/banana-projects/web3d/node_modules/three/src/helpers/Box3Helper.d.ts
deleted file mode 100644
index 0033bc1259b0a8a09ef1dbe2014a24b91ad50683..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/src/helpers/Box3Helper.d.ts
+++ /dev/null
@@ -1,11 +0,0 @@
-import { Object3D } from './../core/Object3D';
-import { Box3 } from './../math/Box3';
-import { Color } from './../math/Color';
-import { LineSegments } from './../objects/LineSegments';
-
-export class Box3Helper extends LineSegments {
-  constructor(box?: Box3, color?: Color);
-
- type: string;
- box: Box3;
-}
diff --git a/spaces/bankholdup/stylegan_petbreeder/e4e/utils/train_utils.py b/spaces/bankholdup/stylegan_petbreeder/e4e/utils/train_utils.py
deleted file mode 100644
index 0c55177f7442010bc1fcc64de3d142585c22adc0..0000000000000000000000000000000000000000
--- a/spaces/bankholdup/stylegan_petbreeder/e4e/utils/train_utils.py
+++ /dev/null
@@ -1,13 +0,0 @@
-
-def aggregate_loss_dict(agg_loss_dict):
- mean_vals = {}
- for output in agg_loss_dict:
- for key in output:
- mean_vals[key] = mean_vals.setdefault(key, []) + [output[key]]
- for key in mean_vals:
- if len(mean_vals[key]) > 0:
- mean_vals[key] = sum(mean_vals[key]) / len(mean_vals[key])
- else:
- print('{} has no value'.format(key))
- mean_vals[key] = 0
- return mean_vals
diff --git a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327222253.py b/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327222253.py
deleted file mode 100644
index fe2dcde2a2493f2ee33d4765b83a4724e98aeab7..0000000000000000000000000000000000000000
--- a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327222253.py
+++ /dev/null
@@ -1,67 +0,0 @@
-import os
-os.system("pip install gfpgan")
-
-#os.system("pip freeze")
-#os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v0.2.0/GFPGANCleanv1-NoCE-C2.pth -P .")
-import random
-import gradio as gr
-from PIL import Image
-import torch
-# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/ab/Abraham_Lincoln_O-77_matte_collodion_print.jpg/1024px-Abraham_Lincoln_O-77_matte_collodion_print.jpg', 'lincoln.jpg')
-# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/5/50/Albert_Einstein_%28Nobel%29.png', 'einstein.png')
-# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/9/9d/Thomas_Edison2.jpg/1024px-Thomas_Edison2.jpg', 'edison.jpg')
-# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/a9/Henry_Ford_1888.jpg/1024px-Henry_Ford_1888.jpg', 'Henry.jpg')
-# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/0/06/Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg/800px-Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg', 'Frida.jpg')
-
-
-import cv2
-import glob
-import numpy as np
-from basicsr.utils import imwrite
-from gfpgan import GFPGANer
-
-bg_upsampler = None
-
-
-
-# set up GFPGAN restorer
-restorer = GFPGANer(
- model_path='experiments/pretrained_models/GFPGANv1.3.pth',
- upscale=2,
- arch='clean',
- channel_multiplier=2,
- bg_upsampler=bg_upsampler)
-
-
-def inference(img):
- input_img = cv2.imread(img, cv2.IMREAD_COLOR)
- cropped_faces, restored_faces, restored_img = restorer.enhance(
- input_img, has_aligned=False, only_center_face=False, paste_back=True)
-
- #return Image.fromarray(restored_faces[0][:,:,::-1])
- return Image.fromarray(restored_img[:, :, ::-1])
-
-title = "让美好回忆更清晰"  # "Make treasured memories clearer"
-
-
-description = "上传老照片,点击Submit,稍等片刻,右侧Output将照片另存为即可。"  # "Upload an old photo, click Submit, wait a moment, then save the restored photo from the Output panel on the right."
-
-article = "
-
-
-
-
diff --git a/spaces/bioriAsaeru/text-to-voice/Dil Pardesi Ho Gayaa movie 1 in hindi 3gp free download A tale of courage and sacrifice.md b/spaces/bioriAsaeru/text-to-voice/Dil Pardesi Ho Gayaa movie 1 in hindi 3gp free download A tale of courage and sacrifice.md
deleted file mode 100644
index 4a385bea2d701644b2cbd4d1b142ff7bc7a28307..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Dil Pardesi Ho Gayaa movie 1 in hindi 3gp free download A tale of courage and sacrifice.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
Dil Pardesi Ho Gayaa movie 1 in hindi 3gp free download
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/bioriAsaeru/text-to-voice/HD Online Player (united Bank Of India Kyc Form Pdf Do).md b/spaces/bioriAsaeru/text-to-voice/HD Online Player (united Bank Of India Kyc Form Pdf Do).md
deleted file mode 100644
index 02124f86b8fddaab3be25e162230e2f4eeb41cd7..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/HD Online Player (united Bank Of India Kyc Form Pdf Do).md
+++ /dev/null
@@ -1,6 +0,0 @@
-
HD Online Player (united bank of india kyc form pdf do)
-
-Download free HD wallpapers of Bollywood celebrities and recent movies. y ... [शिकायत करें] Bank Lokpal Comlaint Kaise Kare ? ... _Computer A to Z full form. ... Google Pay se Indane Gas (Indian Oil) Cylinder Book aur Payment Kaise Kare. ... के 7 Easy उपाय. listube is a free online on-demand music player. 4d29de3e1b
-
-
-
diff --git a/spaces/bioriAsaeru/text-to-voice/Intracranial Pressure Monitoring Principles Techniques and Complications (PDF).md b/spaces/bioriAsaeru/text-to-voice/Intracranial Pressure Monitoring Principles Techniques and Complications (PDF).md
deleted file mode 100644
index 14452fe544abf40ef549913c2151af5e14049957..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Intracranial Pressure Monitoring Principles Techniques and Complications (PDF).md
+++ /dev/null
@@ -1,23 +0,0 @@
-
-
Brain Trauma Foundation (BTF) guidelines recommend intracranial pressure (ICP) monitoring in patients who sustained severe traumatic brain injury (TBI). Compliance with BTF guidelines is variable, and the effect of ICP monitoring on outcomes remains a controversial issue. The purpose of this study was to assess guideline compliance in patients who sustain a severe TBI and to analyze the effect of ICP monitoring on outcomes.
Intracranial pressure (ICP) monitoring in severe TBI is recommended in the Brain Trauma Foundation (BTF) guidelines. The guidelines recommend ICP monitoring in patients with a GCS of 8 or less and an abnormal CT scan, or a normal CT scan with at least two of the following: age over 40 years, unilateral/bilateral motor posturing, and systolic blood pressure below 90 mmHg.
-
Variables extracted were demographics, comorbidities, mechanism of injury, injury specifics (epidural, subdural, subarachnoid, intracranial hemorrhage and diffuse axonal injury), AIS for each body area, Injury Severity Score (ISS), vital signs in the emergency department, ICP monitoring and type, compliance with BTF guidelines and craniectomy. Outcomes included in-hospital mortality, complications, ventilation days, intensive care unit (ICU) and hospital length of stay (HLOS), and functional independence at discharge.
-
The theoretical rationale for ICP monitoring is to maintain adequate cerebral blood flow and oxygenation by preventing or treating intracranial hypertension in a timely fashion. This, in turn, should decrease the risk of secondary brain injury and improve survival and neurologic functional outcomes. The extensively publicized relationship between intracranial hypertension and poor outcomes has led to the widespread use of ICP > 20 mmHg as the threshold for therapeutic interventions to lower the intracranial pressure. ICP monitoring can not only drive intervention, but also allow for evaluation of the response to various therapeutic pressure lowering interventions.
-
Intracranial hypertension (ICH) complicates roughly 25% of acute liver failure (ALF) patients with grade III/IV encephalopathy. Intracranial pressure (ICP) monitoring is controversial due to complications in 5 to 20% and absence of documented mortality benefit.
-
The most common etiology of ALF was acetaminophen (51%, P = 0.13 between groups). Of ICP monitored (ICPM) patients, 85% (n = 121) received devices within 24 hours of admission to study. ICPM patients were significantly younger (36 ± 6 years vs. 43 ± 15 years, P < 0.001) than controls, more likely to be on renal replacement therapy (48% vs. 31%, P < 0.001) but less likely to be on vasopressors (20% vs. 32%, P = 0.008). ICPM patients were given more ICH directed therapies (mannitol 43% vs. 13%, hypertonic saline 21% vs. 6%, hypothermia 29% vs. 11%, P < 0.001 for each comparison). For ICPM patients, the median INR on the day of monitor insertion was 2.2 (1.6 to 2.9) and platelet count 116 (84 to 171); 74% were given FFP (vs. 46% controls, P < 0.001) and 19% (vs. 14% controls, P = 0.14) received platelets. ICP monitoring was also strongly associated with listing (78% vs. 27%, P < 0.001) and receipt of liver transplant (42% vs. 18%, P < 0.001). Twenty-one-day mortality was similar between ICPM patients (33%) and controls (37%, P = 0.33) when all or only nontransplanted patients (46% vs. 45%, 0.8) were considered. Of 66 ICPM patients with detailed information, 18 (29%) had evidence of ICH (ICP >25 mmHg) at the time of ICPM insertion (maximum ICP on day 1 ~18 (12 to 26) mmHg). Of 49 patients with a known ICPM device, 14 patients received epidural catheters, six subdural, 11 intraparenchymal, seven intraventricular and 11 lumbar monitors. In only one of 49 ICPM patients was intracranial hemorrhage reported, and this patient survived.
-
In ALF patients, ICP monitor placement is strongly associated with liver transplantation but not with overall or transplant free mortality. In the absence of ICP monitoring, ALF patients may be less aggressively treated for intracranial hypertension. The value of ICP monitoring in ALF remains to be determined but ICPM placement clearly affects the frequency of interventions for elevated ICP.
-
Intracranial pressure (ICP) monitoring is recommended for severe traumatic brain injuries (TBI) but some data suggests it may not improve outcomes. The objective was to investigate the effect of ICP monitoring among TBI.
-
Severe traumatic brain injury (TBI) management focuses on preventing secondary insults [1]. The Brain Trauma Foundation severe TBI guideline recommends treatment guided by monitoring modalities including intracranial pressure (ICP) monitors [2]. It states that Level 2B evidence shows ICP monitoring may reduce mortality rates [2]. The Brain Injury Guideline recommends that patients with severe TBI, categorized as Brain Injury Guideline 3, receive repeat imaging and a neurosurgical consultation and be admitted to the hospital, but it makes no ICP monitoring recommendations [3].
-
-
The COVID-19 pandemic had a significant impact on the intracranial pressure monitoring market owing to factors such as the diversion of resources to COVID-19 patients, the decline in visits by non-COVID-19 patients to hospitals and clinics, and the cancellation or rescheduling of procedures. Several studies reported that COVID-19 had a severe impact on people with medical conditions such as traumatic brain injury (TBI), which may have increased the demand for intracranial pressure monitoring. For instance, according to a research study published in September 2021 by the National Institutes of Health, elderly patients with moderate to severe TBI experienced a considerably higher risk of in-hospital mortality if they had a coronavirus infection, and patients with COVID-19 were 5.45 times more likely to die before discharge than TBI patients who were not COVID-19 positive. Hence, COVID-19 had a significant impact on the intracranial pressure monitoring market. However, the market has since returned to its pre-pandemic level of demand and is expected to witness strong growth in the coming years.
-
Key factors propelling the market growth include the increasing prevalence of neurodegenerative disorders, the growing geriatric population, increasing cases of head and brain injury, and increasing awareness and technological advancements in intracranial pressure monitoring.
-
Similarly, according to another study by the National Institutes of Health published in March 2022, the overall annual incidence of intracerebral hemorrhage (ICH) in the United States is 23.15 per 100,000 people, and it has been observed that ICH incidence increases with age and is increasing in young and middle-aged Americans. Hence, the high burden of diseases associated with the head and brain is expected to fuel growth in the intracranial pressure monitoring market over the forecast period.
-
Furthermore, technological advancements in the field of intracranial pressure monitoring are expected to propel market growth. For instance, in February 2021, Novasignal Corporation released a cloud-based app to deliver alarms to clinicians directly from its cerebral ultrasound device. The app enables clinicians to receive immediate notifications from the Novaguide device, a transcranial Doppler ultrasound technology that combines robotics and artificial intelligence (AI) to provide real-time monitoring of blood flow in the brain.
-
Hence, owing to the above-mentioned factors, the intracranial pressure monitoring market is expected to grow over the forecast period. However, the lack of skilled professionals and the high cost associated with intracranial pressure monitoring is expected to restrain the growth of the market over the forecast period.
-
According to a study from the National Institutes of Health published in June 2022, elevated intracranial pressure is most frequently caused by traumatic brain injuries, which commonly result from car accidents, violent acts, sports injuries, explosions, and other combat-related traumas; the high burden of road accidents in developing and underdeveloped countries is therefore expected to have a significant impact on the growth of the TBI segment. Hence, due to the above-mentioned factors, the traumatic brain injury segment is expected to hold a significant share of the studied market over the forecast period.
-
The North American region is expected to occupy a significant market share in the intracranial pressure monitoring market over the forecast period owing to the presence of a high burden of head and brain injury/diseases, robust healthcare infrastructure, and the presence of some of the key players of the market in the region.
-
In the North American region, the United States is expected to be a major market owing to a high prevalence of diseases requiring intracranial pressure monitoring, the launch of new products, and the presence of a well-established healthcare system in the country. For instance, according to a research study from the American Heart Association published in March 2022, the overall annual incidence of intracerebral hemorrhage (ICH) in the United States is 23.15 per 100,000 people, and it has been observed that ICH incidence increases with age and is increasing in young and middle-aged Americans.
-
Additionally, the technological advancement in the devices used in monitoring intracranial pressure is further expected to boost market growth. For instance, in November 2021, FUJIFILM Healthcare Americas Corporation launched its advanced, high-field open MRI system called Velocity MRI System during the 2021 Radiological Society of North America (RSNA) event. Therefore, due to the above-mentioned factors, the North American region is expected to have a significant share in the intracranial pressure monitoring market during the forecast period.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/modeling/roi_heads/roi_head.py b/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/modeling/roi_heads/roi_head.py
deleted file mode 100644
index 8f9d9a612645b06c04648c2be4d556e3467204a9..0000000000000000000000000000000000000000
--- a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/modeling/roi_heads/roi_head.py
+++ /dev/null
@@ -1,221 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-import numpy as np
-from typing import Dict, List, Optional
-import fvcore.nn.weight_init as weight_init
-import torch
-import torch.nn as nn
-from torch.nn import functional as F
-
-from detectron2.layers import Conv2d, ShapeSpec, get_norm
-from detectron2.modeling import ROI_HEADS_REGISTRY, StandardROIHeads
-from detectron2.modeling.poolers import ROIPooler
-from detectron2.modeling.roi_heads import select_foreground_proposals
-from detectron2.structures import ImageList, Instances
-
-from .. import (
- build_densepose_data_filter,
- build_densepose_embedder,
- build_densepose_head,
- build_densepose_losses,
- build_densepose_predictor,
- densepose_inference,
-)
-
-
-class Decoder(nn.Module):
- """
- A semantic segmentation head described in detail in the Panoptic Feature Pyramid Networks paper
- (https://arxiv.org/abs/1901.02446). It takes FPN features as input and merges information from
- all levels of the FPN into single output.
- """
-
- def __init__(self, cfg, input_shape: Dict[str, ShapeSpec], in_features):
- super(Decoder, self).__init__()
-
- # fmt: off
- self.in_features = in_features
- feature_strides = {k: v.stride for k, v in input_shape.items()}
- feature_channels = {k: v.channels for k, v in input_shape.items()}
- num_classes = cfg.MODEL.ROI_DENSEPOSE_HEAD.DECODER_NUM_CLASSES
- conv_dims = cfg.MODEL.ROI_DENSEPOSE_HEAD.DECODER_CONV_DIMS
- self.common_stride = cfg.MODEL.ROI_DENSEPOSE_HEAD.DECODER_COMMON_STRIDE
- norm = cfg.MODEL.ROI_DENSEPOSE_HEAD.DECODER_NORM
- # fmt: on
-
- self.scale_heads = []
- for in_feature in self.in_features:
- head_ops = []
- head_length = max(
- 1, int(np.log2(feature_strides[in_feature]) - np.log2(self.common_stride))
- )
- for k in range(head_length):
- conv = Conv2d(
- feature_channels[in_feature] if k == 0 else conv_dims,
- conv_dims,
- kernel_size=3,
- stride=1,
- padding=1,
- bias=not norm,
- norm=get_norm(norm, conv_dims),
- activation=F.relu,
- )
- weight_init.c2_msra_fill(conv)
- head_ops.append(conv)
- if feature_strides[in_feature] != self.common_stride:
- head_ops.append(
- nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
- )
- self.scale_heads.append(nn.Sequential(*head_ops))
- self.add_module(in_feature, self.scale_heads[-1])
- self.predictor = Conv2d(conv_dims, num_classes, kernel_size=1, stride=1, padding=0)
- weight_init.c2_msra_fill(self.predictor)
-
- def forward(self, features: List[torch.Tensor]):
- for i, _ in enumerate(self.in_features):
- if i == 0:
- x = self.scale_heads[i](features[i])
- else:
- x = x + self.scale_heads[i](features[i])
- x = self.predictor(x)
- return x
-
-
-@ROI_HEADS_REGISTRY.register()
-class DensePoseROIHeads(StandardROIHeads):
- """
- A Standard ROIHeads which contains an addition of DensePose head.
- """
-
- def __init__(self, cfg, input_shape):
- super().__init__(cfg, input_shape)
- self._init_densepose_head(cfg, input_shape)
-
- def _init_densepose_head(self, cfg, input_shape):
- # fmt: off
- self.densepose_on = cfg.MODEL.DENSEPOSE_ON
- if not self.densepose_on:
- return
- self.densepose_data_filter = build_densepose_data_filter(cfg)
- dp_pooler_resolution = cfg.MODEL.ROI_DENSEPOSE_HEAD.POOLER_RESOLUTION
- dp_pooler_sampling_ratio = cfg.MODEL.ROI_DENSEPOSE_HEAD.POOLER_SAMPLING_RATIO
- dp_pooler_type = cfg.MODEL.ROI_DENSEPOSE_HEAD.POOLER_TYPE
- self.use_decoder = cfg.MODEL.ROI_DENSEPOSE_HEAD.DECODER_ON
- # fmt: on
- if self.use_decoder:
- dp_pooler_scales = (1.0 / input_shape[self.in_features[0]].stride,)
- else:
- dp_pooler_scales = tuple(1.0 / input_shape[k].stride for k in self.in_features)
- in_channels = [input_shape[f].channels for f in self.in_features][0]
-
- if self.use_decoder:
- self.decoder = Decoder(cfg, input_shape, self.in_features)
-
- self.densepose_pooler = ROIPooler(
- output_size=dp_pooler_resolution,
- scales=dp_pooler_scales,
- sampling_ratio=dp_pooler_sampling_ratio,
- pooler_type=dp_pooler_type,
- )
- self.densepose_head = build_densepose_head(cfg, in_channels)
- self.densepose_predictor = build_densepose_predictor(
- cfg, self.densepose_head.n_out_channels
- )
- self.densepose_losses = build_densepose_losses(cfg)
- self.embedder = build_densepose_embedder(cfg)
-
- def _forward_densepose(self, features: Dict[str, torch.Tensor], instances: List[Instances]):
- """
- Forward logic of the densepose prediction branch.
-
- Args:
- features (dict[str, Tensor]): input data as a mapping from feature
- map name to tensor. Axis 0 represents the number of images `N` in
- the input data; axes 1-3 are channels, height, and width, which may
- vary between feature maps (e.g., if a feature pyramid is used).
- instances (list[Instances]): length `N` list of `Instances`. The i-th
- `Instances` contains instances for the i-th input image,
- In training, they can be the proposals.
- In inference, they can be the predicted boxes.
-
- Returns:
- In training, a dict of losses.
- In inference, update `instances` with new fields "densepose" and return it.
- """
- if not self.densepose_on:
- return {} if self.training else instances
-
- features_list = [features[f] for f in self.in_features]
- if self.training:
- proposals, _ = select_foreground_proposals(instances, self.num_classes)
- features_list, proposals = self.densepose_data_filter(features_list, proposals)
- if len(proposals) > 0:
- proposal_boxes = [x.proposal_boxes for x in proposals]
-
- if self.use_decoder:
- # pyre-fixme[29]: `Union[nn.Module, torch.Tensor]` is not a
- # function.
- features_list = [self.decoder(features_list)]
-
- features_dp = self.densepose_pooler(features_list, proposal_boxes)
- densepose_head_outputs = self.densepose_head(features_dp)
- densepose_predictor_outputs = self.densepose_predictor(densepose_head_outputs)
- densepose_loss_dict = self.densepose_losses(
- proposals, densepose_predictor_outputs, embedder=self.embedder
- )
- return densepose_loss_dict
- else:
- pred_boxes = [x.pred_boxes for x in instances]
-
- if self.use_decoder:
- # pyre-fixme[29]: `Union[nn.Module, torch.Tensor]` is not a function.
- features_list = [self.decoder(features_list)]
-
- features_dp = self.densepose_pooler(features_list, pred_boxes)
- if len(features_dp) > 0:
- densepose_head_outputs = self.densepose_head(features_dp)
- densepose_predictor_outputs = self.densepose_predictor(densepose_head_outputs)
- else:
- densepose_predictor_outputs = None
-
- densepose_inference(densepose_predictor_outputs, instances)
- return instances
-
- def forward(
- self,
- images: ImageList,
- features: Dict[str, torch.Tensor],
- proposals: List[Instances],
- targets: Optional[List[Instances]] = None,
- ):
- instances, losses = super().forward(images, features, proposals, targets)
- del targets, images
-
- if self.training:
- losses.update(self._forward_densepose(features, instances))
- return instances, losses
-
- def forward_with_given_boxes(
- self, features: Dict[str, torch.Tensor], instances: List[Instances]
- ):
- """
- Use the given boxes in `instances` to produce other (non-box) per-ROI outputs.
-
- This is useful for downstream tasks where a box is known, but need to obtain
- other attributes (outputs of other heads).
- Test-time augmentation also uses this.
-
- Args:
- features: same as in `forward()`
- instances (list[Instances]): instances to predict other outputs. Expect the keys
- "pred_boxes" and "pred_classes" to exist.
-
- Returns:
- instances (list[Instances]):
- the same `Instances` objects, with extra
- fields such as `pred_masks` or `pred_keypoints`.
- """
-
- instances = super().forward_with_given_boxes(features, instances)
- instances = self._forward_densepose(features, instances)
- return instances
diff --git a/spaces/caffeinum/VToonify/vtoonify/util.py b/spaces/caffeinum/VToonify/vtoonify/util.py
deleted file mode 100644
index 01ad2930c55d07866dee02e019d359bb78f65fc7..0000000000000000000000000000000000000000
--- a/spaces/caffeinum/VToonify/vtoonify/util.py
+++ /dev/null
@@ -1,229 +0,0 @@
-import numpy as np
-import matplotlib.pyplot as plt
-from PIL import Image
-import cv2
-import random
-import math
-import argparse
-import dlib  # used by get_video_crop_parameter below
-import torch
-from torch.utils import data
-from torch.nn import functional as F
-from torch import autograd
-from torch.nn import init
-import torchvision.transforms as transforms
-from model.stylegan.op import conv2d_gradfix
-from model.encoder.encoders.psp_encoders import GradualStyleEncoder
-from model.encoder.align_all_parallel import get_landmark
-
-def visualize(img_arr, dpi):
- plt.figure(figsize=(10,10),dpi=dpi)
- plt.imshow(((img_arr.detach().cpu().numpy().transpose(1, 2, 0) + 1.0) * 127.5).astype(np.uint8))
- plt.axis('off')
- plt.show()
-
-def save_image(img, filename):
- tmp = ((img.detach().cpu().numpy().transpose(1, 2, 0) + 1.0) * 127.5).astype(np.uint8)
- cv2.imwrite(filename, cv2.cvtColor(tmp, cv2.COLOR_RGB2BGR))
-
-def load_image(filename):
- transform = transforms.Compose([
- transforms.ToTensor(),
- transforms.Normalize(mean=[0.5, 0.5, 0.5],std=[0.5,0.5,0.5]),
- ])
-
- img = Image.open(filename)
- img = transform(img)
- return img.unsqueeze(dim=0)
-
-def data_sampler(dataset, shuffle, distributed):
- if distributed:
- return data.distributed.DistributedSampler(dataset, shuffle=shuffle)
-
- if shuffle:
- return data.RandomSampler(dataset)
-
- else:
- return data.SequentialSampler(dataset)
-
-
-def requires_grad(model, flag=True):
- for p in model.parameters():
- p.requires_grad = flag
-
-
-def accumulate(model1, model2, decay=0.999):
- par1 = dict(model1.named_parameters())
- par2 = dict(model2.named_parameters())
-
- for k in par1.keys():
- par1[k].data.mul_(decay).add_(par2[k].data, alpha=1 - decay)
-
-
-def sample_data(loader):
- while True:
- for batch in loader:
- yield batch
-
-
-def d_logistic_loss(real_pred, fake_pred):
- real_loss = F.softplus(-real_pred)
- fake_loss = F.softplus(fake_pred)
-
- return real_loss.mean() + fake_loss.mean()
-
-
-def d_r1_loss(real_pred, real_img):
- with conv2d_gradfix.no_weight_gradients():
- grad_real, = autograd.grad(
- outputs=real_pred.sum(), inputs=real_img, create_graph=True
- )
- grad_penalty = grad_real.pow(2).reshape(grad_real.shape[0], -1).sum(1).mean()
-
- return grad_penalty
-
-
-def g_nonsaturating_loss(fake_pred):
- loss = F.softplus(-fake_pred).mean()
-
- return loss
-
-
-def g_path_regularize(fake_img, latents, mean_path_length, decay=0.01):
- noise = torch.randn_like(fake_img) / math.sqrt(
- fake_img.shape[2] * fake_img.shape[3]
- )
- grad, = autograd.grad(
- outputs=(fake_img * noise).sum(), inputs=latents, create_graph=True
- )
- path_lengths = torch.sqrt(grad.pow(2).sum(2).mean(1))
-
- path_mean = mean_path_length + decay * (path_lengths.mean() - mean_path_length)
-
- path_penalty = (path_lengths - path_mean).pow(2).mean()
-
- return path_penalty, path_mean.detach(), path_lengths
-
-
-def make_noise(batch, latent_dim, n_noise, device):
- if n_noise == 1:
- return torch.randn(batch, latent_dim, device=device)
-
- noises = torch.randn(n_noise, batch, latent_dim, device=device).unbind(0)
-
- return noises
-
-
-def mixing_noise(batch, latent_dim, prob, device):
- if prob > 0 and random.random() < prob:
- return make_noise(batch, latent_dim, 2, device)
-
- else:
- return [make_noise(batch, latent_dim, 1, device)]
-
-
-def set_grad_none(model, targets):
- for n, p in model.named_parameters():
- if n in targets:
- p.grad = None
-
-
-def weights_init(m):
- classname = m.__class__.__name__
- if classname.find('BatchNorm2d') != -1:
- if hasattr(m, 'weight') and m.weight is not None:
- init.normal_(m.weight.data, 1.0, 0.02)
- if hasattr(m, 'bias') and m.bias is not None:
- init.constant_(m.bias.data, 0.0)
- elif hasattr(m, 'weight') and (classname.find('Conv') != -1 or classname.find('Linear') != -1):
- init.kaiming_normal_(m.weight.data, a=0, mode='fan_in')
- if hasattr(m, 'bias') and m.bias is not None:
- init.constant_(m.bias.data, 0.0)
-
-
-def load_psp_standalone(checkpoint_path, device='cuda'):
- ckpt = torch.load(checkpoint_path, map_location='cpu')
- opts = ckpt['opts']
- if 'output_size' not in opts:
- opts['output_size'] = 1024
- opts['n_styles'] = int(math.log(opts['output_size'], 2)) * 2 - 2
- opts = argparse.Namespace(**opts)
- psp = GradualStyleEncoder(50, 'ir_se', opts)
- psp_dict = {k.replace('encoder.', ''): v for k, v in ckpt['state_dict'].items() if k.startswith('encoder.')}
- psp.load_state_dict(psp_dict)
- psp.eval()
- psp = psp.to(device)
- latent_avg = ckpt['latent_avg'].to(device)
-
- def add_latent_avg(model, inputs, outputs):
- return outputs + latent_avg.repeat(outputs.shape[0], 1, 1)
-
- psp.register_forward_hook(add_latent_avg)
- return psp
-
-def get_video_crop_parameter(filepath, predictor, padding=[200,200,200,200]):
- if type(filepath) == str:
- img = dlib.load_rgb_image(filepath)
- else:
- img = filepath
- lm = get_landmark(img, predictor)
- if lm is None:
- return None
- lm_chin = lm[0 : 17] # left-right
- lm_eyebrow_left = lm[17 : 22] # left-right
- lm_eyebrow_right = lm[22 : 27] # left-right
- lm_nose = lm[27 : 31] # top-down
- lm_nostrils = lm[31 : 36] # top-down
- lm_eye_left = lm[36 : 42] # left-clockwise
- lm_eye_right = lm[42 : 48] # left-clockwise
- lm_mouth_outer = lm[48 : 60] # left-clockwise
- lm_mouth_inner = lm[60 : 68] # left-clockwise
-
- scale = 64. / (np.mean(lm_eye_right[:,0])-np.mean(lm_eye_left[:,0]))
- center = ((np.mean(lm_eye_right, axis=0)+np.mean(lm_eye_left, axis=0)) / 2) * scale
- h, w = round(img.shape[0] * scale), round(img.shape[1] * scale)
- left = max(round(center[0] - padding[0]), 0) // 8 * 8
- right = min(round(center[0] + padding[1]), w) // 8 * 8
- top = max(round(center[1] - padding[2]), 0) // 8 * 8
- bottom = min(round(center[1] + padding[3]), h) // 8 * 8
- return h,w,top,bottom,left,right,scale
-
-def tensor2cv2(img):
- tmp = ((img.cpu().numpy().transpose(1, 2, 0) + 1.0) * 127.5).astype(np.uint8)
- return cv2.cvtColor(tmp, cv2.COLOR_RGB2BGR)
-
-# get parameters from the stylegan and mark them with their layers
-def gather_params(G):
- params = dict(
- [(res, {}) for res in range(18)] + [("others", {})]
- )
- for n, p in sorted(list(G.named_buffers()) + list(G.named_parameters())):
- if n.startswith("convs"):
- layer = int(n.split(".")[1]) + 1
- params[layer][n] = p
- elif n.startswith("to_rgbs"):
- layer = int(n.split(".")[1]) * 2 + 3
- params[layer][n] = p
- elif n.startswith("conv1"):
- params[0][n] = p
- elif n.startswith("to_rgb1"):
- params[1][n] = p
- else:
- params["others"][n] = p
- return params
-
-# blend the ffhq stylegan model and the finetuned model for toonify
-# see ``Resolution Dependent GAN Interpolation for Controllable Image Synthesis Between Domains''
-def blend_models(G_low, G_high, weight=[1]*7+[0]*11):
- params_low = gather_params(G_low)
- params_high = gather_params(G_high)
-
- for res in range(18):
- for n, p in params_high[res].items():
- params_high[res][n] = params_high[res][n] * (1-weight[res]) + params_low[res][n] * weight[res]
-
- state_dict = {}
- for _, p in params_high.items():
- state_dict.update(p)
-
- return state_dict
-
diff --git a/spaces/carlosabadia/face_detection/faceNet/faceNet.py b/spaces/carlosabadia/face_detection/faceNet/faceNet.py
deleted file mode 100644
index fecee28c3475ecb6388084a3b480de2291f01526..0000000000000000000000000000000000000000
--- a/spaces/carlosabadia/face_detection/faceNet/faceNet.py
+++ /dev/null
@@ -1,189 +0,0 @@
-import cv2
-import stow
-import typing
-import numpy as np
-import onnxruntime as ort
-
-class FaceNet:
- """FaceNet class object, which can be used for simplified face recognition
- """
- def __init__(
- self,
- detector: object,
- onnx_model_path: str = "models/faceNet.onnx",
- anchors: typing.Union[str, dict] = 'faces',
- force_cpu: bool = False,
- threshold: float = 0.5,
- color: tuple = (255, 255, 255),
- thickness: int = 2,
- ) -> None:
- """Object for face recognition
- Params:
- detector: (object) - detector object to detect faces in image
- onnx_model_path: (str) - path to onnx model
- force_cpu: (bool) - if True, onnx model will be run on CPU
- anchors: (str or dict) - path to directory with faces or dictionary with anchor names as keys and anchor encodings as values
- threshold: (float) - threshold for face recognition
- color: (tuple) - color of bounding box and text
- thickness: (int) - thickness of bounding box and text
- """
- if not stow.exists(onnx_model_path):
- raise Exception(f"Model doesn't exists in {onnx_model_path}")
-
- self.detector = detector
- self.threshold = threshold
- self.color = color
- self.thickness = thickness
-
- providers = ['CUDAExecutionProvider', 'CPUExecutionProvider']
-
- providers = providers if ort.get_device() == "GPU" and not force_cpu else providers[::-1]
-
- self.ort_sess = ort.InferenceSession(onnx_model_path, providers=providers)
-
- self.input_shape = self.ort_sess._inputs_meta[0].shape[1:3]
-
- self.anchors = self.load_anchors(anchors) if isinstance(anchors, str) else anchors
-
- def normalize(self, img: np.ndarray) -> np.ndarray:
- """Normalize image
-
- Args:
- img: (np.ndarray) - image to be normalized
-
- Returns:
- img: (np.ndarray) - normalized image
- """
- mean, std = img.mean(), img.std()
- return (img - mean) / std
-
- def l2_normalize(self, x: np.ndarray, axis: int = -1, epsilon: float = 1e-10) -> np.ndarray:
- """l2 normalization function
-
- Args:
- x: (np.ndarray) - input array
- axis: (int) - axis to normalize
- epsilon: (float) - epsilon to avoid division by zero
-
- Returns:
- x: (np.ndarray) - normalized array
- """
- output = x / np.sqrt(np.maximum(np.sum(np.square(x), axis=axis, keepdims=True), epsilon))
- return output
-
- def detect_save_faces(self, image: np.ndarray, output_dir: str = "faces"):
- """Detect faces in given image and save them to output_dir
-
- Args:
- image: (np.ndarray) - image to be processed
- output_dir: (str) - directory where faces will be saved
-
- Returns:
- crop: (np.ndarray) - last detected face crop, or False if no face was found
- """
- face_crops = [image[t:b, l:r] for t, l, b, r in self.detector(image, return_tlbr=True)]
-
- if face_crops == []:
- return False
-
- #stow.mkdir(output_dir)
-
- for index, crop in enumerate(face_crops):
- #output_path = stow.join(output_dir, f"face_{str(index)}.png")
- #cv2.imwrite(output_path, crop)
- #print("Crop saved to:", output_path)
-
- #self.anchors = self.load_anchors(output_dir)
-
- return crop
-
- def load_anchors(self, faces_path: str):
- """Generate anchors for given faces path
-
- Args:
- faces_path: (str) - path to directory with faces
-
- Returns:
- anchors: (dict) - dictionary with anchor names as keys and anchor encodings as values
- """
- anchors = {}
- if not stow.exists(faces_path):
- return {}
-
- for face_path in stow.ls(faces_path):
- anchors[stow.basename(face_path)] = self.encode(cv2.imread(face_path.path))
-
- return anchors
-
- def encode(self, face_image: np.ndarray) -> np.ndarray:
- """Encode face image with FaceNet model
-
- Args:
- face_image: (np.ndarray) - face image to be encoded
-
- Returns:
- face_encoding: (np.ndarray) - face encoding
- """
- face = self.normalize(face_image)
- face = cv2.resize(face, self.input_shape).astype(np.float32)
-
- encode = self.ort_sess.run(None, {self.ort_sess._inputs_meta[0].name: np.expand_dims(face, axis=0)})[0][0]
- normalized_encode = self.l2_normalize(encode)
-
- return normalized_encode
-
- def cosine_distance(self, a: np.ndarray, b: typing.Union[np.ndarray, list]) -> np.ndarray:
- """Cosine distance between wectors a and b
-
- Args:
- a: (np.ndarray) - first vector
- b: (np.ndarray) - second list of vectors
-
- Returns:
- distance: (float) - cosine distance
- """
- if isinstance(a, list):
- a = np.array(a)
-
- if isinstance(b, list):
- b = np.array(b)
-
- return np.dot(a, b.T) / (np.linalg.norm(a) * np.linalg.norm(b))
-
- def draw(self, image: np.ndarray, face_crops: dict):
- """Draw face crops on image
-
- Args:
- image: (np.ndarray) - image to be drawn on
- face_crops: (dict) - dictionary with face crops as values and face names as keys
-
- Returns:
- image: (np.ndarray) - image with drawn face crops
- """
- for value in face_crops.values():
- t, l, b, r = value["tlbr"]
- cv2.rectangle(image, (l, t), (r, b), self.color, self.thickness)
- cv2.putText(image, stow.name(value['name']), (l, t - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.9, self.color, self.thickness)
-
- return image
-
- def __call__(self, frame: np.ndarray) -> np.ndarray:
- """Face recognition pipeline
-
- Args:
- frame: (np.ndarray) - image to be processed
-
- Returns:
- frame: (np.ndarray) - image with drawn face recognition results
- """
- face_crops = {index: {"name": "Unknown", "tlbr": tlbr} for index, tlbr in enumerate(self.detector(frame, return_tlbr=True))}
- for key, value in face_crops.items():
- t, l, b, r = value["tlbr"]
- face_encoding = self.encode(frame[t:b, l:r])
- distances = self.cosine_distance(face_encoding, list(self.anchors.values()))
- if np.max(distances) > self.threshold:
- face_crops[key]["name"] = list(self.anchors.keys())[np.argmax(distances)]
-
- frame = self.draw(frame, face_crops)
-
- return frame
\ No newline at end of file
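A minimal, hedged sketch of wiring up the `FaceNet` class above; the detector class, image path, and anchors folder are assumptions, since the file only requires an object exposing `detector(image, return_tlbr=True)`.

```python
# Sketch only: YoloFaceDetector and the paths are placeholders.
import cv2

detector = YoloFaceDetector()                        # hypothetical detector returning (t, l, b, r) boxes
facenet = FaceNet(
    detector=detector,
    onnx_model_path="models/faceNet.onnx",
    anchors="faces",                                 # folder of reference face crops
    threshold=0.5,
)

frame = cv2.imread("group_photo.jpg")
annotated = facenet(frame)                           # detect, encode, match against anchors, draw
cv2.imwrite("annotated.jpg", annotated)
```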
diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/layers/csrc/ROIAlignRotated/ROIAlignRotated.h b/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/layers/csrc/ROIAlignRotated/ROIAlignRotated.h
deleted file mode 100644
index 03f4211003f42f601f0cfcf4a690f5da4a0a1f67..0000000000000000000000000000000000000000
--- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/layers/csrc/ROIAlignRotated/ROIAlignRotated.h
+++ /dev/null
@@ -1,115 +0,0 @@
-// Copyright (c) Facebook, Inc. and its affiliates.
-#pragma once
-#include <torch/types.h>
-
-namespace detectron2 {
-
-at::Tensor ROIAlignRotated_forward_cpu(
- const at::Tensor& input,
- const at::Tensor& rois,
- const float spatial_scale,
- const int pooled_height,
- const int pooled_width,
- const int sampling_ratio);
-
-at::Tensor ROIAlignRotated_backward_cpu(
- const at::Tensor& grad,
- const at::Tensor& rois,
- const float spatial_scale,
- const int pooled_height,
- const int pooled_width,
- const int batch_size,
- const int channels,
- const int height,
- const int width,
- const int sampling_ratio);
-
-#if defined(WITH_CUDA) || defined(WITH_HIP)
-at::Tensor ROIAlignRotated_forward_cuda(
- const at::Tensor& input,
- const at::Tensor& rois,
- const float spatial_scale,
- const int pooled_height,
- const int pooled_width,
- const int sampling_ratio);
-
-at::Tensor ROIAlignRotated_backward_cuda(
- const at::Tensor& grad,
- const at::Tensor& rois,
- const float spatial_scale,
- const int pooled_height,
- const int pooled_width,
- const int batch_size,
- const int channels,
- const int height,
- const int width,
- const int sampling_ratio);
-#endif
-
-// Interface for Python
-inline at::Tensor ROIAlignRotated_forward(
- const at::Tensor& input,
- const at::Tensor& rois,
- const double spatial_scale,
- const int64_t pooled_height,
- const int64_t pooled_width,
- const int64_t sampling_ratio) {
- if (input.is_cuda()) {
-#if defined(WITH_CUDA) || defined(WITH_HIP)
- return ROIAlignRotated_forward_cuda(
- input,
- rois,
- spatial_scale,
- pooled_height,
- pooled_width,
- sampling_ratio);
-#else
- AT_ERROR("Detectron2 is not compiled with GPU support!");
-#endif
- }
- return ROIAlignRotated_forward_cpu(
- input, rois, spatial_scale, pooled_height, pooled_width, sampling_ratio);
-}
-
-inline at::Tensor ROIAlignRotated_backward(
- const at::Tensor& grad,
- const at::Tensor& rois,
- const double spatial_scale,
- const int64_t pooled_height,
- const int64_t pooled_width,
- const int64_t batch_size,
- const int64_t channels,
- const int64_t height,
- const int64_t width,
- const int64_t sampling_ratio) {
- if (grad.is_cuda()) {
-#if defined(WITH_CUDA) || defined(WITH_HIP)
- return ROIAlignRotated_backward_cuda(
- grad,
- rois,
- spatial_scale,
- pooled_height,
- pooled_width,
- batch_size,
- channels,
- height,
- width,
- sampling_ratio);
-#else
- AT_ERROR("Detectron2 is not compiled with GPU support!");
-#endif
- }
- return ROIAlignRotated_backward_cpu(
- grad,
- rois,
- spatial_scale,
- pooled_height,
- pooled_width,
- batch_size,
- channels,
- height,
- width,
- sampling_ratio);
-}
-
-} // namespace detectron2
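The header above only declares the C++/CUDA entry points; from Python the operator is normally reached through detectron2's layer wrapper. A hedged sketch with arbitrary values (the rotated ROI format is `(batch_index, center_x, center_y, width, height, angle)`):

```python
# Sketch, assuming detectron2 is installed with its compiled extensions.
import torch
from detectron2.layers import ROIAlignRotated

features = torch.randn(1, 256, 64, 64)                      # N x C x H x W feature map
rois = torch.tensor([[0, 32.0, 32.0, 20.0, 10.0, 30.0]])    # one rotated box, 30 degrees

pool = ROIAlignRotated(output_size=(7, 7), spatial_scale=1.0, sampling_ratio=2)
pooled = pool(features, rois)
print(pooled.shape)                                          # torch.Size([1, 256, 7, 7])
```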
diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/vis/extractor.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/vis/extractor.py
deleted file mode 100644
index bfb2bdf693254a954e54a74b8766e5f574f6cf3a..0000000000000000000000000000000000000000
--- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/vis/extractor.py
+++ /dev/null
@@ -1,199 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import logging
-from typing import List, Optional, Sequence, Tuple
-import torch
-
-from detectron2.layers.nms import batched_nms
-from detectron2.structures.instances import Instances
-
-from densepose.converters import ToChartResultConverterWithConfidences
-from densepose.structures import (
- DensePoseChartResultWithConfidences,
- DensePoseEmbeddingPredictorOutput,
-)
-from densepose.vis.bounding_box import BoundingBoxVisualizer, ScoredBoundingBoxVisualizer
-from densepose.vis.densepose_outputs_vertex import DensePoseOutputsVertexVisualizer
-from densepose.vis.densepose_results import DensePoseResultsVisualizer
-
-from .base import CompoundVisualizer
-
-Scores = Sequence[float]
-DensePoseChartResultsWithConfidences = List[DensePoseChartResultWithConfidences]
-
-
-def extract_scores_from_instances(instances: Instances, select=None):
- if instances.has("scores"):
- return instances.scores if select is None else instances.scores[select]
- return None
-
-
-def extract_boxes_xywh_from_instances(instances: Instances, select=None):
- if instances.has("pred_boxes"):
- boxes_xywh = instances.pred_boxes.tensor.clone()
- boxes_xywh[:, 2] -= boxes_xywh[:, 0]
- boxes_xywh[:, 3] -= boxes_xywh[:, 1]
- return boxes_xywh if select is None else boxes_xywh[select]
- return None
-
-
-def create_extractor(visualizer: object):
- """
- Create an extractor for the provided visualizer
- """
- if isinstance(visualizer, CompoundVisualizer):
- extractors = [create_extractor(v) for v in visualizer.visualizers]
- return CompoundExtractor(extractors)
- elif isinstance(visualizer, DensePoseResultsVisualizer):
- return DensePoseResultExtractor()
- elif isinstance(visualizer, ScoredBoundingBoxVisualizer):
- return CompoundExtractor([extract_boxes_xywh_from_instances, extract_scores_from_instances])
- elif isinstance(visualizer, BoundingBoxVisualizer):
- return extract_boxes_xywh_from_instances
- elif isinstance(visualizer, DensePoseOutputsVertexVisualizer):
- return DensePoseOutputsExtractor()
- else:
- logger = logging.getLogger(__name__)
- logger.error(f"Could not create extractor for {visualizer}")
- return None
-
-
-class BoundingBoxExtractor(object):
- """
- Extracts bounding boxes from instances
- """
-
- def __call__(self, instances: Instances):
- boxes_xywh = extract_boxes_xywh_from_instances(instances)
- return boxes_xywh
-
-
-class ScoredBoundingBoxExtractor(object):
- """
- Extracts bounding boxes and their scores from instances
- """
-
- def __call__(self, instances: Instances, select=None):
- scores = extract_scores_from_instances(instances)
- boxes_xywh = extract_boxes_xywh_from_instances(instances)
- if (scores is None) or (boxes_xywh is None):
- return (boxes_xywh, scores)
- if select is not None:
- scores = scores[select]
- boxes_xywh = boxes_xywh[select]
- return (boxes_xywh, scores)
-
-
-class DensePoseResultExtractor(object):
- """
- Extracts DensePose chart result with confidences from instances
- """
-
- def __call__(
- self, instances: Instances, select=None
- ) -> Tuple[Optional[DensePoseChartResultsWithConfidences], Optional[torch.Tensor]]:
- if instances.has("pred_densepose") and instances.has("pred_boxes"):
- dpout = instances.pred_densepose
- boxes_xyxy = instances.pred_boxes
- boxes_xywh = extract_boxes_xywh_from_instances(instances)
- if select is not None:
- dpout = dpout[select]
- boxes_xyxy = boxes_xyxy[select]
- converter = ToChartResultConverterWithConfidences()
- results = [converter.convert(dpout[i], boxes_xyxy[[i]]) for i in range(len(dpout))]
- return results, boxes_xywh
- else:
- return None, None
-
-
-class DensePoseOutputsExtractor(object):
- """
- Extracts DensePose result from instances
- """
-
- def __call__(
- self,
- instances: Instances,
- select=None,
- ) -> Tuple[
- Optional[DensePoseEmbeddingPredictorOutput], Optional[torch.Tensor], Optional[List[int]]
- ]:
- if not (instances.has("pred_densepose") and instances.has("pred_boxes")):
- return None, None, None
-
- dpout = instances.pred_densepose
- boxes_xyxy = instances.pred_boxes
- boxes_xywh = extract_boxes_xywh_from_instances(instances)
-
- if instances.has("pred_classes"):
- classes = instances.pred_classes.tolist()
- else:
- classes = None
-
- if select is not None:
- dpout = dpout[select]
- boxes_xyxy = boxes_xyxy[select]
- if classes is not None:
- classes = classes[select]
-
- return dpout, boxes_xywh, classes
-
-
-class CompoundExtractor(object):
- """
- Extracts data for CompoundVisualizer
- """
-
- def __init__(self, extractors):
- self.extractors = extractors
-
- def __call__(self, instances: Instances, select=None):
- datas = []
- for extractor in self.extractors:
- data = extractor(instances, select)
- datas.append(data)
- return datas
-
-
-class NmsFilteredExtractor(object):
- """
- Extracts data in the format accepted by NmsFilteredVisualizer
- """
-
- def __init__(self, extractor, iou_threshold):
- self.extractor = extractor
- self.iou_threshold = iou_threshold
-
- def __call__(self, instances: Instances, select=None):
- scores = extract_scores_from_instances(instances)
- boxes_xywh = extract_boxes_xywh_from_instances(instances)
- if boxes_xywh is None:
- return None
- select_local_idx = batched_nms(
- boxes_xywh,
- scores,
- torch.zeros(len(scores), dtype=torch.int32),
- iou_threshold=self.iou_threshold,
- ).squeeze()
- select_local = torch.zeros(len(boxes_xywh), dtype=torch.bool, device=boxes_xywh.device)
- select_local[select_local_idx] = True
- select = select_local if select is None else (select & select_local)
- return self.extractor(instances, select=select)
-
-
-class ScoreThresholdedExtractor(object):
- """
- Extracts data in the format accepted by ScoreThresholdedVisualizer
- """
-
- def __init__(self, extractor, min_score):
- self.extractor = extractor
- self.min_score = min_score
-
- def __call__(self, instances: Instances, select=None):
- scores = extract_scores_from_instances(instances)
- if scores is None:
- return None
- select_local = scores > self.min_score
- select = select_local if select is None else (select & select_local)
- data = self.extractor(instances, select=select)
- return data
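`create_extractor` pairs each visualizer with a matching extractor. A hedged sketch of the usual visualize-and-extract loop; the `predictor` and `image` objects are assumed to come from a standard DensePose inference setup.

```python
# Sketch only: predictor and image come from an assumed DensePose inference setup.
from densepose.vis.densepose_results import DensePoseResultsFineSegmentationVisualizer

visualizer = DensePoseResultsFineSegmentationVisualizer()
extractor = create_extractor(visualizer)              # resolves to DensePoseResultExtractor()

instances = predictor(image)["instances"].to("cpu")   # assumed detectron2 predictor output
data = extractor(instances)                           # (chart results with confidences, boxes_xywh)
image_vis = visualizer.visualize(image, data)
```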
diff --git a/spaces/chendl/compositional_test/multimodal/open_flamingo/eval/dataset_zoo/aro_datasets.py b/spaces/chendl/compositional_test/multimodal/open_flamingo/eval/dataset_zoo/aro_datasets.py
deleted file mode 100644
index 14e91d500c01dd232c8a7b23fb9af266ecd1d513..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/multimodal/open_flamingo/eval/dataset_zoo/aro_datasets.py
+++ /dev/null
@@ -1,365 +0,0 @@
-import os
-import json
-import subprocess
-
-import numpy as np
-
-from PIL import Image
-from tqdm import tqdm
-from torch.utils.data import Dataset
-from easydict import EasyDict as edict
-from torchvision.datasets.utils import download_url
-
-from .perturbations import TextShuffler
-from .constants import ARO_ROOT, COCO_ROOT, FLICKR_ROOT
-from .retrieval import pre_caption
-
-
-class VG_Relation(Dataset):
- def __init__(self, image_preprocess, text_perturb_fn=None, image_perturb_fn=None, root_dir=ARO_ROOT, download=False):
- '''
- image_preprocess: a function that takes in a PIL image and returns a tensor.
- text_perturb_fn: Not used for this dataset. Just for compatibility with other datasets.
- image_perturb_fn: Not used for this dataset. Just for compatibility with other datasets.
- root_dir: Directory for the VG-R dataset.
- download: Whether to download the dataset if it does not exist.
- '''
- self.root_dir = root_dir
- annotation_file = os.path.join(root_dir, "visual_genome_relation.json")
- image_dir = os.path.join(root_dir, "images")
- if not os.path.exists(image_dir):
- print("Image Directory for VG_Relation could not be found!")
- if download:
- self.download()
- else:
- raise RuntimeError("Please either download the dataset by letting `--download` or specify the correct directory.")
-
- if not os.path.exists(annotation_file):
- subprocess.call(["gdown", "--id", "1kX2iCHEv0CADL8dSO1nMdW-V0NqIAiP3", "--output", annotation_file])
-
- with open(annotation_file, "r") as f:
- self.dataset = json.load(f)
-
- self.all_relations = list()
- for item in self.dataset:
- item["image_path"] = os.path.join(image_dir, item["image_path"])
- self.all_relations.append(item["relation_name"])
-
- self.image_preprocess = image_preprocess
-
- def __len__(self):
- return len(self.dataset)
-
- def __getitem__(self, index):
- test_case = self.dataset[index]
- image = Image.open(test_case["image_path"]).convert('RGB')
- # Get the bounding box that contains the relation. This is to remove the irrelevant details in the scene.
- image = image.crop((test_case["bbox_x"], test_case["bbox_y"], test_case["bbox_x"] + test_case["bbox_w"], test_case["bbox_y"] + test_case["bbox_h"]))
-
- if self.image_preprocess is not None:
- image = self.image_preprocess(image)
-
- # Each test case has a correct and incorrect caption.
- true_caption = test_case["true_caption"]
- false_caption = test_case["false_caption"]
- item = edict({"image_options": [image], "caption_options": [false_caption, true_caption]})
- return item
-
- def download(self):
- os.makedirs(self.root_dir, exist_ok=True)
- image_zip_file = os.path.join(self.root_dir, "vgr_vga_images.zip")
- subprocess.call(["gdown", "--no-cookies", "1qaPlrwhGNMrR3a11iopZUT_GPP_LrgP9", "--output", image_zip_file])
- subprocess.call(["unzip", "vgr_vga_images.zip"], cwd=self.root_dir)
-
-
- def evaluate_scores(self, scores):
- """
- Scores: N x 1 x 2, i.e. first caption is the perturbed one, second is the positive one
- """
- if isinstance(scores, tuple):
- scores_i2t = scores[1]
- scores_t2i = scores[0]
- else:
- scores_t2i = scores
- scores_i2t = scores
-
- metrics = {"Accuracy": None}
- preds = np.argmax(np.squeeze(scores_i2t, axis=1), axis=-1)
- correct_mask = (preds == 1)
- metrics["Accuracy"] = np.mean(correct_mask)
-
- all_relations = np.array(self.all_relations)
-
- result_records = []
- # Log the accuracy of all relations
- for relation in np.unique(all_relations):
- relation_mask = (all_relations == relation)
- if relation_mask.sum() == 0:
- continue
- result_records.append({
- "Relation": relation,
- "Accuracy": correct_mask[relation_mask].mean(),
- "Count": relation_mask.sum(),
- "Dataset": "Visual Genome Relation"
- })
- return result_records
-
-
-
-class VG_Attribution(Dataset):
- def __init__(self, image_preprocess, text_perturb_fn=None, image_perturb_fn=None, root_dir=ARO_ROOT, download=False):
- '''
- image_preprocess: a function that takes in a PIL image and returns a tensor.
- text_perturb_fn: Not used for this dataset. Just for compatibility with other datasets.
- image_perturb_fn: Not used for this dataset. Just for compatibility with other datasets.
- root_dir: Directory for the VG-A dataset.
- '''
- self.root_dir = root_dir
- annotation_file = os.path.join(root_dir, "visual_genome_attribution.json")
- image_dir = os.path.join(root_dir, "images")
- if not os.path.exists(image_dir):
- print("Image Directory for VG_Attribution could not be found!")
- if download:
- self.download()
- else:
- raise RuntimeError("Please either download the dataset by letting `--download` or specify the correct directory.")
-
-
- if not os.path.exists(annotation_file):
- subprocess.call(["gdown", "--id", "13tWvOrNOLHxl3Rm9cR3geAdHx2qR3-Tw", "--output", annotation_file])
-
- with open(annotation_file, "r") as f:
- self.dataset = json.load(f)
-
- for item in self.dataset:
- item["image_path"] = os.path.join(image_dir, item["image_path"])
-
- # Set of attributes in each test case
- self.all_attributes = [f"{item['attributes'][0]}_{item['attributes'][1]}" for item in self.dataset]
- self.image_preprocess = image_preprocess
-
- def __len__(self):
- return len(self.dataset)
-
- def __getitem__(self, index):
- test_case = self.dataset[index]
- image = Image.open(test_case["image_path"]).convert('RGB')
- # Get the bounding box that contains the relation. This is to remove the irrelevant details in the scene.
- image = image.crop((test_case["bbox_x"], test_case["bbox_y"], test_case["bbox_x"] + test_case["bbox_w"], test_case["bbox_y"] + test_case["bbox_h"]))
-
- if self.image_preprocess is not None:
- image = self.image_preprocess(image)
-
- # Each test case has a correct and incorrect caption.
- true_caption = test_case["true_caption"]
- false_caption = test_case["false_caption"]
- item = edict({"image_options": [image], "caption_options": [false_caption, true_caption]})
- return item
-
- def download(self):
- os.makedirs(self.root_dir, exist_ok=True)
- image_zip_file = os.path.join(self.root_dir, "vgr_vga_images.zip")
- subprocess.call(["gdown", "--no-cookies", "1qaPlrwhGNMrR3a11iopZUT_GPP_LrgP9", "--output", image_zip_file])
- subprocess.call(["unzip", "vgr_vga_images.zip"], cwd=self.root_dir)
-
-
- def evaluate_scores(self, scores):
- """
- Scores: N x 1 x 2, i.e. first caption is the perturbed one, second is the positive one
- """
- if isinstance(scores, tuple):
- scores_i2t = scores[1]
- scores_t2i = scores[0]
- else:
- scores_t2i = scores
- scores_i2t = scores
-
- preds = np.argmax(np.squeeze(scores_i2t, axis=1), axis=-1)
- correct_mask = (preds == 1)
- result_records = []
- all_attributes = np.array(self.all_attributes)
- for attr in np.unique(all_attributes):
- attr_mask = (all_attributes == attr)
- if attr_mask.sum() < 25:
- continue
- result_records.append({
- "Attributes": attr,
- "Accuracy": correct_mask[attr_mask].mean(),
- "Count": attr_mask.sum(),
- "Dataset": "Visual Genome Attribution"
- })
- return result_records
-
-
-
-
-class COCO_Order(Dataset):
- def __init__(self, image_preprocess=None, root_dir=COCO_ROOT, max_words=30, split="test",
- image_perturb_fn=None, download=False):
- """
- COCO Order Dataset.
- image_preprocess: image preprocessing function
- root_dir: The directory of the coco dataset. This directory should contain test2014 files.
- max_words: Cropping the caption to max_words.
- split: 'val' or 'test'
- image_perturb_fn: not used; for compatibility.
- download: Whether to download the dataset if it does not exist.
- """
- shuffler = TextShuffler()
- perturb_functions = [shuffler.shuffle_nouns_and_adj, shuffler.shuffle_allbut_nouns_and_adj,
- shuffler.shuffle_within_trigrams, shuffler.shuffle_trigrams]
-
- self.root_dir = root_dir
- if not os.path.exists(root_dir):
- print("Directory for COCO could not be found!")
- if download:
- print("Downloading COCO now.")
- self.download()
- else:
- raise RuntimeError("Please either download the dataset by letting `--download` or specify the correct directory.")
-
- urls = {'val':'https://storage.googleapis.com/sfr-vision-language-research/datasets/coco_karpathy_val.json',
- 'test':'https://storage.googleapis.com/sfr-vision-language-research/datasets/coco_karpathy_test.json'}
- filenames = {'val':'coco_karpathy_val.json','test':'coco_karpathy_test.json'}
- download_url(urls[split],root_dir)
-
- self.annotation = json.load(open(os.path.join(root_dir,filenames[split]),'r'))
- self.image_preprocess = image_preprocess
- self.image_root = root_dir
-
- self.test_cases = []
-
- for img_id, ann in tqdm(enumerate(self.annotation)):
- for i, caption in enumerate(ann['caption']):
- test_case = {}
- test_case["image"] = ann["image"]
- test_case["caption_options"] = [pre_caption(caption,max_words)]
-
- for perturb_fn in perturb_functions:
- test_case["caption_options"].append(pre_caption(perturb_fn(caption), max_words))
- self.test_cases.append(test_case)
-
- def __len__(self):
- return len(self.test_cases)
-
- def __getitem__(self, index):
- test_case = self.test_cases[index]
- image_path = os.path.join(self.image_root, test_case["image"])
-
- image = Image.open(image_path).convert('RGB')
- if self.image_preprocess is not None:
- image = self.image_preprocess(image)
-
- item = edict({"image_options": [image], "caption_options": test_case["caption_options"]})
- return item
-
- def download(self):
- import subprocess
- os.makedirs(self.root_dir, exist_ok=True)
- #subprocess.call(["wget", "http://images.cocodataset.org/zips/train2014.zip"], cwd=self.root_dir)
- #subprocess.call(["unzip", "train2014.zip"], cwd=self.root_dir)
-
- subprocess.call(["wget", "http://images.cocodataset.org/zips/val2014.zip"], cwd=self.root_dir)
- subprocess.call(["unzip", "val2014.zip"], cwd=self.root_dir)
-
- subprocess.call(["wget", "http://images.cocodataset.org/zips/test2014.zip"], cwd=self.root_dir)
- subprocess.call(["unzip", "test2014.zip"], cwd=self.root_dir)
-
-
- def evaluate_scores(self, scores):
- if isinstance(scores, tuple):
- scores_i2t = scores[0]
- scores_t2i = scores[1].T # Make it N_ims x N_text
-
- else:
- scores_t2i = scores
- scores_i2t = scores
-
- preds = np.argmax(np.squeeze(scores_i2t, axis=1), axis=-1)
- correct_mask = (preds == 0)
- records = [{"Precision@1": np.mean(correct_mask)}]
- return records
-
-
-class Flickr30k_Order(Dataset):
- def __init__(self, image_preprocess, split, root_dir=FLICKR_ROOT, max_words=30,
- *args, **kwargs):
- """
- image_preprocess: image preprocessing function
- split: 'val' or 'test'
- root_dir: The directory of the flickr30k images. This should contain the `flickr30k-images` directory that \
- contains all the images.
- """
- urls = {'val':'https://storage.googleapis.com/sfr-vision-language-research/datasets/flickr30k_val.json',
- 'test':'https://storage.googleapis.com/sfr-vision-language-research/datasets/flickr30k_test.json'}
- filenames = {'val':'flickr30k_val.json','test':'flickr30k_test.json'}
- if not os.path.exists(root_dir):
- print("Directory for Flickr30k could not be found!")
- flickr_url = "https://forms.illinois.edu/sec/229675"
- raise RuntimeError(f"You need to manually sign up and download the dataset from {flickr_url} and place it in the `root_dir`.")
-
- download_url(urls[split],root_dir)
-
- self.annotation = json.load(open(os.path.join(root_dir,filenames[split]),'r'))
- self.image_preprocess = image_preprocess
- self.root_dir = root_dir
-
- self.test_cases = []
-
- shuffler = TextShuffler()
- perturb_functions = [shuffler.shuffle_nouns_and_adj, shuffler.shuffle_allbut_nouns_and_adj,
- shuffler.shuffle_within_trigrams, shuffler.shuffle_trigrams]
- for img_id, ann in tqdm(enumerate(self.annotation)):
- for i, caption in enumerate(ann['caption']):
- test_case = {}
- test_case["image"] = ann["image"]
- test_case["caption_options"] = [pre_caption(caption,max_words)]
-
- for perturb_fn in perturb_functions:
- test_case["caption_options"].append(pre_caption(perturb_fn(caption), max_words))
- self.test_cases.append(test_case)
-
- def __len__(self):
- return len(self.test_cases)
-
- def __getitem__(self, index):
- test_case = self.test_cases[index]
- image_path = os.path.join(self.root_dir, test_case["image"])
- image = Image.open(image_path).convert('RGB')
-
- if self.image_preprocess is not None:
- image = self.image_preprocess(image)
-
- item = edict({"image_options": [image], "caption_options": test_case["caption_options"]})
- return item
-
- def evaluate_scores(self, scores):
- if isinstance(scores, tuple):
- scores_i2t = scores[0]
- scores_t2i = scores[1].T # Make it N_ims x N_text
- else:
- scores_t2i = scores
- scores_i2t = scores
-
- preds = np.argmax(np.squeeze(scores_i2t, axis=1), axis=-1)
- correct_mask = (preds == 0)
- result_records = [{"Precision@1": np.mean(correct_mask)}]
- return result_records
-
-
-def get_visual_genome_relation(image_preprocess, text_perturb_fn=None, image_perturb_fn=None, download=False):
- return VG_Relation(image_preprocess=image_preprocess, text_perturb_fn=text_perturb_fn, image_perturb_fn=image_perturb_fn, download=download)
-
-
-def get_visual_genome_attribution(image_preprocess, text_perturb_fn=None, image_perturb_fn=None, download=False):
- return VG_Attribution(image_preprocess=image_preprocess, text_perturb_fn=text_perturb_fn,
- image_perturb_fn=image_perturb_fn, download=download)
-
-def get_coco_order(image_preprocess, image_perturb_fn, text_perturb_fn, max_words=30, download=False, root_dir=COCO_ROOT, split="test"):
- return COCO_Order(root_dir=root_dir, split=split, image_preprocess=image_preprocess, image_perturb_fn=image_perturb_fn, max_words=max_words,
- download=download)
-
-def get_flickr30k_order(image_preprocess, image_perturb_fn, text_perturb_fn, max_words=30, download=False, root_dir=FLICKR_ROOT, split="test"):
- return Flickr30k_Order(root_dir=root_dir, split=split, image_preprocess=image_preprocess, image_perturb_fn=image_perturb_fn, max_words=max_words,
- download=download)
-
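A hedged sketch of how the `VG_Relation` dataset above is typically scored; `preprocess` and `score_model` are placeholders for an image transform and any image-text matching model.

```python
# Sketch only: preprocess and score_model are placeholders.
import numpy as np

dataset = VG_Relation(image_preprocess=preprocess, download=True)

all_scores = []
for item in dataset:
    image = item["image_options"][0]
    # score the (false_caption, true_caption) pair -> array of shape (2,)
    all_scores.append([score_model(image, item["caption_options"])])

records = dataset.evaluate_scores(np.array(all_scores))   # expects N x 1 x 2 scores
print(records[0])   # e.g. {"Relation": ..., "Accuracy": ..., "Count": ..., "Dataset": ...}
```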
diff --git a/spaces/chendl/compositional_test/transformers/examples/research_projects/deebert/src/modeling_highway_roberta.py b/spaces/chendl/compositional_test/transformers/examples/research_projects/deebert/src/modeling_highway_roberta.py
deleted file mode 100644
index c21fb32fde762a8269f1f5b78b0e51e07b17f606..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/transformers/examples/research_projects/deebert/src/modeling_highway_roberta.py
+++ /dev/null
@@ -1,154 +0,0 @@
-from __future__ import absolute_import, division, print_function, unicode_literals
-
-from torch import nn
-from torch.nn import CrossEntropyLoss, MSELoss
-
-from transformers import RobertaConfig
-from transformers.file_utils import add_start_docstrings, add_start_docstrings_to_model_forward
-from transformers.models.roberta.modeling_roberta import (
- ROBERTA_INPUTS_DOCSTRING,
- ROBERTA_START_DOCSTRING,
- RobertaEmbeddings,
-)
-
-from .modeling_highway_bert import BertPreTrainedModel, DeeBertModel, HighwayException, entropy
-
-
-@add_start_docstrings(
- "The RoBERTa Model transformer with early exiting (DeeRoBERTa). ",
- ROBERTA_START_DOCSTRING,
-)
-class DeeRobertaModel(DeeBertModel):
- config_class = RobertaConfig
- base_model_prefix = "roberta"
-
- def __init__(self, config):
- super().__init__(config)
-
- self.embeddings = RobertaEmbeddings(config)
- self.init_weights()
-
-
-@add_start_docstrings(
- """RoBERTa Model (with early exiting - DeeRoBERTa) with a classifier on top,
- also takes care of multi-layer training. """,
- ROBERTA_START_DOCSTRING,
-)
-class DeeRobertaForSequenceClassification(BertPreTrainedModel):
- config_class = RobertaConfig
- base_model_prefix = "roberta"
-
- def __init__(self, config):
- super().__init__(config)
- self.num_labels = config.num_labels
- self.num_layers = config.num_hidden_layers
-
- self.roberta = DeeRobertaModel(config)
- self.dropout = nn.Dropout(config.hidden_dropout_prob)
- self.classifier = nn.Linear(config.hidden_size, self.config.num_labels)
-
- @add_start_docstrings_to_model_forward(ROBERTA_INPUTS_DOCSTRING)
- def forward(
- self,
- input_ids=None,
- attention_mask=None,
- token_type_ids=None,
- position_ids=None,
- head_mask=None,
- inputs_embeds=None,
- labels=None,
- output_layer=-1,
- train_highway=False,
- ):
- r"""
- labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size,)`, `optional`):
- Labels for computing the sequence classification/regression loss.
- Indices should be in :obj:`[0, ..., config.num_labels - 1]`.
- If :obj:`config.num_labels == 1` a regression loss is computed (Mean-Square loss),
- If :obj:`config.num_labels > 1` a classification loss is computed (Cross-Entropy).
-
- Returns:
- :obj:`tuple(torch.FloatTensor)` comprising various elements depending on the configuration (:class:`~transformers.RobertaConfig`) and inputs:
- loss (:obj:`torch.FloatTensor` of shape :obj:`(1,)`, `optional`, returned when :obj:`label` is provided):
- Classification (or regression if config.num_labels==1) loss.
- logits (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, config.num_labels)`):
- Classification (or regression if config.num_labels==1) scores (before SoftMax).
- hidden_states (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_hidden_states=True`` is passed or when ``config.output_hidden_states=True``):
- Tuple of :obj:`torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer)
- of shape :obj:`(batch_size, sequence_length, hidden_size)`.
-
- Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- attentions (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_attentions=True`` is passed or when ``config.output_attentions=True``):
- Tuple of :obj:`torch.FloatTensor` (one for each layer) of shape
- :obj:`(batch_size, num_heads, sequence_length, sequence_length)`.
-
- Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
- heads.
- highway_exits (:obj:`tuple(tuple(torch.Tensor))`:
- Tuple of each early exit's results (total length: number of layers)
- Each tuple is again, a tuple of length 2 - the first entry is logits and the second entry is hidden states.
- """
-
- exit_layer = self.num_layers
- try:
- outputs = self.roberta(
- input_ids,
- attention_mask=attention_mask,
- token_type_ids=token_type_ids,
- position_ids=position_ids,
- head_mask=head_mask,
- inputs_embeds=inputs_embeds,
- )
-
- pooled_output = outputs[1]
-
- pooled_output = self.dropout(pooled_output)
- logits = self.classifier(pooled_output)
- outputs = (logits,) + outputs[2:] # add hidden states and attention if they are here
- except HighwayException as e:
- outputs = e.message
- exit_layer = e.exit_layer
- logits = outputs[0]
-
- if not self.training:
- original_entropy = entropy(logits)
- highway_entropy = []
- highway_logits_all = []
- if labels is not None:
- if self.num_labels == 1:
- # We are doing regression
- loss_fct = MSELoss()
- loss = loss_fct(logits.view(-1), labels.view(-1))
- else:
- loss_fct = CrossEntropyLoss()
- loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
-
- # work with highway exits
- highway_losses = []
- for highway_exit in outputs[-1]:
- highway_logits = highway_exit[0]
- if not self.training:
- highway_logits_all.append(highway_logits)
- highway_entropy.append(highway_exit[2])
- if self.num_labels == 1:
- # We are doing regression
- loss_fct = MSELoss()
- highway_loss = loss_fct(highway_logits.view(-1), labels.view(-1))
- else:
- loss_fct = CrossEntropyLoss()
- highway_loss = loss_fct(highway_logits.view(-1, self.num_labels), labels.view(-1))
- highway_losses.append(highway_loss)
-
- if train_highway:
- outputs = (sum(highway_losses[:-1]),) + outputs
- # exclude the final highway, of course
- else:
- outputs = (loss,) + outputs
- if not self.training:
- outputs = outputs + ((original_entropy, highway_entropy), exit_layer)
- if output_layer >= 0:
- outputs = (
- (outputs[0],) + (highway_logits_all[output_layer],) + outputs[2:]
- ) # use the highway of the last layer
-
- return outputs # (loss), logits, (hidden_states), (attentions), entropy
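A hedged sketch of instantiating and calling the early-exit classifier above; the base checkpoint name and label count are assumptions.

```python
# Sketch only: "roberta-base" and num_labels=2 are assumptions.
import torch
from transformers import RobertaConfig, RobertaTokenizer

config = RobertaConfig.from_pretrained("roberta-base", num_labels=2)
tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = DeeRobertaForSequenceClassification(config)

enc = tokenizer("An example sentence.", return_tensors="pt")
outputs = model(input_ids=enc["input_ids"],
                attention_mask=enc["attention_mask"],
                labels=torch.tensor([1]))
loss, logits = outputs[0], outputs[1]   # remaining entries carry highway-exit information
```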
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/GifImagePlugin.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/GifImagePlugin.py
deleted file mode 100644
index cf2993e38920bdebf79c6342875c2898e174ef6b..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/GifImagePlugin.py
+++ /dev/null
@@ -1,1064 +0,0 @@
-#
-# The Python Imaging Library.
-# $Id$
-#
-# GIF file handling
-#
-# History:
-# 1995-09-01 fl Created
-# 1996-12-14 fl Added interlace support
-# 1996-12-30 fl Added animation support
-# 1997-01-05 fl Added write support, fixed local colour map bug
-# 1997-02-23 fl Make sure to load raster data in getdata()
-# 1997-07-05 fl Support external decoder (0.4)
-# 1998-07-09 fl Handle all modes when saving (0.5)
-# 1998-07-15 fl Renamed offset attribute to avoid name clash
-# 2001-04-16 fl Added rewind support (seek to frame 0) (0.6)
-# 2001-04-17 fl Added palette optimization (0.7)
-# 2002-06-06 fl Added transparency support for save (0.8)
-# 2004-02-24 fl Disable interlacing for small images
-#
-# Copyright (c) 1997-2004 by Secret Labs AB
-# Copyright (c) 1995-2004 by Fredrik Lundh
-#
-# See the README file for information on usage and redistribution.
-#
-
-import itertools
-import math
-import os
-import subprocess
-from enum import IntEnum
-
-from . import Image, ImageChops, ImageFile, ImagePalette, ImageSequence
-from ._binary import i16le as i16
-from ._binary import o8
-from ._binary import o16le as o16
-
-
-class LoadingStrategy(IntEnum):
- """.. versionadded:: 9.1.0"""
-
- RGB_AFTER_FIRST = 0
- RGB_AFTER_DIFFERENT_PALETTE_ONLY = 1
- RGB_ALWAYS = 2
-
-
-#: .. versionadded:: 9.1.0
-LOADING_STRATEGY = LoadingStrategy.RGB_AFTER_FIRST
-
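The module-level `LOADING_STRATEGY` flag above controls whether later GIF frames stay in palette mode or are promoted to RGB/RGBA. For example (Pillow 9.1 or newer, file name illustrative):

```python
# Force every frame of an animated GIF to decode straight to RGB/RGBA.
from PIL import GifImagePlugin, Image

GifImagePlugin.LOADING_STRATEGY = GifImagePlugin.LoadingStrategy.RGB_ALWAYS

with Image.open("animation.gif") as im:
    im.seek(1)
    print(im.mode)    # "RGB" or "RGBA" instead of "P"
```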
-# --------------------------------------------------------------------
-# Identify/read GIF files
-
-
-def _accept(prefix):
- return prefix[:6] in [b"GIF87a", b"GIF89a"]
-
-
-##
-# Image plugin for GIF images. This plugin supports both GIF87 and
-# GIF89 images.
-
-
-class GifImageFile(ImageFile.ImageFile):
- format = "GIF"
- format_description = "Compuserve GIF"
- _close_exclusive_fp_after_loading = False
-
- global_palette = None
-
- def data(self):
- s = self.fp.read(1)
- if s and s[0]:
- return self.fp.read(s[0])
- return None
-
- def _is_palette_needed(self, p):
- for i in range(0, len(p), 3):
- if not (i // 3 == p[i] == p[i + 1] == p[i + 2]):
- return True
- return False
-
- def _open(self):
- # Screen
- s = self.fp.read(13)
- if not _accept(s):
- msg = "not a GIF file"
- raise SyntaxError(msg)
-
- self.info["version"] = s[:6]
- self._size = i16(s, 6), i16(s, 8)
- self.tile = []
- flags = s[10]
- bits = (flags & 7) + 1
-
- if flags & 128:
- # get global palette
- self.info["background"] = s[11]
- # check if palette contains colour indices
- p = self.fp.read(3 << bits)
- if self._is_palette_needed(p):
- p = ImagePalette.raw("RGB", p)
- self.global_palette = self.palette = p
-
- self._fp = self.fp # FIXME: hack
- self.__rewind = self.fp.tell()
- self._n_frames = None
- self._is_animated = None
- self._seek(0) # get ready to read first frame
-
- @property
- def n_frames(self):
- if self._n_frames is None:
- current = self.tell()
- try:
- while True:
- self._seek(self.tell() + 1, False)
- except EOFError:
- self._n_frames = self.tell() + 1
- self.seek(current)
- return self._n_frames
-
- @property
- def is_animated(self):
- if self._is_animated is None:
- if self._n_frames is not None:
- self._is_animated = self._n_frames != 1
- else:
- current = self.tell()
- if current:
- self._is_animated = True
- else:
- try:
- self._seek(1, False)
- self._is_animated = True
- except EOFError:
- self._is_animated = False
-
- self.seek(current)
- return self._is_animated
-
- def seek(self, frame):
- if not self._seek_check(frame):
- return
- if frame < self.__frame:
- self.im = None
- self._seek(0)
-
- last_frame = self.__frame
- for f in range(self.__frame + 1, frame + 1):
- try:
- self._seek(f)
- except EOFError as e:
- self.seek(last_frame)
- msg = "no more images in GIF file"
- raise EOFError(msg) from e
-
- def _seek(self, frame, update_image=True):
- if frame == 0:
- # rewind
- self.__offset = 0
- self.dispose = None
- self.__frame = -1
- self._fp.seek(self.__rewind)
- self.disposal_method = 0
- if "comment" in self.info:
- del self.info["comment"]
- else:
- # ensure that the previous frame was loaded
- if self.tile and update_image:
- self.load()
-
- if frame != self.__frame + 1:
- msg = f"cannot seek to frame {frame}"
- raise ValueError(msg)
-
- self.fp = self._fp
- if self.__offset:
- # backup to last frame
- self.fp.seek(self.__offset)
- while self.data():
- pass
- self.__offset = 0
-
- s = self.fp.read(1)
- if not s or s == b";":
- raise EOFError
-
- palette = None
-
- info = {}
- frame_transparency = None
- interlace = None
- frame_dispose_extent = None
- while True:
- if not s:
- s = self.fp.read(1)
- if not s or s == b";":
- break
-
- elif s == b"!":
- #
- # extensions
- #
- s = self.fp.read(1)
- block = self.data()
- if s[0] == 249:
- #
- # graphic control extension
- #
- flags = block[0]
- if flags & 1:
- frame_transparency = block[3]
- info["duration"] = i16(block, 1) * 10
-
- # disposal method - find the value of bits 4 - 6
- dispose_bits = 0b00011100 & flags
- dispose_bits = dispose_bits >> 2
- if dispose_bits:
- # only set the dispose if it is not
- # unspecified. I'm not sure if this is
- # correct, but it seems to prevent the last
- # frame from looking odd for some animations
- self.disposal_method = dispose_bits
- elif s[0] == 254:
- #
- # comment extension
- #
- comment = b""
-
- # Read this comment block
- while block:
- comment += block
- block = self.data()
-
- if "comment" in info:
- # If multiple comment blocks in frame, separate with \n
- info["comment"] += b"\n" + comment
- else:
- info["comment"] = comment
- s = None
- continue
- elif s[0] == 255 and frame == 0:
- #
- # application extension
- #
- info["extension"] = block, self.fp.tell()
- if block[:11] == b"NETSCAPE2.0":
- block = self.data()
- if len(block) >= 3 and block[0] == 1:
- self.info["loop"] = i16(block, 1)
- while self.data():
- pass
-
- elif s == b",":
- #
- # local image
- #
- s = self.fp.read(9)
-
- # extent
- x0, y0 = i16(s, 0), i16(s, 2)
- x1, y1 = x0 + i16(s, 4), y0 + i16(s, 6)
- if (x1 > self.size[0] or y1 > self.size[1]) and update_image:
- self._size = max(x1, self.size[0]), max(y1, self.size[1])
- Image._decompression_bomb_check(self._size)
- frame_dispose_extent = x0, y0, x1, y1
- flags = s[8]
-
- interlace = (flags & 64) != 0
-
- if flags & 128:
- bits = (flags & 7) + 1
- p = self.fp.read(3 << bits)
- if self._is_palette_needed(p):
- palette = ImagePalette.raw("RGB", p)
- else:
- palette = False
-
- # image data
- bits = self.fp.read(1)[0]
- self.__offset = self.fp.tell()
- break
-
- else:
- pass
- # raise OSError, "illegal GIF tag `%x`" % s[0]
- s = None
-
- if interlace is None:
- # self._fp = None
- raise EOFError
-
- self.__frame = frame
- if not update_image:
- return
-
- self.tile = []
-
- if self.dispose:
- self.im.paste(self.dispose, self.dispose_extent)
-
- self._frame_palette = palette if palette is not None else self.global_palette
- self._frame_transparency = frame_transparency
- if frame == 0:
- if self._frame_palette:
- if LOADING_STRATEGY == LoadingStrategy.RGB_ALWAYS:
- self.mode = "RGBA" if frame_transparency is not None else "RGB"
- else:
- self.mode = "P"
- else:
- self.mode = "L"
-
- if not palette and self.global_palette:
- from copy import copy
-
- palette = copy(self.global_palette)
- self.palette = palette
- else:
- if self.mode == "P":
- if (
- LOADING_STRATEGY != LoadingStrategy.RGB_AFTER_DIFFERENT_PALETTE_ONLY
- or palette
- ):
- self.pyaccess = None
- if "transparency" in self.info:
- self.im.putpalettealpha(self.info["transparency"], 0)
- self.im = self.im.convert("RGBA", Image.Dither.FLOYDSTEINBERG)
- self.mode = "RGBA"
- del self.info["transparency"]
- else:
- self.mode = "RGB"
- self.im = self.im.convert("RGB", Image.Dither.FLOYDSTEINBERG)
-
- def _rgb(color):
- if self._frame_palette:
- color = tuple(self._frame_palette.palette[color * 3 : color * 3 + 3])
- else:
- color = (color, color, color)
- return color
-
- self.dispose_extent = frame_dispose_extent
- try:
- if self.disposal_method < 2:
- # do not dispose or none specified
- self.dispose = None
- elif self.disposal_method == 2:
- # replace with background colour
-
- # only dispose the extent in this frame
- x0, y0, x1, y1 = self.dispose_extent
- dispose_size = (x1 - x0, y1 - y0)
-
- Image._decompression_bomb_check(dispose_size)
-
- # by convention, attempt to use transparency first
- dispose_mode = "P"
- color = self.info.get("transparency", frame_transparency)
- if color is not None:
- if self.mode in ("RGB", "RGBA"):
- dispose_mode = "RGBA"
- color = _rgb(color) + (0,)
- else:
- color = self.info.get("background", 0)
- if self.mode in ("RGB", "RGBA"):
- dispose_mode = "RGB"
- color = _rgb(color)
- self.dispose = Image.core.fill(dispose_mode, dispose_size, color)
- else:
- # replace with previous contents
- if self.im is not None:
- # only dispose the extent in this frame
- self.dispose = self._crop(self.im, self.dispose_extent)
- elif frame_transparency is not None:
- x0, y0, x1, y1 = self.dispose_extent
- dispose_size = (x1 - x0, y1 - y0)
-
- Image._decompression_bomb_check(dispose_size)
- dispose_mode = "P"
- color = frame_transparency
- if self.mode in ("RGB", "RGBA"):
- dispose_mode = "RGBA"
- color = _rgb(frame_transparency) + (0,)
- self.dispose = Image.core.fill(dispose_mode, dispose_size, color)
- except AttributeError:
- pass
-
- if interlace is not None:
- transparency = -1
- if frame_transparency is not None:
- if frame == 0:
- if LOADING_STRATEGY != LoadingStrategy.RGB_ALWAYS:
- self.info["transparency"] = frame_transparency
- elif self.mode not in ("RGB", "RGBA"):
- transparency = frame_transparency
- self.tile = [
- (
- "gif",
- (x0, y0, x1, y1),
- self.__offset,
- (bits, interlace, transparency),
- )
- ]
-
- if info.get("comment"):
- self.info["comment"] = info["comment"]
- for k in ["duration", "extension"]:
- if k in info:
- self.info[k] = info[k]
- elif k in self.info:
- del self.info[k]
-
- def load_prepare(self):
- temp_mode = "P" if self._frame_palette else "L"
- self._prev_im = None
- if self.__frame == 0:
- if self._frame_transparency is not None:
- self.im = Image.core.fill(
- temp_mode, self.size, self._frame_transparency
- )
- elif self.mode in ("RGB", "RGBA"):
- self._prev_im = self.im
- if self._frame_palette:
- self.im = Image.core.fill("P", self.size, self._frame_transparency or 0)
- self.im.putpalette(*self._frame_palette.getdata())
- else:
- self.im = None
- self.mode = temp_mode
- self._frame_palette = None
-
- super().load_prepare()
-
- def load_end(self):
- if self.__frame == 0:
- if self.mode == "P" and LOADING_STRATEGY == LoadingStrategy.RGB_ALWAYS:
- if self._frame_transparency is not None:
- self.im.putpalettealpha(self._frame_transparency, 0)
- self.mode = "RGBA"
- else:
- self.mode = "RGB"
- self.im = self.im.convert(self.mode, Image.Dither.FLOYDSTEINBERG)
- return
- if not self._prev_im:
- return
- if self._frame_transparency is not None:
- self.im.putpalettealpha(self._frame_transparency, 0)
- frame_im = self.im.convert("RGBA")
- else:
- frame_im = self.im.convert("RGB")
- frame_im = self._crop(frame_im, self.dispose_extent)
-
- self.im = self._prev_im
- self.mode = self.im.mode
- if frame_im.mode == "RGBA":
- self.im.paste(frame_im, self.dispose_extent, frame_im)
- else:
- self.im.paste(frame_im, self.dispose_extent)
-
- def tell(self):
- return self.__frame
-
-
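The `seek`/`tell`/`n_frames` machinery above is what Pillow's public frame-iteration API drives. For example (file name illustrative):

```python
# Iterate the frames that GifImageFile.seek()/tell() expose.
from PIL import Image, ImageSequence

with Image.open("animation.gif") as im:
    print(im.n_frames, im.is_animated)
    for i, frame in enumerate(ImageSequence.Iterator(im)):
        frame.convert("RGB").save(f"frame_{i:03d}.png")
```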
-# --------------------------------------------------------------------
-# Write GIF files
-
-
-RAWMODE = {"1": "L", "L": "L", "P": "P"}
-
-
-def _normalize_mode(im):
- """
- Takes an image (or frame), returns an image in a mode that is appropriate
- for saving in a Gif.
-
- It may return the original image, or it may return an image converted to
- palette or 'L' mode.
-
- :param im: Image object
- :returns: Image object
- """
- if im.mode in RAWMODE:
- im.load()
- return im
- if Image.getmodebase(im.mode) == "RGB":
- im = im.convert("P", palette=Image.Palette.ADAPTIVE)
- if im.palette.mode == "RGBA":
- for rgba in im.palette.colors:
- if rgba[3] == 0:
- im.info["transparency"] = im.palette.colors[rgba]
- break
- return im
- return im.convert("L")
-
-
-def _normalize_palette(im, palette, info):
- """
- Normalizes the palette for image.
- - Sets the palette to the incoming palette, if provided.
- - Ensures that there's a palette for L mode images
- - Optimizes the palette if necessary/desired.
-
- :param im: Image object
- :param palette: bytes object containing the source palette, or ....
- :param info: encoderinfo
- :returns: Image object
- """
- source_palette = None
- if palette:
- # a bytes palette
- if isinstance(palette, (bytes, bytearray, list)):
- source_palette = bytearray(palette[:768])
- if isinstance(palette, ImagePalette.ImagePalette):
- source_palette = bytearray(palette.palette)
-
- if im.mode == "P":
- if not source_palette:
- source_palette = im.im.getpalette("RGB")[:768]
- else: # L-mode
- if not source_palette:
- source_palette = bytearray(i // 3 for i in range(768))
- im.palette = ImagePalette.ImagePalette("RGB", palette=source_palette)
-
- if palette:
- used_palette_colors = []
- for i in range(0, len(source_palette), 3):
- source_color = tuple(source_palette[i : i + 3])
- index = im.palette.colors.get(source_color)
- if index in used_palette_colors:
- index = None
- used_palette_colors.append(index)
- for i, index in enumerate(used_palette_colors):
- if index is None:
- for j in range(len(used_palette_colors)):
- if j not in used_palette_colors:
- used_palette_colors[i] = j
- break
- im = im.remap_palette(used_palette_colors)
- else:
- used_palette_colors = _get_optimize(im, info)
- if used_palette_colors is not None:
- return im.remap_palette(used_palette_colors, source_palette)
-
- im.palette.palette = source_palette
- return im
-
-
-def _write_single_frame(im, fp, palette):
- im_out = _normalize_mode(im)
- for k, v in im_out.info.items():
- im.encoderinfo.setdefault(k, v)
- im_out = _normalize_palette(im_out, palette, im.encoderinfo)
-
- for s in _get_global_header(im_out, im.encoderinfo):
- fp.write(s)
-
- # local image header
- flags = 0
- if get_interlace(im):
- flags = flags | 64
- _write_local_header(fp, im, (0, 0), flags)
-
- im_out.encoderconfig = (8, get_interlace(im))
- ImageFile._save(im_out, fp, [("gif", (0, 0) + im.size, 0, RAWMODE[im_out.mode])])
-
- fp.write(b"\0") # end of image data
-
-
-def _getbbox(base_im, im_frame):
- if _get_palette_bytes(im_frame) == _get_palette_bytes(base_im):
- delta = ImageChops.subtract_modulo(im_frame, base_im)
- else:
- delta = ImageChops.subtract_modulo(
- im_frame.convert("RGBA"), base_im.convert("RGBA")
- )
- return delta.getbbox(alpha_only=False)
-
-
-def _write_multiple_frames(im, fp, palette):
- duration = im.encoderinfo.get("duration")
- disposal = im.encoderinfo.get("disposal", im.info.get("disposal"))
-
- im_frames = []
- frame_count = 0
- background_im = None
- for imSequence in itertools.chain([im], im.encoderinfo.get("append_images", [])):
- for im_frame in ImageSequence.Iterator(imSequence):
- # a copy is required here since seek can still mutate the image
- im_frame = _normalize_mode(im_frame.copy())
- if frame_count == 0:
- for k, v in im_frame.info.items():
- if k == "transparency":
- continue
- im.encoderinfo.setdefault(k, v)
-
- encoderinfo = im.encoderinfo.copy()
- im_frame = _normalize_palette(im_frame, palette, encoderinfo)
- if "transparency" in im_frame.info:
- encoderinfo.setdefault("transparency", im_frame.info["transparency"])
- if isinstance(duration, (list, tuple)):
- encoderinfo["duration"] = duration[frame_count]
- elif duration is None and "duration" in im_frame.info:
- encoderinfo["duration"] = im_frame.info["duration"]
- if isinstance(disposal, (list, tuple)):
- encoderinfo["disposal"] = disposal[frame_count]
- frame_count += 1
-
- if im_frames:
- # delta frame
- previous = im_frames[-1]
- bbox = _getbbox(previous["im"], im_frame)
- if not bbox:
- # This frame is identical to the previous frame
- if encoderinfo.get("duration"):
- previous["encoderinfo"]["duration"] += encoderinfo["duration"]
- continue
- if encoderinfo.get("disposal") == 2:
- if background_im is None:
- color = im.encoderinfo.get(
- "transparency", im.info.get("transparency", (0, 0, 0))
- )
- background = _get_background(im_frame, color)
- background_im = Image.new("P", im_frame.size, background)
- background_im.putpalette(im_frames[0]["im"].palette)
- bbox = _getbbox(background_im, im_frame)
- else:
- bbox = None
- im_frames.append({"im": im_frame, "bbox": bbox, "encoderinfo": encoderinfo})
-
- if len(im_frames) > 1:
- for frame_data in im_frames:
- im_frame = frame_data["im"]
- if not frame_data["bbox"]:
- # global header
- for s in _get_global_header(im_frame, frame_data["encoderinfo"]):
- fp.write(s)
- offset = (0, 0)
- else:
- # compress difference
- if not palette:
- frame_data["encoderinfo"]["include_color_table"] = True
-
- im_frame = im_frame.crop(frame_data["bbox"])
- offset = frame_data["bbox"][:2]
- _write_frame_data(fp, im_frame, offset, frame_data["encoderinfo"])
- return True
- elif "duration" in im.encoderinfo and isinstance(
- im.encoderinfo["duration"], (list, tuple)
- ):
- # Since multiple frames will not be written, add together the frame durations
- im.encoderinfo["duration"] = sum(im.encoderinfo["duration"])
-
-
-def _save_all(im, fp, filename):
- _save(im, fp, filename, save_all=True)
-
-
-def _save(im, fp, filename, save_all=False):
- # header
- if "palette" in im.encoderinfo or "palette" in im.info:
- palette = im.encoderinfo.get("palette", im.info.get("palette"))
- else:
- palette = None
- im.encoderinfo["optimize"] = im.encoderinfo.get("optimize", True)
-
- if not save_all or not _write_multiple_frames(im, fp, palette):
- _write_single_frame(im, fp, palette)
-
- fp.write(b";") # end of file
-
- if hasattr(fp, "flush"):
- fp.flush()
-
-
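`_save_all` and `_write_multiple_frames` above are reached through the regular `Image.save` call when `save_all=True`. For example:

```python
# Save an animated GIF through the multi-frame writer above.
from PIL import Image

frames = [Image.new("RGB", (64, 64), color) for color in ("red", "green", "blue")]
frames[0].save(
    "animated.gif",
    save_all=True,
    append_images=frames[1:],
    duration=200,   # milliseconds per frame
    loop=0,         # loop forever
)
```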
-def get_interlace(im):
- interlace = im.encoderinfo.get("interlace", 1)
-
- # workaround for @PIL153
- if min(im.size) < 16:
- interlace = 0
-
- return interlace
-
-
-def _write_local_header(fp, im, offset, flags):
- transparent_color_exists = False
- try:
- if "transparency" in im.encoderinfo:
- transparency = im.encoderinfo["transparency"]
- else:
- transparency = im.info["transparency"]
- transparency = int(transparency)
- except (KeyError, ValueError):
- pass
- else:
- # optimize the block away if transparent color is not used
- transparent_color_exists = True
-
- used_palette_colors = _get_optimize(im, im.encoderinfo)
- if used_palette_colors is not None:
- # adjust the transparency index after optimize
- try:
- transparency = used_palette_colors.index(transparency)
- except ValueError:
- transparent_color_exists = False
-
- if "duration" in im.encoderinfo:
- duration = int(im.encoderinfo["duration"] / 10)
- else:
- duration = 0
-
- disposal = int(im.encoderinfo.get("disposal", 0))
-
- if transparent_color_exists or duration != 0 or disposal:
- packed_flag = 1 if transparent_color_exists else 0
- packed_flag |= disposal << 2
- if not transparent_color_exists:
- transparency = 0
-
- fp.write(
- b"!"
- + o8(249) # extension intro
- + o8(4) # length
- + o8(packed_flag) # packed fields
- + o16(duration) # duration
- + o8(transparency) # transparency index
- + o8(0)
- )
-
- include_color_table = im.encoderinfo.get("include_color_table")
- if include_color_table:
- palette_bytes = _get_palette_bytes(im)
- color_table_size = _get_color_table_size(palette_bytes)
- if color_table_size:
- flags = flags | 128 # local color table flag
- flags = flags | color_table_size
-
- fp.write(
- b","
- + o16(offset[0]) # offset
- + o16(offset[1])
- + o16(im.size[0]) # size
- + o16(im.size[1])
- + o8(flags) # flags
- )
- if include_color_table and color_table_size:
- fp.write(_get_header_palette(palette_bytes))
- fp.write(o8(8)) # bits
-
-
-def _save_netpbm(im, fp, filename):
- # Unused by default.
- # To use, uncomment the register_save call at the end of the file.
- #
- # If you need real GIF compression and/or RGB quantization, you
- # can use the external NETPBM/PBMPLUS utilities. See comments
- # below for information on how to enable this.
- tempfile = im._dump()
-
- try:
- with open(filename, "wb") as f:
- if im.mode != "RGB":
- subprocess.check_call(
- ["ppmtogif", tempfile], stdout=f, stderr=subprocess.DEVNULL
- )
- else:
- # Pipe ppmquant output into ppmtogif
- # "ppmquant 256 %s | ppmtogif > %s" % (tempfile, filename)
- quant_cmd = ["ppmquant", "256", tempfile]
- togif_cmd = ["ppmtogif"]
- quant_proc = subprocess.Popen(
- quant_cmd, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL
- )
- togif_proc = subprocess.Popen(
- togif_cmd,
- stdin=quant_proc.stdout,
- stdout=f,
- stderr=subprocess.DEVNULL,
- )
-
- # Allow ppmquant to receive SIGPIPE if ppmtogif exits
- quant_proc.stdout.close()
-
- retcode = quant_proc.wait()
- if retcode:
- raise subprocess.CalledProcessError(retcode, quant_cmd)
-
- retcode = togif_proc.wait()
- if retcode:
- raise subprocess.CalledProcessError(retcode, togif_cmd)
- finally:
- try:
- os.unlink(tempfile)
- except OSError:
- pass
-
-
-# Force optimization so that we can test performance against
-# cases where it took lots of memory and time previously.
-_FORCE_OPTIMIZE = False
-
-
-def _get_optimize(im, info):
- """
- Palette optimization is a potentially expensive operation.
-
- This function determines if the palette should be optimized using
- some heuristics, then returns the list of palette entries in use.
-
- :param im: Image object
- :param info: encoderinfo
- :returns: list of indexes of palette entries in use, or None
- """
- if im.mode in ("P", "L") and info and info.get("optimize", 0):
- # Potentially expensive operation.
-
- # The palette saves 3 bytes per color not used, but palette
- # lengths are restricted to 3*(2**N) bytes. Max saving would
- # be 768 -> 6 bytes if we went all the way down to 2 colors.
- # * If we're over 128 colors, we can't save any space.
- # * If there aren't any holes, it's not worth collapsing.
- # * If we have a 'large' image, the palette is in the noise.
-
- # create the new palette if not every color is used
- optimise = _FORCE_OPTIMIZE or im.mode == "L"
- if optimise or im.width * im.height < 512 * 512:
- # check which colors are used
- used_palette_colors = []
- for i, count in enumerate(im.histogram()):
- if count:
- used_palette_colors.append(i)
-
- if optimise or max(used_palette_colors) >= len(used_palette_colors):
- return used_palette_colors
-
- num_palette_colors = len(im.palette.palette) // Image.getmodebands(
- im.palette.mode
- )
- current_palette_size = 1 << (num_palette_colors - 1).bit_length()
- if (
- # check that the palette would become smaller when saved
- len(used_palette_colors) <= current_palette_size // 2
- # check that the palette is not already the smallest possible size
- and current_palette_size > 2
- ):
- return used_palette_colors
-
-
-def _get_color_table_size(palette_bytes):
- # calculate the palette size for the header
- if not palette_bytes:
- return 0
- elif len(palette_bytes) < 9:
- return 1
- else:
- return math.ceil(math.log(len(palette_bytes) // 3, 2)) - 1
-
-
-def _get_header_palette(palette_bytes):
- """
- Returns the palette, null padded to the next power of 2 (*3) bytes
- suitable for direct inclusion in the GIF header
-
- :param palette_bytes: Unpadded palette bytes, in RGBRGB form
- :returns: Null padded palette
- """
- color_table_size = _get_color_table_size(palette_bytes)
-
- # add the missing amount of bytes
- # the palette has to be 2<<n in size
- actual_target_size_diff = (2 << color_table_size) - len(palette_bytes) // 3
- if actual_target_size_diff > 0:
- palette_bytes += o8(0) * 3 * actual_target_size_diff
- return palette_bytes
-
-
-def _get_palette_bytes(im):
- """
- Gets the palette for inclusion in the gif header
-
- :param im: Image object
- :returns: Bytes, len<=768 suitable for inclusion in gif header
- """
- return im.palette.palette if im.palette else b""
-
-
-def _get_background(im, info_background):
- background = 0
- if info_background:
- if isinstance(info_background, tuple):
- # WebPImagePlugin stores an RGBA value in info["background"]
- # So it must be converted to the same format as GifImagePlugin's
- # info["background"] - a global color table index
- try:
- background = im.palette.getcolor(info_background, im)
- except ValueError as e:
- if str(e) not in (
- # If all 256 colors are in use,
- # then there is no need for the background color
- "cannot allocate more than 256 colors",
- # Ignore non-opaque WebP background
- "cannot add non-opaque RGBA color to RGB palette",
- ):
- raise
- else:
- background = info_background
- return background
-
-
-def _get_global_header(im, info):
- """Return a list of strings representing a GIF header"""
-
- # Header Block
- # https://www.matthewflickinger.com/lab/whatsinagif/bits_and_bytes.asp
-
- version = b"87a"
- if im.info.get("version") == b"89a" or (
- info
- and (
- "transparency" in info
- or "loop" in info
- or info.get("duration")
- or info.get("comment")
- )
- ):
- version = b"89a"
-
- background = _get_background(im, info.get("background"))
-
- palette_bytes = _get_palette_bytes(im)
- color_table_size = _get_color_table_size(palette_bytes)
-
- header = [
- b"GIF" # signature
- + version # version
- + o16(im.size[0]) # canvas width
- + o16(im.size[1]), # canvas height
- # Logical Screen Descriptor
- # size of global color table + global color table flag
- o8(color_table_size + 128), # packed fields
- # background + reserved/aspect
- o8(background) + o8(0),
- # Global Color Table
- _get_header_palette(palette_bytes),
- ]
- if "loop" in info:
- header.append(
- b"!"
- + o8(255) # extension intro
- + o8(11)
- + b"NETSCAPE2.0"
- + o8(3)
- + o8(1)
- + o16(info["loop"]) # number of loops
- + o8(0)
- )
- if info.get("comment"):
- comment_block = b"!" + o8(254) # extension intro
-
- comment = info["comment"]
- if isinstance(comment, str):
- comment = comment.encode()
- for i in range(0, len(comment), 255):
- subblock = comment[i : i + 255]
- comment_block += o8(len(subblock)) + subblock
-
- comment_block += o8(0)
- header.append(comment_block)
- return header
-
-
-def _write_frame_data(fp, im_frame, offset, params):
- try:
- im_frame.encoderinfo = params
-
- # local image header
- _write_local_header(fp, im_frame, offset, 0)
-
- ImageFile._save(
- im_frame, fp, [("gif", (0, 0) + im_frame.size, 0, RAWMODE[im_frame.mode])]
- )
-
- fp.write(b"\0") # end of image data
- finally:
- del im_frame.encoderinfo
-
-
-# --------------------------------------------------------------------
-# Legacy GIF utilities
-
-
-def getheader(im, palette=None, info=None):
- """
- Legacy Method to get Gif data from image.
-
- Warning: May modify image data.
-
- :param im: Image object
- :param palette: bytes object containing the source palette, or ....
- :param info: encoderinfo
- :returns: tuple of (list of header items, optimized palette)
-
- """
- used_palette_colors = _get_optimize(im, info)
-
- if info is None:
- info = {}
-
- if "background" not in info and "background" in im.info:
- info["background"] = im.info["background"]
-
- im_mod = _normalize_palette(im, palette, info)
- im.palette = im_mod.palette
- im.im = im_mod.im
- header = _get_global_header(im, info)
-
- return header, used_palette_colors
-
-
-def getdata(im, offset=(0, 0), **params):
- """
- Legacy Method
-
- Return a list of strings representing this image.
- The first string is a local image header, the rest contains
- encoded image data.
-
- To specify duration, add the time in milliseconds,
- e.g. ``getdata(im_frame, duration=1000)``
-
- :param im: Image object
- :param offset: Tuple of (x, y) pixels. Defaults to (0, 0)
- :param \\**params: e.g. duration or other encoder info parameters
- :returns: List of bytes containing GIF encoded frame data
-
- """
-
- class Collector:
- data = []
-
- def write(self, data):
- self.data.append(data)
-
- im.load() # make sure raster data is available
-
- fp = Collector()
-
- _write_frame_data(fp, im, offset, params)
-
- return fp.data
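Purely as an illustration of the two legacy helpers above, and not part of the module itself, a single-frame GIF stream could be hand-assembled roughly as in the sketch below (the input file name is a placeholder):

from PIL import Image
from PIL.GifImagePlugin import getheader, getdata

# Sketch: write a single-frame GIF using the legacy getheader/getdata helpers.
im = Image.open("frame.png").convert("P")  # "frame.png" is a hypothetical input
header, used_palette_colors = getheader(im)
with open("out.gif", "wb") as fp:
    for block in header:
        fp.write(block)
    for block in getdata(im, offset=(0, 0), duration=100):
        fp.write(block)
    fp.write(b";")  # GIF trailer, mirroring _save() above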
-
-
-# --------------------------------------------------------------------
-# Registry
-
-Image.register_open(GifImageFile.format, GifImageFile, _accept)
-Image.register_save(GifImageFile.format, _save)
-Image.register_save_all(GifImageFile.format, _save_all)
-Image.register_extension(GifImageFile.format, ".gif")
-Image.register_mime(GifImageFile.format, "image/gif")
-
-#
-# Uncomment the following line if you wish to use NETPBM/PBMPLUS
-# instead of the built-in "uncompressed" GIF encoder
-
-# Image.register_save(GifImageFile.format, _save_netpbm)
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/varLib/__main__.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/varLib/__main__.py
deleted file mode 100644
index 56fab06e0fe6ac22fce428209c373ecb82d8472a..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/varLib/__main__.py
+++ /dev/null
@@ -1,6 +0,0 @@
-import sys
-from fontTools.varLib import main
-
-
-if __name__ == "__main__":
- sys.exit(main())
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/Model3D-98fc2b2c.css b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/Model3D-98fc2b2c.css
deleted file mode 100644
index cee82ea831d77ca0e001baf10a07f84e176679f0..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/Model3D-98fc2b2c.css
+++ /dev/null
@@ -1 +0,0 @@
-.gallery.svelte-1ayixqk{padding:var(--size-1) var(--size-2)}
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-5c6740a6.js b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-5c6740a6.js
deleted file mode 100644
index 6759c659bd51f324a3721cecd4fdb6d0ea79e0c8..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-5c6740a6.js
+++ /dev/null
@@ -1,2 +0,0 @@
-import{S as Z,e as x,s as J,J as H,K as f,p as T,M as N,n as E,A as q,N as F,O,P as ae,k as j,T as se,Z as je,U as me,o as U,Q as z,aj as Ue,af as he,Y as Me,X as Se,u as G,v as p,y as Q,z as g,R as _e,x as M,a1 as Ce,B as ce,a6 as Ee,aB as Pe,F as y,h as ve,m as le,j as Re,t as ze,a9 as Fe,ab as Ae,ac as De,ad as Ie,al as Oe,a7 as Xe,E as Le,ae as He,q as Je,r as Ke}from"./index-f877dfd5.js";import{n as be}from"./ModifyUpload.svelte_svelte_type_style_lang-d2acacf0.js";import{B as We}from"./Button-11a87b79.js";import{U as Ge}from"./Upload-3aa22eef.js";import{M as Qe}from"./ModifyUpload-87f877d6.js";import{B as ye}from"./BlockLabel-7929e88d.js";import{U as Ye,W as Ze}from"./StaticImage.svelte_svelte_type_style_lang-72cfcc0b.js";import{I as xe}from"./IconButton-34da90d2.js";import{E as $e}from"./Empty-2159e5e9.js";import{u as el,S as ll}from"./ShareButton-cdd94184.js";import{D as tl}from"./Download-a587c81f.js";import{U as nl}from"./UploadText-8aae32a4.js";import"./Blocks-adc2d4ca.js";function al(n){let e,t;return{c(){e=H("svg"),t=H("path"),f(t,"d","M8 3H5a2 2 0 0 0-2 2v3m18 0V5a2 2 0 0 0-2-2h-3m0 18h3a2 2 0 0 0 2-2v-3M3 16v3a2 2 0 0 0 2 2h3"),f(e,"xmlns","http://www.w3.org/2000/svg"),f(e,"width","100%"),f(e,"height","100%"),f(e,"viewBox","0 0 24 24"),f(e,"fill","none"),f(e,"stroke","currentColor"),f(e,"stroke-width","1.5"),f(e,"stroke-linecap","round"),f(e,"stroke-linejoin","round")},m(l,a){T(l,e,a),N(e,t)},p:E,i:E,o:E,d(l){l&&q(e)}}}class ol extends Z{constructor(e){super(),x(this,e,null,al,J,{})}}function rl(n){let e,t,l;return{c(){e=H("svg"),t=H("rect"),l=H("rect"),f(t,"x","6"),f(t,"y","4"),f(t,"width","4"),f(t,"height","16"),f(l,"x","14"),f(l,"y","4"),f(l,"width","4"),f(l,"height","16"),f(e,"xmlns","http://www.w3.org/2000/svg"),f(e,"width","100%"),f(e,"height","100%"),f(e,"viewBox","0 0 24 24"),f(e,"fill","none"),f(e,"stroke","currentColor"),f(e,"stroke-width","1.5"),f(e,"stroke-linecap","round"),f(e,"stroke-linejoin","round")},m(a,o){T(a,e,o),N(e,t),N(e,l)},p:E,i:E,o:E,d(a){a&&q(e)}}}class il extends Z{constructor(e){super(),x(this,e,null,rl,J,{})}}function ul(n){let e,t;return{c(){e=H("svg"),t=H("polygon"),f(t,"points","5 3 19 12 5 21 5 3"),f(e,"xmlns","http://www.w3.org/2000/svg"),f(e,"width","100%"),f(e,"height","100%"),f(e,"viewBox","0 0 24 24"),f(e,"fill","none"),f(e,"stroke","currentColor"),f(e,"stroke-width","1.5"),f(e,"stroke-linecap","round"),f(e,"stroke-linejoin","round")},m(l,a){T(l,e,a),N(e,t)},p:E,i:E,o:E,d(l){l&&q(e)}}}class sl extends Z{constructor(e){super(),x(this,e,null,ul,J,{})}}function fl(n){let e,t,l;return{c(){e=H("svg"),t=H("polygon"),l=H("rect"),f(t,"points","23 7 16 12 23 17 23 7"),f(l,"x","1"),f(l,"y","5"),f(l,"width","15"),f(l,"height","14"),f(l,"rx","2"),f(l,"ry","2"),f(e,"xmlns","http://www.w3.org/2000/svg"),f(e,"width","100%"),f(e,"height","100%"),f(e,"viewBox","0 0 24 24"),f(e,"fill","none"),f(e,"stroke","currentColor"),f(e,"stroke-width","1.5"),f(e,"stroke-linecap","round"),f(e,"stroke-linejoin","round"),f(e,"class","feather feather-video")},m(a,o){T(a,e,o),N(e,t),N(e,l)},p:E,i:E,o:E,d(a){a&&q(e)}}}class de extends Z{constructor(e){super(),x(this,e,null,fl,J,{})}}const ge=n=>{let e=["B","KB","MB","GB","PB"],t=0;for(;n>1024;)n/=1024,t++;let l=e[t];return n.toFixed(1)+" "+l},_l=()=>!0;function cl(n,{autoplay:e}){async function t(){e&&await n.play()}return n.addEventListener("loadeddata",t),{destroy(){n.removeEventListener("loadeddata",t)}}}const{isNaN:dl}=Ee;function ml(n){let e,t;return e=new 
il({}),{c(){j(e.$$.fragment)},m(l,a){U(e,l,a),t=!0},i(l){t||(g(e.$$.fragment,l),t=!0)},o(l){p(e.$$.fragment,l),t=!1},d(l){M(e,l)}}}function hl(n){let e,t;return e=new sl({}),{c(){j(e.$$.fragment)},m(l,a){U(e,l,a),t=!0},i(l){t||(g(e.$$.fragment,l),t=!0)},o(l){p(e.$$.fragment,l),t=!1},d(l){M(e,l)}}}function bl(n){let e,t;return e=new Ye({}),{c(){j(e.$$.fragment)},m(l,a){U(e,l,a),t=!0},i(l){t||(g(e.$$.fragment,l),t=!0)},o(l){p(e.$$.fragment,l),t=!1},d(l){M(e,l)}}}function gl(n){let e,t,l,a,o,u,b=!1,c,m=!0,r,s,i,d,C,B,V,D,w,A=fe(n[5])+"",X,K,I=fe(n[6])+"",L,k,S,h,$,ee,Y,R,te,oe;function re(){cancelAnimationFrame(c),t.paused||(c=Pe(re),b=!0),n[15].call(t)}const ie=[bl,hl,ml],W=[];function ue(v,P){return v[5]===v[6]?0:v[7]?1:2}return B=ue(n),V=W[B]=ie[B](n),Y=new ol({}),{c(){e=F("div"),t=F("video"),l=F("track"),s=O(),i=F("div"),d=F("div"),C=F("span"),V.c(),D=O(),w=F("span"),X=ae(A),K=ae(" / "),L=ae(I),k=O(),S=F("progress"),$=O(),ee=F("div"),j(Y.$$.fragment),f(l,"kind","captions"),se(l.src,a=n[1])||f(l,"src",a),l.default=!0,se(t.src,o=n[0])||f(t,"src",o),f(t,"preload","auto"),f(t,"data-testid",u=`${n[4]}-player`),f(t,"class","svelte-1voqrms"),n[6]===void 0&&je(()=>n[16].call(t)),me(t,"mirror",n[2]),f(C,"class","icon svelte-1voqrms"),f(w,"class","time svelte-1voqrms"),S.value=h=n[5]/n[6]||0,f(S,"class","svelte-1voqrms"),f(ee,"class","icon svelte-1voqrms"),f(d,"class","inner svelte-1voqrms"),f(i,"class","controls svelte-1voqrms"),f(e,"class","wrap svelte-1voqrms")},m(v,P){T(v,e,P),N(e,t),N(t,l),n[18](t),N(e,s),N(e,i),N(i,d),N(d,C),W[B].m(C,null),N(d,D),N(d,w),N(w,X),N(w,K),N(w,L),N(d,k),N(d,S),N(d,$),N(d,ee),U(Y,ee,null),R=!0,te||(oe=[z(t,"click",n[10]),z(t,"play",n[13]),z(t,"pause",n[14]),z(t,"ended",n[12]),z(t,"timeupdate",re),z(t,"durationchange",n[16]),z(t,"play",n[17]),z(t,"pause",n[17]),Ue(r=cl.call(null,t,{autoplay:n[3]})),z(C,"click",n[10]),z(S,"mousemove",n[9]),z(S,"touchmove",he(n[9])),z(S,"click",Me(he(n[11]))),z(ee,"click",n[19])],te=!0)},p(v,[P]){(!R||P&2&&!se(l.src,a=v[1]))&&f(l,"src",a),(!R||P&1&&!se(t.src,o=v[0]))&&f(t,"src",o),(!R||P&16&&u!==(u=`${v[4]}-player`))&&f(t,"data-testid",u),!b&&P&32&&!dl(v[5])&&(t.currentTime=v[5]),b=!1,P&128&&m!==(m=v[7])&&t[m?"pause":"play"](),r&&Se(r.update)&&P&8&&r.update.call(null,{autoplay:v[3]}),(!R||P&4)&&me(t,"mirror",v[2]);let ne=B;B=ue(v),B!==ne&&(G(),p(W[ne],1,1,()=>{W[ne]=null}),Q(),V=W[B],V||(V=W[B]=ie[B](v),V.c()),g(V,1),V.m(C,null)),(!R||P&32)&&A!==(A=fe(v[5])+"")&&_e(X,A),(!R||P&64)&&I!==(I=fe(v[6])+"")&&_e(L,I),(!R||P&96&&h!==(h=v[5]/v[6]||0))&&(S.value=h)},i(v){R||(g(V),g(Y.$$.fragment,v),R=!0)},o(v){p(V),p(Y.$$.fragment,v),R=!1},d(v){v&&q(e),n[18](null),W[B].d(),M(Y),te=!1,Ce(oe)}}}function fe(n){if(isNaN(n)||!isFinite(n))return"...";const e=Math.floor(n/60);let t=Math.floor(n%60);return n<10&&(t=`0${t}`),`${e}:${t}`}function wl(n,e,t){let{src:l}=e,{subtitle:a=null}=e,{mirror:o}=e,{autoplay:u}=e,{label:b="test"}=e;const c=ce();let m=0,r,s=!0,i;function d(k){if(!r)return;if(k.type==="click"){B(k);return}if(k.type!=="touchmove"&&!(k.buttons&1))return;const S=k.type==="touchmove"?k.touches[0].clientX:k.clientX,{left:h,right:$}=k.currentTarget.getBoundingClientRect();t(5,m=r*(S-h)/($-h))}async function C(){document.fullscreenElement!=i&&(i.currentTime>0&&!i.paused&&!i.ended&&i.readyState>i.HAVE_CURRENT_DATA?i.pause():await i.play())}function B(k){const{left:S,right:h}=k.currentTarget.getBoundingClientRect();t(5,m=r*(k.clientX-S)/(h-S))}function V(){c("stop"),c("end")}function D(k){y.call(this,n,k)}function w(k){y.call(this,n,k)}function 
A(){m=this.currentTime,t(5,m)}function X(){r=this.duration,t(6,r)}function K(){s=this.paused,t(7,s)}function I(k){ve[k?"unshift":"push"](()=>{i=k,t(8,i)})}const L=()=>i.requestFullscreen();return n.$$set=k=>{"src"in k&&t(0,l=k.src),"subtitle"in k&&t(1,a=k.subtitle),"mirror"in k&&t(2,o=k.mirror),"autoplay"in k&&t(3,u=k.autoplay),"label"in k&&t(4,b=k.label)},[l,a,o,u,b,m,r,s,i,d,C,B,V,D,w,A,X,K,I,L]}class Be extends Z{constructor(e){super(),x(this,e,wl,gl,J,{src:0,subtitle:1,mirror:2,autoplay:3,label:4})}}function kl(n){let e,t,l,a,o,u,b;e=new Qe({}),e.$on("clear",n[11]);const c=[yl,vl],m=[];function r(s,i){return l==null&&(l=!!_l()),l?0:s[0].size?1:-1}return~(a=r(n))&&(o=m[a]=c[a](n)),{c(){j(e.$$.fragment),t=O(),o&&o.c(),u=le()},m(s,i){U(e,s,i),T(s,t,i),~a&&m[a].m(s,i),T(s,u,i),b=!0},p(s,i){let d=a;a=r(s),a===d?~a&&m[a].p(s,i):(o&&(G(),p(m[d],1,1,()=>{m[d]=null}),Q()),~a?(o=m[a],o?o.p(s,i):(o=m[a]=c[a](s),o.c()),g(o,1),o.m(u.parentNode,u)):o=null)},i(s){b||(g(e.$$.fragment,s),g(o),b=!0)},o(s){p(e.$$.fragment,s),p(o),b=!1},d(s){s&&(q(t),q(u)),M(e,s),~a&&m[a].d(s)}}}function pl(n){let e,t,l,a;const o=[Nl,Bl],u=[];function b(c,m){return c[2]==="upload"?0:c[2]==="webcam"?1:-1}return~(e=b(n))&&(t=u[e]=o[e](n)),{c(){t&&t.c(),l=le()},m(c,m){~e&&u[e].m(c,m),T(c,l,m),a=!0},p(c,m){let r=e;e=b(c),e===r?~e&&u[e].p(c,m):(t&&(G(),p(u[r],1,1,()=>{u[r]=null}),Q()),~e?(t=u[e],t?t.p(c,m):(t=u[e]=o[e](c),t.c()),g(t,1),t.m(l.parentNode,l)):t=null)},i(c){a||(g(t),a=!0)},o(c){p(t),a=!1},d(c){c&&q(l),~e&&u[e].d(c)}}}function vl(n){let e,t=n[0].name+"",l,a,o,u=ge(n[0].size)+"",b;return{c(){e=F("div"),l=ae(t),a=O(),o=F("div"),b=ae(u),f(e,"class","file-name svelte-a6ruol"),f(o,"class","file-size svelte-a6ruol")},m(c,m){T(c,e,m),N(e,l),T(c,a,m),T(c,o,m),N(o,b)},p(c,m){m&1&&t!==(t=c[0].name+"")&&_e(l,t),m&1&&u!==(u=ge(c[0].size)+"")&&_e(b,u)},i:E,o:E,d(c){c&&(q(e),q(a),q(o))}}}function yl(n){let e=n[0]?.data,t,l,a=we(n);return{c(){a.c(),t=le()},m(o,u){a.m(o,u),T(o,t,u),l=!0},p(o,u){u&1&&J(e,e=o[0]?.data)?(G(),p(a,1,1,E),Q(),a=we(o),a.c(),g(a,1),a.m(t.parentNode,t)):a.p(o,u)},i(o){l||(g(a),l=!0)},o(o){p(a),l=!1},d(o){o&&q(t),a.d(o)}}}function we(n){let e,t;return e=new Be({props:{autoplay:n[7],src:n[0].data,subtitle:n[1]?.data,mirror:n[5]&&n[2]==="webcam",label:n[3]}}),e.$on("play",n[18]),e.$on("pause",n[19]),e.$on("stop",n[20]),e.$on("end",n[21]),{c(){j(e.$$.fragment)},m(l,a){U(e,l,a),t=!0},p(l,a){const o={};a&128&&(o.autoplay=l[7]),a&1&&(o.src=l[0].data),a&2&&(o.subtitle=l[1]?.data),a&36&&(o.mirror=l[5]&&l[2]==="webcam"),a&8&&(o.label=l[3]),e.$set(o)},i(l){t||(g(e.$$.fragment,l),t=!0)},o(l){p(e.$$.fragment,l),t=!1},d(l){M(e,l)}}}function Bl(n){let e,t;return e=new Ze({props:{mirror_webcam:n[5],include_audio:n[6],mode:"video"}}),e.$on("error",n[14]),e.$on("capture",n[15]),e.$on("start_recording",n[16]),e.$on("stop_recording",n[17]),{c(){j(e.$$.fragment)},m(l,a){U(e,l,a),t=!0},p(l,a){const o={};a&32&&(o.mirror_webcam=l[5]),a&64&&(o.include_audio=l[6]),e.$set(o)},i(l){t||(g(e.$$.fragment,l),t=!0)},o(l){p(e.$$.fragment,l),t=!1},d(l){M(e,l)}}}function Nl(n){let e,t,l;function a(u){n[13](u)}let o={filetype:"video/x-m4v,video/*",$$slots:{default:[Vl]},$$scope:{ctx:n}};return n[8]!==void 0&&(o.dragging=n[8]),e=new Ge({props:o}),ve.push(()=>Re(e,"dragging",a)),e.$on("load",n[10]),{c(){j(e.$$.fragment)},m(u,b){U(e,u,b),l=!0},p(u,b){const c={};b&4194304&&(c.$$scope={dirty:b,ctx:u}),!t&&b&256&&(t=!0,c.dragging=u[8],ze(()=>t=!1)),e.$set(c)},i(u){l||(g(e.$$.fragment,u),l=!0)},o(u){p(e.$$.fragment,u),l=!1},d(u){M(e,u)}}}function 
Vl(n){let e;const t=n[12].default,l=Fe(t,n,n[22],null);return{c(){l&&l.c()},m(a,o){l&&l.m(a,o),e=!0},p(a,o){l&&l.p&&(!e||o&4194304)&&Ae(l,t,a,a[22],e?Ie(t,a[22],o,null):De(a[22]),null)},i(a){e||(g(l,a),e=!0)},o(a){p(l,a),e=!1},d(a){l&&l.d(a)}}}function Tl(n){let e,t,l,a,o,u;e=new ye({props:{show_label:n[4],Icon:de,label:n[3]||"Video"}});const b=[pl,kl],c=[];function m(r,s){return r[0]===null?0:1}return l=m(n),a=c[l]=b[l](n),{c(){j(e.$$.fragment),t=O(),a.c(),o=le()},m(r,s){U(e,r,s),T(r,t,s),c[l].m(r,s),T(r,o,s),u=!0},p(r,[s]){const i={};s&16&&(i.show_label=r[4]),s&8&&(i.label=r[3]||"Video"),e.$set(i);let d=l;l=m(r),l===d?c[l].p(r,s):(G(),p(c[d],1,1,()=>{c[d]=null}),Q(),a=c[l],a?a.p(r,s):(a=c[l]=b[l](r),a.c()),g(a,1),a.m(o.parentNode,o))},i(r){u||(g(e.$$.fragment,r),g(a),u=!0)},o(r){p(e.$$.fragment,r),p(a),u=!1},d(r){r&&(q(t),q(o)),M(e,r),c[l].d(r)}}}function ql(n,e,t){let{$$slots:l={},$$scope:a}=e,{value:o=null}=e,{subtitle:u=null}=e,{source:b}=e,{label:c=void 0}=e,{show_label:m=!0}=e,{mirror_webcam:r=!1}=e,{include_audio:s}=e,{autoplay:i}=e;const d=ce();function C({detail:h}){d("change",h),d("upload",h),t(0,o=h)}function B({detail:h}){t(0,o=null),d("change",h),d("clear")}let V=!1;function D(h){V=h,t(8,V)}function w(h){y.call(this,n,h)}const A=({detail:h})=>d("change",h);function X(h){y.call(this,n,h)}function K(h){y.call(this,n,h)}function I(h){y.call(this,n,h)}function L(h){y.call(this,n,h)}function k(h){y.call(this,n,h)}function S(h){y.call(this,n,h)}return n.$$set=h=>{"value"in h&&t(0,o=h.value),"subtitle"in h&&t(1,u=h.subtitle),"source"in h&&t(2,b=h.source),"label"in h&&t(3,c=h.label),"show_label"in h&&t(4,m=h.show_label),"mirror_webcam"in h&&t(5,r=h.mirror_webcam),"include_audio"in h&&t(6,s=h.include_audio),"autoplay"in h&&t(7,i=h.autoplay),"$$scope"in h&&t(22,a=h.$$scope)},n.$$.update=()=>{n.$$.dirty&256&&d("drag",V)},[o,u,b,c,m,r,s,i,V,d,C,B,l,D,w,A,X,K,I,L,k,S,a]}let jl=class extends Z{constructor(e){super(),x(this,e,ql,Tl,J,{value:0,subtitle:1,source:2,label:3,show_label:4,mirror_webcam:5,include_audio:6,autoplay:7})}};function Ul(n){let e=n[0].data,t,l,a,o,u,b,c,m,r=ke(n);o=new xe({props:{Icon:tl,label:"Download"}});let s=n[5]&&pe(n);return{c(){r.c(),t=O(),l=F("div"),a=F("a"),j(o.$$.fragment),c=O(),s&&s.c(),f(a,"href",u=n[0].data),f(a,"target",window.__is_colab__?"_blank":null),f(a,"download",b=n[0].orig_name||n[0].name),f(l,"class","icon-buttons svelte-rvdo70"),f(l,"data-testid","download-div")},m(i,d){r.m(i,d),T(i,t,d),T(i,l,d),N(l,a),U(o,a,null),N(l,c),s&&s.m(l,null),m=!0},p(i,d){d&1&&J(e,e=i[0].data)?(G(),p(r,1,1,E),Q(),r=ke(i),r.c(),g(r,1),r.m(t.parentNode,t)):r.p(i,d),(!m||d&1&&u!==(u=i[0].data))&&f(a,"href",u),(!m||d&1&&b!==(b=i[0].orig_name||i[0].name))&&f(a,"download",b),i[5]?s?(s.p(i,d),d&32&&g(s,1)):(s=pe(i),s.c(),g(s,1),s.m(l,null)):s&&(G(),p(s,1,1,()=>{s=null}),Q())},i(i){m||(g(r),g(o.$$.fragment,i),g(s),m=!0)},o(i){p(r),p(o.$$.fragment,i),p(s),m=!1},d(i){i&&(q(t),q(l)),r.d(i),M(o),s&&s.d()}}}function Ml(n){let e,t;return e=new $e({props:{unpadded_box:!0,size:"large",$$slots:{default:[Sl]},$$scope:{ctx:n}}}),{c(){j(e.$$.fragment)},m(l,a){U(e,l,a),t=!0},p(l,a){const o={};a&32768&&(o.$$scope={dirty:a,ctx:l}),e.$set(o)},i(l){t||(g(e.$$.fragment,l),t=!0)},o(l){p(e.$$.fragment,l),t=!1},d(l){M(e,l)}}}function ke(n){let e,t;return e=new Be({props:{src:n[0].data,subtitle:n[1]?.data,autoplay:n[4],mirror:!1,label:n[2]}}),e.$on("play",n[6]),e.$on("pause",n[7]),e.$on("ended",n[8]),{c(){j(e.$$.fragment)},m(l,a){U(e,l,a),t=!0},p(l,a){const 
o={};a&1&&(o.src=l[0].data),a&2&&(o.subtitle=l[1]?.data),a&16&&(o.autoplay=l[4]),a&4&&(o.label=l[2]),e.$set(o)},i(l){t||(g(e.$$.fragment,l),t=!0)},o(l){p(e.$$.fragment,l),t=!1},d(l){M(e,l)}}}function pe(n){let e,t;return e=new ll({props:{value:n[0],formatter:n[9]}}),e.$on("error",n[10]),e.$on("share",n[11]),{c(){j(e.$$.fragment)},m(l,a){U(e,l,a),t=!0},p(l,a){const o={};a&1&&(o.value=l[0]),e.$set(o)},i(l){t||(g(e.$$.fragment,l),t=!0)},o(l){p(e.$$.fragment,l),t=!1},d(l){M(e,l)}}}function Sl(n){let e,t;return e=new de({}),{c(){j(e.$$.fragment)},m(l,a){U(e,l,a),t=!0},i(l){t||(g(e.$$.fragment,l),t=!0)},o(l){p(e.$$.fragment,l),t=!1},d(l){M(e,l)}}}function Cl(n){let e,t,l,a,o,u;e=new ye({props:{show_label:n[3],Icon:de,label:n[2]||"Video"}});const b=[Ml,Ul],c=[];function m(r,s){return r[0]===null?0:1}return l=m(n),a=c[l]=b[l](n),{c(){j(e.$$.fragment),t=O(),a.c(),o=le()},m(r,s){U(e,r,s),T(r,t,s),c[l].m(r,s),T(r,o,s),u=!0},p(r,[s]){const i={};s&8&&(i.show_label=r[3]),s&4&&(i.label=r[2]||"Video"),e.$set(i);let d=l;l=m(r),l===d?c[l].p(r,s):(G(),p(c[d],1,1,()=>{c[d]=null}),Q(),a=c[l],a?a.p(r,s):(a=c[l]=b[l](r),a.c()),g(a,1),a.m(o.parentNode,o))},i(r){u||(g(e.$$.fragment,r),g(a),u=!0)},o(r){p(e.$$.fragment,r),p(a),u=!1},d(r){r&&(q(t),q(o)),M(e,r),c[l].d(r)}}}function El(n,e,t){let{value:l=null}=e,{subtitle:a=null}=e,{label:o=void 0}=e,{show_label:u=!0}=e,{autoplay:b}=e,{show_share_button:c=!0}=e,m=null,r=null;const s=ce();Oe(async()=>{l!==m&&a!==r&&r!==null&&(m=l,t(0,l=null),await Xe(),t(0,l=m)),m=l,r=a});function i(w){y.call(this,n,w)}function d(w){y.call(this,n,w)}function C(w){y.call(this,n,w)}const B=async w=>w?await el(w.data,"url"):"";function V(w){y.call(this,n,w)}function D(w){y.call(this,n,w)}return n.$$set=w=>{"value"in w&&t(0,l=w.value),"subtitle"in w&&t(1,a=w.subtitle),"label"in w&&t(2,o=w.label),"show_label"in w&&t(3,u=w.show_label),"autoplay"in w&&t(4,b=w.autoplay),"show_share_button"in w&&t(5,c=w.show_share_button)},n.$$.update=()=>{n.$$.dirty&1&&l&&s("change",l)},[l,a,o,u,b,c,i,d,C,B,V,D]}class Pl extends Z{constructor(e){super(),x(this,e,El,Cl,J,{value:0,subtitle:1,label:2,show_label:3,autoplay:4,show_share_button:5})}}function Rl(n){let e,t;return e=new jl({props:{value:n[18],subtitle:n[19],label:n[5],show_label:n[7],source:n[6],mirror_webcam:n[10],include_audio:n[11],autoplay:n[16],$$slots:{default:[Fl]},$$scope:{ctx:n}}}),e.$on("change",n[21]),e.$on("drag",n[30]),e.$on("error",n[31]),e.$on("clear",n[32]),e.$on("play",n[33]),e.$on("pause",n[34]),e.$on("upload",n[35]),e.$on("stop",n[36]),e.$on("end",n[37]),e.$on("start_recording",n[38]),e.$on("stop_recording",n[39]),{c(){j(e.$$.fragment)},m(l,a){U(e,l,a),t=!0},p(l,a){const o={};a[0]&262144&&(o.value=l[18]),a[0]&524288&&(o.subtitle=l[19]),a[0]&32&&(o.label=l[5]),a[0]&128&&(o.show_label=l[7]),a[0]&64&&(o.source=l[6]),a[0]&1024&&(o.mirror_webcam=l[10]),a[0]&2048&&(o.include_audio=l[11]),a[0]&65536&&(o.autoplay=l[16]),a[1]&1024&&(o.$$scope={dirty:a,ctx:l}),e.$set(o)},i(l){t||(g(e.$$.fragment,l),t=!0)},o(l){p(e.$$.fragment,l),t=!1},d(l){M(e,l)}}}function zl(n){let e,t;return e=new Pl({props:{value:n[18],subtitle:n[19],label:n[5],show_label:n[7],autoplay:n[16],show_share_button:n[17]}}),e.$on("play",n[25]),e.$on("pause",n[26]),e.$on("stop",n[27]),e.$on("share",n[28]),e.$on("error",n[29]),{c(){j(e.$$.fragment)},m(l,a){U(e,l,a),t=!0},p(l,a){const 
o={};a[0]&262144&&(o.value=l[18]),a[0]&524288&&(o.subtitle=l[19]),a[0]&32&&(o.label=l[5]),a[0]&128&&(o.show_label=l[7]),a[0]&65536&&(o.autoplay=l[16]),a[0]&131072&&(o.show_share_button=l[17]),e.$set(o)},i(l){t||(g(e.$$.fragment,l),t=!0)},o(l){p(e.$$.fragment,l),t=!1},d(l){M(e,l)}}}function Fl(n){let e,t;return e=new nl({props:{type:"video"}}),{c(){j(e.$$.fragment)},m(l,a){U(e,l,a),t=!0},p:E,i(l){t||(g(e.$$.fragment,l),t=!0)},o(l){p(e.$$.fragment,l),t=!1},d(l){M(e,l)}}}function Al(n){let e,t,l,a,o,u;const b=[n[1]];let c={};for(let i=0;i{r[B]=null}),Q(),a=r[l],a?a.p(i,d):(a=r[l]=m[l](i),a.c()),g(a,1),a.m(o.parentNode,o))},i(i){u||(g(e.$$.fragment,i),g(a),u=!0)},o(i){p(e.$$.fragment,i),p(a),u=!1},d(i){i&&(q(t),q(o)),M(e,i),r[l].d(i)}}}function Dl(n){let e,t;return e=new We({props:{visible:n[4],variant:n[15]==="dynamic"&&n[0]===null&&n[6]==="upload"?"dashed":"solid",border_mode:n[20]?"focus":"base",padding:!1,elem_id:n[2],elem_classes:n[3],height:n[8],width:n[9],container:n[12],scale:n[13],min_width:n[14],allow_overflow:!1,$$slots:{default:[Al]},$$scope:{ctx:n}}}),{c(){j(e.$$.fragment)},m(l,a){U(e,l,a),t=!0},p(l,a){const o={};a[0]&16&&(o.visible=l[4]),a[0]&32833&&(o.variant=l[15]==="dynamic"&&l[0]===null&&l[6]==="upload"?"dashed":"solid"),a[0]&1048576&&(o.border_mode=l[20]?"focus":"base"),a[0]&4&&(o.elem_id=l[2]),a[0]&8&&(o.elem_classes=l[3]),a[0]&256&&(o.height=l[8]),a[0]&512&&(o.width=l[9]),a[0]&4096&&(o.container=l[12]),a[0]&8192&&(o.scale=l[13]),a[0]&16384&&(o.min_width=l[14]),a[0]&2067682|a[1]&1024&&(o.$$scope={dirty:a,ctx:l}),e.$set(o)},i(l){t||(g(e.$$.fragment,l),t=!0)},o(l){p(e.$$.fragment,l),t=!1},d(l){M(e,l)}}}function Il(n,e,t){let{elem_id:l=""}=e,{elem_classes:a=[]}=e,{visible:o=!0}=e,{value:u=null}=e,b=null,{label:c}=e,{source:m}=e,{root:r}=e,{root_url:s}=e,{show_label:i}=e,{loading_status:d}=e,{height:C}=e,{width:B}=e,{mirror_webcam:V}=e,{include_audio:D}=e,{container:w=!1}=e,{scale:A=null}=e,{min_width:X=void 0}=e,{mode:K}=e,{autoplay:I=!1}=e,{show_share_button:L=!0}=e,k=null,S=null,h=!1;const $=ce();function ee({detail:_}){_!=null?t(0,u=[_,null]):t(0,u=null),$("change")}function Y(_){y.call(this,n,_)}function R(_){y.call(this,n,_)}function te(_){y.call(this,n,_)}function oe(_){y.call(this,n,_)}function re(_){y.call(this,n,_)}const ie=({detail:_})=>t(20,h=_),W=({detail:_})=>{t(1,d=d||{}),t(1,d.status="error",d),t(1,d.message=_,d)};function ue(_){y.call(this,n,_)}function v(_){y.call(this,n,_)}function P(_){y.call(this,n,_)}function ne(_){y.call(this,n,_)}function Ne(_){y.call(this,n,_)}function Ve(_){y.call(this,n,_)}function Te(_){y.call(this,n,_)}function qe(_){y.call(this,n,_)}return n.$$set=_=>{"elem_id"in _&&t(2,l=_.elem_id),"elem_classes"in _&&t(3,a=_.elem_classes),"visible"in _&&t(4,o=_.visible),"value"in _&&t(0,u=_.value),"label"in _&&t(5,c=_.label),"source"in _&&t(6,m=_.source),"root"in _&&t(22,r=_.root),"root_url"in _&&t(23,s=_.root_url),"show_label"in _&&t(7,i=_.show_label),"loading_status"in _&&t(1,d=_.loading_status),"height"in _&&t(8,C=_.height),"width"in _&&t(9,B=_.width),"mirror_webcam"in _&&t(10,V=_.mirror_webcam),"include_audio"in _&&t(11,D=_.include_audio),"container"in _&&t(12,w=_.container),"scale"in _&&t(13,A=_.scale),"min_width"in _&&t(14,X=_.min_width),"mode"in _&&t(15,K=_.mode),"autoplay"in _&&t(16,I=_.autoplay),"show_share_button"in 
_&&t(17,L=_.show_share_button)},n.$$.update=()=>{n.$$.dirty[0]&12582913&&(u!=null?(t(18,k=be(u[0],r,s)),t(19,S=be(u[1],r,s))):(t(18,k=null),t(19,S=null))),n.$$.dirty[0]&16777217&&JSON.stringify(u)!==JSON.stringify(b)&&(t(24,b=u),$("change"))},[u,d,l,a,o,c,m,i,C,B,V,D,w,A,X,K,I,L,k,S,h,ee,r,s,b,Y,R,te,oe,re,ie,W,ue,v,P,ne,Ne,Ve,Te,qe]}class Ol extends Z{constructor(e){super(),x(this,e,Il,Dl,J,{elem_id:2,elem_classes:3,visible:4,value:0,label:5,source:6,root:22,root_url:23,show_label:7,loading_status:1,height:8,width:9,mirror_webcam:10,include_audio:11,container:12,scale:13,min_width:14,mode:15,autoplay:16,show_share_button:17},null,[-1,-1])}}const tt=Ol,nt=["static","dynamic"],at=n=>({type:{input_payload:"{ name: string; data: string }",response_object:"{ name: string; data: string, is_file: boolean }"},description:{input_payload:"object with file name and base64 data",response_object:"object that includes path to video file. The URL: {ROOT}file={name} contains the data"}});export{tt as Component,at as document,nt as modes};
-//# sourceMappingURL=index-5c6740a6.js.map
diff --git a/spaces/chunnibyou/min_test_1/app.py b/spaces/chunnibyou/min_test_1/app.py
deleted file mode 100644
index df5f68d2a4bbc2072c0880e455e08c546cbb8ea3..0000000000000000000000000000000000000000
--- a/spaces/chunnibyou/min_test_1/app.py
+++ /dev/null
@@ -1,54 +0,0 @@
-#!/usr/bin/env python
-# coding: utf-8
-
-# In[84]:
-
-
-# default_exp app
-
-
-#
-# ## dogs vs cats
-
-# In[85]:
-# export
-from fastai.vision.all import *
-import gradio as gr
-def is_cat(x): return x[0].isupper()
-
-
-# export
-learner = load_learner('model.pkl')
-
-
-
-# In[89]:
-
-
-# export
-categories = ('Dog', 'Cat')
-def classify_image(img):
- pred, idx, probs = learner.predict(img)
- return dict(zip(categories, map(float, probs)))
-
-
-# export
-image = gr.inputs.Image(shape=(192, 192))
-labels = gr.outputs.Label()
-examples = ['dog.jpg']
-
-intf = gr.Interface(fn=classify_image, inputs=image, outputs=labels, examples=examples)
-intf.launch(inline=False)
-
-
-# In[ ]:
-
-
-
-
-
-# In[ ]:
-
-
-
-
diff --git a/spaces/cihyFjudo/fairness-paper-search/Conoce el libro estructura cientifica de la venta jose maria llamas pdf 102 y transforma tu forma de vender.md b/spaces/cihyFjudo/fairness-paper-search/Conoce el libro estructura cientifica de la venta jose maria llamas pdf 102 y transforma tu forma de vender.md
deleted file mode 100644
index 36422a2a7946cdc30609f6996f1473927018e234..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Conoce el libro estructura cientifica de la venta jose maria llamas pdf 102 y transforma tu forma de vender.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
estructura cientifica de la venta jose maria llamas pdf 102
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/cihyFjudo/fairness-paper-search/How to Download Medical Dictionary PDF File for Free A Step-by-Step Tutorial.md b/spaces/cihyFjudo/fairness-paper-search/How to Download Medical Dictionary PDF File for Free A Step-by-Step Tutorial.md
deleted file mode 100644
index 60558731acef311d60de03d8aa354c3403abd3cf..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/How to Download Medical Dictionary PDF File for Free A Step-by-Step Tutorial.md
+++ /dev/null
@@ -1,14 +0,0 @@
-
-
The Centers for Medicare & Medicaid Services (CMS) Center for Consumer Information and Insurance Oversight (CCIIO) is committed to increasing transparency in the Health Insurance Exchange. While health plan information including benefits, copayments, premiums, and geographic coverage is publicly available on Healthcare.gov, CMS also publishes downloadable public use files (PUFs) so that researchers and other stakeholders can more easily access Exchange data.
This best-selling and market-leading dictionary covers all aspects of medical science and terminology. Written by a team of medical experts, it has been fully revised and updated for this new edition to reflect the very latest in medical knowledge and practice. Accessible entries are complemented by over 150 illustrations. The 9th edition includes over 450 new entries and features up-to-date coverage of public health medicine and general practice, drugs and pharmacology, endocrinology (particularly diabetology), and cardiology, amongst other specialist areas. Recommended web links are provided for many entries, and appendices have been expanded to include units of alcohol and the calculation of alcohol by volume, and a table of inherited medical conditions.
-
The UMLS, or Unified Medical Language System, is a set of files and software that brings together many health and biomedical vocabularies and standards to enable interoperability between computer systems.
-
To install the UMLS on your computer, download the files. The MetamorphoSys tool, included with the downloaded files, allows you to customize the UMLS according to your needs. You can then load your customized data into your own database system, such as MySQL or Oracle, or you may browse your data using the MetamorphoSys RRF browser.
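As a rough illustration of that last step, the sketch below loads one RRF file from a MetamorphoSys output into a local SQLite database. The file name MRCONSO.RRF, its pipe-delimited layout, and the column positions are assumptions about the standard UMLS release format rather than details given here.

import sqlite3

# Assumed: MRCONSO.RRF (concept names), pipe-delimited; columns 0, 1 and 14
# are the concept identifier (CUI), language, and term string respectively.
conn = sqlite3.connect("umls_subset.db")
conn.execute("CREATE TABLE IF NOT EXISTS mrconso (cui TEXT, lat TEXT, term TEXT)")

with open("MRCONSO.RRF", encoding="utf-8") as fh:
    for line in fh:
        fields = line.rstrip("\n").split("|")
        conn.execute("INSERT INTO mrconso VALUES (?, ?, ?)",
                     (fields[0], fields[1], fields[14]))

conn.commit()
conn.close()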
-
The medical dictionary for regulatory activities (MedDRA) is designed to be used in the registration, documentation and safety monitoring of products during the marketing authorisation process. Developed by the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH) (multidisciplinary topic M1), MedDRA contains highly specific, standardised medical terminology.
-
-
To request a copy of the PUF, individuals (data recipients) must agree to comply with the terms and conditions set forth in the Data Use Agreement, provide contact information, and complete a short online questionnaire. Once the information provided by the data recipient is received and processed by ACS NSQIP staff, a website address will be submitted electronically to the data recipient. The data recipient will then have 10 days (240 hours) to visit the website and download the data file.
-
Use the dropdown menu to choose which table columns are downloaded for each study and in what format:
Displayed Columns. Choose this option to download only table columns shown onscreen. The default study columns shown onscreen are Row, Status, Study Title, Condition and Interventions. To change which columns are shown in your search results, close the window you are in, click on the Show/Hide Columns link (located on the right side of the search results List tab), and then add or remove columns by marking or unmarking the column names.
All Available Columns. Choose this option to download all available table columns. Includes over 20 columns such as Status, Conditions, Interventions, Study Type, Phase, and Sponsor/Collaborators. For more information about columns, see Customize Your Search Results Display.
Select file format.
-
To immediately begin downloading study records (that is, all registration information as well as any available results information) for the studies found by your search, add "download_fields" between "results/" and "?" in the "search request" URL, and add one or more of the following URL parameters to the end of that URL:
- down_count (options: 10, 100, 1000, 10000): number of records to download. Specifies whether the top 10, 100, 1000, or 10,000 (maximum) studies retrieved by your search are downloaded.
- down_flds (options: all, default): fields to download. Specify "all" available fields listed in the Show/Hide Columns window, or "default" fields (including Title, Status, Has Study Results, Conditions, and Interventions).
- down_fmt (options: plain, csv, tsv, xml, pdf): format of the downloaded file (see Select File Format).
- down_chunk (options: 1, 2, 3, ..., N): which set of records to include in the downloaded file, relative to the down_count setting. For example, down_chunk=1 with down_count=10 downloads the first set of 10 study records (rows 1 to 10 on the Search Results List); down_chunk=2 with down_count=10 downloads the second set (rows 11 to 20).
* Bold text indicates the default setting for each parameter (used if that parameter is missing or not specified).
Example: _fields?cond=cancer&down_count=10
Entering the above URL in a browser searches for "cancer" in the Other Terms search field and downloads a PDF file (default file format when down_fmt is missing) that includes the default fields (when down_flds is missing) for the top 10 studies listed in rows 1 to 10 of the Search Results List (default when down_chunk is missing). To download the "second set" of 10 study records (that is, rows 11 to 20) for the same search as a plain text file, use the following URL:
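The second URL is not reproduced here. Purely as a scripted sketch of the same kind of bulk download, the Python snippet below assembles the query from the parameters described above; the base URL (host and download_fields path) is an assumption, since the full "search request" URL is truncated in this text.

import requests

# Assumed legacy endpoint: your actual "search request" URL with
# "download_fields" inserted between "results/" and "?".
BASE_URL = "https://classic.clinicaltrials.gov/ct2/results/download_fields"

params = {
    "cond": "cancer",        # search term, as in the example above
    "down_count": 10,        # top 10 studies per chunk
    "down_chunk": 2,         # second set: rows 11 to 20
    "down_flds": "default",  # Title, Status, Has Study Results, Conditions, Interventions
    "down_fmt": "plain",     # plain text, per the example
}

response = requests.get(BASE_URL, params=params, timeout=120)
response.raise_for_status()
with open("studies_rows_11_to_20.txt", "wb") as fh:
    fh.write(response.content)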
Display a Single Record in XML
To display an individual study protocol record in your browser in XML, add the URL parameter "displayxml=true" to the end of a "show study" URL:
-
Note: This is a very large file. It will likely take several minutes to download the entire zip file. Additionally, many receiving systems may subject the zip file to automatic security/virus scanning. This scanning may take several additional minutes to complete before the zip file is ready for use. Please be patient.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/cihyFjudo/fairness-paper-search/Labview Serial Number Generator A Guide to Finding and Using Your Serial Number[2].md b/spaces/cihyFjudo/fairness-paper-search/Labview Serial Number Generator A Guide to Finding and Using Your Serial Number[2].md
deleted file mode 100644
index ec3a1e5a3df45e6270ec52b9cfe593e10e121530..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Labview Serial Number Generator A Guide to Finding and Using Your Serial Number[2].md
+++ /dev/null
@@ -1,12 +0,0 @@
-
-
Phase Matrix warrants its products to be free from defects in material and workmanship for one year from the date of delivery. Damage due to accident, abuse, or improper signal level is not covered by the warranty. Removal, defacement, or alteration of any serial number, inspection label, marking or seal may void the warranty. Phase Matrix will repair or replace, at its option, any components of this product that prove to be defective during the warranty period provided that the entire unit is returned freight prepaid to Phase Matrix, Inc. or an authorized service facility. In-warranty units will be returned freight prepaid; out-of-warranty units will be returned freight COLLECT. No warranty other than the above is expressed or implied.
The LI-62XX series of analyzers has a serial port for digital communications, and LabVIEW can be used to read it. The serial port settings on the 62XX instrument are configured from the keypad using FCT17. The variables to be measured are selected by sending *13x,x,x,x,x,x,x,x,x,x, where each x is a channel number you want the instrument to output. Writing *12 followed by a carriage return to the serial port (with a serial write block) makes the instrument send the data out, and it can then be read with a serial read block. Writing to and reading from the instrument can be done in a simple while loop: the instrument is polled for data and read continuously at a rate set by a LabVIEW timer. In the example below, channels 22, 23, 29, 32, 33, 38, 39, 42, and 43 are configured to be sent out through the serial port. A serial settings property node is used to estimate the number of bytes at the port, and that count is fed to the serial read block to read in the data. To parse and plot the data, the Search and Replace String VI and the Spreadsheet String to Array VI are used.
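The article describes that polling loop as a LabVIEW VI; purely as a rough illustration of the same protocol, the Python/pyserial sketch below polls the instrument in the same way. The port name, baud rate, polling interval, and command terminator are assumptions and should be set to match the instrument's FCT17 serial settings.

import time
import serial  # pyserial

CHANNELS = "22,23,29,32,33,38,39,42,43"

# Port name and baud rate are assumptions; match them to the FCT17 settings.
with serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1) as port:
    port.write(f"*13{CHANNELS}\r".encode())    # select which channels the instrument outputs
    while True:
        port.write(b"*12\r")                   # request one data record
        time.sleep(0.5)                        # polling rate, like the LabVIEW timer
        raw = port.read(port.in_waiting or 1)  # read whatever bytes are waiting at the port
        values = raw.decode(errors="ignore").strip().split(",")
        print(values)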
-
Parity is a simple way to error-check. The options are Even, Odd, Mark, and Space, and you can also use no parity. For Even and Odd parity, the serial port sets the parity bit (the bit sent after the data bits) to a value that gives the data packet an even or odd number of logic-high bits, so the receiving side can determine whether the data was corrupted. For example, if the data is 10010010, Even parity sets the parity bit to 1 to keep the number of logic-high bits even, while Odd parity sets it to 0 so that the number of logic-high bits stays odd. Mark parity simply sets the parity bit to logic-high and Space parity sets it to logic-low.
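As a tiny illustration of that rule, independent of any particular serial library, the snippet below computes the parity bit for the 10010010 example.

def parity_bit(data_bits: str, mode: str = "even") -> int:
    # Count the logic-high bits and pick the parity bit so that the total
    # number of high bits (data bits plus parity bit) is even or odd.
    ones = data_bits.count("1")
    if mode == "even":
        return ones % 2
    return 1 - (ones % 2)

print(parity_bit("10010010", "even"))  # 1: three high bits, so parity bit 1 makes four (even)
print(parity_bit("10010010", "odd"))   # 0: three high bits is already an odd count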
-
The basis of 1-Wire® technology is a serial protocol using a single data line plus ground reference for communication. A 1-Wire master initiates and controls the communication with one or more 1-Wire slave devices on the 1-Wire bus (Figure 1). Each 1-Wire slave device has a unique, unalterable, factory-programmed, 64-bit identification number (ID), which serves as device address on the 1-Wire bus. The 8-bit family code, a subset of the 64-bit ID, identifies the device type and functionality. Typically, 1-Wire slave devices operate over the following four voltage ranges:
-
1-Wire technology is based on a serial communication protocol that uses a single data line plus ground reference between the master and slave. The 1-Wire slaves are available in plastic packages as bumped die or stainless-steel iButton form. The minimum function of 1-Wire slaves is a 64-bit ID number. Additional functions are PIO, temperature sensor, time counter, NV SRAM, OTP EPROM, EEPROM, SHA-256/SHA-3/ECDSA engine, SHA-256/SHA-3/ECDSA secure EEPROM, temperature logging and humidity logging. Typical applications for 1-Wire devices include identification and authentication of consumables, rack cards, PCBs, computer accessories, and the protection of IP (e.g., cloning prevention). Special uses of iButton devices are access control, asset management, guard tour systems, time and attendance, electronic cash, and temperature monitoring for food and pharmaceutical safety. Starter EV kits and software drivers are available to assist customers integrating 1-Wire technology in their systems.
-
-
LabVIEW 2010 also uses customer feedback to deliver new features that make getting started easier. For example, LabVIEW now provides a new hardware configuration tool that makes it possible for users to access and configure their LabVIEW Real-Time targets remotely via a Web browser. Other features include a smart installer that automatically detects the software associated with a serial number for faster installation and an improved instrument driver finder that offers prebuilt project examples for specific instruments.
-
The Lab Brick signal generators include a green LED status indicator to show connection to a USB host computer. When the host computer recognizes a Lab Brick signal generator, it loads the GUI software and displays that signal generator's serial number and model number. The GUI software can track and control several connected Lab Brick signal generators, simplifying multiple-signal test setups. In addition, each Lab Brick signal generator can store settings in internal memory, allowing it to power up in a specific instrument state. This same capability also allows a Lab Brick signal generator to be used in an embedded or remote instrument application without USB control required to achieve a given instrument state. In non-USB applications, the Lab Brick signal generators can operate with battery power or remote power supply. For automatic-test-equipment (ATE) applications, a programming guide is available for each Lab Brick signal generator. In addition, they are programmable by means of LabView software drivers from National Instruments (www.ni.com).
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/cllatMTK/TransformerAnalyzer/app.py b/spaces/cllatMTK/TransformerAnalyzer/app.py
deleted file mode 100644
index 97aa5cfbbd0c1fd54c9629ed19a434ddff15bb52..0000000000000000000000000000000000000000
--- a/spaces/cllatMTK/TransformerAnalyzer/app.py
+++ /dev/null
@@ -1,230 +0,0 @@
-# Ref: Ouyang, A. (2023). Understanding the Performance of Transformer Inference (Doctoral dissertation, Massachusetts Institute of Technology).
-
-import streamlit as st
-import pandas as pd
-from model_util import fetch_dictionary_content, load_parameter, get_model, classify_module, get_module_tensors
-from calc_util import *
-from render_util import create_table, header4, header5
-
-
-st.set_page_config(layout='wide')
-if 'model_config' not in st.session_state:
- st.session_state['model_config'] = {}
-
-
-def load_model_config(model_id):
- if 'model_id' in st.session_state['model_config'] and st.session_state['model_config']['model_id'] == model_id:
- return st.session_state['model_config']
- if 'parameter_count' in st.session_state:
- st.session_state.pop('parameter_count')
-
- model_config = {}
- dictionary_content = fetch_dictionary_content(model_id)
- if dictionary_content:
- model_config['model_id'] = model_id
- model_config['hidden_size'] = dictionary_content['hidden_size']
- model_config['num_attention_heads'] = dictionary_content['num_attention_heads']
- model_config['num_hidden_layers'] = dictionary_content['num_hidden_layers']
- model_config['intermediate_size'] = load_parameter(dictionary_content, ['intermediate_size', 'ffn_dim'])
- model_config['vocab_size'] = dictionary_content['vocab_size']
- model_config['max_position_embeddings'] = dictionary_content['max_position_embeddings']
- model_config['layernorm_operation'] = 2
- else:
- st.warning("Fetching information failed! Maybe model info is not public!")
- model_config['model_id'] = 'opt-1.3b'
- model_config['hidden_size'] = 2048
- model_config['num_attention_heads'] = 32
- model_config['num_hidden_layers'] = 24
- model_config['intermediate_size'] = 8192
- model_config['vocab_size'] = 50272
- model_config['max_position_embeddings'] = 2048
- model_config['layernorm_operation'] = 2
-
- try:
- model_config['model'] = get_model(model_id, None, None)
- module_tensors = get_module_tensors(model_config['model'])
- model_config['module_classes'] = classify_module(module_tensors)
- except Exception as e:
- st.warning(e)
- model_config['model'] = None
- model_config['module_classes'] = None
-
- st.session_state['model_config'] = model_config
- return model_config
-
-
-subtotal_parameters = [
- 'embedding_weights',
- 'attention_weights',
- 'mlp_weights',
-]
-
-subtotal_operations = [
- 'embeddings',
- 'attention',
- 'mlp',
- 'total',
-]
-
-
-
-col1, col2, col3, col4, col5 = st.columns([0.8, 2, 2.5, 2.5, 0.01])
-
-inference_config = {}
-parameter_count = {}
-cached_parameter_count = {}
-
-prefilling_operation_count = {}
-generation_operation_count = {}
-prefilling_memory_count = {}
-generation_memory_count = {}
-
-gpu_config = {}
-inference_info = {}
-
-with col1:
- header4("Model")
- model_id = st.text_input("huggingface model id", 'ArthurZ/opt-13b')
- model_config = load_model_config(model_id)
- model_config['hidden_size'] = st.number_input('hidden size', value=model_config['hidden_size'], format ="%d")
- model_config['num_attention_heads'] = st.number_input('num attention heads', value=model_config['num_attention_heads'], format ="%d")
- model_config['num_hidden_layers'] = st.number_input('num hidden layers', value=model_config['num_hidden_layers'], format ="%d")
- model_config['intermediate_size'] = st.number_input('intermediate size', value=model_config['intermediate_size'], format ="%d")
- model_config['vocab_size'] = st.number_input('vocab size', value= model_config['vocab_size'], format ="%d")
- model_config['max_position_embeddings'] = st.number_input('max position embeddings', value=model_config['max_position_embeddings'], format ="%d")
- model_config['hidden_size_per_head'] = model_config['hidden_size']/model_config['num_attention_heads']
-
- header4("Inference Setting")
- inference_config['batchsize'] = st.number_input('batchsize', value=1, format ="%d")
- inference_config['input_seq_length'] = st.number_input('input seq length', value=1, format ="%d")
- inference_config['output_seq_length'] = st.number_input('output seq length', value=1, format ="%d")
- inference_config['byte_per_parameter'] = st.number_input('byte per parameter', value=2, format ="%d")
- inference_config['KV_cache'] = st.checkbox("Use KV cache", value=True)
-
- header4("GPU Setting")
- gpu_config['Name'] = st.text_input('GPU Type', value="A6000")
- gpu_config['TFLOP'] = st.number_input('TFLOP', value=38.7, format ="%2f")
- gpu_config['memory_bandwidth'] = st.number_input('memory bandwidth (GB/s)', value=768, format ="%2d")
- gpu_config['arithmetic_intensity'] = gpu_config['TFLOP']*10**12/gpu_config['memory_bandwidth']/1024**3
- st.write(f"arithmetic_intensity: {gpu_config['arithmetic_intensity']:.3f}")
-
-with col2:
- if 'parameter_count' not in st.session_state:
- if model_config['model']:
- st.info("Model info fetcted!")
- parameter_count = calc_model_size_from_model(model_config, inference_config)
- else:
- st.info("Fail to fetch model info. Using estimation!")
- parameter_count = model_size_estimate(model_config, inference_config)
- st.session_state.parameter_count = parameter_count
- else:
- parameter_count = st.session_state.parameter_count
-
- parameters_items = {key: "{:,}".format(int(parameter_count[key])) for key in parameter_count if key not in subtotal_parameters}
- subtotal_parameters_items = {key: "{:,}".format(int(parameter_count[key])) for key in parameter_count if key in subtotal_parameters}
-
- # Convert dictionaries to pandas dataframes for table display
- df_parameters_items = pd.DataFrame(list(parameters_items.items()), columns=["Parameter", "Count"])
- df_subtotal_parameters_items = pd.DataFrame(list(subtotal_parameters_items.items()), columns=["Parameter", "Count"])
-
- header4("Model Parameters")
- st.markdown(create_table(df_parameters_items))
-
- header4("Parameters Summary")
- st.markdown(create_table(df_subtotal_parameters_items))
-
- model_total_size_in_byte = inference_config['byte_per_parameter'] * (
- parameter_count['embedding_weights'] +
- parameter_count['attention_weights'] +
- parameter_count['mlp_weights'] +
- parameter_count['layernorm']
- )
- st.write(f'model_total_size (Byte): {model_total_size_in_byte:,}')
-
-
- # add parameter viewer
- if model_config['model']:
- header4("Parameters Viewer")
- weight_generic = st.selectbox('Select weight:', options=model_config['module_classes'])
- modules = {}
- for module in model_config['module_classes'][weight_generic]:
- modules.update(module)
- modules = {k: list(v) for k, v in modules.items()}
- modules = pd.DataFrame(list(modules.items()), columns=["Parameter", "Shape"])
- st.markdown(create_table(modules))
-
-with col3: # Prefilling
- prefilling_operation_count = prefilling_operation(model_config, inference_config)
- prefilling_activation_memory_count = prefilling_activation_memory(model_config, inference_config)
- inference_info['inference_prefilling_time'] = prefilling_operation_count['total'] / (gpu_config['TFLOP']*1024**4)
- inference_info['prefilling_memory_latency'] = prefilling_activation_memory_count['total'] / (gpu_config['memory_bandwidth']*1024**3)
- calc_prefilling_throughput(model_config, inference_config, inference_info)
-
- cached_parameter_count['kv_cache'] = 2 * (inference_config['batchsize'] * (model_config['hidden_size'] * model_config['num_hidden_layers'] * inference_config['input_seq_length']))
-
- operation_items = {key: "{:,}".format(int(prefilling_operation_count[key])) for key in prefilling_operation_count if key not in subtotal_operations}
- subtotal_operation_items = {key: "{:,}".format(int(prefilling_operation_count[key])) for key in prefilling_operation_count if key in subtotal_operations}
- prefilling_arithmetic_intensity = {key: "{:.3f}".format(prefilling_operation_count[key]/prefilling_activation_memory_count[key] if prefilling_activation_memory_count[key]>0 else float('inf')) for key in prefilling_activation_memory_count}
- prefilling_activation_memory_count = {key: "{:,}".format(int(value)) for key, value in prefilling_activation_memory_count.items()}
-
-
- ## Convert dictionaries to pandas dataframes for table display
- df_operation_count = pd.DataFrame(list(operation_items.items()), columns=["Operation", "FLOPS"])
- df_subtotal_operation_count = pd.DataFrame(list(subtotal_operation_items.items()), columns=["Operation", "FLOPS"])
-
- df_operation_count["Activation (Byte)"] = df_operation_count["Operation"].map(prefilling_activation_memory_count)
- df_operation_count["Arithmetic Intensity"] = df_operation_count["Operation"].map(prefilling_arithmetic_intensity)
- df_subtotal_operation_count["Activation (Byte)"] = df_subtotal_operation_count["Operation"].map(prefilling_activation_memory_count)
- df_subtotal_operation_count["Arithmetic Intensity"] = df_subtotal_operation_count["Operation"].map(prefilling_arithmetic_intensity)
-
- header4("Inference Ops: Prefilling")
- st.markdown(create_table(df_operation_count))
-
- header5("Summary: Prefilling")
- st.markdown(create_table(df_subtotal_operation_count))
- st.write(f"FLOPS latency: {inference_info['inference_prefilling_time']}")
- st.write(f"Memory latency: {inference_info['prefilling_memory_latency']}")
- st.write(f"Prefillng throughput (tokens/s): {inference_info['prefilling_throughput']:.2f} ({inference_info['prefilling_bound_type']}-bound)")
-
- if inference_config['KV_cache']:
- st.write(f"kv cache (Byte): {cached_parameter_count['kv_cache']:,}")
-
-
-
-with col4: # Generation
- generation_operation_count = generation_operation(model_config, inference_config)
- generation_activation_memory_count = generation_activation_memory(model_config, inference_config)
- inference_info['inference_generation_time'] = generation_operation_count['total'] / (gpu_config['TFLOP']*1024**4)
- inference_info['generation_memory_latency'] = generation_activation_memory_count['total'] / (gpu_config['memory_bandwidth']*1024**3)
- calc_generation_throughput(model_config, inference_config, inference_info)
-
- cached_parameter_count['kv_cache'] = 2 * (inference_config['batchsize'] * (model_config['hidden_size'] * model_config['num_hidden_layers'] * (inference_config['input_seq_length']+inference_config['output_seq_length'])))
-
- operation_items = {key: "{:,}".format(int(generation_operation_count[key])) for key in generation_operation_count if key not in subtotal_operations}
- subtotal_operation_items = {key: "{:,}".format(int(generation_operation_count[key])) for key in generation_operation_count if key in subtotal_operations}
- generation_arithmetic_intensity = {key: "{:.3f}".format(generation_operation_count[key]/generation_activation_memory_count[key] if generation_activation_memory_count[key]>0 else float('inf')) for key in generation_activation_memory_count}
- generation_activation_memory_count = {key: "{:,}".format(int(value)) for key, value in generation_activation_memory_count.items()}
-
- ## Convert dictionaries to pandas dataframes for table display
- df_operation_count = pd.DataFrame(list(operation_items.items()), columns=["Operation", "FLOPS"])
- df_subtotal_operation_count = pd.DataFrame(list(subtotal_operation_items.items()), columns=["Operation", "FLOPS"])
-
- df_operation_count["Activation (Byte)"] = df_operation_count["Operation"].map(generation_activation_memory_count)
- df_operation_count["Arithmetic Intensity"] = df_operation_count["Operation"].map(generation_arithmetic_intensity)
- df_subtotal_operation_count["Activation (Byte)"] = df_subtotal_operation_count["Operation"].map(generation_activation_memory_count)
- df_subtotal_operation_count["Arithmetic Intensity"] = df_subtotal_operation_count["Operation"].map(generation_arithmetic_intensity)
-
- header4("Inference Ops: Generation")
- st.markdown(create_table(df_operation_count))
-
- header5("Summary: Generation")
- st.markdown(create_table(df_subtotal_operation_count))
- #st.write(f"Generation-only throughput (tokens/s): {inference_info['inference_generation_throughput']:.2f}")
- #st.write(f"(Client) Generation throughput (tokens/s): {inference_info['inference_client_generation_throughput']:.2f}")
- st.write(f"FLOPS latency: {inference_info['inference_generation_time']}")
- st.write(f"Memory latency: {inference_info['generation_memory_latency']}")
- st.write(f"Generation-only throughput (tokens/s): {inference_info['generation_throughput']:.2f} ({inference_info['generation_bound_type']}-bound)")
- st.write(f"(Client) Generation throughput (tokens/s): {inference_info['client_generation_throughput']:.2f}")
-
- if inference_config['KV_cache']:
- st.write(f"kv cache (Byte): {cached_parameter_count['kv_cache']:,}")
\ No newline at end of file
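Note: the prefilling and generation columns above both reduce to a roofline-style comparison: total FLOPs against peak compute gives a compute latency, total activation bytes against memory bandwidth gives a memory latency, and the larger of the two determines the reported bound and throughput. The sketch below is a minimal, self-contained restatement of that logic; the function names, the decimal (1e12 / 1e9) unit factors, and the explicit bytes-per-element term in the KV-cache estimate are illustrative assumptions, not the app's exact code.

```python
def roofline_bound(flops: float, bytes_moved: float,
                   peak_tflops: float, mem_bw_gb_s: float) -> tuple[str, float]:
    """Return ('compute' | 'memory', latency_s) for one inference phase."""
    compute_latency = flops / (peak_tflops * 1e12)      # limited by ALU throughput
    memory_latency = bytes_moved / (mem_bw_gb_s * 1e9)  # limited by DRAM traffic
    if compute_latency >= memory_latency:
        return "compute", compute_latency
    return "memory", memory_latency


def kv_cache_bytes(batch: int, hidden: int, layers: int, seq_len: int,
                   bytes_per_elem: int = 2) -> int:
    """Keys + values for every layer; fp16 storage assumed via bytes_per_elem."""
    return 2 * batch * hidden * layers * seq_len * bytes_per_elem


# Example: 1e12 FLOPs over 2e9 bytes on a 300 TFLOP/s, 2 TB/s accelerator
bound, latency = roofline_bound(1e12, 2e9, 300, 2000)
print(f"{bound}-bound, {latency * 1e3:.2f} ms")
```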
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/anyio/_core/_signals.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/anyio/_core/_signals.py
deleted file mode 100644
index 8ea54af86c4be12340de02dc2a6f7eba387e0d98..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/anyio/_core/_signals.py
+++ /dev/null
@@ -1,26 +0,0 @@
-from __future__ import annotations
-
-from typing import AsyncIterator
-
-from ._compat import DeprecatedAsyncContextManager
-from ._eventloop import get_asynclib
-
-
-def open_signal_receiver(
- *signals: int,
-) -> DeprecatedAsyncContextManager[AsyncIterator[int]]:
- """
- Start receiving operating system signals.
-
- :param signals: signals to receive (e.g. ``signal.SIGINT``)
- :return: an asynchronous context manager for an asynchronous iterator which yields signal
- numbers
-
- .. warning:: Windows does not support signals natively so it is best to avoid relying on this
- in cross-platform applications.
-
- .. warning:: On asyncio, this permanently replaces any previous signal handler for the given
- signals, as set via :meth:`~asyncio.loop.add_signal_handler`.
-
- """
- return get_asynclib().open_signal_receiver(*signals)
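For context, typical usage of the helper deleted above looks like the following minimal sketch (assuming an anyio-managed event loop; the Windows and asyncio caveats from the docstring still apply):

```python
import signal

import anyio
from anyio import open_signal_receiver


async def main() -> None:
    # The context manager yields an async iterator of raw signal numbers.
    with open_signal_receiver(signal.SIGINT, signal.SIGTERM) as signals:
        async for signum in signals:
            print(f"received signal {signum}, shutting down")
            break


anyio.run(main)
```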
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/colorLib/builder.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/colorLib/builder.py
deleted file mode 100644
index 442bc20e4223827d8e28c9fbb0290dac6f1553dc..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/colorLib/builder.py
+++ /dev/null
@@ -1,659 +0,0 @@
-"""
-colorLib.builder: Build COLR/CPAL tables from scratch
-
-"""
-import collections
-import copy
-import enum
-from functools import partial
-from math import ceil, log
-from typing import (
- Any,
- Dict,
- Generator,
- Iterable,
- List,
- Mapping,
- Optional,
- Sequence,
- Tuple,
- Type,
- TypeVar,
- Union,
-)
-from fontTools.misc.arrayTools import intRect
-from fontTools.misc.fixedTools import fixedToFloat
-from fontTools.misc.treeTools import build_n_ary_tree
-from fontTools.ttLib.tables import C_O_L_R_
-from fontTools.ttLib.tables import C_P_A_L_
-from fontTools.ttLib.tables import _n_a_m_e
-from fontTools.ttLib.tables import otTables as ot
-from fontTools.ttLib.tables.otTables import ExtendMode, CompositeMode
-from .errors import ColorLibError
-from .geometry import round_start_circle_stable_containment
-from .table_builder import BuildCallback, TableBuilder
-
-
-# TODO move type aliases to colorLib.types?
-T = TypeVar("T")
-_Kwargs = Mapping[str, Any]
-_PaintInput = Union[int, _Kwargs, ot.Paint, Tuple[str, "_PaintInput"]]
-_PaintInputList = Sequence[_PaintInput]
-_ColorGlyphsDict = Dict[str, Union[_PaintInputList, _PaintInput]]
-_ColorGlyphsV0Dict = Dict[str, Sequence[Tuple[str, int]]]
-_ClipBoxInput = Union[
- Tuple[int, int, int, int, int], # format 1, variable
- Tuple[int, int, int, int], # format 0, non-variable
- ot.ClipBox,
-]
-
-
-MAX_PAINT_COLR_LAYER_COUNT = 255
-_DEFAULT_ALPHA = 1.0
-_MAX_REUSE_LEN = 32
-
-
-def _beforeBuildPaintRadialGradient(paint, source):
- x0 = source["x0"]
- y0 = source["y0"]
- r0 = source["r0"]
- x1 = source["x1"]
- y1 = source["y1"]
- r1 = source["r1"]
-
- # TODO apparently no builder_test confirms this works (?)
-
- # avoid abrupt change after rounding when c0 is near c1's perimeter
- c = round_start_circle_stable_containment((x0, y0), r0, (x1, y1), r1)
- x0, y0 = c.centre
- r0 = c.radius
-
- # update source to ensure paint is built with corrected values
- source["x0"] = x0
- source["y0"] = y0
- source["r0"] = r0
- source["x1"] = x1
- source["y1"] = y1
- source["r1"] = r1
-
- return paint, source
-
-
-def _defaultColorStop():
- colorStop = ot.ColorStop()
- colorStop.Alpha = _DEFAULT_ALPHA
- return colorStop
-
-
-def _defaultVarColorStop():
- colorStop = ot.VarColorStop()
- colorStop.Alpha = _DEFAULT_ALPHA
- return colorStop
-
-
-def _defaultColorLine():
- colorLine = ot.ColorLine()
- colorLine.Extend = ExtendMode.PAD
- return colorLine
-
-
-def _defaultVarColorLine():
- colorLine = ot.VarColorLine()
- colorLine.Extend = ExtendMode.PAD
- return colorLine
-
-
-def _defaultPaintSolid():
- paint = ot.Paint()
- paint.Alpha = _DEFAULT_ALPHA
- return paint
-
-
-def _buildPaintCallbacks():
- return {
- (
- BuildCallback.BEFORE_BUILD,
- ot.Paint,
- ot.PaintFormat.PaintRadialGradient,
- ): _beforeBuildPaintRadialGradient,
- (
- BuildCallback.BEFORE_BUILD,
- ot.Paint,
- ot.PaintFormat.PaintVarRadialGradient,
- ): _beforeBuildPaintRadialGradient,
- (BuildCallback.CREATE_DEFAULT, ot.ColorStop): _defaultColorStop,
- (BuildCallback.CREATE_DEFAULT, ot.VarColorStop): _defaultVarColorStop,
- (BuildCallback.CREATE_DEFAULT, ot.ColorLine): _defaultColorLine,
- (BuildCallback.CREATE_DEFAULT, ot.VarColorLine): _defaultVarColorLine,
- (
- BuildCallback.CREATE_DEFAULT,
- ot.Paint,
- ot.PaintFormat.PaintSolid,
- ): _defaultPaintSolid,
- (
- BuildCallback.CREATE_DEFAULT,
- ot.Paint,
- ot.PaintFormat.PaintVarSolid,
- ): _defaultPaintSolid,
- }
-
-
-def populateCOLRv0(
- table: ot.COLR,
- colorGlyphsV0: _ColorGlyphsV0Dict,
- glyphMap: Optional[Mapping[str, int]] = None,
-):
- """Build v0 color layers and add to existing COLR table.
-
- Args:
- table: a raw ``otTables.COLR()`` object (not ttLib's ``table_C_O_L_R_``).
- colorGlyphsV0: map of base glyph names to lists of (layer glyph names,
- color palette index) tuples. Can be empty.
- glyphMap: a map from glyph names to glyph indices, as returned from
- ``TTFont.getReverseGlyphMap()``, to optionally sort base records by GID.
- """
- if glyphMap is not None:
- colorGlyphItems = sorted(
- colorGlyphsV0.items(), key=lambda item: glyphMap[item[0]]
- )
- else:
- colorGlyphItems = colorGlyphsV0.items()
- baseGlyphRecords = []
- layerRecords = []
- for baseGlyph, layers in colorGlyphItems:
- baseRec = ot.BaseGlyphRecord()
- baseRec.BaseGlyph = baseGlyph
- baseRec.FirstLayerIndex = len(layerRecords)
- baseRec.NumLayers = len(layers)
- baseGlyphRecords.append(baseRec)
-
- for layerGlyph, paletteIndex in layers:
- layerRec = ot.LayerRecord()
- layerRec.LayerGlyph = layerGlyph
- layerRec.PaletteIndex = paletteIndex
- layerRecords.append(layerRec)
-
- table.BaseGlyphRecordArray = table.LayerRecordArray = None
- if baseGlyphRecords:
- table.BaseGlyphRecordArray = ot.BaseGlyphRecordArray()
- table.BaseGlyphRecordArray.BaseGlyphRecord = baseGlyphRecords
- if layerRecords:
- table.LayerRecordArray = ot.LayerRecordArray()
- table.LayerRecordArray.LayerRecord = layerRecords
- table.BaseGlyphRecordCount = len(baseGlyphRecords)
- table.LayerRecordCount = len(layerRecords)
-
-
-def buildCOLR(
- colorGlyphs: _ColorGlyphsDict,
- version: Optional[int] = None,
- *,
- glyphMap: Optional[Mapping[str, int]] = None,
- varStore: Optional[ot.VarStore] = None,
- varIndexMap: Optional[ot.DeltaSetIndexMap] = None,
- clipBoxes: Optional[Dict[str, _ClipBoxInput]] = None,
- allowLayerReuse: bool = True,
-) -> C_O_L_R_.table_C_O_L_R_:
- """Build COLR table from color layers mapping.
-
- Args:
-
- colorGlyphs: map of base glyph name to, either list of (layer glyph name,
- color palette index) tuples for COLRv0; or a single ``Paint`` (dict) or
- list of ``Paint`` for COLRv1.
- version: the version of COLR table. If None, the version is determined
- by the presence of COLRv1 paints or variation data (varStore), which
- require version 1; otherwise, if all base glyphs use only simple color
- layers, version 0 is used.
- glyphMap: a map from glyph names to glyph indices, as returned from
- TTFont.getReverseGlyphMap(), to optionally sort base records by GID.
- varStore: Optional ItemVariationStore for deltas associated with v1 layers.
- varIndexMap: Optional DeltaSetIndexMap for deltas associated with v1 layers.
- clipBoxes: Optional map of base glyph name to clip box 4- or 5-tuples:
- (xMin, yMin, xMax, yMax) or (xMin, yMin, xMax, yMax, varIndexBase).
-
- Returns:
- A new COLR table.
- """
- self = C_O_L_R_.table_C_O_L_R_()
-
- if varStore is not None and version == 0:
- raise ValueError("Can't add VarStore to COLRv0")
-
- if version in (None, 0) and not varStore:
- # split color glyphs into v0 and v1 and encode separately
- colorGlyphsV0, colorGlyphsV1 = _split_color_glyphs_by_version(colorGlyphs)
- if version == 0 and colorGlyphsV1:
- raise ValueError("Can't encode COLRv1 glyphs in COLRv0")
- else:
- # unless explicitly requested for v1 or have variations, in which case
- # we encode all color glyph as v1
- colorGlyphsV0, colorGlyphsV1 = {}, colorGlyphs
-
- colr = ot.COLR()
-
- populateCOLRv0(colr, colorGlyphsV0, glyphMap)
-
- colr.LayerList, colr.BaseGlyphList = buildColrV1(
- colorGlyphsV1,
- glyphMap,
- allowLayerReuse=allowLayerReuse,
- )
-
- if version is None:
- version = 1 if (varStore or colorGlyphsV1) else 0
- elif version not in (0, 1):
- raise NotImplementedError(version)
- self.version = colr.Version = version
-
- if version == 0:
- self.ColorLayers = self._decompileColorLayersV0(colr)
- else:
- colr.ClipList = buildClipList(clipBoxes) if clipBoxes else None
- colr.VarIndexMap = varIndexMap
- colr.VarStore = varStore
- self.table = colr
-
- return self
-
-
-def buildClipList(clipBoxes: Dict[str, _ClipBoxInput]) -> ot.ClipList:
- clipList = ot.ClipList()
- clipList.Format = 1
- clipList.clips = {name: buildClipBox(box) for name, box in clipBoxes.items()}
- return clipList
-
-
-def buildClipBox(clipBox: _ClipBoxInput) -> ot.ClipBox:
- if isinstance(clipBox, ot.ClipBox):
- return clipBox
- n = len(clipBox)
- clip = ot.ClipBox()
- if n not in (4, 5):
- raise ValueError(f"Invalid ClipBox: expected 4 or 5 values, found {n}")
- clip.xMin, clip.yMin, clip.xMax, clip.yMax = intRect(clipBox[:4])
- clip.Format = int(n == 5) + 1
- if n == 5:
- clip.VarIndexBase = int(clipBox[4])
- return clip
-
-
-class ColorPaletteType(enum.IntFlag):
- USABLE_WITH_LIGHT_BACKGROUND = 0x0001
- USABLE_WITH_DARK_BACKGROUND = 0x0002
-
- @classmethod
- def _missing_(cls, value):
- # enforce reserved bits
- if isinstance(value, int) and (value < 0 or value & 0xFFFC != 0):
- raise ValueError(f"{value} is not a valid {cls.__name__}")
- return super()._missing_(value)
-
-
-# None, 'abc' or {'en': 'abc', 'de': 'xyz'}
-_OptionalLocalizedString = Union[None, str, Dict[str, str]]
-
-
-def buildPaletteLabels(
- labels: Iterable[_OptionalLocalizedString], nameTable: _n_a_m_e.table__n_a_m_e
-) -> List[Optional[int]]:
- return [
- nameTable.addMultilingualName(l, mac=False)
- if isinstance(l, dict)
- else C_P_A_L_.table_C_P_A_L_.NO_NAME_ID
- if l is None
- else nameTable.addMultilingualName({"en": l}, mac=False)
- for l in labels
- ]
-
-
-def buildCPAL(
- palettes: Sequence[Sequence[Tuple[float, float, float, float]]],
- paletteTypes: Optional[Sequence[ColorPaletteType]] = None,
- paletteLabels: Optional[Sequence[_OptionalLocalizedString]] = None,
- paletteEntryLabels: Optional[Sequence[_OptionalLocalizedString]] = None,
- nameTable: Optional[_n_a_m_e.table__n_a_m_e] = None,
-) -> C_P_A_L_.table_C_P_A_L_:
- """Build CPAL table from list of color palettes.
-
- Args:
- palettes: list of lists of colors encoded as tuples of (R, G, B, A) floats
- in the range [0..1].
- paletteTypes: optional list of ColorPaletteType, one for each palette.
- paletteLabels: optional list of palette labels. Each label can be either:
- None (no label), a string (for default English labels), or a
- localized string (as a dict keyed with BCP47 language codes).
- paletteEntryLabels: optional list of palette entry labels, one for each
- palette entry (see paletteLabels).
- nameTable: optional name table where to store palette and palette entry
- labels. Required if either paletteLabels or paletteEntryLabels is set.
-
- Return:
- A new CPAL table: version 1 if custom palette types or labels are specified, otherwise version 0.
- """
- if len({len(p) for p in palettes}) != 1:
- raise ColorLibError("color palettes have different lengths")
-
- if (paletteLabels or paletteEntryLabels) and not nameTable:
- raise TypeError(
- "nameTable is required if palette or palette entries have labels"
- )
-
- cpal = C_P_A_L_.table_C_P_A_L_()
- cpal.numPaletteEntries = len(palettes[0])
-
- cpal.palettes = []
- for i, palette in enumerate(palettes):
- colors = []
- for j, color in enumerate(palette):
- if not isinstance(color, tuple) or len(color) != 4:
- raise ColorLibError(
- f"In palette[{i}][{j}]: expected (R, G, B, A) tuple, got {color!r}"
- )
- if any(v > 1 or v < 0 for v in color):
- raise ColorLibError(
- f"palette[{i}][{j}] has invalid out-of-range [0..1] color: {color!r}"
- )
- # input colors are RGBA, CPAL encodes them as BGRA
- red, green, blue, alpha = color
- colors.append(
- C_P_A_L_.Color(*(round(v * 255) for v in (blue, green, red, alpha)))
- )
- cpal.palettes.append(colors)
-
- if any(v is not None for v in (paletteTypes, paletteLabels, paletteEntryLabels)):
- cpal.version = 1
-
- if paletteTypes is not None:
- if len(paletteTypes) != len(palettes):
- raise ColorLibError(
- f"Expected {len(palettes)} paletteTypes, got {len(paletteTypes)}"
- )
- cpal.paletteTypes = [ColorPaletteType(t).value for t in paletteTypes]
- else:
- cpal.paletteTypes = [C_P_A_L_.table_C_P_A_L_.DEFAULT_PALETTE_TYPE] * len(
- palettes
- )
-
- if paletteLabels is not None:
- if len(paletteLabels) != len(palettes):
- raise ColorLibError(
- f"Expected {len(palettes)} paletteLabels, got {len(paletteLabels)}"
- )
- cpal.paletteLabels = buildPaletteLabels(paletteLabels, nameTable)
- else:
- cpal.paletteLabels = [C_P_A_L_.table_C_P_A_L_.NO_NAME_ID] * len(palettes)
-
- if paletteEntryLabels is not None:
- if len(paletteEntryLabels) != cpal.numPaletteEntries:
- raise ColorLibError(
- f"Expected {cpal.numPaletteEntries} paletteEntryLabels, "
- f"got {len(paletteEntryLabels)}"
- )
- cpal.paletteEntryLabels = buildPaletteLabels(paletteEntryLabels, nameTable)
- else:
- cpal.paletteEntryLabels = [
- C_P_A_L_.table_C_P_A_L_.NO_NAME_ID
- ] * cpal.numPaletteEntries
- else:
- cpal.version = 0
-
- return cpal
-
-
-# COLR v1 tables
-# See draft proposal at: https://github.com/googlefonts/colr-gradients-spec
-
-
-def _is_colrv0_layer(layer: Any) -> bool:
- # Consider as COLRv0 layer any sequence of length 2 (be it tuple or list) in which
- # the first element is a str (the layerGlyph) and the second element is an int
- # (CPAL paletteIndex).
- # https://github.com/googlefonts/ufo2ft/issues/426
- try:
- layerGlyph, paletteIndex = layer
- except (TypeError, ValueError):
- return False
- else:
- return isinstance(layerGlyph, str) and isinstance(paletteIndex, int)
-
-
-def _split_color_glyphs_by_version(
- colorGlyphs: _ColorGlyphsDict,
-) -> Tuple[_ColorGlyphsV0Dict, _ColorGlyphsDict]:
- colorGlyphsV0 = {}
- colorGlyphsV1 = {}
- for baseGlyph, layers in colorGlyphs.items():
- if all(_is_colrv0_layer(l) for l in layers):
- colorGlyphsV0[baseGlyph] = layers
- else:
- colorGlyphsV1[baseGlyph] = layers
-
- # sanity check
- assert set(colorGlyphs) == (set(colorGlyphsV0) | set(colorGlyphsV1))
-
- return colorGlyphsV0, colorGlyphsV1
-
-
-def _reuse_ranges(num_layers: int) -> Generator[Tuple[int, int], None, None]:
- # TODO feels like something itertools might have already
- for lbound in range(num_layers):
- # Reuse of very large #s of layers is relatively unlikely
- # +2: we want sequences of at least 2
- # otData handles single-record duplication
- for ubound in range(
- lbound + 2, min(num_layers + 1, lbound + 2 + _MAX_REUSE_LEN)
- ):
- yield (lbound, ubound)
-
-
-class LayerReuseCache:
- reusePool: Mapping[Tuple[Any, ...], int]
- tuples: Mapping[int, Tuple[Any, ...]]
- keepAlive: List[ot.Paint] # we need id to remain valid
-
- def __init__(self):
- self.reusePool = {}
- self.tuples = {}
- self.keepAlive = []
-
- def _paint_tuple(self, paint: ot.Paint):
- # start simple, who even cares about cyclic graphs or interesting field types
- def _tuple_safe(value):
- if isinstance(value, enum.Enum):
- return value
- elif hasattr(value, "__dict__"):
- return tuple(
- (k, _tuple_safe(v)) for k, v in sorted(value.__dict__.items())
- )
- elif isinstance(value, collections.abc.MutableSequence):
- return tuple(_tuple_safe(e) for e in value)
- return value
-
- # Cache the tuples for individual Paint instead of the whole sequence
- # because the seq could be a transient slice
- result = self.tuples.get(id(paint), None)
- if result is None:
- result = _tuple_safe(paint)
- self.tuples[id(paint)] = result
- self.keepAlive.append(paint)
- return result
-
- def _as_tuple(self, paints: Sequence[ot.Paint]) -> Tuple[Any, ...]:
- return tuple(self._paint_tuple(p) for p in paints)
-
- def try_reuse(self, layers: List[ot.Paint]) -> List[ot.Paint]:
- found_reuse = True
- while found_reuse:
- found_reuse = False
-
- ranges = sorted(
- _reuse_ranges(len(layers)),
- key=lambda t: (t[1] - t[0], t[1], t[0]),
- reverse=True,
- )
- for lbound, ubound in ranges:
- reuse_lbound = self.reusePool.get(
- self._as_tuple(layers[lbound:ubound]), -1
- )
- if reuse_lbound == -1:
- continue
- new_slice = ot.Paint()
- new_slice.Format = int(ot.PaintFormat.PaintColrLayers)
- new_slice.NumLayers = ubound - lbound
- new_slice.FirstLayerIndex = reuse_lbound
- layers = layers[:lbound] + [new_slice] + layers[ubound:]
- found_reuse = True
- break
- return layers
-
- def add(self, layers: List[ot.Paint], first_layer_index: int):
- for lbound, ubound in _reuse_ranges(len(layers)):
- self.reusePool[self._as_tuple(layers[lbound:ubound])] = (
- lbound + first_layer_index
- )
-
-
-class LayerListBuilder:
- layers: List[ot.Paint]
- cache: LayerReuseCache
- allowLayerReuse: bool
-
- def __init__(self, *, allowLayerReuse=True):
- self.layers = []
- if allowLayerReuse:
- self.cache = LayerReuseCache()
- else:
- self.cache = None
-
- # We need to intercept construction of PaintColrLayers
- callbacks = _buildPaintCallbacks()
- callbacks[
- (
- BuildCallback.BEFORE_BUILD,
- ot.Paint,
- ot.PaintFormat.PaintColrLayers,
- )
- ] = self._beforeBuildPaintColrLayers
- self.tableBuilder = TableBuilder(callbacks)
-
- # COLR layers is unusual in that it modifies shared state
- # so we need a callback into an object
- def _beforeBuildPaintColrLayers(self, dest, source):
- # Sketchy gymnastics: a sequence input will have dropped its layers
- # into NumLayers; get it back
- if isinstance(source.get("NumLayers", None), collections.abc.Sequence):
- layers = source["NumLayers"]
- else:
- layers = source["Layers"]
-
- # Convert maps seqs or whatever into typed objects
- layers = [self.buildPaint(l) for l in layers]
-
- # No reason to have a PaintColrLayers with just one entry
- if len(layers) == 1:
- return layers[0], {}
-
- if self.cache is not None:
- # Look for reuse, with preference to longer sequences
- # This may make the layer list smaller
- layers = self.cache.try_reuse(layers)
-
- # The layer list is now final; if it's too big we need to tree it
- is_tree = len(layers) > MAX_PAINT_COLR_LAYER_COUNT
- layers = build_n_ary_tree(layers, n=MAX_PAINT_COLR_LAYER_COUNT)
-
- # We now have a tree of sequences with Paint leaves.
- # Convert the sequences into PaintColrLayers.
- def listToColrLayers(layer):
- if isinstance(layer, collections.abc.Sequence):
- return self.buildPaint(
- {
- "Format": ot.PaintFormat.PaintColrLayers,
- "Layers": [listToColrLayers(l) for l in layer],
- }
- )
- return layer
-
- layers = [listToColrLayers(l) for l in layers]
-
- # No reason to have a PaintColrLayers with just one entry
- if len(layers) == 1:
- return layers[0], {}
-
- paint = ot.Paint()
- paint.Format = int(ot.PaintFormat.PaintColrLayers)
- paint.NumLayers = len(layers)
- paint.FirstLayerIndex = len(self.layers)
- self.layers.extend(layers)
-
- # Register our parts for reuse provided we aren't a tree
- # If we are a tree the leaves registered for reuse and that will suffice
- if self.cache is not None and not is_tree:
- self.cache.add(layers, paint.FirstLayerIndex)
-
- # we've fully built dest; empty source prevents generalized build from kicking in
- return paint, {}
-
- def buildPaint(self, paint: _PaintInput) -> ot.Paint:
- return self.tableBuilder.build(ot.Paint, paint)
-
- def build(self) -> Optional[ot.LayerList]:
- if not self.layers:
- return None
- layers = ot.LayerList()
- layers.LayerCount = len(self.layers)
- layers.Paint = self.layers
- return layers
-
-
-def buildBaseGlyphPaintRecord(
- baseGlyph: str, layerBuilder: LayerListBuilder, paint: _PaintInput
-) -> ot.BaseGlyphList:
- self = ot.BaseGlyphPaintRecord()
- self.BaseGlyph = baseGlyph
- self.Paint = layerBuilder.buildPaint(paint)
- return self
-
-
-def _format_glyph_errors(errors: Mapping[str, Exception]) -> str:
- lines = []
- for baseGlyph, error in sorted(errors.items()):
- lines.append(f" {baseGlyph} => {type(error).__name__}: {error}")
- return "\n".join(lines)
-
-
-def buildColrV1(
- colorGlyphs: _ColorGlyphsDict,
- glyphMap: Optional[Mapping[str, int]] = None,
- *,
- allowLayerReuse: bool = True,
-) -> Tuple[Optional[ot.LayerList], ot.BaseGlyphList]:
- if glyphMap is not None:
- colorGlyphItems = sorted(
- colorGlyphs.items(), key=lambda item: glyphMap[item[0]]
- )
- else:
- colorGlyphItems = colorGlyphs.items()
-
- errors = {}
- baseGlyphs = []
- layerBuilder = LayerListBuilder(allowLayerReuse=allowLayerReuse)
- for baseGlyph, paint in colorGlyphItems:
- try:
- baseGlyphs.append(buildBaseGlyphPaintRecord(baseGlyph, layerBuilder, paint))
-
- except (ColorLibError, OverflowError, ValueError, TypeError) as e:
- errors[baseGlyph] = e
-
- if errors:
- failed_glyphs = _format_glyph_errors(errors)
- exc = ColorLibError(f"Failed to build BaseGlyphList:\n{failed_glyphs}")
- exc.errors = errors
- raise exc from next(iter(errors.values()))
-
- layers = layerBuilder.build()
- glyphs = ot.BaseGlyphList()
- glyphs.BaseGlyphCount = len(baseGlyphs)
- glyphs.BaseGlyphPaintRecord = baseGlyphs
- return (layers, glyphs)
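A short usage sketch of the public builders deleted above. The font path and glyph names (`A`, `A.l0`, `A.l1`) are illustrative assumptions; the pattern itself — build COLR from a base-glyph-to-layers mapping and CPAL from RGBA palettes, then assign both tables to a `TTFont` — follows the module's documented interface:

```python
from fontTools.ttLib import TTFont
from fontTools.colorLib.builder import buildCOLR, buildCPAL

font = TTFont("MyFont.ttf")  # hypothetical font already containing A, A.l0, A.l1

# COLRv0 input: base glyph -> [(layer glyph, palette index), ...]
color_glyphs = {"A": [("A.l0", 0), ("A.l1", 1)]}
# One palette with two RGBA colors, floats in [0..1]
palettes = [[(0.9, 0.1, 0.1, 1.0), (0.1, 0.1, 0.9, 1.0)]]

font["COLR"] = buildCOLR(color_glyphs, glyphMap=font.getReverseGlyphMap())
font["CPAL"] = buildCPAL(palettes)
font.save("MyFont-COLR.ttf")
```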
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/misc/arrayTools.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/misc/arrayTools.py
deleted file mode 100644
index 5fb01a838ae8769809b4f8ab28cb69ea5e84a3dc..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/misc/arrayTools.py
+++ /dev/null
@@ -1,422 +0,0 @@
-"""Routines for calculating bounding boxes, point in rectangle calculations and
-so on.
-"""
-
-from fontTools.misc.roundTools import otRound
-from fontTools.misc.vector import Vector as _Vector
-import math
-import warnings
-
-
-def calcBounds(array):
- """Calculate the bounding rectangle of a 2D points array.
-
- Args:
- array: A sequence of 2D tuples.
-
- Returns:
- A four-item tuple representing the bounding rectangle ``(xMin, yMin, xMax, yMax)``.
- """
- if not array:
- return 0, 0, 0, 0
- xs = [x for x, y in array]
- ys = [y for x, y in array]
- return min(xs), min(ys), max(xs), max(ys)
-
-
-def calcIntBounds(array, round=otRound):
- """Calculate the integer bounding rectangle of a 2D points array.
-
- Values are rounded to closest integer towards ``+Infinity`` using the
- :func:`fontTools.misc.fixedTools.otRound` function by default, unless
- an optional ``round`` function is passed.
-
- Args:
- array: A sequence of 2D tuples.
- round: A rounding function of type ``f(x: float) -> int``.
-
- Returns:
- A four-item tuple of integers representing the bounding rectangle:
- ``(xMin, yMin, xMax, yMax)``.
- """
- return tuple(round(v) for v in calcBounds(array))
-
-
-def updateBounds(bounds, p, min=min, max=max):
- """Add a point to a bounding rectangle.
-
- Args:
- bounds: A bounding rectangle expressed as a tuple
- ``(xMin, yMin, xMax, yMax)``.
- p: A 2D tuple representing a point.
- min,max: functions to compute the minimum and maximum.
-
- Returns:
- The updated bounding rectangle ``(xMin, yMin, xMax, yMax)``.
- """
- (x, y) = p
- xMin, yMin, xMax, yMax = bounds
- return min(xMin, x), min(yMin, y), max(xMax, x), max(yMax, y)
-
-
-def pointInRect(p, rect):
- """Test if a point is inside a bounding rectangle.
-
- Args:
- p: A 2D tuple representing a point.
- rect: A bounding rectangle expressed as a tuple
- ``(xMin, yMin, xMax, yMax)``.
-
- Returns:
- ``True`` if the point is inside the rectangle, ``False`` otherwise.
- """
- (x, y) = p
- xMin, yMin, xMax, yMax = rect
- return (xMin <= x <= xMax) and (yMin <= y <= yMax)
-
-
-def pointsInRect(array, rect):
- """Determine which points are inside a bounding rectangle.
-
- Args:
- array: A sequence of 2D tuples.
- rect: A bounding rectangle expressed as a tuple
- ``(xMin, yMin, xMax, yMax)``.
-
- Returns:
- A list containing the points inside the rectangle.
- """
- if len(array) < 1:
- return []
- xMin, yMin, xMax, yMax = rect
- return [(xMin <= x <= xMax) and (yMin <= y <= yMax) for x, y in array]
-
-
-def vectorLength(vector):
- """Calculate the length of the given vector.
-
- Args:
- vector: A 2D tuple.
-
- Returns:
- The Euclidean length of the vector.
- """
- x, y = vector
- return math.sqrt(x**2 + y**2)
-
-
-def asInt16(array):
- """Round a list of floats to 16-bit signed integers.
-
- Args:
- array: List of float values.
-
- Returns:
- A list of rounded integers.
- """
- return [int(math.floor(i + 0.5)) for i in array]
-
-
-def normRect(rect):
- """Normalize a bounding box rectangle.
-
- This function "turns the rectangle the right way up", so that the following
- holds::
-
- xMin <= xMax and yMin <= yMax
-
- Args:
- rect: A bounding rectangle expressed as a tuple
- ``(xMin, yMin, xMax, yMax)``.
-
- Returns:
- A normalized bounding rectangle.
- """
- (xMin, yMin, xMax, yMax) = rect
- return min(xMin, xMax), min(yMin, yMax), max(xMin, xMax), max(yMin, yMax)
-
-
-def scaleRect(rect, x, y):
- """Scale a bounding box rectangle.
-
- Args:
- rect: A bounding rectangle expressed as a tuple
- ``(xMin, yMin, xMax, yMax)``.
- x: Factor to scale the rectangle along the X axis.
- y: Factor to scale the rectangle along the Y axis.
-
- Returns:
- A scaled bounding rectangle.
- """
- (xMin, yMin, xMax, yMax) = rect
- return xMin * x, yMin * y, xMax * x, yMax * y
-
-
-def offsetRect(rect, dx, dy):
- """Offset a bounding box rectangle.
-
- Args:
- rect: A bounding rectangle expressed as a tuple
- ``(xMin, yMin, xMax, yMax)``.
- dx: Amount to offset the rectangle along the X axis.
- dy: Amount to offset the rectangle along the Y axis.
-
- Returns:
- An offset bounding rectangle.
- """
- (xMin, yMin, xMax, yMax) = rect
- return xMin + dx, yMin + dy, xMax + dx, yMax + dy
-
-
-def insetRect(rect, dx, dy):
- """Inset a bounding box rectangle on all sides.
-
- Args:
- rect: A bounding rectangle expressed as a tuple
- ``(xMin, yMin, xMax, yMax)``.
- dx: Amount to inset the rectangle along the X axis.
- dy: Amount to inset the rectangle along the Y axis.
-
- Returns:
- An inset bounding rectangle.
- """
- (xMin, yMin, xMax, yMax) = rect
- return xMin + dx, yMin + dy, xMax - dx, yMax - dy
-
-
-def sectRect(rect1, rect2):
- """Test for rectangle-rectangle intersection.
-
- Args:
- rect1: First bounding rectangle, expressed as tuples
- ``(xMin, yMin, xMax, yMax)``.
- rect2: Second bounding rectangle.
-
- Returns:
- A boolean and a rectangle.
- If the input rectangles intersect, returns ``True`` and the intersecting
- rectangle. Returns ``False`` and ``(0, 0, 0, 0)`` if the input
- rectangles don't intersect.
- """
- (xMin1, yMin1, xMax1, yMax1) = rect1
- (xMin2, yMin2, xMax2, yMax2) = rect2
- xMin, yMin, xMax, yMax = (
- max(xMin1, xMin2),
- max(yMin1, yMin2),
- min(xMax1, xMax2),
- min(yMax1, yMax2),
- )
- if xMin >= xMax or yMin >= yMax:
- return False, (0, 0, 0, 0)
- return True, (xMin, yMin, xMax, yMax)
-
-
-def unionRect(rect1, rect2):
- """Determine union of bounding rectangles.
-
- Args:
- rect1: First bounding rectangle, expressed as tuples
- ``(xMin, yMin, xMax, yMax)``.
- rect2: Second bounding rectangle.
-
- Returns:
- The smallest rectangle in which both input rectangles are fully
- enclosed.
- """
- (xMin1, yMin1, xMax1, yMax1) = rect1
- (xMin2, yMin2, xMax2, yMax2) = rect2
- xMin, yMin, xMax, yMax = (
- min(xMin1, xMin2),
- min(yMin1, yMin2),
- max(xMax1, xMax2),
- max(yMax1, yMax2),
- )
- return (xMin, yMin, xMax, yMax)
-
-
-def rectCenter(rect):
- """Determine rectangle center.
-
- Args:
- rect: Bounding rectangle, expressed as tuples
- ``(xMin, yMin, xMax, yMax)``.
-
- Returns:
- A 2D tuple representing the point at the center of the rectangle.
- """
- (xMin, yMin, xMax, yMax) = rect
- return (xMin + xMax) / 2, (yMin + yMax) / 2
-
-
-def rectArea(rect):
- """Determine rectangle area.
-
- Args:
- rect: Bounding rectangle, expressed as tuples
- ``(xMin, yMin, xMax, yMax)``.
-
- Returns:
- The area of the rectangle.
- """
- (xMin, yMin, xMax, yMax) = rect
- return (yMax - yMin) * (xMax - xMin)
-
-
-def intRect(rect):
- """Round a rectangle to integer values.
-
- Guarantees that the resulting rectangle is NOT smaller than the original.
-
- Args:
- rect: Bounding rectangle, expressed as tuples
- ``(xMin, yMin, xMax, yMax)``.
-
- Returns:
- A rounded bounding rectangle.
- """
- (xMin, yMin, xMax, yMax) = rect
- xMin = int(math.floor(xMin))
- yMin = int(math.floor(yMin))
- xMax = int(math.ceil(xMax))
- yMax = int(math.ceil(yMax))
- return (xMin, yMin, xMax, yMax)
-
-
-def quantizeRect(rect, factor=1):
- """
- >>> bounds = (72.3, -218.4, 1201.3, 919.1)
- >>> quantizeRect(bounds)
- (72, -219, 1202, 920)
- >>> quantizeRect(bounds, factor=10)
- (70, -220, 1210, 920)
- >>> quantizeRect(bounds, factor=100)
- (0, -300, 1300, 1000)
- """
- if factor < 1:
- raise ValueError(f"Expected quantization factor >= 1, found: {factor!r}")
- xMin, yMin, xMax, yMax = normRect(rect)
- return (
- int(math.floor(xMin / factor) * factor),
- int(math.floor(yMin / factor) * factor),
- int(math.ceil(xMax / factor) * factor),
- int(math.ceil(yMax / factor) * factor),
- )
-
-
-class Vector(_Vector):
- def __init__(self, *args, **kwargs):
- warnings.warn(
- "fontTools.misc.arrayTools.Vector has been deprecated, please use "
- "fontTools.misc.vector.Vector instead.",
- DeprecationWarning,
- )
-
-
-def pairwise(iterable, reverse=False):
- """Iterate over current and next items in iterable.
-
- Args:
- iterable: An iterable
- reverse: If true, iterate in reverse order.
-
- Returns:
- An iterable yielding two elements per iteration.
-
- Example:
-
- >>> tuple(pairwise([]))
- ()
- >>> tuple(pairwise([], reverse=True))
- ()
- >>> tuple(pairwise([0]))
- ((0, 0),)
- >>> tuple(pairwise([0], reverse=True))
- ((0, 0),)
- >>> tuple(pairwise([0, 1]))
- ((0, 1), (1, 0))
- >>> tuple(pairwise([0, 1], reverse=True))
- ((1, 0), (0, 1))
- >>> tuple(pairwise([0, 1, 2]))
- ((0, 1), (1, 2), (2, 0))
- >>> tuple(pairwise([0, 1, 2], reverse=True))
- ((2, 1), (1, 0), (0, 2))
- >>> tuple(pairwise(['a', 'b', 'c', 'd']))
- (('a', 'b'), ('b', 'c'), ('c', 'd'), ('d', 'a'))
- >>> tuple(pairwise(['a', 'b', 'c', 'd'], reverse=True))
- (('d', 'c'), ('c', 'b'), ('b', 'a'), ('a', 'd'))
- """
- if not iterable:
- return
- if reverse:
- it = reversed(iterable)
- else:
- it = iter(iterable)
- first = next(it, None)
- a = first
- for b in it:
- yield (a, b)
- a = b
- yield (a, first)
-
-
-def _test():
- """
- >>> import math
- >>> calcBounds([])
- (0, 0, 0, 0)
- >>> calcBounds([(0, 40), (0, 100), (50, 50), (80, 10)])
- (0, 10, 80, 100)
- >>> updateBounds((0, 0, 0, 0), (100, 100))
- (0, 0, 100, 100)
- >>> pointInRect((50, 50), (0, 0, 100, 100))
- True
- >>> pointInRect((0, 0), (0, 0, 100, 100))
- True
- >>> pointInRect((100, 100), (0, 0, 100, 100))
- True
- >>> not pointInRect((101, 100), (0, 0, 100, 100))
- True
- >>> list(pointsInRect([(50, 50), (0, 0), (100, 100), (101, 100)], (0, 0, 100, 100)))
- [True, True, True, False]
- >>> vectorLength((3, 4))
- 5.0
- >>> vectorLength((1, 1)) == math.sqrt(2)
- True
- >>> list(asInt16([0, 0.1, 0.5, 0.9]))
- [0, 0, 1, 1]
- >>> normRect((0, 10, 100, 200))
- (0, 10, 100, 200)
- >>> normRect((100, 200, 0, 10))
- (0, 10, 100, 200)
- >>> scaleRect((10, 20, 50, 150), 1.5, 2)
- (15.0, 40, 75.0, 300)
- >>> offsetRect((10, 20, 30, 40), 5, 6)
- (15, 26, 35, 46)
- >>> insetRect((10, 20, 50, 60), 5, 10)
- (15, 30, 45, 50)
- >>> insetRect((10, 20, 50, 60), -5, -10)
- (5, 10, 55, 70)
- >>> intersects, rect = sectRect((0, 10, 20, 30), (0, 40, 20, 50))
- >>> not intersects
- True
- >>> intersects, rect = sectRect((0, 10, 20, 30), (5, 20, 35, 50))
- >>> intersects
- 1
- >>> rect
- (5, 20, 20, 30)
- >>> unionRect((0, 10, 20, 30), (0, 40, 20, 50))
- (0, 10, 20, 50)
- >>> rectCenter((0, 0, 100, 200))
- (50.0, 100.0)
- >>> rectCenter((0, 0, 100, 199.0))
- (50.0, 99.5)
- >>> intRect((0.9, 2.9, 3.1, 4.1))
- (0, 2, 4, 5)
- """
-
-
-if __name__ == "__main__":
- import sys
- import doctest
-
- sys.exit(doctest.testmod().failed)
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/svgLib/path/__init__.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/svgLib/path/__init__.py
deleted file mode 100644
index 742bc64ce037a53a765efc80ed773b840af5b4c7..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/svgLib/path/__init__.py
+++ /dev/null
@@ -1,61 +0,0 @@
-from fontTools.pens.transformPen import TransformPen
-from fontTools.misc import etree
-from fontTools.misc.textTools import tostr
-from .parser import parse_path
-from .shapes import PathBuilder
-
-
-__all__ = [tostr(s) for s in ("SVGPath", "parse_path")]
-
-
-class SVGPath(object):
- """Parse SVG ``path`` elements from a file or string, and draw them
- onto a glyph object that supports the FontTools Pen protocol.
-
- For example, reading from an SVG file and drawing to a Defcon Glyph:
-
- import defcon
- glyph = defcon.Glyph()
- pen = glyph.getPen()
- svg = SVGPath("path/to/a.svg")
- svg.draw(pen)
-
- Or reading from a string containing SVG data, using the alternative
- 'fromstring' (a class method):
-
- data = '
- * Copyright (c) 2009 Kenan Gillet
- * Copyright (c) 2010 Martin Storsjo
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-/**
- * @file
- * G.722 ADPCM audio codec
- *
- * This G.722 decoder is a bit-exact implementation of the ITU G.722
- * specification for all three specified bitrates - 64000bps, 56000bps
- * and 48000bps. It passes the ITU tests.
- *
- * @note For the 56000bps and 48000bps bitrates, the lowest 1 or 2 bits
- * respectively of each byte are ignored.
- */
-
-#include "mathops.h"
-#include "g722.h"
-
-static const int8_t sign_lookup[2] = { -1, 1 };
-
-static const int16_t inv_log2_table[32] = {
- 2048, 2093, 2139, 2186, 2233, 2282, 2332, 2383,
- 2435, 2489, 2543, 2599, 2656, 2714, 2774, 2834,
- 2896, 2960, 3025, 3091, 3158, 3228, 3298, 3371,
- 3444, 3520, 3597, 3676, 3756, 3838, 3922, 4008
-};
-static const int16_t high_log_factor_step[2] = { 798, -214 };
-const int16_t ff_g722_high_inv_quant[4] = { -926, -202, 926, 202 };
-/**
- * low_log_factor_step[index] == wl[rl42[index]]
- */
-static const int16_t low_log_factor_step[16] = {
- -60, 3042, 1198, 538, 334, 172, 58, -30,
- 3042, 1198, 538, 334, 172, 58, -30, -60
-};
-const int16_t ff_g722_low_inv_quant4[16] = {
- 0, -2557, -1612, -1121, -786, -530, -323, -150,
- 2557, 1612, 1121, 786, 530, 323, 150, 0
-};
-const int16_t ff_g722_low_inv_quant6[64] = {
- -17, -17, -17, -17, -3101, -2738, -2376, -2088,
- -1873, -1689, -1535, -1399, -1279, -1170, -1072, -982,
- -899, -822, -750, -682, -618, -558, -501, -447,
- -396, -347, -300, -254, -211, -170, -130, -91,
- 3101, 2738, 2376, 2088, 1873, 1689, 1535, 1399,
- 1279, 1170, 1072, 982, 899, 822, 750, 682,
- 618, 558, 501, 447, 396, 347, 300, 254,
- 211, 170, 130, 91, 54, 17, -54, -17
-};
-
-static inline void s_zero(int cur_diff, struct G722Band *band)
-{
- int s_zero = 0;
-
- #define ACCUM(k, x, d) do { \
- int tmp = x; \
- band->zero_mem[k] = ((band->zero_mem[k] * 255) >> 8) + \
- d*((band->diff_mem[k]^cur_diff) < 0 ? -128 : 128); \
- band->diff_mem[k] = tmp; \
- s_zero += (tmp * band->zero_mem[k]) >> 15; \
- } while (0)
- if (cur_diff) {
- ACCUM(5, band->diff_mem[4], 1);
- ACCUM(4, band->diff_mem[3], 1);
- ACCUM(3, band->diff_mem[2], 1);
- ACCUM(2, band->diff_mem[1], 1);
- ACCUM(1, band->diff_mem[0], 1);
- ACCUM(0, cur_diff * 2, 1);
- } else {
- ACCUM(5, band->diff_mem[4], 0);
- ACCUM(4, band->diff_mem[3], 0);
- ACCUM(3, band->diff_mem[2], 0);
- ACCUM(2, band->diff_mem[1], 0);
- ACCUM(1, band->diff_mem[0], 0);
- ACCUM(0, cur_diff * 2, 0);
- }
- #undef ACCUM
- band->s_zero = s_zero;
-}
-
-/**
- * adaptive predictor
- *
- * @param cur_diff the dequantized and scaled delta calculated from the
- * current codeword
- */
-static void do_adaptive_prediction(struct G722Band *band, const int cur_diff)
-{
- int sg[2], limit, cur_qtzd_reconst;
-
- const int cur_part_reconst = band->s_zero + cur_diff < 0;
-
- sg[0] = sign_lookup[cur_part_reconst != band->part_reconst_mem[0]];
- sg[1] = sign_lookup[cur_part_reconst == band->part_reconst_mem[1]];
- band->part_reconst_mem[1] = band->part_reconst_mem[0];
- band->part_reconst_mem[0] = cur_part_reconst;
-
- band->pole_mem[1] = av_clip((sg[0] * av_clip(band->pole_mem[0], -8191, 8191) >> 5) +
- (sg[1] * 128) + (band->pole_mem[1] * 127 >> 7), -12288, 12288);
-
- limit = 15360 - band->pole_mem[1];
- band->pole_mem[0] = av_clip(-192 * sg[0] + (band->pole_mem[0] * 255 >> 8), -limit, limit);
-
- s_zero(cur_diff, band);
-
- cur_qtzd_reconst = av_clip_int16((band->s_predictor + cur_diff) * 2);
- band->s_predictor = av_clip_int16(band->s_zero +
- (band->pole_mem[0] * cur_qtzd_reconst >> 15) +
- (band->pole_mem[1] * band->prev_qtzd_reconst >> 15));
- band->prev_qtzd_reconst = cur_qtzd_reconst;
-}
-
-static inline int linear_scale_factor(const int log_factor)
-{
- const int wd1 = inv_log2_table[(log_factor >> 6) & 31];
- const int shift = log_factor >> 11;
- return shift < 0 ? wd1 >> -shift : wd1 << shift;
-}
-
-void ff_g722_update_low_predictor(struct G722Band *band, const int ilow)
-{
- do_adaptive_prediction(band,
- band->scale_factor * ff_g722_low_inv_quant4[ilow] >> 10);
-
- // quantizer adaptation
- band->log_factor = av_clip((band->log_factor * 127 >> 7) +
- low_log_factor_step[ilow], 0, 18432);
- band->scale_factor = linear_scale_factor(band->log_factor - (8 << 11));
-}
-
-void ff_g722_update_high_predictor(struct G722Band *band, const int dhigh,
- const int ihigh)
-{
- do_adaptive_prediction(band, dhigh);
-
- // quantizer adaptation
- band->log_factor = av_clip((band->log_factor * 127 >> 7) +
- high_log_factor_step[ihigh&1], 0, 22528);
- band->scale_factor = linear_scale_factor(band->log_factor - (10 << 11));
-}
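Both quantizer-adaptation paths above convert a log-domain scale factor back to linear through a 32-entry inverse-log2 table plus a shift. Below is a line-for-line Python restatement of `linear_scale_factor()`, offered only as a reading aid (table values copied from the C source above; the example call mirrors the low-band update):

```python
INV_LOG2_TABLE = [
    2048, 2093, 2139, 2186, 2233, 2282, 2332, 2383,
    2435, 2489, 2543, 2599, 2656, 2714, 2774, 2834,
    2896, 2960, 3025, 3091, 3158, 3228, 3298, 3371,
    3444, 3520, 3597, 3676, 3756, 3838, 3922, 4008,
]


def linear_scale_factor(log_factor: int) -> int:
    """Mantissa lookup on bits 6..10, exponent taken from bits 11 and up."""
    wd1 = INV_LOG2_TABLE[(log_factor >> 6) & 31]
    shift = log_factor >> 11
    return wd1 >> -shift if shift < 0 else wd1 << shift


# Low-band scale factor for a given band.log_factor, as in ff_g722_update_low_predictor()
print(linear_scale_factor(18432 - (8 << 11)))
```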
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libvpxdec.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libvpxdec.c
deleted file mode 100644
index f480545ae042fcc66065381699ded3417c0aa153..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libvpxdec.c
+++ /dev/null
@@ -1,395 +0,0 @@
-/*
- * Copyright (c) 2010, Google, Inc.
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-/**
- * @file
- * VP8/9 decoder support via libvpx
- */
-
-#include "config_components.h"
-
-#define VPX_CODEC_DISABLE_COMPAT 1
-#include <vpx/vpx_decoder.h>
-#include <vpx/vpx_frame_buffer.h>
-#include <vpx/vp8dx.h>
-
-#include "libavutil/common.h"
-#include "libavutil/cpu.h"
-#include "libavutil/imgutils.h"
-#include "libavutil/intreadwrite.h"
-#include "avcodec.h"
-#include "codec_internal.h"
-#include "decode.h"
-#include "libvpx.h"
-#include "profiles.h"
-
-typedef struct VPxDecoderContext {
- struct vpx_codec_ctx decoder;
- struct vpx_codec_ctx decoder_alpha;
- AVBufferPool *pool;
- size_t pool_size;
- int has_alpha_channel;
-} VPxContext;
-
-
-static int get_frame_buffer(void *priv, size_t min_size, vpx_codec_frame_buffer_t *fb)
-{
- VPxContext *ctx = priv;
- AVBufferRef *buf;
-
- if (min_size > ctx->pool_size) {
- av_buffer_pool_uninit(&ctx->pool);
- /* According to the libvpx docs the buffer must be zeroed out. */
- ctx->pool = av_buffer_pool_init(min_size, av_buffer_allocz);
- if (!ctx->pool) {
- ctx->pool_size = 0;
- return AVERROR(ENOMEM);
- }
- ctx->pool_size = min_size;
- }
-
- buf = av_buffer_pool_get(ctx->pool);
- if (!buf)
- return AVERROR(ENOMEM);
-
- fb->priv = buf;
- fb->size = ctx->pool_size;
- fb->data = buf->data;
-
- return 0;
-}
-
-static int release_frame_buffer(void *priv, vpx_codec_frame_buffer_t *fb)
-{
- AVBufferRef *buf = fb->priv;
- av_buffer_unref(&buf);
- return 0;
-}
-
-static av_cold int vpx_init(AVCodecContext *avctx,
- struct vpx_codec_ctx* decoder,
- const struct vpx_codec_iface *iface)
-{
- struct vpx_codec_dec_cfg deccfg = {
- .threads = FFMIN(avctx->thread_count ? avctx->thread_count : av_cpu_count(), MAX_VPX_THREADS)
- };
-
- av_log(avctx, AV_LOG_INFO, "%s\n", vpx_codec_version_str());
- av_log(avctx, AV_LOG_VERBOSE, "%s\n", vpx_codec_build_config());
-
- if (vpx_codec_dec_init(decoder, iface, &deccfg, 0) != VPX_CODEC_OK) {
- const char *error = vpx_codec_error(decoder);
- av_log(avctx, AV_LOG_ERROR, "Failed to initialize decoder: %s\n",
- error);
- return AVERROR(EINVAL);
- }
-
- if (avctx->codec_id == AV_CODEC_ID_VP9)
- vpx_codec_set_frame_buffer_functions(decoder, get_frame_buffer, release_frame_buffer, avctx->priv_data);
-
- return 0;
-}
-
-// returns 0 on success, AVERROR_INVALIDDATA otherwise
-static int set_pix_fmt(AVCodecContext *avctx, struct vpx_image *img,
- int has_alpha_channel)
-{
- static const enum AVColorSpace colorspaces[8] = {
- AVCOL_SPC_UNSPECIFIED, AVCOL_SPC_BT470BG, AVCOL_SPC_BT709, AVCOL_SPC_SMPTE170M,
- AVCOL_SPC_SMPTE240M, AVCOL_SPC_BT2020_NCL, AVCOL_SPC_RESERVED, AVCOL_SPC_RGB,
- };
-#if VPX_IMAGE_ABI_VERSION >= 4
- static const enum AVColorRange color_ranges[] = {
- AVCOL_RANGE_MPEG, AVCOL_RANGE_JPEG
- };
- avctx->color_range = color_ranges[img->range];
-#endif
- avctx->colorspace = colorspaces[img->cs];
- if (avctx->codec_id == AV_CODEC_ID_VP8 && img->fmt != VPX_IMG_FMT_I420)
- return AVERROR_INVALIDDATA;
- switch (img->fmt) {
- case VPX_IMG_FMT_I420:
- if (avctx->codec_id == AV_CODEC_ID_VP9)
- avctx->profile = FF_PROFILE_VP9_0;
- avctx->pix_fmt =
- has_alpha_channel ? AV_PIX_FMT_YUVA420P : AV_PIX_FMT_YUV420P;
- return 0;
-#if CONFIG_LIBVPX_VP9_DECODER
- case VPX_IMG_FMT_I422:
- avctx->profile = FF_PROFILE_VP9_1;
- avctx->pix_fmt = AV_PIX_FMT_YUV422P;
- return 0;
- case VPX_IMG_FMT_I440:
- avctx->profile = FF_PROFILE_VP9_1;
- avctx->pix_fmt = AV_PIX_FMT_YUV440P;
- return 0;
- case VPX_IMG_FMT_I444:
- avctx->profile = FF_PROFILE_VP9_1;
- avctx->pix_fmt = avctx->colorspace == AVCOL_SPC_RGB ?
- AV_PIX_FMT_GBRP : AV_PIX_FMT_YUV444P;
- return 0;
- case VPX_IMG_FMT_I42016:
- avctx->profile = FF_PROFILE_VP9_2;
- if (img->bit_depth == 10) {
- avctx->pix_fmt = AV_PIX_FMT_YUV420P10;
- return 0;
- } else if (img->bit_depth == 12) {
- avctx->pix_fmt = AV_PIX_FMT_YUV420P12;
- return 0;
- } else {
- return AVERROR_INVALIDDATA;
- }
- case VPX_IMG_FMT_I42216:
- avctx->profile = FF_PROFILE_VP9_3;
- if (img->bit_depth == 10) {
- avctx->pix_fmt = AV_PIX_FMT_YUV422P10;
- return 0;
- } else if (img->bit_depth == 12) {
- avctx->pix_fmt = AV_PIX_FMT_YUV422P12;
- return 0;
- } else {
- return AVERROR_INVALIDDATA;
- }
- case VPX_IMG_FMT_I44016:
- avctx->profile = FF_PROFILE_VP9_3;
- if (img->bit_depth == 10) {
- avctx->pix_fmt = AV_PIX_FMT_YUV440P10;
- return 0;
- } else if (img->bit_depth == 12) {
- avctx->pix_fmt = AV_PIX_FMT_YUV440P12;
- return 0;
- } else {
- return AVERROR_INVALIDDATA;
- }
- case VPX_IMG_FMT_I44416:
- avctx->profile = FF_PROFILE_VP9_3;
- if (img->bit_depth == 10) {
- avctx->pix_fmt = avctx->colorspace == AVCOL_SPC_RGB ?
- AV_PIX_FMT_GBRP10 : AV_PIX_FMT_YUV444P10;
- return 0;
- } else if (img->bit_depth == 12) {
- avctx->pix_fmt = avctx->colorspace == AVCOL_SPC_RGB ?
- AV_PIX_FMT_GBRP12 : AV_PIX_FMT_YUV444P12;
- return 0;
- } else {
- return AVERROR_INVALIDDATA;
- }
-#endif
- default:
- return AVERROR_INVALIDDATA;
- }
-}
-
-static int decode_frame(AVCodecContext *avctx, vpx_codec_ctx_t *decoder,
- const uint8_t *data, uint32_t data_sz)
-{
- if (vpx_codec_decode(decoder, data, data_sz, NULL, 0) != VPX_CODEC_OK) {
- const char *error = vpx_codec_error(decoder);
- const char *detail = vpx_codec_error_detail(decoder);
-
- av_log(avctx, AV_LOG_ERROR, "Failed to decode frame: %s\n", error);
- if (detail) {
- av_log(avctx, AV_LOG_ERROR, " Additional information: %s\n",
- detail);
- }
- return AVERROR_INVALIDDATA;
- }
- return 0;
-}
-
-static int vpx_decode(AVCodecContext *avctx, AVFrame *picture,
- int *got_frame, AVPacket *avpkt)
-{
- VPxContext *ctx = avctx->priv_data;
- const void *iter = NULL;
- const void *iter_alpha = NULL;
- struct vpx_image *img, *img_alpha;
- int ret;
- uint8_t *side_data = NULL;
- size_t side_data_size;
-
- ret = decode_frame(avctx, &ctx->decoder, avpkt->data, avpkt->size);
- if (ret)
- return ret;
-
- side_data = av_packet_get_side_data(avpkt,
- AV_PKT_DATA_MATROSKA_BLOCKADDITIONAL,
- &side_data_size);
- if (side_data_size >= 8) {
- const uint64_t additional_id = AV_RB64(side_data);
- side_data += 8;
- side_data_size -= 8;
- if (additional_id == 1) { // 1 stands for alpha channel data.
- if (!ctx->has_alpha_channel) {
- ctx->has_alpha_channel = 1;
- ret = vpx_init(avctx,
- &ctx->decoder_alpha,
-#if CONFIG_LIBVPX_VP8_DECODER && CONFIG_LIBVPX_VP9_DECODER
- (avctx->codec_id == AV_CODEC_ID_VP8) ?
- vpx_codec_vp8_dx() : vpx_codec_vp9_dx()
-#elif CONFIG_LIBVPX_VP8_DECODER
- vpx_codec_vp8_dx()
-#else
- vpx_codec_vp9_dx()
-#endif
- );
- if (ret)
- return ret;
- }
- ret = decode_frame(avctx, &ctx->decoder_alpha, side_data,
- side_data_size);
- if (ret)
- return ret;
- }
- }
-
- if ((img = vpx_codec_get_frame(&ctx->decoder, &iter)) &&
- (!ctx->has_alpha_channel ||
- (img_alpha = vpx_codec_get_frame(&ctx->decoder_alpha, &iter_alpha)))) {
- uint8_t *planes[4];
- int linesizes[4];
-
- if (img->d_w > img->w || img->d_h > img->h) {
- av_log(avctx, AV_LOG_ERROR, "Display dimensions %dx%d exceed storage %dx%d\n",
- img->d_w, img->d_h, img->w, img->h);
- return AVERROR_EXTERNAL;
- }
-
- if ((ret = set_pix_fmt(avctx, img, ctx->has_alpha_channel)) < 0) {
- av_log(avctx, AV_LOG_ERROR, "Unsupported output colorspace (%d) / bit_depth (%d)\n",
- img->fmt, img->bit_depth);
- return ret;
- }
-
- if ((int) img->d_w != avctx->width || (int) img->d_h != avctx->height) {
- av_log(avctx, AV_LOG_INFO, "dimension change! %dx%d -> %dx%d\n",
- avctx->width, avctx->height, img->d_w, img->d_h);
- ret = ff_set_dimensions(avctx, img->d_w, img->d_h);
- if (ret < 0)
- return ret;
- }
-
- if (ctx->has_alpha_channel &&
- (img->d_w != img_alpha->d_w ||
- img->d_h != img_alpha->d_h ||
- img->bit_depth != img_alpha->bit_depth)) {
- av_log(avctx, AV_LOG_ERROR,
- "Video dimensions %dx%d@%dbpc differ from alpha dimensions %dx%d@%dbpc\n",
- img->d_w, img->d_h, img->bit_depth,
- img_alpha->d_w, img_alpha->d_h, img_alpha->bit_depth);
- return AVERROR_INVALIDDATA;
- }
-
- planes[0] = img->planes[VPX_PLANE_Y];
- planes[1] = img->planes[VPX_PLANE_U];
- planes[2] = img->planes[VPX_PLANE_V];
- planes[3] =
- ctx->has_alpha_channel ? img_alpha->planes[VPX_PLANE_Y] : NULL;
- linesizes[0] = img->stride[VPX_PLANE_Y];
- linesizes[1] = img->stride[VPX_PLANE_U];
- linesizes[2] = img->stride[VPX_PLANE_V];
- linesizes[3] =
- ctx->has_alpha_channel ? img_alpha->stride[VPX_PLANE_Y] : 0;
-
- if (img->fb_priv && (!ctx->has_alpha_channel || img_alpha->fb_priv)) {
- ret = ff_decode_frame_props(avctx, picture);
- if (ret < 0)
- return ret;
- picture->buf[0] = av_buffer_ref(img->fb_priv);
- if (!picture->buf[0])
- return AVERROR(ENOMEM);
- if (ctx->has_alpha_channel) {
- picture->buf[1] = av_buffer_ref(img_alpha->fb_priv);
- if (!picture->buf[1]) {
- av_frame_unref(picture);
- return AVERROR(ENOMEM);
- }
- }
- for (int i = 0; i < 4; i++) {
- picture->data[i] = planes[i];
- picture->linesize[i] = linesizes[i];
- }
- } else {
- if ((ret = ff_get_buffer(avctx, picture, 0)) < 0)
- return ret;
- av_image_copy(picture->data, picture->linesize, (const uint8_t**)planes,
- linesizes, avctx->pix_fmt, img->d_w, img->d_h);
- }
- *got_frame = 1;
- }
- return avpkt->size;
-}
-
-static av_cold int vpx_free(AVCodecContext *avctx)
-{
- VPxContext *ctx = avctx->priv_data;
- vpx_codec_destroy(&ctx->decoder);
- if (ctx->has_alpha_channel)
- vpx_codec_destroy(&ctx->decoder_alpha);
- av_buffer_pool_uninit(&ctx->pool);
- return 0;
-}
-
-#if CONFIG_LIBVPX_VP8_DECODER
-static av_cold int vp8_init(AVCodecContext *avctx)
-{
- VPxContext *ctx = avctx->priv_data;
- return vpx_init(avctx, &ctx->decoder, vpx_codec_vp8_dx());
-}
-
-const FFCodec ff_libvpx_vp8_decoder = {
- .p.name = "libvpx",
- CODEC_LONG_NAME("libvpx VP8"),
- .p.type = AVMEDIA_TYPE_VIDEO,
- .p.id = AV_CODEC_ID_VP8,
- .p.capabilities = AV_CODEC_CAP_OTHER_THREADS | AV_CODEC_CAP_DR1,
- .p.wrapper_name = "libvpx",
- .priv_data_size = sizeof(VPxContext),
- .init = vp8_init,
- .close = vpx_free,
- FF_CODEC_DECODE_CB(vpx_decode),
- .caps_internal = FF_CODEC_CAP_NOT_INIT_THREADSAFE |
- FF_CODEC_CAP_AUTO_THREADS,
-};
-#endif /* CONFIG_LIBVPX_VP8_DECODER */
-
-#if CONFIG_LIBVPX_VP9_DECODER
-static av_cold int vp9_init(AVCodecContext *avctx)
-{
- VPxContext *ctx = avctx->priv_data;
- return vpx_init(avctx, &ctx->decoder, vpx_codec_vp9_dx());
-}
-
-const FFCodec ff_libvpx_vp9_decoder = {
- .p.name = "libvpx-vp9",
- CODEC_LONG_NAME("libvpx VP9"),
- .p.type = AVMEDIA_TYPE_VIDEO,
- .p.id = AV_CODEC_ID_VP9,
- .p.capabilities = AV_CODEC_CAP_OTHER_THREADS,
- .p.profiles = NULL_IF_CONFIG_SMALL(ff_vp9_profiles),
- .p.wrapper_name = "libvpx",
- .priv_data_size = sizeof(VPxContext),
- .init = vp9_init,
- .close = vpx_free,
- FF_CODEC_DECODE_CB(vpx_decode),
- .caps_internal = FF_CODEC_CAP_NOT_INIT_THREADSAFE |
- FF_CODEC_CAP_AUTO_THREADS,
-};
-#endif /* CONFIG_LIBVPX_VP9_DECODER */
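One detail worth calling out from `vpx_decode()` above: alpha data arrives as Matroska BlockAdditional side data, where the first 8 bytes are a big-endian `additional_id` and an id of 1 marks an alpha plane carried as a separate VP8/VP9 stream. A small Python sketch of just that framing (the function name and example payload are illustrative, not part of the source):

```python
import struct

ALPHA_ADDITIONAL_ID = 1  # id 1 marks alpha-plane data, as in the decoder above


def split_block_additional(side_data: bytes) -> tuple[int | None, bytes]:
    """Split BlockAdditional side data into (additional_id, remaining payload)."""
    if len(side_data) < 8:
        return None, b""
    (additional_id,) = struct.unpack(">Q", side_data[:8])  # big-endian 64-bit id
    return additional_id, side_data[8:]


# Example: an id of 1 followed by a dummy alpha bitstream payload
aid, payload = split_block_additional(struct.pack(">Q", 1) + b"\x00\x01")
assert aid == ALPHA_ADDITIONAL_ID and payload == b"\x00\x01"
```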
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Eldorado M Join the Adventure of Ace and Friends in Search of the Golden City.md b/spaces/congsaPfin/Manga-OCR/logs/Eldorado M Join the Adventure of Ace and Friends in Search of the Golden City.md
deleted file mode 100644
index c3bfd689c7e42153ac58d034d56b479393c2a819..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Eldorado M Join the Adventure of Ace and Friends in Search of the Golden City.md
+++ /dev/null
@@ -1,156 +0,0 @@
-
-
Eldorado M APK: A Strategic Defense Game to Find the Golden City
-
Do you love strategic defense games that challenge your mind and skills? Do you want to join a thrilling adventure to find the legendary golden city of El Dorado? If yes, then you should try Eldorado M APK, a mobile game that brings you the best of both worlds.
Eldorado M APK is a strategic defense game that depicts the adventures of Ace and his friends in search of the golden castle, El Dorado. You will have to build your own base, recruit your army, and fight against enemies from all over the world. You will also have to explore different stages, dungeons, arenas, and sky gardens to find clues and treasures. You will also have to face the world boss, a powerful enemy that requires teamwork and strategy.
-
In this article, we will tell you everything you need to know about Eldorado M APK, including what it is, how to download and install it, how to play it, and where to find more information and support. Let's get started!
-
What is Eldorado M APK?
-
Eldorado M APK is a mobile game that is based on the popular TV game "Eldorado". It is developed by Busidol, a Korean company that specializes in creating fun and engaging games for various platforms. Eldorado M APK is available for Android devices and can be downloaded for free from Google Play or other sources.
-
The background story of Eldorado M
-
The story of Eldorado M is set in the 16th century, when El Dorado was known as the golden city. However, no one has ever found it, and people have forgotten about it over time. But one day, Ace, who woke up in a glacier, heard from his ancestors about the golden man in the glacier. He was sure that there was El Dorado near his hometown.
-
He decided to go find it with his friends Smarty, a bookworm and archaeologist, Coco, a cheerful and energetic girl, and Rocky, a strong and loyal warrior. Together, they embarked on a long journey to find the golden city of El Dorado. But they were not alone. There were also other adventurers who wanted to claim the treasure for themselves. Will they be able to find El Dorado and fulfill their dreams?
-
The features of Eldorado M
-
Eldorado M has many features that make it an exciting and enjoyable game. Some of them are:
-
-
-
It has colorful and cute graphics that appeal to all ages.
-
It has simple and intuitive controls that are easy to learn and use.
-
It has various characters with different skills and abilities that you can choose from.
-
It has hundreds of items and equipment that you can collect and upgrade.
-
It has an interlocking function with the Eldorado TV version, which means you can play with TV users, mobile users, and PC browser users around the world.
-
It has regular updates and events that add new content and features.
-
-
The game modes of Eldorado M
-
Eldorado M has several game modes that offer different challenges and rewards. Some of them are:
-
-
Sky Garden: This is an infinite wave mode where you have to survive as long as possible against endless enemies. You can earn gold and gems by clearing each wave.
-
World Boss: This is a weekly event where you have to team up with other players to defeat a powerful boss. You can get rewards based on your contribution and ranking.
-
-
How to download and install Eldorado M APK on your device?
-
If you want to play Eldorado M APK on your device, you will need to download and install it first. Here are the requirements and steps to do so:
-
The requirements for Eldorado M APK
-
Eldorado M APK is compatible with Android devices that have the following specifications:
-
-
Android version 4.4 or higher
-
At least 2 GB of RAM
-
At least 500 MB of free storage space
-
A stable internet connection
-
-
The steps to download and install Eldorado M APK
-
There are two ways to download and install Eldorado M APK on your device:
-
-
From Google Play: This is the easiest and safest way to get Eldorado M APK on your device. You just need to open Google Play on your device, search for Eldorado M, and tap on Install. The game will be downloaded and installed automatically.
-
From other sources: This is an alternative way to get Eldorado M APK on your device, especially if you cannot access Google Play for some reason. You will need to find a reliable website that offers the APK file of Eldorado M, such as or . You will also need to enable the installation of apps from unknown sources in your device settings. Then, download the APK file from the website, locate it on your device, and tap on it to install it (a command-line alternative is sketched just after this list).
-
-
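If you would rather sideload the file from a computer than tap the APK on the phone, the short sketch below shows one way to do it with the Android Debug Bridge (adb). It assumes adb is installed on the computer and USB debugging is enabled on the phone; the file name eldorado-m.apk is only a placeholder for whatever file you actually downloaded.

```python
import subprocess

APK_PATH = "eldorado-m.apk"  # placeholder: path to the APK you downloaded

# List connected devices so you can confirm the phone is visible and authorized.
subprocess.run(["adb", "devices"], check=True)

# Install the APK; -r reinstalls over an existing copy while keeping its data.
subprocess.run(["adb", "install", "-r", APK_PATH], check=True)
```

If the device shows up as unauthorized, accept the USB debugging prompt on the phone and run the script again.
-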
The benefits of using Eldorado M APK
-
By using Eldorado M APK, you can enjoy some benefits that are not available in the official version of the game. Some of them are:
-
-
You can play Eldorado M APK without any ads or in-app purchases.
-
You can play Eldorado M APK with unlimited resources, such as gold, gems, and items.
-
You can play Eldorado M APK with modded features, such as unlocked characters, stages, and modes.
-
You can play Eldorado M APK with faster loading and smoother performance.
-
-
How to play Eldorado M APK and enjoy its features?
-
Now that you have downloaded and installed Eldorado M APK on your device, you are ready to play it and enjoy its features. Here are some tips and tricks to help you get started:
-
The basic gameplay of Eldorado M APK
-
The basic gameplay of Eldorado M APK is similar to other strategic defense games. You will have to build your base, recruit your army, and defend it from enemy attacks. You will also have to attack enemy bases and destroy their towers. You can use different characters, items, and skills to enhance your strategy and power.
-
You can control your characters by tapping on them and dragging them to the desired location. You can also use buttons on the screen to activate their skills or items. You can also zoom in or out by pinching the screen. You can also pause the game by tapping on the menu button.
-
The tips and tricks for Eldorado M APK
-
To play Eldorado M APK more effectively and efficiently, you should follow these tips and tricks:
-
-
Upgrade your characters and items regularly to increase their stats and abilities.
-
Use different combinations of characters and items to suit different situations and enemies.
-
Use the terrain and obstacles to your advantage by placing your characters strategically.
-
Use skills and items wisely by timing them properly and targeting them accurately.
-
Collect gold and gems by clearing stages, dungeons, arenas, sky gardens, and world bosses.
-
Join a guild or create your own to chat with other players, exchange gifts, and cooperate in battles.
-
-
The challenges and rewards of Eldorado M APK
-
Eldorado M APK offers many challenges and rewards that make it more fun and rewarding. Some of them are:
-
-
You can complete various quests and achievements that give you gold, gems, items, or characters.
-
You can participate in various events that offer special rewards or bonuses.
-
-
-
You can compete with other players in the arena and rank up to get more rewards and glory.
-
You can challenge yourself in the dungeon and sky garden and see how far you can go.
-
You can cooperate with other players in the world boss and get a chance to win rare items and characters.
-
-
Where can you find more information and support for Eldorado M APK?
-
If you want to learn more about Eldorado M APK or need any help or support, you can visit the following sources:
-
The official website and social media of Eldorado M APK
-
The official website of Eldorado M APK is , where you can find the latest news, updates, events, and guides about the game. You can also follow the official social media accounts of Eldorado M APK on , , and , where you can interact with other players, get tips and tricks, and participate in giveaways and contests.
-
The user reviews and ratings of Eldorado M APK
-
The user reviews and ratings of Eldorado M APK are available on Google Play, where you can see what other players think about the game. You can also leave your own feedback and suggestions to help improve the game. The current rating of Eldorado M APK on Google Play is 4.4 out of 5 stars, based on more than 10,000 reviews.
-
The FAQs and contact details of Eldorado M APK
-
The FAQs of Eldorado M APK are available on the official website, where you can find answers to common questions and issues about the game. You can also contact the customer service team of Eldorado M APK by sending an email to or by using the in-game support function. The customer service team will respond to your queries as soon as possible.
-
Conclusion
-
Eldorado M APK is a strategic defense game that combines adventure, strategy, and fun. You can join Ace and his friends in their quest to find the golden city of El Dorado, while building your base, recruiting your army, and fighting against enemies from all over the world. You can also enjoy various features, modes, challenges, and rewards that make the game more exciting and rewarding.
-
If you want to play Eldorado M APK on your device, you can download and install it from Google Play or other sources. You can also use Eldorado M APK to get more benefits, such as no ads, unlimited resources, modded features, and faster performance. You can also find more information and support for Eldorado M APK on the official website, social media, user reviews, FAQs, and contact details.
-
What are you waiting for? Download Eldorado M APK now and start your adventure to find the golden city!
-
FAQs
-
Here are some frequently asked questions about Eldorado M APK:
-
-
What is the difference between Eldorado M APK and Eldorado M?
-
Eldorado M APK is a modified version of Eldorado M that offers more benefits, such as no ads, unlimited resources, modded features, and faster performance. However, it is not an official version of the game and may not be compatible with some devices or updates.
-
Is Eldorado M APK safe to use?
-
Eldorado M APK is generally safe to use if you download it from a reliable source. However, it is not endorsed by Busidol or Google Play, so you should use it at your own risk. You should also scan the APK file for viruses or malware before installing it on your device.
-
How can I update Eldorado M APK?
-
Eldorado M APK may not be updated automatically by Google Play or Busidol. You will need to check for updates manually by visiting the website where you downloaded it from or by searching for new versions online. You will also need to uninstall the old version of Eldorado M APK before installing the new one.
-
How can I uninstall Eldorado M APK?
-
You can uninstall Eldorado M APK by following these steps:
-
-
Go to your device settings and tap on Apps or Applications.
-
Find Eldorado M APK on the list of apps and tap on it.
-
Tap on Uninstall and confirm your action.
-
-
How can I play Eldorado M APK with my friends?
-
You can play Eldorado M APK with your friends by using these methods:
-
-
-
-
You can invite your friends to join your guild or create your own guild in the game. You can chat with your guild members, exchange gifts, and cooperate in battles.
-
You can invite your friends to join your team or create your own team in the game. You can play with your team members in the arena, dungeon, or sky garden, or team up against the world boss.
-
You can add your friends to your in-game friends list. You can send messages, gifts, and requests to your friends.
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Flash Beats Pro APK The Best Drumpad for Star Beats and Groove.md b/spaces/congsaPfin/Manga-OCR/logs/Flash Beats Pro APK The Best Drumpad for Star Beats and Groove.md
deleted file mode 100644
index a98ecaafdc53a1f18b374480ae6ec83e9d58e280..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Flash Beats Pro APK The Best Drumpad for Star Beats and Groove.md
+++ /dev/null
@@ -1,137 +0,0 @@
-
-
What is Flash Beats Pro Apk?
-
If you are looking for a way to improve your audio experience on your Android device, you might want to try Flash Beats Pro Apk. This is an app that enhances the sound quality and clarity of your device by using advanced audio processing techniques. With this app, you can enjoy a more immersive and realistic sound that matches your music genre, mood, and preference.
Flash Beats Pro Apk is not just an ordinary equalizer app. It is a comprehensive audio enhancer that offers various features and options to customize your sound. Some of these features include:
-
-
Preset modes for different music genres, such as rock, pop, jazz, classical, etc.
-
Customizable equalizer settings with 10 bands and a preamp control
-
Bass boost and virtualizer features for a deeper and wider sound
-
Loudness enhancer feature for a louder and clearer sound
-
Volume booster feature for a higher volume level
-
Sound effects feature for adding reverb, echo, flanger, etc.
-
Speaker optimization feature for improving the speaker performance
-
Headphone optimization feature for enhancing the headphone quality
-
Compatibility mode feature for ensuring compatibility with most apps and devices
-
-
Why do you need Flash Beats Pro Apk?
-
You might be wondering why you need Flash Beats Pro Apk when your device already has a built-in equalizer or sound enhancer. The answer is simple: Flash Beats Pro Apk offers more features, options, and flexibility than your default audio settings. With this app, you can fine-tune your sound according to your specific needs and preferences. You can also enjoy a more dynamic and realistic sound that makes your music come alive.
-
Flash Beats Pro Apk is especially useful if you are a music lover, audiophile, or gamer who wants to get the most out of your audio experience. Whether you are listening to music, watching videos, playing games, or making calls, this app will make your sound better and more enjoyable. You will be able to hear every detail, nuance, and emotion of your audio content.
-
How to install Flash Beats Pro Apk on your device?
-
Installing Flash Beats Pro Apk on your device is easy and simple. You just need to follow these steps:
-
-
Download the Flash Beats Pro Apk file from a trusted source. You can find the latest version of the app on the official website or on other reputable sites like APKPure or APKMirror. Make sure you download the file from a secure and virus-free site; if the site publishes a checksum, you can verify the download with a quick comparison like the sketch after these steps.
-
Enable the installation of apps from unknown sources on your device. To do this, go to your device settings, then security, then unknown sources. Tap on the toggle to allow the installation of apps from sources other than the Google Play Store.
-
Locate the downloaded Flash Beats Pro Apk file on your device storage. You can use a file manager app to find the file or check your download folder. Tap on the file to start the installation process.
-
Follow the instructions on the screen to complete the installation. You might need to grant some permissions to the app to access your device features and functions.
-
Once the installation is done, you can launch the app and enjoy its features.
-
-
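Some download pages publish a SHA-256 checksum next to the file. When one is available, comparing it against the file you actually received is a quick way to confirm the download was not corrupted or swapped in transit. The sketch below is a minimal example; the file name and the expected hash are placeholders, not values published for this app.

```python
import hashlib
from pathlib import Path

apk = Path("flash-beats-pro.apk")             # placeholder file name
expected = "paste-the-published-sha256-here"  # placeholder checksum from the download page

digest = hashlib.sha256(apk.read_bytes()).hexdigest()
print(f"computed: {digest}")

if digest == expected.lower():
    print("Checksum matches - the file arrived intact.")
else:
    print("Checksum does NOT match - download the file again from a trusted source.")
```
-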
How to use Flash Beats Pro Apk?
-
Using Flash Beats Pro Apk is easy and intuitive. You can access the app from your app drawer or from your notification bar. Once you open the app, you will see the main interface with various settings and options. Here are some of the things you can do with the app:
-
-
-
To choose a preset mode for your music genre, tap on the preset icon at the top left corner of the screen. You will see a list of preset modes, such as rock, pop, jazz, classical, etc. Tap on the one that matches your music genre or mood.
-
To adjust the equalizer settings according to your preference, tap on the equalizer icon at the top right corner of the screen. You will see a 10-band equalizer with a preamp control. You can drag the sliders up or down to increase or decrease the level of each frequency band. You can also tap on the preamp control to adjust the overall volume level.
-
To enable the bass boost and virtualizer features for a more immersive sound, tap on the bass boost icon and the virtualizer icon at the bottom left and right corners of the screen respectively. You will see a slider for each feature that you can drag left or right to increase or decrease the level of bass boost and virtualizer.
-
To access more features and options, tap on the menu icon at the top right corner of the screen. You will see a list of more features and options, such as loudness enhancer, volume booster, sound effects, speaker optimization, headphone optimization, compatibility mode, etc. Tap on the one that you want to use and adjust the settings accordingly.
-
-
What are the pros and cons of Flash Beats Pro Apk?
-
Flash Beats Pro Apk is a great app for improving your audio experience on your Android device. However, like any other app, it has its pros and cons. Here are some of the advantages and disadvantages of using Flash Beats Pro Apk:
-
Pros of Flash Beats Pro Apk
-
-
Improved sound quality and clarity: Flash Beats Pro Apk enhances the sound quality and clarity of your device by using advanced audio processing techniques. You can hear every detail, nuance, and emotion of your audio content.
-
Compatible with most devices and apps: Flash Beats Pro Apk works with most Android devices and apps that support audio output. You can use the app with your music player, video player, game, or call app.
-
Easy to use and adjust: Flash Beats Pro Apk has a simple and intuitive interface that allows you to easily access and adjust the settings and options. You can also switch between different preset modes and equalizer settings with a few taps.
-
Free and safe to download: Flash Beats Pro Apk is free to download and use. It does not contain any malware or viruses that can harm your device or data.
-
-
Cons of Flash Beats Pro Apk
-
-
Requires root access for some features: Flash Beats Pro Apk requires root access for some features, such as loudness enhancer, volume booster, speaker optimization, etc. Rooting your device can void your warranty and expose your device to security risks.
-
May drain battery faster: Flash Beats Pro Apk uses more battery power than your default audio settings. This is because the app runs in the background and processes the audio output constantly. You may notice a faster battery drain when using the app.
-
May cause compatibility issues with some apps or devices: Flash Beats Pro Apk may not work well with some apps or devices that have their own audio settings or enhancements. This can cause conflicts or errors in the audio output. You may need to disable the app or use the compatibility mode feature to avoid these issues.
-
-
What are some tips and tricks for using Flash Beats Pro Apk?
-
If you want to optimize your audio experience with Flash Beats Pro Apk, here are some tips and tricks that you can try:
-
Tip 1: Choose the right preset for your music genre
-
Flash Beats Pro Apk offers various preset modes for different music genres, such as rock, pop, jazz, classical, etc. These preset modes are designed to enhance the sound quality and clarity of each music genre. To choose the right preset for your music genre, tap on the preset icon at the top left corner of the screen and select the one that matches your music genre or mood.
-
Tip 2: Adjust the equalizer settings according to your preference
-
Flash Beats Pro Apk allows you to customize the equalizer settings according to your preference. You can adjust the level of each frequency band and the preamp control to suit your taste. To adjust the equalizer settings, tap on the equalizer icon at the top right corner of the screen and drag the sliders up or down to increase or decrease the level of each frequency band. You can also tap on the preamp control to adjust the overall volume level.
-
Tip 3: Enable the bass boost and virtualizer features for a more immersive sound
-
Flash Beats Pro Apk offers two features that can make your sound more immersive and realistic: bass boost and virtualizer. Bass boost increases the level of low-frequency sounds, making them deeper and richer. Virtualizer creates a 3D sound effect, making it wider and more spacious. To enable these features, tap on the bass boost icon and the virtualizer icon at the bottom left and right corners of the screen respectively. You will see a slider for each feature that you can drag left or right to increase or decrease the level of bass boost and virtualizer.
-
Tip 4: Use headphones or external speakers for better results
-
Flash Beats Pro Apk works best with headphones or external speakers. These devices can deliver a more enhanced audio quality than your device's built-in speaker. They can also isolate the sound from the surrounding noise, making it more clear and crisp. To use headphones or external speakers, simply plug them into your device's audio jack or connect them via Bluetooth. The app will automatically detect and optimize the sound for your device.
-
Tip 5: Disable the app when not needed to save battery and avoid conflicts
-
Flash Beats Pro Apk uses more battery power than your default audio settings. This is because the app runs in the background and processes the audio output constantly. You may notice a faster battery drain when using the app. To save battery and avoid conflicts with other apps or devices, you can disable the app when not needed. To disable the app, tap on the menu icon at the top right corner of the screen and select disable. You can also use the toggle switch on the notification bar to enable or disable the app quickly.
-
Conclusion
-
Flash Beats Pro Apk is a powerful and versatile app that can improve your audio experience on your Android device. It offers various features and options to enhance the sound quality and clarity of your device. You can choose from different preset modes, adjust the equalizer settings, enable the bass boost and virtualizer features, and access more features and options from the menu. You can also use headphones or external speakers for better results, and disable the app when not needed to save battery and avoid conflicts.
-
If you are looking for a way to make your sound better and more enjoyable, you should give Flash Beats Pro Apk a try. You will be amazed by how much difference it can make to your audio experience. Download Flash Beats Pro Apk today and enjoy a more immersive and realistic sound that matches your music genre, mood, and preference.
-
FAQs
-
Here are some frequently asked questions related to Flash Beats Pro Apk:
-
-
Is Flash Beats Pro Apk safe to use?
-
Yes, Flash Beats Pro Apk is safe to use. It does not contain any malware or viruses that can harm your device or data. However, you should always download the app from a trusted source, such as the official website or other reputable sites like APKPure or APKMirror.
-
Does Flash Beats Pro Apk require root access?
-
No, Flash Beats Pro Apk does not require root access for most of its features. However, some features, such as loudness enhancer, volume booster, speaker optimization, etc., may require root access to work properly. Rooting your device can void your warranty and expose your device to security risks.
-
How can I update Flash Beats Pro Apk?
-
You can update Flash Beats Pro Apk by downloading and installing the latest version of the app from a trusted source. You can also check for updates from within the app by tapping on the menu icon at the top right corner of the screen and selecting check for updates.
-
How can I uninstall Flash Beats Pro Apk?
-
You can uninstall Flash Beats Pro Apk by following these steps:
-
-
Go to your device settings, then apps, then Flash Beats Pro.
-
Tap on uninstall and confirm your action.
-
Alternatively, you can long-press on the app icon on your app drawer and drag it to the uninstall option.
-
-
How can I contact Flash Beats Pro Apk support?
-
You can contact Flash Beats Pro Apk support by sending an email to flashbeatspro@gmail.com. You can also visit their official website or Facebook page for more information and feedback.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Just Cause 1 Save Game 100 - Download and Install Guide.md b/spaces/congsaPfin/Manga-OCR/logs/Just Cause 1 Save Game 100 - Download and Install Guide.md
deleted file mode 100644
index a9be8c46b8515bb887dc77022af1e931c0bb061b..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Just Cause 1 Save Game 100 - Download and Install Guide.md
+++ /dev/null
@@ -1,87 +0,0 @@
-
-
How to Download and Use Just Cause 1 Save Game 100
-
Just Cause 1 is a third-person action-adventure game that lets you play as Rico Rodriguez, a CIA agent who is sent to overthrow a corrupt dictator on a fictional Caribbean island. The game offers a huge open world with various environments, vehicles, weapons, and missions to explore and complete. However, if you want to skip some of the tedious or difficult parts of the game, or if you want to enjoy the full potential of Rico's skills and arsenal, you might want to download a save game 100 file.
A save game 100 file is a file that contains all the progress and data of a completed or nearly completed game. By using a save game 100 file, you can access all the areas, items, upgrades, and achievements that are otherwise locked or unavailable in the game. This can make your gameplay more fun and exciting, as you can experiment with different options and scenarios without worrying about losing your progress or facing any consequences.
-
However, using a save game 100 file also comes with some risks and drawbacks. For one thing, you might miss out on some of the original challenges and rewards that the game offers. For another thing, you might encounter some bugs or glitches that could affect your gameplay or even damage your system. Therefore, before you decide to download and use a save game 100 file, you should weigh the pros and cons carefully and follow some precautions.
-
How to Download Just Cause 1 Save Game 100
-
The first step to using a save game 100 file is to download it from a reliable source. There are many websites and forums that offer various save game 100 files for different games, including Just Cause 1. However, not all of them are safe or trustworthy. Some of them may contain viruses, malware, or spyware that could harm your computer or steal your personal information. Some of them may also provide fake or outdated save game 100 files that could cause errors or crashes in your game. Therefore, you should always download save game 100 files from reputable and verified sources, such as [GameCopyWorld], [Nexus Mods], or [Steam Community]. These sources usually have ratings, reviews, and comments from other users that can help you judge the quality and safety of the save game 100 files.
-
The next step is to check the compatibility and integrity of the save game 100 files. Not all save game 100 files are compatible with your version of Just Cause 1 or your operating system. Some save game 100 files may require certain patches, mods, or DLCs to work properly. Some save game 100 files may also be corrupted or incomplete, which could lead to unexpected results or errors in your game. Therefore, you should always read the description and instructions of the save game 100 files carefully and make sure they match your game specifications and requirements. You can also use some tools or programs, such as [WinRAR] or [7-Zip], to extract and verify the save game 100 files before installing them.
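If the save game comes packaged as a ZIP archive, Python's built-in zipfile module can run the same kind of integrity check before you install anything. This is only a sketch under that assumption; just-cause-save-100.zip is a placeholder name, and a RAR archive would need one of the tools mentioned above instead.

```python
import zipfile

ARCHIVE = "just-cause-save-100.zip"  # placeholder name for the downloaded archive

with zipfile.ZipFile(ARCHIVE) as zf:
    bad = zf.testzip()               # returns the first corrupt member, or None
    if bad is not None:
        print(f"Corrupt file inside the archive: {bad}")
    else:
        print("Archive passed the CRC check. Contents:")
        for name in zf.namelist():   # review what will be extracted before touching anything
            print(" ", name)
```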
-
-
The final step is to extract and install the save game 100 files. Most save game 100 files come in compressed formats, such as ZIP or RAR, which need to be extracted first. You can use the same tools or programs mentioned above to extract the save game 100 files to a folder of your choice. Then, you need to copy and paste the extracted save game 100 files to the correct location in your computer. The default location for Just Cause 1 save game files is C:\Users\YourUserName\Documents\Just Cause\Saves. However, this may vary depending on your installation settings or preferences. You can also create a backup of your own save game files before replacing them with the save game 100 files, in case you want to revert back to your original progress later.
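As a convenience, the routine described above (back up your own saves, then copy the extracted files into the Saves folder) can be scripted. The sketch below assumes the default save location quoted above and that you already unpacked the download into a folder named extracted_save; both paths are placeholders you may need to adjust.

```python
import shutil
from pathlib import Path

saves = Path.home() / "Documents" / "Just Cause" / "Saves"  # default location mentioned above
extracted = Path("extracted_save")                          # placeholder: where you unpacked the download
backup = saves.with_name("Saves_backup")

# 1. Keep a copy of your current progress so you can roll back later.
if saves.exists():
    shutil.copytree(saves, backup, dirs_exist_ok=True)

# 2. Copy the downloaded save files over the originals.
saves.mkdir(parents=True, exist_ok=True)
for item in extracted.iterdir():
    if item.is_file():
        shutil.copy2(item, saves / item.name)
```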
-
How to Use Just Cause 1 Save Game 100
-
Once you have downloaded and installed the save game 100 files, you can start using them in your game. To do so, you need to load and play the save game 100 files from the main menu of Just Cause 1. You can select the Load Game option and choose the save game 100 file that you want to use. Alternatively, you can select the New Game option and choose the difficulty level that corresponds to the save game 100 file that you want to use. For example, if you want to use a save game 100 file that has completed the game on Normal difficulty, you need to select the Normal difficulty when starting a new game.
-
After loading or starting the save game 100 file, you can enjoy playing Just Cause 1 with all the features and benefits that it offers. You can explore the entire map of San Esperito, use any vehicle or weapon that you like, complete any mission or challenge that you want, and unlock any achievement or trophy that you desire. However, you should also be aware of some potential issues or problems that may arise from using save game 100 files. For instance, you may encounter some bugs or glitches that could affect your gameplay or even crash your game. You may also lose some of the original fun and satisfaction that comes from playing the game by yourself and earning your own progress and rewards. Therefore, you should use save game 100 files with caution and moderation.
-
If you encounter any issues or problems with using save game 100 files, you can try some troubleshooting methods to fix them. One method is to backup and restore your own save game files, as mentioned earlier. This can help you recover your original progress and settings if something goes wrong with the save game 100 files. Another method is to update or patch your game to the latest version available. This can help you improve the performance and compatibility of your game and the save game 100 files. A third method is to contact the support team or the community of the source where you downloaded the save game 100 files. They may be able to provide you with some solutions or alternatives for your issues or problems.
-
Conclusion
-
In conclusion, downloading and using save game 100 files for Just Cause 1 can be a fun and convenient way to experience the game in a different and more exciting manner. However, it also comes with some risks and drawbacks that you should be aware of and prepared for. Therefore, before you download and use save game 100 files, you should do some research and follow some precautions to ensure that you get the best results and avoid any unwanted consequences. Here are some tips and recommendations for using save game 100 files:
-
-
Download save game 100 files from reputable and verified sources, such as [GameCopyWorld], [Nexus Mods], or [Steam Community].
-
Check the compatibility and integrity of the save game 100 files before installing them.
-
Backup and restore your own save game files regularly.
-
Update or patch your game to the latest version available.
-
Contact the support team or the community of the source where you downloaded the save game 100 files if you encounter any issues or problems.
-
-
We hope this article has helped you learn how to download and use save game 100 files for Just Cause 1. If you have any feedback or questions, please feel free to leave a comment below. We would love to hear from you!
-
FAQs
-
-
Can I use save game 100 files for other games?
-
No, save game 100 files are specific to each game and may not work for other games.
-
Can I use save game 100 files for different versions of Just Cause 1?
-
It depends on the compatibility of the save game 100 files with your version of Just Cause 1. You may need to update or patch your game to use some save game 100 files.
-
Can I use save game 100 files for multiplayer mode?
-
No, save game 100 files are only for single-player mode. Using them for multiplayer mode may cause errors or bans.
-
Can I edit or modify save game 100 files?
-
Yes, you can use some tools or programs to edit or modify save game 100 files, but do so at your own risk. Editing or modifying save game 100 files may corrupt them or make them incompatible with your game.
-
Can I share or upload my own save game 100 files?
-
Yes, you can share or upload your own save game 100 files, but make sure you have permission from the original creators or owners of the save game 100 files. Also, make sure you provide clear instructions and information about your save game 100 files.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Samsung One UI 4 How to experience Android 12 Beta on your Galaxy phone.md b/spaces/congsaPfin/Manga-OCR/logs/Samsung One UI 4 How to experience Android 12 Beta on your Galaxy phone.md
deleted file mode 100644
index d51166ebb024170adb0ca35faa7e5162245154bd..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Samsung One UI 4 How to experience Android 12 Beta on your Galaxy phone.md
+++ /dev/null
@@ -1,184 +0,0 @@
-
-
How to Download and Install Android 12 Beta on Samsung Devices
-
Android 12 is the latest version of Google's operating system for smartphones and tablets. It brings a lot of new features and improvements, such as a redesigned user interface, enhanced privacy and security, better performance and battery life, and more. Samsung, one of the biggest Android device manufacturers, has also released its own version of Android 12, called One UI 4, which adds some Samsung-specific features and customizations on top of the stock Android experience.
-
If you are a Samsung user and you want to try out the new Android 12 and One UI 4 before they are officially released, you can join the beta program and download the beta software on your device. However, before you do that, you should be aware of the benefits and risks of installing beta software, as well as the steps involved in signing up, downloading, installing, and uninstalling the beta software. In this article, we will guide you through everything you need to know about downloading and installing Android 12 beta on Samsung devices.
How to Sign Up for One UI 4/Android 12 Beta Program
-
The first thing you need to do if you want to download and install Android 12 beta on your Samsung device is to sign up for the One UI Beta Program. This is a program that allows Samsung users to test the new features and design of One UI before they are officially released. By joining the program, you will be able to send feedback and suggestions to Samsung to help them improve the software quality and user experience.
-
To sign up for the One UI Beta Program, you will need a Samsung account and a compatible device. The program is only available for certain devices, OS versions, and regions. You can check the list of eligible devices and regions below. You will also need to download the Samsung Members app from Galaxy Store or Google Play Store. This app will allow you to register for the program, check for updates, report errors, make suggestions, and communicate with other beta testers.
-
Step 1: Download Samsung Members App
-
To download the Samsung Members app, follow these steps:
-
-
Open Galaxy Store or Google Play Store on your device.
-
Search for "Samsung Members" and tap on the app icon.
-
Tap on Install and wait for the app to download and install.
-
Open the app and sign in with your Samsung account. If you don't have one, you can create one at https://account.samsung.com.
-
-
Step 2: Register for One UI Beta Program
-
To register for the One UI Beta Program, follow these steps:
-
-
Open the Samsung Members app on your device.
-
Look for a banner or a notice that says "Registration for One UI Beta Program" at the top or in the Benefits section of the app. If you don't see it, it means that the program is not available for your device or region yet.
-
Tap on the banner or notice, read through the terms and conditions of the program, and tap on Agree.
-
Wait for a confirmation message that says "You're in". This means that you have successfully registered for the program and you can now download and install the beta software.
-
-
How to Download and Install Android 12 Beta on Samsung Devices
-
Once you have registered for the One UI Beta Program, you can proceed to download and install the beta software on your device. The beta software will be delivered as an over-the-air (OTA) update, which means that you can download and install it directly from your device without connecting it to a computer. However, you should make sure that your device has enough battery life and storage space before downloading and installing the update. You should also back up your data in case something goes wrong during the installation process.
-
Step 3: Check for Software Update
-
To check for the software update, follow these steps:
-
-
Open the Settings app on your device.
-
Tap on Software update.
-
Tap on Download and install.
-
Your device will check for the latest software update. If the beta software is available, you will see a message that says "One UI 4 Beta with Android 12".
-
Tap on Download to start downloading the update. The download size may vary depending on your device model and region.
-
-
Step 4: Download and Install Android 12 Beta
-
To download and install the beta software, follow these steps:
-
-
-
Once the download is complete, tap on Install now to start installing the update. Your device will reboot and show a progress bar on the screen.
-
Wait for the installation to finish. This may take several minutes, so do not turn off or interrupt your device during this process.
-
Your device will reboot again and show a welcome screen. Follow the on-screen instructions to set up your device with the new software.
-
Congratulations! You have successfully downloaded and installed Android 12 beta on your Samsung device. You can now explore the new features and design of One UI 4 and Android 12.
-
-
How to Uninstall Android 12 Beta and Roll Back to Android 11
-
If you are not satisfied with the beta software or you encounter any problems or bugs, you can uninstall Android 12 beta and roll back to Android 11. However, this process will erase all your data on your device, so you should back up your data before proceeding. You will also need a computer with Smart Switch installed, which is a software that allows you to transfer data between your Samsung device and your computer. You can download Smart Switch from https://www.samsung.com/us/support/owners/app/smart-switch.
-
Step 5: Back Up Your Data
-
To back up your data, follow these steps:
-
-
Open the Settings app on your device.
-
Tap on Accounts and backup.
-
Tap on Backup and restore.
-
Select the data you want to back up, such as contacts, messages, photos, etc.
-
Tap on Back up data.
-
Your device will start backing up your data to your Samsung account or Google account, depending on your preference.
-
-
Step 6: Download Smart Switch on Your PC or Mac
-
To download Smart Switch on your computer, follow these steps:
Go to https://www.samsung.com/us/support/owners/app/smart-switch in your web browser, select your operating system (Windows or Mac), and click on Download for Windows or Download for Mac.
-
Wait for the file to download and then run it to install Smart Switch on your computer.
-
Follow the instructions on the screen to complete the installation.
-
-
Step 7: Connect Your Device to Your Computer
-
To connect your device to your computer, follow these steps:
-
-
Open Smart Switch on your computer.
-
Connect your device to your computer using a USB cable.
-
Your computer will recognize your device and show its model name and number on Smart Switch.
-
If prompted, allow access to your device data on both your device and your computer.
-
-
Step 8: Restore Your Device to Android 11
-
To restore your device to Android 11, follow these steps:
-
-
On Smart Switch, click on More at the top right corner of the screen.
-
Select Initialization.
-
Select your device model and click on OK.
-
Enter your device serial number and click on OK. You can find your device serial number on the back of your device or in the Settings app under About phone.
-
Click on OK to confirm that you want to restore your device to Android 11. This will erase all your data on your device and install the official Android 11 software.
-
Wait for the process to complete. This may take some time, so do not disconnect or interrupt your device or computer during this process.
-
Your device will reboot and show a welcome screen. Follow the on-screen instructions to set up your device with Android 11.
-
You have successfully uninstalled Android 12 beta and rolled back to Android 11. You can now restore your data from your backup if you want.
-
-
Which Samsung Devices are Eligible for Android 12 Beta?
-
Not all Samsung devices are eligible for the One UI Beta Program. The program is only available for certain devices, OS versions, and regions. The list of supported devices and regions may change over time, so you should check the Samsung Members app or the Samsung website for the latest information. As of now, these are the devices and regions that are eligible for the program:
-
-
| Device | Region |
| --- | --- |
| Samsung Galaxy S21 | US, UK, Germany, Poland, India, China, South Korea |
| Samsung Galaxy S21+ | US, UK, Germany, Poland, India, China, South Korea |
| Samsung Galaxy S21 Ultra | US, UK, Germany, Poland, India, China, South Korea |
| Samsung Galaxy Z Fold3 | US, UK, Germany, Poland, India, China, South Korea |
| Samsung Galaxy Z Flip3 | US, UK, Germany, Poland, India, China, South Korea |
-
If your device is not on the list, you will have to wait until the official release of Android 12 and One UI 4 for your device. Samsung usually releases major software updates for its flagship devices first, followed by its mid-range and budget devices. The release schedule may vary depending on your device model and region.
-
What's New in One UI 4/Android 12?
-
One UI 4 is Samsung's version of Android 12 that adds some Samsung-specific features and customizations on top of the stock Android experience. One UI 4 is based on the Material You design language that Google introduced with Android 12. Material You is a design philosophy that adapts to the user's preferences and context, such as their wallpaper color, theme, font size, accessibility settings, etc. One UI 4 also brings some new features and improvements that enhance the user experience and functionality of Samsung devices. Here are some of the highlights of One UI 4/Android 12:
-
-
A redesigned user interface that is more colorful, dynamic, and personalized. You can customize your home screen widgets, icons, notifications, quick settings panel, lock screen clock style, always-on display style, and more with different shapes and colors that match your wallpaper or mood.
-
Enhanced privacy and security features that give you more control over your data and permissions. You can see which apps are accessing your camera or microphone with a new indicator in the status bar. You can also turn off these sensors with a quick toggle in the quick settings panel. You can also manage your app permissions more easily with a new Privacy Dashboard that shows you how often apps access your location, camera, microphone, and other sensitive information. You can also request a one-time permission or a temporary permission for apps that you don't use often.
-
Better performance and battery life that optimize your device's resources and reduce power consumption. One UI 4/Android 12 builds on Project Mainline, a system that allows Google to update some core components of the OS without requiring a full system update. This means that you can get faster and more frequent security patches and bug fixes. One UI 4/Android 12 also uses a new feature called Adaptive Charging that learns your usage patterns and adjusts the charging speed accordingly to extend the battery lifespan.
-
More fun and creative features that let you express yourself and enjoy your device. You can use the new Live Space feature to create animated wallpapers that react to your touch, gestures, or music. You can also use the new Emoji Kitchen feature to mix and match different emojis and stickers to create your own custom expressions. You can also use the new Sound Assistant feature to adjust the volume and sound quality of each app individually. You can also use the new Game Mode feature to optimize your gaming experience with various settings and shortcuts.
-
-
Conclusion
-
One UI 4/Android 12 is the latest software update for Samsung devices that brings a lot of new features and improvements to the user interface, privacy, security, performance, battery life, and more. If you want to try out the new software before it is officially released, you can join the One UI Beta Program and download the beta software on your device. However, you should be aware of the benefits and risks of installing beta software, as well as the steps involved in signing up, downloading, installing, and uninstalling the beta software. In this article, we have guided you through everything you need to know about downloading and installing Android 12 beta on Samsung devices. We hope you found this article helpful and informative.
-
FAQs
-
Here are some frequently asked questions about Android 12 beta on Samsung devices:
-
-
Q: When will Android 12 and One UI 4 be officially released for Samsung devices?
A: There is no official release date for Android 12 and One UI 4 for Samsung devices yet. However, based on previous release cycles, we can expect them to be released sometime in late 2021 or early 2022 for flagship devices, followed by mid-range and budget devices.
-
Q: How can I check if my device is eligible for the One UI Beta Program?
-A: You can check if your device is eligible for the One UI Beta Program by opening the Samsung Members app on your device and looking for a banner or a notice that says "Registration for One UI Beta Program". If you don't see it, it means that your device or region is not eligible for the program yet.
-
Q: How can I send feedback or report bugs to Samsung?
-A: You can send feedback or report bugs to Samsung by using the Samsung Members app on your device. You can access the feedback section by tapping on the menu icon at the top left corner of the app and selecting Feedback. You can then choose between Error report, Suggestion, or Question and fill out the required information.
-
Q: How can I communicate with other beta testers?
-A: You can communicate with other beta testers by using the Samsung Members app on your device. You can access the community section by tapping on the menu icon at the top left corner of the app and selecting Community. You can then browse through different topics, posts, comments, and likes from other beta testers.
-
Q: How can I uninstall Smart Switch from my computer?
-A: You can uninstall Smart Switch from your computer by following these steps:
-
-
On Windows, go to Control Panel > Programs > Programs and Features and select Smart Switch from the list of programs. Click on Uninstall and follow the instructions on the screen.
-
On Mac, go to Finder > Applications and drag Smart Switch to the Trash. Empty the Trash to complete the uninstallation.
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Shadow Fight 2 MOD APK Everything You Need to Know about Level 52 Money Diamonds Energy and Menu.md b/spaces/congsaPfin/Manga-OCR/logs/Shadow Fight 2 MOD APK Everything You Need to Know about Level 52 Money Diamonds Energy and Menu.md
deleted file mode 100644
index 65351dcd2fc2e4d85f99f2c9da892ecf3f80e29e..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Shadow Fight 2 MOD APK Everything You Need to Know about Level 52 Money Diamonds Energy and Menu.md
+++ /dev/null
@@ -1,121 +0,0 @@
-
-
Download Shadow Fight 2 Mod Apk Max Level 52 An1: A Complete Guide
-
If you are a fan of fighting games, you might have heard of Shadow Fight 2, one of the most popular and addictive games in this genre. Shadow Fight 2 is a game that combines martial arts, RPG elements, and stunning graphics to create a thrilling and immersive experience. However, if you want to enjoy the game to the fullest, you might want to download Shadow Fight 2 mod apk max level 52 an1, a modified version of the game that gives you access to unlimited resources, max level, and menu mod. In this article, we will tell you everything you need to know about this mod apk, including its features, how to download and install it, and some tips and tricks for playing it.
-
What is Shadow Fight 2?
-
Shadow Fight 2 is a sequel to the original Shadow Fight, a Facebook game that was released in 2011. The game follows the story of a legendary warrior who accidentally unleashes an ancient evil force that turns him into a shadow. He then travels across six different worlds, fighting various enemies and bosses, in order to close the Gates of Shadows and restore his human form.
The game features a realistic combat system that allows you to use different weapons, armor, and skills to defeat your opponents. You can also customize your character's appearance and equipment according to your preference. The game has a rich and diverse content, with over 100 stages, dozens of weapons and armor sets, hundreds of enemies and bosses, and several game modes such as story mode, tournament mode, survival mode, duel mode, and online mode.
-
Features of Shadow Fight 2
-
Some of the main features of Shadow Fight 2 are:
-
-
Smooth and realistic animations that make the fights more dynamic and exciting.
-
Stunning graphics that create a dark and atmospheric environment.
-
A captivating storyline that keeps you engaged throughout the game.
-
A variety of weapons and armor that you can unlock and upgrade as you progress.
-
A diverse range of enemies and bosses that challenge your skills and strategy.
-
A simple and intuitive control system that allows you to perform different moves and combos with ease.
-
A multiplayer mode that lets you challenge other players from around the world.
-
-
Why download Shadow Fight 2 mod apk max level 52 an1?
-
While Shadow Fight 2 is undoubtedly a fun and addictive game, it also has some drawbacks that might frustrate some players. For instance, the game can be quite difficult and challenging at times, especially when you face stronger enemies and bosses. You might also run out of energy or coins quickly, which limits your gameplay time and progress. Moreover, some of the best weapons and armor are locked behind premium currency or high levels, which means you have to spend real money or grind for hours to get them.
-
This is where Shadow Fight 2 mod apk max level 52 an1 comes in handy. This is a modified version of the game that gives you several advantages over the original version. Some of the benefits of downloading this mod apk are:
-
Unlimited money
-
With this mod apk, you will never run out of money in the game. You can use the unlimited money to buy any weapon or armor you want, without worrying about the cost. You can also upgrade your equipment to the maximum level, making you more powerful and unstoppable in the fights.
-
Infinite diamonds
-
Diamonds are the premium currency in Shadow Fight 2, which can be used to buy special items, unlock chests, or skip waiting times. However, diamonds are very rare and hard to get in the game, unless you spend real money to buy them. With this mod apk, you will have infinite diamonds at your disposal, which means you can enjoy all the benefits of the premium features without spending a dime.
-
-
Unlimited energy
-
Energy is the resource that determines how long you can play the game. Every time you fight, you consume some energy, and when it runs out, you have to wait for it to regenerate or use some diamonds to refill it. This can be very annoying and frustrating, especially when you are in the middle of an exciting fight or a challenging stage. With this mod apk, you will have unlimited energy, which means you can play the game as long as you want, without any interruptions or limitations.
-
Max level 52 or 99
-
The original version of Shadow Fight 2 has a level cap of 52, which means you cannot progress further than that. However, with this mod apk, you can choose to have either max level 52 or max level 99, depending on your preference. This will give you access to all the weapons and armor in the game, as well as all the skills and perks. You will also be able to defeat any enemy or boss with ease, as you will have the highest stats and attributes possible.
-
Menu mod
-
This mod apk also comes with a menu mod that allows you to customize and tweak various aspects of the game according to your liking. For example, you can enable or disable the sound effects, music, or vibration. You can also change the language of the game, adjust the graphics quality, or enable or disable auto-update. The menu mod gives you more control and flexibility over your gaming experience.
-
How to download and install Shadow Fight 2 mod apk max level 52 an1?
-
If you are interested in downloading and installing Shadow Fight 2 mod apk max level 52 an1 on your device, you can follow these simple steps:
-
Step 1: Download the mod apk file from a trusted source
-
The first thing you need to do is to download the mod apk file from a reliable and safe source. There are many websites that claim to offer this mod apk, but not all of them are trustworthy or secure. Some of them might contain viruses, malware, or spyware that can harm your device or steal your personal information. Therefore, you should be careful and cautious when choosing where to download the file from.
-
One of the best sources that we recommend is [an1.com], a website that provides high-quality and verified mod apks for various games and apps. You can download Shadow Fight 2 mod apk max level 52 an1 from [this link], which will take you directly to the download page. You will see a green button that says "Download", which you need to click on to start the download process.
-
Step 2: Enable unknown sources on your device
-
The next thing you need to do is to enable unknown sources on your device. This is a security setting that prevents your device from installing apps from sources other than the official Google Play Store. However, since Shadow Fight 2 mod apk max level 52 an1 is not available on the Play Store, you need to enable unknown sources to install it.
-
To enable unknown sources on your device, you need to go to your device's settings and look for the security or privacy option. There, you will find a toggle switch or a checkbox that says "Unknown sources" or "Allow installation of apps from unknown sources". You need to turn it on or check it to enable it.
-
Step 3: Install the mod apk file and launch the game
-
The final step is to install the mod apk file and launch the game. To do this, you need to locate the downloaded file on your device's storage and tap on it to open it. You will see a pop-up window that asks for your permission to install the app. You need to click on "Install" and wait for a few seconds until the installation is complete.
-
Once the installation is complete, you can launch the game by tapping on its icon on your device's home screen or app drawer. You will see the game's logo and a loading screen, followed by the main menu. You can then start playing the game with all the mod features enabled.
-
Tips and tricks for playing Shadow Fight 2 mod apk max level 52 an1
-
Now that you have downloaded and installed Shadow Fight 2 mod apk max level 52 an1, you might be wondering how to play it and make the most out of it. Here are some tips and tricks that can help you improve your skills and enjoy the game more:
-
Learn the basic moves and combos
-
One of the most important things to do in Shadow Fight 2 is to learn the basic moves and combos that you can perform with your character. These include punches, kicks, jumps, rolls, slides, throws, and blocks. You can also combine these moves to create different combos that can deal more damage and stun your enemies. You can practice these moves and combos in the training mode or in the easy stages of the game.
-
Upgrade your weapons and armor
-
Another thing to do in Shadow Fight 2 is to upgrade your weapons and armor as you progress. This will increase your attack and defense power, as well as your speed and agility. You can use the unlimited money and diamonds from the mod apk to buy and upgrade any weapon or armor you want, without any restrictions. You can also try different combinations of weapons and armor to find the ones that suit your style and preference.
-
Use your special skills wisely
-
Besides the basic moves and combos, you can also use special skills that can give you an edge in the fights. These skills are unlocked as you level up, and they include magic, ranged weapons, shadow abilities, and perks. Each skill has a different effect and cooldown time, so you need to use them wisely and strategically. You can also upgrade your skills to make them more powerful and effective.
-
Challenge other players in online mode
-
If you want to test your skills and compete with other players from around the world, you can try the online mode of Shadow Fight 2. This mode allows you to challenge other players in real-time duels or join clans and participate in raids. You can also chat with other players, make friends, or enemies, and earn rewards and rankings. The online mode is a great way to have fun and show off your skills.
-
Conclusion
-
Shadow Fight 2 is a game that offers a lot of fun and excitement for fighting game fans. However, if you want to enhance your gaming experience and enjoy the game without any limitations or frustrations, you should download Shadow Fight 2 mod apk max level 52 an1. This mod apk will give you unlimited money, diamonds, energy, max level, menu mod, and more. You will be able to play the game as long as you want, buy and upgrade any weapon or armor you want, use any skill you want, and defeat any enemy or boss you want. You will also be able to customize and tweak various aspects of the game according to your liking.
-
To download Shadow Fight 2 mod apk max level 52 an1, you just need to follow the simple steps that we have explained in this article. You need to download the mod apk file from a trusted source, enable unknown sources on your device, install the mod apk file, and launch the game. You will then be able to enjoy the game with all the mod features enabled.
-
We hope that this article has helped you learn more about Shadow Fight 2 mod apk max level 52 an1 and how to download and install it on your device. If you have any questions or feedback, feel free to leave a comment below. Thank you for reading!
-
FAQs
-
Here are some of the frequently asked questions about Shadow Fight 2 mod apk max level 52 an1:
-
-
Is Shadow Fight 2 mod apk max level 52 an1 safe to use?
-
Yes, Shadow Fight 2 mod apk max level 52 an1 is safe to use, as long as you download it from a reliable and secure source like [an1.com]. However, you should always be careful when downloading any mod apk from unknown sources, as they might contain viruses or malware that can harm your device or steal your data.
-
Will I get banned for using Shadow Fight 2 mod apk max level 52 an1?
-
No, you will not get banned for using Shadow Fight 2 mod apk max level 52 an1, as this mod apk does not interfere with the game's servers or online mode. However, you should always use the mod apk at your own risk, as we cannot guarantee that it will work flawlessly or that it will not cause any issues with your device or game.
-
Can I play Shadow Fight 2 mod apk max level 52 an1 offline?
-
Yes, you can play Shadow Fight 2 mod apk max level 52 an1 offline, as this mod apk does not require an internet connection to work. You can play the game's story mode, tournament mode, survival mode, or duel mode without any problems. However, if you want to play the game's online mode, you will need to connect to the internet and log in with your account.
-
Can I update Shadow Fight 2 mod apk max level 52 an1?
-
Yes, you can update Shadow Fight 2 mod apk max level 52 an1, as this mod apk has an auto-update feature that will notify you when a new version is available. You can also check the [an1.com] website for the latest updates and download them manually. However, you should always backup your game data before updating, as some updates might cause compatibility issues or data loss.
-
Can I use Shadow Fight 2 mod apk max level 52 an1 on other devices?
-
Yes, you can use Shadow Fight 2 mod apk max level 52 an1 on other devices, as this mod apk is compatible with most Android devices that have Android 4.1 or higher. However, you should always check the device's specifications and requirements before installing the mod apk, as some devices might not support the game or the mod features.
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/TikTok APK Home The Official App for the Most Popular Social Network.md b/spaces/congsaPfin/Manga-OCR/logs/TikTok APK Home The Official App for the Most Popular Social Network.md
deleted file mode 100644
index be2c99c74f30a04cbcccf992aacea4352e5fad8a..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/TikTok APK Home The Official App for the Most Popular Social Network.md
+++ /dev/null
@@ -1,108 +0,0 @@
-
-
TikTok APKHome: How to Download and Install the App for Android
-
TikTok is one of the most popular social networks in the world, with over a billion users who create and share fun videos with music and effects. However, some users may not be able to access the official app from the Google Play Store due to various reasons, such as regional restrictions, device compatibility, or personal preferences. That's where TikTok APKHome comes in handy. In this article, we will explain what TikTok APKHome is, how to download and install it on your Android device, and how to use it to enjoy the best of TikTok.
-
What is TikTok APKHome?
-
TikTok APKHome is an alternative version of the official TikTok app that you can download and install from a third-party source, such as Uptodown. It is not affiliated with or endorsed by TikTok Pte. Ltd., the company that owns and operates TikTok. It is a modified version of the original app that may have some features or functions that are different from or not available in the official app.
-
The benefits of using TikTok APKHome
-
Some of the benefits of using TikTok APKHome are:
-
You can access TikTok even if it is not available in your region or country.
-
You can use TikTok on devices that are not compatible with the official app.
-
You can enjoy the latest version of TikTok without waiting for updates from the Google Play Store.
-
You can customize your app settings and preferences according to your needs.
-
-
The risks of using TikTok APKHome
-
Some of the risks of using TikTok APKHome are:
-
-
You may expose your device to malware or viruses that may harm your data or privacy.
-
You may violate the terms of service or policies of TikTok and risk getting banned or suspended from the platform.
-
You may encounter bugs or errors that may affect your user experience or performance.
-
You may not receive official support or assistance from TikTok in case of any issues or problems.
-
-
How to download and install TikTok APKHome
-
If you decide to use TikTok APKHome, you need to follow these steps to download and install it on your Android device:
-
Step 1: Enable unknown sources on your device
-
Since you are downloading an app from a third-party source, you need to enable unknown sources on your device. This will allow you to install apps that are not from the Google Play Store. To do this, go to Settings > Security > Unknown sources and toggle it on. You may see a warning message that says installing apps from unknown sources may harm your device. Tap OK to proceed.
-
Step 2: Download the APK file from Uptodown
-
Next, you need to download the APK file of TikTok APKHome from Uptodown, a trusted website that offers free and safe downloads of apps and games for Android. To do this, open your browser and go to https://tiktok.en.uptodown.com/android. Tap on the green Download button and wait for the file to be downloaded. You may see a notification that says this type of file can harm your device. Tap OK to continue.
-
Step 3: Install the APK file and launch the app
-
Once the file is downloaded, go to your Downloads folder and tap on the file name. You may see a prompt that asks you to confirm the installation. Tap Install and wait for the app to be installed. Once the installation is complete, tap Open to launch the app. You may also see an icon of TikTok APKHome on your home screen or app drawer.
-
How to use TikTok APKHome
-
Using TikTok APKHome is similar to using the official app, with some minor differences. Here are some of the things you can do with TikTok APKHome:
-
Create and edit videos with music and effects
-
To create a video, tap on the plus sign at the bottom of the screen. You can choose to record a video with your camera or upload a video from your gallery. You can also select a sound from the library or use your own sound. You can adjust the speed, timer, filters, and beauty effects of your video. To edit your video, tap on the scissors icon and use the tools to trim, cut, split, duplicate, or adjust your video. You can also add stickers, text, emojis, or effects to your video. To save your video, tap on Next and choose a cover image. You can also add a caption, hashtags, or mentions to your video. To post your video, tap on Post.
-
Share your videos with friends and followers
-
To share your videos with others, you can either make them public or private. Public videos are visible to everyone on TikTok, while private videos are only visible to you. You can also choose who can comment, duet, react, or download your videos. To share your videos with specific friends or groups, you can use the direct message feature. To access your messages, tap on the inbox icon at the bottom right of the screen. You can also share your videos to other social media platforms, such as Facebook, Instagram, WhatsApp, or Twitter.
-
Watch and discover content from other users
-
To watch and discover content from other users, you can either use the For You page or the Following page. The For You page shows you videos that are recommended for you based on your preferences and interests. The Following page shows you videos from the users that you follow. To switch between the pages, swipe left or right on the screen. To interact with a video, you can like, comment, share, or follow the user. You can also tap on the sound icon to see more videos that use the same sound. To discover more content from different categories, genres, or trends, you can use the Discover page. To access it, tap on the magnifying glass icon at the bottom left of the screen.
-
Conclusion
-
Summary of the main points
-
TikTok APKHome is an alternative version of the official TikTok app that you can download and install from a third-party source. It has some benefits and risks that you should be aware of before using it. It allows you to create and edit videos with music and effects, share them with friends and followers, and watch and discover content from other users.
-
FAQs
-
-
Q: Is TikTok APKHome safe to use?
-
A: TikTok APKHome is not an official app from TikTok Pte. Ltd., so it may not be as safe as the official app. It may contain malware or viruses that may harm your device or data. It may also violate the terms of service or policies of TikTok and risk getting banned or suspended from the platform.
-
Q: How do I update TikTok APKHome?
-
A: TikTok APKHome does not update automatically like the official app. You need to check for updates manually from Uptodown or other sources that offer TikTok APKHome downloads.
-
Q: Can I use TikTok APKHome and the official app at the same time?
-
A: No, you cannot use both apps at the same time on the same device. You need to uninstall one app before installing another.
-
Q: Can I log in with my existing TikTok account on TikTok APKHome?
-
A: Yes, you can log in with your existing TikTok account on TikTok APKHome. However, you should be careful about using your personal information on an unofficial app.
-
Q: What are some alternatives to TikTok APKHome?
-
A: Some alternatives to TikTok APKHome are:
-
-
TikTok Lite: A lighter version of the official app that consumes less data and storage space.
-
TikTok Mod: A modified version of the official app that removes ads and watermarks.
TikTok Plus: A modified version of the official app that adds more features and functions.
-
-
-
I hope this article has helped you understand what TikTok APKHome is, how to download and install it, and how to use it. If you have any questions or feedback, please leave a comment below. Thank you for reading!
-
-
\ No newline at end of file
diff --git a/spaces/contluForse/HuggingGPT/Jalolsavam-Film-Mp3-Song-26.md b/spaces/contluForse/HuggingGPT/Jalolsavam-Film-Mp3-Song-26.md
deleted file mode 100644
index 3db6ea778226a2effb7d989ba6fdd16a64799412..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/Jalolsavam-Film-Mp3-Song-26.md
+++ /dev/null
@@ -1,72 +0,0 @@
-## Jalolsavam Film Mp3 Song 26
-
-**Jalolsavam Film Mp3 Song 26: [https://urluso.com/2txV1Z](https://urluso.com/2txV1Z)**
-
-
-# Keranirakalaadum: A Melodious Song from Jalolsavam
-
-
-
-Jalolsavam is a 2004 Malayalam romantic drama film directed by Sibi Malayil and starring Kunchacko Boban and Navya Nair in the lead roles. The film revolves around the lives of two childhood friends who fall in love but face many obstacles in their relationship.
-
-
-
-One of the highlights of the film is its music composed by Alphons Joseph. The film features six songs, each with a different mood and style. One of the most popular songs from the film is Keranirakalaadum, sung by P. Jayachandran and written by BR Prasad.
-
-
-
-Keranirakalaadum is a soothing song that describes the beauty of nature and love. The lyrics compare the lovers to a green leaf and a dew drop that adorn a flower. The song also uses imagery of a drum, a poem, a song, and a breeze to express the feelings of the couple.
-
-
-
-The song has a melodious tune and a gentle rhythm that create a romantic atmosphere. The voice of P. Jayachandran is smooth and expressive, conveying the emotions of the song. The song also has a catchy chorus that repeats the word "keranirakalaadum" several times.
-
-
-
-Keranirakalaadum is a song that can touch the hearts of anyone who listens to it. It is a song that celebrates love and nature in a simple and elegant way. You can listen to Keranirakalaadum online or download it from various platforms such as JioSaavn, YouTube, or SoundCloud.
-
-
-If you are a fan of Malayalam cinema and music, you should not miss Jalolsavam and its songs. The film is a heartwarming story of love and friendship that will make you laugh and cry. The songs are composed with care and creativity, and sung with passion and skill. The songs are not only entertaining, but also meaningful and inspiring.
-
-
-
-Jalolsavam is a film that will stay with you for a long time. It is a film that will make you appreciate the beauty of life and love. It is a film that will make you hum Keranirakalaadum and other songs from its soundtrack. Jalolsavam is a film that you should watch and listen to.
-
-
-One of the reasons why Jalolsavam and its songs are so popular is the chemistry between the lead actors, Kunchacko Boban and Navya Nair. They play the roles of Vinod and Anitha, two childhood friends who grow up together in a village. They share a bond of friendship and affection that transcends the differences in their social status and family backgrounds.
-
-
-
-However, their love faces many challenges as they grow older. Vinod's father is a wealthy landlord who wants him to marry a rich girl. Anitha's father is a poor farmer who is indebted to Vinod's father. Vinod and Anitha also have to deal with the interference of their relatives and friends who try to separate them or create misunderstandings between them.
-
-
-
-Will Vinod and Anitha overcome these obstacles and unite in the end? Will they be able to fulfill their dreams and aspirations? Will they be able to enjoy the beauty of nature and love that they cherish so much? These are some of the questions that Jalolsavam answers in a realistic and emotional way.
-
-
-
-
-
-
diff --git a/spaces/contluForse/HuggingGPT/app.py b/spaces/contluForse/HuggingGPT/app.py
deleted file mode 100644
index 3811940cda405c3378a2278f9428e335df5252d5..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/app.py
+++ /dev/null
@@ -1,202 +0,0 @@
-import uuid
-import gradio as gr
-import re
-from diffusers.utils import load_image
-import requests
-from awesome_chat import chat_huggingface
-import os
-
-os.makedirs("public/images", exist_ok=True)
-os.makedirs("public/audios", exist_ok=True)
-os.makedirs("public/videos", exist_ok=True)
-
-class Client:
- def __init__(self) -> None:
- self.OPENAI_KEY = ""
- self.HUGGINGFACE_TOKEN = ""
- self.all_messages = []
-
- def set_key(self, openai_key):
- self.OPENAI_KEY = openai_key
- if len(self.HUGGINGFACE_TOKEN)>0:
- gr.update(visible = True)
- return self.OPENAI_KEY
-
- def set_token(self, huggingface_token):
- self.HUGGINGFACE_TOKEN = huggingface_token
- if len(self.OPENAI_KEY)>0:
- gr.update(visible = True)
- return self.HUGGINGFACE_TOKEN
-
- def add_message(self, content, role):
- message = {"role":role, "content":content}
- self.all_messages.append(message)
-
- def extract_medias(self, message):
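-        """Best-effort regex scan that collects the image, audio, and video URLs (or local file paths) mentioned in a chat message."""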
- # url_pattern = re.compile(r"(http(s?):|\/)?([\.\/_\w:-])*?")
- urls = []
- # for match in url_pattern.finditer(message):
- # if match.group(0) not in urls:
- # urls.append(match.group(0))
-
- image_pattern = re.compile(r"(http(s?):|\/)?([\.\/_\w:-])*?\.(jpg|jpeg|tiff|gif|png)")
- image_urls = []
- for match in image_pattern.finditer(message):
- if match.group(0) not in image_urls:
- image_urls.append(match.group(0))
-
- audio_pattern = re.compile(r"(http(s?):|\/)?([\.\/_\w:-])*?\.(flac|wav)")
- audio_urls = []
- for match in audio_pattern.finditer(message):
- if match.group(0) not in audio_urls:
- audio_urls.append(match.group(0))
-
- video_pattern = re.compile(r"(http(s?):|\/)?([\.\/_\w:-])*?\.(mp4)")
- video_urls = []
- for match in video_pattern.finditer(message):
- if match.group(0) not in video_urls:
- video_urls.append(match.group(0))
-
- return urls, image_urls, audio_urls, video_urls
-
- def add_text(self, messages, message):
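-        """Check that the OpenAI key and Hugging Face token look valid, record the user turn, and attach any image/audio/video files referenced in the message to the chat history."""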
- if len(self.OPENAI_KEY) == 0 or not self.OPENAI_KEY.startswith("sk-") or len(self.HUGGINGFACE_TOKEN) == 0 or not self.HUGGINGFACE_TOKEN.startswith("hf_"):
- return messages, "Please set your OpenAI API key and Hugging Face token first!!!"
- self.add_message(message, "user")
- messages = messages + [(message, None)]
- urls, image_urls, audio_urls, video_urls = self.extract_medias(message)
-
- for image_url in image_urls:
- if not image_url.startswith("http") and not image_url.startswith("public"):
- image_url = "public/" + image_url
- image = load_image(image_url)
- name = f"public/images/{str(uuid.uuid4())[:4]}.jpg"
- image.save(name)
- messages = messages + [((f"{name}",), None)]
-        for audio_url in audio_urls:
-            if not audio_url.startswith("http") and not audio_url.startswith("public"):
-                audio_url = "public/" + audio_url
-            ext = audio_url.split(".")[-1]
-            name = f"public/audios/{str(uuid.uuid4())[:4]}.{ext}"
-            response = requests.get(audio_url)
-            with open(name, "wb") as f:
-                f.write(response.content)
-            messages = messages + [((f"{name}",), None)]
-        for video_url in video_urls:
-            if not video_url.startswith("http") and not video_url.startswith("public"):
-                video_url = "public/" + video_url
-            ext = video_url.split(".")[-1]
-            name = f"public/videos/{str(uuid.uuid4())[:4]}.{ext}"
-            response = requests.get(video_url)
-            with open(name, "wb") as f:
-                f.write(response.content)
-            messages = messages + [((f"{name}",), None)]
- return messages, ""
-
- def bot(self, messages):
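-        """Run one assistant turn: pass the accumulated conversation to chat_huggingface and append its reply, plus any media files it references, to the chat history."""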
- if len(self.OPENAI_KEY) == 0 or not self.OPENAI_KEY.startswith("sk-") or len(self.HUGGINGFACE_TOKEN) == 0 or not self.HUGGINGFACE_TOKEN.startswith("hf_"):
- return messages, {}
- message, results = chat_huggingface(self.all_messages, self.OPENAI_KEY, self.HUGGINGFACE_TOKEN)
- urls, image_urls, audio_urls, video_urls = self.extract_medias(message)
- self.add_message(message, "assistant")
- messages[-1][1] = message
- for image_url in image_urls:
- if not image_url.startswith("http"):
- image_url = image_url.replace("public/", "")
- messages = messages + [((None, (f"public/{image_url}",)))]
- # else:
- # messages = messages + [((None, (f"{image_url}",)))]
- for audio_url in audio_urls:
- if not audio_url.startswith("http"):
- audio_url = audio_url.replace("public/", "")
- messages = messages + [((None, (f"public/{audio_url}",)))]
- # else:
- # messages = messages + [((None, (f"{audio_url}",)))]
- for video_url in video_urls:
- if not video_url.startswith("http"):
- video_url = video_url.replace("public/", "")
- messages = messages + [((None, (f"public/{video_url}",)))]
- # else:
- # messages = messages + [((None, (f"{video_url}",)))]
- # replace int key to string key
- results = {str(k): v for k, v in results.items()}
- return messages, results
-
-css = ".json {height: 527px; overflow: scroll;} .json-holder {height: 527px; overflow: scroll;}"
-with gr.Blocks(css=css) as demo:
- state = gr.State(value={"client": Client()})
- gr.Markdown("
HuggingGPT
")
- gr.Markdown("
")
- gr.Markdown("
A system to connect LLMs with ML community. See our Project and Paper.
")
- gr.HTML('''
Duplicate the Space and run securely with your OpenAI API Key and Hugging Face Token
''')
- with gr.Row().style():
- with gr.Column(scale=0.85):
- openai_api_key = gr.Textbox(
- show_label=False,
- placeholder="Set your OpenAI API key here and press Enter",
- lines=1,
- type="password"
- ).style(container=False)
- with gr.Column(scale=0.15, min_width=0):
- btn1 = gr.Button("Submit").style(full_height=True)
-
- with gr.Row().style():
- with gr.Column(scale=0.85):
- hugging_face_token = gr.Textbox(
- show_label=False,
- placeholder="Set your Hugging Face Token here and press Enter",
- lines=1,
- type="password"
- ).style(container=False)
- with gr.Column(scale=0.15, min_width=0):
- btn3 = gr.Button("Submit").style(full_height=True)
-
-
- with gr.Row().style():
- with gr.Column(scale=0.6):
- chatbot = gr.Chatbot([], elem_id="chatbot").style(height=500)
- with gr.Column(scale=0.4):
- results = gr.JSON(elem_classes="json")
-
-
- with gr.Row().style():
- with gr.Column(scale=0.85):
- txt = gr.Textbox(
- show_label=False,
- placeholder="Enter text and press enter. The url of the multimedia resource must contain the extension name.",
- lines=1,
- ).style(container=False)
- with gr.Column(scale=0.15, min_width=0):
- btn2 = gr.Button("Send").style(full_height=True)
-
- def set_key(state, openai_api_key):
- return state["client"].set_key(openai_api_key)
-
- def add_text(state, chatbot, txt):
- return state["client"].add_text(chatbot, txt)
-
- def set_token(state, hugging_face_token):
- return state["client"].set_token(hugging_face_token)
-
- def bot(state, chatbot):
- return state["client"].bot(chatbot)
-
- openai_api_key.submit(set_key, [state, openai_api_key], [openai_api_key])
- txt.submit(add_text, [state, chatbot, txt], [chatbot, txt]).then(bot, [state, chatbot], [chatbot, results])
- hugging_face_token.submit(set_token, [state, hugging_face_token], [hugging_face_token])
- btn1.click(set_key, [state, openai_api_key], [openai_api_key])
- btn2.click(add_text, [state, chatbot, txt], [chatbot, txt]).then(bot, [state, chatbot], [chatbot, results])
- btn3.click(set_token, [state, hugging_face_token], [hugging_face_token])
-
- gr.Examples(
- examples=["Given a collection of image A: /examples/a.jpg, B: /examples/b.jpg, C: /examples/c.jpg, please tell me how many zebras in these picture?",
- "Please generate a canny image based on /examples/f.jpg",
- "show me a joke and an image of cat",
- "what is in the examples/a.jpg",
- "based on the /examples/a.jpg, please generate a video and audio",
- "based on pose of /examples/d.jpg and content of /examples/e.jpg, please show me a new image",
- ],
- inputs=txt
- )
-
-demo.launch()
\ No newline at end of file
diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/modeling/backbone/resnet.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/modeling/backbone/resnet.py
deleted file mode 100644
index 34d6edf2e2ec3515ed1a395658ded85c280000b0..0000000000000000000000000000000000000000
--- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/modeling/backbone/resnet.py
+++ /dev/null
@@ -1,694 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import numpy as np
-import fvcore.nn.weight_init as weight_init
-import torch
-import torch.nn.functional as F
-from torch import nn
-
-from annotator.oneformer.detectron2.layers import (
- CNNBlockBase,
- Conv2d,
- DeformConv,
- ModulatedDeformConv,
- ShapeSpec,
- get_norm,
-)
-
-from .backbone import Backbone
-from .build import BACKBONE_REGISTRY
-
-__all__ = [
- "ResNetBlockBase",
- "BasicBlock",
- "BottleneckBlock",
- "DeformBottleneckBlock",
- "BasicStem",
- "ResNet",
- "make_stage",
- "build_resnet_backbone",
-]
-
-
-class BasicBlock(CNNBlockBase):
- """
- The basic residual block for ResNet-18 and ResNet-34 defined in :paper:`ResNet`,
- with two 3x3 conv layers and a projection shortcut if needed.
- """
-
- def __init__(self, in_channels, out_channels, *, stride=1, norm="BN"):
- """
- Args:
- in_channels (int): Number of input channels.
- out_channels (int): Number of output channels.
- stride (int): Stride for the first conv.
- norm (str or callable): normalization for all conv layers.
- See :func:`layers.get_norm` for supported format.
- """
- super().__init__(in_channels, out_channels, stride)
-
- if in_channels != out_channels:
- self.shortcut = Conv2d(
- in_channels,
- out_channels,
- kernel_size=1,
- stride=stride,
- bias=False,
- norm=get_norm(norm, out_channels),
- )
- else:
- self.shortcut = None
-
- self.conv1 = Conv2d(
- in_channels,
- out_channels,
- kernel_size=3,
- stride=stride,
- padding=1,
- bias=False,
- norm=get_norm(norm, out_channels),
- )
-
- self.conv2 = Conv2d(
- out_channels,
- out_channels,
- kernel_size=3,
- stride=1,
- padding=1,
- bias=False,
- norm=get_norm(norm, out_channels),
- )
-
- for layer in [self.conv1, self.conv2, self.shortcut]:
- if layer is not None: # shortcut can be None
- weight_init.c2_msra_fill(layer)
-
- def forward(self, x):
- out = self.conv1(x)
- out = F.relu_(out)
- out = self.conv2(out)
-
- if self.shortcut is not None:
- shortcut = self.shortcut(x)
- else:
- shortcut = x
-
- out += shortcut
- out = F.relu_(out)
- return out
-
-
-class BottleneckBlock(CNNBlockBase):
- """
- The standard bottleneck residual block used by ResNet-50, 101 and 152
- defined in :paper:`ResNet`. It contains 3 conv layers with kernels
- 1x1, 3x3, 1x1, and a projection shortcut if needed.
- """
-
- def __init__(
- self,
- in_channels,
- out_channels,
- *,
- bottleneck_channels,
- stride=1,
- num_groups=1,
- norm="BN",
- stride_in_1x1=False,
- dilation=1,
- ):
- """
- Args:
- bottleneck_channels (int): number of output channels for the 3x3
- "bottleneck" conv layers.
- num_groups (int): number of groups for the 3x3 conv layer.
- norm (str or callable): normalization for all conv layers.
- See :func:`layers.get_norm` for supported format.
- stride_in_1x1 (bool): when stride>1, whether to put stride in the
- first 1x1 convolution or the bottleneck 3x3 convolution.
- dilation (int): the dilation rate of the 3x3 conv layer.
- """
- super().__init__(in_channels, out_channels, stride)
-
- if in_channels != out_channels:
- self.shortcut = Conv2d(
- in_channels,
- out_channels,
- kernel_size=1,
- stride=stride,
- bias=False,
- norm=get_norm(norm, out_channels),
- )
- else:
- self.shortcut = None
-
- # The original MSRA ResNet models have stride in the first 1x1 conv
- # The subsequent fb.torch.resnet and Caffe2 ResNe[X]t implementations have
- # stride in the 3x3 conv
- stride_1x1, stride_3x3 = (stride, 1) if stride_in_1x1 else (1, stride)
-
- self.conv1 = Conv2d(
- in_channels,
- bottleneck_channels,
- kernel_size=1,
- stride=stride_1x1,
- bias=False,
- norm=get_norm(norm, bottleneck_channels),
- )
-
- self.conv2 = Conv2d(
- bottleneck_channels,
- bottleneck_channels,
- kernel_size=3,
- stride=stride_3x3,
- padding=1 * dilation,
- bias=False,
- groups=num_groups,
- dilation=dilation,
- norm=get_norm(norm, bottleneck_channels),
- )
-
- self.conv3 = Conv2d(
- bottleneck_channels,
- out_channels,
- kernel_size=1,
- bias=False,
- norm=get_norm(norm, out_channels),
- )
-
- for layer in [self.conv1, self.conv2, self.conv3, self.shortcut]:
- if layer is not None: # shortcut can be None
- weight_init.c2_msra_fill(layer)
-
- # Zero-initialize the last normalization in each residual branch,
- # so that at the beginning, the residual branch starts with zeros,
- # and each residual block behaves like an identity.
- # See Sec 5.1 in "Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour":
- # "For BN layers, the learnable scaling coefficient γ is initialized
- # to be 1, except for each residual block's last BN
- # where γ is initialized to be 0."
-
- # nn.init.constant_(self.conv3.norm.weight, 0)
- # TODO this somehow hurts performance when training GN models from scratch.
- # Add it as an option when we need to use this code to train a backbone.
-
- def forward(self, x):
- out = self.conv1(x)
- out = F.relu_(out)
-
- out = self.conv2(out)
- out = F.relu_(out)
-
- out = self.conv3(out)
-
- if self.shortcut is not None:
- shortcut = self.shortcut(x)
- else:
- shortcut = x
-
- out += shortcut
- out = F.relu_(out)
- return out
-
-
-class DeformBottleneckBlock(CNNBlockBase):
- """
-    Similar to :class:`BottleneckBlock`, but with :paper:`deformable conv <deformconv>`
- in the 3x3 convolution.
- """
-
- def __init__(
- self,
- in_channels,
- out_channels,
- *,
- bottleneck_channels,
- stride=1,
- num_groups=1,
- norm="BN",
- stride_in_1x1=False,
- dilation=1,
- deform_modulated=False,
- deform_num_groups=1,
- ):
- super().__init__(in_channels, out_channels, stride)
- self.deform_modulated = deform_modulated
-
- if in_channels != out_channels:
- self.shortcut = Conv2d(
- in_channels,
- out_channels,
- kernel_size=1,
- stride=stride,
- bias=False,
- norm=get_norm(norm, out_channels),
- )
- else:
- self.shortcut = None
-
- stride_1x1, stride_3x3 = (stride, 1) if stride_in_1x1 else (1, stride)
-
- self.conv1 = Conv2d(
- in_channels,
- bottleneck_channels,
- kernel_size=1,
- stride=stride_1x1,
- bias=False,
- norm=get_norm(norm, bottleneck_channels),
- )
-
- if deform_modulated:
- deform_conv_op = ModulatedDeformConv
- # offset channels are 2 or 3 (if with modulated) * kernel_size * kernel_size
- offset_channels = 27
- else:
- deform_conv_op = DeformConv
- offset_channels = 18
-
- self.conv2_offset = Conv2d(
- bottleneck_channels,
- offset_channels * deform_num_groups,
- kernel_size=3,
- stride=stride_3x3,
- padding=1 * dilation,
- dilation=dilation,
- )
- self.conv2 = deform_conv_op(
- bottleneck_channels,
- bottleneck_channels,
- kernel_size=3,
- stride=stride_3x3,
- padding=1 * dilation,
- bias=False,
- groups=num_groups,
- dilation=dilation,
- deformable_groups=deform_num_groups,
- norm=get_norm(norm, bottleneck_channels),
- )
-
- self.conv3 = Conv2d(
- bottleneck_channels,
- out_channels,
- kernel_size=1,
- bias=False,
- norm=get_norm(norm, out_channels),
- )
-
- for layer in [self.conv1, self.conv2, self.conv3, self.shortcut]:
- if layer is not None: # shortcut can be None
- weight_init.c2_msra_fill(layer)
-
- nn.init.constant_(self.conv2_offset.weight, 0)
- nn.init.constant_(self.conv2_offset.bias, 0)
-
- def forward(self, x):
- out = self.conv1(x)
- out = F.relu_(out)
-
- if self.deform_modulated:
- offset_mask = self.conv2_offset(out)
- offset_x, offset_y, mask = torch.chunk(offset_mask, 3, dim=1)
- offset = torch.cat((offset_x, offset_y), dim=1)
- mask = mask.sigmoid()
- out = self.conv2(out, offset, mask)
- else:
- offset = self.conv2_offset(out)
- out = self.conv2(out, offset)
- out = F.relu_(out)
-
- out = self.conv3(out)
-
- if self.shortcut is not None:
- shortcut = self.shortcut(x)
- else:
- shortcut = x
-
- out += shortcut
- out = F.relu_(out)
- return out
-
-
-class BasicStem(CNNBlockBase):
- """
- The standard ResNet stem (layers before the first residual block),
- with a conv, relu and max_pool.
- """
-
- def __init__(self, in_channels=3, out_channels=64, norm="BN"):
- """
- Args:
- norm (str or callable): norm after the first conv layer.
- See :func:`layers.get_norm` for supported format.
- """
- super().__init__(in_channels, out_channels, 4)
- self.in_channels = in_channels
- self.conv1 = Conv2d(
- in_channels,
- out_channels,
- kernel_size=7,
- stride=2,
- padding=3,
- bias=False,
- norm=get_norm(norm, out_channels),
- )
- weight_init.c2_msra_fill(self.conv1)
-
- def forward(self, x):
- x = self.conv1(x)
- x = F.relu_(x)
- x = F.max_pool2d(x, kernel_size=3, stride=2, padding=1)
- return x
-
-
-class ResNet(Backbone):
- """
- Implement :paper:`ResNet`.
- """
-
- def __init__(self, stem, stages, num_classes=None, out_features=None, freeze_at=0):
- """
- Args:
- stem (nn.Module): a stem module
- stages (list[list[CNNBlockBase]]): several (typically 4) stages,
- each contains multiple :class:`CNNBlockBase`.
- num_classes (None or int): if None, will not perform classification.
- Otherwise, will create a linear layer.
- out_features (list[str]): name of the layers whose outputs should
- be returned in forward. Can be anything in "stem", "linear", or "res2" ...
- If None, will return the output of the last layer.
- freeze_at (int): The number of stages at the beginning to freeze.
- see :meth:`freeze` for detailed explanation.
- """
- super().__init__()
- self.stem = stem
- self.num_classes = num_classes
-
- current_stride = self.stem.stride
- self._out_feature_strides = {"stem": current_stride}
- self._out_feature_channels = {"stem": self.stem.out_channels}
-
- self.stage_names, self.stages = [], []
-
- if out_features is not None:
- # Avoid keeping unused layers in this module. They consume extra memory
- # and may cause allreduce to fail
- num_stages = max(
- [{"res2": 1, "res3": 2, "res4": 3, "res5": 4}.get(f, 0) for f in out_features]
- )
- stages = stages[:num_stages]
- for i, blocks in enumerate(stages):
- assert len(blocks) > 0, len(blocks)
- for block in blocks:
- assert isinstance(block, CNNBlockBase), block
-
- name = "res" + str(i + 2)
- stage = nn.Sequential(*blocks)
-
- self.add_module(name, stage)
- self.stage_names.append(name)
- self.stages.append(stage)
-
- self._out_feature_strides[name] = current_stride = int(
- current_stride * np.prod([k.stride for k in blocks])
- )
- self._out_feature_channels[name] = curr_channels = blocks[-1].out_channels
- self.stage_names = tuple(self.stage_names) # Make it static for scripting
-
- if num_classes is not None:
- self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
- self.linear = nn.Linear(curr_channels, num_classes)
-
- # Sec 5.1 in "Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour":
- # "The 1000-way fully-connected layer is initialized by
- # drawing weights from a zero-mean Gaussian with standard deviation of 0.01."
- nn.init.normal_(self.linear.weight, std=0.01)
- name = "linear"
-
- if out_features is None:
- out_features = [name]
- self._out_features = out_features
- assert len(self._out_features)
- children = [x[0] for x in self.named_children()]
- for out_feature in self._out_features:
- assert out_feature in children, "Available children: {}".format(", ".join(children))
- self.freeze(freeze_at)
-
- def forward(self, x):
- """
- Args:
- x: Tensor of shape (N,C,H,W). H, W must be a multiple of ``self.size_divisibility``.
-
- Returns:
- dict[str->Tensor]: names and the corresponding features
- """
- assert x.dim() == 4, f"ResNet takes an input of shape (N, C, H, W). Got {x.shape} instead!"
- outputs = {}
- x = self.stem(x)
- if "stem" in self._out_features:
- outputs["stem"] = x
- for name, stage in zip(self.stage_names, self.stages):
- x = stage(x)
- if name in self._out_features:
- outputs[name] = x
- if self.num_classes is not None:
- x = self.avgpool(x)
- x = torch.flatten(x, 1)
- x = self.linear(x)
- if "linear" in self._out_features:
- outputs["linear"] = x
- return outputs
-
- def output_shape(self):
- return {
- name: ShapeSpec(
- channels=self._out_feature_channels[name], stride=self._out_feature_strides[name]
- )
- for name in self._out_features
- }
-
- def freeze(self, freeze_at=0):
- """
- Freeze the first several stages of the ResNet. Commonly used in
- fine-tuning.
-
- Layers that produce the same feature map spatial size are defined as one
- "stage" by :paper:`FPN`.
-
- Args:
- freeze_at (int): number of stages to freeze.
- `1` means freezing the stem. `2` means freezing the stem and
- one residual stage, etc.
-
- Returns:
- nn.Module: this ResNet itself
- """
- if freeze_at >= 1:
- self.stem.freeze()
- for idx, stage in enumerate(self.stages, start=2):
- if freeze_at >= idx:
- for block in stage.children():
- block.freeze()
- return self
-
- @staticmethod
- def make_stage(block_class, num_blocks, *, in_channels, out_channels, **kwargs):
- """
- Create a list of blocks of the same type that forms one ResNet stage.
-
- Args:
- block_class (type): a subclass of CNNBlockBase that's used to create all blocks in this
- stage. A module of this type must not change spatial resolution of inputs unless its
- stride != 1.
- num_blocks (int): number of blocks in this stage
- in_channels (int): input channels of the entire stage.
- out_channels (int): output channels of **every block** in the stage.
- kwargs: other arguments passed to the constructor of
- `block_class`. If the argument name is "xx_per_block", the
- argument is a list of values to be passed to each block in the
- stage. Otherwise, the same argument is passed to every block
- in the stage.
-
- Returns:
- list[CNNBlockBase]: a list of block module.
-
- Examples:
- ::
- stage = ResNet.make_stage(
- BottleneckBlock, 3, in_channels=16, out_channels=64,
- bottleneck_channels=16, num_groups=1,
- stride_per_block=[2, 1, 1],
- dilations_per_block=[1, 1, 2]
- )
-
- Usually, layers that produce the same feature map spatial size are defined as one
- "stage" (in :paper:`FPN`). Under such definition, ``stride_per_block[1:]`` should
- all be 1.
- """
- blocks = []
- for i in range(num_blocks):
- curr_kwargs = {}
- for k, v in kwargs.items():
- if k.endswith("_per_block"):
- assert len(v) == num_blocks, (
- f"Argument '{k}' of make_stage should have the "
- f"same length as num_blocks={num_blocks}."
- )
- newk = k[: -len("_per_block")]
- assert newk not in kwargs, f"Cannot call make_stage with both {k} and {newk}!"
- curr_kwargs[newk] = v[i]
- else:
- curr_kwargs[k] = v
-
- blocks.append(
- block_class(in_channels=in_channels, out_channels=out_channels, **curr_kwargs)
- )
- in_channels = out_channels
- return blocks
-
- @staticmethod
- def make_default_stages(depth, block_class=None, **kwargs):
- """
- Created list of ResNet stages from pre-defined depth (one of 18, 34, 50, 101, 152).
- If it doesn't create the ResNet variant you need, please use :meth:`make_stage`
- instead for fine-grained customization.
-
- Args:
- depth (int): depth of ResNet
- block_class (type): the CNN block class. Has to accept
- `bottleneck_channels` argument for depth > 50.
- By default it is BasicBlock or BottleneckBlock, based on the
- depth.
- kwargs:
- other arguments to pass to `make_stage`. Should not contain
- stride and channels, as they are predefined for each depth.
-
- Returns:
- list[list[CNNBlockBase]]: modules in all stages; see arguments of
- :class:`ResNet.__init__`.
- """
- num_blocks_per_stage = {
- 18: [2, 2, 2, 2],
- 34: [3, 4, 6, 3],
- 50: [3, 4, 6, 3],
- 101: [3, 4, 23, 3],
- 152: [3, 8, 36, 3],
- }[depth]
- if block_class is None:
- block_class = BasicBlock if depth < 50 else BottleneckBlock
- if depth < 50:
- in_channels = [64, 64, 128, 256]
- out_channels = [64, 128, 256, 512]
- else:
- in_channels = [64, 256, 512, 1024]
- out_channels = [256, 512, 1024, 2048]
- ret = []
- for (n, s, i, o) in zip(num_blocks_per_stage, [1, 2, 2, 2], in_channels, out_channels):
- if depth >= 50:
- kwargs["bottleneck_channels"] = o // 4
- ret.append(
- ResNet.make_stage(
- block_class=block_class,
- num_blocks=n,
- stride_per_block=[s] + [1] * (n - 1),
- in_channels=i,
- out_channels=o,
- **kwargs,
- )
- )
- return ret
-
-
-ResNetBlockBase = CNNBlockBase
-"""
-Alias for backward compatibility.
-"""
-
-
-def make_stage(*args, **kwargs):
- """
-    Deprecated alias for backward compatibility.
- """
- return ResNet.make_stage(*args, **kwargs)
-
-
-@BACKBONE_REGISTRY.register()
-def build_resnet_backbone(cfg, input_shape):
- """
- Create a ResNet instance from config.
-
- Returns:
- ResNet: a :class:`ResNet` instance.
- """
- # need registration of new blocks/stems?
- norm = cfg.MODEL.RESNETS.NORM
- stem = BasicStem(
- in_channels=input_shape.channels,
- out_channels=cfg.MODEL.RESNETS.STEM_OUT_CHANNELS,
- norm=norm,
- )
-
- # fmt: off
- freeze_at = cfg.MODEL.BACKBONE.FREEZE_AT
- out_features = cfg.MODEL.RESNETS.OUT_FEATURES
- depth = cfg.MODEL.RESNETS.DEPTH
- num_groups = cfg.MODEL.RESNETS.NUM_GROUPS
- width_per_group = cfg.MODEL.RESNETS.WIDTH_PER_GROUP
- bottleneck_channels = num_groups * width_per_group
- in_channels = cfg.MODEL.RESNETS.STEM_OUT_CHANNELS
- out_channels = cfg.MODEL.RESNETS.RES2_OUT_CHANNELS
- stride_in_1x1 = cfg.MODEL.RESNETS.STRIDE_IN_1X1
- res5_dilation = cfg.MODEL.RESNETS.RES5_DILATION
- deform_on_per_stage = cfg.MODEL.RESNETS.DEFORM_ON_PER_STAGE
- deform_modulated = cfg.MODEL.RESNETS.DEFORM_MODULATED
- deform_num_groups = cfg.MODEL.RESNETS.DEFORM_NUM_GROUPS
- # fmt: on
- assert res5_dilation in {1, 2}, "res5_dilation cannot be {}.".format(res5_dilation)
-
- num_blocks_per_stage = {
- 18: [2, 2, 2, 2],
- 34: [3, 4, 6, 3],
- 50: [3, 4, 6, 3],
- 101: [3, 4, 23, 3],
- 152: [3, 8, 36, 3],
- }[depth]
-
- if depth in [18, 34]:
- assert out_channels == 64, "Must set MODEL.RESNETS.RES2_OUT_CHANNELS = 64 for R18/R34"
- assert not any(
- deform_on_per_stage
- ), "MODEL.RESNETS.DEFORM_ON_PER_STAGE unsupported for R18/R34"
- assert res5_dilation == 1, "Must set MODEL.RESNETS.RES5_DILATION = 1 for R18/R34"
- assert num_groups == 1, "Must set MODEL.RESNETS.NUM_GROUPS = 1 for R18/R34"
-
- stages = []
-
- for idx, stage_idx in enumerate(range(2, 6)):
- # res5_dilation is used this way as a convention in R-FCN & Deformable Conv paper
- dilation = res5_dilation if stage_idx == 5 else 1
- first_stride = 1 if idx == 0 or (stage_idx == 5 and dilation == 2) else 2
- stage_kargs = {
- "num_blocks": num_blocks_per_stage[idx],
- "stride_per_block": [first_stride] + [1] * (num_blocks_per_stage[idx] - 1),
- "in_channels": in_channels,
- "out_channels": out_channels,
- "norm": norm,
- }
- # Use BasicBlock for R18 and R34.
- if depth in [18, 34]:
- stage_kargs["block_class"] = BasicBlock
- else:
- stage_kargs["bottleneck_channels"] = bottleneck_channels
- stage_kargs["stride_in_1x1"] = stride_in_1x1
- stage_kargs["dilation"] = dilation
- stage_kargs["num_groups"] = num_groups
- if deform_on_per_stage[idx]:
- stage_kargs["block_class"] = DeformBottleneckBlock
- stage_kargs["deform_modulated"] = deform_modulated
- stage_kargs["deform_num_groups"] = deform_num_groups
- else:
- stage_kargs["block_class"] = BottleneckBlock
- blocks = ResNet.make_stage(**stage_kargs)
- in_channels = out_channels
- out_channels *= 2
- bottleneck_channels *= 2
- stages.append(blocks)
- return ResNet(stem, stages, out_features=out_features, freeze_at=freeze_at)
diff --git a/spaces/cymic/Waifu_Diffusion_Webui/modules/lowvram.py b/spaces/cymic/Waifu_Diffusion_Webui/modules/lowvram.py
deleted file mode 100644
index 82c4edf736ba57ad7ad69753697c1d87f50f9f6d..0000000000000000000000000000000000000000
--- a/spaces/cymic/Waifu_Diffusion_Webui/modules/lowvram.py
+++ /dev/null
@@ -1,82 +0,0 @@
-import torch
-from modules.devices import get_optimal_device
-
-module_in_gpu = None
-cpu = torch.device("cpu")
-device = gpu = get_optimal_device()
-
-
-def send_everything_to_cpu():
- global module_in_gpu
-
- if module_in_gpu is not None:
- module_in_gpu.to(cpu)
-
- module_in_gpu = None
-
-
-def setup_for_low_vram(sd_model, use_medvram):
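-    """Keep at most one large sub-module on the GPU at a time: forward pre-hooks move a module to the GPU right before it runs and push the previously active module back to the CPU."""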
- parents = {}
-
- def send_me_to_gpu(module, _):
- """send this module to GPU; send whatever tracked module was previous in GPU to CPU;
- we add this as forward_pre_hook to a lot of modules and this way all but one of them will
- be in CPU
- """
- global module_in_gpu
-
- module = parents.get(module, module)
-
- if module_in_gpu == module:
- return
-
- if module_in_gpu is not None:
- module_in_gpu.to(cpu)
-
- module.to(gpu)
- module_in_gpu = module
-
- # see below for register_forward_pre_hook;
- # first_stage_model does not use forward(), it uses encode/decode, so register_forward_pre_hook is
- # useless here, and we just replace those methods
- def first_stage_model_encode_wrap(self, encoder, x):
- send_me_to_gpu(self, None)
- return encoder(x)
-
- def first_stage_model_decode_wrap(self, decoder, z):
- send_me_to_gpu(self, None)
- return decoder(z)
-
- # remove three big modules, cond, first_stage, and unet from the model and then
- # send the model to GPU. Then put modules back. the modules will be in CPU.
- stored = sd_model.cond_stage_model.transformer, sd_model.first_stage_model, sd_model.model
- sd_model.cond_stage_model.transformer, sd_model.first_stage_model, sd_model.model = None, None, None
- sd_model.to(device)
- sd_model.cond_stage_model.transformer, sd_model.first_stage_model, sd_model.model = stored
-
- # register hooks for those the first two models
- sd_model.cond_stage_model.transformer.register_forward_pre_hook(send_me_to_gpu)
- sd_model.first_stage_model.register_forward_pre_hook(send_me_to_gpu)
- sd_model.first_stage_model.encode = lambda x, en=sd_model.first_stage_model.encode: first_stage_model_encode_wrap(sd_model.first_stage_model, en, x)
- sd_model.first_stage_model.decode = lambda z, de=sd_model.first_stage_model.decode: first_stage_model_decode_wrap(sd_model.first_stage_model, de, z)
- parents[sd_model.cond_stage_model.transformer] = sd_model.cond_stage_model
-
- if use_medvram:
- sd_model.model.register_forward_pre_hook(send_me_to_gpu)
- else:
- diff_model = sd_model.model.diffusion_model
-
- # the third remaining model is still too big for 4 GB, so we also do the same for its submodules
- # so that only one of them is in GPU at a time
- stored = diff_model.input_blocks, diff_model.middle_block, diff_model.output_blocks, diff_model.time_embed
- diff_model.input_blocks, diff_model.middle_block, diff_model.output_blocks, diff_model.time_embed = None, None, None, None
- sd_model.model.to(device)
- diff_model.input_blocks, diff_model.middle_block, diff_model.output_blocks, diff_model.time_embed = stored
-
- # install hooks for bits of third model
- diff_model.time_embed.register_forward_pre_hook(send_me_to_gpu)
- for block in diff_model.input_blocks:
- block.register_forward_pre_hook(send_me_to_gpu)
- diff_model.middle_block.register_forward_pre_hook(send_me_to_gpu)
- for block in diff_model.output_blocks:
- block.register_forward_pre_hook(send_me_to_gpu)
diff --git a/spaces/dandan4272/hand_gesture_rec/Mydataset.py b/spaces/dandan4272/hand_gesture_rec/Mydataset.py
deleted file mode 100644
index 06748c52099aaadb73f658324c22b10c49237a32..0000000000000000000000000000000000000000
--- a/spaces/dandan4272/hand_gesture_rec/Mydataset.py
+++ /dev/null
@@ -1,335 +0,0 @@
-import os
-import random
-from torch.utils.data import Dataset, DataLoader
-import torch
-import numpy as np
-from random import randint,shuffle
-
-from model.stgcn import normalize_points_with_size, scale_pose
-
-
-class Hand_Dataset(Dataset):
- """Face Landmarks dataset."""
-
- def __init__(self, data, time_len, use_data_aug):
- """
- Args:
- data: a list of video and it's label
- time_len: length of input video
- use_data_aug: flag for using data augmentation
- """
- self.use_data_aug = use_data_aug
- self.data = data
-
- self.time_len = time_len
- self.compoent_num = 22
-
-
-
- def __len__(self):
- return len(self.data)
-
- def __getitem__(self, ind):
- #print("ind:",ind)
- data_ele = self.data[ind]
-
-
- #hand skeleton
- skeleton = data_ele["skeleton"]
- skeleton = np.array(skeleton)
-
- if self.use_data_aug:
- skeleton = self.data_aug(skeleton)
-
- # sample time_len frames from whole video
- data_num = skeleton.shape[0]
- idx_list = self.sample_frame(data_num)
- skeleton = [skeleton[idx] for idx in idx_list]
- skeleton = np.array(skeleton)
-
- #normalize by palm center
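-        # joint 1 of the first frame is the palm center, so subtracting it makes every sequence translation-invariant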
- skeleton -= skeleton[0][1]
-
-
-
- skeleton = torch.from_numpy(skeleton).float()
- #print(skeleton.shape)
- # label
- label = data_ele["label"] - 1 #
-
- sample = {'skeleton': skeleton, "label" : label}
-
- return sample
-
- def data_aug(self, skeleton):
-
- def scale(skeleton):
- ratio = 0.2
- low = 1 - ratio
- high = 1 + ratio
- factor = np.random.uniform(low, high)
- video_len = skeleton.shape[0]
- for t in range(video_len):
- for j_id in range(self.compoent_num):
- skeleton[t][j_id] *= factor
- skeleton = np.array(skeleton)
- return skeleton
-
- def shift(skeleton):
- low = -0.1
- high = -low
- offset = np.random.uniform(low, high, 3)
- video_len = skeleton.shape[0]
- for t in range(video_len):
- for j_id in range(self.compoent_num):
- skeleton[t][j_id] += offset
- skeleton = np.array(skeleton)
- return skeleton
-
- def noise(skeleton):
- low = -0.1
- high = -low
- #select 4 joints
- all_joint = list(range(self.compoent_num))
- shuffle(all_joint)
- selected_joint = all_joint[0:4]
-
- for j_id in selected_joint:
- noise_offset = np.random.uniform(low, high, 3)
- for t in range(self.time_len):
- skeleton[t][j_id] += noise_offset
- skeleton = np.array(skeleton)
- return skeleton
-
- def time_interpolate(skeleton):
- skeleton = np.array(skeleton)
- video_len = skeleton.shape[0]
-
- r = np.random.uniform(0, 1)
-
- result = []
-
- for i in range(1, video_len):
- displace = skeleton[i] - skeleton[i - 1]#d_t = s_t+1 - s_t
- displace *= r
- result.append(skeleton[i -1] + displace)# r*disp
-
- while len(result) < self.time_len:
- result.append(result[-1]) #padding
- result = np.array(result)
- return result
-
-
-
-
- # og_id = np.random.randint(3)
- aug_num = 4
- ag_id = randint(0, aug_num - 1)
- if ag_id == 0:
- skeleton = scale(skeleton)
- elif ag_id == 1:
- skeleton = shift(skeleton)
- elif ag_id == 2:
- skeleton = noise(skeleton)
- elif ag_id == 3:
- skeleton = time_interpolate(skeleton)
-
- return skeleton
-
-
-
-
- def sample_frame(self, data_num):
- #sample #time_len frames from whole video
- sample_size = self.time_len
- each_num = (data_num - 1) / (sample_size - 1)
- idx_list = [0, data_num - 1]
- for i in range(sample_size):
- index = round(each_num * i)
- if index not in idx_list and index < data_num:
- idx_list.append(index)
- idx_list.sort()
-
- while len(idx_list) < sample_size:
- idx = random.randint(0, data_num - 1)
- if idx not in idx_list:
- idx_list.append(idx)
- idx_list.sort()
-
- return idx_list
-
-
-
-
-class MyDataset2(Dataset):
- def __init__(self, Data_path):
- # no_sequences = 30
- # Videos are going to be 30 frames in length
- # sequence_length = [2,5,8]
- # actions = np.array(['666', 'thumbs_up', 'finger_heart', 'scissor_hand', 'no_gesture', 'shake_hand'])
- actions = np.array(['shake_hand', 'palm', 'fist', 'clock_wise', 'anti_clockwise', 'ok', 'thumb', 'v', 'heart','no_gesture'])
- self.compoent_num = 21
- self.time_len = 15
-
- label_map = {label: num for num, label in enumerate(actions)}
- sequences, labels = [], []
- # self.labels = torch.zeros((120,6))
- for action in actions:
- # for sequence in range(no_sequences):
-            fileNames = os.listdir(os.path.join(Data_path, action))  # get the file names under this path, returned as a list
- window = []
-
- for frame_num in fileNames:
- # res = np.load(os.path.join(Data_path, action,str(frame_num)))
- # x = res[0:15]
- # pts,mot = self.prepocess(x,(640,480))
- # window.append(pts)
- # window2.append(mot)
- # label = label_map[action]
- # sequences.append((pts, mot, label))
- #
- # pts,mot = self.prepocess(res[15:30],(640,480))
- # window.append(pts)
- # window2.append(mot)
- # label = label_map[action]
- # sequences.append((pts, mot, label))
- res = np.load(os.path.join(Data_path, action, str(frame_num)))
- x = res[0:15]
- pts = self.data_aug(self.prepocess(x,(640,480)))
- # pts = x
-
- window.append(pts)
- label = label_map[action]
- sequences.append((pts, label))
-
- # pts = res[15:30]
- pts = self.data_aug(self.prepocess(res[15:30],(640,480)))
- window.append(pts)
- label = label_map[action]
- sequences.append((pts, label))
- # sequences = sequences[-1]
- self.se = sequences
- def __len__(self):
- return len(self.se)
-
- def prepocess(self, pts, image_size):
-        """Normalize single-person skeleton points to the image size and scale the pose.
-        Args:
-            pts: (numpy array) points and score in shape `(t, v, c)` where
-                t : input sequence length (time steps),
-                v : number of graph nodes (body parts),
-                c : channels (x, y, score).
-            image_size: (tuple of int) width, height of the image frame.
-        Returns:
-            (numpy array) The points with normalized and scaled (x, y) channels, same shape as `pts`.
- """
- pts[:, :, :2] = normalize_points_with_size(pts[:, :, :2], image_size[0], image_size[1])
- pts[:, :, :2] = scale_pose(pts[:, :, :2])
- # pts = np.concatenate((pts, np.expand_dims((pts[:, 1, :] + pts[:, 2, :]) / 2, 1)), axis=1)
- #
- # pts = torch.tensor(pts, dtype=torch.float32)
- # pts = pts.permute(2, 0, 1)[None, :]
- #
- # mot = pts[:, :2, 1:, :] - pts[:, :2, :-1, :]
- # mot = mot.to(self.device)
- # pts = pts.to(self.device)
-
- # out = self.model((pts, mot))
-
- # return out.detach().cpu().numpy()
- return pts
-
- def data_aug(self, skeleton):
-
- def scale(skeleton):
- ratio = 0.2
- low = 1 - ratio
- high = 1 + ratio
- factor = np.random.uniform(low, high)
- video_len = skeleton.shape[0]
- for t in range(video_len):
- for j_id in range(self.compoent_num):
- skeleton[t][j_id] *= factor
- skeleton = np.array(skeleton)
- return skeleton
-
- def shift(skeleton):
- low = -0.1
- high = -low
- offset = np.random.uniform(low, high, 3)
- video_len = skeleton.shape[0]
- for t in range(video_len):
- for j_id in range(self.compoent_num):
- skeleton[t][j_id] += offset
- skeleton = np.array(skeleton)
- return skeleton
-
- def noise(skeleton):
- low = -0.1
- high = -low
- #select 4 joints
- all_joint = list(range(self.compoent_num))
- shuffle(all_joint)
- selected_joint = all_joint[0:4]
-
- for j_id in selected_joint:
- noise_offset = np.random.uniform(low, high, 3)
- for t in range(self.time_len):
- skeleton[t][j_id] += noise_offset
- skeleton = np.array(skeleton)
- return skeleton
-
- def time_interpolate(skeleton):
- skeleton = np.array(skeleton)
- video_len = skeleton.shape[0]
-
- r = np.random.uniform(0, 1)
-
- result = []
-
- for i in range(1, video_len):
- displace = skeleton[i] - skeleton[i - 1]#d_t = s_t+1 - s_t
- displace *= r
- result.append(skeleton[i -1] + displace)# r*disp
-
- while len(result) < self.time_len:
- result.append(result[-1]) #padding
- result = np.array(result)
- return result
-
-
-
-
- # og_id = np.random.randint(3)
- aug_num = 3
- ag_id = randint(0, aug_num - 1)
- if ag_id == 0:
- skeleton = scale(skeleton)
- elif ag_id == 1:
- skeleton = shift(skeleton)
- elif ag_id == 2:
- skeleton = noise(skeleton)
- elif ag_id == 3:
- skeleton = time_interpolate(skeleton)
-
- return skeleton
-
- def __getitem__(self, index):
- img,label = self.se[index]
- # img = torch.tensor([item.cpu().detach().numpy() for item in img])
- # img2 = torch.tensor([item.cpu().detach().numpy() for item in img2])
- # img = torch.squeeze(img)
- # img2 = torch.squeeze(img2)
- img = torch.from_numpy(img)
- return img, label
-
-
-if __name__ == '__main__':
- DATA_PATH = 'dataset/train/'
-
- dataset = MyDataset2(DATA_PATH)
- # dataloader = DataLoader(dataset, batch_size=8, shuffle=False, num_workers=0, drop_last=False)
-
- for i in range(10):
- img,label = dataset[i]
- print(img,label)
\ No newline at end of file
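A hypothetical usage sketch for the deleted `Hand_Dataset` class above, assuming the file is importable as `Mydataset`: each entry is a dict holding a `(frames, 22, 3)` skeleton array and a 1-based integer label; the dataset samples `time_len` frames, recenters the skeleton on the palm joint, and yields items that batch cleanly through a `DataLoader`. The synthetic data below only mimics that input format.

```python
# Hypothetical usage of the deleted Hand_Dataset class; the random arrays
# below only imitate the expected (frames x 22 joints x 3 coords) input.
import numpy as np
from torch.utils.data import DataLoader
from Mydataset import Hand_Dataset  # assumes the file above is on the path

train_data = [
    {"skeleton": np.random.rand(40, 22, 3).astype(np.float32), "label": 1},
    {"skeleton": np.random.rand(55, 22, 3).astype(np.float32), "label": 2},
]

dataset = Hand_Dataset(train_data, time_len=8, use_data_aug=True)
loader = DataLoader(dataset, batch_size=2, shuffle=True)

for batch in loader:
    # skeleton: (batch, time_len, joints, xyz); labels come out zero-based
    print(batch["skeleton"].shape, batch["label"])
```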
diff --git a/spaces/datastx/EmailGenerator/app.py b/spaces/datastx/EmailGenerator/app.py
deleted file mode 100644
index 3dc6845483c09fa981b14cbe90770d8965f4f820..0000000000000000000000000000000000000000
--- a/spaces/datastx/EmailGenerator/app.py
+++ /dev/null
@@ -1,90 +0,0 @@
-import requests
-import streamlit as st
-from langchain.llms import CTransformers
-from langchain.prompts import PromptTemplate
-import os
-
-def download_model() -> None:
- """
- Downloads the model from the provided URL and saves it to the current directory.
- """
- url = 'https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGML/resolve/main/llama-2-7b-chat.ggmlv3.q8_0.bin'
- file_name = url.split('/')[-1]
-
- response = requests.get(url, stream=True)
-
- with open(file_name, 'wb') as file:
- for chunk in response.iter_content(chunk_size=1024):
- if chunk:
- file.write(chunk)
-
- print("File downloaded successfully!")
-
-
-def getLLMResponse(form_input: str, email_sender: str, email_recipient: str, email_style: str) -> str:
- """
- Generates a response using the LLM model.
-
- :param form_input: Email topic provided by the user.
- :param email_sender: Sender name provided by the user.
- :param email_recipient: Recipient name provided by the user.
- :param email_style: Writing style provided by the user.
- :return: Generated response.
- """
- llm = CTransformers(model='llama-2-7b-chat.ggmlv3.q8_0.bin',
- model_type='llama',
- config={'max_new_tokens': 256,
- 'temperature': 0.01})
-
- template = """
-    Write an email in a {style} style on the topic: {email_topic}.\n\nSender: {sender}\nRecipient: {recipient}
- \n\nEmail Text:
-
- """
-
- prompt = PromptTemplate(
- input_variables=["style", "email_topic", "sender", "recipient"],
- template=template,)
-
- response = llm(prompt.format(email_topic=form_input, sender=email_sender, recipient=email_recipient, style=email_style))
- print(response)
-
- return response
-
-
-st.set_page_config(page_title="Generate Emails",
- page_icon='📧',
- layout='centered',
- initial_sidebar_state='collapsed')
-st.header("Generate Emails 📧")
-
-model_loaded = st.session_state.get('model_loaded', False)
-
-if not model_loaded:
- if st.button('Load Model'):
- model_file = 'llama-2-7b-chat.ggmlv3.q8_0.bin'
- if not os.path.isfile(model_file):
- st.info('Loading the model, this could take ~5 minutes')
- download_model()
- st.session_state.model_loaded = True
- st.info('Model loaded successfully')
-
-if st.session_state.get('model_loaded'):
- form_input = st.text_area('Enter the email topic', height=275)
-
- col1, col2, col3 = st.columns([10, 10, 5])
- with col1:
- email_sender = st.text_input('Sender Name')
- with col2:
- email_recipient = st.text_input('Recipient Name')
- with col3:
- email_style = st.selectbox('Writing Style',
- ('Formal', 'Appreciating', 'Not Satisfied', 'Neutral'),
- index=0)
-
- submit = st.button("Generate")
-
- if submit:
- st.write(getLLMResponse(form_input, email_sender, email_recipient, email_style))
-else:
- st.write("Please load the model to proceed.")
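The deleted Streamlit app above reduces to one step worth isolating: filling a `PromptTemplate` with the form inputs before passing the rendered string to the locally downloaded GGML model. The sketch below shows only that templating step, with no model call; the import path assumes an older `langchain` release (newer releases expose `PromptTemplate` from `langchain_core.prompts`).

```python
# Sketch of the prompt-construction step only, with no LLM call;
# the import path assumes an older langchain release.
from langchain.prompts import PromptTemplate

template = (
    "Write an email in a {style} style on the topic: {email_topic}.\n\n"
    "Sender: {sender}\nRecipient: {recipient}\n\nEmail Text:\n"
)

prompt = PromptTemplate(
    input_variables=["style", "email_topic", "sender", "recipient"],
    template=template,
)

print(prompt.format(
    style="Formal",
    email_topic="rescheduling the project kickoff",
    sender="Ada",
    recipient="Grace",
))
```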
diff --git a/spaces/dawdqd/ChuanhuChatGPT/modules/index_func.py b/spaces/dawdqd/ChuanhuChatGPT/modules/index_func.py
deleted file mode 100644
index 2e2ea982ccc7c03a62ff3a31db5244e5048c3b31..0000000000000000000000000000000000000000
--- a/spaces/dawdqd/ChuanhuChatGPT/modules/index_func.py
+++ /dev/null
@@ -1,140 +0,0 @@
-import os
-import logging
-
-import hashlib
-import PyPDF2
-from tqdm import tqdm
-
-from modules.presets import *
-from modules.utils import *
-from modules.config import local_embedding
-
-
-def get_documents(file_src):
- from langchain.schema import Document
- from langchain.text_splitter import TokenTextSplitter
- text_splitter = TokenTextSplitter(chunk_size=500, chunk_overlap=30)
-
- documents = []
- logging.debug("Loading documents...")
- logging.debug(f"file_src: {file_src}")
- for file in file_src:
- filepath = file.name
- filename = os.path.basename(filepath)
- file_type = os.path.splitext(filename)[1]
- logging.info(f"loading file: {filename}")
- try:
- if file_type == ".pdf":
- logging.debug("Loading PDF...")
- try:
- from modules.pdf_func import parse_pdf
- from modules.config import advance_docs
-
- two_column = advance_docs["pdf"].get("two_column", False)
- pdftext = parse_pdf(filepath, two_column).text
- except:
- pdftext = ""
- with open(filepath, "rb") as pdfFileObj:
- pdfReader = PyPDF2.PdfReader(pdfFileObj)
- for page in tqdm(pdfReader.pages):
- pdftext += page.extract_text()
- texts = [Document(page_content=pdftext,
- metadata={"source": filepath})]
- elif file_type == ".docx":
- logging.debug("Loading Word...")
- from langchain.document_loaders import UnstructuredWordDocumentLoader
- loader = UnstructuredWordDocumentLoader(filepath)
- texts = loader.load()
- elif file_type == ".pptx":
- logging.debug("Loading PowerPoint...")
- from langchain.document_loaders import UnstructuredPowerPointLoader
- loader = UnstructuredPowerPointLoader(filepath)
- texts = loader.load()
- elif file_type == ".epub":
- logging.debug("Loading EPUB...")
- from langchain.document_loaders import UnstructuredEPubLoader
- loader = UnstructuredEPubLoader(filepath)
- texts = loader.load()
- elif file_type == ".xlsx":
- logging.debug("Loading Excel...")
- text_list = excel_to_string(filepath)
- texts = []
- for elem in text_list:
- texts.append(Document(page_content=elem,
- metadata={"source": filepath}))
- else:
- logging.debug("Loading text file...")
- from langchain.document_loaders import TextLoader
- loader = TextLoader(filepath, "utf8")
- texts = loader.load()
- except Exception as e:
- import traceback
- logging.error(f"Error loading file: {filename}")
- traceback.print_exc()
-
- texts = text_splitter.split_documents(texts)
- documents.extend(texts)
- logging.debug("Documents loaded.")
- return documents
-
-
-def construct_index(
- api_key,
- file_src,
- max_input_size=4096,
- num_outputs=5,
- max_chunk_overlap=20,
- chunk_size_limit=600,
- embedding_limit=None,
- separator=" ",
-):
- from langchain.chat_models import ChatOpenAI
- from langchain.vectorstores import FAISS
-
- if api_key:
- os.environ["OPENAI_API_KEY"] = api_key
- else:
-        # due to a dependency's ill-conceived design, an API key must be present here
- os.environ["OPENAI_API_KEY"] = "sk-xxxxxxx"
- chunk_size_limit = None if chunk_size_limit == 0 else chunk_size_limit
- embedding_limit = None if embedding_limit == 0 else embedding_limit
- separator = " " if separator == "" else separator
-
- index_name = get_file_hash(file_src)
- index_path = f"./index/{index_name}"
- if local_embedding:
- from langchain.embeddings.huggingface import HuggingFaceEmbeddings
- embeddings = HuggingFaceEmbeddings(
- model_name="sentence-transformers/distiluse-base-multilingual-cased-v2")
- else:
- from langchain.embeddings import OpenAIEmbeddings
- if os.environ.get("OPENAI_API_TYPE", "openai") == "openai":
- embeddings = OpenAIEmbeddings(openai_api_base=os.environ.get(
- "OPENAI_API_BASE", None), openai_api_key=os.environ.get("OPENAI_EMBEDDING_API_KEY", api_key))
- else:
- embeddings = OpenAIEmbeddings(deployment=os.environ["AZURE_EMBEDDING_DEPLOYMENT_NAME"], openai_api_key=os.environ["AZURE_OPENAI_API_KEY"],
- model=os.environ["AZURE_EMBEDDING_MODEL_NAME"], openai_api_base=os.environ["AZURE_OPENAI_API_BASE_URL"], openai_api_type="azure")
- if os.path.exists(index_path):
-        logging.info("Found a cached index file, loading it...")
- index = FAISS.load_local(index_path, embeddings)
- os.environ["OPENAI_API_KEY"] = ""
- return index
- else:
- try:
- documents = get_documents(file_src)
-            logging.info("Building the index...")
- with retrieve_proxy():
- index = FAISS.from_documents(documents, embeddings)
-            logging.debug("Index built successfully!")
- os.makedirs("./index", exist_ok=True)
- index.save_local(index_path)
-            logging.debug("Index saved locally!")
- os.environ["OPENAI_API_KEY"] = ""
- return index
-
- except Exception as e:
- import traceback
-            logging.error("Failed to build the index! %s", e)
- traceback.print_exc()
- os.environ["OPENAI_API_KEY"] = ""
- return None
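The deleted `construct_index` above follows a cache-or-build pattern: derive an index name from a hash of the uploaded files, load a previously saved FAISS index if one exists, otherwise embed the split documents and persist the result. Below is a stripped-down sketch of that pattern using local sentence-transformers embeddings; it assumes `langchain`, `faiss-cpu`, and `sentence-transformers` are installed, and the paths, model name, and demo document are illustrative only.

```python
# Minimal cache-or-build sketch mirroring construct_index above; paths,
# model name, and the demo document are illustrative, not the original app's.
import os
from langchain.schema import Document
from langchain.embeddings.huggingface import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS

documents = [Document(page_content="hello world", metadata={"source": "demo.txt"})]
embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/distiluse-base-multilingual-cased-v2")

index_path = "./index/demo"
if os.path.exists(index_path):
    index = FAISS.load_local(index_path, embeddings)      # reuse the cached index
else:
    index = FAISS.from_documents(documents, embeddings)   # embed and build anew
    os.makedirs("./index", exist_ok=True)
    index.save_local(index_path)

print(index.similarity_search("hello", k=1)[0].page_content)
```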
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/click/core.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/click/core.py
deleted file mode 100644
index cc65e896bf2d754d74b54a84ac501b80127f83ca..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/click/core.py
+++ /dev/null
@@ -1,3042 +0,0 @@
-import enum
-import errno
-import inspect
-import os
-import sys
-import typing as t
-from collections import abc
-from contextlib import contextmanager
-from contextlib import ExitStack
-from functools import update_wrapper
-from gettext import gettext as _
-from gettext import ngettext
-from itertools import repeat
-from types import TracebackType
-
-from . import types
-from .exceptions import Abort
-from .exceptions import BadParameter
-from .exceptions import ClickException
-from .exceptions import Exit
-from .exceptions import MissingParameter
-from .exceptions import UsageError
-from .formatting import HelpFormatter
-from .formatting import join_options
-from .globals import pop_context
-from .globals import push_context
-from .parser import _flag_needs_value
-from .parser import OptionParser
-from .parser import split_opt
-from .termui import confirm
-from .termui import prompt
-from .termui import style
-from .utils import _detect_program_name
-from .utils import _expand_args
-from .utils import echo
-from .utils import make_default_short_help
-from .utils import make_str
-from .utils import PacifyFlushWrapper
-
-if t.TYPE_CHECKING:
- import typing_extensions as te
- from .shell_completion import CompletionItem
-
-F = t.TypeVar("F", bound=t.Callable[..., t.Any])
-V = t.TypeVar("V")
-
-
-def _complete_visible_commands(
- ctx: "Context", incomplete: str
-) -> t.Iterator[t.Tuple[str, "Command"]]:
- """List all the subcommands of a group that start with the
- incomplete value and aren't hidden.
-
- :param ctx: Invocation context for the group.
- :param incomplete: Value being completed. May be empty.
- """
- multi = t.cast(MultiCommand, ctx.command)
-
- for name in multi.list_commands(ctx):
- if name.startswith(incomplete):
- command = multi.get_command(ctx, name)
-
- if command is not None and not command.hidden:
- yield name, command
-
-
-def _check_multicommand(
- base_command: "MultiCommand", cmd_name: str, cmd: "Command", register: bool = False
-) -> None:
- if not base_command.chain or not isinstance(cmd, MultiCommand):
- return
- if register:
- hint = (
- "It is not possible to add multi commands as children to"
- " another multi command that is in chain mode."
- )
- else:
- hint = (
- "Found a multi command as subcommand to a multi command"
- " that is in chain mode. This is not supported."
- )
- raise RuntimeError(
- f"{hint}. Command {base_command.name!r} is set to chain and"
- f" {cmd_name!r} was added as a subcommand but it in itself is a"
- f" multi command. ({cmd_name!r} is a {type(cmd).__name__}"
- f" within a chained {type(base_command).__name__} named"
- f" {base_command.name!r})."
- )
-
-
-def batch(iterable: t.Iterable[V], batch_size: int) -> t.List[t.Tuple[V, ...]]:
- return list(zip(*repeat(iter(iterable), batch_size)))
-
-
-@contextmanager
-def augment_usage_errors(
- ctx: "Context", param: t.Optional["Parameter"] = None
-) -> t.Iterator[None]:
- """Context manager that attaches extra information to exceptions."""
- try:
- yield
- except BadParameter as e:
- if e.ctx is None:
- e.ctx = ctx
- if param is not None and e.param is None:
- e.param = param
- raise
- except UsageError as e:
- if e.ctx is None:
- e.ctx = ctx
- raise
-
-
-def iter_params_for_processing(
- invocation_order: t.Sequence["Parameter"],
- declaration_order: t.Sequence["Parameter"],
-) -> t.List["Parameter"]:
- """Given a sequence of parameters in the order as should be considered
- for processing and an iterable of parameters that exist, this returns
- a list in the correct order as they should be processed.
- """
-
- def sort_key(item: "Parameter") -> t.Tuple[bool, float]:
- try:
- idx: float = invocation_order.index(item)
- except ValueError:
- idx = float("inf")
-
- return not item.is_eager, idx
-
- return sorted(declaration_order, key=sort_key)
-
-
-class ParameterSource(enum.Enum):
- """This is an :class:`~enum.Enum` that indicates the source of a
- parameter's value.
-
- Use :meth:`click.Context.get_parameter_source` to get the
- source for a parameter by name.
-
- .. versionchanged:: 8.0
- Use :class:`~enum.Enum` and drop the ``validate`` method.
-
- .. versionchanged:: 8.0
- Added the ``PROMPT`` value.
- """
-
- COMMANDLINE = enum.auto()
- """The value was provided by the command line args."""
- ENVIRONMENT = enum.auto()
- """The value was provided with an environment variable."""
- DEFAULT = enum.auto()
- """Used the default specified by the parameter."""
- DEFAULT_MAP = enum.auto()
- """Used a default provided by :attr:`Context.default_map`."""
- PROMPT = enum.auto()
- """Used a prompt to confirm a default or provide a value."""
-
-
-class Context:
- """The context is a special internal object that holds state relevant
- for the script execution at every single level. It's normally invisible
- to commands unless they opt-in to getting access to it.
-
- The context is useful as it can pass internal objects around and can
- control special execution features such as reading data from
- environment variables.
-
- A context can be used as context manager in which case it will call
- :meth:`close` on teardown.
-
- :param command: the command class for this context.
- :param parent: the parent context.
- :param info_name: the info name for this invocation. Generally this
- is the most descriptive name for the script or
- command. For the toplevel script it is usually
- the name of the script, for commands below it it's
-                      the name of the command.
- :param obj: an arbitrary object of user data.
- :param auto_envvar_prefix: the prefix to use for automatic environment
- variables. If this is `None` then reading
- from environment variables is disabled. This
- does not affect manually set environment
- variables which are always read.
- :param default_map: a dictionary (like object) with default values
- for parameters.
- :param terminal_width: the width of the terminal. The default is
- inherit from parent context. If no context
- defines the terminal width then auto
- detection will be applied.
- :param max_content_width: the maximum width for content rendered by
- Click (this currently only affects help
- pages). This defaults to 80 characters if
- not overridden. In other words: even if the
- terminal is larger than that, Click will not
- format things wider than 80 characters by
- default. In addition to that, formatters might
- add some safety mapping on the right.
- :param resilient_parsing: if this flag is enabled then Click will
- parse without any interactivity or callback
- invocation. Default values will also be
- ignored. This is useful for implementing
- things such as completion support.
- :param allow_extra_args: if this is set to `True` then extra arguments
- at the end will not raise an error and will be
- kept on the context. The default is to inherit
- from the command.
- :param allow_interspersed_args: if this is set to `False` then options
- and arguments cannot be mixed. The
- default is to inherit from the command.
- :param ignore_unknown_options: instructs click to ignore options it does
- not know and keeps them for later
- processing.
- :param help_option_names: optionally a list of strings that define how
- the default help parameter is named. The
- default is ``['--help']``.
- :param token_normalize_func: an optional function that is used to
- normalize tokens (options, choices,
- etc.). This for instance can be used to
- implement case insensitive behavior.
- :param color: controls if the terminal supports ANSI colors or not. The
- default is autodetection. This is only needed if ANSI
- codes are used in texts that Click prints which is by
- default not the case. This for instance would affect
- help output.
- :param show_default: Show the default value for commands. If this
- value is not set, it defaults to the value from the parent
- context. ``Command.show_default`` overrides this default for the
- specific command.
-
- .. versionchanged:: 8.1
- The ``show_default`` parameter is overridden by
- ``Command.show_default``, instead of the other way around.
-
- .. versionchanged:: 8.0
- The ``show_default`` parameter defaults to the value from the
- parent context.
-
- .. versionchanged:: 7.1
- Added the ``show_default`` parameter.
-
- .. versionchanged:: 4.0
- Added the ``color``, ``ignore_unknown_options``, and
- ``max_content_width`` parameters.
-
- .. versionchanged:: 3.0
- Added the ``allow_extra_args`` and ``allow_interspersed_args``
- parameters.
-
- .. versionchanged:: 2.0
- Added the ``resilient_parsing``, ``help_option_names``, and
- ``token_normalize_func`` parameters.
- """
-
- #: The formatter class to create with :meth:`make_formatter`.
- #:
- #: .. versionadded:: 8.0
- formatter_class: t.Type["HelpFormatter"] = HelpFormatter
-
- def __init__(
- self,
- command: "Command",
- parent: t.Optional["Context"] = None,
- info_name: t.Optional[str] = None,
- obj: t.Optional[t.Any] = None,
- auto_envvar_prefix: t.Optional[str] = None,
- default_map: t.Optional[t.MutableMapping[str, t.Any]] = None,
- terminal_width: t.Optional[int] = None,
- max_content_width: t.Optional[int] = None,
- resilient_parsing: bool = False,
- allow_extra_args: t.Optional[bool] = None,
- allow_interspersed_args: t.Optional[bool] = None,
- ignore_unknown_options: t.Optional[bool] = None,
- help_option_names: t.Optional[t.List[str]] = None,
- token_normalize_func: t.Optional[t.Callable[[str], str]] = None,
- color: t.Optional[bool] = None,
- show_default: t.Optional[bool] = None,
- ) -> None:
- #: the parent context or `None` if none exists.
- self.parent = parent
- #: the :class:`Command` for this context.
- self.command = command
- #: the descriptive information name
- self.info_name = info_name
- #: Map of parameter names to their parsed values. Parameters
- #: with ``expose_value=False`` are not stored.
- self.params: t.Dict[str, t.Any] = {}
- #: the leftover arguments.
- self.args: t.List[str] = []
- #: protected arguments. These are arguments that are prepended
- #: to `args` when certain parsing scenarios are encountered but
-        #: must never be propagated to other arguments. This is used
- #: to implement nested parsing.
- self.protected_args: t.List[str] = []
- #: the collected prefixes of the command's options.
- self._opt_prefixes: t.Set[str] = set(parent._opt_prefixes) if parent else set()
-
- if obj is None and parent is not None:
- obj = parent.obj
-
- #: the user object stored.
- self.obj: t.Any = obj
- self._meta: t.Dict[str, t.Any] = getattr(parent, "meta", {})
-
- #: A dictionary (-like object) with defaults for parameters.
- if (
- default_map is None
- and info_name is not None
- and parent is not None
- and parent.default_map is not None
- ):
- default_map = parent.default_map.get(info_name)
-
- self.default_map: t.Optional[t.MutableMapping[str, t.Any]] = default_map
-
- #: This flag indicates if a subcommand is going to be executed. A
- #: group callback can use this information to figure out if it's
- #: being executed directly or because the execution flow passes
- #: onwards to a subcommand. By default it's None, but it can be
- #: the name of the subcommand to execute.
- #:
- #: If chaining is enabled this will be set to ``'*'`` in case
- #: any commands are executed. It is however not possible to
- #: figure out which ones. If you require this knowledge you
- #: should use a :func:`result_callback`.
- self.invoked_subcommand: t.Optional[str] = None
-
- if terminal_width is None and parent is not None:
- terminal_width = parent.terminal_width
-
- #: The width of the terminal (None is autodetection).
- self.terminal_width: t.Optional[int] = terminal_width
-
- if max_content_width is None and parent is not None:
- max_content_width = parent.max_content_width
-
- #: The maximum width of formatted content (None implies a sensible
- #: default which is 80 for most things).
- self.max_content_width: t.Optional[int] = max_content_width
-
- if allow_extra_args is None:
- allow_extra_args = command.allow_extra_args
-
- #: Indicates if the context allows extra args or if it should
- #: fail on parsing.
- #:
- #: .. versionadded:: 3.0
- self.allow_extra_args = allow_extra_args
-
- if allow_interspersed_args is None:
- allow_interspersed_args = command.allow_interspersed_args
-
- #: Indicates if the context allows mixing of arguments and
- #: options or not.
- #:
- #: .. versionadded:: 3.0
- self.allow_interspersed_args: bool = allow_interspersed_args
-
- if ignore_unknown_options is None:
- ignore_unknown_options = command.ignore_unknown_options
-
- #: Instructs click to ignore options that a command does not
- #: understand and will store it on the context for later
- #: processing. This is primarily useful for situations where you
- #: want to call into external programs. Generally this pattern is
-        #: strongly discouraged because it's not possible to losslessly
- #: forward all arguments.
- #:
- #: .. versionadded:: 4.0
- self.ignore_unknown_options: bool = ignore_unknown_options
-
- if help_option_names is None:
- if parent is not None:
- help_option_names = parent.help_option_names
- else:
- help_option_names = ["--help"]
-
- #: The names for the help options.
- self.help_option_names: t.List[str] = help_option_names
-
- if token_normalize_func is None and parent is not None:
- token_normalize_func = parent.token_normalize_func
-
- #: An optional normalization function for tokens. This is
- #: options, choices, commands etc.
- self.token_normalize_func: t.Optional[
- t.Callable[[str], str]
- ] = token_normalize_func
-
- #: Indicates if resilient parsing is enabled. In that case Click
- #: will do its best to not cause any failures and default values
- #: will be ignored. Useful for completion.
- self.resilient_parsing: bool = resilient_parsing
-
- # If there is no envvar prefix yet, but the parent has one and
- # the command on this level has a name, we can expand the envvar
- # prefix automatically.
- if auto_envvar_prefix is None:
- if (
- parent is not None
- and parent.auto_envvar_prefix is not None
- and self.info_name is not None
- ):
- auto_envvar_prefix = (
- f"{parent.auto_envvar_prefix}_{self.info_name.upper()}"
- )
- else:
- auto_envvar_prefix = auto_envvar_prefix.upper()
-
- if auto_envvar_prefix is not None:
- auto_envvar_prefix = auto_envvar_prefix.replace("-", "_")
-
- self.auto_envvar_prefix: t.Optional[str] = auto_envvar_prefix
-
- if color is None and parent is not None:
- color = parent.color
-
- #: Controls if styling output is wanted or not.
- self.color: t.Optional[bool] = color
-
- if show_default is None and parent is not None:
- show_default = parent.show_default
-
- #: Show option default values when formatting help text.
- self.show_default: t.Optional[bool] = show_default
-
- self._close_callbacks: t.List[t.Callable[[], t.Any]] = []
- self._depth = 0
- self._parameter_source: t.Dict[str, ParameterSource] = {}
- self._exit_stack = ExitStack()
-
- def to_info_dict(self) -> t.Dict[str, t.Any]:
- """Gather information that could be useful for a tool generating
- user-facing documentation. This traverses the entire CLI
- structure.
-
- .. code-block:: python
-
- with Context(cli) as ctx:
- info = ctx.to_info_dict()
-
- .. versionadded:: 8.0
- """
- return {
- "command": self.command.to_info_dict(self),
- "info_name": self.info_name,
- "allow_extra_args": self.allow_extra_args,
- "allow_interspersed_args": self.allow_interspersed_args,
- "ignore_unknown_options": self.ignore_unknown_options,
- "auto_envvar_prefix": self.auto_envvar_prefix,
- }
-
- def __enter__(self) -> "Context":
- self._depth += 1
- push_context(self)
- return self
-
- def __exit__(
- self,
- exc_type: t.Optional[t.Type[BaseException]],
- exc_value: t.Optional[BaseException],
- tb: t.Optional[TracebackType],
- ) -> None:
- self._depth -= 1
- if self._depth == 0:
- self.close()
- pop_context()
-
- @contextmanager
- def scope(self, cleanup: bool = True) -> t.Iterator["Context"]:
- """This helper method can be used with the context object to promote
- it to the current thread local (see :func:`get_current_context`).
- The default behavior of this is to invoke the cleanup functions which
- can be disabled by setting `cleanup` to `False`. The cleanup
- functions are typically used for things such as closing file handles.
-
- If the cleanup is intended the context object can also be directly
- used as a context manager.
-
- Example usage::
-
- with ctx.scope():
- assert get_current_context() is ctx
-
- This is equivalent::
-
- with ctx:
- assert get_current_context() is ctx
-
- .. versionadded:: 5.0
-
- :param cleanup: controls if the cleanup functions should be run or
- not. The default is to run these functions. In
- some situations the context only wants to be
- temporarily pushed in which case this can be disabled.
- Nested pushes automatically defer the cleanup.
- """
- if not cleanup:
- self._depth += 1
- try:
- with self as rv:
- yield rv
- finally:
- if not cleanup:
- self._depth -= 1
-
- @property
- def meta(self) -> t.Dict[str, t.Any]:
- """This is a dictionary which is shared with all the contexts
- that are nested. It exists so that click utilities can store some
- state here if they need to. It is however the responsibility of
- that code to manage this dictionary well.
-
- The keys are supposed to be unique dotted strings. For instance
- module paths are a good choice for it. What is stored in there is
- irrelevant for the operation of click. However what is important is
- that code that places data here adheres to the general semantics of
- the system.
-
- Example usage::
-
- LANG_KEY = f'{__name__}.lang'
-
- def set_language(value):
- ctx = get_current_context()
- ctx.meta[LANG_KEY] = value
-
- def get_language():
- return get_current_context().meta.get(LANG_KEY, 'en_US')
-
- .. versionadded:: 5.0
- """
- return self._meta
-
- def make_formatter(self) -> HelpFormatter:
- """Creates the :class:`~click.HelpFormatter` for the help and
- usage output.
-
- To quickly customize the formatter class used without overriding
- this method, set the :attr:`formatter_class` attribute.
-
- .. versionchanged:: 8.0
- Added the :attr:`formatter_class` attribute.
- """
- return self.formatter_class(
- width=self.terminal_width, max_width=self.max_content_width
- )
-
- def with_resource(self, context_manager: t.ContextManager[V]) -> V:
- """Register a resource as if it were used in a ``with``
- statement. The resource will be cleaned up when the context is
- popped.
-
- Uses :meth:`contextlib.ExitStack.enter_context`. It calls the
- resource's ``__enter__()`` method and returns the result. When
- the context is popped, it closes the stack, which calls the
- resource's ``__exit__()`` method.
-
- To register a cleanup function for something that isn't a
- context manager, use :meth:`call_on_close`. Or use something
- from :mod:`contextlib` to turn it into a context manager first.
-
- .. code-block:: python
-
- @click.group()
- @click.option("--name")
- @click.pass_context
- def cli(ctx):
- ctx.obj = ctx.with_resource(connect_db(name))
-
- :param context_manager: The context manager to enter.
- :return: Whatever ``context_manager.__enter__()`` returns.
-
- .. versionadded:: 8.0
- """
- return self._exit_stack.enter_context(context_manager)
-
- def call_on_close(self, f: t.Callable[..., t.Any]) -> t.Callable[..., t.Any]:
- """Register a function to be called when the context tears down.
-
- This can be used to close resources opened during the script
- execution. Resources that support Python's context manager
- protocol which would be used in a ``with`` statement should be
- registered with :meth:`with_resource` instead.
-
- :param f: The function to execute on teardown.
- """
- return self._exit_stack.callback(f)
-
- def close(self) -> None:
- """Invoke all close callbacks registered with
- :meth:`call_on_close`, and exit all context managers entered
- with :meth:`with_resource`.
- """
- self._exit_stack.close()
- # In case the context is reused, create a new exit stack.
- self._exit_stack = ExitStack()
-
- @property
- def command_path(self) -> str:
- """The computed command path. This is used for the ``usage``
- information on the help page. It's automatically created by
- combining the info names of the chain of contexts to the root.
- """
- rv = ""
- if self.info_name is not None:
- rv = self.info_name
- if self.parent is not None:
- parent_command_path = [self.parent.command_path]
-
- if isinstance(self.parent.command, Command):
- for param in self.parent.command.get_params(self):
- parent_command_path.extend(param.get_usage_pieces(self))
-
- rv = f"{' '.join(parent_command_path)} {rv}"
- return rv.lstrip()
-
- def find_root(self) -> "Context":
- """Finds the outermost context."""
- node = self
- while node.parent is not None:
- node = node.parent
- return node
-
- def find_object(self, object_type: t.Type[V]) -> t.Optional[V]:
- """Finds the closest object of a given type."""
- node: t.Optional["Context"] = self
-
- while node is not None:
- if isinstance(node.obj, object_type):
- return node.obj
-
- node = node.parent
-
- return None
-
- def ensure_object(self, object_type: t.Type[V]) -> V:
- """Like :meth:`find_object` but sets the innermost object to a
- new instance of `object_type` if it does not exist.
- """
- rv = self.find_object(object_type)
- if rv is None:
- self.obj = rv = object_type()
- return rv
-
- @t.overload
- def lookup_default(
- self, name: str, call: "te.Literal[True]" = True
- ) -> t.Optional[t.Any]:
- ...
-
- @t.overload
- def lookup_default(
- self, name: str, call: "te.Literal[False]" = ...
- ) -> t.Optional[t.Union[t.Any, t.Callable[[], t.Any]]]:
- ...
-
- def lookup_default(self, name: str, call: bool = True) -> t.Optional[t.Any]:
- """Get the default for a parameter from :attr:`default_map`.
-
- :param name: Name of the parameter.
- :param call: If the default is a callable, call it. Disable to
- return the callable instead.
-
- .. versionchanged:: 8.0
- Added the ``call`` parameter.
- """
- if self.default_map is not None:
- value = self.default_map.get(name)
-
- if call and callable(value):
- return value()
-
- return value
-
- return None
-
- def fail(self, message: str) -> "te.NoReturn":
- """Aborts the execution of the program with a specific error
- message.
-
- :param message: the error message to fail with.
- """
- raise UsageError(message, self)
-
- def abort(self) -> "te.NoReturn":
- """Aborts the script."""
- raise Abort()
-
- def exit(self, code: int = 0) -> "te.NoReturn":
- """Exits the application with a given exit code."""
- raise Exit(code)
-
- def get_usage(self) -> str:
- """Helper method to get formatted usage string for the current
- context and command.
- """
- return self.command.get_usage(self)
-
- def get_help(self) -> str:
- """Helper method to get formatted help page for the current
- context and command.
- """
- return self.command.get_help(self)
-
- def _make_sub_context(self, command: "Command") -> "Context":
- """Create a new context of the same type as this context, but
- for a new command.
-
- :meta private:
- """
- return type(self)(command, info_name=command.name, parent=self)
-
- @t.overload
- def invoke(
- __self, # noqa: B902
- __callback: "t.Callable[..., V]",
- *args: t.Any,
- **kwargs: t.Any,
- ) -> V:
- ...
-
- @t.overload
- def invoke(
- __self, # noqa: B902
- __callback: "Command",
- *args: t.Any,
- **kwargs: t.Any,
- ) -> t.Any:
- ...
-
- def invoke(
- __self, # noqa: B902
- __callback: t.Union["Command", "t.Callable[..., V]"],
- *args: t.Any,
- **kwargs: t.Any,
- ) -> t.Union[t.Any, V]:
- """Invokes a command callback in exactly the way it expects. There
- are two ways to invoke this method:
-
- 1. the first argument can be a callback and all other arguments and
- keyword arguments are forwarded directly to the function.
- 2. the first argument is a click command object. In that case all
- arguments are forwarded as well but proper click parameters
- (options and click arguments) must be keyword arguments and Click
- will fill in defaults.
-
- Note that before Click 3.2 keyword arguments were not properly filled
- in against the intention of this code and no context was created. For
- more information about this change and why it was done in a bugfix
- release see :ref:`upgrade-to-3.2`.
-
- .. versionchanged:: 8.0
- All ``kwargs`` are tracked in :attr:`params` so they will be
- passed if :meth:`forward` is called at multiple levels.
- """
- if isinstance(__callback, Command):
- other_cmd = __callback
-
- if other_cmd.callback is None:
- raise TypeError(
- "The given command does not have a callback that can be invoked."
- )
- else:
- __callback = t.cast("t.Callable[..., V]", other_cmd.callback)
-
- ctx = __self._make_sub_context(other_cmd)
-
- for param in other_cmd.params:
- if param.name not in kwargs and param.expose_value:
- kwargs[param.name] = param.type_cast_value( # type: ignore
- ctx, param.get_default(ctx)
- )
-
- # Track all kwargs as params, so that forward() will pass
- # them on in subsequent calls.
- ctx.params.update(kwargs)
- else:
- ctx = __self
-
- with augment_usage_errors(__self):
- with ctx:
- return __callback(*args, **kwargs)
-
- def forward(
- __self, __cmd: "Command", *args: t.Any, **kwargs: t.Any # noqa: B902
- ) -> t.Any:
- """Similar to :meth:`invoke` but fills in default keyword
- arguments from the current context if the other command expects
- it. This cannot invoke callbacks directly, only other commands.
-
- .. versionchanged:: 8.0
- All ``kwargs`` are tracked in :attr:`params` so they will be
- passed if ``forward`` is called at multiple levels.
- """
- # Can only forward to other commands, not direct callbacks.
- if not isinstance(__cmd, Command):
- raise TypeError("Callback is not a command.")
-
- for param in __self.params:
- if param not in kwargs:
- kwargs[param] = __self.params[param]
-
- return __self.invoke(__cmd, *args, **kwargs)
-
- def set_parameter_source(self, name: str, source: ParameterSource) -> None:
- """Set the source of a parameter. This indicates the location
- from which the value of the parameter was obtained.
-
- :param name: The name of the parameter.
- :param source: A member of :class:`~click.core.ParameterSource`.
- """
- self._parameter_source[name] = source
-
- def get_parameter_source(self, name: str) -> t.Optional[ParameterSource]:
- """Get the source of a parameter. This indicates the location
- from which the value of the parameter was obtained.
-
- This can be useful for determining when a user specified a value
- on the command line that is the same as the default value. It
- will be :attr:`~click.core.ParameterSource.DEFAULT` only if the
- value was actually taken from the default.
-
- :param name: The name of the parameter.
- :rtype: ParameterSource
-
- .. versionchanged:: 8.0
- Returns ``None`` if the parameter was not provided from any
- source.
- """
- return self._parameter_source.get(name)
-
-
-class BaseCommand:
- """The base command implements the minimal API contract of commands.
- Most code will never use this as it does not implement a lot of useful
- functionality but it can act as the direct subclass of alternative
- parsing methods that do not depend on the Click parser.
-
- For instance, this can be used to bridge Click and other systems like
- argparse or docopt.
-
- Because base commands do not implement a lot of the API that other
- parts of Click take for granted, they are not supported for all
- operations. For instance, they cannot be used with the decorators
- usually and they have no built-in callback system.
-
- .. versionchanged:: 2.0
- Added the `context_settings` parameter.
-
- :param name: the name of the command to use unless a group overrides it.
- :param context_settings: an optional dictionary with defaults that are
- passed to the context object.
- """
-
- #: The context class to create with :meth:`make_context`.
- #:
- #: .. versionadded:: 8.0
- context_class: t.Type[Context] = Context
- #: the default for the :attr:`Context.allow_extra_args` flag.
- allow_extra_args = False
- #: the default for the :attr:`Context.allow_interspersed_args` flag.
- allow_interspersed_args = True
- #: the default for the :attr:`Context.ignore_unknown_options` flag.
- ignore_unknown_options = False
-
- def __init__(
- self,
- name: t.Optional[str],
- context_settings: t.Optional[t.MutableMapping[str, t.Any]] = None,
- ) -> None:
- #: the name the command thinks it has. Upon registering a command
- #: on a :class:`Group` the group will default the command name
- #: with this information. You should instead use the
- #: :class:`Context`\'s :attr:`~Context.info_name` attribute.
- self.name = name
-
- if context_settings is None:
- context_settings = {}
-
- #: an optional dictionary with defaults passed to the context.
- self.context_settings: t.MutableMapping[str, t.Any] = context_settings
-
- def to_info_dict(self, ctx: Context) -> t.Dict[str, t.Any]:
- """Gather information that could be useful for a tool generating
- user-facing documentation. This traverses the entire structure
- below this command.
-
- Use :meth:`click.Context.to_info_dict` to traverse the entire
- CLI structure.
-
- :param ctx: A :class:`Context` representing this command.
-
- .. versionadded:: 8.0
- """
- return {"name": self.name}
-
- def __repr__(self) -> str:
- return f"<{self.__class__.__name__} {self.name}>"
-
- def get_usage(self, ctx: Context) -> str:
- raise NotImplementedError("Base commands cannot get usage")
-
- def get_help(self, ctx: Context) -> str:
- raise NotImplementedError("Base commands cannot get help")
-
- def make_context(
- self,
- info_name: t.Optional[str],
- args: t.List[str],
- parent: t.Optional[Context] = None,
- **extra: t.Any,
- ) -> Context:
- """This function when given an info name and arguments will kick
- off the parsing and create a new :class:`Context`. It does not
- invoke the actual command callback though.
-
- To quickly customize the context class used without overriding
- this method, set the :attr:`context_class` attribute.
-
- :param info_name: the info name for this invocation. Generally this
- is the most descriptive name for the script or
- command. For the toplevel script it's usually
- the name of the script, for commands below it's
- the name of the command.
- :param args: the arguments to parse as list of strings.
- :param parent: the parent context if available.
- :param extra: extra keyword arguments forwarded to the context
- constructor.
-
- .. versionchanged:: 8.0
- Added the :attr:`context_class` attribute.
- """
- for key, value in self.context_settings.items():
- if key not in extra:
- extra[key] = value
-
- ctx = self.context_class(
- self, info_name=info_name, parent=parent, **extra # type: ignore
- )
-
- with ctx.scope(cleanup=False):
- self.parse_args(ctx, args)
- return ctx
-
- def parse_args(self, ctx: Context, args: t.List[str]) -> t.List[str]:
- """Given a context and a list of arguments this creates the parser
- and parses the arguments, then modifies the context as necessary.
- This is automatically invoked by :meth:`make_context`.
- """
- raise NotImplementedError("Base commands do not know how to parse arguments.")
-
- def invoke(self, ctx: Context) -> t.Any:
- """Given a context, this invokes the command. The default
- implementation is raising a not implemented error.
- """
- raise NotImplementedError("Base commands are not invocable by default")
-
- def shell_complete(self, ctx: Context, incomplete: str) -> t.List["CompletionItem"]:
- """Return a list of completions for the incomplete value. Looks
- at the names of chained multi-commands.
-
- Any command could be part of a chained multi-command, so sibling
- commands are valid at any point during command completion. Other
- command classes will return more completions.
-
- :param ctx: Invocation context for this command.
- :param incomplete: Value being completed. May be empty.
-
- .. versionadded:: 8.0
- """
- from click.shell_completion import CompletionItem
-
- results: t.List["CompletionItem"] = []
-
- while ctx.parent is not None:
- ctx = ctx.parent
-
- if isinstance(ctx.command, MultiCommand) and ctx.command.chain:
- results.extend(
- CompletionItem(name, help=command.get_short_help_str())
- for name, command in _complete_visible_commands(ctx, incomplete)
- if name not in ctx.protected_args
- )
-
- return results
-
- @t.overload
- def main(
- self,
- args: t.Optional[t.Sequence[str]] = None,
- prog_name: t.Optional[str] = None,
- complete_var: t.Optional[str] = None,
- standalone_mode: "te.Literal[True]" = True,
- **extra: t.Any,
- ) -> "te.NoReturn":
- ...
-
- @t.overload
- def main(
- self,
- args: t.Optional[t.Sequence[str]] = None,
- prog_name: t.Optional[str] = None,
- complete_var: t.Optional[str] = None,
- standalone_mode: bool = ...,
- **extra: t.Any,
- ) -> t.Any:
- ...
-
- def main(
- self,
- args: t.Optional[t.Sequence[str]] = None,
- prog_name: t.Optional[str] = None,
- complete_var: t.Optional[str] = None,
- standalone_mode: bool = True,
- windows_expand_args: bool = True,
- **extra: t.Any,
- ) -> t.Any:
- """This is the way to invoke a script with all the bells and
- whistles as a command line application. This will always terminate
- the application after a call. If this is not wanted, ``SystemExit``
- needs to be caught.
-
- This method is also available by directly calling the instance of
- a :class:`Command`.
-
- :param args: the arguments that should be used for parsing. If not
- provided, ``sys.argv[1:]`` is used.
- :param prog_name: the program name that should be used. By default
- the program name is constructed by taking the file
- name from ``sys.argv[0]``.
- :param complete_var: the environment variable that controls the
- bash completion support. The default is
-                             ``"_<prog_name>_COMPLETE"`` with prog_name in
- uppercase.
- :param standalone_mode: the default behavior is to invoke the script
- in standalone mode. Click will then
- handle exceptions and convert them into
- error messages and the function will never
- return but shut down the interpreter. If
- this is set to `False` they will be
- propagated to the caller and the return
- value of this function is the return value
- of :meth:`invoke`.
- :param windows_expand_args: Expand glob patterns, user dir, and
- env vars in command line args on Windows.
- :param extra: extra keyword arguments are forwarded to the context
- constructor. See :class:`Context` for more information.
-
- .. versionchanged:: 8.0.1
- Added the ``windows_expand_args`` parameter to allow
- disabling command line arg expansion on Windows.
-
- .. versionchanged:: 8.0
- When taking arguments from ``sys.argv`` on Windows, glob
- patterns, user dir, and env vars are expanded.
-
- .. versionchanged:: 3.0
- Added the ``standalone_mode`` parameter.
- """
- if args is None:
- args = sys.argv[1:]
-
- if os.name == "nt" and windows_expand_args:
- args = _expand_args(args)
- else:
- args = list(args)
-
- if prog_name is None:
- prog_name = _detect_program_name()
-
- # Process shell completion requests and exit early.
- self._main_shell_completion(extra, prog_name, complete_var)
-
- try:
- try:
- with self.make_context(prog_name, args, **extra) as ctx:
- rv = self.invoke(ctx)
- if not standalone_mode:
- return rv
- # it's not safe to `ctx.exit(rv)` here!
- # note that `rv` may actually contain data like "1" which
- # has obvious effects
- # more subtle case: `rv=[None, None]` can come out of
- # chained commands which all returned `None` -- so it's not
- # even always obvious that `rv` indicates success/failure
- # by its truthiness/falsiness
- ctx.exit()
- except (EOFError, KeyboardInterrupt) as e:
- echo(file=sys.stderr)
- raise Abort() from e
- except ClickException as e:
- if not standalone_mode:
- raise
- e.show()
- sys.exit(e.exit_code)
- except OSError as e:
- if e.errno == errno.EPIPE:
- sys.stdout = t.cast(t.TextIO, PacifyFlushWrapper(sys.stdout))
- sys.stderr = t.cast(t.TextIO, PacifyFlushWrapper(sys.stderr))
- sys.exit(1)
- else:
- raise
- except Exit as e:
- if standalone_mode:
- sys.exit(e.exit_code)
- else:
- # in non-standalone mode, return the exit code
- # note that this is only reached if `self.invoke` above raises
- # an Exit explicitly -- thus bypassing the check there which
- # would return its result
- # the results of non-standalone execution may therefore be
- # somewhat ambiguous: if there are codepaths which lead to
- # `ctx.exit(1)` and to `return 1`, the caller won't be able to
- # tell the difference between the two
- return e.exit_code
- except Abort:
- if not standalone_mode:
- raise
- echo(_("Aborted!"), file=sys.stderr)
- sys.exit(1)
-
- def _main_shell_completion(
- self,
- ctx_args: t.MutableMapping[str, t.Any],
- prog_name: str,
- complete_var: t.Optional[str] = None,
- ) -> None:
- """Check if the shell is asking for tab completion, process
- that, then exit early. Called from :meth:`main` before the
- program is invoked.
-
- :param prog_name: Name of the executable in the shell.
- :param complete_var: Name of the environment variable that holds
- the completion instruction. Defaults to
- ``_{PROG_NAME}_COMPLETE``.
-
- .. versionchanged:: 8.2.0
- Dots (``.``) in ``prog_name`` are replaced with underscores (``_``).
- """
- if complete_var is None:
- complete_name = prog_name.replace("-", "_").replace(".", "_")
- complete_var = f"_{complete_name}_COMPLETE".upper()
-
- instruction = os.environ.get(complete_var)
-
- if not instruction:
- return
-
- from .shell_completion import shell_complete
-
- rv = shell_complete(self, ctx_args, prog_name, complete_var, instruction)
- sys.exit(rv)
-
- def __call__(self, *args: t.Any, **kwargs: t.Any) -> t.Any:
- """Alias for :meth:`main`."""
- return self.main(*args, **kwargs)
-
-
-class Command(BaseCommand):
- """Commands are the basic building block of command line interfaces in
- Click. A basic command handles command line parsing and might dispatch
- more parsing to commands nested below it.
-
- :param name: the name of the command to use unless a group overrides it.
- :param context_settings: an optional dictionary with defaults that are
- passed to the context object.
- :param callback: the callback to invoke. This is optional.
- :param params: the parameters to register with this command. This can
- be either :class:`Option` or :class:`Argument` objects.
- :param help: the help string to use for this command.
- :param epilog: like the help string but it's printed at the end of the
- help page after everything else.
- :param short_help: the short help to use for this command. This is
- shown on the command listing of the parent command.
- :param add_help_option: by default each command registers a ``--help``
- option. This can be disabled by this parameter.
- :param no_args_is_help: this controls what happens if no arguments are
- provided. This option is disabled by default.
- If enabled this will add ``--help`` as argument
- if no arguments are passed
- :param hidden: hide this command from help outputs.
-
- :param deprecated: issues a message indicating that
- the command is deprecated.
-
- .. versionchanged:: 8.1
- ``help``, ``epilog``, and ``short_help`` are stored unprocessed,
- all formatting is done when outputting help text, not at init,
- and is done even if not using the ``@command`` decorator.
-
- .. versionchanged:: 8.0
- Added a ``repr`` showing the command name.
-
- .. versionchanged:: 7.1
- Added the ``no_args_is_help`` parameter.
-
- .. versionchanged:: 2.0
- Added the ``context_settings`` parameter.
- """
-
- def __init__(
- self,
- name: t.Optional[str],
- context_settings: t.Optional[t.MutableMapping[str, t.Any]] = None,
- callback: t.Optional[t.Callable[..., t.Any]] = None,
- params: t.Optional[t.List["Parameter"]] = None,
- help: t.Optional[str] = None,
- epilog: t.Optional[str] = None,
- short_help: t.Optional[str] = None,
- options_metavar: t.Optional[str] = "[OPTIONS]",
- add_help_option: bool = True,
- no_args_is_help: bool = False,
- hidden: bool = False,
- deprecated: bool = False,
- ) -> None:
- super().__init__(name, context_settings)
- #: the callback to execute when the command fires. This might be
- #: `None` in which case nothing happens.
- self.callback = callback
- #: the list of parameters for this command in the order they
- #: should show up in the help page and execute. Eager parameters
- #: will automatically be handled before non eager ones.
- self.params: t.List["Parameter"] = params or []
- self.help = help
- self.epilog = epilog
- self.options_metavar = options_metavar
- self.short_help = short_help
- self.add_help_option = add_help_option
- self.no_args_is_help = no_args_is_help
- self.hidden = hidden
- self.deprecated = deprecated
-
- def to_info_dict(self, ctx: Context) -> t.Dict[str, t.Any]:
- info_dict = super().to_info_dict(ctx)
- info_dict.update(
- params=[param.to_info_dict() for param in self.get_params(ctx)],
- help=self.help,
- epilog=self.epilog,
- short_help=self.short_help,
- hidden=self.hidden,
- deprecated=self.deprecated,
- )
- return info_dict
-
- def get_usage(self, ctx: Context) -> str:
- """Formats the usage line into a string and returns it.
-
- Calls :meth:`format_usage` internally.
- """
- formatter = ctx.make_formatter()
- self.format_usage(ctx, formatter)
- return formatter.getvalue().rstrip("\n")
-
- def get_params(self, ctx: Context) -> t.List["Parameter"]:
- rv = self.params
- help_option = self.get_help_option(ctx)
-
- if help_option is not None:
- rv = [*rv, help_option]
-
- return rv
-
- def format_usage(self, ctx: Context, formatter: HelpFormatter) -> None:
- """Writes the usage line into the formatter.
-
- This is a low-level method called by :meth:`get_usage`.
- """
- pieces = self.collect_usage_pieces(ctx)
- formatter.write_usage(ctx.command_path, " ".join(pieces))
-
- def collect_usage_pieces(self, ctx: Context) -> t.List[str]:
-        """Returns all the pieces that go into the usage line as a list
-        of strings.
- """
- rv = [self.options_metavar] if self.options_metavar else []
-
- for param in self.get_params(ctx):
- rv.extend(param.get_usage_pieces(ctx))
-
- return rv
-
- def get_help_option_names(self, ctx: Context) -> t.List[str]:
- """Returns the names for the help option."""
- all_names = set(ctx.help_option_names)
- for param in self.params:
- all_names.difference_update(param.opts)
- all_names.difference_update(param.secondary_opts)
- return list(all_names)
-
- def get_help_option(self, ctx: Context) -> t.Optional["Option"]:
- """Returns the help option object."""
- help_options = self.get_help_option_names(ctx)
-
- if not help_options or not self.add_help_option:
- return None
-
- def show_help(ctx: Context, param: "Parameter", value: str) -> None:
- if value and not ctx.resilient_parsing:
- echo(ctx.get_help(), color=ctx.color)
- ctx.exit()
-
- return Option(
- help_options,
- is_flag=True,
- is_eager=True,
- expose_value=False,
- callback=show_help,
- help=_("Show this message and exit."),
- )
-
- def make_parser(self, ctx: Context) -> OptionParser:
- """Creates the underlying option parser for this command."""
- parser = OptionParser(ctx)
- for param in self.get_params(ctx):
- param.add_to_parser(parser, ctx)
- return parser
-
- def get_help(self, ctx: Context) -> str:
- """Formats the help into a string and returns it.
-
- Calls :meth:`format_help` internally.
- """
- formatter = ctx.make_formatter()
- self.format_help(ctx, formatter)
- return formatter.getvalue().rstrip("\n")
-
- def get_short_help_str(self, limit: int = 45) -> str:
- """Gets short help for the command or makes it by shortening the
- long help string.
- """
- if self.short_help:
- text = inspect.cleandoc(self.short_help)
- elif self.help:
- text = make_default_short_help(self.help, limit)
- else:
- text = ""
-
- if self.deprecated:
- text = _("(Deprecated) {text}").format(text=text)
-
- return text.strip()
-
- def format_help(self, ctx: Context, formatter: HelpFormatter) -> None:
- """Writes the help into the formatter if it exists.
-
- This is a low-level method called by :meth:`get_help`.
-
- This calls the following methods:
-
- - :meth:`format_usage`
- - :meth:`format_help_text`
- - :meth:`format_options`
- - :meth:`format_epilog`
- """
- self.format_usage(ctx, formatter)
- self.format_help_text(ctx, formatter)
- self.format_options(ctx, formatter)
- self.format_epilog(ctx, formatter)
-
- def format_help_text(self, ctx: Context, formatter: HelpFormatter) -> None:
- """Writes the help text to the formatter if it exists."""
- if self.help is not None:
- # truncate the help text to the first form feed
- text = inspect.cleandoc(self.help).partition("\f")[0]
- else:
- text = ""
-
- if self.deprecated:
- text = _("(Deprecated) {text}").format(text=text)
-
- if text:
- formatter.write_paragraph()
-
- with formatter.indentation():
- formatter.write_text(text)
-
- def format_options(self, ctx: Context, formatter: HelpFormatter) -> None:
- """Writes all the options into the formatter if they exist."""
- opts = []
- for param in self.get_params(ctx):
- rv = param.get_help_record(ctx)
- if rv is not None:
- opts.append(rv)
-
- if opts:
- with formatter.section(_("Options")):
- formatter.write_dl(opts)
-
- def format_epilog(self, ctx: Context, formatter: HelpFormatter) -> None:
- """Writes the epilog into the formatter if it exists."""
- if self.epilog:
- epilog = inspect.cleandoc(self.epilog)
- formatter.write_paragraph()
-
- with formatter.indentation():
- formatter.write_text(epilog)
-
- def parse_args(self, ctx: Context, args: t.List[str]) -> t.List[str]:
- if not args and self.no_args_is_help and not ctx.resilient_parsing:
- echo(ctx.get_help(), color=ctx.color)
- ctx.exit()
-
- parser = self.make_parser(ctx)
- opts, args, param_order = parser.parse_args(args=args)
-
- for param in iter_params_for_processing(param_order, self.get_params(ctx)):
- value, args = param.handle_parse_result(ctx, opts, args)
-
- if args and not ctx.allow_extra_args and not ctx.resilient_parsing:
- ctx.fail(
- ngettext(
- "Got unexpected extra argument ({args})",
- "Got unexpected extra arguments ({args})",
- len(args),
- ).format(args=" ".join(map(str, args)))
- )
-
- ctx.args = args
- ctx._opt_prefixes.update(parser._opt_prefixes)
- return args
-
- def invoke(self, ctx: Context) -> t.Any:
- """Given a context, this invokes the attached callback (if it exists)
- in the right way.
- """
- if self.deprecated:
- message = _(
- "DeprecationWarning: The command {name!r} is deprecated."
- ).format(name=self.name)
- echo(style(message, fg="red"), err=True)
-
- if self.callback is not None:
- return ctx.invoke(self.callback, **ctx.params)
-
- def shell_complete(self, ctx: Context, incomplete: str) -> t.List["CompletionItem"]:
- """Return a list of completions for the incomplete value. Looks
- at the names of options and chained multi-commands.
-
- :param ctx: Invocation context for this command.
- :param incomplete: Value being completed. May be empty.
-
- .. versionadded:: 8.0
- """
- from click.shell_completion import CompletionItem
-
- results: t.List["CompletionItem"] = []
-
- if incomplete and not incomplete[0].isalnum():
- for param in self.get_params(ctx):
- if (
- not isinstance(param, Option)
- or param.hidden
- or (
- not param.multiple
- and ctx.get_parameter_source(param.name) # type: ignore
- is ParameterSource.COMMANDLINE
- )
- ):
- continue
-
- results.extend(
- CompletionItem(name, help=param.help)
- for name in [*param.opts, *param.secondary_opts]
- if name.startswith(incomplete)
- )
-
- results.extend(super().shell_complete(ctx, incomplete))
- return results
-
-
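-# --- Editor's note: illustrative usage sketch, not part of the deleted
-# click/core.py source above. It shows how the Command machinery is normally
-# reached through the ``@click.command`` decorator; the ``greet`` command and
-# its option are invented for illustration.
-import click
-
-@click.command(epilog="This text is printed after the rest of the help page.")
-@click.option("--name", default="world", show_default=True, help="Who to greet.")
-def greet(name):
-    """Print a friendly greeting."""
-    click.echo(f"Hello, {name}!")
-
-if __name__ == "__main__":
-    greet()
-
-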
-class MultiCommand(Command):
- """A multi command is the basic implementation of a command that
- dispatches to subcommands. The most common version is the
- :class:`Group`.
-
- :param invoke_without_command: this controls how the multi command itself
- is invoked. By default it's only invoked
- if a subcommand is provided.
- :param no_args_is_help: this controls what happens if no arguments are
-                            provided. It is enabled by default when
-                            `invoke_without_command` is disabled, and
-                            disabled when it is enabled. If enabled, the
-                            group shows its full help page when it is
-                            invoked without any arguments.
- :param subcommand_metavar: the string that is used in the documentation
- to indicate the subcommand place.
- :param chain: if this is set to `True` chaining of multiple subcommands
- is enabled. This restricts the form of commands in that
- they cannot have optional arguments but it allows
- multiple commands to be chained together.
- :param result_callback: The result callback to attach to this multi
- command. This can be set or changed later with the
- :meth:`result_callback` decorator.
- :param attrs: Other command arguments described in :class:`Command`.
- """
-
- allow_extra_args = True
- allow_interspersed_args = False
-
- def __init__(
- self,
- name: t.Optional[str] = None,
- invoke_without_command: bool = False,
- no_args_is_help: t.Optional[bool] = None,
- subcommand_metavar: t.Optional[str] = None,
- chain: bool = False,
- result_callback: t.Optional[t.Callable[..., t.Any]] = None,
- **attrs: t.Any,
- ) -> None:
- super().__init__(name, **attrs)
-
- if no_args_is_help is None:
- no_args_is_help = not invoke_without_command
-
- self.no_args_is_help = no_args_is_help
- self.invoke_without_command = invoke_without_command
-
- if subcommand_metavar is None:
- if chain:
- subcommand_metavar = "COMMAND1 [ARGS]... [COMMAND2 [ARGS]...]..."
- else:
- subcommand_metavar = "COMMAND [ARGS]..."
-
- self.subcommand_metavar = subcommand_metavar
- self.chain = chain
- # The result callback that is stored. This can be set or
- # overridden with the :func:`result_callback` decorator.
- self._result_callback = result_callback
-
- if self.chain:
- for param in self.params:
- if isinstance(param, Argument) and not param.required:
- raise RuntimeError(
- "Multi commands in chain mode cannot have"
- " optional arguments."
- )
-
- def to_info_dict(self, ctx: Context) -> t.Dict[str, t.Any]:
- info_dict = super().to_info_dict(ctx)
- commands = {}
-
- for name in self.list_commands(ctx):
- command = self.get_command(ctx, name)
-
- if command is None:
- continue
-
- sub_ctx = ctx._make_sub_context(command)
-
- with sub_ctx.scope(cleanup=False):
- commands[name] = command.to_info_dict(sub_ctx)
-
- info_dict.update(commands=commands, chain=self.chain)
- return info_dict
-
- def collect_usage_pieces(self, ctx: Context) -> t.List[str]:
- rv = super().collect_usage_pieces(ctx)
- rv.append(self.subcommand_metavar)
- return rv
-
- def format_options(self, ctx: Context, formatter: HelpFormatter) -> None:
- super().format_options(ctx, formatter)
- self.format_commands(ctx, formatter)
-
- def result_callback(self, replace: bool = False) -> t.Callable[[F], F]:
- """Adds a result callback to the command. By default if a
- result callback is already registered this will chain them but
- this can be disabled with the `replace` parameter. The result
- callback is invoked with the return value of the subcommand
- (or the list of return values from all subcommands if chaining
- is enabled) as well as the parameters as they would be passed
- to the main callback.
-
- Example::
-
- @click.group()
- @click.option('-i', '--input', default=23)
- def cli(input):
- return 42
-
- @cli.result_callback()
- def process_result(result, input):
- return result + input
-
- :param replace: if set to `True` an already existing result
- callback will be removed.
-
- .. versionchanged:: 8.0
- Renamed from ``resultcallback``.
-
- .. versionadded:: 3.0
- """
-
- def decorator(f: F) -> F:
- old_callback = self._result_callback
-
- if old_callback is None or replace:
- self._result_callback = f
- return f
-
- def function(__value, *args, **kwargs): # type: ignore
- inner = old_callback(__value, *args, **kwargs)
- return f(inner, *args, **kwargs)
-
- self._result_callback = rv = update_wrapper(t.cast(F, function), f)
- return rv
-
- return decorator
-
- def format_commands(self, ctx: Context, formatter: HelpFormatter) -> None:
-        """Extra format method for multi commands that adds all the commands
- after the options.
- """
- commands = []
- for subcommand in self.list_commands(ctx):
- cmd = self.get_command(ctx, subcommand)
- # What is this, the tool lied about a command. Ignore it
- if cmd is None:
- continue
- if cmd.hidden:
- continue
-
- commands.append((subcommand, cmd))
-
- # allow for 3 times the default spacing
- if len(commands):
- limit = formatter.width - 6 - max(len(cmd[0]) for cmd in commands)
-
- rows = []
- for subcommand, cmd in commands:
- help = cmd.get_short_help_str(limit)
- rows.append((subcommand, help))
-
- if rows:
- with formatter.section(_("Commands")):
- formatter.write_dl(rows)
-
- def parse_args(self, ctx: Context, args: t.List[str]) -> t.List[str]:
- if not args and self.no_args_is_help and not ctx.resilient_parsing:
- echo(ctx.get_help(), color=ctx.color)
- ctx.exit()
-
- rest = super().parse_args(ctx, args)
-
- if self.chain:
- ctx.protected_args = rest
- ctx.args = []
- elif rest:
- ctx.protected_args, ctx.args = rest[:1], rest[1:]
-
- return ctx.args
-
- def invoke(self, ctx: Context) -> t.Any:
- def _process_result(value: t.Any) -> t.Any:
- if self._result_callback is not None:
- value = ctx.invoke(self._result_callback, value, **ctx.params)
- return value
-
- if not ctx.protected_args:
- if self.invoke_without_command:
- # No subcommand was invoked, so the result callback is
- # invoked with the group return value for regular
- # groups, or an empty list for chained groups.
- with ctx:
- rv = super().invoke(ctx)
- return _process_result([] if self.chain else rv)
- ctx.fail(_("Missing command."))
-
- # Fetch args back out
- args = [*ctx.protected_args, *ctx.args]
- ctx.args = []
- ctx.protected_args = []
-
- # If we're not in chain mode, we only allow the invocation of a
- # single command but we also inform the current context about the
- # name of the command to invoke.
- if not self.chain:
- # Make sure the context is entered so we do not clean up
- # resources until the result processor has worked.
- with ctx:
- cmd_name, cmd, args = self.resolve_command(ctx, args)
- assert cmd is not None
- ctx.invoked_subcommand = cmd_name
- super().invoke(ctx)
- sub_ctx = cmd.make_context(cmd_name, args, parent=ctx)
- with sub_ctx:
- return _process_result(sub_ctx.command.invoke(sub_ctx))
-
- # In chain mode we create the contexts step by step, but after the
- # base command has been invoked. Because at that point we do not
- # know the subcommands yet, the invoked subcommand attribute is
- # set to ``*`` to inform the command that subcommands are executed
- # but nothing else.
- with ctx:
- ctx.invoked_subcommand = "*" if args else None
- super().invoke(ctx)
-
- # Otherwise we make every single context and invoke them in a
- # chain. In that case the return value to the result processor
- # is the list of all invoked subcommand's results.
- contexts = []
- while args:
- cmd_name, cmd, args = self.resolve_command(ctx, args)
- assert cmd is not None
- sub_ctx = cmd.make_context(
- cmd_name,
- args,
- parent=ctx,
- allow_extra_args=True,
- allow_interspersed_args=False,
- )
- contexts.append(sub_ctx)
- args, sub_ctx.args = sub_ctx.args, []
-
- rv = []
- for sub_ctx in contexts:
- with sub_ctx:
- rv.append(sub_ctx.command.invoke(sub_ctx))
- return _process_result(rv)
-
- def resolve_command(
- self, ctx: Context, args: t.List[str]
- ) -> t.Tuple[t.Optional[str], t.Optional[Command], t.List[str]]:
- cmd_name = make_str(args[0])
- original_cmd_name = cmd_name
-
- # Get the command
- cmd = self.get_command(ctx, cmd_name)
-
- # If we can't find the command but there is a normalization
- # function available, we try with that one.
- if cmd is None and ctx.token_normalize_func is not None:
- cmd_name = ctx.token_normalize_func(cmd_name)
- cmd = self.get_command(ctx, cmd_name)
-
- # If we don't find the command we want to show an error message
- # to the user that it was not provided. However, there is
- # something else we should do: if the first argument looks like
- # an option we want to kick off parsing again for arguments to
- # resolve things like --help which now should go to the main
- # place.
- if cmd is None and not ctx.resilient_parsing:
- if split_opt(cmd_name)[0]:
- self.parse_args(ctx, ctx.args)
- ctx.fail(_("No such command {name!r}.").format(name=original_cmd_name))
- return cmd_name if cmd else None, cmd, args[1:]
-
- def get_command(self, ctx: Context, cmd_name: str) -> t.Optional[Command]:
- """Given a context and a command name, this returns a
- :class:`Command` object if it exists or returns `None`.
- """
- raise NotImplementedError
-
- def list_commands(self, ctx: Context) -> t.List[str]:
- """Returns a list of subcommand names in the order they should
- appear.
- """
- return []
-
- def shell_complete(self, ctx: Context, incomplete: str) -> t.List["CompletionItem"]:
- """Return a list of completions for the incomplete value. Looks
- at the names of options, subcommands, and chained
- multi-commands.
-
- :param ctx: Invocation context for this command.
- :param incomplete: Value being completed. May be empty.
-
- .. versionadded:: 8.0
- """
- from click.shell_completion import CompletionItem
-
- results = [
- CompletionItem(name, help=command.get_short_help_str())
- for name, command in _complete_visible_commands(ctx, incomplete)
- ]
- results.extend(super().shell_complete(ctx, incomplete))
- return results
-
-
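-# --- Editor's note: illustrative usage sketch, not part of the deleted click
-# source. It shows the chain/result_callback behaviour implemented by
-# MultiCommand above; the ``pipeline`` group and its steps are invented names.
-import click
-
-@click.group(chain=True)
-def pipeline():
-    """Run one or more processing steps in a single invocation."""
-
-@pipeline.result_callback()
-def report(results):
-    # With chain=True the callback receives the list of all subcommand
-    # return values, in the order they were invoked.
-    click.echo(f"steps completed: {len(results)}")
-
-@pipeline.command()
-def lowercase():
-    return "lowercase"
-
-@pipeline.command()
-def strip():
-    return "strip"
-
-# e.g. ``pipeline lowercase strip`` runs both steps, then the callback.
-
-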
-class Group(MultiCommand):
- """A group allows a command to have subcommands attached. This is
- the most common way to implement nesting in Click.
-
- :param name: The name of the group command.
- :param commands: A dict mapping names to :class:`Command` objects.
- Can also be a list of :class:`Command`, which will use
- :attr:`Command.name` to create the dict.
- :param attrs: Other command arguments described in
- :class:`MultiCommand`, :class:`Command`, and
- :class:`BaseCommand`.
-
- .. versionchanged:: 8.0
- The ``commands`` argument can be a list of command objects.
- """
-
- #: If set, this is used by the group's :meth:`command` decorator
- #: as the default :class:`Command` class. This is useful to make all
- #: subcommands use a custom command class.
- #:
- #: .. versionadded:: 8.0
- command_class: t.Optional[t.Type[Command]] = None
-
- #: If set, this is used by the group's :meth:`group` decorator
- #: as the default :class:`Group` class. This is useful to make all
- #: subgroups use a custom group class.
- #:
- #: If set to the special value :class:`type` (literally
- #: ``group_class = type``), this group's class will be used as the
- #: default class. This makes a custom group class continue to make
- #: custom groups.
- #:
- #: .. versionadded:: 8.0
- group_class: t.Optional[t.Union[t.Type["Group"], t.Type[type]]] = None
- # Literal[type] isn't valid, so use Type[type]
-
- def __init__(
- self,
- name: t.Optional[str] = None,
- commands: t.Optional[
- t.Union[t.MutableMapping[str, Command], t.Sequence[Command]]
- ] = None,
- **attrs: t.Any,
- ) -> None:
- super().__init__(name, **attrs)
-
- if commands is None:
- commands = {}
- elif isinstance(commands, abc.Sequence):
- commands = {c.name: c for c in commands if c.name is not None}
-
- #: The registered subcommands by their exported names.
- self.commands: t.MutableMapping[str, Command] = commands
-
- def add_command(self, cmd: Command, name: t.Optional[str] = None) -> None:
- """Registers another :class:`Command` with this group. If the name
- is not provided, the name of the command is used.
- """
- name = name or cmd.name
- if name is None:
- raise TypeError("Command has no name.")
- _check_multicommand(self, name, cmd, register=True)
- self.commands[name] = cmd
-
- @t.overload
- def command(self, __func: t.Callable[..., t.Any]) -> Command:
- ...
-
- @t.overload
- def command(
- self, *args: t.Any, **kwargs: t.Any
- ) -> t.Callable[[t.Callable[..., t.Any]], Command]:
- ...
-
- def command(
- self, *args: t.Any, **kwargs: t.Any
- ) -> t.Union[t.Callable[[t.Callable[..., t.Any]], Command], Command]:
- """A shortcut decorator for declaring and attaching a command to
- the group. This takes the same arguments as :func:`command` and
- immediately registers the created command with this group by
- calling :meth:`add_command`.
-
- To customize the command class used, set the
- :attr:`command_class` attribute.
-
- .. versionchanged:: 8.1
- This decorator can be applied without parentheses.
-
- .. versionchanged:: 8.0
- Added the :attr:`command_class` attribute.
- """
- from .decorators import command
-
- func: t.Optional[t.Callable[..., t.Any]] = None
-
- if args and callable(args[0]):
- assert (
- len(args) == 1 and not kwargs
- ), "Use 'command(**kwargs)(callable)' to provide arguments."
- (func,) = args
- args = ()
-
- if self.command_class and kwargs.get("cls") is None:
- kwargs["cls"] = self.command_class
-
- def decorator(f: t.Callable[..., t.Any]) -> Command:
- cmd: Command = command(*args, **kwargs)(f)
- self.add_command(cmd)
- return cmd
-
- if func is not None:
- return decorator(func)
-
- return decorator
-
- @t.overload
- def group(self, __func: t.Callable[..., t.Any]) -> "Group":
- ...
-
- @t.overload
- def group(
- self, *args: t.Any, **kwargs: t.Any
- ) -> t.Callable[[t.Callable[..., t.Any]], "Group"]:
- ...
-
- def group(
- self, *args: t.Any, **kwargs: t.Any
- ) -> t.Union[t.Callable[[t.Callable[..., t.Any]], "Group"], "Group"]:
- """A shortcut decorator for declaring and attaching a group to
- the group. This takes the same arguments as :func:`group` and
- immediately registers the created group with this group by
- calling :meth:`add_command`.
-
- To customize the group class used, set the :attr:`group_class`
- attribute.
-
- .. versionchanged:: 8.1
- This decorator can be applied without parentheses.
-
- .. versionchanged:: 8.0
- Added the :attr:`group_class` attribute.
- """
- from .decorators import group
-
- func: t.Optional[t.Callable[..., t.Any]] = None
-
- if args and callable(args[0]):
- assert (
- len(args) == 1 and not kwargs
- ), "Use 'group(**kwargs)(callable)' to provide arguments."
- (func,) = args
- args = ()
-
- if self.group_class is not None and kwargs.get("cls") is None:
- if self.group_class is type:
- kwargs["cls"] = type(self)
- else:
- kwargs["cls"] = self.group_class
-
- def decorator(f: t.Callable[..., t.Any]) -> "Group":
- cmd: Group = group(*args, **kwargs)(f)
- self.add_command(cmd)
- return cmd
-
- if func is not None:
- return decorator(func)
-
- return decorator
-
- def get_command(self, ctx: Context, cmd_name: str) -> t.Optional[Command]:
- return self.commands.get(cmd_name)
-
- def list_commands(self, ctx: Context) -> t.List[str]:
- return sorted(self.commands)
-
-
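-# --- Editor's note: illustrative usage sketch, not part of the deleted click
-# source. It shows Group.command()/Group.group() and the command_class hook
-# documented above; TimedCommand and the subcommand names are invented.
-import click
-
-class TimedCommand(click.Command):
-    """Hypothetical Command subclass; a real one would override behaviour."""
-
-@click.group()
-def cli():
-    """Top-level entry point."""
-
-# Every command registered through cli.command() now uses TimedCommand
-# unless an explicit cls= is passed.
-cli.command_class = TimedCommand
-
-@cli.command()
-def build():
-    click.echo("building")
-
-@cli.group()
-def db():
-    """Nested group created with Group.group()."""
-
-@db.command()
-def migrate():
-    click.echo("migrating")
-
-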
-class CommandCollection(MultiCommand):
- """A command collection is a multi command that merges multiple multi
- commands together into one. This is a straightforward implementation
- that accepts a list of different multi commands as sources and
- provides all the commands for each of them.
-
- See :class:`MultiCommand` and :class:`Command` for the description of
- ``name`` and ``attrs``.
- """
-
- def __init__(
- self,
- name: t.Optional[str] = None,
- sources: t.Optional[t.List[MultiCommand]] = None,
- **attrs: t.Any,
- ) -> None:
- super().__init__(name, **attrs)
- #: The list of registered multi commands.
- self.sources: t.List[MultiCommand] = sources or []
-
- def add_source(self, multi_cmd: MultiCommand) -> None:
- """Adds a new multi command to the chain dispatcher."""
- self.sources.append(multi_cmd)
-
- def get_command(self, ctx: Context, cmd_name: str) -> t.Optional[Command]:
- for source in self.sources:
- rv = source.get_command(ctx, cmd_name)
-
- if rv is not None:
- if self.chain:
- _check_multicommand(self, cmd_name, rv)
-
- return rv
-
- return None
-
- def list_commands(self, ctx: Context) -> t.List[str]:
- rv: t.Set[str] = set()
-
- for source in self.sources:
- rv.update(source.list_commands(ctx))
-
- return sorted(rv)
-
-
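-# --- Editor's note: illustrative usage sketch, not part of the deleted click
-# source. It shows how CommandCollection above merges several groups into one
-# entry point; the group and command names are invented.
-import click
-
-@click.group()
-def core_cli():
-    pass
-
-@core_cli.command()
-def init():
-    click.echo("init")
-
-@click.group()
-def plugin_cli():
-    pass
-
-@plugin_cli.command()
-def extra():
-    click.echo("extra")
-
-# ``cli`` exposes both ``init`` and ``extra`` as its own subcommands.
-cli = click.CommandCollection(sources=[core_cli, plugin_cli])
-
-if __name__ == "__main__":
-    cli()
-
-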
-def _check_iter(value: t.Any) -> t.Iterator[t.Any]:
-    """Check that the value is iterable and not a string. Raises a
-    TypeError if it is a string or not iterable, otherwise returns an
-    iterator over the value.
- """
- if isinstance(value, str):
- raise TypeError
-
- return iter(value)
-
-
-class Parameter:
- r"""A parameter to a command comes in two versions: they are either
- :class:`Option`\s or :class:`Argument`\s. Other subclasses are currently
- not supported by design as some of the internals for parsing are
- intentionally not finalized.
-
- Some settings are supported by both options and arguments.
-
- :param param_decls: the parameter declarations for this option or
- argument. This is a list of flags or argument
- names.
- :param type: the type that should be used. Either a :class:`ParamType`
- or a Python type. The latter is converted into the former
- automatically if supported.
- :param required: controls if this is optional or not.
- :param default: the default value if omitted. This can also be a callable,
- in which case it's invoked when the default is needed
- without any arguments.
- :param callback: A function to further process or validate the value
- after type conversion. It is called as ``f(ctx, param, value)``
- and must return the value. It is called for all sources,
- including prompts.
- :param nargs: the number of arguments to match. If not ``1`` the return
- value is a tuple instead of single value. The default for
- nargs is ``1`` (except if the type is a tuple, then it's
- the arity of the tuple). If ``nargs=-1``, all remaining
- parameters are collected.
- :param metavar: how the value is represented in the help page.
- :param expose_value: if this is `True` then the value is passed onwards
- to the command callback and stored on the context,
- otherwise it's skipped.
-    :param is_eager: eager values are processed before non-eager ones. This
-                     should not be set for arguments or it will reverse the
-                     order of processing.
- :param envvar: a string or list of strings that are environment variables
- that should be checked.
- :param shell_complete: A function that returns custom shell
- completions. Used instead of the param's type completion if
- given. Takes ``ctx, param, incomplete`` and must return a list
- of :class:`~click.shell_completion.CompletionItem` or a list of
- strings.
-
- .. versionchanged:: 8.0
- ``process_value`` validates required parameters and bounded
- ``nargs``, and invokes the parameter callback before returning
- the value. This allows the callback to validate prompts.
- ``full_process_value`` is removed.
-
- .. versionchanged:: 8.0
- ``autocompletion`` is renamed to ``shell_complete`` and has new
- semantics described above. The old name is deprecated and will
- be removed in 8.1, until then it will be wrapped to match the
- new requirements.
-
- .. versionchanged:: 8.0
- For ``multiple=True, nargs>1``, the default must be a list of
- tuples.
-
- .. versionchanged:: 8.0
- Setting a default is no longer required for ``nargs>1``, it will
- default to ``None``. ``multiple=True`` or ``nargs=-1`` will
- default to ``()``.
-
- .. versionchanged:: 7.1
- Empty environment variables are ignored rather than taking the
- empty string value. This makes it possible for scripts to clear
- variables if they can't unset them.
-
- .. versionchanged:: 2.0
- Changed signature for parameter callback to also be passed the
- parameter. The old callback format will still work, but it will
- raise a warning to give you a chance to migrate the code easier.
- """
-
- param_type_name = "parameter"
-
- def __init__(
- self,
- param_decls: t.Optional[t.Sequence[str]] = None,
- type: t.Optional[t.Union[types.ParamType, t.Any]] = None,
- required: bool = False,
- default: t.Optional[t.Union[t.Any, t.Callable[[], t.Any]]] = None,
- callback: t.Optional[t.Callable[[Context, "Parameter", t.Any], t.Any]] = None,
- nargs: t.Optional[int] = None,
- multiple: bool = False,
- metavar: t.Optional[str] = None,
- expose_value: bool = True,
- is_eager: bool = False,
- envvar: t.Optional[t.Union[str, t.Sequence[str]]] = None,
- shell_complete: t.Optional[
- t.Callable[
- [Context, "Parameter", str],
- t.Union[t.List["CompletionItem"], t.List[str]],
- ]
- ] = None,
- ) -> None:
- self.name: t.Optional[str]
- self.opts: t.List[str]
- self.secondary_opts: t.List[str]
- self.name, self.opts, self.secondary_opts = self._parse_decls(
- param_decls or (), expose_value
- )
- self.type: types.ParamType = types.convert_type(type, default)
-
- # Default nargs to what the type tells us if we have that
- # information available.
- if nargs is None:
- if self.type.is_composite:
- nargs = self.type.arity
- else:
- nargs = 1
-
- self.required = required
- self.callback = callback
- self.nargs = nargs
- self.multiple = multiple
- self.expose_value = expose_value
- self.default = default
- self.is_eager = is_eager
- self.metavar = metavar
- self.envvar = envvar
- self._custom_shell_complete = shell_complete
-
- if __debug__:
- if self.type.is_composite and nargs != self.type.arity:
- raise ValueError(
- f"'nargs' must be {self.type.arity} (or None) for"
- f" type {self.type!r}, but it was {nargs}."
- )
-
- # Skip no default or callable default.
- check_default = default if not callable(default) else None
-
- if check_default is not None:
- if multiple:
- try:
- # Only check the first value against nargs.
- check_default = next(_check_iter(check_default), None)
- except TypeError:
- raise ValueError(
- "'default' must be a list when 'multiple' is true."
- ) from None
-
- # Can be None for multiple with empty default.
- if nargs != 1 and check_default is not None:
- try:
- _check_iter(check_default)
- except TypeError:
- if multiple:
- message = (
- "'default' must be a list of lists when 'multiple' is"
- " true and 'nargs' != 1."
- )
- else:
- message = "'default' must be a list when 'nargs' != 1."
-
- raise ValueError(message) from None
-
- if nargs > 1 and len(check_default) != nargs:
- subject = "item length" if multiple else "length"
- raise ValueError(
- f"'default' {subject} must match nargs={nargs}."
- )
-
- def to_info_dict(self) -> t.Dict[str, t.Any]:
- """Gather information that could be useful for a tool generating
- user-facing documentation.
-
- Use :meth:`click.Context.to_info_dict` to traverse the entire
- CLI structure.
-
- .. versionadded:: 8.0
- """
- return {
- "name": self.name,
- "param_type_name": self.param_type_name,
- "opts": self.opts,
- "secondary_opts": self.secondary_opts,
- "type": self.type.to_info_dict(),
- "required": self.required,
- "nargs": self.nargs,
- "multiple": self.multiple,
- "default": self.default,
- "envvar": self.envvar,
- }
-
- def __repr__(self) -> str:
- return f"<{self.__class__.__name__} {self.name}>"
-
- def _parse_decls(
- self, decls: t.Sequence[str], expose_value: bool
- ) -> t.Tuple[t.Optional[str], t.List[str], t.List[str]]:
- raise NotImplementedError()
-
- @property
- def human_readable_name(self) -> str:
- """Returns the human readable name of this parameter. This is the
- same as the name for options, but the metavar for arguments.
- """
- return self.name # type: ignore
-
- def make_metavar(self) -> str:
- if self.metavar is not None:
- return self.metavar
-
- metavar = self.type.get_metavar(self)
-
- if metavar is None:
- metavar = self.type.name.upper()
-
- if self.nargs != 1:
- metavar += "..."
-
- return metavar
-
- @t.overload
- def get_default(
- self, ctx: Context, call: "te.Literal[True]" = True
- ) -> t.Optional[t.Any]:
- ...
-
- @t.overload
- def get_default(
- self, ctx: Context, call: bool = ...
- ) -> t.Optional[t.Union[t.Any, t.Callable[[], t.Any]]]:
- ...
-
- def get_default(
- self, ctx: Context, call: bool = True
- ) -> t.Optional[t.Union[t.Any, t.Callable[[], t.Any]]]:
- """Get the default for the parameter. Tries
- :meth:`Context.lookup_default` first, then the local default.
-
- :param ctx: Current context.
- :param call: If the default is a callable, call it. Disable to
- return the callable instead.
-
- .. versionchanged:: 8.0.2
- Type casting is no longer performed when getting a default.
-
- .. versionchanged:: 8.0.1
- Type casting can fail in resilient parsing mode. Invalid
- defaults will not prevent showing help text.
-
- .. versionchanged:: 8.0
- Looks at ``ctx.default_map`` first.
-
- .. versionchanged:: 8.0
- Added the ``call`` parameter.
- """
- value = ctx.lookup_default(self.name, call=False) # type: ignore
-
- if value is None:
- value = self.default
-
- if call and callable(value):
- value = value()
-
- return value
-
- def add_to_parser(self, parser: OptionParser, ctx: Context) -> None:
- raise NotImplementedError()
-
- def consume_value(
- self, ctx: Context, opts: t.Mapping[str, t.Any]
- ) -> t.Tuple[t.Any, ParameterSource]:
- value = opts.get(self.name) # type: ignore
- source = ParameterSource.COMMANDLINE
-
- if value is None:
- value = self.value_from_envvar(ctx)
- source = ParameterSource.ENVIRONMENT
-
- if value is None:
- value = ctx.lookup_default(self.name) # type: ignore
- source = ParameterSource.DEFAULT_MAP
-
- if value is None:
- value = self.get_default(ctx)
- source = ParameterSource.DEFAULT
-
- return value, source
-
- def type_cast_value(self, ctx: Context, value: t.Any) -> t.Any:
- """Convert and validate a value against the option's
- :attr:`type`, :attr:`multiple`, and :attr:`nargs`.
- """
- if value is None:
- return () if self.multiple or self.nargs == -1 else None
-
- def check_iter(value: t.Any) -> t.Iterator[t.Any]:
- try:
- return _check_iter(value)
- except TypeError:
- # This should only happen when passing in args manually,
- # the parser should construct an iterable when parsing
- # the command line.
- raise BadParameter(
- _("Value must be an iterable."), ctx=ctx, param=self
- ) from None
-
- if self.nargs == 1 or self.type.is_composite:
-
- def convert(value: t.Any) -> t.Any:
- return self.type(value, param=self, ctx=ctx)
-
- elif self.nargs == -1:
-
- def convert(value: t.Any) -> t.Any: # t.Tuple[t.Any, ...]
- return tuple(self.type(x, self, ctx) for x in check_iter(value))
-
- else: # nargs > 1
-
- def convert(value: t.Any) -> t.Any: # t.Tuple[t.Any, ...]
- value = tuple(check_iter(value))
-
- if len(value) != self.nargs:
- raise BadParameter(
- ngettext(
- "Takes {nargs} values but 1 was given.",
- "Takes {nargs} values but {len} were given.",
- len(value),
- ).format(nargs=self.nargs, len=len(value)),
- ctx=ctx,
- param=self,
- )
-
- return tuple(self.type(x, self, ctx) for x in value)
-
- if self.multiple:
- return tuple(convert(x) for x in check_iter(value))
-
- return convert(value)
-
- def value_is_missing(self, value: t.Any) -> bool:
- if value is None:
- return True
-
- if (self.nargs != 1 or self.multiple) and value == ():
- return True
-
- return False
-
- def process_value(self, ctx: Context, value: t.Any) -> t.Any:
- value = self.type_cast_value(ctx, value)
-
- if self.required and self.value_is_missing(value):
- raise MissingParameter(ctx=ctx, param=self)
-
- if self.callback is not None:
- value = self.callback(ctx, self, value)
-
- return value
-
- def resolve_envvar_value(self, ctx: Context) -> t.Optional[str]:
- if self.envvar is None:
- return None
-
- if isinstance(self.envvar, str):
- rv = os.environ.get(self.envvar)
-
- if rv:
- return rv
- else:
- for envvar in self.envvar:
- rv = os.environ.get(envvar)
-
- if rv:
- return rv
-
- return None
-
- def value_from_envvar(self, ctx: Context) -> t.Optional[t.Any]:
- rv: t.Optional[t.Any] = self.resolve_envvar_value(ctx)
-
- if rv is not None and self.nargs != 1:
- rv = self.type.split_envvar_value(rv)
-
- return rv
-
- def handle_parse_result(
- self, ctx: Context, opts: t.Mapping[str, t.Any], args: t.List[str]
- ) -> t.Tuple[t.Any, t.List[str]]:
- with augment_usage_errors(ctx, param=self):
- value, source = self.consume_value(ctx, opts)
- ctx.set_parameter_source(self.name, source) # type: ignore
-
- try:
- value = self.process_value(ctx, value)
- except Exception:
- if not ctx.resilient_parsing:
- raise
-
- value = None
-
- if self.expose_value:
- ctx.params[self.name] = value # type: ignore
-
- return value, args
-
- def get_help_record(self, ctx: Context) -> t.Optional[t.Tuple[str, str]]:
- pass
-
- def get_usage_pieces(self, ctx: Context) -> t.List[str]:
- return []
-
- def get_error_hint(self, ctx: Context) -> str:
- """Get a stringified version of the param for use in error messages to
- indicate which param caused the error.
- """
- hint_list = self.opts or [self.human_readable_name]
- return " / ".join(f"'{x}'" for x in hint_list)
-
- def shell_complete(self, ctx: Context, incomplete: str) -> t.List["CompletionItem"]:
- """Return a list of completions for the incomplete value. If a
- ``shell_complete`` function was given during init, it is used.
- Otherwise, the :attr:`type`
- :meth:`~click.types.ParamType.shell_complete` function is used.
-
- :param ctx: Invocation context for this command.
- :param incomplete: Value being completed. May be empty.
-
- .. versionadded:: 8.0
- """
- if self._custom_shell_complete is not None:
- results = self._custom_shell_complete(ctx, self, incomplete)
-
- if results and isinstance(results[0], str):
- from click.shell_completion import CompletionItem
-
- results = [CompletionItem(c) for c in results]
-
- return t.cast(t.List["CompletionItem"], results)
-
- return self.type.shell_complete(ctx, self, incomplete)
-
-
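-# --- Editor's note: illustrative usage sketch, not part of the deleted click
-# source. It shows the callback contract described in the Parameter docstring
-# above (called as f(ctx, param, value) after type conversion); the ``serve``
-# command and its port check are invented.
-import click
-
-def validate_port(ctx, param, value):
-    # The callback must return the (possibly modified) value, or raise
-    # click.BadParameter to report a usage error for this parameter.
-    if not 1 <= value <= 65535:
-        raise click.BadParameter("port must be between 1 and 65535.")
-    return value
-
-@click.command()
-@click.option("--port", type=int, default=8080, callback=validate_port)
-def serve(port):
-    click.echo(f"serving on port {port}")
-
-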
-class Option(Parameter):
- """Options are usually optional values on the command line and
- have some extra features that arguments don't have.
-
- All other parameters are passed onwards to the parameter constructor.
-
- :param show_default: Show the default value for this option in its
- help text. Values are not shown by default, unless
- :attr:`Context.show_default` is ``True``. If this value is a
- string, it shows that string in parentheses instead of the
- actual value. This is particularly useful for dynamic options.
- For single option boolean flags, the default remains hidden if
- its value is ``False``.
- :param show_envvar: Controls if an environment variable should be
- shown on the help page. Normally, environment variables are not
- shown.
-    :param prompt: If set to ``True`` or a non-empty string then the
- user will be prompted for input. If set to ``True`` the prompt
- will be the option name capitalized.
- :param confirmation_prompt: Prompt a second time to confirm the
- value if it was prompted for. Can be set to a string instead of
- ``True`` to customize the message.
- :param prompt_required: If set to ``False``, the user will be
- prompted for input only when the option was specified as a flag
- without a value.
- :param hide_input: If this is ``True`` then the input on the prompt
- will be hidden from the user. This is useful for password input.
- :param is_flag: forces this option to act as a flag. The default is
- auto detection.
- :param flag_value: which value should be used for this flag if it's
- enabled. This is set to a boolean automatically if
- the option string contains a slash to mark two options.
- :param multiple: if this is set to `True` then the argument is accepted
- multiple times and recorded. This is similar to ``nargs``
-                     in how it works but supports an arbitrary number of
-                     arguments.
- :param count: this flag makes an option increment an integer.
- :param allow_from_autoenv: if this is enabled then the value of this
- parameter will be pulled from an environment
- variable in case a prefix is defined on the
- context.
- :param help: the help string.
- :param hidden: hide this option from help outputs.
- :param attrs: Other command arguments described in :class:`Parameter`.
-
- .. versionchanged:: 8.1.0
- Help text indentation is cleaned here instead of only in the
- ``@option`` decorator.
-
- .. versionchanged:: 8.1.0
- The ``show_default`` parameter overrides
- ``Context.show_default``.
-
- .. versionchanged:: 8.1.0
- The default of a single option boolean flag is not shown if the
- default value is ``False``.
-
- .. versionchanged:: 8.0.1
- ``type`` is detected from ``flag_value`` if given.
- """
-
- param_type_name = "option"
-
- def __init__(
- self,
- param_decls: t.Optional[t.Sequence[str]] = None,
- show_default: t.Union[bool, str, None] = None,
- prompt: t.Union[bool, str] = False,
- confirmation_prompt: t.Union[bool, str] = False,
- prompt_required: bool = True,
- hide_input: bool = False,
- is_flag: t.Optional[bool] = None,
- flag_value: t.Optional[t.Any] = None,
- multiple: bool = False,
- count: bool = False,
- allow_from_autoenv: bool = True,
- type: t.Optional[t.Union[types.ParamType, t.Any]] = None,
- help: t.Optional[str] = None,
- hidden: bool = False,
- show_choices: bool = True,
- show_envvar: bool = False,
- **attrs: t.Any,
- ) -> None:
- if help:
- help = inspect.cleandoc(help)
-
- default_is_missing = "default" not in attrs
- super().__init__(param_decls, type=type, multiple=multiple, **attrs)
-
- if prompt is True:
- if self.name is None:
- raise TypeError("'name' is required with 'prompt=True'.")
-
- prompt_text: t.Optional[str] = self.name.replace("_", " ").capitalize()
- elif prompt is False:
- prompt_text = None
- else:
- prompt_text = prompt
-
- self.prompt = prompt_text
- self.confirmation_prompt = confirmation_prompt
- self.prompt_required = prompt_required
- self.hide_input = hide_input
- self.hidden = hidden
-
- # If prompt is enabled but not required, then the option can be
- # used as a flag to indicate using prompt or flag_value.
- self._flag_needs_value = self.prompt is not None and not self.prompt_required
-
- if is_flag is None:
- if flag_value is not None:
- # Implicitly a flag because flag_value was set.
- is_flag = True
- elif self._flag_needs_value:
- # Not a flag, but when used as a flag it shows a prompt.
- is_flag = False
- else:
- # Implicitly a flag because flag options were given.
- is_flag = bool(self.secondary_opts)
- elif is_flag is False and not self._flag_needs_value:
- # Not a flag, and prompt is not enabled, can be used as a
- # flag if flag_value is set.
- self._flag_needs_value = flag_value is not None
-
- self.default: t.Union[t.Any, t.Callable[[], t.Any]]
-
- if is_flag and default_is_missing and not self.required:
- if multiple:
- self.default = ()
- else:
- self.default = False
-
- if flag_value is None:
- flag_value = not self.default
-
- self.type: types.ParamType
- if is_flag and type is None:
- # Re-guess the type from the flag value instead of the
- # default.
- self.type = types.convert_type(None, flag_value)
-
- self.is_flag: bool = is_flag
- self.is_bool_flag: bool = is_flag and isinstance(self.type, types.BoolParamType)
- self.flag_value: t.Any = flag_value
-
- # Counting
- self.count = count
- if count:
- if type is None:
- self.type = types.IntRange(min=0)
- if default_is_missing:
- self.default = 0
-
- self.allow_from_autoenv = allow_from_autoenv
- self.help = help
- self.show_default = show_default
- self.show_choices = show_choices
- self.show_envvar = show_envvar
-
- if __debug__:
- if self.nargs == -1:
- raise TypeError("nargs=-1 is not supported for options.")
-
- if self.prompt and self.is_flag and not self.is_bool_flag:
- raise TypeError("'prompt' is not valid for non-boolean flag.")
-
- if not self.is_bool_flag and self.secondary_opts:
- raise TypeError("Secondary flag is not valid for non-boolean flag.")
-
- if self.is_bool_flag and self.hide_input and self.prompt is not None:
- raise TypeError(
- "'prompt' with 'hide_input' is not valid for boolean flag."
- )
-
- if self.count:
- if self.multiple:
- raise TypeError("'count' is not valid with 'multiple'.")
-
- if self.is_flag:
- raise TypeError("'count' is not valid with 'is_flag'.")
-
- def to_info_dict(self) -> t.Dict[str, t.Any]:
- info_dict = super().to_info_dict()
- info_dict.update(
- help=self.help,
- prompt=self.prompt,
- is_flag=self.is_flag,
- flag_value=self.flag_value,
- count=self.count,
- hidden=self.hidden,
- )
- return info_dict
-
- def _parse_decls(
- self, decls: t.Sequence[str], expose_value: bool
- ) -> t.Tuple[t.Optional[str], t.List[str], t.List[str]]:
- opts = []
- secondary_opts = []
- name = None
- possible_names = []
-
- for decl in decls:
- if decl.isidentifier():
- if name is not None:
- raise TypeError(f"Name '{name}' defined twice")
- name = decl
- else:
- split_char = ";" if decl[:1] == "/" else "/"
- if split_char in decl:
- first, second = decl.split(split_char, 1)
- first = first.rstrip()
- if first:
- possible_names.append(split_opt(first))
- opts.append(first)
- second = second.lstrip()
- if second:
- secondary_opts.append(second.lstrip())
- if first == second:
- raise ValueError(
- f"Boolean option {decl!r} cannot use the"
- " same flag for true/false."
- )
- else:
- possible_names.append(split_opt(decl))
- opts.append(decl)
-
- if name is None and possible_names:
- possible_names.sort(key=lambda x: -len(x[0])) # group long options first
- name = possible_names[0][1].replace("-", "_").lower()
- if not name.isidentifier():
- name = None
-
- if name is None:
- if not expose_value:
- return None, opts, secondary_opts
- raise TypeError("Could not determine name for option")
-
- if not opts and not secondary_opts:
- raise TypeError(
- f"No options defined but a name was passed ({name})."
- " Did you mean to declare an argument instead? Did"
- f" you mean to pass '--{name}'?"
- )
-
- return name, opts, secondary_opts
-
- def add_to_parser(self, parser: OptionParser, ctx: Context) -> None:
- if self.multiple:
- action = "append"
- elif self.count:
- action = "count"
- else:
- action = "store"
-
- if self.is_flag:
- action = f"{action}_const"
-
- if self.is_bool_flag and self.secondary_opts:
- parser.add_option(
- obj=self, opts=self.opts, dest=self.name, action=action, const=True
- )
- parser.add_option(
- obj=self,
- opts=self.secondary_opts,
- dest=self.name,
- action=action,
- const=False,
- )
- else:
- parser.add_option(
- obj=self,
- opts=self.opts,
- dest=self.name,
- action=action,
- const=self.flag_value,
- )
- else:
- parser.add_option(
- obj=self,
- opts=self.opts,
- dest=self.name,
- action=action,
- nargs=self.nargs,
- )
-
- def get_help_record(self, ctx: Context) -> t.Optional[t.Tuple[str, str]]:
- if self.hidden:
- return None
-
- any_prefix_is_slash = False
-
- def _write_opts(opts: t.Sequence[str]) -> str:
- nonlocal any_prefix_is_slash
-
- rv, any_slashes = join_options(opts)
-
- if any_slashes:
- any_prefix_is_slash = True
-
- if not self.is_flag and not self.count:
- rv += f" {self.make_metavar()}"
-
- return rv
-
- rv = [_write_opts(self.opts)]
-
- if self.secondary_opts:
- rv.append(_write_opts(self.secondary_opts))
-
- help = self.help or ""
- extra = []
-
- if self.show_envvar:
- envvar = self.envvar
-
- if envvar is None:
- if (
- self.allow_from_autoenv
- and ctx.auto_envvar_prefix is not None
- and self.name is not None
- ):
- envvar = f"{ctx.auto_envvar_prefix}_{self.name.upper()}"
-
- if envvar is not None:
- var_str = (
- envvar
- if isinstance(envvar, str)
- else ", ".join(str(d) for d in envvar)
- )
- extra.append(_("env var: {var}").format(var=var_str))
-
- # Temporarily enable resilient parsing to avoid type casting
- # failing for the default. Might be possible to extend this to
- # help formatting in general.
- resilient = ctx.resilient_parsing
- ctx.resilient_parsing = True
-
- try:
- default_value = self.get_default(ctx, call=False)
- finally:
- ctx.resilient_parsing = resilient
-
- show_default = False
- show_default_is_str = False
-
- if self.show_default is not None:
- if isinstance(self.show_default, str):
- show_default_is_str = show_default = True
- else:
- show_default = self.show_default
- elif ctx.show_default is not None:
- show_default = ctx.show_default
-
- if show_default_is_str or (show_default and (default_value is not None)):
- if show_default_is_str:
- default_string = f"({self.show_default})"
- elif isinstance(default_value, (list, tuple)):
- default_string = ", ".join(str(d) for d in default_value)
- elif inspect.isfunction(default_value):
- default_string = _("(dynamic)")
- elif self.is_bool_flag and self.secondary_opts:
- # For boolean flags that have distinct True/False opts,
- # use the opt without prefix instead of the value.
- default_string = split_opt(
- (self.opts if self.default else self.secondary_opts)[0]
- )[1]
- elif self.is_bool_flag and not self.secondary_opts and not default_value:
- default_string = ""
- else:
- default_string = str(default_value)
-
- if default_string:
- extra.append(_("default: {default}").format(default=default_string))
-
- if (
- isinstance(self.type, types._NumberRangeBase)
- # skip count with default range type
- and not (self.count and self.type.min == 0 and self.type.max is None)
- ):
- range_str = self.type._describe_range()
-
- if range_str:
- extra.append(range_str)
-
- if self.required:
- extra.append(_("required"))
-
- if extra:
- extra_str = "; ".join(extra)
- help = f"{help} [{extra_str}]" if help else f"[{extra_str}]"
-
- return ("; " if any_prefix_is_slash else " / ").join(rv), help
-
- @t.overload
- def get_default(
- self, ctx: Context, call: "te.Literal[True]" = True
- ) -> t.Optional[t.Any]:
- ...
-
- @t.overload
- def get_default(
- self, ctx: Context, call: bool = ...
- ) -> t.Optional[t.Union[t.Any, t.Callable[[], t.Any]]]:
- ...
-
- def get_default(
- self, ctx: Context, call: bool = True
- ) -> t.Optional[t.Union[t.Any, t.Callable[[], t.Any]]]:
- # If we're a non boolean flag our default is more complex because
- # we need to look at all flags in the same group to figure out
- # if we're the default one in which case we return the flag
- # value as default.
- if self.is_flag and not self.is_bool_flag:
- for param in ctx.command.params:
- if param.name == self.name and param.default:
- return t.cast(Option, param).flag_value
-
- return None
-
- return super().get_default(ctx, call=call)
-
- def prompt_for_value(self, ctx: Context) -> t.Any:
- """This is an alternative flow that can be activated in the full
- value processing if a value does not exist. It will prompt the
- user until a valid value exists and then returns the processed
-        value as the result.
- """
- assert self.prompt is not None
-
- # Calculate the default before prompting anything to be stable.
- default = self.get_default(ctx)
-
- # If this is a prompt for a flag we need to handle this
- # differently.
- if self.is_bool_flag:
- return confirm(self.prompt, default)
-
- return prompt(
- self.prompt,
- default=default,
- type=self.type,
- hide_input=self.hide_input,
- show_choices=self.show_choices,
- confirmation_prompt=self.confirmation_prompt,
- value_proc=lambda x: self.process_value(ctx, x),
- )
-
- def resolve_envvar_value(self, ctx: Context) -> t.Optional[str]:
- rv = super().resolve_envvar_value(ctx)
-
- if rv is not None:
- return rv
-
- if (
- self.allow_from_autoenv
- and ctx.auto_envvar_prefix is not None
- and self.name is not None
- ):
- envvar = f"{ctx.auto_envvar_prefix}_{self.name.upper()}"
- rv = os.environ.get(envvar)
-
- if rv:
- return rv
-
- return None
-
- def value_from_envvar(self, ctx: Context) -> t.Optional[t.Any]:
- rv: t.Optional[t.Any] = self.resolve_envvar_value(ctx)
-
- if rv is None:
- return None
-
- value_depth = (self.nargs != 1) + bool(self.multiple)
-
- if value_depth > 0:
- rv = self.type.split_envvar_value(rv)
-
- if self.multiple and self.nargs != 1:
- rv = batch(rv, self.nargs)
-
- return rv
-
- def consume_value(
- self, ctx: Context, opts: t.Mapping[str, "Parameter"]
- ) -> t.Tuple[t.Any, ParameterSource]:
- value, source = super().consume_value(ctx, opts)
-
- # The parser will emit a sentinel value if the option can be
- # given as a flag without a value. This is different from None
- # to distinguish from the flag not being given at all.
- if value is _flag_needs_value:
- if self.prompt is not None and not ctx.resilient_parsing:
- value = self.prompt_for_value(ctx)
- source = ParameterSource.PROMPT
- else:
- value = self.flag_value
- source = ParameterSource.COMMANDLINE
-
- elif (
- self.multiple
- and value is not None
- and any(v is _flag_needs_value for v in value)
- ):
- value = [self.flag_value if v is _flag_needs_value else v for v in value]
- source = ParameterSource.COMMANDLINE
-
- # The value wasn't set, or used the param's default, prompt if
- # prompting is enabled.
- elif (
- source in {None, ParameterSource.DEFAULT}
- and self.prompt is not None
- and (self.required or self.prompt_required)
- and not ctx.resilient_parsing
- ):
- value = self.prompt_for_value(ctx)
- source = ParameterSource.PROMPT
-
- return value, source
-
-
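-# --- Editor's note: illustrative usage sketch, not part of the deleted click
-# source. It exercises several Option features documented above (count flags,
-# paired boolean flags, prompts, multiple, envvar); the ``deploy`` command and
-# the EXAMPLE_TOKEN variable are invented.
-import click
-
-@click.command()
-@click.option("--verbose", "-v", count=True, help="Repeat for more detail.")
-@click.option("--color/--no-color", default=True, show_default=True)
-@click.option("--password", prompt=True, hide_input=True, confirmation_prompt=True)
-@click.option("--tag", multiple=True, help="May be given several times.")
-@click.option("--token", envvar="EXAMPLE_TOKEN", show_envvar=True)
-def deploy(verbose, color, password, tag, token):
-    click.echo(f"verbosity={verbose}, color={color}, tags={list(tag)}")
-    click.echo(f"token provided: {token is not None}")
-
-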
-class Argument(Parameter):
- """Arguments are positional parameters to a command. They generally
- provide fewer features than options but can have infinite ``nargs``
- and are required by default.
-
- All parameters are passed onwards to the constructor of :class:`Parameter`.
- """
-
- param_type_name = "argument"
-
- def __init__(
- self,
- param_decls: t.Sequence[str],
- required: t.Optional[bool] = None,
- **attrs: t.Any,
- ) -> None:
- if required is None:
- if attrs.get("default") is not None:
- required = False
- else:
- required = attrs.get("nargs", 1) > 0
-
- if "multiple" in attrs:
- raise TypeError("__init__() got an unexpected keyword argument 'multiple'.")
-
- super().__init__(param_decls, required=required, **attrs)
-
- if __debug__:
- if self.default is not None and self.nargs == -1:
- raise TypeError("'default' is not supported for nargs=-1.")
-
- @property
- def human_readable_name(self) -> str:
- if self.metavar is not None:
- return self.metavar
- return self.name.upper() # type: ignore
-
- def make_metavar(self) -> str:
- if self.metavar is not None:
- return self.metavar
- var = self.type.get_metavar(self)
- if not var:
- var = self.name.upper() # type: ignore
- if not self.required:
- var = f"[{var}]"
- if self.nargs != 1:
- var += "..."
- return var
-
- def _parse_decls(
- self, decls: t.Sequence[str], expose_value: bool
- ) -> t.Tuple[t.Optional[str], t.List[str], t.List[str]]:
- if not decls:
- if not expose_value:
- return None, [], []
- raise TypeError("Could not determine name for argument")
- if len(decls) == 1:
- name = arg = decls[0]
- name = name.replace("-", "_").lower()
- else:
- raise TypeError(
- "Arguments take exactly one parameter declaration, got"
- f" {len(decls)}."
- )
- return name, [arg], []
-
- def get_usage_pieces(self, ctx: Context) -> t.List[str]:
- return [self.make_metavar()]
-
- def get_error_hint(self, ctx: Context) -> str:
- return f"'{self.make_metavar()}'"
-
- def add_to_parser(self, parser: OptionParser, ctx: Context) -> None:
- parser.add_argument(dest=self.name, nargs=self.nargs, obj=self)
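-
-# --- Editor's note: illustrative usage sketch, not part of the deleted click
-# source. It shows Argument with nargs=-1 as implemented above; the ``copy``
-# command is invented and mirrors the pattern from click's documentation.
-import click
-
-@click.command()
-@click.argument("src", nargs=-1, type=click.Path(exists=True))
-@click.argument("dst", type=click.Path())
-def copy(src, dst):
-    # nargs=-1 collects all remaining values into a tuple; such a variadic
-    # argument is not required by default.
-    for path in src:
-        click.echo(f"copy {path} -> {dst}")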
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fastapi/openapi/__init__.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fastapi/openapi/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/declare-lab/tango/diffusers/examples/community/speech_to_image_diffusion.py b/spaces/declare-lab/tango/diffusers/examples/community/speech_to_image_diffusion.py
deleted file mode 100644
index 45050137c7683d6f96886bba27c9750138c0c326..0000000000000000000000000000000000000000
--- a/spaces/declare-lab/tango/diffusers/examples/community/speech_to_image_diffusion.py
+++ /dev/null
@@ -1,261 +0,0 @@
-import inspect
-from typing import Callable, List, Optional, Union
-
-import torch
-from transformers import (
- CLIPImageProcessor,
- CLIPTextModel,
- CLIPTokenizer,
- WhisperForConditionalGeneration,
- WhisperProcessor,
-)
-
-from diffusers import (
- AutoencoderKL,
- DDIMScheduler,
- DiffusionPipeline,
- LMSDiscreteScheduler,
- PNDMScheduler,
- UNet2DConditionModel,
-)
-from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion import StableDiffusionPipelineOutput
-from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
-from diffusers.utils import logging
-
-
-logger = logging.get_logger(__name__) # pylint: disable=invalid-name
-
-
-class SpeechToImagePipeline(DiffusionPipeline):
- def __init__(
- self,
- speech_model: WhisperForConditionalGeneration,
- speech_processor: WhisperProcessor,
- vae: AutoencoderKL,
- text_encoder: CLIPTextModel,
- tokenizer: CLIPTokenizer,
- unet: UNet2DConditionModel,
- scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler],
- safety_checker: StableDiffusionSafetyChecker,
- feature_extractor: CLIPImageProcessor,
- ):
- super().__init__()
-
- if safety_checker is None:
- logger.warning(
- f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
-                " that you abide by the conditions of the Stable Diffusion license and do not expose unfiltered"
-                " results in services or applications open to the public. Both the diffusers team and Hugging Face"
-                " strongly recommend keeping the safety filter enabled in all public-facing circumstances, disabling"
- " it only for use-cases that involve analyzing network behavior or auditing its results. For more"
- " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
- )
-
- self.register_modules(
- speech_model=speech_model,
- speech_processor=speech_processor,
- vae=vae,
- text_encoder=text_encoder,
- tokenizer=tokenizer,
- unet=unet,
- scheduler=scheduler,
- feature_extractor=feature_extractor,
- )
-
- def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"):
- if slice_size == "auto":
- slice_size = self.unet.config.attention_head_dim // 2
- self.unet.set_attention_slice(slice_size)
-
- def disable_attention_slicing(self):
- self.enable_attention_slicing(None)
-
- @torch.no_grad()
- def __call__(
- self,
- audio,
- sampling_rate=16_000,
- height: int = 512,
- width: int = 512,
- num_inference_steps: int = 50,
- guidance_scale: float = 7.5,
- negative_prompt: Optional[Union[str, List[str]]] = None,
- num_images_per_prompt: Optional[int] = 1,
- eta: float = 0.0,
- generator: Optional[torch.Generator] = None,
- latents: Optional[torch.FloatTensor] = None,
- output_type: Optional[str] = "pil",
- return_dict: bool = True,
- callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
- callback_steps: int = 1,
- **kwargs,
- ):
- inputs = self.speech_processor.feature_extractor(
- audio, return_tensors="pt", sampling_rate=sampling_rate
- ).input_features.to(self.device)
- predicted_ids = self.speech_model.generate(inputs, max_length=480_000)
-
- prompt = self.speech_processor.tokenizer.batch_decode(predicted_ids, skip_special_tokens=True, normalize=True)[
- 0
- ]
-
- if isinstance(prompt, str):
- batch_size = 1
- elif isinstance(prompt, list):
- batch_size = len(prompt)
- else:
- raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
-
- if height % 8 != 0 or width % 8 != 0:
- raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
-
- if (callback_steps is None) or (
- callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
- ):
- raise ValueError(
- f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
- f" {type(callback_steps)}."
- )
-
- # get prompt text embeddings
- text_inputs = self.tokenizer(
- prompt,
- padding="max_length",
- max_length=self.tokenizer.model_max_length,
- return_tensors="pt",
- )
- text_input_ids = text_inputs.input_ids
-
- if text_input_ids.shape[-1] > self.tokenizer.model_max_length:
- removed_text = self.tokenizer.batch_decode(text_input_ids[:, self.tokenizer.model_max_length :])
- logger.warning(
- "The following part of your input was truncated because CLIP can only handle sequences up to"
- f" {self.tokenizer.model_max_length} tokens: {removed_text}"
- )
- text_input_ids = text_input_ids[:, : self.tokenizer.model_max_length]
- text_embeddings = self.text_encoder(text_input_ids.to(self.device))[0]
-
- # duplicate text embeddings for each generation per prompt, using mps friendly method
- bs_embed, seq_len, _ = text_embeddings.shape
- text_embeddings = text_embeddings.repeat(1, num_images_per_prompt, 1)
- text_embeddings = text_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1)
-
- # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
- # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
- # corresponds to doing no classifier free guidance.
- do_classifier_free_guidance = guidance_scale > 1.0
- # get unconditional embeddings for classifier free guidance
- if do_classifier_free_guidance:
- uncond_tokens: List[str]
- if negative_prompt is None:
- uncond_tokens = [""] * batch_size
- elif type(prompt) is not type(negative_prompt):
- raise TypeError(
- f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
- f" {type(prompt)}."
- )
- elif isinstance(negative_prompt, str):
- uncond_tokens = [negative_prompt]
- elif batch_size != len(negative_prompt):
- raise ValueError(
- f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
- f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
- " the batch size of `prompt`."
- )
- else:
- uncond_tokens = negative_prompt
-
- max_length = text_input_ids.shape[-1]
- uncond_input = self.tokenizer(
- uncond_tokens,
- padding="max_length",
- max_length=max_length,
- truncation=True,
- return_tensors="pt",
- )
- uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0]
-
- # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
- seq_len = uncond_embeddings.shape[1]
- uncond_embeddings = uncond_embeddings.repeat(1, num_images_per_prompt, 1)
- uncond_embeddings = uncond_embeddings.view(batch_size * num_images_per_prompt, seq_len, -1)
-
- # For classifier free guidance, we need to do two forward passes.
- # Here we concatenate the unconditional and text embeddings into a single batch
- # to avoid doing two forward passes
- text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
-
- # get the initial random noise unless the user supplied it
-
- # Unlike in other pipelines, latents need to be generated in the target device
- # for 1-to-1 results reproducibility with the CompVis implementation.
- # However this currently doesn't work in `mps`.
- latents_shape = (batch_size * num_images_per_prompt, self.unet.in_channels, height // 8, width // 8)
- latents_dtype = text_embeddings.dtype
- if latents is None:
- if self.device.type == "mps":
- # randn does not exist on mps
- latents = torch.randn(latents_shape, generator=generator, device="cpu", dtype=latents_dtype).to(
- self.device
- )
- else:
- latents = torch.randn(latents_shape, generator=generator, device=self.device, dtype=latents_dtype)
- else:
- if latents.shape != latents_shape:
- raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}")
- latents = latents.to(self.device)
-
- # set timesteps
- self.scheduler.set_timesteps(num_inference_steps)
-
- # Some schedulers like PNDM have timesteps as arrays
- # It's more optimized to move all timesteps to correct device beforehand
- timesteps_tensor = self.scheduler.timesteps.to(self.device)
-
- # scale the initial noise by the standard deviation required by the scheduler
- latents = latents * self.scheduler.init_noise_sigma
-
- # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
- # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
- # and should be between [0, 1]
- accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
- extra_step_kwargs = {}
- if accepts_eta:
- extra_step_kwargs["eta"] = eta
-
- for i, t in enumerate(self.progress_bar(timesteps_tensor)):
- # expand the latents if we are doing classifier free guidance
- latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
- latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
-
- # predict the noise residual
- noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
-
- # perform guidance
- if do_classifier_free_guidance:
- noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
- noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
-
- # compute the previous noisy sample x_t -> x_t-1
- latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
-
- # call the callback, if provided
- if callback is not None and i % callback_steps == 0:
- callback(i, t, latents)
-
- latents = 1 / 0.18215 * latents
- image = self.vae.decode(latents).sample
-
- image = (image / 2 + 0.5).clamp(0, 1)
-
- # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
- image = image.cpu().permute(0, 2, 3, 1).float().numpy()
-
- if output_type == "pil":
- image = self.numpy_to_pil(image)
-
- if not return_dict:
- return image
-
- return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=None)
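
For context, community pipelines like the one deleted above are normally loaded through `DiffusionPipeline.from_pretrained` with a `custom_pipeline` argument. The sketch below is illustrative only: the `openai/whisper-small` and `runwayml/stable-diffusion-v1-5` checkpoints, the `custom_pipeline` name, and the input file `speech.wav` are assumptions and do not appear in the diff.

```python
# Illustrative sketch (checkpoints, pipeline name and input file are assumptions).
import librosa
import torch
from diffusers import DiffusionPipeline
from transformers import WhisperForConditionalGeneration, WhisperProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"

speech_model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small").to(device)
speech_processor = WhisperProcessor.from_pretrained("openai/whisper-small")

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    custom_pipeline="speech_to_image_diffusion",  # community pipeline name assumed from the file above
    speech_model=speech_model,
    speech_processor=speech_processor,
).to(device)

# Whisper transcribes the clip into a prompt, which then drives the usual Stable Diffusion loop.
audio, sr = librosa.load("speech.wav", sr=16_000)
image = pipe(audio, sampling_rate=sr).images[0]
image.save("speech_to_image.png")
```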
diff --git a/spaces/declare-lab/tango/diffusers/examples/rl/README.md b/spaces/declare-lab/tango/diffusers/examples/rl/README.md
deleted file mode 100644
index 17881d584a4043156b784a152253b0f83598ced9..0000000000000000000000000000000000000000
--- a/spaces/declare-lab/tango/diffusers/examples/rl/README.md
+++ /dev/null
@@ -1,22 +0,0 @@
-# Overview
-
-These examples show how to run [Diffuser](https://arxiv.org/abs/2205.09991) in Diffusers.
-There are two ways to use the script `run_diffuser_locomotion.py`.
-
-The key option is the variable `n_guide_steps`.
-When `n_guide_steps=0`, trajectories are sampled from the diffusion model but are not fine-tuned to maximize reward in the environment.
-By default, `n_guide_steps=2`, matching the original implementation.
-
-
-You will need some RL specific requirements to run the examples:
-
-```
-pip install -f https://download.pytorch.org/whl/torch_stable.html \
- free-mujoco-py \
- einops \
- gym==0.24.1 \
- protobuf==3.20.1 \
- git+https://github.com/rail-berkeley/d4rl.git \
- mediapy \
- Pillow==9.0.0
-```
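
A rough sketch of how such a locomotion run is typically driven. Hedged: the `ValueGuidedRLPipeline` import path, the call signature, and the `bglick13/hopper-medium-v2-value-function-hor32` checkpoint name are assumptions that do not appear in the README above.

```python
# Rough sketch; the pipeline class, its call signature and the checkpoint name are assumptions.
import d4rl  # noqa: F401  (registers the offline-RL gym environments)
import gym
from diffusers.experimental import ValueGuidedRLPipeline

env = gym.make("hopper-medium-v2")
pipeline = ValueGuidedRLPipeline.from_pretrained(
    "bglick13/hopper-medium-v2-value-function-hor32", env=env
)

obs = env.reset()
for _ in range(100):
    # n_guide_steps=0 -> plain diffusion samples; n_guide_steps=2 -> value-guided refinement
    action = pipeline(obs, planning_horizon=32, n_guide_steps=2, scale=0.1)
    obs, reward, done, _ = env.step(action)
    if done:
        break
env.close()
```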
diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/spectrogram_diffusion/continous_encoder.py b/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/spectrogram_diffusion/continous_encoder.py
deleted file mode 100644
index 556136d4023df32e4df2477523463829a0722db4..0000000000000000000000000000000000000000
--- a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/spectrogram_diffusion/continous_encoder.py
+++ /dev/null
@@ -1,92 +0,0 @@
-# Copyright 2022 The Music Spectrogram Diffusion Authors.
-# Copyright 2023 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import torch
-import torch.nn as nn
-from transformers.modeling_utils import ModuleUtilsMixin
-from transformers.models.t5.modeling_t5 import (
- T5Block,
- T5Config,
- T5LayerNorm,
-)
-
-from ...configuration_utils import ConfigMixin, register_to_config
-from ...models import ModelMixin
-
-
-class SpectrogramContEncoder(ModelMixin, ConfigMixin, ModuleUtilsMixin):
- @register_to_config
- def __init__(
- self,
- input_dims: int,
- targets_context_length: int,
- d_model: int,
- dropout_rate: float,
- num_layers: int,
- num_heads: int,
- d_kv: int,
- d_ff: int,
- feed_forward_proj: str,
- is_decoder: bool = False,
- ):
- super().__init__()
-
- self.input_proj = nn.Linear(input_dims, d_model, bias=False)
-
- self.position_encoding = nn.Embedding(targets_context_length, d_model)
- self.position_encoding.weight.requires_grad = False
-
- self.dropout_pre = nn.Dropout(p=dropout_rate)
-
- t5config = T5Config(
- d_model=d_model,
- num_heads=num_heads,
- d_kv=d_kv,
- d_ff=d_ff,
- feed_forward_proj=feed_forward_proj,
- dropout_rate=dropout_rate,
- is_decoder=is_decoder,
- is_encoder_decoder=False,
- )
- self.encoders = nn.ModuleList()
- for lyr_num in range(num_layers):
- lyr = T5Block(t5config)
- self.encoders.append(lyr)
-
- self.layer_norm = T5LayerNorm(d_model)
- self.dropout_post = nn.Dropout(p=dropout_rate)
-
- def forward(self, encoder_inputs, encoder_inputs_mask):
- x = self.input_proj(encoder_inputs)
-
- # terminal relative positional encodings
- max_positions = encoder_inputs.shape[1]
- input_positions = torch.arange(max_positions, device=encoder_inputs.device)
-
- seq_lens = encoder_inputs_mask.sum(-1)
- input_positions = torch.roll(input_positions.unsqueeze(0), tuple(seq_lens.tolist()), dims=0)
- x += self.position_encoding(input_positions)
-
- x = self.dropout_pre(x)
-
-        # invert the attention mask
- input_shape = encoder_inputs.size()
- extended_attention_mask = self.get_extended_attention_mask(encoder_inputs_mask, input_shape)
-
- for lyr in self.encoders:
- x = lyr(x, extended_attention_mask)[0]
- x = self.layer_norm(x)
-
- return self.dropout_post(x), encoder_inputs_mask
diff --git a/spaces/declare-lab/tango/diffusers/tests/pipelines/unclip/__init__.py b/spaces/declare-lab/tango/diffusers/tests/pipelines/unclip/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/deeplearning/audioldm-text-to-audio-generation/audioldm/latent_diffusion/ema.py b/spaces/deeplearning/audioldm-text-to-audio-generation/audioldm/latent_diffusion/ema.py
deleted file mode 100644
index 192b012186bab3d8a5380bc9b891da8eef0fd9fa..0000000000000000000000000000000000000000
--- a/spaces/deeplearning/audioldm-text-to-audio-generation/audioldm/latent_diffusion/ema.py
+++ /dev/null
@@ -1,81 +0,0 @@
-import torch
-from torch import nn
-
-class LitEma(nn.Module):
- def __init__(self, model, decay=0.9999, use_num_upates=True):
- super().__init__()
- if decay < 0.0 or decay > 1.0:
- raise ValueError("Decay must be between 0 and 1")
-
- self.m_name2s_name = {}
- self.register_buffer("decay", torch.tensor(decay, dtype=torch.float32))
- self.register_buffer(
- "num_updates",
- torch.tensor(0, dtype=torch.int)
- if use_num_upates
- else torch.tensor(-1, dtype=torch.int),
- )
-
- for name, p in model.named_parameters():
- if p.requires_grad:
- # remove as '.'-character is not allowed in buffers
- s_name = name.replace(".", "")
- self.m_name2s_name.update({name: s_name})
- self.register_buffer(s_name, p.clone().detach().data)
-
- self.collected_params = []
-
- def forward(self, model):
- decay = self.decay
-
- if self.num_updates >= 0:
- self.num_updates += 1
- decay = min(self.decay, (1 + self.num_updates) / (10 + self.num_updates))
-
- one_minus_decay = 1.0 - decay
-
- with torch.no_grad():
- m_param = dict(model.named_parameters())
- shadow_params = dict(self.named_buffers())
-
- for key in m_param:
- if m_param[key].requires_grad:
- sname = self.m_name2s_name[key]
- shadow_params[sname] = shadow_params[sname].type_as(m_param[key])
- shadow_params[sname].sub_(
- one_minus_decay * (shadow_params[sname] - m_param[key])
- )
- else:
- assert not key in self.m_name2s_name
-
- def copy_to(self, model):
- m_param = dict(model.named_parameters())
- shadow_params = dict(self.named_buffers())
- for key in m_param:
- if m_param[key].requires_grad:
- m_param[key].data.copy_(shadow_params[self.m_name2s_name[key]].data)
- else:
- assert not key in self.m_name2s_name
-
- def store(self, parameters):
- """
- Save the current parameters for restoring later.
- Args:
- parameters: Iterable of `torch.nn.Parameter`; the parameters to be
- temporarily stored.
- """
- self.collected_params = [param.clone() for param in parameters]
-
- def restore(self, parameters):
- """
- Restore the parameters stored with the `store` method.
- Useful to validate the model with EMA parameters without affecting the
- original optimization process. Store the parameters before the
- `copy_to` method. After validation (or model saving), use this to
- restore the former parameters.
- Args:
- parameters: Iterable of `torch.nn.Parameter`; the parameters to be
- updated with the stored parameters.
- """
- for c_param, param in zip(self.collected_params, parameters):
- param.data.copy_(c_param.data)
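
The class above follows the usual shadow-weights pattern: call the EMA object after every optimizer step, then use `store`/`copy_to`/`restore` around evaluation. A minimal sketch of that workflow; the model, optimizer and data below are placeholders.

```python
# Minimal sketch of the intended workflow; model, optimizer and data are placeholders.
import torch
from torch import nn
from audioldm.latent_diffusion.ema import LitEma

model = nn.Linear(10, 1)
ema = LitEma(model, decay=0.9999)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(100):
    x, y = torch.randn(8, 10), torch.randn(8, 1)
    loss = nn.functional.mse_loss(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema(model)  # update the shadow (EMA) copies of all trainable parameters

# evaluate with the smoothed weights without losing the raw ones
ema.store(model.parameters())
ema.copy_to(model)
# ... run validation with `model` here ...
ema.restore(model.parameters())
```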
diff --git a/spaces/deepwisdom/MetaGPT/metagpt/actions/talk_action.py b/spaces/deepwisdom/MetaGPT/metagpt/actions/talk_action.py
deleted file mode 100644
index 4eed0d4f8ac486eaba04debb6631153f0669a2d6..0000000000000000000000000000000000000000
--- a/spaces/deepwisdom/MetaGPT/metagpt/actions/talk_action.py
+++ /dev/null
@@ -1,54 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
-"""
-@Time : 2023/8/28
-@Author : mashenquan
-@File : talk_action.py
-@Desc : Act as if it's a talk
-"""
-
-from metagpt.actions import Action, ActionOutput
-from metagpt.config import CONFIG
-from metagpt.const import DEFAULT_LANGUAGE
-from metagpt.logs import logger
-
-
-class TalkAction(Action):
- def __init__(self, name: str = "", talk="", history_summary="", knowledge="", context=None, llm=None, **kwargs):
- context = context or {}
- context["talk"] = talk
- context["history_summery"] = history_summary
- context["knowledge"] = knowledge
- super(TalkAction, self).__init__(name=name, context=context, llm=llm)
- self._talk = talk
- self._history_summary = history_summary
- self._knowledge = knowledge
- self._rsp = None
-
- @property
- def prompt(self):
- prompt = ""
- if CONFIG.agent_description:
- prompt = (
- f"You are {CONFIG.agent_description}. Your responses should align with the role-play agreement, "
- f"maintaining the character's persona and habits. When faced with unrelated questions, playfully "
- f"decline to answer without revealing your AI nature to preserve the character's image.\n\n"
- )
- prompt += f"Background knowledge:\n{self._knowledge}\n\n" if self._knowledge else ""
- prompt += f"{self._history_summary}\n\n"
- if self._history_summary != "":
- prompt += "According to the historical conversation above, "
- language = CONFIG.language or DEFAULT_LANGUAGE
- prompt += (
- f"Answer the following questions in {language}, and the answers must follow the Markdown format.\n "
- f"{self._talk}"
- )
- return prompt
-
- async def run(self, *args, **kwargs) -> ActionOutput:
- prompt = self.prompt
- logger.info(prompt)
- rsp = await self.llm.aask(msg=prompt, system_msgs=[])
- logger.info(rsp)
- self._rsp = ActionOutput(content=rsp)
- return self._rsp
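
A usage sketch for the action above, assuming MetaGPT is installed and an LLM API key is configured in its settings; inspecting `.prompt` needs no API call, while `run()` does. The example values are placeholders.

```python
# Sketch only: assumes a configured MetaGPT LLM backend; values are examples.
import asyncio
from metagpt.actions.talk_action import TalkAction

action = TalkAction(
    talk="What's the difference between a list and a tuple in Python?",
    history_summary="The user has been asking beginner-level Python questions.",
    knowledge="Lists are mutable; tuples are immutable.",
)
print(action.prompt)                 # inspect the assembled prompt locally
output = asyncio.run(action.run())   # calls the configured LLM
print(output.content)
```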
diff --git a/spaces/dexxxed/remove-object-from-photo/src/core.py b/spaces/dexxxed/remove-object-from-photo/src/core.py
deleted file mode 100644
index 9706f344d99877b9f8ea6d383ef030c0a4aebdfa..0000000000000000000000000000000000000000
--- a/spaces/dexxxed/remove-object-from-photo/src/core.py
+++ /dev/null
@@ -1,466 +0,0 @@
-import base64
-import json
-import os
-import re
-import time
-import uuid
-from io import BytesIO
-from pathlib import Path
-import cv2
-
-# For inpainting
-
-import numpy as np
-import pandas as pd
-import streamlit as st
-from PIL import Image
-from streamlit_drawable_canvas import st_canvas
-
-
-import argparse
-import io
-import multiprocessing
-from typing import Union
-
-import torch
-
-try:
- torch._C._jit_override_can_fuse_on_cpu(False)
- torch._C._jit_override_can_fuse_on_gpu(False)
- torch._C._jit_set_texpr_fuser_enabled(False)
- torch._C._jit_set_nvfuser_enabled(False)
-except:
- pass
-
-from src.helper import (
- download_model,
- load_img,
- norm_img,
- numpy_to_bytes,
- pad_img_to_modulo,
- resize_max_size,
-)
-
-NUM_THREADS = str(multiprocessing.cpu_count())
-
-os.environ["OMP_NUM_THREADS"] = NUM_THREADS
-os.environ["OPENBLAS_NUM_THREADS"] = NUM_THREADS
-os.environ["MKL_NUM_THREADS"] = NUM_THREADS
-os.environ["VECLIB_MAXIMUM_THREADS"] = NUM_THREADS
-os.environ["NUMEXPR_NUM_THREADS"] = NUM_THREADS
-if os.environ.get("CACHE_DIR"):
- os.environ["TORCH_HOME"] = os.environ["CACHE_DIR"]
-
-#BUILD_DIR = os.environ.get("LAMA_CLEANER_BUILD_DIR", "./lama_cleaner/app/build")
-
-# For Seam-carving
-
-from scipy import ndimage as ndi
-
-SEAM_COLOR = np.array([255, 200, 200]) # seam visualization color (BGR)
-SHOULD_DOWNSIZE = True # if True, downsize image for faster carving
-DOWNSIZE_WIDTH = 500 # resized image width if SHOULD_DOWNSIZE is True
-ENERGY_MASK_CONST = 100000.0 # large energy value for protective masking
-MASK_THRESHOLD = 10 # minimum pixel intensity for binary mask
-USE_FORWARD_ENERGY = True # if True, use forward energy algorithm
-
-device = torch.device("cpu")
-model_path = "./assets/big-lama.pt"
-model = torch.jit.load(model_path, map_location="cpu")
-model = model.to(device)
-model.eval()
-
-
-########################################
-# UTILITY CODE
-########################################
-
-
-def visualize(im, boolmask=None, rotate=False):
- vis = im.astype(np.uint8)
- if boolmask is not None:
- vis[np.where(boolmask == False)] = SEAM_COLOR
- if rotate:
- vis = rotate_image(vis, False)
- cv2.imshow("visualization", vis)
- cv2.waitKey(1)
- return vis
-
-def resize(image, width):
- dim = None
- h, w = image.shape[:2]
- dim = (width, int(h * width / float(w)))
- image = image.astype('float32')
- return cv2.resize(image, dim)
-
-def rotate_image(image, clockwise):
- k = 1 if clockwise else 3
- return np.rot90(image, k)
-
-
-########################################
-# ENERGY FUNCTIONS
-########################################
-
-def backward_energy(im):
- """
- Simple gradient magnitude energy map.
- """
- xgrad = ndi.convolve1d(im, np.array([1, 0, -1]), axis=1, mode='wrap')
- ygrad = ndi.convolve1d(im, np.array([1, 0, -1]), axis=0, mode='wrap')
-
- grad_mag = np.sqrt(np.sum(xgrad**2, axis=2) + np.sum(ygrad**2, axis=2))
-
- # vis = visualize(grad_mag)
- # cv2.imwrite("backward_energy_demo.jpg", vis)
-
- return grad_mag
-
-def forward_energy(im):
- """
- Forward energy algorithm as described in "Improved Seam Carving for Video Retargeting"
- by Rubinstein, Shamir, Avidan.
- Vectorized code adapted from
- https://github.com/axu2/improved-seam-carving.
- """
- h, w = im.shape[:2]
- im = cv2.cvtColor(im.astype(np.uint8), cv2.COLOR_BGR2GRAY).astype(np.float64)
-
- energy = np.zeros((h, w))
- m = np.zeros((h, w))
-
- U = np.roll(im, 1, axis=0)
- L = np.roll(im, 1, axis=1)
- R = np.roll(im, -1, axis=1)
-
- cU = np.abs(R - L)
- cL = np.abs(U - L) + cU
- cR = np.abs(U - R) + cU
-
- for i in range(1, h):
- mU = m[i-1]
- mL = np.roll(mU, 1)
- mR = np.roll(mU, -1)
-
- mULR = np.array([mU, mL, mR])
- cULR = np.array([cU[i], cL[i], cR[i]])
- mULR += cULR
-
- argmins = np.argmin(mULR, axis=0)
- m[i] = np.choose(argmins, mULR)
- energy[i] = np.choose(argmins, cULR)
-
- # vis = visualize(energy)
- # cv2.imwrite("forward_energy_demo.jpg", vis)
-
- return energy
-
-########################################
-# SEAM HELPER FUNCTIONS
-########################################
-
-def add_seam(im, seam_idx):
- """
- Add a vertical seam to a 3-channel color image at the indices provided
- by averaging the pixels values to the left and right of the seam.
- Code adapted from https://github.com/vivianhylee/seam-carving.
- """
- h, w = im.shape[:2]
- output = np.zeros((h, w + 1, 3))
- for row in range(h):
- col = seam_idx[row]
- for ch in range(3):
- if col == 0:
- p = np.mean(im[row, col: col + 2, ch])
- output[row, col, ch] = im[row, col, ch]
- output[row, col + 1, ch] = p
- output[row, col + 1:, ch] = im[row, col:, ch]
- else:
- p = np.mean(im[row, col - 1: col + 1, ch])
- output[row, : col, ch] = im[row, : col, ch]
- output[row, col, ch] = p
- output[row, col + 1:, ch] = im[row, col:, ch]
-
- return output
-
-def add_seam_grayscale(im, seam_idx):
- """
- Add a vertical seam to a grayscale image at the indices provided
- by averaging the pixels values to the left and right of the seam.
- """
- h, w = im.shape[:2]
- output = np.zeros((h, w + 1))
- for row in range(h):
- col = seam_idx[row]
- if col == 0:
- p = np.mean(im[row, col: col + 2])
- output[row, col] = im[row, col]
- output[row, col + 1] = p
- output[row, col + 1:] = im[row, col:]
- else:
- p = np.mean(im[row, col - 1: col + 1])
- output[row, : col] = im[row, : col]
- output[row, col] = p
- output[row, col + 1:] = im[row, col:]
-
- return output
-
-def remove_seam(im, boolmask):
- h, w = im.shape[:2]
- boolmask3c = np.stack([boolmask] * 3, axis=2)
- return im[boolmask3c].reshape((h, w - 1, 3))
-
-def remove_seam_grayscale(im, boolmask):
- h, w = im.shape[:2]
- return im[boolmask].reshape((h, w - 1))
-
-def get_minimum_seam(im, mask=None, remove_mask=None):
- """
- DP algorithm for finding the seam of minimum energy. Code adapted from
- https://karthikkaranth.me/blog/implementing-seam-carving-with-python/
- """
- h, w = im.shape[:2]
- energyfn = forward_energy if USE_FORWARD_ENERGY else backward_energy
- M = energyfn(im)
-
- if mask is not None:
- M[np.where(mask > MASK_THRESHOLD)] = ENERGY_MASK_CONST
-
- # give removal mask priority over protective mask by using larger negative value
- if remove_mask is not None:
- M[np.where(remove_mask > MASK_THRESHOLD)] = -ENERGY_MASK_CONST * 100
-
- seam_idx, boolmask = compute_shortest_path(M, im, h, w)
-
- return np.array(seam_idx), boolmask
-
-def compute_shortest_path(M, im, h, w):
- backtrack = np.zeros_like(M, dtype=np.int_)
-
-
- # populate DP matrix
- for i in range(1, h):
- for j in range(0, w):
- if j == 0:
- idx = np.argmin(M[i - 1, j:j + 2])
- backtrack[i, j] = idx + j
- min_energy = M[i-1, idx + j]
- else:
- idx = np.argmin(M[i - 1, j - 1:j + 2])
- backtrack[i, j] = idx + j - 1
- min_energy = M[i - 1, idx + j - 1]
-
- M[i, j] += min_energy
-
- # backtrack to find path
- seam_idx = []
- boolmask = np.ones((h, w), dtype=np.bool_)
- j = np.argmin(M[-1])
- for i in range(h-1, -1, -1):
- boolmask[i, j] = False
- seam_idx.append(j)
- j = backtrack[i, j]
-
- seam_idx.reverse()
- return seam_idx, boolmask
-
-########################################
-# MAIN ALGORITHM
-########################################
-
-def seams_removal(im, num_remove, mask=None, vis=False, rot=False):
- for _ in range(num_remove):
- seam_idx, boolmask = get_minimum_seam(im, mask)
- if vis:
- visualize(im, boolmask, rotate=rot)
- im = remove_seam(im, boolmask)
- if mask is not None:
- mask = remove_seam_grayscale(mask, boolmask)
- return im, mask
-
-
-def seams_insertion(im, num_add, mask=None, vis=False, rot=False):
- seams_record = []
- temp_im = im.copy()
- temp_mask = mask.copy() if mask is not None else None
-
- for _ in range(num_add):
- seam_idx, boolmask = get_minimum_seam(temp_im, temp_mask)
- if vis:
- visualize(temp_im, boolmask, rotate=rot)
-
- seams_record.append(seam_idx)
- temp_im = remove_seam(temp_im, boolmask)
- if temp_mask is not None:
- temp_mask = remove_seam_grayscale(temp_mask, boolmask)
-
- seams_record.reverse()
-
- for _ in range(num_add):
- seam = seams_record.pop()
- im = add_seam(im, seam)
- if vis:
- visualize(im, rotate=rot)
- if mask is not None:
- mask = add_seam_grayscale(mask, seam)
-
- # update the remaining seam indices
- for remaining_seam in seams_record:
- remaining_seam[np.where(remaining_seam >= seam)] += 2
-
- return im, mask
-
-########################################
-# MAIN DRIVER FUNCTIONS
-########################################
-
-def seam_carve(im, dy, dx, mask=None, vis=False):
- im = im.astype(np.float64)
- h, w = im.shape[:2]
- assert h + dy > 0 and w + dx > 0 and dy <= h and dx <= w
-
- if mask is not None:
- mask = mask.astype(np.float64)
-
- output = im
-
- if dx < 0:
- output, mask = seams_removal(output, -dx, mask, vis)
-
- elif dx > 0:
- output, mask = seams_insertion(output, dx, mask, vis)
-
- if dy < 0:
- output = rotate_image(output, True)
- if mask is not None:
- mask = rotate_image(mask, True)
- output, mask = seams_removal(output, -dy, mask, vis, rot=True)
- output = rotate_image(output, False)
-
- elif dy > 0:
- output = rotate_image(output, True)
- if mask is not None:
- mask = rotate_image(mask, True)
- output, mask = seams_insertion(output, dy, mask, vis, rot=True)
- output = rotate_image(output, False)
-
- return output
-
-
-def object_removal(im, rmask, mask=None, vis=False, horizontal_removal=False):
- im = im.astype(np.float64)
- rmask = rmask.astype(np.float64)
- if mask is not None:
- mask = mask.astype(np.float64)
- output = im
-
- h, w = im.shape[:2]
-
- if horizontal_removal:
- output = rotate_image(output, True)
- rmask = rotate_image(rmask, True)
- if mask is not None:
- mask = rotate_image(mask, True)
-
- while len(np.where(rmask > MASK_THRESHOLD)[0]) > 0:
- seam_idx, boolmask = get_minimum_seam(output, mask, rmask)
- if vis:
- visualize(output, boolmask, rotate=horizontal_removal)
- output = remove_seam(output, boolmask)
- rmask = remove_seam_grayscale(rmask, boolmask)
- if mask is not None:
- mask = remove_seam_grayscale(mask, boolmask)
-
- num_add = (h if horizontal_removal else w) - output.shape[1]
- output, mask = seams_insertion(output, num_add, mask, vis, rot=horizontal_removal)
- if horizontal_removal:
- output = rotate_image(output, False)
-
- return output
-
-
-
-def s_image(im,mask,vs,hs,mode="resize"):
- im = cv2.cvtColor(im, cv2.COLOR_RGBA2RGB)
- mask = 255-mask[:,:,3]
- h, w = im.shape[:2]
- if SHOULD_DOWNSIZE and w > DOWNSIZE_WIDTH:
- im = resize(im, width=DOWNSIZE_WIDTH)
- if mask is not None:
- mask = resize(mask, width=DOWNSIZE_WIDTH)
-
- # image resize mode
- if mode=="resize":
- dy = hs#reverse
- dx = vs#reverse
- assert dy is not None and dx is not None
- output = seam_carve(im, dy, dx, mask, False)
-
-
- # object removal mode
- elif mode=="remove":
- assert mask is not None
- output = object_removal(im, mask, None, False, True)
-
- return output
-
-
-##### Inpainting helper code
-
-def run(image, mask):
- """
- image: [C, H, W]
- mask: [1, H, W]
- return: BGR IMAGE
- """
- origin_height, origin_width = image.shape[1:]
- image = pad_img_to_modulo(image, mod=8)
- mask = pad_img_to_modulo(mask, mod=8)
-
- mask = (mask > 0) * 1
- image = torch.from_numpy(image).unsqueeze(0).to(device)
- mask = torch.from_numpy(mask).unsqueeze(0).to(device)
-
- start = time.time()
- with torch.no_grad():
- inpainted_image = model(image, mask)
-
- print(f"process time: {(time.time() - start)*1000}ms")
- cur_res = inpainted_image[0].permute(1, 2, 0).detach().cpu().numpy()
- cur_res = cur_res[0:origin_height, 0:origin_width, :]
- cur_res = np.clip(cur_res * 255, 0, 255).astype("uint8")
- cur_res = cv2.cvtColor(cur_res, cv2.COLOR_BGR2RGB)
- return cur_res
-
-
-def get_args_parser():
- parser = argparse.ArgumentParser()
- parser.add_argument("--port", default=8080, type=int)
- parser.add_argument("--device", default="cuda", type=str)
- parser.add_argument("--debug", action="store_true")
- return parser.parse_args()
-
-
-def process_inpaint(image, mask):
- image = cv2.cvtColor(image, cv2.COLOR_RGBA2RGB)
- original_shape = image.shape
- interpolation = cv2.INTER_CUBIC
-
- #size_limit: Union[int, str] = request.form.get("sizeLimit", "1080")
- #if size_limit == "Original":
- size_limit = max(image.shape)
- #else:
- # size_limit = int(size_limit)
-
- print(f"Origin image shape: {original_shape}")
- image = resize_max_size(image, size_limit=size_limit, interpolation=interpolation)
- print(f"Resized image shape: {image.shape}")
- image = norm_img(image)
-
- mask = 255-mask[:,:,3]
- mask = resize_max_size(mask, size_limit=size_limit, interpolation=interpolation)
- mask = norm_img(mask)
-
- res_np_img = run(image, mask)
-
- return cv2.cvtColor(res_np_img, cv2.COLOR_BGR2RGB)
\ No newline at end of file
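
Outside the Streamlit UI, the inpainting entry point can be exercised directly. A sketch with placeholder file names; both inputs are expected as RGBA arrays, since `process_inpaint` builds the inpaint mask from `255 - alpha`.

```python
# Sketch with placeholder file names; both inputs must be RGBA arrays.
import numpy as np
from PIL import Image
from src.core import process_inpaint

image = np.array(Image.open("photo.png").convert("RGBA"))
mask = np.array(Image.open("mask.png").convert("RGBA"))   # alpha channel encodes the drawn region

result = process_inpaint(image, mask)                      # RGB uint8 array
Image.fromarray(result).save("inpainted.png")
```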
diff --git a/spaces/diacanFperku/AutoGPT/Factorytalk View Studio Activation Crack PATCHED.md b/spaces/diacanFperku/AutoGPT/Factorytalk View Studio Activation Crack PATCHED.md
deleted file mode 100644
index 6c934cf4d466956a867c0cf92d0d3c47a334a7d9..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Factorytalk View Studio Activation Crack PATCHED.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-this site is an online portal that provides the indian army games for pc. you can download these games in standard zip format. here you will get access to the latest games developed for the indian army. you can get to download the indian army games developed for pc. all types of games developed for the indian army are available in zip format. if you’re looking for games in any language or format, you are always welcome to contact us for immediate assistance.
-
-you can download the game for free. but you need to pay for the optional features. as the games are developed for the indian army, most of the features have been added to it to make it more exciting. in addition to this, most of the features are available for free.
-there are a lot of indian army games in our online portal. you can download these games and play it offline for your convenience. in addition to this, you can share the downloaded games with your friends and family members. they will get a chance to download the game if you have shared it with them.
-
-the data we obtain is for the purpose of contributing to the research that is driven by the u.s. army research laboratory's computational and cognitve engineering division. the information we obtain will not be sold or shared with others.
-
-if you believe this information is inaccurate or if you believe that your personal information has been illegally obtained, we encourage you to follow the procedures outlined below in order to request a free removal of your profile from our databases.
-
-indian army games indian army games is the most popular free shooting game available today. these are free downloads of both shooting games, strategy games and fighting games. indian army games: indian army games is a game of traditional and ancient battles. the new age of game with realistic, funny and adventurous styles of games. which is the best indian army games is a standout from all the shooting games available. recently the game has been updated with many new levels and new features. indian army games developed by an independent developer.
-899543212b
-
-
\ No newline at end of file
diff --git a/spaces/digitalxingtong/Bufeiyan-a-Bert-VITS2/text/tone_sandhi.py b/spaces/digitalxingtong/Bufeiyan-a-Bert-VITS2/text/tone_sandhi.py
deleted file mode 100644
index 0f45b7a72c5d858bcaab19ac85cfa686bf9a74da..0000000000000000000000000000000000000000
--- a/spaces/digitalxingtong/Bufeiyan-a-Bert-VITS2/text/tone_sandhi.py
+++ /dev/null
@@ -1,351 +0,0 @@
-# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-from typing import List
-from typing import Tuple
-
-import jieba
-from pypinyin import lazy_pinyin
-from pypinyin import Style
-
-
-class ToneSandhi():
- def __init__(self):
- self.must_neural_tone_words = {
- '麻烦', '麻利', '鸳鸯', '高粱', '骨头', '骆驼', '马虎', '首饰', '馒头', '馄饨', '风筝',
- '难为', '队伍', '阔气', '闺女', '门道', '锄头', '铺盖', '铃铛', '铁匠', '钥匙', '里脊',
- '里头', '部分', '那么', '道士', '造化', '迷糊', '连累', '这么', '这个', '运气', '过去',
- '软和', '转悠', '踏实', '跳蚤', '跟头', '趔趄', '财主', '豆腐', '讲究', '记性', '记号',
- '认识', '规矩', '见识', '裁缝', '补丁', '衣裳', '衣服', '衙门', '街坊', '行李', '行当',
- '蛤蟆', '蘑菇', '薄荷', '葫芦', '葡萄', '萝卜', '荸荠', '苗条', '苗头', '苍蝇', '芝麻',
- '舒服', '舒坦', '舌头', '自在', '膏药', '脾气', '脑袋', '脊梁', '能耐', '胳膊', '胭脂',
- '胡萝', '胡琴', '胡同', '聪明', '耽误', '耽搁', '耷拉', '耳朵', '老爷', '老实', '老婆',
- '老头', '老太', '翻腾', '罗嗦', '罐头', '编辑', '结实', '红火', '累赘', '糨糊', '糊涂',
- '精神', '粮食', '簸箕', '篱笆', '算计', '算盘', '答应', '笤帚', '笑语', '笑话', '窟窿',
- '窝囊', '窗户', '稳当', '稀罕', '称呼', '秧歌', '秀气', '秀才', '福气', '祖宗', '砚台',
- '码头', '石榴', '石头', '石匠', '知识', '眼睛', '眯缝', '眨巴', '眉毛', '相声', '盘算',
- '白净', '痢疾', '痛快', '疟疾', '疙瘩', '疏忽', '畜生', '生意', '甘蔗', '琵琶', '琢磨',
- '琉璃', '玻璃', '玫瑰', '玄乎', '狐狸', '状元', '特务', '牲口', '牙碜', '牌楼', '爽快',
- '爱人', '热闹', '烧饼', '烟筒', '烂糊', '点心', '炊帚', '灯笼', '火候', '漂亮', '滑溜',
- '溜达', '温和', '清楚', '消息', '浪头', '活泼', '比方', '正经', '欺负', '模糊', '槟榔',
- '棺材', '棒槌', '棉花', '核桃', '栅栏', '柴火', '架势', '枕头', '枇杷', '机灵', '本事',
- '木头', '木匠', '朋友', '月饼', '月亮', '暖和', '明白', '时候', '新鲜', '故事', '收拾',
- '收成', '提防', '挖苦', '挑剔', '指甲', '指头', '拾掇', '拳头', '拨弄', '招牌', '招呼',
- '抬举', '护士', '折腾', '扫帚', '打量', '打算', '打点', '打扮', '打听', '打发', '扎实',
- '扁担', '戒指', '懒得', '意识', '意思', '情形', '悟性', '怪物', '思量', '怎么', '念头',
- '念叨', '快活', '忙活', '志气', '心思', '得罪', '张罗', '弟兄', '开通', '应酬', '庄稼',
- '干事', '帮手', '帐篷', '希罕', '师父', '师傅', '巴结', '巴掌', '差事', '工夫', '岁数',
- '屁股', '尾巴', '少爷', '小气', '小伙', '将就', '对头', '对付', '寡妇', '家伙', '客气',
- '实在', '官司', '学问', '学生', '字号', '嫁妆', '媳妇', '媒人', '婆家', '娘家', '委屈',
- '姑娘', '姐夫', '妯娌', '妥当', '妖精', '奴才', '女婿', '头发', '太阳', '大爷', '大方',
- '大意', '大夫', '多少', '多么', '外甥', '壮实', '地道', '地方', '在乎', '困难', '嘴巴',
- '嘱咐', '嘟囔', '嘀咕', '喜欢', '喇嘛', '喇叭', '商量', '唾沫', '哑巴', '哈欠', '哆嗦',
- '咳嗽', '和尚', '告诉', '告示', '含糊', '吓唬', '后头', '名字', '名堂', '合同', '吆喝',
- '叫唤', '口袋', '厚道', '厉害', '千斤', '包袱', '包涵', '匀称', '勤快', '动静', '动弹',
- '功夫', '力气', '前头', '刺猬', '刺激', '别扭', '利落', '利索', '利害', '分析', '出息',
- '凑合', '凉快', '冷战', '冤枉', '冒失', '养活', '关系', '先生', '兄弟', '便宜', '使唤',
- '佩服', '作坊', '体面', '位置', '似的', '伙计', '休息', '什么', '人家', '亲戚', '亲家',
- '交情', '云彩', '事情', '买卖', '主意', '丫头', '丧气', '两口', '东西', '东家', '世故',
- '不由', '不在', '下水', '下巴', '上头', '上司', '丈夫', '丈人', '一辈', '那个', '菩萨',
- '父亲', '母亲', '咕噜', '邋遢', '费用', '冤家', '甜头', '介绍', '荒唐', '大人', '泥鳅',
- '幸福', '熟悉', '计划', '扑腾', '蜡烛', '姥爷', '照顾', '喉咙', '吉他', '弄堂', '蚂蚱',
- '凤凰', '拖沓', '寒碜', '糟蹋', '倒腾', '报复', '逻辑', '盘缠', '喽啰', '牢骚', '咖喱',
- '扫把', '惦记'
- }
- self.must_not_neural_tone_words = {
- "男子", "女子", "分子", "原子", "量子", "莲子", "石子", "瓜子", "电子", "人人", "虎虎"
- }
- self.punc = ":,;。?!“”‘’':,;.?!"
-
- # the meaning of jieba pos tag: https://blog.csdn.net/weixin_44174352/article/details/113731041
- # e.g.
- # word: "家里"
- # pos: "s"
- # finals: ['ia1', 'i3']
- def _neural_sandhi(self, word: str, pos: str,
- finals: List[str]) -> List[str]:
-
- # reduplication words for n. and v. e.g. 奶奶, 试试, 旺旺
- for j, item in enumerate(word):
- if j - 1 >= 0 and item == word[j - 1] and pos[0] in {
- "n", "v", "a"
- } and word not in self.must_not_neural_tone_words:
- finals[j] = finals[j][:-1] + "5"
- ge_idx = word.find("个")
- if len(word) >= 1 and word[-1] in "吧呢啊呐噻嘛吖嗨呐哦哒额滴哩哟喽啰耶喔诶":
- finals[-1] = finals[-1][:-1] + "5"
- elif len(word) >= 1 and word[-1] in "的地得":
- finals[-1] = finals[-1][:-1] + "5"
- # e.g. 走了, 看着, 去过
- # elif len(word) == 1 and word in "了着过" and pos in {"ul", "uz", "ug"}:
- # finals[-1] = finals[-1][:-1] + "5"
- elif len(word) > 1 and word[-1] in "们子" and pos in {
- "r", "n"
- } and word not in self.must_not_neural_tone_words:
- finals[-1] = finals[-1][:-1] + "5"
- # e.g. 桌上, 地下, 家里
- elif len(word) > 1 and word[-1] in "上下里" and pos in {"s", "l", "f"}:
- finals[-1] = finals[-1][:-1] + "5"
- # e.g. 上来, 下去
- elif len(word) > 1 and word[-1] in "来去" and word[-2] in "上下进出回过起开":
- finals[-1] = finals[-1][:-1] + "5"
-        # "个" used as a measure word
- elif (ge_idx >= 1 and
- (word[ge_idx - 1].isnumeric() or
- word[ge_idx - 1] in "几有两半多各整每做是")) or word == '个':
- finals[ge_idx] = finals[ge_idx][:-1] + "5"
- else:
- if word in self.must_neural_tone_words or word[
- -2:] in self.must_neural_tone_words:
- finals[-1] = finals[-1][:-1] + "5"
-
- word_list = self._split_word(word)
- finals_list = [finals[:len(word_list[0])], finals[len(word_list[0]):]]
- for i, word in enumerate(word_list):
- # conventional neural in Chinese
- if word in self.must_neural_tone_words or word[
- -2:] in self.must_neural_tone_words:
- finals_list[i][-1] = finals_list[i][-1][:-1] + "5"
- finals = sum(finals_list, [])
- return finals
-
- def _bu_sandhi(self, word: str, finals: List[str]) -> List[str]:
- # e.g. 看不懂
- if len(word) == 3 and word[1] == "不":
- finals[1] = finals[1][:-1] + "5"
- else:
- for i, char in enumerate(word):
- # "不" before tone4 should be bu2, e.g. 不怕
- if char == "不" and i + 1 < len(word) and finals[i +
- 1][-1] == "4":
- finals[i] = finals[i][:-1] + "2"
- return finals
-
- def _yi_sandhi(self, word: str, finals: List[str]) -> List[str]:
- # "一" in number sequences, e.g. 一零零, 二一零
- if word.find("一") != -1 and all(
- [item.isnumeric() for item in word if item != "一"]):
- return finals
-        # "一" between reduplication words should be yi5, e.g. 看一看
- elif len(word) == 3 and word[1] == "一" and word[0] == word[-1]:
- finals[1] = finals[1][:-1] + "5"
- # when "一" is ordinal word, it should be yi1
- elif word.startswith("第一"):
- finals[1] = finals[1][:-1] + "1"
- else:
- for i, char in enumerate(word):
- if char == "一" and i + 1 < len(word):
- # "一" before tone4 should be yi2, e.g. 一段
- if finals[i + 1][-1] == "4":
- finals[i] = finals[i][:-1] + "2"
- # "一" before non-tone4 should be yi4, e.g. 一天
- else:
-                    # if "一" is followed by punctuation, it is still read with the first tone
- if word[i + 1] not in self.punc:
- finals[i] = finals[i][:-1] + "4"
- return finals
-
- def _split_word(self, word: str) -> List[str]:
- word_list = jieba.cut_for_search(word)
- word_list = sorted(word_list, key=lambda i: len(i), reverse=False)
- first_subword = word_list[0]
- first_begin_idx = word.find(first_subword)
- if first_begin_idx == 0:
- second_subword = word[len(first_subword):]
- new_word_list = [first_subword, second_subword]
- else:
- second_subword = word[:-len(first_subword)]
- new_word_list = [second_subword, first_subword]
- return new_word_list
-
- def _three_sandhi(self, word: str, finals: List[str]) -> List[str]:
- if len(word) == 2 and self._all_tone_three(finals):
- finals[0] = finals[0][:-1] + "2"
- elif len(word) == 3:
- word_list = self._split_word(word)
- if self._all_tone_three(finals):
- # disyllabic + monosyllabic, e.g. 蒙古/包
- if len(word_list[0]) == 2:
- finals[0] = finals[0][:-1] + "2"
- finals[1] = finals[1][:-1] + "2"
- # monosyllabic + disyllabic, e.g. 纸/老虎
- elif len(word_list[0]) == 1:
- finals[1] = finals[1][:-1] + "2"
- else:
- finals_list = [
- finals[:len(word_list[0])], finals[len(word_list[0]):]
- ]
- if len(finals_list) == 2:
- for i, sub in enumerate(finals_list):
- # e.g. 所有/人
- if self._all_tone_three(sub) and len(sub) == 2:
- finals_list[i][0] = finals_list[i][0][:-1] + "2"
- # e.g. 好/喜欢
- elif i == 1 and not self._all_tone_three(sub) and finals_list[i][0][-1] == "3" and \
- finals_list[0][-1][-1] == "3":
-
- finals_list[0][-1] = finals_list[0][-1][:-1] + "2"
- finals = sum(finals_list, [])
-        # split the idiom into two words whose length is 2
- elif len(word) == 4:
- finals_list = [finals[:2], finals[2:]]
- finals = []
- for sub in finals_list:
- if self._all_tone_three(sub):
- sub[0] = sub[0][:-1] + "2"
- finals += sub
-
- return finals
-
- def _all_tone_three(self, finals: List[str]) -> bool:
- return all(x[-1] == "3" for x in finals)
-
- # merge "不" and the word behind it
-    # if not merged, "不" sometimes appears alone according to jieba, which may cause sandhi errors
- def _merge_bu(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- new_seg = []
- last_word = ""
- for word, pos in seg:
- if last_word == "不":
- word = last_word + word
- if word != "不":
- new_seg.append((word, pos))
- last_word = word[:]
- if last_word == "不":
- new_seg.append((last_word, 'd'))
- last_word = ""
- return new_seg
-
-    # function 1: merge "一" and the reduplicated words to its left and right, e.g. "听","一","听" ->"听一听"
-    # function 2: merge single "一" and the word behind it
-    # if not merged, "一" sometimes appears alone according to jieba, which may cause sandhi errors
- # e.g.
- # input seg: [('听', 'v'), ('一', 'm'), ('听', 'v')]
- # output seg: [['听一听', 'v']]
- def _merge_yi(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- new_seg = []
- # function 1
- for i, (word, pos) in enumerate(seg):
- if i - 1 >= 0 and word == "一" and i + 1 < len(seg) and seg[i - 1][
- 0] == seg[i + 1][0] and seg[i - 1][1] == "v":
- new_seg[i - 1][0] = new_seg[i - 1][0] + "一" + new_seg[i - 1][0]
- else:
- if i - 2 >= 0 and seg[i - 1][0] == "一" and seg[i - 2][
- 0] == word and pos == "v":
- continue
- else:
- new_seg.append([word, pos])
- seg = new_seg
- new_seg = []
- # function 2
- for i, (word, pos) in enumerate(seg):
- if new_seg and new_seg[-1][0] == "一":
- new_seg[-1][0] = new_seg[-1][0] + word
- else:
- new_seg.append([word, pos])
- return new_seg
-
- # the first and the second words are all_tone_three
- def _merge_continuous_three_tones(
- self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- new_seg = []
- sub_finals_list = [
- lazy_pinyin(
- word, neutral_tone_with_five=True, style=Style.FINALS_TONE3)
- for (word, pos) in seg
- ]
- assert len(sub_finals_list) == len(seg)
- merge_last = [False] * len(seg)
- for i, (word, pos) in enumerate(seg):
- if i - 1 >= 0 and self._all_tone_three(
- sub_finals_list[i - 1]) and self._all_tone_three(
- sub_finals_list[i]) and not merge_last[i - 1]:
-                # if the last word is a reduplication, do not merge, because reduplication needs to go through _neural_sandhi
- if not self._is_reduplication(seg[i - 1][0]) and len(
- seg[i - 1][0]) + len(seg[i][0]) <= 3:
- new_seg[-1][0] = new_seg[-1][0] + seg[i][0]
- merge_last[i] = True
- else:
- new_seg.append([word, pos])
- else:
- new_seg.append([word, pos])
-
- return new_seg
-
- def _is_reduplication(self, word: str) -> bool:
- return len(word) == 2 and word[0] == word[1]
-
- # the last char of first word and the first char of second word is tone_three
- def _merge_continuous_three_tones_2(
- self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- new_seg = []
- sub_finals_list = [
- lazy_pinyin(
- word, neutral_tone_with_five=True, style=Style.FINALS_TONE3)
- for (word, pos) in seg
- ]
- assert len(sub_finals_list) == len(seg)
- merge_last = [False] * len(seg)
- for i, (word, pos) in enumerate(seg):
- if i - 1 >= 0 and sub_finals_list[i - 1][-1][-1] == "3" and sub_finals_list[i][0][-1] == "3" and not \
- merge_last[i - 1]:
-                # if the last word is a reduplication, do not merge, because reduplication needs to go through _neural_sandhi
- if not self._is_reduplication(seg[i - 1][0]) and len(
- seg[i - 1][0]) + len(seg[i][0]) <= 3:
- new_seg[-1][0] = new_seg[-1][0] + seg[i][0]
- merge_last[i] = True
- else:
- new_seg.append([word, pos])
- else:
- new_seg.append([word, pos])
- return new_seg
-
- def _merge_er(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- new_seg = []
- for i, (word, pos) in enumerate(seg):
- if i - 1 >= 0 and word == "儿" and seg[i-1][0] != "#":
- new_seg[-1][0] = new_seg[-1][0] + seg[i][0]
- else:
- new_seg.append([word, pos])
- return new_seg
-
- def _merge_reduplication(
- self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- new_seg = []
- for i, (word, pos) in enumerate(seg):
- if new_seg and word == new_seg[-1][0]:
- new_seg[-1][0] = new_seg[-1][0] + seg[i][0]
- else:
- new_seg.append([word, pos])
- return new_seg
-
- def pre_merge_for_modify(
- self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- seg = self._merge_bu(seg)
- try:
- seg = self._merge_yi(seg)
- except:
- print("_merge_yi failed")
- seg = self._merge_reduplication(seg)
- seg = self._merge_continuous_three_tones(seg)
- seg = self._merge_continuous_three_tones_2(seg)
- seg = self._merge_er(seg)
- return seg
-
- def modified_tone(self, word: str, pos: str,
- finals: List[str]) -> List[str]:
- finals = self._bu_sandhi(word, finals)
- finals = self._yi_sandhi(word, finals)
- finals = self._neural_sandhi(word, pos, finals)
- finals = self._three_sandhi(word, finals)
- return finals
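
The class is meant to be driven from jieba part-of-speech output; a short sketch of the intended call order (POS-tag, pre-merge, then apply the sandhi rules per word). The sample sentence is just an example.

```python
# Sketch of the intended call order: POS-tag, pre-merge, then apply sandhi per word.
import jieba.posseg as psg
from pypinyin import lazy_pinyin, Style
from text.tone_sandhi import ToneSandhi  # module path as in the deleted file

sandhi = ToneSandhi()
text = "我想听一听你的意见"

seg = [(p.word, p.flag) for p in psg.cut(text)]
seg = sandhi.pre_merge_for_modify(seg)

for word, pos in seg:
    finals = lazy_pinyin(word, neutral_tone_with_five=True, style=Style.FINALS_TONE3)
    finals = sandhi.modified_tone(word, pos, finals)
    print(word, finals)
```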
diff --git a/spaces/digitalxingtong/Xingtong-Read-Bert-VITS2/README.md b/spaces/digitalxingtong/Xingtong-Read-Bert-VITS2/README.md
deleted file mode 100644
index ccc4ff58f295d846d321fae376a11df6b25ba0b8..0000000000000000000000000000000000000000
--- a/spaces/digitalxingtong/Xingtong-Read-Bert-VITS2/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: AI星瞳 朗读专用(小王子版本)
-emoji: 🌟
-colorFrom: red
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.36.1
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/dineshreddy/WALT/mmdet/datasets/builder.py b/spaces/dineshreddy/WALT/mmdet/datasets/builder.py
deleted file mode 100644
index c9466a517dee746a6677b27a19713f2e89ed7194..0000000000000000000000000000000000000000
--- a/spaces/dineshreddy/WALT/mmdet/datasets/builder.py
+++ /dev/null
@@ -1,143 +0,0 @@
-import copy
-import platform
-import random
-from functools import partial
-
-import numpy as np
-from mmcv.parallel import collate
-from mmcv.runner import get_dist_info
-from mmcv.utils import Registry, build_from_cfg
-from torch.utils.data import DataLoader
-
-from .samplers import DistributedGroupSampler, DistributedSampler, GroupSampler
-
-if platform.system() != 'Windows':
- # https://github.com/pytorch/pytorch/issues/973
- import resource
- rlimit = resource.getrlimit(resource.RLIMIT_NOFILE)
- hard_limit = rlimit[1]
- soft_limit = min(4096, hard_limit)
- resource.setrlimit(resource.RLIMIT_NOFILE, (soft_limit, hard_limit))
-
-DATASETS = Registry('dataset')
-PIPELINES = Registry('pipeline')
-
-
-def _concat_dataset(cfg, default_args=None):
- from .dataset_wrappers import ConcatDataset
- ann_files = cfg['ann_file']
- img_prefixes = cfg.get('img_prefix', None)
- seg_prefixes = cfg.get('seg_prefix', None)
- proposal_files = cfg.get('proposal_file', None)
- separate_eval = cfg.get('separate_eval', True)
-
- datasets = []
- num_dset = len(ann_files)
- for i in range(num_dset):
- data_cfg = copy.deepcopy(cfg)
- # pop 'separate_eval' since it is not a valid key for common datasets.
- if 'separate_eval' in data_cfg:
- data_cfg.pop('separate_eval')
- data_cfg['ann_file'] = ann_files[i]
- if isinstance(img_prefixes, (list, tuple)):
- data_cfg['img_prefix'] = img_prefixes[i]
- if isinstance(seg_prefixes, (list, tuple)):
- data_cfg['seg_prefix'] = seg_prefixes[i]
- if isinstance(proposal_files, (list, tuple)):
- data_cfg['proposal_file'] = proposal_files[i]
- datasets.append(build_dataset(data_cfg, default_args))
-
- return ConcatDataset(datasets, separate_eval)
-
-
-def build_dataset(cfg, default_args=None):
- from .dataset_wrappers import (ConcatDataset, RepeatDataset,
- ClassBalancedDataset)
- if isinstance(cfg, (list, tuple)):
- dataset = ConcatDataset([build_dataset(c, default_args) for c in cfg])
- elif cfg['type'] == 'ConcatDataset':
- dataset = ConcatDataset(
- [build_dataset(c, default_args) for c in cfg['datasets']],
- cfg.get('separate_eval', True))
- elif cfg['type'] == 'RepeatDataset':
- dataset = RepeatDataset(
- build_dataset(cfg['dataset'], default_args), cfg['times'])
- elif cfg['type'] == 'ClassBalancedDataset':
- dataset = ClassBalancedDataset(
- build_dataset(cfg['dataset'], default_args), cfg['oversample_thr'])
- elif isinstance(cfg.get('ann_file'), (list, tuple)):
- dataset = _concat_dataset(cfg, default_args)
- else:
- dataset = build_from_cfg(cfg, DATASETS, default_args)
-
- return dataset
-
-
-def build_dataloader(dataset,
- samples_per_gpu,
- workers_per_gpu,
- num_gpus=1,
- dist=True,
- shuffle=True,
- seed=None,
- **kwargs):
- """Build PyTorch DataLoader.
-
- In distributed training, each GPU/process has a dataloader.
- In non-distributed training, there is only one dataloader for all GPUs.
-
- Args:
- dataset (Dataset): A PyTorch dataset.
- samples_per_gpu (int): Number of training samples on each GPU, i.e.,
- batch size of each GPU.
- workers_per_gpu (int): How many subprocesses to use for data loading
- for each GPU.
- num_gpus (int): Number of GPUs. Only used in non-distributed training.
- dist (bool): Distributed training/test or not. Default: True.
- shuffle (bool): Whether to shuffle the data at every epoch.
- Default: True.
- kwargs: any keyword argument to be used to initialize DataLoader
-
- Returns:
- DataLoader: A PyTorch dataloader.
- """
- rank, world_size = get_dist_info()
- if dist:
- # DistributedGroupSampler will definitely shuffle the data to satisfy
- # that images on each GPU are in the same group
- if shuffle:
- sampler = DistributedGroupSampler(
- dataset, samples_per_gpu, world_size, rank, seed=seed)
- else:
- sampler = DistributedSampler(
- dataset, world_size, rank, shuffle=False, seed=seed)
- batch_size = samples_per_gpu
- num_workers = workers_per_gpu
- else:
- sampler = GroupSampler(dataset, samples_per_gpu) if shuffle else None
- batch_size = num_gpus * samples_per_gpu
- num_workers = num_gpus * workers_per_gpu
-
- init_fn = partial(
- worker_init_fn, num_workers=num_workers, rank=rank,
- seed=seed) if seed is not None else None
-
- data_loader = DataLoader(
- dataset,
- batch_size=batch_size,
- sampler=sampler,
- num_workers=num_workers,
- collate_fn=partial(collate, samples_per_gpu=samples_per_gpu),
- pin_memory=False,
- worker_init_fn=init_fn,
- **kwargs)
-
- return data_loader
-
-
-def worker_init_fn(worker_id, num_workers, rank, seed):
- # The seed of each worker equals to
- # num_worker * rank + worker_id + user_seed
- worker_seed = num_workers * rank + worker_id + seed
- np.random.seed(worker_seed)
- random.seed(worker_seed)
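
Typical config-driven usage of the two builders above; the dataset paths and the abridged pipeline are placeholders taken from a standard COCO-style config, not from this diff.

```python
# Typical usage; paths and pipeline are placeholders, abridged from a standard COCO config.
from mmdet.datasets import build_dataset, build_dataloader

dataset_cfg = dict(
    type='CocoDataset',
    ann_file='data/coco/annotations/instances_train2017.json',
    img_prefix='data/coco/train2017/',
    pipeline=[
        dict(type='LoadImageFromFile'),
        dict(type='LoadAnnotations', with_bbox=True),
    ],
)

dataset = build_dataset(dataset_cfg)
data_loader = build_dataloader(
    dataset,
    samples_per_gpu=2,
    workers_per_gpu=2,
    num_gpus=1,
    dist=False,     # single-process loader with a GroupSampler
    shuffle=True,
    seed=42,
)
```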
diff --git a/spaces/dineshreddy/WALT/mmdet/models/detectors/atss.py b/spaces/dineshreddy/WALT/mmdet/models/detectors/atss.py
deleted file mode 100644
index db7139c6b4fcd7e83007cdb785520743ddae7066..0000000000000000000000000000000000000000
--- a/spaces/dineshreddy/WALT/mmdet/models/detectors/atss.py
+++ /dev/null
@@ -1,17 +0,0 @@
-from ..builder import DETECTORS
-from .single_stage import SingleStageDetector
-
-
-@DETECTORS.register_module()
-class ATSS(SingleStageDetector):
- """Implementation of `ATSS `_."""
-
- def __init__(self,
- backbone,
- neck,
- bbox_head,
- train_cfg=None,
- test_cfg=None,
- pretrained=None):
- super(ATSS, self).__init__(backbone, neck, bbox_head, train_cfg,
- test_cfg, pretrained)
diff --git a/spaces/dirge/voicevox/voicevox_engine/cancellable_engine.py b/spaces/dirge/voicevox/voicevox_engine/cancellable_engine.py
deleted file mode 100644
index 1bedb3ff3ebce858d8c585cf8b0d121a4d816210..0000000000000000000000000000000000000000
--- a/spaces/dirge/voicevox/voicevox_engine/cancellable_engine.py
+++ /dev/null
@@ -1,220 +0,0 @@
-import argparse
-import asyncio
-import queue
-from multiprocessing import Pipe, Process
-from multiprocessing.connection import Connection
-from tempfile import NamedTemporaryFile
-from typing import List, Optional, Tuple
-
-import soundfile
-
-# FIXME: remove FastAPI dependency
-from fastapi import HTTPException, Request
-
-from .model import AudioQuery
-from .synthesis_engine import make_synthesis_engines
-from .utility import get_latest_core_version
-
-
-class CancellableEngine:
- """
-    Class that adds cancellation support to speech synthesis.
-    After initialization, speech can be synthesized via the synthesis function
-    (note that it takes more arguments than the original engine).
-
-    Attributes
-    ----------
-    watch_con_list: List[Tuple[Request, Process]]
-        The Request is used to watch the connection; the Process is killed if the connection drops.
-        A tuple is appended whenever a client connects and removed once the
-        connection is closed or the synthesis has finished.
-    procs_and_cons: queue.Queue[Tuple[Process, Connection]]
-        Queue of processes ready to perform speech synthesis
-        (processes that are currently synthesizing are not included).
- """
-
- def __init__(self, args: argparse.Namespace) -> None:
- """
-        Initialize member variables.
-        Also start args.init_processes worker processes and put them into procs_and_cons.
- """
- self.args = args
- if not self.args.enable_cancellable_synthesis:
- raise HTTPException(
- status_code=404,
- detail="実験的機能はデフォルトで無効になっています。使用するには引数を指定してください。",
- )
-
- self.watch_con_list: List[Tuple[Request, Process]] = []
- self.procs_and_cons: queue.Queue[Tuple[Process, Connection]] = queue.Queue()
- for _ in range(self.args.init_processes):
- self.procs_and_cons.put(self.start_new_proc())
-
- def start_new_proc(
- self,
- ) -> Tuple[Process, Connection]:
- """
-        Start a new worker process and return it.
-
-        Returns
-        -------
-        ret_proc: Process
-            The newly started process
-        sub_proc_con1: Connection
-            Pipe for communicating with ret_proc
- """
- sub_proc_con1, sub_proc_con2 = Pipe(True)
- ret_proc = Process(
- target=start_synthesis_subprocess,
- kwargs={
- "args": self.args,
- "sub_proc_con": sub_proc_con2,
- },
- daemon=True,
- )
- ret_proc.start()
- return ret_proc, sub_proc_con1
-
- def finalize_con(
- self,
- req: Request,
- proc: Process,
- sub_proc_con: Optional[Connection],
- ) -> None:
- """
-        Clean up after a connection has been closed.
-        Removes the entry from watch_con_list and finalizes the process:
-        if the process is still alive it is put back into procs_and_cons,
-        otherwise a newly started process is put into procs_and_cons instead.
-
-        Parameters
-        ----------
-        req: fastapi.Request
-            Simply pass the request received when the connection was established
-            https://fastapi.tiangolo.com/advanced/using-request-directly/
-        proc: Process
-            The process that was performing the synthesis
-        sub_proc_con: Connection, optional
-            Pipe to the process that was performing the synthesis.
-            If omitted, the process is terminated instead of being reused.
- """
- try:
- self.watch_con_list.remove((req, proc))
- except ValueError:
- pass
- try:
- if not proc.is_alive() or sub_proc_con is None:
- proc.close()
- raise ValueError
-            # the process is still alive, so reuse it
- self.procs_and_cons.put((proc, sub_proc_con))
- except ValueError:
-            # the process has died, so start a new one to replace it
- self.procs_and_cons.put(self.start_new_proc())
-
- def _synthesis_impl(
- self,
- query: AudioQuery,
- speaker_id: int,
- request: Request,
- core_version: Optional[str],
- ) -> str:
- """
-        Perform speech synthesis.
-        Compared with the regular engine, an extra request argument is required,
-        and the return value is a file name instead of audio data.
-
-        Parameters
-        ----------
-        query: AudioQuery
-        speaker_id: int
-        request: fastapi.Request
-            Simply pass the request received when the connection was established
-            https://fastapi.tiangolo.com/advanced/using-request-directly/
-        core_version: str
-
-        Returns
-        -------
-        f_name: str
-            Name of the generated audio file
- """
- proc, sub_proc_con1 = self.procs_and_cons.get()
- self.watch_con_list.append((request, proc))
- try:
- sub_proc_con1.send((query, speaker_id, core_version))
- f_name = sub_proc_con1.recv()
- except EOFError:
- raise HTTPException(status_code=422, detail="既にサブプロセスは終了されています")
- except Exception:
- self.finalize_con(request, proc, sub_proc_con1)
- raise
-
- self.finalize_con(request, proc, sub_proc_con1)
- return f_name
-
- async def catch_disconnection(self):
- """
-        Coroutine that monitors the client connections.
- """
- while True:
- await asyncio.sleep(1)
- for con in self.watch_con_list:
- req, proc = con
- if await req.is_disconnected():
- try:
- if proc.is_alive():
- proc.terminate()
- proc.join()
- proc.close()
- except ValueError:
- pass
- finally:
- self.finalize_con(req, proc, None)
-
-
-def start_synthesis_subprocess(
- args: argparse.Namespace,
- sub_proc_con: Connection,
-):
- """
-    Entry point for the subprocess that performs speech synthesis.
-    Written at module level so that it can be pickled.
-
-    Parameters
-    ----------
-    args: argparse.Namespace
-        Pass the namespace created at startup as-is
-    sub_proc_con: Connection
-        Pipe for communicating with the main process
- """
-
- synthesis_engines = make_synthesis_engines(
- use_gpu=args.use_gpu,
- voicelib_dirs=args.voicelib_dir,
- voicevox_dir=args.voicevox_dir,
- runtime_dirs=args.runtime_dir,
- cpu_num_threads=args.cpu_num_threads,
- enable_mock=args.enable_mock,
- )
- assert len(synthesis_engines) != 0, "音声合成エンジンがありません。"
- latest_core_version = get_latest_core_version(versions=synthesis_engines.keys())
- while True:
- try:
- query, speaker_id, core_version = sub_proc_con.recv()
- if core_version is None:
- _engine = synthesis_engines[latest_core_version]
- elif core_version in synthesis_engines:
- _engine = synthesis_engines[core_version]
- else:
- # Error: the requested core version was not found
- sub_proc_con.send("")
- continue
- wave = _engine._synthesis_impl(query, speaker_id)
- with NamedTemporaryFile(delete=False) as f:
- soundfile.write(
- file=f, data=wave, samplerate=query.outputSamplingRate, format="WAV"
- )
- sub_proc_con.send(f.name)
- except Exception:
- sub_proc_con.close()
- raise
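Before moving on, here is a stripped-down sketch of the pattern the cancellable-synthesis class above relies on: the parent process keeps (Process, Connection) pairs in a queue, sends a request tuple over the pipe, blocks on recv() for the result, and returns a live worker to the pool for reuse. Everything below is illustrative; the worker body and names are invented, and no VOICEVOX engine or FastAPI request handling is involved.

```python
from multiprocessing import Pipe, Process
from multiprocessing.connection import Connection
import queue

def worker(con: Connection) -> None:
    while True:
        text, speaker_id = con.recv()                 # block until the parent sends a request
        con.send(f"synthesized:{speaker_id}:{text}")  # placeholder for real synthesis work

def start_worker():
    parent_con, child_con = Pipe(True)
    proc = Process(target=worker, args=(child_con,), daemon=True)
    proc.start()
    return proc, parent_con

if __name__ == "__main__":
    pool: queue.Queue = queue.Queue()
    pool.put(start_worker())

    proc, con = pool.get()
    con.send(("hello", 1))
    print(con.recv())       # "synthesized:1:hello"
    pool.put((proc, con))   # worker is still alive, so return it to the pool for reuse
```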
diff --git a/spaces/dmeck/RVC-Speakers/util.py b/spaces/dmeck/RVC-Speakers/util.py
deleted file mode 100644
index 15497d41e43ce315601f7c147582e1e17b763eed..0000000000000000000000000000000000000000
--- a/spaces/dmeck/RVC-Speakers/util.py
+++ /dev/null
@@ -1,107 +0,0 @@
-import sys
-import asyncio
-from io import BytesIO
-
-from fairseq import checkpoint_utils
-
-import torch
-
-import edge_tts
-import librosa
-
-
-# https://github.com/fumiama/Retrieval-based-Voice-Conversion-WebUI/blob/main/config.py#L43-L55 # noqa
-def has_mps() -> bool:
- if sys.platform != "darwin":
- return False
- else:
- if not getattr(torch, 'has_mps', False):
- return False
-
- try:
- torch.zeros(1).to(torch.device("mps"))
- return True
- except Exception:
- return False
-
-
-def is_half(device: str) -> bool:
- if not device.startswith('cuda'):
- return False
- else:
- gpu_name = torch.cuda.get_device_name(
- int(device.split(':')[-1])
- ).upper()
-
- # ...regex?
- if (
- ('16' in gpu_name and 'V100' not in gpu_name)
- or 'P40' in gpu_name
- or '1060' in gpu_name
- or '1070' in gpu_name
- or '1080' in gpu_name
- ):
- return False
-
- return True
-
-
-def load_hubert_model(device: str, model_path: str = 'hubert_base.pt'):
- model = checkpoint_utils.load_model_ensemble_and_task(
- [model_path]
- )[0][0].to(device)
-
- if is_half(device):
- return model.half()
- else:
- return model.float()
-
-
-async def call_edge_tts(speaker_name: str, text: str):
- tts_com = edge_tts.Communicate(text, speaker_name)
- tts_raw = b''
-
- # Stream TTS audio to bytes
- async for chunk in tts_com.stream():
- if chunk['type'] == 'audio':
- tts_raw += chunk['data']
-
- # Convert mp3 stream to wav
- ffmpeg_proc = await asyncio.create_subprocess_exec(
- 'ffmpeg',
- '-f', 'mp3',
- '-i', '-',
- '-f', 'wav',
- '-loglevel', 'error',
- '-',
- stdin=asyncio.subprocess.PIPE,
- stdout=asyncio.subprocess.PIPE
- )
- (tts_wav, _) = await ffmpeg_proc.communicate(tts_raw)
-
- return librosa.load(BytesIO(tts_wav))
-
-
-async def call_edge_tts_config(speaker_name: str, text: str, rate: str, volume: str):
- tts_com = edge_tts.Communicate(text=text, voice=speaker_name, rate=rate, volume=volume)
- tts_raw = b''
-
- # Stream TTS audio to bytes
- async for chunk in tts_com.stream():
- if chunk['type'] == 'audio':
- tts_raw += chunk['data']
-
- # Convert mp3 stream to wav
- ffmpeg_proc = await asyncio.create_subprocess_exec(
- 'ffmpeg',
- '-f', 'mp3',
- '-i', '-',
- '-f', 'wav',
- '-loglevel', 'error',
- '-',
- stdin=asyncio.subprocess.PIPE,
- stdout=asyncio.subprocess.PIPE
- )
- (tts_wav, _) = await ffmpeg_proc.communicate(tts_raw)
-
- return librosa.load(BytesIO(tts_wav))
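A minimal usage sketch for the helpers above. It assumes edge-tts, librosa and soundfile are installed, ffmpeg is on PATH, the file above is importable as `util`, and that `en-US-AriaNeural` is an available edge-tts voice.

```python
import asyncio
import soundfile as sf
from util import call_edge_tts

async def demo():
    # call_edge_tts returns librosa.load(...) output: (audio as float array, sample rate)
    audio, sr = await call_edge_tts("en-US-AriaNeural", "Hello from edge-tts")
    sf.write("hello.wav", audio, sr)

asyncio.run(demo())
```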
diff --git a/spaces/doevent/blip/train_vqa.py b/spaces/doevent/blip/train_vqa.py
deleted file mode 100644
index 89eb7490862e517cc660f842396033c21d441a20..0000000000000000000000000000000000000000
--- a/spaces/doevent/blip/train_vqa.py
+++ /dev/null
@@ -1,202 +0,0 @@
-'''
- * Copyright (c) 2022, salesforce.com, inc.
- * All rights reserved.
- * SPDX-License-Identifier: BSD-3-Clause
- * For full license text, see LICENSE.txt file in the repo root or https://opensource.org/licenses/BSD-3-Clause
- * By Junnan Li
-'''
-import argparse
-import os
-import ruamel_yaml as yaml
-import numpy as np
-import random
-import time
-import datetime
-import json
-from pathlib import Path
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from torch.utils.data import DataLoader
-import torch.backends.cudnn as cudnn
-import torch.distributed as dist
-
-from models.blip_vqa import blip_vqa
-import utils
-from utils import cosine_lr_schedule
-from data import create_dataset, create_sampler, create_loader
-from data.vqa_dataset import vqa_collate_fn
-from data.utils import save_result
-
-
-def train(model, data_loader, optimizer, epoch, device):
- # train
- model.train()
-
- metric_logger = utils.MetricLogger(delimiter=" ")
- metric_logger.add_meter('lr', utils.SmoothedValue(window_size=1, fmt='{value:.6f}'))
- metric_logger.add_meter('loss', utils.SmoothedValue(window_size=1, fmt='{value:.4f}'))
-
- header = 'Train Epoch: [{}]'.format(epoch)
- print_freq = 50
-
- for i,(image, question, answer, weights, n) in enumerate(metric_logger.log_every(data_loader, print_freq, header)):
- image, weights = image.to(device,non_blocking=True), weights.to(device,non_blocking=True)
-
- loss = model(image, question, answer, train=True, n=n, weights=weights)
-
- optimizer.zero_grad()
- loss.backward()
- optimizer.step()
-
- metric_logger.update(loss=loss.item())
- metric_logger.update(lr=optimizer.param_groups[0]["lr"])
-
- # gather the stats from all processes
- metric_logger.synchronize_between_processes()
- print("Averaged stats:", metric_logger.global_avg())
- return {k: "{:.3f}".format(meter.global_avg) for k, meter in metric_logger.meters.items()}
-
-
-@torch.no_grad()
-def evaluation(model, data_loader, device, config) :
- # test
- model.eval()
-
- metric_logger = utils.MetricLogger(delimiter=" ")
- header = 'Generate VQA test result:'
- print_freq = 50
-
- result = []
-
- if config['inference']=='rank':
- answer_list = data_loader.dataset.answer_list
- answer_candidates = model.tokenizer(answer_list, padding='longest', return_tensors='pt').to(device)
- answer_candidates.input_ids[:,0] = model.tokenizer.bos_token_id
-
- for n, (image, question, question_id) in enumerate(metric_logger.log_every(data_loader, print_freq, header)):
- image = image.to(device,non_blocking=True)
-
- if config['inference']=='generate':
- answers = model(image, question, train=False, inference='generate')
-
- for answer, ques_id in zip(answers, question_id):
- ques_id = int(ques_id.item())
- result.append({"question_id":ques_id, "answer":answer})
-
- elif config['inference']=='rank':
- answer_ids = model(image, question, answer_candidates, train=False, inference='rank', k_test=config['k_test'])
-
- for ques_id, answer_id in zip(question_id, answer_ids):
- result.append({"question_id":int(ques_id.item()), "answer":answer_list[answer_id]})
-
- return result
-
-
-def main(args, config):
- utils.init_distributed_mode(args)
-
- device = torch.device(args.device)
-
- # fix the seed for reproducibility
- seed = args.seed + utils.get_rank()
- torch.manual_seed(seed)
- np.random.seed(seed)
- random.seed(seed)
- cudnn.benchmark = True
-
- #### Dataset ####
- print("Creating vqa datasets")
- datasets = create_dataset('vqa', config)
-
- if args.distributed:
- num_tasks = utils.get_world_size()
- global_rank = utils.get_rank()
- samplers = create_sampler(datasets, [True, False], num_tasks, global_rank)
- else:
- samplers = [None, None]
-
- train_loader, test_loader = create_loader(datasets,samplers,
- batch_size=[config['batch_size_train'],config['batch_size_test']],
- num_workers=[4,4],is_trains=[True, False],
- collate_fns=[vqa_collate_fn,None])
- #### Model ####
- print("Creating model")
- model = blip_vqa(pretrained=config['pretrained'], image_size=config['image_size'],
- vit=config['vit'], vit_grad_ckpt=config['vit_grad_ckpt'], vit_ckpt_layer=config['vit_ckpt_layer'])
-
- model = model.to(device)
-
- model_without_ddp = model
- if args.distributed:
- model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.gpu])
- model_without_ddp = model.module
-
- optimizer = torch.optim.AdamW(params=model.parameters(), lr=config['init_lr'], weight_decay=config['weight_decay'])
-
- best = 0
- best_epoch = 0
-
- print("Start training")
- start_time = time.time()
- for epoch in range(0, config['max_epoch']):
- if not args.evaluate:
- if args.distributed:
- train_loader.sampler.set_epoch(epoch)
-
- cosine_lr_schedule(optimizer, epoch, config['max_epoch'], config['init_lr'], config['min_lr'])
-
- train_stats = train(model, train_loader, optimizer, epoch, device)
-
- else:
- break
-
- if utils.is_main_process():
- log_stats = {**{f'train_{k}': v for k, v in train_stats.items()},
- 'epoch': epoch,
- }
- with open(os.path.join(args.output_dir, "log.txt"),"a") as f:
- f.write(json.dumps(log_stats) + "\n")
-
- save_obj = {
- 'model': model_without_ddp.state_dict(),
- 'optimizer': optimizer.state_dict(),
- 'config': config,
- 'epoch': epoch,
- }
- torch.save(save_obj, os.path.join(args.output_dir, 'checkpoint_%02d.pth'%epoch))
-
- dist.barrier()
-
- vqa_result = evaluation(model_without_ddp, test_loader, device, config)
- result_file = save_result(vqa_result, args.result_dir, 'vqa_result')
-
- total_time = time.time() - start_time
- total_time_str = str(datetime.timedelta(seconds=int(total_time)))
- print('Training time {}'.format(total_time_str))
-
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--config', default='./configs/vqa.yaml')
- parser.add_argument('--output_dir', default='output/VQA')
- parser.add_argument('--evaluate', action='store_true')
- parser.add_argument('--device', default='cuda')
- parser.add_argument('--seed', default=42, type=int)
- parser.add_argument('--world_size', default=1, type=int, help='number of distributed processes')
- parser.add_argument('--dist_url', default='env://', help='url used to set up distributed training')
- parser.add_argument('--distributed', default=True, type=bool)
- args = parser.parse_args()
-
- config = yaml.load(open(args.config, 'r'), Loader=yaml.Loader)
-
- args.result_dir = os.path.join(args.output_dir, 'result')
-
- Path(args.output_dir).mkdir(parents=True, exist_ok=True)
- Path(args.result_dir).mkdir(parents=True, exist_ok=True)
-
- yaml.dump(config, open(os.path.join(args.output_dir, 'config.yaml'), 'w'))
-
- main(args, config)
\ No newline at end of file
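The training loop above drives the learning rate with `cosine_lr_schedule` from the project's `utils` module. As a rough illustration of what a schedule of that shape typically does (a sketch only, not necessarily the project's exact implementation):

```python
import math
import torch

def cosine_lr_schedule_sketch(optimizer, epoch, max_epoch, init_lr, min_lr):
    # Decay the learning rate from init_lr to min_lr along half a cosine period.
    lr = (init_lr - min_lr) * 0.5 * (1.0 + math.cos(math.pi * epoch / max_epoch)) + min_lr
    for param_group in optimizer.param_groups:
        param_group["lr"] = lr

# Tiny demonstration with a throwaway parameter:
opt = torch.optim.AdamW([torch.nn.Parameter(torch.zeros(1))], lr=1e-5)
for epoch in range(5):
    cosine_lr_schedule_sketch(opt, epoch, max_epoch=5, init_lr=1e-5, min_lr=0.0)
    print(epoch, opt.param_groups[0]["lr"])
```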
diff --git a/spaces/doluvor/faster-whisper-webui/cli.py b/spaces/doluvor/faster-whisper-webui/cli.py
deleted file mode 100644
index e0e21f2a6255db83bbc2c6e5ad08c56e85f7ac9b..0000000000000000000000000000000000000000
--- a/spaces/doluvor/faster-whisper-webui/cli.py
+++ /dev/null
@@ -1,188 +0,0 @@
-import argparse
-import os
-import pathlib
-from urllib.parse import urlparse
-import warnings
-import numpy as np
-
-import torch
-from app import VadOptions, WhisperTranscriber
-from src.config import VAD_INITIAL_PROMPT_MODE_VALUES, ApplicationConfig, VadInitialPromptMode
-from src.download import download_url
-from src.languages import get_language_names
-
-from src.utils import optional_float, optional_int, str2bool
-from src.whisper.whisperFactory import create_whisper_container
-
-def cli():
- app_config = ApplicationConfig.create_default()
- whisper_models = app_config.get_model_names()
-
- # For the CLI, we fallback to saving the output to the current directory
- output_dir = app_config.output_dir if app_config.output_dir is not None else "."
-
- # Environment variable overrides
- default_whisper_implementation = os.environ.get("WHISPER_IMPLEMENTATION", app_config.whisper_implementation)
-
- parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
- parser.add_argument("audio", nargs="+", type=str, \
- help="audio file(s) to transcribe")
- parser.add_argument("--model", default=app_config.default_model_name, choices=whisper_models, \
- help="name of the Whisper model to use") # medium
- parser.add_argument("--model_dir", type=str, default=app_config.model_dir, \
- help="the path to save model files; uses ~/.cache/whisper by default")
- parser.add_argument("--device", default=app_config.device, \
- help="device to use for PyTorch inference")
- parser.add_argument("--output_dir", "-o", type=str, default=output_dir, \
- help="directory to save the outputs")
- parser.add_argument("--verbose", type=str2bool, default=app_config.verbose, \
- help="whether to print out the progress and debug messages")
- parser.add_argument("--whisper_implementation", type=str, default=default_whisper_implementation, choices=["whisper", "faster-whisper"],\
- help="the Whisper implementation to use")
-
- parser.add_argument("--task", type=str, default=app_config.task, choices=["transcribe", "translate"], \
- help="whether to perform X->X speech recognition ('transcribe') or X->English translation ('translate')")
- parser.add_argument("--language", type=str, default=app_config.language, choices=sorted(get_language_names()), \
- help="language spoken in the audio, specify None to perform language detection")
-
- parser.add_argument("--vad", type=str, default=app_config.default_vad, choices=["none", "silero-vad", "silero-vad-skip-gaps", "silero-vad-expand-into-gaps", "periodic-vad"], \
- help="The voice activity detection algorithm to use") # silero-vad
- parser.add_argument("--vad_initial_prompt_mode", type=str, default=app_config.vad_initial_prompt_mode, choices=VAD_INITIAL_PROMPT_MODE_VALUES, \
- help="Whether or not to prepend the initial prompt to each VAD segment (prepend_all_segments), or just the first segment (prepend_first_segment)") # prepend_first_segment
- parser.add_argument("--vad_merge_window", type=optional_float, default=app_config.vad_merge_window, \
- help="The window size (in seconds) to merge voice segments")
- parser.add_argument("--vad_max_merge_size", type=optional_float, default=app_config.vad_max_merge_size,\
- help="The maximum size (in seconds) of a voice segment")
- parser.add_argument("--vad_padding", type=optional_float, default=app_config.vad_padding, \
- help="The padding (in seconds) to add to each voice segment")
- parser.add_argument("--vad_prompt_window", type=optional_float, default=app_config.vad_prompt_window, \
- help="The window size of the prompt to pass to Whisper")
- parser.add_argument("--vad_cpu_cores", type=int, default=app_config.vad_cpu_cores, \
- help="The number of CPU cores to use for VAD pre-processing.") # 1
- parser.add_argument("--vad_parallel_devices", type=str, default=app_config.vad_parallel_devices, \
- help="A commma delimited list of CUDA devices to use for parallel processing. If None, disable parallel processing.") # ""
- parser.add_argument("--auto_parallel", type=bool, default=app_config.auto_parallel, \
- help="True to use all available GPUs and CPU cores for processing. Use vad_cpu_cores/vad_parallel_devices to specify the number of CPU cores/GPUs to use.") # False
-
- parser.add_argument("--temperature", type=float, default=app_config.temperature, \
- help="temperature to use for sampling")
- parser.add_argument("--best_of", type=optional_int, default=app_config.best_of, \
- help="number of candidates when sampling with non-zero temperature")
- parser.add_argument("--beam_size", type=optional_int, default=app_config.beam_size, \
- help="number of beams in beam search, only applicable when temperature is zero")
- parser.add_argument("--patience", type=float, default=app_config.patience, \
- help="optional patience value to use in beam decoding, as in https://arxiv.org/abs/2204.05424, the default (1.0) is equivalent to conventional beam search")
- parser.add_argument("--length_penalty", type=float, default=app_config.length_penalty, \
- help="optional token length penalty coefficient (alpha) as in https://arxiv.org/abs/1609.08144, uses simple lengt normalization by default")
-
- parser.add_argument("--suppress_tokens", type=str, default=app_config.suppress_tokens, \
- help="comma-separated list of token ids to suppress during sampling; '-1' will suppress most special characters except common punctuations")
- parser.add_argument("--initial_prompt", type=str, default=app_config.initial_prompt, \
- help="optional text to provide as a prompt for the first window.")
- parser.add_argument("--condition_on_previous_text", type=str2bool, default=app_config.condition_on_previous_text, \
- help="if True, provide the previous output of the model as a prompt for the next window; disabling may make the text inconsistent across windows, but the model becomes less prone to getting stuck in a failure loop")
- parser.add_argument("--fp16", type=str2bool, default=app_config.fp16, \
- help="whether to perform inference in fp16; True by default")
- parser.add_argument("--compute_type", type=str, default=app_config.compute_type, choices=["default", "auto", "int8", "int8_float16", "int16", "float16", "float32"], \
- help="the compute type to use for inference")
-
- parser.add_argument("--temperature_increment_on_fallback", type=optional_float, default=app_config.temperature_increment_on_fallback, \
- help="temperature to increase when falling back when the decoding fails to meet either of the thresholds below")
- parser.add_argument("--compression_ratio_threshold", type=optional_float, default=app_config.compression_ratio_threshold, \
- help="if the gzip compression ratio is higher than this value, treat the decoding as failed")
- parser.add_argument("--logprob_threshold", type=optional_float, default=app_config.logprob_threshold, \
- help="if the average log probability is lower than this value, treat the decoding as failed")
- parser.add_argument("--no_speech_threshold", type=optional_float, default=app_config.no_speech_threshold, \
- help="if the probability of the <|nospeech|> token is higher than this value AND the decoding has failed due to `logprob_threshold`, consider the segment as silence")
-
- parser.add_argument("--word_timestamps", type=str2bool, default=app_config.word_timestamps,
- help="(experimental) extract word-level timestamps and refine the results based on them")
- parser.add_argument("--prepend_punctuations", type=str, default=app_config.prepend_punctuations,
- help="if word_timestamps is True, merge these punctuation symbols with the next word")
- parser.add_argument("--append_punctuations", type=str, default=app_config.append_punctuations,
- help="if word_timestamps is True, merge these punctuation symbols with the previous word")
- parser.add_argument("--highlight_words", type=str2bool, default=app_config.highlight_words,
- help="(requires --word_timestamps True) underline each word as it is spoken in srt and vtt")
- parser.add_argument("--threads", type=optional_int, default=0,
- help="number of threads used by torch for CPU inference; supercedes MKL_NUM_THREADS/OMP_NUM_THREADS")
-
- args = parser.parse_args().__dict__
- model_name: str = args.pop("model")
- model_dir: str = args.pop("model_dir")
- output_dir: str = args.pop("output_dir")
- device: str = args.pop("device")
- os.makedirs(output_dir, exist_ok=True)
-
- if (threads := args.pop("threads")) > 0:
- torch.set_num_threads(threads)
-
- whisper_implementation = args.pop("whisper_implementation")
- print(f"Using {whisper_implementation} for Whisper")
-
- if model_name.endswith(".en") and args["language"] not in {"en", "English"}:
- warnings.warn(f"{model_name} is an English-only model but received '{args['language']}'; using English instead.")
- args["language"] = "en"
-
- temperature = args.pop("temperature")
- temperature_increment_on_fallback = args.pop("temperature_increment_on_fallback")
- if temperature_increment_on_fallback is not None:
- temperature = tuple(np.arange(temperature, 1.0 + 1e-6, temperature_increment_on_fallback))
- else:
- temperature = [temperature]
-
- vad = args.pop("vad")
- vad_initial_prompt_mode = args.pop("vad_initial_prompt_mode")
- vad_merge_window = args.pop("vad_merge_window")
- vad_max_merge_size = args.pop("vad_max_merge_size")
- vad_padding = args.pop("vad_padding")
- vad_prompt_window = args.pop("vad_prompt_window")
- vad_cpu_cores = args.pop("vad_cpu_cores")
- auto_parallel = args.pop("auto_parallel")
-
- compute_type = args.pop("compute_type")
- highlight_words = args.pop("highlight_words")
-
- transcriber = WhisperTranscriber(delete_uploaded_files=False, vad_cpu_cores=vad_cpu_cores, app_config=app_config)
- transcriber.set_parallel_devices(args.pop("vad_parallel_devices"))
- transcriber.set_auto_parallel(auto_parallel)
-
- model = create_whisper_container(whisper_implementation=whisper_implementation, model_name=model_name,
- device=device, compute_type=compute_type, download_root=model_dir, models=app_config.models)
-
- if (transcriber._has_parallel_devices()):
- print("Using parallel devices:", transcriber.parallel_device_list)
-
- for audio_path in args.pop("audio"):
- sources = []
-
- # Detect URL and download the audio
- if (uri_validator(audio_path)):
- # Download from YouTube/URL directly
- for source_path in download_url(audio_path, maxDuration=-1, destinationDirectory=output_dir, playlistItems=None):
- source_name = os.path.basename(source_path)
- sources.append({ "path": source_path, "name": source_name })
- else:
- sources.append({ "path": audio_path, "name": os.path.basename(audio_path) })
-
- for source in sources:
- source_path = source["path"]
- source_name = source["name"]
-
- vadOptions = VadOptions(vad, vad_merge_window, vad_max_merge_size, vad_padding, vad_prompt_window,
- VadInitialPromptMode.from_string(vad_initial_prompt_mode))
-
- result = transcriber.transcribe_file(model, source_path, temperature=temperature, vadOptions=vadOptions, **args)
-
- transcriber.write_result(result, source_name, output_dir, highlight_words)
-
- transcriber.close()
-
-def uri_validator(x):
- try:
- result = urlparse(x)
- return all([result.scheme, result.netloc])
- except:
- return False
-
-if __name__ == '__main__':
- cli()
\ No newline at end of file
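One line in the CLI above that is easy to misread: when `--temperature_increment_on_fallback` is given, the single temperature is expanded into a tuple of fallback temperatures that decoding retries in order. A quick worked example of that expression, with values chosen for illustration:

```python
import numpy as np

temperature = 0.0
temperature_increment_on_fallback = 0.2

# Same expression as in cli.py: step from the base temperature up to 1.0 inclusive.
temperatures = tuple(np.arange(temperature, 1.0 + 1e-6, temperature_increment_on_fallback))
print(temperatures)  # approximately (0.0, 0.2, 0.4, 0.6, 0.8, 1.0)
```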
diff --git a/spaces/dongsiqie/Code-Interpreter/response_parser.py b/spaces/dongsiqie/Code-Interpreter/response_parser.py
deleted file mode 100644
index d74081e400f8e1356fc47c61938a22b0e834b517..0000000000000000000000000000000000000000
--- a/spaces/dongsiqie/Code-Interpreter/response_parser.py
+++ /dev/null
@@ -1,200 +0,0 @@
-from abc import ABCMeta, abstractmethod
-from functional import *
-
-
-class ChoiceStrategy(metaclass=ABCMeta):
- def __init__(self, choice):
- self.choice = choice
- self.delta = choice['delta']
-
- @abstractmethod
- def support(self):
- pass
-
- @abstractmethod
- def execute(self, bot_backend: BotBackend, history: List, whether_exit: bool):
- pass
-
-
-class RoleChoiceStrategy(ChoiceStrategy):
-
- def support(self):
- return 'role' in self.delta
-
- def execute(self, bot_backend: BotBackend, history: List, whether_exit: bool):
- bot_backend.set_assistant_role_name(assistant_role_name=self.delta['role'])
- return history, whether_exit
-
-
-class ContentChoiceStrategy(ChoiceStrategy):
- def support(self):
- return 'content' in self.delta and self.delta['content'] is not None
-
- def execute(self, bot_backend: BotBackend, history: List, whether_exit: bool):
- # null value of content often occur in function call:
- # {
- # "role": "assistant",
- # "content": null,
- # "function_call": {
- # "name": "python",
- # "arguments": ""
- # }
- # }
- bot_backend.add_content(content=self.delta.get('content', ''))
- history[-1][1] = bot_backend.content
- return history, whether_exit
-
-
-class NameFunctionCallChoiceStrategy(ChoiceStrategy):
- def support(self):
- return 'function_call' in self.delta and 'name' in self.delta['function_call']
-
- def execute(self, bot_backend: BotBackend, history: List, whether_exit: bool):
- function_dict = bot_backend.jupyter_kernel.available_functions
- bot_backend.set_function_name(function_name=self.delta['function_call']['name'])
- bot_backend.copy_current_bot_history(bot_history=history)
- if bot_backend.function_name not in function_dict:
- history.append(
- [
- None,
- f'GPT attempted to call a function that does '
- f'not exist: {bot_backend.function_name}\n '
- ]
- )
- whether_exit = True
-
- return history, whether_exit
-
-
-class ArgumentsFunctionCallChoiceStrategy(ChoiceStrategy):
-
- def support(self):
- return 'function_call' in self.delta and 'arguments' in self.delta['function_call']
-
- def execute(self, bot_backend: BotBackend, history: List, whether_exit: bool):
- bot_backend.add_function_args_str(function_args_str=self.delta['function_call']['arguments'])
-
- if bot_backend.function_name == 'python': # handle hallucinatory function calls
- '''
- In practice, we have noticed that GPT, especially GPT-3.5, may occasionally produce hallucinatory
- function calls. These calls involve a non-existent function named `python` with arguments consisting
- solely of raw code text (not a JSON format).
- '''
- temp_code_str = bot_backend.function_args_str
- bot_backend.update_display_code_block(
- display_code_block="\n🔴Working:\n```python\n{}\n```".format(temp_code_str)
- )
- history = copy.deepcopy(bot_backend.bot_history)
- history[-1][1] += bot_backend.display_code_block
- else:
- temp_code_str = parse_json(function_args=bot_backend.function_args_str, finished=False)
- if temp_code_str is not None:
- bot_backend.update_display_code_block(
- display_code_block="\n🔴Working:\n```python\n{}\n```".format(
- temp_code_str
- )
- )
- history = copy.deepcopy(bot_backend.bot_history)
- history[-1][1] += bot_backend.display_code_block
-
- return history, whether_exit
-
-
-class FinishReasonChoiceStrategy(ChoiceStrategy):
- def support(self):
- return self.choice['finish_reason'] is not None
-
- def execute(self, bot_backend: BotBackend, history: List, whether_exit: bool):
- function_dict = bot_backend.jupyter_kernel.available_functions
-
- if bot_backend.content:
- bot_backend.add_gpt_response_content_message()
-
- bot_backend.update_finish_reason(finish_reason=self.choice['finish_reason'])
- if bot_backend.finish_reason == 'function_call':
- try:
-
- code_str = self.get_code_str(bot_backend)
-
- bot_backend.update_display_code_block(
- display_code_block="\n🟢Working:\n```python\n{}\n```".format(code_str)
- )
- history = copy.deepcopy(bot_backend.bot_history)
- history[-1][1] += bot_backend.display_code_block
-
- # function response
- text_to_gpt, content_to_display = function_dict[
- bot_backend.function_name
- ](code_str)
-
- # add function call to conversion
- bot_backend.add_function_call_response_message(function_response=text_to_gpt, save_tokens=True)
-
- add_function_response_to_bot_history(
- content_to_display=content_to_display, history=history, unique_id=bot_backend.unique_id
- )
-
- except json.JSONDecodeError:
- history.append(
- [None, f"GPT generate wrong function args: {bot_backend.function_args_str}"]
- )
- whether_exit = True
- return history, whether_exit
-
- except Exception as e:
- history.append([None, f'Backend error: {e}'])
- whether_exit = True
- return history, whether_exit
-
- bot_backend.reset_gpt_response_log_values(exclude=['finish_reason'])
-
- return history, whether_exit
-
- @staticmethod
- def get_code_str(bot_backend):
- if bot_backend.function_name == 'python':
- code_str = bot_backend.function_args_str
- else:
- code_str = parse_json(function_args=bot_backend.function_args_str, finished=True)
- if code_str is None:
- # json.JSONDecodeError requires (msg, doc, pos); raising the bare class would fail with a TypeError
- raise json.JSONDecodeError("Incomplete JSON in function arguments", bot_backend.function_args_str, 0)
- return code_str
-
-
-class ChoiceHandler:
- strategies = [
- RoleChoiceStrategy, ContentChoiceStrategy, NameFunctionCallChoiceStrategy,
- ArgumentsFunctionCallChoiceStrategy, FinishReasonChoiceStrategy
- ]
-
- def __init__(self, choice):
- self.choice = choice
-
- def handle(self, bot_backend: BotBackend, history: List, whether_exit: bool):
- for Strategy in self.strategies:
- strategy_instance = Strategy(choice=self.choice)
- if not strategy_instance.support():
- continue
- history, whether_exit = strategy_instance.execute(
- bot_backend=bot_backend,
- history=history,
- whether_exit=whether_exit
- )
- return history, whether_exit
-
-
-def parse_response(chunk, history, bot_backend: BotBackend):
- """
- :return: history, whether_exit
- """
- whether_exit = False
- if chunk['choices']:
- choice = chunk['choices'][0]
- choice_handler = ChoiceHandler(choice=choice)
- history, whether_exit = choice_handler.handle(
- history=history,
- bot_backend=bot_backend,
- whether_exit=whether_exit
- )
-
- return history, whether_exit
diff --git a/spaces/dt/ner_spanish/app.py b/spaces/dt/ner_spanish/app.py
deleted file mode 100644
index 608daca2e26cc9f80d8d1273999de6f11464f1b2..0000000000000000000000000000000000000000
--- a/spaces/dt/ner_spanish/app.py
+++ /dev/null
@@ -1,96 +0,0 @@
-import os
-import gradio as gr
-from transformers import AutoTokenizer, PreTrainedTokenizerFast
-from transformers import AutoModelForTokenClassification
-from transformers import pipeline
-import spacy
-from spacy import displacy
-from spacy.tokens import Span
-
-
-# ============ INPUT =================
-os.system("python -m spacy download es_core_news_sm")
-colors = {
- "LOC": "#ff5e5e",
- "MISC": "#ff9999",
- "ORG": "#ffd699",
- "PER": "#80c5c5",
-}
-model_name = "mrm8488/bert-spanish-cased-finetuned-ner"
-
-nlp = spacy.load("es_core_news_sm")  # loaded so we can use displacy to render the entities
-nlp.disable_pipes("ner")
-
-# ============ Footer, title, descriptions and examples ===============
-article = "
"
-
-title = "NER en español"
-description = "Esta aplicación es para detección de entidades nombradas en Español"
-examples = ["Hola me llamo David Betancur y vivo en Madrid"]
-
-# =============== Modelo ===============
-
-model = AutoModelForTokenClassification.from_pretrained(model_name)
-tokenizer = AutoTokenizer.from_pretrained(model_name)
-ner_pipe = pipeline("ner", model=model, tokenizer=tokenizer)
-
-# =============== Function ===============
-def ner(input_text):
- entities = ner_pipe(input_text, aggregation_strategy="first")
-
- doc = nlp(input_text)
-
- potential_entities = []
-
- for entity in entities:
- start = entity["start"]
- end = entity["end"]
- label = entity["entity_group"]
-
- ent = doc.char_span(start, end, label=label)
- if ent != None:
- doc.ents += (ent,)
- else:
- potential_entities.append(entity)
-
- potential_entities.append({"entity_group": "NONE", "start": -1, "end": -1})
-
- start = potential_entities[0]["start"]
- end = potential_entities[0]["end"]
- label = potential_entities[0]["entity_group"]
-
- for item in potential_entities:
- if item["entity_group"] == label and item["start"] == end:
- end = item["end"]
- continue
- else:
- if item["start"] != start:
- ent = doc.char_span(start, end, label=label)
- doc.ents += (ent,)
-
- start = item["start"]
- end = item["end"]
- label = item["entity_group"]
-
- options = {"ents": colors.keys(), "colors": colors}
-
- output = displacy.render(doc, style="ent", options=options)
- return output
-
-# =============== Interface ===============
-interface = gr.Interface(
- title=title,
- description=description,
- article=article,
- allow_screenshot=False,
- allow_flagging=False,
- fn=ner,
- inputs=gr.inputs.Textbox(placeholder="Insertar el texto para analizar", lines=10),
- outputs=gr.outputs.HTML(),
- examples=examples
- )
-
-interface.launch()
\ No newline at end of file
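For reference, the aggregated pipeline output that `ner()` consumes is a list of dicts with `entity_group`, `start`, `end` and `word` keys; `aggregation_strategy="first"` merges word pieces into whole entities so they can be mapped onto spaCy char spans. A small standalone sketch that prints those fields (it downloads the same model, and the exact groupings depend on the model weights):

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

model_name = "mrm8488/bert-spanish-cased-finetuned-ner"
ner_pipe = pipeline(
    "ner",
    model=AutoModelForTokenClassification.from_pretrained(model_name),
    tokenizer=AutoTokenizer.from_pretrained(model_name),
)

for ent in ner_pipe("Hola me llamo David Betancur y vivo en Madrid", aggregation_strategy="first"):
    print(ent["entity_group"], ent["start"], ent["end"], ent["word"])
```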
diff --git a/spaces/echometerain/whos-that-pokemon/app.py b/spaces/echometerain/whos-that-pokemon/app.py
deleted file mode 100644
index 901694601dbe34d29529c1dd362f2fb25d232a8d..0000000000000000000000000000000000000000
--- a/spaces/echometerain/whos-that-pokemon/app.py
+++ /dev/null
@@ -1,26 +0,0 @@
-from fastcore.all import *
-from fastai.vision.all import *
-import io
-import random
-import requests
-import pandas as pd
-import gradio as gr
-import duckduckgo_search as ddg
-import PIL
-
-learn = load_learner('./model.pkl')
-
-
-def predictor(img):
- item, _, probs = learn.predict(img)
- response = requests.get(ddg.ddg_images(
- f"{item} pokemon")[random.randint(0, 9)]["image"])
- rand_img = PIL.Image.open(io.BytesIO(response.content))
- df = pd.DataFrame(data=probs.numpy()*100, columns=["%"])
- df.insert(0, "Pokemon", learn.dls.vocab)
- df.sort_values(inplace=True, by="%", ascending=False)
- return f"It's a: {item}!", rand_img, df.head()
-
-
-gr.Interface(fn=predictor, inputs="image", outputs=[
- "text", gr.Image(height=256), gr.DataFrame(show_label=True)], allow_flagging="never", title="Who's That Pokemon?", live=True).launch()
diff --git a/spaces/elumamai/AI-ChatBot/README.md b/spaces/elumamai/AI-ChatBot/README.md
deleted file mode 100644
index 8a3ff0e949c94eee0924990b2d1c9d3226cc1522..0000000000000000000000000000000000000000
--- a/spaces/elumamai/AI-ChatBot/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: AI ChatBot
-emoji: 📈
-colorFrom: blue
-colorTo: purple
-sdk: gradio
-sdk_version: 3.38.0
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/emc348/faces-through-time/criteria/backbones/mobilefacenet.py b/spaces/emc348/faces-through-time/criteria/backbones/mobilefacenet.py
deleted file mode 100644
index 87731491d76f9ff61cc70e57bb3f18c54fae308c..0000000000000000000000000000000000000000
--- a/spaces/emc348/faces-through-time/criteria/backbones/mobilefacenet.py
+++ /dev/null
@@ -1,130 +0,0 @@
-'''
-Adapted from https://github.com/cavalleria/cavaface.pytorch/blob/master/backbone/mobilefacenet.py
-Original author cavalleria
-'''
-
-import torch.nn as nn
-from torch.nn import Linear, Conv2d, BatchNorm1d, BatchNorm2d, PReLU, Sequential, Module
-import torch
-
-
-class Flatten(Module):
- def forward(self, x):
- return x.view(x.size(0), -1)
-
-
-class ConvBlock(Module):
- def __init__(self, in_c, out_c, kernel=(1, 1), stride=(1, 1), padding=(0, 0), groups=1):
- super(ConvBlock, self).__init__()
- self.layers = nn.Sequential(
- Conv2d(in_c, out_c, kernel, groups=groups, stride=stride, padding=padding, bias=False),
- BatchNorm2d(num_features=out_c),
- PReLU(num_parameters=out_c)
- )
-
- def forward(self, x):
- return self.layers(x)
-
-
-class LinearBlock(Module):
- def __init__(self, in_c, out_c, kernel=(1, 1), stride=(1, 1), padding=(0, 0), groups=1):
- super(LinearBlock, self).__init__()
- self.layers = nn.Sequential(
- Conv2d(in_c, out_c, kernel, stride, padding, groups=groups, bias=False),
- BatchNorm2d(num_features=out_c)
- )
-
- def forward(self, x):
- return self.layers(x)
-
-
-class DepthWise(Module):
- def __init__(self, in_c, out_c, residual=False, kernel=(3, 3), stride=(2, 2), padding=(1, 1), groups=1):
- super(DepthWise, self).__init__()
- self.residual = residual
- self.layers = nn.Sequential(
- ConvBlock(in_c, out_c=groups, kernel=(1, 1), padding=(0, 0), stride=(1, 1)),
- ConvBlock(groups, groups, groups=groups, kernel=kernel, padding=padding, stride=stride),
- LinearBlock(groups, out_c, kernel=(1, 1), padding=(0, 0), stride=(1, 1))
- )
-
- def forward(self, x):
- short_cut = None
- if self.residual:
- short_cut = x
- x = self.layers(x)
- if self.residual:
- output = short_cut + x
- else:
- output = x
- return output
-
-
-class Residual(Module):
- def __init__(self, c, num_block, groups, kernel=(3, 3), stride=(1, 1), padding=(1, 1)):
- super(Residual, self).__init__()
- modules = []
- for _ in range(num_block):
- modules.append(DepthWise(c, c, True, kernel, stride, padding, groups))
- self.layers = Sequential(*modules)
-
- def forward(self, x):
- return self.layers(x)
-
-
-class GDC(Module):
- def __init__(self, embedding_size):
- super(GDC, self).__init__()
- self.layers = nn.Sequential(
- LinearBlock(512, 512, groups=512, kernel=(7, 7), stride=(1, 1), padding=(0, 0)),
- Flatten(),
- Linear(512, embedding_size, bias=False),
- BatchNorm1d(embedding_size))
-
- def forward(self, x):
- return self.layers(x)
-
-
-class MobileFaceNet(Module):
- def __init__(self, fp16=False, num_features=512):
- super(MobileFaceNet, self).__init__()
- scale = 2
- self.fp16 = fp16
- self.layers = nn.Sequential(
- ConvBlock(3, 64 * scale, kernel=(3, 3), stride=(2, 2), padding=(1, 1)),
- ConvBlock(64 * scale, 64 * scale, kernel=(3, 3), stride=(1, 1), padding=(1, 1), groups=64),
- DepthWise(64 * scale, 64 * scale, kernel=(3, 3), stride=(2, 2), padding=(1, 1), groups=128),
- Residual(64 * scale, num_block=4, groups=128, kernel=(3, 3), stride=(1, 1), padding=(1, 1)),
- DepthWise(64 * scale, 128 * scale, kernel=(3, 3), stride=(2, 2), padding=(1, 1), groups=256),
- Residual(128 * scale, num_block=6, groups=256, kernel=(3, 3), stride=(1, 1), padding=(1, 1)),
- DepthWise(128 * scale, 128 * scale, kernel=(3, 3), stride=(2, 2), padding=(1, 1), groups=512),
- Residual(128 * scale, num_block=2, groups=256, kernel=(3, 3), stride=(1, 1), padding=(1, 1)),
- )
- self.conv_sep = ConvBlock(128 * scale, 512, kernel=(1, 1), stride=(1, 1), padding=(0, 0))
- self.features = GDC(num_features)
- self._initialize_weights()
-
- def _initialize_weights(self):
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
- if m.bias is not None:
- m.bias.data.zero_()
- elif isinstance(m, nn.BatchNorm2d):
- m.weight.data.fill_(1)
- m.bias.data.zero_()
- elif isinstance(m, nn.Linear):
- nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
- if m.bias is not None:
- m.bias.data.zero_()
-
- def forward(self, x):
- with torch.cuda.amp.autocast(self.fp16):
- x = self.layers(x)
- x = self.conv_sep(x.float() if self.fp16 else x)
- x = self.features(x)
- return x
-
-
-def get_mbf(fp16, num_features):
- return MobileFaceNet(fp16, num_features)
\ No newline at end of file
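A quick shape sanity check for the backbone above. This is a sketch: it assumes the file is importable as `mobilefacenet` and that inputs are 112x112 aligned face crops, the usual size for this family of recognition backbones.

```python
import torch
from mobilefacenet import get_mbf  # the module shown above

net = get_mbf(fp16=False, num_features=512)
net.eval()
with torch.no_grad():
    emb = net(torch.randn(1, 3, 112, 112))  # one fake 112x112 RGB face crop
print(emb.shape)  # expected: torch.Size([1, 512])
```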
diff --git a/spaces/erastorgueva-nv/NeMo-Forced-Aligner/align.py b/spaces/erastorgueva-nv/NeMo-Forced-Aligner/align.py
deleted file mode 100644
index 77ab3111fd911162a348874afbde126d0bde270f..0000000000000000000000000000000000000000
--- a/spaces/erastorgueva-nv/NeMo-Forced-Aligner/align.py
+++ /dev/null
@@ -1,352 +0,0 @@
-# Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import copy
-import math
-import os
-from dataclasses import dataclass, field, is_dataclass
-from pathlib import Path
-from typing import List, Optional
-
-import torch
-from omegaconf import OmegaConf
-from utils.data_prep import (
- add_t_start_end_to_utt_obj,
- get_batch_starts_ends,
- get_batch_variables,
- get_manifest_lines_batch,
- is_entry_in_all_lines,
- is_entry_in_any_lines,
-)
-from utils.make_ass_files import make_ass_files
-from utils.make_ctm_files import make_ctm_files
-from utils.make_output_manifest import write_manifest_out_line
-from utils.viterbi_decoding import viterbi_decoding
-
-from nemo.collections.asr.models.ctc_models import EncDecCTCModel
-from nemo.collections.asr.models.hybrid_rnnt_ctc_models import EncDecHybridRNNTCTCModel
-from nemo.collections.asr.parts.utils.streaming_utils import FrameBatchASR
-from nemo.collections.asr.parts.utils.transcribe_utils import setup_model
-from nemo.core.config import hydra_runner
-from nemo.utils import logging
-
-"""
-Align the utterances in manifest_filepath.
-Results are saved in ctm files in output_dir.
-
-Arguments:
- pretrained_name: string specifying the name of a CTC NeMo ASR model which will be automatically downloaded
- from NGC and used for generating the log-probs which we will use to do alignment.
- Note: NFA can only use CTC models (not Transducer models) at the moment.
- model_path: string specifying the local filepath to a CTC NeMo ASR model which will be used to generate the
- log-probs which we will use to do alignment.
- Note: NFA can only use CTC models (not Transducer models) at the moment.
- Note: if a model_path is provided, it will override the pretrained_name.
- manifest_filepath: filepath to the manifest of the data you want to align,
- containing 'audio_filepath' and 'text' fields.
- output_dir: the folder where output CTM files and new JSON manifest will be saved.
- align_using_pred_text: if True, will transcribe the audio using the specified model and then use that transcription
- as the reference text for the forced alignment.
- transcribe_device: None, or a string specifying the device that will be used for generating log-probs (i.e. "transcribing").
- The string needs to be in a format recognized by torch.device(). If None, NFA will set it to 'cuda' if it is available
- (otherwise will set it to 'cpu').
- viterbi_device: None, or string specifying the device that will be used for doing Viterbi decoding.
- The string needs to be in a format recognized by torch.device(). If None, NFA will set it to 'cuda' if it is available
- (otherwise will set it to 'cpu').
- batch_size: int specifying batch size that will be used for generating log-probs and doing Viterbi decoding.
- use_local_attention: boolean flag specifying whether to try to use local attention for the ASR Model (will only
- work if the ASR Model is a Conformer model). If local attention is used, we will set the local attention context
- size to [64,64].
- additional_segment_grouping_separator: an optional string used to separate the text into smaller segments.
- If this is not specified, then the whole text will be treated as a single segment.
- remove_blank_tokens_from_ctm: a boolean denoting whether to remove blank tokens from token-level output CTMs.
- audio_filepath_parts_in_utt_id: int specifying how many of the 'parts' of the audio_filepath
- we will use (starting from the final part of the audio_filepath) to determine the
- utt_id that will be used in the CTM files. Note also that any spaces that are present in the audio_filepath
- will be replaced with dashes, so as not to change the number of space-separated elements in the
- CTM files.
- e.g. if audio_filepath is "/a/b/c/d/e 1.wav" and audio_filepath_parts_in_utt_id is 1 => utt_id will be "e1"
- e.g. if audio_filepath is "/a/b/c/d/e 1.wav" and audio_filepath_parts_in_utt_id is 2 => utt_id will be "d_e1"
- e.g. if audio_filepath is "/a/b/c/d/e 1.wav" and audio_filepath_parts_in_utt_id is 3 => utt_id will be "c_d_e1"
- use_buffered_infer: False by default; if set to True, buffered streaming is used to get the logits for alignment.
- This flag is useful when aligning large audio files.
- However, chunked streaming inference currently does not support batch inference,
- which means that even if you set batch_size > 1, utterances will be processed one by one instead of
- as a whole batch.
- chunk_len_in_secs: float chunk length in seconds
- total_buffer_in_secs: float Length of buffer (chunk + left and right padding) in seconds
- chunk_batch_size: int batch size for buffered chunk inference,
- which will cut one audio into segments and do inference on chunk_batch_size segments at a time
-
- simulate_cache_aware_streaming: False by default; if set to True, cache-aware streaming is used to get the logits for alignment
-
- save_output_file_formats: List of strings specifying what type of output files to save (default: ["ctm", "ass"])
- ctm_file_config: CTMFileConfig to specify the configuration of the output CTM files
- ass_file_config: ASSFileConfig to specify the configuration of the output ASS files
-"""
-
-
-@dataclass
-class CTMFileConfig:
- remove_blank_tokens: bool = False
- # minimum duration (in seconds) for timestamps in the CTM. If any line in the CTM has a
- # duration lower than this, it will be enlarged from the middle outwards until it
- # meets the minimum_timestamp_duration, or reaches the beginning or end of the audio file.
- # Note that this may cause timestamps to overlap.
- minimum_timestamp_duration: float = 0
-
-
-@dataclass
-class ASSFileConfig:
- fontsize: int = 20
- vertical_alignment: str = "center"
- # if resegment_text_to_fill_space is True, the ASS files will use new segments
- # such that each segment will not take up more than (approximately) max_lines_per_segment
- # when the ASS file is applied to a video
- resegment_text_to_fill_space: bool = False
- max_lines_per_segment: int = 2
- text_already_spoken_rgb: List[int] = field(default_factory=lambda: [49, 46, 61]) # dark gray
- text_being_spoken_rgb: List[int] = field(default_factory=lambda: [57, 171, 9]) # dark green
- text_not_yet_spoken_rgb: List[int] = field(default_factory=lambda: [194, 193, 199]) # light gray
-
-
-@dataclass
-class AlignmentConfig:
- # Required configs
- pretrained_name: Optional[str] = None
- model_path: Optional[str] = None
- manifest_filepath: Optional[str] = None
- output_dir: Optional[str] = None
-
- # General configs
- align_using_pred_text: bool = False
- transcribe_device: Optional[str] = None
- viterbi_device: Optional[str] = None
- batch_size: int = 1
- use_local_attention: bool = True
- additional_segment_grouping_separator: Optional[str] = None
- audio_filepath_parts_in_utt_id: int = 1
-
- # Buffered chunked streaming configs
- use_buffered_chunked_streaming: bool = False
- chunk_len_in_secs: float = 1.6
- total_buffer_in_secs: float = 4.0
- chunk_batch_size: int = 32
-
- # Cache aware streaming configs
- simulate_cache_aware_streaming: Optional[bool] = False
-
- # Output file configs
- save_output_file_formats: List[str] = field(default_factory=lambda: ["ctm", "ass"])
- ctm_file_config: CTMFileConfig = CTMFileConfig()
- ass_file_config: ASSFileConfig = ASSFileConfig()
-
-
-@hydra_runner(config_name="AlignmentConfig", schema=AlignmentConfig)
-def main(cfg: AlignmentConfig):
-
- logging.info(f'Hydra config: {OmegaConf.to_yaml(cfg)}')
-
- if is_dataclass(cfg):
- cfg = OmegaConf.structured(cfg)
-
- # Validate config
- if cfg.model_path is None and cfg.pretrained_name is None:
- raise ValueError("Both cfg.model_path and cfg.pretrained_name cannot be None")
-
- if cfg.model_path is not None and cfg.pretrained_name is not None:
- raise ValueError("One of cfg.model_path and cfg.pretrained_name must be None")
-
- if cfg.manifest_filepath is None:
- raise ValueError("cfg.manifest_filepath must be specified")
-
- if cfg.output_dir is None:
- raise ValueError("cfg.output_dir must be specified")
-
- if cfg.batch_size < 1:
- raise ValueError("cfg.batch_size cannot be zero or a negative number")
-
- if cfg.additional_segment_grouping_separator == "" or cfg.additional_segment_grouping_separator == " ":
- raise ValueError("cfg.additional_grouping_separator cannot be empty string or space character")
-
- if cfg.ctm_file_config.minimum_timestamp_duration < 0:
- raise ValueError("cfg.minimum_timestamp_duration cannot be a negative number")
-
- if cfg.ass_file_config.vertical_alignment not in ["top", "center", "bottom"]:
- raise ValueError("cfg.ass_file_config.vertical_alignment must be one of 'top', 'center' or 'bottom'")
-
- for rgb_list in [
- cfg.ass_file_config.text_already_spoken_rgb,
- cfg.ass_file_config.text_being_spoken_rgb,
- cfg.ass_file_config.text_not_yet_spoken_rgb,
- ]:
- if len(rgb_list) != 3:
- raise ValueError(
- "cfg.ass_file_config.text_already_spoken_rgb,"
- " cfg.ass_file_config.text_being_spoken_rgb,"
- " and cfg.ass_file_config.text_not_yet_spoken_rgb all need to contain"
- " exactly 3 elements."
- )
-
- # Validate manifest contents
- if not is_entry_in_all_lines(cfg.manifest_filepath, "audio_filepath"):
- raise RuntimeError(
- "At least one line in cfg.manifest_filepath does not contain an 'audio_filepath' entry. "
- "All lines must contain an 'audio_filepath' entry."
- )
-
- if cfg.align_using_pred_text:
- if is_entry_in_any_lines(cfg.manifest_filepath, "pred_text"):
- raise RuntimeError(
- "Cannot specify cfg.align_using_pred_text=True when the manifest at cfg.manifest_filepath "
- "contains 'pred_text' entries. This is because the audio will be transcribed and may produce "
- "a different 'pred_text'. This may cause confusion."
- )
- else:
- if not is_entry_in_all_lines(cfg.manifest_filepath, "text"):
- raise RuntimeError(
- "At least one line in cfg.manifest_filepath does not contain a 'text' entry. "
- "NFA requires all lines to contain a 'text' entry when cfg.align_using_pred_text=False."
- )
-
- # init devices
- if cfg.transcribe_device is None:
- transcribe_device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
- else:
- transcribe_device = torch.device(cfg.transcribe_device)
- logging.info(f"Device to be used for transcription step (`transcribe_device`) is {transcribe_device}")
-
- if cfg.viterbi_device is None:
- viterbi_device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
- else:
- viterbi_device = torch.device(cfg.viterbi_device)
- logging.info(f"Device to be used for viterbi step (`viterbi_device`) is {viterbi_device}")
-
- if transcribe_device.type == 'cuda' or viterbi_device.type == 'cuda':
- logging.warning(
- 'One or both of transcribe_device and viterbi_device are GPUs. If you run into OOM errors '
- 'it may help to change both devices to be the CPU.'
- )
-
- # load model
- model, _ = setup_model(cfg, transcribe_device)
- model.eval()
-
- if isinstance(model, EncDecHybridRNNTCTCModel):
- model.change_decoding_strategy(decoder_type="ctc")
-
- if cfg.use_local_attention:
- logging.info(
- "Flag use_local_attention is set to True => will try to use local attention for model if it allows it"
- )
- model.change_attention_model(self_attention_model="rel_pos_local_attn", att_context_size=[64, 64])
-
- if not (isinstance(model, EncDecCTCModel) or isinstance(model, EncDecHybridRNNTCTCModel)):
- raise NotImplementedError(
- f"Model is not an instance of NeMo EncDecCTCModel or ENCDecHybridRNNTCTCModel."
- " Currently only instances of these models are supported"
- )
-
- if cfg.ctm_file_config.minimum_timestamp_duration > 0:
- logging.warning(
- f"cfg.ctm_file_config.minimum_timestamp_duration has been set to {cfg.ctm_file_config.minimum_timestamp_duration} seconds. "
- "This may cause the alignments for some tokens/words/additional segments to be overlapping."
- )
-
- buffered_chunk_params = {}
- if cfg.use_buffered_chunked_streaming:
- model_cfg = copy.deepcopy(model._cfg)
-
- OmegaConf.set_struct(model_cfg.preprocessor, False)
- # some changes for streaming scenario
- model_cfg.preprocessor.dither = 0.0
- model_cfg.preprocessor.pad_to = 0
-
- if model_cfg.preprocessor.normalize != "per_feature":
- logging.error(
- "Only EncDecCTCModelBPE models trained with per_feature normalization are supported currently"
- )
- # Disable config overwriting
- OmegaConf.set_struct(model_cfg.preprocessor, True)
-
- feature_stride = model_cfg.preprocessor['window_stride']
- model_stride_in_secs = feature_stride * cfg.model_downsample_factor
- total_buffer = cfg.total_buffer_in_secs
- chunk_len = float(cfg.chunk_len_in_secs)
- tokens_per_chunk = math.ceil(chunk_len / model_stride_in_secs)
- mid_delay = math.ceil((chunk_len + (total_buffer - chunk_len) / 2) / model_stride_in_secs)
- logging.info(f"tokens_per_chunk is {tokens_per_chunk}, mid_delay is {mid_delay}")
-
- model = FrameBatchASR(
- asr_model=model,
- frame_len=chunk_len,
- total_buffer=cfg.total_buffer_in_secs,
- batch_size=cfg.chunk_batch_size,
- )
- buffered_chunk_params = {
- "delay": mid_delay,
- "model_stride_in_secs": model_stride_in_secs,
- "tokens_per_chunk": tokens_per_chunk,
- }
- # get start and end line IDs of batches
- starts, ends = get_batch_starts_ends(cfg.manifest_filepath, cfg.batch_size)
-
- # init output_timestep_duration = None and we will calculate and update it during the first batch
- output_timestep_duration = None
-
- # init f_manifest_out
- os.makedirs(cfg.output_dir, exist_ok=True)
- tgt_manifest_name = str(Path(cfg.manifest_filepath).stem) + "_with_output_file_paths.json"
- tgt_manifest_filepath = str(Path(cfg.output_dir) / tgt_manifest_name)
- f_manifest_out = open(tgt_manifest_filepath, 'w')
-
- # get alignment and save in CTM batch-by-batch
- for start, end in zip(starts, ends):
- manifest_lines_batch = get_manifest_lines_batch(cfg.manifest_filepath, start, end)
-
- (log_probs_batch, y_batch, T_batch, U_batch, utt_obj_batch, output_timestep_duration,) = get_batch_variables(
- manifest_lines_batch,
- model,
- cfg.additional_segment_grouping_separator,
- cfg.align_using_pred_text,
- cfg.audio_filepath_parts_in_utt_id,
- output_timestep_duration,
- cfg.simulate_cache_aware_streaming,
- cfg.use_buffered_chunked_streaming,
- buffered_chunk_params,
- )
-
- alignments_batch = viterbi_decoding(log_probs_batch, y_batch, T_batch, U_batch, viterbi_device)
-
- for utt_obj, alignment_utt in zip(utt_obj_batch, alignments_batch):
-
- utt_obj = add_t_start_end_to_utt_obj(utt_obj, alignment_utt, output_timestep_duration)
-
- if "ctm" in cfg.save_output_file_formats:
- utt_obj = make_ctm_files(utt_obj, cfg.output_dir, cfg.ctm_file_config,)
-
- if "ass" in cfg.save_output_file_formats:
- utt_obj = make_ass_files(utt_obj, cfg.output_dir, cfg.ass_file_config)
-
- write_manifest_out_line(
- f_manifest_out, utt_obj,
- )
-
- f_manifest_out.close()
-
- return None
-
-
-if __name__ == "__main__":
- main()
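For reference, the manifest that `cfg.manifest_filepath` points at is a JSON-lines file in which every line must contain an `audio_filepath` entry and, unless `align_using_pred_text=True`, a `text` entry; this is what the validation block above checks. A tiny sketch that writes such a manifest, with made-up paths and text:

```python
import json

lines = [
    {"audio_filepath": "/data/audio/utt1.wav", "text": "hello world"},
    {"audio_filepath": "/data/audio/utt2.wav", "text": "forced alignment example"},
]
with open("manifest.json", "w") as f:
    for line in lines:
        f.write(json.dumps(line) + "\n")
```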
diff --git a/spaces/facebook/StyleNeRF/torch_utils/ops/fma.py b/spaces/facebook/StyleNeRF/torch_utils/ops/fma.py
deleted file mode 100644
index 51a45dfa0829987e8ee5214663e068cb3af2a8b9..0000000000000000000000000000000000000000
--- a/spaces/facebook/StyleNeRF/torch_utils/ops/fma.py
+++ /dev/null
@@ -1,60 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-"""Fused multiply-add, with slightly faster gradients than `torch.addcmul()`."""
-
-import torch
-
-#----------------------------------------------------------------------------
-
-def fma(a, b, c): # => a * b + c
- return _FusedMultiplyAdd.apply(a, b, c)
-
-#----------------------------------------------------------------------------
-
-class _FusedMultiplyAdd(torch.autograd.Function): # a * b + c
- @staticmethod
- def forward(ctx, a, b, c): # pylint: disable=arguments-differ
- out = torch.addcmul(c, a, b)
- ctx.save_for_backward(a, b)
- ctx.c_shape = c.shape
- return out
-
- @staticmethod
- def backward(ctx, dout): # pylint: disable=arguments-differ
- a, b = ctx.saved_tensors
- c_shape = ctx.c_shape
- da = None
- db = None
- dc = None
-
- if ctx.needs_input_grad[0]:
- da = _unbroadcast(dout * b, a.shape)
-
- if ctx.needs_input_grad[1]:
- db = _unbroadcast(dout * a, b.shape)
-
- if ctx.needs_input_grad[2]:
- dc = _unbroadcast(dout, c_shape)
-
- return da, db, dc
-
-#----------------------------------------------------------------------------
-
-def _unbroadcast(x, shape):
- extra_dims = x.ndim - len(shape)
- assert extra_dims >= 0
- dim = [i for i in range(x.ndim) if x.shape[i] > 1 and (i < extra_dims or shape[i - extra_dims] == 1)]
- if len(dim):
- x = x.sum(dim=dim, keepdim=True)
- if extra_dims:
- x = x.reshape(-1, *x.shape[extra_dims+1:])
- assert x.shape == shape
- return x
-
-#----------------------------------------------------------------------------
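A small usage sketch for `fma()` above: it computes the same value as `a * b + c`, and `_unbroadcast` reduces each gradient back to the shape of the corresponding input when broadcasting was involved. The import path assumes the repository layout shown in the diff header.

```python
import torch
from torch_utils.ops.fma import fma  # path as in the repository layout above

a = torch.randn(4, 3, requires_grad=True)
b = torch.randn(4, 3, requires_grad=True)
c = torch.randn(3, requires_grad=True)   # broadcast across the first dimension

out = fma(a, b, c)                       # same value as a * b + c
out.sum().backward()

print(torch.allclose(out, a * b + c))    # True
print(c.grad.shape)                      # torch.Size([3]); gradient reduced back to c's shape
```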
diff --git a/spaces/falterWliame/Face_Mask_Detection/Adobe Photoshop Lightroom CC (2018) 10.8.5 Crack Free Download.md b/spaces/falterWliame/Face_Mask_Detection/Adobe Photoshop Lightroom CC (2018) 10.8.5 Crack Free Download.md
deleted file mode 100644
index 822c7e560f818c24508bb78b77bafdc9bdffd764..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/Adobe Photoshop Lightroom CC (2018) 10.8.5 Crack Free Download.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
Adobe Photoshop Lightroom CC (2018) 10.8.5 Crack free download
-
-Download the latest version of Adobe Reader Version X. and save it to your desktop. Uncheck ... Pro 4.06 for After Effects (win)\keygen.zip probably a variant of Win32/Agent. ... Data\Sun\Java\Deployment\cache\6.0\23\20d6897-3201a4f9 a variant of ... [HKEY_LOCAL_MACHINE\software\GenArts\Sapphire ... 1fdad05405
-
-
-
diff --git a/spaces/fatiXbelha/sd/Download KKN di Desa Penari Full Movie Uncut 17 Streaming Legal Tanpa LK21.md b/spaces/fatiXbelha/sd/Download KKN di Desa Penari Full Movie Uncut 17 Streaming Legal Tanpa LK21.md
deleted file mode 100644
index 31a640a010e3096658c515b9486ce5c3bb045506..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Download KKN di Desa Penari Full Movie Uncut 17 Streaming Legal Tanpa LK21.md
+++ /dev/null
@@ -1,113 +0,0 @@
-
-
How to Download KKN di Desa Penari lk21: A Complete Guide
-
KKN di Desa Penari is a 2022 Indonesian horror movie that has become a huge hit among audiences and critics alike. It is based on a viral Twitter thread by SimpleMan that tells a true story of a group of college students who encountered supernatural events during their community service in a remote village. The movie is directed by Azhar Kinoi Lubis and stars Tissa Biani, Aghniny Haque, Calvin Jeremy, and Fero Walandouw. It has grossed over 100 billion rupiah at the Indonesian box office and received rave reviews from critics and viewers.
-
lk21 is a website that offers free streaming and downloading of movies from various countries and genres. It is one of the most popular illegal sites in Indonesia, along with IndoXXI, Ganool, and LayarKaca21. It has a large collection of movies, ranging from Hollywood blockbusters to Asian dramas, from action thrillers to romantic comedies, from old classics to new releases.
If you are a fan of KKN di Desa Penari and want to watch it again, or if you missed it in theaters and want to see what the hype is all about, you might be tempted to download it from lk21. However, before you do that, you should know the risks and the alternatives. In this article, we will show you how KKN di Desa Penari is downloaded from lk21, what the dangers of doing so are, and what better options exist for watching it legally and safely.
-
What is KKN di Desa Penari?
-
KKN di Desa Penari is a horror movie that is based on a true story that was shared by a Twitter user named SimpleMan in 2019. The story went viral and attracted millions of readers who were hooked by the suspenseful and terrifying narrative. The story tells the experience of a group of college students who went to a remote village in East Java for their community service program (KKN). There, they encountered strange and horrifying events involving the villagers, the local legend of a dancer who was cursed by a shaman, and the dark secrets of the village chief.
-
The movie adaptation of the story was released on April 29, 2022, after being delayed several times due to the COVID-19 pandemic. The movie was directed by Azhar Kinoi Lubis, who is known for his previous horror movies such as Rumah Kentang: The Beginning and Makmum. The movie stars Tissa Biani as Rini, the leader of the group; Aghniny Haque as Dini, Rini's best friend; Calvin Jeremy as Bima, Rini's boyfriend; Fero Walandouw as Anton, Bima's friend; Zidni Hakim as Joko, the village chief's son; Ria Ricis as Suci, Joko's sister; and Asri Welas as Mbah Uti, the village chief.
-
-The movie was a huge success at the box office, breaking several records in Indonesian cinema history. It became the fastest movie to reach 1 million viewers, doing so in just four days and surpassing Dilan 1991. It also became the highest-grossing Indonesian movie of 2022 so far, earning over 100 billion rupiah in less than three weeks. In addition, it received positive reviews from critics and audiences alike, who praised its story, direction, acting, cinematography, sound design, and scare factor.
-
Why is KKN di Desa Penari Popular?
-
There are several reasons why KKN di Desa Penari became such a popular movie among Indonesian viewers. Here are some of them:
-
-
It is based on a viral source material that already had a huge fan base. The Twitter thread by SimpleMan was one of the most talked-about topics on social media in 2019. Many people were curious about how the story would be adapted into a movie and how faithful it would be to the original.
-
It is a horror movie that appeals to a wide range of audiences. Horror is one of the most popular genres in Indonesian cinema, as many people enjoy being scared and thrilled by movies that feature ghosts, demons, curses, and other supernatural elements. KKN di Desa Penari delivers on that aspect by providing plenty of jump scares, creepy atmosphere, gore, and mystery.
-
It is a movie that reflects the cultural and social issues of Indonesia. The movie explores themes such as urban-rural divide, superstition vs rationality, corruption vs justice, tradition vs modernity, and loyalty vs betrayal. It also showcases the diversity of Indonesia's culture and folklore by featuring elements such as Javanese dance, wayang kulit (shadow puppetry), gamelan (traditional music), and dukun (shaman).
-
It is a movie that received positive word-of-mouth and reviews. The movie generated a lot of buzz and hype before and after its release. Many people who watched it recommended it to their friends and family, creating a snowball effect of popularity. The movie also received favorable reviews from critics and viewers, who gave it high ratings and compliments on various platforms such as IMDb, Rotten Tomatoes, and Letterboxd.
-
What is lk21?
-
lk21 is a website that allows users to stream and download movies for free. It is one of the most visited illegal sites in Indonesia, along with IndoXXI, Ganool, and LayarKaca21. It has a huge collection of movies from various countries and genres, such as Hollywood, Bollywood, Korean, Japanese, Chinese, Thai, Indonesian, action, comedy, drama, horror, romance, sci-fi, fantasy, animation, and more. It also updates its content regularly, adding new releases and old classics to its library.
-
lk21 works by providing links to various servers that host the movie files. Users can choose from different servers based on their preferences and availability. Some servers offer faster download speed or better streaming quality than others. Some servers also require users to register or complete a captcha before accessing the movie. Users can either download the movie file to their device or watch it online on their browser.
-
lk21 is not a legal or safe site to use. It violates the intellectual property rights of the filmmakers and distributors who own the movies. It also exposes users to various cyber risks such as malware, viruses, phishing, hacking, and identity theft. It is not recommended to use lk21 or any other illegal site to watch movies online or offline.
-
How to Download KKN di Desa Penari lk21?
-
Step 1: Find a link to the movie on lk21
-
The first step to download KKN di Desa Penari lk21 is to find a link to the movie on the site. To do this, you need to visit the official website of lk21 at https://lk21online.cc/. You can also use a proxy or VPN service to access the site if it is blocked by your internet provider or government.
-
Once you are on the site, you can search for the movie by typing its title in the search box or browsing through the categories and genres. You can also filter the results by year, country, quality, or subtitle. You should look for the movie that has the highest quality (such as HD or BluRay) and subtitle (such as English or Indonesian) available.
-
You should be careful when searching for the movie on lk21, as there might be fake or misleading links that lead you to other sites or ads. You should avoid clicking on any pop-ups or banners that appear on the site. You should also check the comments and ratings of other users who have watched or downloaded the movie before you.
-
Step 2: Click on the link and choose a server
-
The second step to download KKN di Desa Penari lk21 is to click on the link and choose a server. Once you have found a link to the movie that matches your preferences, you can click on it and you will be directed to another page that shows more details about the movie, such as its synopsis, cast, director, genre, duration, and rating.
-
On this page, you will also see a list of servers that host the movie file. You can choose from different servers based on their speed, quality, and availability. Some servers might be faster or have better quality than others. Some servers might also be offline or unavailable at certain times.
-
You should be careful when choosing a server, as some servers might contain malware, viruses, phishing, hacking, or identity theft risks. You should avoid clicking on any suspicious links or ads that appear on the server page. You should also use an antivirus software or firewall to protect your device from any potential threats.
-
Step 3: Download the movie file or stream it online
-
The third step to download KKN di Desa Penari lk21 is to download the movie file or stream it online. Once you have chosen a server that works for you, you can either download the movie file to your device or watch it online on your browser.
-
To download the movie file, you need to click on the download button that appears on the server page. You will then be asked to choose a location where you want to save the file on your device. You can also rename the file if you want. The download process will start automatically and it will take some time depending on your internet speed and file size.
-
To stream the movie online, you need to click on the play button that appears on the server page. You will then be able to watch the movie on your browser as if you were watching a YouTube video. You can also adjust the volume, brightness, resolution, and subtitle settings according to your preferences. The streaming process will also depend on your internet speed and server quality.
-
What are the Risks of Downloading KKN di Desa Penari lk21?
-
Downloading KKN di Desa Penari lk21 might seem like a convenient and cheap way to watch the movie, but it also comes with many risks that you should be aware of. Here are some of the risks that you might face if you download the movie from an illegal site:
-
Legal Risks
-
Downloading movies from illegal sites such as lk21 violates the intellectual property rights of the filmmakers and distributors who own the movies. By doing so, you are committing a crime that can result in legal consequences such as fines or lawsuits. In Indonesia, the law on intellectual property rights states that anyone who infringes the rights of the creators or owners of a work can be punished by imprisonment for up to four years or a fine of up to one billion rupiah. Therefore, you should respect the rights of the filmmakers and distributors who worked hard to make the movie and not download it illegally.
-
Cyber Risks
-
-Downloading movies from illegal sites such as lk21 exposes you to various cyber threats such as malware, viruses, phishing, hacking, and identity theft. These threats can harm your device, data, and privacy in various ways. For example, malware or viruses can infect your device and damage its performance or functionality. Phishing or hacking can steal your personal or financial information and use it for fraudulent purposes. An identity thief can impersonate you and cause problems for you or others. Therefore, you should protect your device, data, and privacy from these threats and not download movies from unsafe sites.
-
Ethical Risks
-
-Downloading movies from illegal sites such as lk21 harms the filmmakers and distributors who invested time and money in making the movie. By doing so, you are depriving them of their rightful income and recognition. This can affect their ability to produce more quality movies in the future and their motivation to continue their work. It can also affect the development of the Indonesian film industry and culture, which rely on the support and appreciation of viewers. Therefore, you should support the filmmakers and distributors who made the movie and not download it illegally.
-
What are the Alternatives to Downloading KKN di Desa Penari lk21?
-
If you want to watch KKN di Desa Penari legally and safely, there are better alternatives than downloading it from lk21 or any other illegal site. Here are some of them:
-
Watch it in Theaters
-
-The best way to watch KKN di Desa Penari is to watch it in theaters. This is the only legal and safe way to watch the movie until May 31, 2022, when its theatrical run ends. Watching it in theaters gives you the best experience: the big screen, surround sound, and the atmosphere of a shared audience. You will also be able to appreciate the movie's cinematography, sound design, and scare factor far better than on a small screen with low quality and no subtitles. On top of that, by buying a ticket you support the filmmakers and distributors who made the movie and contribute to their box office revenue.
-
To watch KKN di Desa Penari in theaters, you need to find a cinema near you that is showing the movie. You can use online platforms such as BookMyShow, CGV Cinemas, or Cinema 21 to check the showtimes, locations, prices, and availability of tickets. You can also book your tickets online or buy them at the cinema counter. You should follow the health protocols and safety measures that are implemented by the cinema operators and authorities to prevent the spread of COVID-19.
-
Wait for the Official Release on Streaming Services or Digital Platforms
-
The second best way to watch KKN di Desa Penari is to wait for its official release on streaming services or digital platforms. This is another legal and safe way to watch the movie after it finishes its theatrical run. Watching it on streaming services or digital platforms will give you the convenience of watching the movie anytime and anywhere you want. You will also be able to watch it legally and safely with high quality and subtitles. You will also be supporting the filmmakers and distributors who made the movie by paying for their service or product. To watch KKN di Desa Penari on streaming services or digital platforms, you need to wait for its official release date, which has not been announced yet. However, based on the previous movies by the same director and distributor, it is likely that the movie will be available on Disney Plus Hotstar, Netflix, or Google Play Movies within a few months after its theatrical release. You can check their websites or apps for updates and announcements regarding the movie's availability. You can also subscribe to their service or buy their product to access the movie once it is released.
Conclusion
-
KKN di Desa Penari is a 2022 Indonesian horror movie that is based on a viral Twitter thread by SimpleMan. It is a movie that has become a huge hit among audiences and critics alike, thanks to its story, direction, acting, cinematography, sound design, and scare factor. It is a movie that reflects the cultural and social issues of Indonesia, such as urban-rural divide, superstition vs rationality, corruption vs justice, tradition vs modernity, and loyalty vs betrayal.
-
However, downloading KKN di Desa Penari lk21 is not a good idea, as it comes with many risks and disadvantages. Downloading the movie from an illegal site violates the intellectual property rights of the filmmakers and distributors who own the movie. It also exposes you to various cyber threats such as malware, viruses, phishing, hacking, and identity theft. It also harms the filmmakers and distributors who invested time and money in making the movie.
-
-Therefore, you should avoid downloading KKN di Desa Penari from lk21 or any other illegal site and choose one of the alternatives instead. The best way to watch the movie is to see it in theaters until May 31, 2022. The second best way is to wait for its official release on streaming services or digital platforms such as Disney Plus Hotstar, Netflix, or Google Play Movies. These are the legal and safe ways to watch the movie that will give you the best experience and support the filmmakers and distributors who made it.
-
-We hope this article has helped you understand how KKN di Desa Penari is downloaded from lk21, what the risks of doing so are, and what the alternatives are. If you have any questions or comments, please feel free to leave them below. Thank you for reading and happy watching!
-
FAQs
-
Here are some frequently asked questions related to the topic of this article:
-
-
Q: What is KKN di Desa Penari about?
-
A: KKN di Desa Penari is a horror movie that tells the true story of a group of college students who encountered supernatural events during their community service in a remote village in East Java.
-
Q: Who are the cast and director of KKN di Desa Penari?
-
A: The cast of KKN di Desa Penari includes Tissa Biani, Aghniny Haque, Calvin Jeremy, Fero Walandouw, Zidni Hakim, Ria Ricis, and Asri Welas. The director of KKN di Desa Penari is Azhar Kinoi Lubis.
-
Q: How much did KKN di Desa Penari make at the box office?
-
A: KKN di Desa Penari grossed over 100 billion rupiah at the Indonesian box office and became the highest-grossing Indonesian movie of 2022 so far.
-
Q: What are some of the reviews of KKN di Desa Penari?
-
A: KKN di Desa Penari received positive reviews from critics and audiences alike, who praised its story, direction, acting, cinematography, sound design, and scare factor. It has a rating of 8.1 out of 10 on IMDb, 92% on Rotten Tomatoes, and 4.1 out of 5 on Letterboxd.
-
Q: When will KKN di Desa Penari be available on streaming services or digital platforms?
-
A: The official release date of KKN di Desa Penari on streaming services or digital platforms has not been announced yet. However, it is likely that the movie will be available on Disney Plus Hotstar, Netflix, or Google Play Movies within a few months after its theatrical release, based on the previous movies by the same director and distributor.
-
-Links: Disney Plus Hotstar: https://www.hotstar.com/id | Netflix: https://www.netflix.com/id-en/ | Google Play Movies: https://play.google.com/store/movies
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Download Slither.io for iPad and Join the Online Slithering Community.md b/spaces/fatiXbelha/sd/Download Slither.io for iPad and Join the Online Slithering Community.md
deleted file mode 100644
index 03b491f55ecff747d6198a02663862e6c7de00db..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Download Slither.io for iPad and Join the Online Slithering Community.md
+++ /dev/null
@@ -1,88 +0,0 @@
-
-
How to Download Slither.io on iPad
-
Do you love playing snake games? Do you want to challenge yourself and other players online? Do you want to have fun and kill some time on your iPad? If you answered yes to any of these questions, then you should try Slither.io, a popular .io snake game that you can download and play on your iPad. In this article, we will show you what Slither.io is, how to play it, why you should play it, how to download it on your iPad, and some tips and tricks to help you become the longest slither of the day. Let's get started!
-
What is Slither.io?
-
Slither.io is a multiplayer online snake game that was released in 2016 by Lowtech Studios. It is one of the most popular .io games, with millions of players around the world. The game is inspired by the classic snake game, but with some twists and features that make it more fun and addictive.
.io games are a genre of online games that have simple rules, minimal graphics, and multiplayer gameplay. They are usually browser-based or mobile-based, and they can be played by anyone with an internet connection. Some of the most famous .io games are Agar.io, Diep.io, Paper.io, and of course, Slither.io.
-
How to play Slither.io?
-
The goal of Slither.io is to grow your snake by eating multi-colored orbs that are scattered around the map. You can also eat the remains of other snakes that have exploded. The longer you grow, the higher you rank on the leaderboard. However, you have to be careful not to touch other snakes with your head, or else you will explode and lose all your length. You can also use a boost button to speed up your snake, but this will consume some of your length as well.
-
Why play Slither.io?
-
Slither.io is a game that is easy to learn but hard to master. It is a game that can test your skills, reflexes, and strategy. It is a game that can provide you with hours of entertainment and excitement. It is a game that can make you feel proud when you reach the top of the leaderboard, or frustrated when you lose everything in a split second. It is a game that can make you addicted and keep you coming back for more.
-
How to Download Slither.io on iPad?
-
If you want to play Slither.io on your iPad, you will need to download it from the App Store. Here are the steps you need to follow:
-
Step 1: Go to the App Store
-
On your iPad, tap on the App Store icon to open it. You will see a screen with various categories and recommendations of apps and games.
-
Step 2: Search for Slither.io
-
On the bottom right corner of the screen, tap on the magnifying glass icon to open the search bar. Type in "Slither.io" and hit enter. You will see a list of results related to your search query.
-
Step 3: Tap on the Get button
-
At the top of the list, you will see the official Slither.io app by Lowtech Studios LLC. Tap on it to open its details page. You will see some screenshots, ratings, reviews, and information about the app.
-
Step 4: Wait for the download to finish
-
On the details page, you will see a blue button that says "Get". Tap on it to start the download process. You may need to enter your Apple ID and password, or use Touch ID or Face ID, to confirm the download. You will see a progress circle on the app icon that indicates how much of the download is completed.
-
Step 5: Enjoy the game!
-
Once the download is finished, you will see an "Open" button on the details page. Tap on it to launch the game. Alternatively, you can also tap on the app icon on your home screen to open it. You will see a screen with the Slither.io logo and a "Play" button. Tap on it to start playing the game. You can also customize your snake's appearance, change the game mode, and access the settings from this screen.
-
Tips and Tricks for Slither.io on iPad
-
Now that you have downloaded and installed Slither.io on your iPad, you may want to know some tips and tricks to improve your gameplay and have more fun. Here are some of them:
-
Use the boost button wisely
-
On the bottom right corner of the screen, you will see a circular button with a lightning bolt icon. This is the boost button, which allows you to speed up your snake for a short time. However, using this button will also reduce your length, so you have to use it wisely. You can use it to escape from danger, chase after prey, or cut off other snakes. But be careful not to waste it or run into other snakes while boosting.
-
Avoid the edges of the map
-
The map of Slither.io is bounded by four walls that are invisible until you get close to them. If you touch any of these walls with your head, you will explode and die. Therefore, it is best to avoid the edges of the map and stay in the center where there are more orbs and opportunities. However, if you are very long and confident, you can also use the edges as a strategy to trap other snakes or protect yourself from being circled.
-
Circle smaller snakes to trap them
-
One of the most satisfying moves in Slither.io is to circle around a smaller snake and trap them inside your coil. This way, you can force them to touch your body and explode, and then eat their remains. However, this move also requires some skill and patience, as you have to maintain your circle without leaving any gaps or touching other snakes. You also have to watch out for other snakes that may try to interfere or steal your kill.
-
Follow bigger snakes to eat their remains
-
Another way to grow your snake quickly is to follow bigger snakes and wait for them to explode. This can happen when they collide with other snakes, hit the walls, or make a mistake. When this happens, you can swoop in and eat their remains before anyone else does. However, this strategy also has some risks, as you may encounter other hungry snakes that are doing the same thing, or become a target for bigger snakes that want to eat you.
-
Conclusion
-
Slither.io is a fun and addictive snake game that you can play on your iPad. It is easy to download and install from the App Store, and it offers simple but challenging gameplay that can keep you entertained for hours. You can also customize your snake's appearance, change the game mode, and compete with other players online. If you want to improve your skills and have more fun, you can also follow some of the tips and tricks that we have shared in this article. We hope you enjoyed reading this article and learned something new. Now go ahead and download Slither.io on your iPad and start slithering!
-
FAQs
-
Here are some frequently asked questions about Slither.io on iPad:
-
Q: How do I change my snake's appearance?
-
A: On the main screen of the game, tap on the "Change Skin" button on the bottom left corner. You will see a screen with various skins that you can choose from. You can also unlock more skins by sharing the game on social media or watching ads.
-
Q: How do I change the game mode?
-
A: On the main screen of the game, tap on the "Play Online" button on the top right corner. You will see a screen with three game modes: Classic, Low Quality (for slower devices), and AI (offline mode). Tap on the one you want to play.
-
Q: How do I access the settings?
A: On the main screen of the game, tap on the gear icon on the top left corner. You will see a screen with various settings that you can adjust, such as sound, music, joystick, and graphics.
-
Q: How do I chat with other players?
-
A: On the main screen of the game, tap on the chat icon on the bottom right corner. You will see a screen with a chat box where you can type and send messages to other players. You can also use emojis and stickers to express yourself.
-
Q: How do I share my score and rank?
-
A: When you die in the game, you will see a screen with your score and rank. You can tap on the share button on the bottom of the screen to share your results on social media or other platforms. You can also take a screenshot of your snake and share it with your friends.
-
-
\ No newline at end of file
diff --git a/spaces/fb700/chatglm-fitness-RLHF/request_llm/edge_gpt_free.py b/spaces/fb700/chatglm-fitness-RLHF/request_llm/edge_gpt_free.py
deleted file mode 100644
index ef6187379c470b0f325d50d7642cfc95b933f1ef..0000000000000000000000000000000000000000
--- a/spaces/fb700/chatglm-fitness-RLHF/request_llm/edge_gpt_free.py
+++ /dev/null
@@ -1,1112 +0,0 @@
-"""
-========================================================================
-Part 1: from EdgeGPT.py
-https://github.com/acheong08/EdgeGPT
-========================================================================
-"""
-"""
-Main.py
-"""
-
-import argparse
-import asyncio
-import json
-import os
-import random
-import re
-import ssl
-import sys
-import time
-import uuid
-from enum import Enum
-from pathlib import Path
-from typing import Generator
-from typing import Literal
-from typing import Optional
-from typing import Union
-
-import aiohttp
-import certifi
-import httpx
-from prompt_toolkit import PromptSession
-from prompt_toolkit.auto_suggest import AutoSuggestFromHistory
-from prompt_toolkit.completion import WordCompleter
-from prompt_toolkit.history import InMemoryHistory
-from prompt_toolkit.key_binding import KeyBindings
-from rich.live import Live
-from rich.markdown import Markdown
-
-DELIMITER = "\x1e"
-
-
-# Generate random IP between range 13.104.0.0/14
-FORWARDED_IP = (
- f"13.{random.randint(104, 107)}.{random.randint(0, 255)}.{random.randint(0, 255)}"
-)
-
-HEADERS = {
- "accept": "application/json",
- "accept-language": "en-US,en;q=0.9",
- "content-type": "application/json",
- "sec-ch-ua": '"Not_A Brand";v="99", "Microsoft Edge";v="110", "Chromium";v="110"',
- "sec-ch-ua-arch": '"x86"',
- "sec-ch-ua-bitness": '"64"',
- "sec-ch-ua-full-version": '"109.0.1518.78"',
- "sec-ch-ua-full-version-list": '"Chromium";v="110.0.5481.192", "Not A(Brand";v="24.0.0.0", "Microsoft Edge";v="110.0.1587.69"',
- "sec-ch-ua-mobile": "?0",
- "sec-ch-ua-model": "",
- "sec-ch-ua-platform": '"Windows"',
- "sec-ch-ua-platform-version": '"15.0.0"',
- "sec-fetch-dest": "empty",
- "sec-fetch-mode": "cors",
- "sec-fetch-site": "same-origin",
- "x-ms-client-request-id": str(uuid.uuid4()),
- "x-ms-useragent": "azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32",
- "Referer": "https://www.bing.com/search?q=Bing+AI&showconv=1&FORM=hpcodx",
- "Referrer-Policy": "origin-when-cross-origin",
- "x-forwarded-for": FORWARDED_IP,
-}
-
-HEADERS_INIT_CONVER = {
- "authority": "edgeservices.bing.com",
- "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7",
- "accept-language": "en-US,en;q=0.9",
- "cache-control": "max-age=0",
- "sec-ch-ua": '"Chromium";v="110", "Not A(Brand";v="24", "Microsoft Edge";v="110"',
- "sec-ch-ua-arch": '"x86"',
- "sec-ch-ua-bitness": '"64"',
- "sec-ch-ua-full-version": '"110.0.1587.69"',
- "sec-ch-ua-full-version-list": '"Chromium";v="110.0.5481.192", "Not A(Brand";v="24.0.0.0", "Microsoft Edge";v="110.0.1587.69"',
- "sec-ch-ua-mobile": "?0",
- "sec-ch-ua-model": '""',
- "sec-ch-ua-platform": '"Windows"',
- "sec-ch-ua-platform-version": '"15.0.0"',
- "sec-fetch-dest": "document",
- "sec-fetch-mode": "navigate",
- "sec-fetch-site": "none",
- "sec-fetch-user": "?1",
- "upgrade-insecure-requests": "1",
- "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36 Edg/110.0.1587.69",
- "x-edge-shopping-flag": "1",
- "x-forwarded-for": FORWARDED_IP,
-}
-
-ssl_context = ssl.create_default_context()
-ssl_context.load_verify_locations(certifi.where())
-
-
-class NotAllowedToAccess(Exception):
- pass
-
-
-class ConversationStyle(Enum):
- creative = [
- "nlu_direct_response_filter",
- "deepleo",
- "disable_emoji_spoken_text",
- "responsible_ai_policy_235",
- "enablemm",
- "h3imaginative",
- "travelansgnd",
- "dv3sugg",
- "clgalileo",
- "gencontentv3",
- "dv3sugg",
- "responseos",
- "e2ecachewrite",
- "cachewriteext",
- "nodlcpcwrite",
- "travelansgnd",
- "nojbfedge",
- ]
- balanced = [
- "nlu_direct_response_filter",
- "deepleo",
- "disable_emoji_spoken_text",
- "responsible_ai_policy_235",
- "enablemm",
- "galileo",
- "dv3sugg",
- "responseos",
- "e2ecachewrite",
- "cachewriteext",
- "nodlcpcwrite",
- "travelansgnd",
- "nojbfedge",
- ]
- precise = [
- "nlu_direct_response_filter",
- "deepleo",
- "disable_emoji_spoken_text",
- "responsible_ai_policy_235",
- "enablemm",
- "galileo",
- "dv3sugg",
- "responseos",
- "e2ecachewrite",
- "cachewriteext",
- "nodlcpcwrite",
- "travelansgnd",
- "h3precise",
- "clgalileo",
- "nojbfedge",
- ]
-
-
-CONVERSATION_STYLE_TYPE = Optional[
- Union[ConversationStyle, Literal["creative", "balanced", "precise"]]
-]
-
-
-def _append_identifier(msg: dict) -> str:
- """
- Appends special character to end of message to identify end of message
- """
- # Convert dict to json string
- return json.dumps(msg, ensure_ascii=False) + DELIMITER
-
-
-def _get_ran_hex(length: int = 32) -> str:
- """
- Returns random hex string
- """
- return "".join(random.choice("0123456789abcdef") for _ in range(length))
-
-
-class _ChatHubRequest:
- """
- Request object for ChatHub
- """
-
- def __init__(
- self,
- conversation_signature: str,
- client_id: str,
- conversation_id: str,
- invocation_id: int = 0,
- ) -> None:
- self.struct: dict = {}
-
- self.client_id: str = client_id
- self.conversation_id: str = conversation_id
- self.conversation_signature: str = conversation_signature
- self.invocation_id: int = invocation_id
-
- def update(
- self,
- prompt: str,
- conversation_style: CONVERSATION_STYLE_TYPE,
- options = None,
- webpage_context = None,
- search_result = False,
- ) -> None:
- """
- Updates request object
- """
- if options is None:
- options = [
- "deepleo",
- "enable_debug_commands",
- "disable_emoji_spoken_text",
- "enablemm",
- ]
- if conversation_style:
- if not isinstance(conversation_style, ConversationStyle):
- conversation_style = getattr(ConversationStyle, conversation_style)
- options = conversation_style.value
- self.struct = {
- "arguments": [
- {
- "source": "cib",
- "optionsSets": options,
- "allowedMessageTypes": [
- "Chat",
- "Disengaged",
- "AdsQuery",
- "SemanticSerp",
- "GenerateContentQuery",
- "SearchQuery",
- ],
- "sliceIds": [
- "chk1cf",
- "nopreloadsscf",
- "winlongmsg2tf",
- "perfimpcomb",
- "sugdivdis",
- "sydnoinputt",
- "wpcssopt",
- "wintone2tf",
- "0404sydicnbs0",
- "405suggbs0",
- "scctl",
- "330uaugs0",
- "0329resp",
- "udscahrfon",
- "udstrblm5",
- "404e2ewrt",
- "408nodedups0",
- "403tvlansgnd",
- ],
- "traceId": _get_ran_hex(32),
- "isStartOfSession": self.invocation_id == 0,
- "message": {
- "author": "user",
- "inputMethod": "Keyboard",
- "text": prompt,
- "messageType": "Chat",
- },
- "conversationSignature": self.conversation_signature,
- "participant": {
- "id": self.client_id,
- },
- "conversationId": self.conversation_id,
- },
- ],
- "invocationId": str(self.invocation_id),
- "target": "chat",
- "type": 4,
- }
- if search_result:
- have_search_result = [
- "InternalSearchQuery",
- "InternalSearchResult",
- "InternalLoaderMessage",
- "RenderCardRequest",
- ]
- self.struct["arguments"][0]["allowedMessageTypes"] += have_search_result
- if webpage_context:
- self.struct["arguments"][0]["previousMessages"] = [
- {
- "author": "user",
- "description": webpage_context,
- "contextType": "WebPage",
- "messageType": "Context",
- "messageId": "discover-web--page-ping-mriduna-----",
- },
- ]
- self.invocation_id += 1
-
-
-class _Conversation:
- """
- Conversation API
- """
-
- def __init__(
- self,
- proxy = None,
- async_mode = False,
- cookies = None,
- ) -> None:
- if async_mode:
- return
- self.struct: dict = {
- "conversationId": None,
- "clientId": None,
- "conversationSignature": None,
- "result": {"value": "Success", "message": None},
- }
- self.proxy = proxy
- proxy = (
- proxy
- or os.environ.get("all_proxy")
- or os.environ.get("ALL_PROXY")
- or os.environ.get("https_proxy")
- or os.environ.get("HTTPS_PROXY")
- or None
- )
- if proxy is not None and proxy.startswith("socks5h://"):
- proxy = "socks5://" + proxy[len("socks5h://") :]
- self.session = httpx.Client(
- proxies=proxy,
- timeout=30,
- headers=HEADERS_INIT_CONVER,
- )
- if cookies:
- for cookie in cookies:
- self.session.cookies.set(cookie["name"], cookie["value"])
- # Send GET request
- response = self.session.get(
- url=os.environ.get("BING_PROXY_URL")
- or "https://edgeservices.bing.com/edgesvc/turing/conversation/create",
- )
- if response.status_code != 200:
- response = self.session.get(
- "https://edge.churchless.tech/edgesvc/turing/conversation/create",
- )
- if response.status_code != 200:
- print(f"Status code: {response.status_code}")
- print(response.text)
- print(response.url)
- raise Exception("Authentication failed")
- try:
- self.struct = response.json()
- except (json.decoder.JSONDecodeError, NotAllowedToAccess) as exc:
- raise Exception(
- "Authentication failed. You have not been accepted into the beta.",
- ) from exc
- if self.struct["result"]["value"] == "UnauthorizedRequest":
- raise NotAllowedToAccess(self.struct["result"]["message"])
-
- @staticmethod
- async def create(
- proxy = None,
- cookies = None,
- ):
- self = _Conversation(async_mode=True)
- self.struct = {
- "conversationId": None,
- "clientId": None,
- "conversationSignature": None,
- "result": {"value": "Success", "message": None},
- }
- self.proxy = proxy
- proxy = (
- proxy
- or os.environ.get("all_proxy")
- or os.environ.get("ALL_PROXY")
- or os.environ.get("https_proxy")
- or os.environ.get("HTTPS_PROXY")
- or None
- )
- if proxy is not None and proxy.startswith("socks5h://"):
- proxy = "socks5://" + proxy[len("socks5h://") :]
- transport = httpx.AsyncHTTPTransport(retries=10)
- # Convert cookie format to httpx format
- formatted_cookies = None
- if cookies:
- formatted_cookies = httpx.Cookies()
- for cookie in cookies:
- formatted_cookies.set(cookie["name"], cookie["value"])
- async with httpx.AsyncClient(
- proxies=proxy,
- timeout=30,
- headers=HEADERS_INIT_CONVER,
- transport=transport,
- cookies=formatted_cookies,
- ) as client:
- # Send GET request
- response = await client.get(
- url=os.environ.get("BING_PROXY_URL")
- or "https://edgeservices.bing.com/edgesvc/turing/conversation/create",
- )
- if response.status_code != 200:
- response = await client.get(
- "https://edge.churchless.tech/edgesvc/turing/conversation/create",
- )
- if response.status_code != 200:
- print(f"Status code: {response.status_code}")
- print(response.text)
- print(response.url)
- raise Exception("Authentication failed")
- try:
- self.struct = response.json()
- except (json.decoder.JSONDecodeError, NotAllowedToAccess) as exc:
- raise Exception(
- "Authentication failed. You have not been accepted into the beta.",
- ) from exc
- if self.struct["result"]["value"] == "UnauthorizedRequest":
- raise NotAllowedToAccess(self.struct["result"]["message"])
- return self
-
-
-class _ChatHub:
- """
- Chat API
- """
-
- def __init__(
- self,
- conversation: _Conversation,
- proxy = None,
- cookies = None,
- ) -> None:
- self.session = None
- self.wss = None
- self.request: _ChatHubRequest
- self.loop: bool
- self.task: asyncio.Task
- self.request = _ChatHubRequest(
- conversation_signature=conversation.struct["conversationSignature"],
- client_id=conversation.struct["clientId"],
- conversation_id=conversation.struct["conversationId"],
- )
- self.cookies = cookies
- self.proxy: str = proxy
-
- async def ask_stream(
- self,
- prompt: str,
- wss_link: str,
- conversation_style: CONVERSATION_STYLE_TYPE = None,
- raw: bool = False,
- options: dict = None,
- webpage_context = None,
- search_result: bool = False,
- ) -> Generator[str, None, None]:
- """
- Ask a question to the bot
- """
- timeout = aiohttp.ClientTimeout(total=30)
- self.session = aiohttp.ClientSession(timeout=timeout)
-
- if self.wss and not self.wss.closed:
- await self.wss.close()
- # Check if websocket is closed
- self.wss = await self.session.ws_connect(
- wss_link,
- headers=HEADERS,
- ssl=ssl_context,
- proxy=self.proxy,
- autoping=False,
- )
- await self._initial_handshake()
- if self.request.invocation_id == 0:
- # Construct a ChatHub request
- self.request.update(
- prompt=prompt,
- conversation_style=conversation_style,
- options=options,
- webpage_context=webpage_context,
- search_result=search_result,
- )
- else:
- async with httpx.AsyncClient() as client:
- response = await client.post(
- "https://sydney.bing.com/sydney/UpdateConversation/",
- json={
- "messages": [
- {
- "author": "user",
- "description": webpage_context,
- "contextType": "WebPage",
- "messageType": "Context",
- },
- ],
- "conversationId": self.request.conversation_id,
- "source": "cib",
- "traceId": _get_ran_hex(32),
- "participant": {"id": self.request.client_id},
- "conversationSignature": self.request.conversation_signature,
- },
- )
- if response.status_code != 200:
- print(f"Status code: {response.status_code}")
- print(response.text)
- print(response.url)
- raise Exception("Update web page context failed")
- # Construct a ChatHub request
- self.request.update(
- prompt=prompt,
- conversation_style=conversation_style,
- options=options,
- )
- # Send request
- await self.wss.send_str(_append_identifier(self.request.struct))
- final = False
- draw = False
- resp_txt = ""
- result_text = ""
- resp_txt_no_link = ""
- while not final:
- msg = await self.wss.receive()
- objects = msg.data.split(DELIMITER)
- for obj in objects:
- if obj is None or not obj:
- continue
- response = json.loads(obj)
- if response.get("type") != 2 and raw:
- yield False, response
- elif response.get("type") == 1 and response["arguments"][0].get(
- "messages",
- ):
- if not draw:
- if (
- response["arguments"][0]["messages"][0].get("messageType")
- == "GenerateContentQuery"
- ):
- async with ImageGenAsync("", True) as image_generator:
- images = await image_generator.get_images(
- response["arguments"][0]["messages"][0]["text"],
- )
- for i, image in enumerate(images):
- resp_txt = resp_txt + f"\n"
- draw = True
- if (
- response["arguments"][0]["messages"][0]["contentOrigin"]
- != "Apology"
- ) and not draw:
- resp_txt = result_text + response["arguments"][0][
- "messages"
- ][0]["adaptiveCards"][0]["body"][0].get("text", "")
- resp_txt_no_link = result_text + response["arguments"][0][
- "messages"
- ][0].get("text", "")
- if response["arguments"][0]["messages"][0].get(
- "messageType",
- ):
- resp_txt = (
- resp_txt
- + response["arguments"][0]["messages"][0][
- "adaptiveCards"
- ][0]["body"][0]["inlines"][0].get("text")
- + "\n"
- )
- result_text = (
- result_text
- + response["arguments"][0]["messages"][0][
- "adaptiveCards"
- ][0]["body"][0]["inlines"][0].get("text")
- + "\n"
- )
- yield False, resp_txt
-
- elif response.get("type") == 2:
- if response["item"]["result"].get("error"):
- await self.close()
- raise Exception(
- f"{response['item']['result']['value']}: {response['item']['result']['message']}",
- )
- if draw:
- cache = response["item"]["messages"][1]["adaptiveCards"][0][
- "body"
- ][0]["text"]
- response["item"]["messages"][1]["adaptiveCards"][0]["body"][0][
- "text"
- ] = (cache + resp_txt)
- if (
- response["item"]["messages"][-1]["contentOrigin"] == "Apology"
- and resp_txt
- ):
- response["item"]["messages"][-1]["text"] = resp_txt_no_link
- response["item"]["messages"][-1]["adaptiveCards"][0]["body"][0][
- "text"
- ] = resp_txt
- print(
- "Preserved the message from being deleted",
- file=sys.stderr,
- )
- final = True
- await self.close()
- yield True, response
-
- async def _initial_handshake(self) -> None:
- await self.wss.send_str(_append_identifier({"protocol": "json", "version": 1}))
- await self.wss.receive()
-
- async def close(self) -> None:
- """
- Close the connection
- """
- if self.wss and not self.wss.closed:
- await self.wss.close()
- if self.session and not self.session.closed:
- await self.session.close()
-
-
-class Chatbot:
- """
- Combines everything to make it seamless
- """
-
- def __init__(
- self,
- proxy = None,
- cookies = None,
- ) -> None:
- self.proxy = proxy
- self.chat_hub: _ChatHub = _ChatHub(
- _Conversation(self.proxy, cookies=cookies),
- proxy=self.proxy,
- cookies=cookies,
- )
-
- @staticmethod
- async def create(
- proxy = None,
- cookies = None,
- ):
- self = Chatbot.__new__(Chatbot)
- self.proxy = proxy
- self.chat_hub = _ChatHub(
- await _Conversation.create(self.proxy, cookies=cookies),
- proxy=self.proxy,
- cookies=cookies,
- )
- return self
-
- async def ask(
- self,
- prompt: str,
- wss_link: str = "wss://sydney.bing.com/sydney/ChatHub",
- conversation_style: CONVERSATION_STYLE_TYPE = None,
- options: dict = None,
- webpage_context = None,
- search_result: bool = False,
- ) -> dict:
- """
- Ask a question to the bot
- """
- async for final, response in self.chat_hub.ask_stream(
- prompt=prompt,
- conversation_style=conversation_style,
- wss_link=wss_link,
- options=options,
- webpage_context=webpage_context,
- search_result=search_result,
- ):
- if final:
- return response
- await self.chat_hub.wss.close()
- return {}
-
- async def ask_stream(
- self,
- prompt: str,
- wss_link: str = "wss://sydney.bing.com/sydney/ChatHub",
- conversation_style: CONVERSATION_STYLE_TYPE = None,
- raw: bool = False,
- options: dict = None,
- webpage_context = None,
- search_result: bool = False,
- ) -> Generator[str, None, None]:
- """
- Ask a question to the bot
- """
- async for response in self.chat_hub.ask_stream(
- prompt=prompt,
- conversation_style=conversation_style,
- wss_link=wss_link,
- raw=raw,
- options=options,
- webpage_context=webpage_context,
- search_result=search_result,
- ):
- yield response
-
- async def close(self) -> None:
- """
- Close the connection
- """
- await self.chat_hub.close()
-
- async def reset(self) -> None:
- """
- Reset the conversation
- """
- await self.close()
- self.chat_hub = _ChatHub(
- await _Conversation.create(self.proxy),
- proxy=self.proxy,
- cookies=self.chat_hub.cookies,
- )
-
-
-async def _get_input_async(
- session: PromptSession = None,
- completer: WordCompleter = None,
-) -> str:
- """
- Multiline input function.
- """
- return await session.prompt_async(
- completer=completer,
- multiline=True,
- auto_suggest=AutoSuggestFromHistory(),
- )
-
-
-def _create_session() -> PromptSession:
- kb = KeyBindings()
-
- @kb.add("enter")
- def _(event):
- buffer_text = event.current_buffer.text
- if buffer_text.startswith("!"):
- event.current_buffer.validate_and_handle()
- else:
- event.current_buffer.insert_text("\n")
-
- @kb.add("escape")
- def _(event):
- if event.current_buffer.complete_state:
- # event.current_buffer.cancel_completion()
- event.current_buffer.text = ""
-
- return PromptSession(key_bindings=kb, history=InMemoryHistory())
-
-
-def _create_completer(commands: list, pattern_str: str = "$"):
- return WordCompleter(words=commands, pattern=re.compile(pattern_str))
-
-
-async def async_main(args: argparse.Namespace) -> None:
- """
- Main function
- """
- print("Initializing...")
- print("Enter `alt+enter` or `escape+enter` to send a message")
- # Read and parse cookies
- cookies = None
- if args.cookie_file:
- cookies = json.loads(open(args.cookie_file, encoding="utf-8").read())
- bot = await Chatbot.create(proxy=args.proxy, cookies=cookies)
- session = _create_session()
- completer = _create_completer(["!help", "!exit", "!reset"])
- initial_prompt = args.prompt
-
- while True:
- print("\nYou:")
- if initial_prompt:
- question = initial_prompt
- print(question)
- initial_prompt = None
- else:
- question = (
- input()
- if args.enter_once
- else await _get_input_async(session=session, completer=completer)
- )
- print()
- if question == "!exit":
- break
- if question == "!help":
- print(
- """
- !help - Show this help message
- !exit - Exit the program
- !reset - Reset the conversation
- """,
- )
- continue
- if question == "!reset":
- await bot.reset()
- continue
- print("Bot:")
- if args.no_stream:
- print(
- (
- await bot.ask(
- prompt=question,
- conversation_style=args.style,
- wss_link=args.wss_link,
- )
- )["item"]["messages"][1]["adaptiveCards"][0]["body"][0]["text"],
- )
- else:
- wrote = 0
- if args.rich:
- md = Markdown("")
- with Live(md, auto_refresh=False) as live:
- async for final, response in bot.ask_stream(
- prompt=question,
- conversation_style=args.style,
- wss_link=args.wss_link,
- ):
- if not final:
- if wrote > len(response):
- print(md)
- print(Markdown("***Bing revoked the response.***"))
- wrote = len(response)
- md = Markdown(response)
- live.update(md, refresh=True)
- else:
- async for final, response in bot.ask_stream(
- prompt=question,
- conversation_style=args.style,
- wss_link=args.wss_link,
- ):
- if not final:
- if not wrote:
- print(response, end="", flush=True)
- else:
- print(response[wrote:], end="", flush=True)
- wrote = len(response)
- print()
- await bot.close()
-
-
-def main() -> None:
- print(
- """
- EdgeGPT - A demo of reverse engineering the Bing GPT chatbot
- Repo: github.com/acheong08/EdgeGPT
- By: Antonio Cheong
-
- !help for help
-
- Type !exit to exit
- """,
- )
- parser = argparse.ArgumentParser()
- parser.add_argument("--enter-once", action="store_true")
- parser.add_argument("--no-stream", action="store_true")
- parser.add_argument("--rich", action="store_true")
- parser.add_argument(
- "--proxy",
- help="Proxy URL (e.g. socks5://127.0.0.1:1080)",
- type=str,
- )
- parser.add_argument(
- "--wss-link",
- help="WSS URL(e.g. wss://sydney.bing.com/sydney/ChatHub)",
- type=str,
- default="wss://sydney.bing.com/sydney/ChatHub",
- )
- parser.add_argument(
- "--style",
- choices=["creative", "balanced", "precise"],
- default="balanced",
- )
- parser.add_argument(
- "--prompt",
- type=str,
- default="",
- required=False,
- help="prompt to start with",
- )
- parser.add_argument(
- "--cookie-file",
- type=str,
- default="",
- required=False,
- help="path to cookie file",
- )
- args = parser.parse_args()
- asyncio.run(async_main(args))
-
-
-class Cookie:
- """
- Convenience class for Bing Cookie files, data, and configuration. This Class
- is updated dynamically by the Query class to allow cycling through >1
- cookie/credentials file e.g. when daily request limits (current 200 per
- account per day) are exceeded.
- """
-
- current_file_index = 0
- dirpath = Path("./").resolve()
- search_pattern = "bing_cookies_*.json"
- ignore_files = set()
-
- @classmethod
- def fetch_default(cls, path=None):
- from selenium import webdriver
- from selenium.webdriver.common.by import By
-
- driver = webdriver.Edge()
- driver.get("https://bing.com/chat")
- time.sleep(5)
- xpath = '//button[@id="bnp_btn_accept"]'
- driver.find_element(By.XPATH, xpath).click()
- time.sleep(2)
- xpath = '//a[@id="codexPrimaryButton"]'
- driver.find_element(By.XPATH, xpath).click()
- if path is None:
- path = Path("./bing_cookies__default.json")
- # Double underscore ensures this file is first when sorted
- cookies = driver.get_cookies()
- Path(path).write_text(json.dumps(cookies, indent=4), encoding="utf-8")
- # Path again in case supplied path is: str
- print(f"Cookies saved to: {path}")
- driver.quit()
-
- @classmethod
- def files(cls):
- """Return a sorted list of all cookie files matching .search_pattern"""
- all_files = set(cls.dirpath.glob(cls.search_pattern))
- return sorted(list(all_files - cls.ignore_files))
-
- @classmethod
- def import_data(cls):
- """
- Read the active cookie file and populate the following attributes:
-
- .current_filepath
- .current_data
- .image_token
- """
- try:
- cls.current_filepath = cls.files()[cls.current_file_index]
- except IndexError:
- print(
- "> Please set Cookie.current_filepath to a valid cookie file, then run Cookie.import_data()",
- )
- return
- print(f"> Importing cookies from: {cls.current_filepath.name}")
- with open(cls.current_filepath, encoding="utf-8") as file:
- cls.current_data = json.load(file)
- cls.image_token = [x for x in cls.current_data if x.get("name") == "_U"]
- cls.image_token = cls.image_token[0].get("value")
-
- @classmethod
- def import_next(cls):
- """
- Cycle through to the next cookies file. Import it. Mark the previous
- file to be ignored for the remainder of the current session.
- """
- cls.ignore_files.add(cls.current_filepath)
- if Cookie.current_file_index >= len(cls.files()):
- Cookie.current_file_index = 0
- Cookie.import_data()
-
-
-class Query:
- """
- A convenience class that wraps around EdgeGPT.Chatbot to encapsulate input,
- config, and output all together. Relies on Cookie class for authentication
- """
-
- def __init__(
- self,
- prompt,
- style="precise",
- content_type="text",
- cookie_file=0,
- echo=True,
- echo_prompt=False,
- ):
- """
- Arguments:
-
- prompt: Text to enter into Bing Chat
- style: creative, balanced, or precise
- content_type: "text" for Bing Chat; "image" for Dall-e
- cookie_file: Path, filepath string, or index (int) to list of cookie paths
- echo: Print something to confirm request made
- echo_prompt: Print confirmation of the evaluated prompt
- """
- self.index = []
- self.request_count = {}
- self.image_dirpath = Path("./").resolve()
- Cookie.import_data()
- self.index += [self]
- self.prompt = prompt
- files = Cookie.files()
- if isinstance(cookie_file, int):
- index = cookie_file if cookie_file < len(files) else 0
- else:
- if not isinstance(cookie_file, (str, Path)):
- message = "'cookie_file' must be an int, str, or Path object"
- raise TypeError(message)
- cookie_file = Path(cookie_file)
- if cookie_file in files: # Supplied filepath IS in Cookie.dirpath
- index = files.index(cookie_file)
- else: # Supplied filepath is NOT in Cookie.dirpath
- if cookie_file.is_file():
- Cookie.dirpath = cookie_file.parent.resolve()
- if cookie_file.is_dir():
- Cookie.dirpath = cookie_file.resolve()
- index = 0
- Cookie.current_file_index = index
- if content_type == "text":
- self.style = style
- self.log_and_send_query(echo, echo_prompt)
- if content_type == "image":
- self.create_image()
-
- def log_and_send_query(self, echo, echo_prompt):
- self.response = asyncio.run(self.send_to_bing(echo, echo_prompt))
- name = str(Cookie.current_filepath.name)
- if not self.request_count.get(name):
- self.request_count[name] = 1
- else:
- self.request_count[name] += 1
-
- def create_image(self):
- image_generator = ImageGen(Cookie.image_token)
- image_generator.save_images(
- image_generator.get_images(self.prompt),
- output_dir=self.image_dirpath,
- )
-
- async def send_to_bing(self, echo=True, echo_prompt=False):
- """Creat, submit, then close a Chatbot instance. Return the response"""
- retries = len(Cookie.files())
- while retries:
- try:
- bot = await Chatbot.create()
- if echo_prompt:
- print(f"> {self.prompt=}")
- if echo:
- print("> Waiting for response...")
- if self.style.lower() not in "creative balanced precise".split():
- self.style = "precise"
- response = await bot.ask(
- prompt=self.prompt,
- conversation_style=getattr(ConversationStyle, self.style),
- # wss_link="wss://sydney.bing.com/sydney/ChatHub"
- # What other values can this parameter take? It seems to be optional
- )
- return response
- except KeyError:
- print(
- f"> KeyError [{Cookie.current_filepath.name} may have exceeded the daily limit]",
- )
- Cookie.import_next()
- retries -= 1
- finally:
- await bot.close()
-
- @property
- def output(self):
- """The response from a completed Chatbot request"""
- return self.response["item"]["messages"][1]["text"]
-
- @property
- def sources(self):
- """The source names and details parsed from a completed Chatbot request"""
- return self.response["item"]["messages"][1]["sourceAttributions"]
-
- @property
- def sources_dict(self):
- """The source names and details as a dictionary"""
- sources_dict = {}
- name = "providerDisplayName"
- url = "seeMoreUrl"
- for source in self.sources:
- if name in source.keys() and url in source.keys():
- sources_dict[source[name]] = source[url]
- else:
- continue
- return sources_dict
-
- @property
- def code(self):
- """Extract and join any snippets of Python code in the response"""
- code_blocks = self.output.split("```")[1:-1:2]
- code_blocks = ["\n".join(x.splitlines()[1:]) for x in code_blocks]
- return "\n\n".join(code_blocks)
-
- @property
- def languages(self):
- """Extract all programming languages given in code blocks"""
- code_blocks = self.output.split("```")[1:-1:2]
- return {x.splitlines()[0] for x in code_blocks}
-
- @property
- def suggestions(self):
- """Follow-on questions suggested by the Chatbot"""
- return [
- x["text"]
- for x in self.response["item"]["messages"][1]["suggestedResponses"]
- ]
-
- def __repr__(self):
- return f""
-
- def __str__(self):
- return self.output
-
-
-class ImageQuery(Query):
- def __init__(self, prompt, **kwargs):
- kwargs.update({"content_type": "image"})
- super().__init__(prompt, **kwargs)
-
- def __repr__(self):
- return f""
-
-
-if __name__ == "__main__":
- main()
\ No newline at end of file
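For reference, the code and languages properties above pull fenced blocks out of the chatbot's markdown reply by splitting on ``` and keeping every second chunk. Below is a minimal, self-contained sketch of that extraction logic, independent of the deleted Query class and the chatbot/cookie helpers it depends on:

import textwrap

def extract_code_blocks(markdown_text: str) -> list[str]:
    # Chunks at odd indices between the first and last ``` marker are fenced code blocks.
    blocks = markdown_text.split("```")[1:-1:2]
    # Drop the language tag on the first line of each block and keep the body.
    return ["\n".join(block.splitlines()[1:]) for block in blocks]

sample = textwrap.dedent("""\
    Here is an example:
    ```python
    print("hello")
    ```
    and some closing text.
    """)
print(extract_code_blocks(sample))  # ['print("hello")']

The same slicing is why the languages property reads the first line of each chunk: that line carries the fence's language tag.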
diff --git a/spaces/fclong/summary/fengshen/examples/finetune_taiyi_stable_diffusion/finetune.py b/spaces/fclong/summary/fengshen/examples/finetune_taiyi_stable_diffusion/finetune.py
deleted file mode 100644
index c9f27358402cd0de23353acf6eaedf247949ec0a..0000000000000000000000000000000000000000
--- a/spaces/fclong/summary/fengshen/examples/finetune_taiyi_stable_diffusion/finetune.py
+++ /dev/null
@@ -1,188 +0,0 @@
-import os
-import torch
-import argparse
-from pytorch_lightning import (
- LightningModule,
- Trainer,
-)
-from pytorch_lightning.callbacks import (
- LearningRateMonitor,
-)
-from fengshen.data.universal_datamodule import UniversalDataModule
-from fengshen.models.model_utils import (
- add_module_args,
- configure_optimizers,
- get_total_steps,
-)
-from fengshen.utils.universal_checkpoint import UniversalCheckpoint
-from transformers import BertTokenizer, BertModel
-from diffusers import AutoencoderKL, DDPMScheduler, StableDiffusionPipeline, UNet2DConditionModel
-from torch.nn import functional as F
-from fengshen.data.taiyi_stable_diffusion_datasets.taiyi_datasets import add_data_args, load_data
-from torchvision import transforms
-from PIL import Image
-from torch.utils.data._utils.collate import default_collate
-
-
-class Collator():
- def __init__(self, args, tokenizer):
- self.image_transforms = transforms.Compose(
- [
- transforms.Resize(
- args.resolution, interpolation=transforms.InterpolationMode.BILINEAR),
- transforms.CenterCrop(
- args.resolution) if args.center_crop else transforms.RandomCrop(args.resolution),
- transforms.ToTensor(),
- transforms.Normalize([0.5], [0.5]),
- ]
- )
- self.tokenizer = tokenizer
-
- def __call__(self, inputs):
- examples = []
- max_length = min(max([len(i['caption']) for i in inputs]), 512)
- for i in inputs:
- example = {}
- instance_image = Image.open(i['img_path'])
- if not instance_image.mode == "RGB":
- instance_image = instance_image.convert("RGB")
- example["pixel_values"] = self.image_transforms(instance_image)
- example["input_ids"] = self.tokenizer(
- i['caption'],
- padding="max_length",
- truncation=True,
- max_length=max_length,
- return_tensors='pt',
- )['input_ids'][0]
- examples.append(example)
- return default_collate(examples)
-
-
-class StableDiffusion(LightningModule):
- @staticmethod
- def add_module_specific_args(parent_parser):
- parser = parent_parser.add_argument_group('Taiyi Stable Diffusion Module')
- parser.add_argument('--freeze_unet', action='store_true', default=False)
- parser.add_argument('--freeze_text_encoder', action='store_true', default=False)
- return parent_parser
-
- def __init__(self, args):
- super().__init__()
- self.tokenizer = BertTokenizer.from_pretrained(
- args.model_path, subfolder="tokenizer")
- self.text_encoder = BertModel.from_pretrained(
- args.model_path, subfolder="text_encoder") # load from taiyi_finetune-v0
- self.vae = AutoencoderKL.from_pretrained(
- args.model_path, subfolder="vae")
- self.unet = UNet2DConditionModel.from_pretrained(
- args.model_path, subfolder="unet")
- # TODO: using xformers together with deepspeed actually made things slower (to be confirmed)
- self.unet.set_use_memory_efficient_attention_xformers(False)
-
- self.noise_scheduler = DDPMScheduler(
- beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000
- )
-
- for param in self.vae.parameters():
- param.requires_grad = False
-
- if args.freeze_text_encoder:
- for param in self.text_encoder.parameters():
- param.requires_grad = False
-
- if args.freeze_unet:
- for param in self.unet.parameters():
- param.requires_grad = False
-
- self.save_hyperparameters(args)
-
- def setup(self, stage) -> None:
- if stage == 'fit':
- self.total_steps = get_total_steps(self.trainer, self.hparams)
- print('Total steps: {}' .format(self.total_steps))
-
- def configure_optimizers(self):
- return configure_optimizers(self)
-
- def training_step(self, batch, batch_idx):
- self.text_encoder.train()
-
- latents = self.vae.encode(batch["pixel_values"]).latent_dist.sample()
- latents = latents * 0.18215
-
- # Sample noise that we'll add to the latents
- noise = torch.randn(latents.shape).to(latents.device)
- noise = noise.to(dtype=self.unet.dtype)
- bsz = latents.shape[0]
- # Sample a random timestep for each image
- timesteps = torch.randint(
- 0, self.noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device)
- timesteps = timesteps.long()
- # Add noise to the latents according to the noise magnitude at each timestep
- # (this is the forward diffusion process)
-
- noisy_latents = self.noise_scheduler.add_noise(latents, noise, timesteps)
- noisy_latents = noisy_latents.to(dtype=self.unet.dtype)
-
- # Get the text embedding for conditioning
- encoder_hidden_states = self.text_encoder(batch["input_ids"])[0]
-
- # Predict the noise residual
- noise_pred = self.unet(noisy_latents, timesteps, encoder_hidden_states).sample
-
- loss = F.mse_loss(noise_pred, noise, reduction="none").mean([1, 2, 3]).mean()
- self.log("train_loss", loss.item(), on_epoch=False, prog_bar=True, logger=True)
-
- if self.trainer.global_rank == 0 and self.global_step == 100:
- # print GPU memory usage
- from fengshen.utils.utils import report_memory
- report_memory('stable diffusion')
-
- return {"loss": loss}
-
- def on_save_checkpoint(self, checkpoint) -> None:
- if self.trainer.global_rank == 0:
- print('saving model...')
- pipeline = StableDiffusionPipeline.from_pretrained(
- self.hparams.model_path,
- text_encoder=self.text_encoder,
- tokenizer=self.tokenizer,
- unet=self.unet)
- pipeline.save_pretrained(os.path.join(
- self.hparams.default_root_dir, f'hf_out_{self.trainer.current_epoch}_{self.trainer.global_step}'))
-
- def on_load_checkpoint(self, checkpoint) -> None:
- # For compatibility with older lightning versions: when resuming from a ckpt, older lightning resets the step count to 0
- global_step_offset = checkpoint["global_step"]
- if 'global_samples' in checkpoint:
- self.consumed_samples = checkpoint['global_samples']
- self.trainer.fit_loop.epoch_loop._batches_that_stepped = global_step_offset
-
-
-if __name__ == '__main__':
- args_parser = argparse.ArgumentParser()
- args_parser = add_module_args(args_parser)
- args_parser = add_data_args(args_parser)
- args_parser = UniversalDataModule.add_data_specific_args(args_parser)
- args_parser = Trainer.add_argparse_args(args_parser)
- args_parser = StableDiffusion.add_module_specific_args(args_parser)
- args_parser = UniversalCheckpoint.add_argparse_args(args_parser)
- args = args_parser.parse_args()
-
- lr_monitor = LearningRateMonitor(logging_interval='step')
- checkpoint_callback = UniversalCheckpoint(args)
- trainer = Trainer.from_argparse_args(args,
- callbacks=[
- lr_monitor,
- checkpoint_callback])
-
- model = StableDiffusion(args)
- tokenizer = model.tokenizer
- datasets = load_data(args, global_rank=trainer.global_rank)
- collate_fn = Collator(args, tokenizer)
-
- datamodule = UniversalDataModule(
- tokenizer=tokenizer, collate_fn=collate_fn, args=args, datasets=datasets)
-
- trainer.fit(model, datamodule, ckpt_path=args.load_ckpt_path)
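For context, training_step above follows the standard denoising-diffusion objective: scale the VAE latents, add scheduler-controlled noise at a random timestep per sample, have the UNet predict that noise, and take an MSE loss against it. The following is a minimal, self-contained sketch of just that forward-diffusion step, with random tensors standing in for real latents and for the UNet prediction; it assumes torch and diffusers are installed and mirrors the scheduler settings used above:

import torch
from torch.nn import functional as F
from diffusers import DDPMScheduler

scheduler = DDPMScheduler(
    beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000
)

latents = torch.randn(4, 4, 64, 64) * 0.18215  # stand-in for scaled VAE latents
noise = torch.randn_like(latents)              # the target the UNet is trained to predict
timesteps = torch.randint(
    0, scheduler.config.num_train_timesteps, (latents.shape[0],)
).long()

# Forward diffusion: mix noise into each latent according to its sampled timestep.
noisy_latents = scheduler.add_noise(latents, noise, timesteps)

# A real run would call the UNet on (noisy_latents, timesteps, text embeddings);
# a random tensor keeps this sketch self-contained.
noise_pred = torch.randn_like(noise)
loss = F.mse_loss(noise_pred, noise, reduction="none").mean([1, 2, 3]).mean()
print(f"denoising loss on random data: {loss.item():.4f}")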
diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/AetherSX2 APK How to Play PS Two Games on Android with BIOS Image.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/AetherSX2 APK How to Play PS Two Games on Android with BIOS Image.md
deleted file mode 100644
index aa3d00c56021ce82c009445a822958a089a992ae..0000000000000000000000000000000000000000
--- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/AetherSX2 APK How to Play PS Two Games on Android with BIOS Image.md
+++ /dev/null
@@ -1,182 +0,0 @@
-
-
AetherSX2 APK Download 2023: How to Play PS2 Games on Android
-
Do you miss playing your favorite PlayStation 2 games on your Android smartphone? If so, you're in luck. There's a new PS2 emulator for Android that lets you enjoy classic PS2 titles on your mobile device with amazing graphics and performance. It's called AetherSX2, and it's easily the best PS2 emulator for Android right now.
-
In this article, we'll tell you everything you need to know about AetherSX2, including its history, requirements, download and installation process, setup tips, game compatibility, pros and cons, and alternatives. By the end of this article, you'll be able to play PS2 games on Android with ease and nostalgia.
What is AetherSX2?
-
AetherSX2 is a PS2 emulator for Android that was developed by a single person who goes by the name Tahlreth. The developer used the code of PCSX2, a well-known and long-running PS2 emulator for PC, as the basis for their Android-based emulator. PCSX2 is an open-source project that has been in development since 2001 and has achieved a high level of accuracy and compatibility for PS2 emulation.
-
- Tahlreth got permission from the PCSX2 developers to use their code for the Android emulator, as long as they followed the terms of the LGPL license that governs PCSX2. This means that Tahlreth had to make their source code available to the public and respect the original authors' credits. This is unlike another PS2 emulator for Android called DamonPS2, which stole the PCSX2 code without following the license or giving proper attribution.
-
- AetherSX2 was initially released in December 2021 via the Google Play Store as an open beta. It quickly gained popularity among PS2 fans who were looking for a free, fast, and reliable way to play PS2 games on their Android devices. The developer has been updating the app regularly with bug fixes, performance improvements, and new features. You can also download the APK file from the official website or join the fan-run Discord server for more information and support.
How to set up AetherSX2 for optimal performance?
-
Now that you have downloaded and installed AetherSX2 on your Android device, you may want to tweak some settings to make the emulator run faster and smoother. AetherSX2 has a lot of options and features that you can customize to suit your preferences and device capabilities. Here are some tips and tricks to set up AetherSX2 for optimal performance:
-
Graphics Renderer
-
The graphics renderer is the most important setting that affects the emulator's performance and quality. AetherSX2 supports two types of graphics renderers: OpenGL ES and Vulkan. OpenGL ES is the default and more compatible renderer, but it may not be able to handle some games with high graphics demands. Vulkan is a newer and more powerful renderer, but it requires a device that supports the Vulkan API and may cause some graphical glitches or crashes in some games.
-
To change the graphics renderer, go to Settings > Graphics > Renderer and choose between OpenGL ES or Vulkan. You can also switch between them during gameplay by tapping on the menu icon on the top right corner of the screen and selecting "Change Renderer".
-
We recommend using Vulkan if your device supports it and if the game you're playing runs well with it. Otherwise, stick with OpenGL ES for better compatibility and stability.
-
Resolution Scaling
-
The resolution scaling is another important setting that affects the emulator's performance and quality. Resolution scaling allows you to increase or decrease the internal resolution of the PS2 games, which can make them look sharper or smoother on your device's screen. However, increasing the resolution scaling also increases the CPU and GPU load, which can cause slowdowns or stuttering in some games.
-
To change the resolution scaling, go to Settings > Graphics > Resolution Scaling and choose between 1x (Native), 2x, 3x, or 4x. You can also change it during gameplay by tapping on the menu icon on the top right corner of the screen and selecting "Change Resolution".
-
We recommend using 2x or 3x resolution scaling for most games, as they offer a good balance between performance and quality. You can try 4x resolution scaling if your device is powerful enough and if the game you're playing doesn't suffer from any performance issues. Avoid using 1x resolution scaling unless you want to play with the original PS2 graphics.
-
Control Scheme
-
The control scheme is another setting that affects the emulator's usability and comfort. Control scheme allows you to customize the layout and size of the virtual buttons on your device's screen, which can make them easier or harder to use depending on your preferences and device size. You can also enable or disable vibration feedback, which can make the buttons feel more responsive or annoying.
-
To change the control scheme, go to Settings > Controls > Control Scheme and choose between Default, Compact, or Custom. You can also edit the custom control scheme by tapping on "Edit Custom Layout" and dragging and resizing the buttons as you like. You can also enable or disable vibration feedback by toggling "Vibration" on or off.
-
We recommend using the default control scheme for most devices, as it offers a standard and comfortable layout for PS2 games. You can try the compact control scheme if your device has a small screen or if you want to have more space for viewing the game. You can also create your own custom control scheme if you want to have more control over the button placement and size.
How to load and play PS2 games on AetherSX2?
-
Now that you have set up AetherSX2 on your Android device, you may be wondering how to load and play PS2 games on it. Well, the process is not very complicated, but it does require some preparation and patience. Here are the steps to load and play PS2 games on AetherSX2:
-
Step 1: Dump your own PS2 games
-
The first and most important step is to dump your own PS2 games from your original discs. This is necessary for two reasons: one, to avoid piracy and respect the intellectual property of the game developers and publishers; and two, to ensure the compatibility and quality of the games on the emulator.
-
To dump your own PS2 games, you will need a PS2 console, a USB flash drive or an external hard drive, a PC, and a software called DVD Decrypter. You can download DVD Decrypter from this link: [DVD Decrypter 3.5.4.0 Free Download - VideoHelp].
-
Once you have everything ready, follow these steps:
-
-
Insert your USB flash drive or external hard drive into your PC and format it to FAT32 file system.
-
Insert your PS2 game disc into your PC's DVD drive and launch DVD Decrypter.
-
Select "Mode" from the menu bar and choose "ISO > Read".
-
Select your DVD drive as the source and your USB flash drive or external hard drive as the destination.
-
Click on the big button with a CD and an arrow to start the dumping process. Wait for it to finish.
-
Repeat the process for any other PS2 game discs you want to dump.
-
Eject your USB flash drive or external hard drive from your PC and insert it into your PS2 console.
-
Turn on your PS2 console and launch a software called uLaunchELF. You can download uLaunchELF from this link: [uLaunchELF v4.42b - PSX-Place].
-
Select "FileBrowser" from the main menu and navigate to your USB flash drive or external hard drive.
-
Select the ISO file of the PS2 game you want to dump and press R1.
-
Select "Copy" from the menu and press X.
-
Navigate to "mc0:/" or "mc1:/" depending on which memory card slot you want to use and press R1.
-
Select "Paste" from the menu and press X. Wait for it to finish.
-
Repeat the process for any other ISO files you want to dump.
-
Eject your USB flash drive or external hard drive from your PS2 console and insert it back into your PC.
-
-
Step 2: Convert your PS2 games to ISO/CHD/CSO files
-
The second step is to convert your PS2 games to ISO/CHD/CSO files. This is optional, but recommended for two reasons: one, to reduce the file size of the games and save storage space on your device; and two, to improve the loading speed and performance of the games on the emulator.
-
To convert your PS2 games to ISO/CHD/CSO files, you will need a PC, a software called Ciso Multi Compressor, and a software called CHDMAN. You can download Ciso Multi Compressor from this link: [Ciso Multi Compressor v1.0.0.6 - EmuCR]. You can download CHDMAN from this link: [CHDMAN v0.205 - EmuCR].
-
Once you have everything ready, follow these steps:
-
-
Launch Ciso Multi Compressor on your PC and select the ISO file of the PS2 game you want to convert.
-
Select the compression level you want to use. The higher the compression level, the smaller the file size, but also the longer the conversion time. We recommend using level 9 for optimal results.
-
Select "Compress" and wait for it to finish. You will get a CSO file with the same name as the original ISO file.
-
Repeat the process for any other ISO files you want to convert.
-
Launch CHDMAN on your PC and select the CSO file of the PS2 game you want to convert.
-
Select "Convert" and wait for it to finish. You will get a CHD file with the same name as the original CSO file.
-
Repeat the process for any other CSO files you want to convert.
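Note that CHDMAN (chdman) is normally a command-line tool shipped with MAME rather than a point-and-click program, and it is commonly run on the dumped ISO/CUE image directly instead of on a CSO. Purely as an illustration, here is a small Python sketch that batch-converts dumped images by shelling out to chdman. The folder paths are hypothetical, and it assumes chdman is installed and on your PATH and that your chdman build's createcd sub-command accepts the -i/-o flags:

import subprocess
from pathlib import Path

src_dir = Path("./ps2_dumps")   # hypothetical folder holding images dumped from your own discs
out_dir = Path("./ps2_chds")
out_dir.mkdir(exist_ok=True)

images = sorted(src_dir.glob("*.iso")) + sorted(src_dir.glob("*.cue"))
for image in images:
    target = out_dir / (image.stem + ".chd")
    # chdman createcd compresses a disc image into the CHD format that AetherSX2 can load.
    subprocess.run(["chdman", "createcd", "-i", str(image), "-o", str(target)], check=True)
    print(f"converted {image.name} -> {target.name}")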
-Step 3: Transfer your PS2 games to your Android device
-
The third step is to transfer your PS2 games to your Android device. This is the final and easiest step, as you just need to copy and paste the files from your PC to your device. You can use a USB cable, a wireless connection, or a cloud service to do this.
-
To transfer your PS2 games to your Android device, you will need a PC, an Android device, and a USB cable or a Wi-Fi connection or a cloud service. Once you have everything ready, follow these steps:
-
-
Connect your Android device to your PC using a USB cable or a Wi-Fi connection or a cloud service.
-
Locate the CHD files of the PS2 games you want to transfer on your PC.
-
Copy the CHD files and paste them into the "AetherSX2" folder on your Android device's internal storage or external SD card.
-
Disconnect your Android device from your PC.
-
You're done! You can now load and play PS2 games on your Android device with AetherSX2.
-
-
How to load and play PS2 games on AetherSX2?
-
Now that you have transferred your PS2 games to your Android device, you may be eager to load and play them on AetherSX2. Well, the process is very simple and intuitive. Here are the steps to load and play PS2 games on AetherSX2:
-
-
Launch AetherSX2 on your Android device and tap on "Load Game".
-
Navigate to the "AetherSX2" folder on your internal storage or external SD card and select the CHD file of the PS2 game you want to play.
-
Wait for the game to load and start playing.
-
You can use the virtual buttons on the screen to control the game, or connect a Bluetooth controller or a keyboard and mouse for better input.
-
You can also access the emulator menu by tapping on the menu icon on the top right corner of the screen. From there, you can save and load states, change settings, switch games, and more.
-
You're done! You can now enjoy PS2 games on your Android device with AetherSX2.
-
-
What are some of the best PS2 games to play on AetherSX2?
-
AetherSX2 supports a wide range of PS2 games, from popular titles to obscure gems. However, some games may run better than others, depending on their compatibility and optimization with the emulator. You can check the compatibility list on the official website or the Discord server to see how well different games run on AetherSX2.
-
Here are some of the best PS2 games to play on AetherSX2, based on their popularity and compatibility:
-
GTA San Andreas
-
GTA San Andreas is one of the most iconic and influential PS2 games ever made. It's an open-world action-adventure game that lets you explore a fictional version of California in the early 1990s. You play as Carl Johnson, a former gangster who returns to his hometown after his mother's death and gets involved in various criminal activities and missions. You can drive cars, bikes, planes, boats, and more; shoot guns, melee weapons, and explosives; customize your character's appearance, skills, and stats; date girlfriends; gamble; play mini-games; and much more. GTA San Andreas is a huge and immersive game that offers endless hours of fun and entertainment.
-
NFS Most Wanted
-
NFS Most Wanted is one of the best racing games for PS2. It's a street racing game that combines high-speed chases, police pursuits, car customization, and story mode. You play as an unnamed racer who arrives in Rockport City and challenges the local street racing crew called the Blacklist. You have to race against other racers, evade the cops, upgrade your car, and climb up the ranks of the Blacklist until you face the ultimate rival: Razor. NFS Most Wanted is a thrilling and adrenaline-pumping game that will keep you on the edge of your seat.
-
FIFA Football
-
FIFA Football is one of the best sports games for PS2. It's a soccer simulation game that lets you play as your favorite teams and players from around the world. You can choose from various modes, such as exhibition, career, tournament, online multiplayer, etc. You can also create your own custom teams, players, stadiums, etc. FIFA Football is a realistic and enjoyable game that will make you feel like a soccer star.
Bully: Anniversary Edition
-
Bully: Anniversary Edition is one of the best action-adventure games for PS2. It's a remastered version of the original Bully game that was released in 2006. It's a game that lets you play as Jimmy Hopkins, a rebellious teenager who is sent to a prestigious boarding school called Bullworth Academy. You have to deal with various challenges, such as bullies, teachers, cliques, pranks, classes, etc. You can also explore the town of Bullworth, make friends and enemies, date girls, ride bikes and skateboards, play mini-games, and more. Bully: Anniversary Edition is a hilarious and engaging game that will make you feel like a schoolboy again.
-
Max Payne Mobile
-
Max Payne Mobile is one of the best shooter games for PS2. It's a mobile version of the original Max Payne game that was released in 2001. It's a game that lets you play as Max Payne, a former NYPD detective who is on the run from the law and the mob after being framed for murder. You have to use your skills, weapons, and bullet time ability to fight your way through various enemies and locations. You also have to follow the dark and twisted story of Max Payne, which is told through graphic novel-style cutscenes and voiceovers. Max Payne Mobile is a gritty and cinematic game that will keep you hooked.
-
What are some of the advantages and disadvantages of AetherSX2?
-
AetherSX2 is undoubtedly the best PS2 emulator for Android right now, but it's not perfect. Like any other emulator, it has its own advantages and disadvantages that you should be aware of before using it. Here are some of the pros and cons of AetherSX2:
-
Advantages
-
-
It's free and open-source, which means you don't have to pay anything to use it or access its source code.
-
It's fast and stable, which means it can run most PS2 games at full speed and without crashes or glitches.
-
It's compatible and accurate, which means it can support a large number of PS2 games and emulate them faithfully.
-
It's customizable and user-friendly, which means you can adjust various settings and features to suit your preferences and device capabilities.
-
It's updated and supported, which means you can get regular updates and fixes from the developer and the community.
-
-
Disadvantages
-
-
It's still in beta stage, which means it may have some bugs or issues that need to be fixed or improved.
-
It's not 100% compatible, which means some PS2 games may not work or work properly on the emulator.
-
It's not legal in some countries or regions, which means you may be violating some laws or regulations by using it or downloading PS2 games.
-
It's not easy to set up, which means you may need some technical knowledge and patience to dump your own PS2 games and convert them to ISO/CHD/CSO files.
-
It's not available on iOS devices, which means you can't use it on your iPhone or iPad.
-
-
What are some of the alternatives to AetherSX2?
-
AetherSX2 is not the only PS2 emulator for Android out there. There are some other PS2 emulators for Android that you can try if you want to compare or switch. However, we must warn you that none of them are as good as AetherSX2 in terms of performance, compatibility, quality, or features. Here are some of the alternatives to AetherSX2:
-
Play!
-
Play! is another PS2 emulator for Android that is free and open-source. It was developed by Jean-Philip Desjardins, a former PCSX2 developer who wanted to create a cross-platform PS2 emulator. Play! has been in development since 2014 and has achieved some progress in PS2 emulation. However, it is still far behind AetherSX2 in terms of speed, stability, accuracy, and compatibility. Most PS2 games run very slowly or don't run at all on Play!. The graphics are also very poor and glitchy. Play! has a simple interface and minimal settings, which may appeal to some users who want a simple emulator. However, it also lacks many features and options that AetherSX2 offers. Play! is not updated very frequently and has a small community of users and supporters.
-
DamonPS2
-
DamonPS2 is another PS2 emulator for Android that is paid and closed-source. It was developed by a team of Chinese developers who claimed to have created the fastest and most compatible PS2 emulator for Android. DamonPS2 was released in 2017 and has gained a lot of popularity and controversy among PS2 fans. DamonPS2 can run some PS2 games at full speed and with decent graphics on Android devices. However, it also has many drawbacks and issues that make it inferior to AetherSX2. DamonPS2 is not free and requires you to pay a subscription fee to unlock some features and options. DamonPS2 is not open-source and has been accused of stealing the code of PCSX2 and other emulators without following the license or giving proper credit. DamonPS2 is not stable and has many bugs and crashes that affect the gameplay experience. DamonPS2 is not compatible and has many games that don't work or work poorly on the emulator. DamonPS2 is not user-friendly and has a complicated and cluttered interface and settings. DamonPS2 is not updated very often and has a poor customer service and support.
-
Conclusion
-
AetherSX2 is the best PS2 emulator for Android that you can find in 2023. It allows you to play PS2 games on your Android device with amazing graphics and performance. It is free, fast, stable, compatible, accurate, customizable, updated, and supported. It is easy to download, install, set up, load, and play PS2 games on AetherSX2. It offers a wide range of PS2 games that you can enjoy on your device, from GTA San Andreas to FIFA Football.
-
If you are a fan of PS2 games and want to relive your childhood memories or discover new titles, you should definitely try out AetherSX2. It will give you the best PS2 emulation experience on Android that you can ever imagine. You won't regret it.
-
So what are you waiting for? Download AetherSX2 APK now and start playing PS2 games on Android today!
-
FAQs
-
Here are some of the frequently asked questions about AetherSX2:
-
-
Is AetherSX2 safe to use?
-
AetherSX2 is safe to use as long as you download it from the official sources, such as the Google Play Store or the official website. You should also avoid downloading PS2 games from untrusted or illegal sources, as they may contain viruses or malware that can harm your device or data.
-
Is AetherSX2 legal to use?
-
AetherSX2 is legal to use in most countries or regions, as long as you own the original PS2 game discs and dump them yourself. You should not download or share PS2 games that you don't own, as that would be considered piracy and violate the intellectual property rights of the game developers and publishers.
-
How can I improve the performance of AetherSX2?
-
You can improve the performance of AetherSX2 by following these tips:
-
-
Use a powerful device that meets or exceeds the recommended specifications for running the emulator.
-
Use Vulkan as the graphics renderer if your device supports it and if the game runs well with it.
-
Use 2x or 3x resolution scaling for most games, or 4x if your device can handle it.
-
Close any background apps or processes that may consume CPU or RAM resources.
-
Enable "Fast Boot" in the emulator settings to skip the PS2 BIOS screen.
-
Disable any unnecessary features or options in the emulator settings that may affect the performance.
-
-
How can I contact the developer of AetherSX2?
-
You can contact the developer of AetherSX2 by sending an email to tahlreth@gmail.com or by joining the fan-run Discord server: [AetherSX2 Discord Server]. You can also follow the developer on Twitter: [@Tahlreth].
-
How can I support the development of AetherSX2?
-
You can support the development of AetherSX2 by donating to the developer via PayPal: [Donate to Tahlreth]. You can also leave a positive review and rating on the Google Play Store: [AetherSX2 - PS2 Emulator - PSP PPSSPP PS2 Emu]. You can also share your feedback, suggestions, bug reports, screenshots, videos, etc. on the official website or the Discord server.
-
-
\ No newline at end of file
diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Chicken Gun Mod Menu APK 2.7.02 Unlimited Money and Ammo.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Chicken Gun Mod Menu APK 2.7.02 Unlimited Money and Ammo.md
deleted file mode 100644
index ccffa2d0f0887adb47e6f14f3ecda42933ea13db..0000000000000000000000000000000000000000
--- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Chicken Gun Mod Menu APK 2.7.02 Unlimited Money and Ammo.md
+++ /dev/null
@@ -1,144 +0,0 @@
-
-
Chicken Gun Mod Menu APK 2.7 02: Everything You Need to Know
-
If you are looking for a fun and exciting multiplayer shooter game, you might want to check out Chicken Gun. This game lets you play as a chicken with a gun and fight against other players in various modes and maps. You can customize your chicken, upgrade your weapons, and enjoy the hilarious graphics and sound effects. But what if you want to have more features and options in the game? That's where Chicken Gun Mod Menu APK 2.7 02 comes in. This is a modified version of the original game that gives you access to a lot of cheats and hacks that can make the game more enjoyable and easier. In this article, we will tell you everything you need to know about Chicken Gun Mod Menu APK 2.7 02, including what it is, how to download and install it, and what the benefits and drawbacks of using it are.
What is Chicken Gun?
-
Chicken Gun is a multiplayer shooter game developed by KAKAROD INTERACTIVE. It was released in 2020 and has gained a lot of popularity among gamers who love action and comedy. The game is available for both Android and iOS devices, and you can download it for free from the Google Play Store or the App Store.
-
Features and gameplay of Chicken Gun
-
The game has a simple but engaging gameplay that involves shooting other chickens with various weapons. You can choose from different modes, such as deathmatch, team deathmatch, capture the flag, or zombie mode. You can also select from different maps, such as farm, city, desert, or space. The game supports up to 8 players per match, and you can play with your friends or random players online.
-
The game also allows you to customize your chicken with different skins, hats, glasses, masks, or accessories. You can also upgrade your weapons with different attachments, such as scopes, silencers, or magazines. The game has a lot of humor and fun elements, such as the chicken noises, the ragdoll physics, the blood splatters, or the chat system.
-
What is Chicken Gun Mod Menu APK 2.7 02?
-
A modified version of the original game
-
Chicken Gun Mod Menu APK 2.7 02 is a modified version of the original game that gives you access to a mod menu containing various cheats and hacks designed to enhance your gaming experience. Some of the features you can get from the mod menu apk are:
-
-
Unlimited money: You can get unlimited money to buy anything you want in the game.
-
Unlimited ammo: You can shoot without worrying about running out of ammo.
-
No reload: You can fire continuously without having to reload your weapon.
-
No recoil: You can shoot with accuracy and stability without any recoil.
-
No spread: You can shoot with precision without any bullet spread.
-
God mode: You can become invincible and immune to any damage.
-
Invisible mode: You can become invisible and undetectable by other players.
-
Speed hack: You can move faster than normal.
-
Fly hack: You can fly in the air like a bird.
-
Teleport hack: You can teleport to any location on the map.
-
Wall hack: You can see through walls and shoot through them.
-
Aimbot: You can automatically aim at your enemies and kill them with one shot.
-
-
As you can see, the mod menu apk gives you a lot of advantages and options that can make the game more fun and easy. However, it also has some drawbacks that you should be aware of.
-
Benefits and drawbacks of using the mod menu apk
-
The main benefit of using the mod menu apk is that you can enjoy the game without any limitations or restrictions. You can buy anything you want, use any weapon you like, and play any mode or map you prefer. You can also dominate the game and win every match with ease. You can have a lot of fun and entertainment with the mod menu apk.
-
However, the mod menu apk also has some drawbacks that you should consider before using it. Some of the drawbacks are:
-
-
Risk of getting banned: The mod menu apk is not authorized or supported by the game developers, and it violates the terms and conditions of the game. Therefore, if you use the mod menu apk, you might get detected and banned by the game servers. This means that you might lose your account, your progress, and your data. You might also face legal consequences for using the mod menu apk.
-
Risk of getting viruses: The mod menu apk is not verified or tested by any official source, and it might contain harmful viruses or malware that can damage your device or steal your personal information. Therefore, if you download and install the mod menu apk from an unknown or untrusted source, you might expose your device and your data to security risks.
-
Risk of losing interest: The mod menu apk might make the game too easy and boring for you. If you use all the cheats and hacks, you might lose the challenge and the thrill of the game. You might also lose the respect and the fun of playing with other players who are playing fairly and honestly. Therefore, if you use the mod menu apk too much, you might lose interest in the game and stop playing it.
-
-
Therefore, you should weigh the benefits and drawbacks of using the mod menu apk carefully before deciding to use it. You should also use it at your own risk and responsibility.
-
-
How to download and install Chicken Gun Mod Menu APK 2.7 02?
-
Steps to download and install the mod menu apk
-
If you still want to try the mod menu apk, here are the steps to download and install it on your Android device:
-
-
First, you need to uninstall the original version of Chicken Gun from your device if you have it installed.
-
Second, you need to enable the installation of apps from unknown sources on your device settings. This will allow you to install the mod menu apk from a third-party source.
-
Third, you need to find a reliable and safe website that provides the download link for Chicken Gun Mod Menu APK 2.7 02. You can search for it on Google or use a trusted source like [this one].
-
Fourth, you need to download the mod menu apk file from the website and save it on your device storage.
-
Fifth, you need to locate the mod menu apk file on your device file manager and tap on it to start the installation process.
-
Sixth, you need to follow the instructions on the screen and wait for the installation to complete.
-
Seventh, you need to launch the game from your app drawer and enjoy using the mod menu apk.
-
-
Precautions and tips to avoid any issues
-
To avoid any issues or problems while using the mod menu apk, here are some precautions and tips that you should follow:
-
-
Make sure that your device meets the minimum requirements for running Chicken Gun Mod Menu APK 2.7 02. Your device should have at least Android 4.4 or higher version, 2 GB of RAM, 100 MB of free storage space, and a stable internet connection.
-
Make sure that you download and install Chicken Gun Mod Menu APK 2.7 02 from a reputable and secure website that does not contain any viruses or malware. You can also scan the file with an antivirus app before installing it.
-
Make sure that you use a VPN app or a proxy server while playing Chicken Gun Mod Menu APK 2.7 02 online. This will help you hide your IP address and avoid getting detected or banned by the game servers. You can also use a fake or secondary account to play the game and avoid risking your main account.
-
Make sure that you use the mod menu apk features wisely and moderately. Do not abuse or overuse the cheats and hacks, as this might ruin the game balance and fun for yourself and other players. You might also get reported or flagged by other players who notice your suspicious behavior. Try to use the mod menu apk features only when you need them or when you want to have some fun.
-
Make sure that you update the mod menu apk regularly to keep up with the latest version of the game. The mod menu apk might not work properly or at all if the game version is different from the mod menu apk version. You can check for updates on the website where you downloaded the mod menu apk or on the mod menu itself.
-
-
Conclusion
-
Summary of the main points
-
In conclusion, Chicken Gun Mod Menu APK 2.7 02 is a modified version of Chicken Gun that gives you access to a mod menu that contains various cheats and hacks that can make the game more fun and easy. You can get unlimited money, ammo, god mode, invisible mode, speed hack, fly hack, teleport hack, wall hack, aimbot, and more. However, you should also be aware of the drawbacks and risks of using the mod menu apk, such as getting banned, getting viruses, or losing interest in the game. You should also follow some precautions and tips to avoid any issues or problems while using the mod menu apk.
-
Call to action and final thoughts
-
If you are interested in trying Chicken Gun Mod Menu APK 2.7 02, you can download and install it from [this link]. However, we recommend that you use it at your own risk and responsibility, and that you respect the game developers and other players who play fairly and honestly. We hope that this article has helped you understand everything you need to know about Chicken Gun Mod Menu APK 2.7 02. Thank you for reading and have fun playing Chicken Gun!
-
FAQs
-
What is Chicken Gun?
-
Chicken Gun is a multiplayer shooter game that lets you play as a chicken with a gun and fight against other players in various modes and maps.
-
What is Chicken Gun Mod Menu APK 2.7 02?
-
Chicken Gun Mod Menu APK 2.7 02 is a modified version of Chicken Gun that gives you access to a mod menu that contains various cheats and hacks that can make the game more fun and easy.
-
How to download and install Chicken Gun Mod Menu APK 2.7 02?
-
You can download and install Chicken Gun Mod Menu APK 2.7 02 by following these steps:
-
-
Uninstall the original version of Chicken Gun from your device.
-
Enable the installation of apps from unknown sources on your device settings.
-
Find a reliable and safe website that provides the download link for Chicken Gun Mod Menu APK 2.7 02.
-
Download the mod menu apk file from the website and save it on your device storage.
-
Locate the mod menu apk file on your device file manager and tap on it to start the installation process.
-
Follow the instructions on the screen and wait for the installation to complete.
-
Launch the game from your app drawer and enjoy using the mod menu apk.
-
-
What are the benefits and drawbacks of using Chicken Gun Mod Menu APK 2.7 02?
-
The benefits of using Chicken Gun Mod Menu APK 2.7 02 are:
-
-
You can enjoy the game without any limitations or restrictions.
-
You can buy anything you want, use any weapon you like, and play any mode or map you prefer.
-
You can dominate the game and win every match with ease.
-
You can have a lot of fun and entertainment with the mod menu apk.
-
-
The drawbacks of using Chicken Gun Mod Menu APK 2.7 02 are:
-
-
You might get detected and banned by the game servers.
-
You might get harmful viruses or malware that can damage your device or steal your personal information.
-
You might lose interest in the game as it becomes too easy and boring.
-
-
What are some precautions and tips to avoid any issues while using Chicken Gun Mod Menu APK 2.7 02?
-
Some precautions and tips to avoid any issues while using Chicken Gun Mod Menu APK 2.7 02 are:
Some precautions and tips to avoid any issues while using Chicken Gun Mod Menu APK 2.7 02 are:
-
-
Make sure that your device meets the minimum requirements for running Chicken Gun Mod Menu APK 2.7 02.
-
Make sure that you download and install Chicken Gun Mod Menu APK 2.7 02 from a reputable and secure website.
-
Make sure that you use a VPN app or a proxy server while playing Chicken Gun Mod Menu APK 2.7 02 online.
-
Make sure that you use the mod menu apk features wisely and moderately.
-
Make sure that you update the mod menu apk regularly to keep up with the latest version of the game.
-
-
-
\ No newline at end of file
diff --git a/spaces/fffiloni/DragGAN/stylegan2/op/conv2d_gradfix.py b/spaces/fffiloni/DragGAN/stylegan2/op/conv2d_gradfix.py
deleted file mode 100644
index bb2f94bbcb8132299fd4d538972d32bd7ff6e7d6..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/DragGAN/stylegan2/op/conv2d_gradfix.py
+++ /dev/null
@@ -1,227 +0,0 @@
-import contextlib
-import warnings
-
-import torch
-from torch import autograd
-from torch.nn import functional as F
-
-enabled = True
-weight_gradients_disabled = False
-
-
-@contextlib.contextmanager
-def no_weight_gradients():
- global weight_gradients_disabled
-
- old = weight_gradients_disabled
- weight_gradients_disabled = True
- yield
- weight_gradients_disabled = old
-
-
-def conv2d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1):
- if could_use_op(input):
- return conv2d_gradfix(
- transpose=False,
- weight_shape=weight.shape,
- stride=stride,
- padding=padding,
- output_padding=0,
- dilation=dilation,
- groups=groups,
- ).apply(input, weight, bias)
-
- return F.conv2d(
- input=input,
- weight=weight,
- bias=bias,
- stride=stride,
- padding=padding,
- dilation=dilation,
- groups=groups,
- )
-
-
-def conv_transpose2d(
- input,
- weight,
- bias=None,
- stride=1,
- padding=0,
- output_padding=0,
- groups=1,
- dilation=1,
-):
- if could_use_op(input):
- return conv2d_gradfix(
- transpose=True,
- weight_shape=weight.shape,
- stride=stride,
- padding=padding,
- output_padding=output_padding,
- groups=groups,
- dilation=dilation,
- ).apply(input, weight, bias)
-
- return F.conv_transpose2d(
- input=input,
- weight=weight,
- bias=bias,
- stride=stride,
- padding=padding,
- output_padding=output_padding,
- dilation=dilation,
- groups=groups,
- )
-
-
-def could_use_op(input):
- if (not enabled) or (not torch.backends.cudnn.enabled):
- return False
-
- if input.device.type != "cuda":
- return False
-
- if any(torch.__version__.startswith(x) for x in ["1.7.", "1.8."]):
- return True
-
- warnings.warn(
- f"conv2d_gradfix not supported on PyTorch {torch.__version__}. Falling back to torch.nn.functional.conv2d()."
- )
-
- return False
-
-
-def ensure_tuple(xs, ndim):
- xs = tuple(xs) if isinstance(xs, (tuple, list)) else (xs,) * ndim
-
- return xs
-
-
-conv2d_gradfix_cache = dict()
-
-
-def conv2d_gradfix(
- transpose, weight_shape, stride, padding, output_padding, dilation, groups
-):
- ndim = 2
- weight_shape = tuple(weight_shape)
- stride = ensure_tuple(stride, ndim)
- padding = ensure_tuple(padding, ndim)
- output_padding = ensure_tuple(output_padding, ndim)
- dilation = ensure_tuple(dilation, ndim)
-
- key = (transpose, weight_shape, stride, padding, output_padding, dilation, groups)
- if key in conv2d_gradfix_cache:
- return conv2d_gradfix_cache[key]
-
- common_kwargs = dict(
- stride=stride, padding=padding, dilation=dilation, groups=groups
- )
-
- def calc_output_padding(input_shape, output_shape):
- if transpose:
- return [0, 0]
-
- return [
- input_shape[i + 2]
- - (output_shape[i + 2] - 1) * stride[i]
- - (1 - 2 * padding[i])
- - dilation[i] * (weight_shape[i + 2] - 1)
- for i in range(ndim)
- ]
-
- class Conv2d(autograd.Function):
- @staticmethod
- def forward(ctx, input, weight, bias):
- if not transpose:
- out = F.conv2d(input=input, weight=weight, bias=bias, **common_kwargs)
-
- else:
- out = F.conv_transpose2d(
- input=input,
- weight=weight,
- bias=bias,
- output_padding=output_padding,
- **common_kwargs,
- )
-
- ctx.save_for_backward(input, weight)
-
- return out
-
- @staticmethod
- def backward(ctx, grad_output):
- input, weight = ctx.saved_tensors
- grad_input, grad_weight, grad_bias = None, None, None
-
- if ctx.needs_input_grad[0]:
- p = calc_output_padding(
- input_shape=input.shape, output_shape=grad_output.shape
- )
- grad_input = conv2d_gradfix(
- transpose=(not transpose),
- weight_shape=weight_shape,
- output_padding=p,
- **common_kwargs,
- ).apply(grad_output, weight, None)
-
- if ctx.needs_input_grad[1] and not weight_gradients_disabled:
- grad_weight = Conv2dGradWeight.apply(grad_output, input)
-
- if ctx.needs_input_grad[2]:
- grad_bias = grad_output.sum((0, 2, 3))
-
- return grad_input, grad_weight, grad_bias
-
- class Conv2dGradWeight(autograd.Function):
- @staticmethod
- def forward(ctx, grad_output, input):
- op = torch._C._jit_get_operation(
- "aten::cudnn_convolution_backward_weight"
- if not transpose
- else "aten::cudnn_convolution_transpose_backward_weight"
- )
- flags = [
- torch.backends.cudnn.benchmark,
- torch.backends.cudnn.deterministic,
- torch.backends.cudnn.allow_tf32,
- ]
- grad_weight = op(
- weight_shape,
- grad_output,
- input,
- padding,
- stride,
- dilation,
- groups,
- *flags,
- )
- ctx.save_for_backward(grad_output, input)
-
- return grad_weight
-
- @staticmethod
- def backward(ctx, grad_grad_weight):
- grad_output, input = ctx.saved_tensors
- grad_grad_output, grad_grad_input = None, None
-
- if ctx.needs_input_grad[0]:
- grad_grad_output = Conv2d.apply(input, grad_grad_weight, None)
-
- if ctx.needs_input_grad[1]:
- p = calc_output_padding(
- input_shape=input.shape, output_shape=grad_output.shape
- )
- grad_grad_input = conv2d_gradfix(
- transpose=(not transpose),
- weight_shape=weight_shape,
- output_padding=p,
- **common_kwargs,
- ).apply(grad_output, grad_grad_weight, None)
-
- return grad_grad_output, grad_grad_input
-
- conv2d_gradfix_cache[key] = Conv2d
-
- return Conv2d
diff --git a/spaces/fffiloni/Image-to-MusicGen/CODE_OF_CONDUCT.md b/spaces/fffiloni/Image-to-MusicGen/CODE_OF_CONDUCT.md
deleted file mode 100644
index 83f431e8feeb7e80d571f39c9f6c1b96857b5f85..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/Image-to-MusicGen/CODE_OF_CONDUCT.md
+++ /dev/null
@@ -1,80 +0,0 @@
-# Code of Conduct
-
-## Our Pledge
-
-In the interest of fostering an open and welcoming environment, we as
-contributors and maintainers pledge to make participation in our project and
-our community a harassment-free experience for everyone, regardless of age, body
-size, disability, ethnicity, sex characteristics, gender identity and expression,
-level of experience, education, socio-economic status, nationality, personal
-appearance, race, religion, or sexual identity and orientation.
-
-## Our Standards
-
-Examples of behavior that contributes to creating a positive environment
-include:
-
-* Using welcoming and inclusive language
-* Being respectful of differing viewpoints and experiences
-* Gracefully accepting constructive criticism
-* Focusing on what is best for the community
-* Showing empathy towards other community members
-
-Examples of unacceptable behavior by participants include:
-
-* The use of sexualized language or imagery and unwelcome sexual attention or
-advances
-* Trolling, insulting/derogatory comments, and personal or political attacks
-* Public or private harassment
-* Publishing others' private information, such as a physical or electronic
-address, without explicit permission
-* Other conduct which could reasonably be considered inappropriate in a
-professional setting
-
-## Our Responsibilities
-
-Project maintainers are responsible for clarifying the standards of acceptable
-behavior and are expected to take appropriate and fair corrective action in
-response to any instances of unacceptable behavior.
-
-Project maintainers have the right and responsibility to remove, edit, or
-reject comments, commits, code, wiki edits, issues, and other contributions
-that are not aligned to this Code of Conduct, or to ban temporarily or
-permanently any contributor for other behaviors that they deem inappropriate,
-threatening, offensive, or harmful.
-
-## Scope
-
-This Code of Conduct applies within all project spaces, and it also applies when
-an individual is representing the project or its community in public spaces.
-Examples of representing a project or community include using an official
-project e-mail address, posting via an official social media account, or acting
-as an appointed representative at an online or offline event. Representation of
-a project may be further defined and clarified by project maintainers.
-
-This Code of Conduct also applies outside the project spaces when there is a
-reasonable belief that an individual's behavior may have a negative impact on
-the project or its community.
-
-## Enforcement
-
-Instances of abusive, harassing, or otherwise unacceptable behavior may be
-reported by contacting the project team at . All
-complaints will be reviewed and investigated and will result in a response that
-is deemed necessary and appropriate to the circumstances. The project team is
-obligated to maintain confidentiality with regard to the reporter of an incident.
-Further details of specific enforcement policies may be posted separately.
-
-Project maintainers who do not follow or enforce the Code of Conduct in good
-faith may face temporary or permanent repercussions as determined by other
-members of the project's leadership.
-
-## Attribution
-
-This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
-available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
-
-[homepage]: https://www.contributor-covenant.org
-
-For answers to common questions about this code of conduct, see
-https://www.contributor-covenant.org/faq
diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/fs.d.ts b/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/fs.d.ts
deleted file mode 100644
index 75c53fb0d542e5f7ce5b43c68982e428a6e653aa..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/fs.d.ts
+++ /dev/null
@@ -1,3872 +0,0 @@
-/**
- * The `fs` module enables interacting with the file system in a
- * way modeled on standard POSIX functions.
- *
- * To use the promise-based APIs:
- *
- * ```js
- * import * as fs from 'fs/promises';
- * ```
- *
- * To use the callback and sync APIs:
- *
- * ```js
- * import * as fs from 'fs';
- * ```
- *
- * All file system operations have synchronous, callback, and promise-based
- * forms, and are accessible using both CommonJS syntax and ES6 Modules (ESM).
- * @see [source](https://github.com/nodejs/node/blob/v18.0.0/lib/fs.js)
- */
-declare module 'fs' {
- import * as stream from 'node:stream';
- import { Abortable, EventEmitter } from 'node:events';
- import { URL } from 'node:url';
- import * as promises from 'node:fs/promises';
- export { promises };
- /**
- * Valid types for path values in "fs".
- */
- export type PathLike = string | Buffer | URL;
- export type PathOrFileDescriptor = PathLike | number;
- export type TimeLike = string | number | Date;
- export type NoParamCallback = (err: NodeJS.ErrnoException | null) => void;
- export type BufferEncodingOption =
- | 'buffer'
- | {
- encoding: 'buffer';
- };
- export interface ObjectEncodingOptions {
- encoding?: BufferEncoding | null | undefined;
- }
- export type EncodingOption = ObjectEncodingOptions | BufferEncoding | undefined | null;
- export type OpenMode = number | string;
- export type Mode = number | string;
- export interface StatsBase<T> {
- isFile(): boolean;
- isDirectory(): boolean;
- isBlockDevice(): boolean;
- isCharacterDevice(): boolean;
- isSymbolicLink(): boolean;
- isFIFO(): boolean;
- isSocket(): boolean;
- dev: T;
- ino: T;
- mode: T;
- nlink: T;
- uid: T;
- gid: T;
- rdev: T;
- size: T;
- blksize: T;
- blocks: T;
- atimeMs: T;
- mtimeMs: T;
- ctimeMs: T;
- birthtimeMs: T;
- atime: Date;
- mtime: Date;
- ctime: Date;
- birthtime: Date;
- }
- export interface Stats extends StatsBase<number> {}
- /**
- * A `fs.Stats` object provides information about a file.
- *
- * Objects returned from {@link stat}, {@link lstat} and {@link fstat} and
- * their synchronous counterparts are of this type.
- * If `bigint` in the `options` passed to those methods is true, the numeric values
- * will be `bigint` instead of `number`, and the object will contain additional
- * nanosecond-precision properties suffixed with `Ns`.
- *
- * ```console
- * Stats {
- * dev: 2114,
- * ino: 48064969,
- * mode: 33188,
- * nlink: 1,
- * uid: 85,
- * gid: 100,
- * rdev: 0,
- * size: 527,
- * blksize: 4096,
- * blocks: 8,
- * atimeMs: 1318289051000.1,
- * mtimeMs: 1318289051000.1,
- * ctimeMs: 1318289051000.1,
- * birthtimeMs: 1318289051000.1,
- * atime: Mon, 10 Oct 2011 23:24:11 GMT,
- * mtime: Mon, 10 Oct 2011 23:24:11 GMT,
- * ctime: Mon, 10 Oct 2011 23:24:11 GMT,
- * birthtime: Mon, 10 Oct 2011 23:24:11 GMT }
- * ```
- *
- * `bigint` version:
- *
- * ```console
- * BigIntStats {
- * dev: 2114n,
- * ino: 48064969n,
- * mode: 33188n,
- * nlink: 1n,
- * uid: 85n,
- * gid: 100n,
- * rdev: 0n,
- * size: 527n,
- * blksize: 4096n,
- * blocks: 8n,
- * atimeMs: 1318289051000n,
- * mtimeMs: 1318289051000n,
- * ctimeMs: 1318289051000n,
- * birthtimeMs: 1318289051000n,
- * atimeNs: 1318289051000000000n,
- * mtimeNs: 1318289051000000000n,
- * ctimeNs: 1318289051000000000n,
- * birthtimeNs: 1318289051000000000n,
- * atime: Mon, 10 Oct 2011 23:24:11 GMT,
- * mtime: Mon, 10 Oct 2011 23:24:11 GMT,
- * ctime: Mon, 10 Oct 2011 23:24:11 GMT,
- * birthtime: Mon, 10 Oct 2011 23:24:11 GMT }
- * ```
- * @since v0.1.21
- */
- export class Stats {}
- /**
- * A representation of a directory entry, which can be a file or a subdirectory
- * within the directory, as returned by reading from an `fs.Dir`. The
- * directory entry is a combination of the file name and file type pairs.
- *
- * Additionally, when {@link readdir} or {@link readdirSync} is called with
- * the `withFileTypes` option set to `true`, the resulting array is filled with `fs.Dirent` objects, rather than strings or `Buffer` s.
- * @since v10.10.0
- */
- export class Dirent {
- /**
- * Returns `true` if the `fs.Dirent` object describes a regular file.
- * @since v10.10.0
- */
- isFile(): boolean;
- /**
- * Returns `true` if the `fs.Dirent` object describes a file system
- * directory.
- * @since v10.10.0
- */
- isDirectory(): boolean;
- /**
- * Returns `true` if the `fs.Dirent` object describes a block device.
- * @since v10.10.0
- */
- isBlockDevice(): boolean;
- /**
- * Returns `true` if the `fs.Dirent` object describes a character device.
- * @since v10.10.0
- */
- isCharacterDevice(): boolean;
- /**
- * Returns `true` if the `fs.Dirent` object describes a symbolic link.
- * @since v10.10.0
- */
- isSymbolicLink(): boolean;
- /**
- * Returns `true` if the `fs.Dirent` object describes a first-in-first-out
- * (FIFO) pipe.
- * @since v10.10.0
- */
- isFIFO(): boolean;
- /**
- * Returns `true` if the `fs.Dirent` object describes a socket.
- * @since v10.10.0
- */
- isSocket(): boolean;
- /**
- * The file name that this `fs.Dirent` object refers to. The type of this
- * value is determined by the `options.encoding` passed to {@link readdir} or {@link readdirSync}.
- * @since v10.10.0
- */
- name: string;
- }
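- /**
- * Usage sketch (illustrative addition, not part of the upstream typings; the
- * directory './' is an assumed example): with `withFileTypes: true`, {@link readdir}
- * yields `fs.Dirent` objects instead of plain names.
- *
- * ```js
- * import { readdir } from 'fs';
- *
- * readdir('./', { withFileTypes: true }, (err, entries) => {
- *   if (err) throw err;
- *   for (const entry of entries) {
- *     console.log(entry.name, entry.isDirectory() ? 'directory' : 'file');
- *   }
- * });
- * ```
- */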
- /**
- * A class representing a directory stream.
- *
- * Created by {@link opendir}, {@link opendirSync}, or `fsPromises.opendir()`.
- *
- * ```js
- * import { opendir } from 'fs/promises';
- *
- * try {
- * const dir = await opendir('./');
- * for await (const dirent of dir)
- * console.log(dirent.name);
- * } catch (err) {
- * console.error(err);
- * }
- * ```
- *
- * When using the async iterator, the `fs.Dir` object will be automatically
- * closed after the iterator exits.
- * @since v12.12.0
- */
- export class Dir implements AsyncIterable<Dirent> {
- /**
- * The read-only path of this directory as was provided to {@link opendir},{@link opendirSync}, or `fsPromises.opendir()`.
- * @since v12.12.0
- */
- readonly path: string;
- /**
- * Asynchronously iterates over the directory via `readdir(3)` until all entries have been read.
- */
- [Symbol.asyncIterator](): AsyncIterableIterator<Dirent>;
- /**
- * Asynchronously close the directory's underlying resource handle.
- * Subsequent reads will result in errors.
- *
- * A promise is returned that will be resolved after the resource has been
- * closed.
- * @since v12.12.0
- */
- close(): Promise<void>;
- close(cb: NoParamCallback): void;
- /**
- * Synchronously close the directory's underlying resource handle.
- * Subsequent reads will result in errors.
- * @since v12.12.0
- */
- closeSync(): void;
- /**
- * Asynchronously read the next directory entry via [`readdir(3)`](http://man7.org/linux/man-pages/man3/readdir.3.html) as an `fs.Dirent`.
- *
- * A promise is returned that will be resolved with an `fs.Dirent`, or `null`if there are no more directory entries to read.
- *
- * Directory entries returned by this function are in no particular order as
- * provided by the operating system's underlying directory mechanisms.
- * Entries added or removed while iterating over the directory might not be
- * included in the iteration results.
- * @since v12.12.0
- * @return containing {fs.Dirent|null}
- */
- read(): Promise<Dirent | null>;
- read(cb: (err: NodeJS.ErrnoException | null, dirEnt: Dirent | null) => void): void;
- /**
- * Synchronously read the next directory entry as an `fs.Dirent`. See the
- * POSIX [`readdir(3)`](http://man7.org/linux/man-pages/man3/readdir.3.html) documentation for more detail.
- *
- * If there are no more directory entries to read, `null` will be returned.
- *
- * Directory entries returned by this function are in no particular order as
- * provided by the operating system's underlying directory mechanisms.
- * Entries added or removed while iterating over the directory might not be
- * included in the iteration results.
- * @since v12.12.0
- */
- readSync(): Dirent | null;
- }
- /**
- * Class: fs.StatWatcher
- * @since v14.3.0, v12.20.0
- * Extends `EventEmitter`
- * A successful call to {@link watchFile} method will return a new fs.StatWatcher object.
- */
- export interface StatWatcher extends EventEmitter {
- /**
- * When called, requests that the Node.js event loop _not_ exit so long as the `fs.StatWatcher` is active. Calling `watcher.ref()` multiple times will have
- * no effect.
- *
- * By default, all `fs.StatWatcher` objects are "ref'ed", making it normally
- * unnecessary to call `watcher.ref()` unless `watcher.unref()` had been
- * called previously.
- * @since v14.3.0, v12.20.0
- */
- ref(): this;
- /**
- * When called, the active `fs.StatWatcher` object will not require the Node.js
- * event loop to remain active. If there is no other activity keeping the
- * event loop running, the process may exit before the `fs.StatWatcher` object's
- * callback is invoked. Calling `watcher.unref()` multiple times will have
- * no effect.
- * @since v14.3.0, v12.20.0
- */
- unref(): this;
- }
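- /**
- * Usage sketch (illustrative addition, not part of the upstream typings; the
- * file name 'data.txt' and the 1000 ms interval are assumed): {@link watchFile}
- * returns an `fs.StatWatcher`, and `unref()` lets the process exit while the
- * watcher is still active.
- *
- * ```js
- * import { watchFile } from 'fs';
- *
- * const watcher = watchFile('data.txt', { interval: 1000 }, (curr, prev) => {
- *   console.log(`size changed from ${prev.size} to ${curr.size}`);
- * });
- * watcher.unref();
- * ```
- */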
- export interface FSWatcher extends EventEmitter {
- /**
- * Stop watching for changes on the given `fs.FSWatcher`. Once stopped, the `fs.FSWatcher` object is no longer usable.
- * @since v0.5.8
- */
- close(): void;
- /**
- * events.EventEmitter
- * 1. change
- * 2. error
- */
- addListener(event: string, listener: (...args: any[]) => void): this;
- addListener(event: 'change', listener: (eventType: string, filename: string | Buffer) => void): this;
- addListener(event: 'error', listener: (error: Error) => void): this;
- addListener(event: 'close', listener: () => void): this;
- on(event: string, listener: (...args: any[]) => void): this;
- on(event: 'change', listener: (eventType: string, filename: string | Buffer) => void): this;
- on(event: 'error', listener: (error: Error) => void): this;
- on(event: 'close', listener: () => void): this;
- once(event: string, listener: (...args: any[]) => void): this;
- once(event: 'change', listener: (eventType: string, filename: string | Buffer) => void): this;
- once(event: 'error', listener: (error: Error) => void): this;
- once(event: 'close', listener: () => void): this;
- prependListener(event: string, listener: (...args: any[]) => void): this;
- prependListener(event: 'change', listener: (eventType: string, filename: string | Buffer) => void): this;
- prependListener(event: 'error', listener: (error: Error) => void): this;
- prependListener(event: 'close', listener: () => void): this;
- prependOnceListener(event: string, listener: (...args: any[]) => void): this;
- prependOnceListener(event: 'change', listener: (eventType: string, filename: string | Buffer) => void): this;
- prependOnceListener(event: 'error', listener: (error: Error) => void): this;
- prependOnceListener(event: 'close', listener: () => void): this;
- }
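- /**
- * Usage sketch (illustrative addition, not part of the upstream typings; the
- * watched path './' and the 10 second timeout are assumed): `fs.watch()` returns
- * an `fs.FSWatcher` that emits `'change'` events until `close()` is called.
- *
- * ```js
- * import { watch } from 'fs';
- *
- * const watcher = watch('./', (eventType, filename) => {
- *   console.log(`${eventType} on ${filename}`);
- * });
- * setTimeout(() => watcher.close(), 10000);
- * ```
- */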
- /**
- * Instances of `fs.ReadStream` are created and returned using the {@link createReadStream} function.
- * @since v0.1.93
- */
- export class ReadStream extends stream.Readable {
- close(callback?: (err?: NodeJS.ErrnoException | null) => void): void;
- /**
- * The number of bytes that have been read so far.
- * @since v6.4.0
- */
- bytesRead: number;
- /**
- * The path to the file the stream is reading from as specified in the first
- * argument to `fs.createReadStream()`. If `path` is passed as a string, then`readStream.path` will be a string. If `path` is passed as a `Buffer`, then`readStream.path` will be a
- * `Buffer`. If `fd` is specified, then`readStream.path` will be `undefined`.
- * @since v0.1.93
- */
- path: string | Buffer;
- /**
- * This property is `true` if the underlying file has not been opened yet,
- * i.e. before the `'ready'` event is emitted.
- * @since v11.2.0, v10.16.0
- */
- pending: boolean;
- /**
- * events.EventEmitter
- * 1. open
- * 2. close
- * 3. ready
- */
- addListener(event: 'close', listener: () => void): this;
- addListener(event: 'data', listener: (chunk: Buffer | string) => void): this;
- addListener(event: 'end', listener: () => void): this;
- addListener(event: 'error', listener: (err: Error) => void): this;
- addListener(event: 'open', listener: (fd: number) => void): this;
- addListener(event: 'pause', listener: () => void): this;
- addListener(event: 'readable', listener: () => void): this;
- addListener(event: 'ready', listener: () => void): this;
- addListener(event: 'resume', listener: () => void): this;
- addListener(event: string | symbol, listener: (...args: any[]) => void): this;
- on(event: 'close', listener: () => void): this;
- on(event: 'data', listener: (chunk: Buffer | string) => void): this;
- on(event: 'end', listener: () => void): this;
- on(event: 'error', listener: (err: Error) => void): this;
- on(event: 'open', listener: (fd: number) => void): this;
- on(event: 'pause', listener: () => void): this;
- on(event: 'readable', listener: () => void): this;
- on(event: 'ready', listener: () => void): this;
- on(event: 'resume', listener: () => void): this;
- on(event: string | symbol, listener: (...args: any[]) => void): this;
- once(event: 'close', listener: () => void): this;
- once(event: 'data', listener: (chunk: Buffer | string) => void): this;
- once(event: 'end', listener: () => void): this;
- once(event: 'error', listener: (err: Error) => void): this;
- once(event: 'open', listener: (fd: number) => void): this;
- once(event: 'pause', listener: () => void): this;
- once(event: 'readable', listener: () => void): this;
- once(event: 'ready', listener: () => void): this;
- once(event: 'resume', listener: () => void): this;
- once(event: string | symbol, listener: (...args: any[]) => void): this;
- prependListener(event: 'close', listener: () => void): this;
- prependListener(event: 'data', listener: (chunk: Buffer | string) => void): this;
- prependListener(event: 'end', listener: () => void): this;
- prependListener(event: 'error', listener: (err: Error) => void): this;
- prependListener(event: 'open', listener: (fd: number) => void): this;
- prependListener(event: 'pause', listener: () => void): this;
- prependListener(event: 'readable', listener: () => void): this;
- prependListener(event: 'ready', listener: () => void): this;
- prependListener(event: 'resume', listener: () => void): this;
- prependListener(event: string | symbol, listener: (...args: any[]) => void): this;
- prependOnceListener(event: 'close', listener: () => void): this;
- prependOnceListener(event: 'data', listener: (chunk: Buffer | string) => void): this;
- prependOnceListener(event: 'end', listener: () => void): this;
- prependOnceListener(event: 'error', listener: (err: Error) => void): this;
- prependOnceListener(event: 'open', listener: (fd: number) => void): this;
- prependOnceListener(event: 'pause', listener: () => void): this;
- prependOnceListener(event: 'readable', listener: () => void): this;
- prependOnceListener(event: 'ready', listener: () => void): this;
- prependOnceListener(event: 'resume', listener: () => void): this;
- prependOnceListener(event: string | symbol, listener: (...args: any[]) => void): this;
- }
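- /**
- * Usage sketch (illustrative addition, not part of the upstream typings; the
- * file name 'data.txt' is assumed): a `ReadStream` emits `'open'` with the file
- * descriptor, then `'data'` chunks, and `bytesRead` reports progress.
- *
- * ```js
- * import { createReadStream } from 'fs';
- *
- * const stream = createReadStream('data.txt', { encoding: 'utf8' });
- * stream.on('open', (fd) => console.log(`opened fd ${fd}`));
- * stream.on('data', (chunk) => console.log(`read ${chunk.length} characters`));
- * stream.on('end', () => console.log(`bytesRead: ${stream.bytesRead}`));
- * ```
- */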
- /**
- * * Extends `stream.Writable`
- *
- * Instances of `fs.WriteStream` are created and returned using the {@link createWriteStream} function.
- * @since v0.1.93
- */
- export class WriteStream extends stream.Writable {
- /**
- * Closes `writeStream`. Optionally accepts a
- * callback that will be executed once the `writeStream`is closed.
- * @since v0.9.4
- */
- close(callback?: (err?: NodeJS.ErrnoException | null) => void): void;
- /**
- * The number of bytes written so far. Does not include data that is still queued
- * for writing.
- * @since v0.4.7
- */
- bytesWritten: number;
- /**
- * The path to the file the stream is writing to as specified in the first
- * argument to {@link createWriteStream}. If `path` is passed as a string, then`writeStream.path` will be a string. If `path` is passed as a `Buffer`, then`writeStream.path` will be a
- * `Buffer`.
- * @since v0.1.93
- */
- path: string | Buffer;
- /**
- * This property is `true` if the underlying file has not been opened yet,
- * i.e. before the `'ready'` event is emitted.
- * @since v11.2.0
- */
- pending: boolean;
- /**
- * events.EventEmitter
- * 1. open
- * 2. close
- * 3. ready
- */
- addListener(event: 'close', listener: () => void): this;
- addListener(event: 'drain', listener: () => void): this;
- addListener(event: 'error', listener: (err: Error) => void): this;
- addListener(event: 'finish', listener: () => void): this;
- addListener(event: 'open', listener: (fd: number) => void): this;
- addListener(event: 'pipe', listener: (src: stream.Readable) => void): this;
- addListener(event: 'ready', listener: () => void): this;
- addListener(event: 'unpipe', listener: (src: stream.Readable) => void): this;
- addListener(event: string | symbol, listener: (...args: any[]) => void): this;
- on(event: 'close', listener: () => void): this;
- on(event: 'drain', listener: () => void): this;
- on(event: 'error', listener: (err: Error) => void): this;
- on(event: 'finish', listener: () => void): this;
- on(event: 'open', listener: (fd: number) => void): this;
- on(event: 'pipe', listener: (src: stream.Readable) => void): this;
- on(event: 'ready', listener: () => void): this;
- on(event: 'unpipe', listener: (src: stream.Readable) => void): this;
- on(event: string | symbol, listener: (...args: any[]) => void): this;
- once(event: 'close', listener: () => void): this;
- once(event: 'drain', listener: () => void): this;
- once(event: 'error', listener: (err: Error) => void): this;
- once(event: 'finish', listener: () => void): this;
- once(event: 'open', listener: (fd: number) => void): this;
- once(event: 'pipe', listener: (src: stream.Readable) => void): this;
- once(event: 'ready', listener: () => void): this;
- once(event: 'unpipe', listener: (src: stream.Readable) => void): this;
- once(event: string | symbol, listener: (...args: any[]) => void): this;
- prependListener(event: 'close', listener: () => void): this;
- prependListener(event: 'drain', listener: () => void): this;
- prependListener(event: 'error', listener: (err: Error) => void): this;
- prependListener(event: 'finish', listener: () => void): this;
- prependListener(event: 'open', listener: (fd: number) => void): this;
- prependListener(event: 'pipe', listener: (src: stream.Readable) => void): this;
- prependListener(event: 'ready', listener: () => void): this;
- prependListener(event: 'unpipe', listener: (src: stream.Readable) => void): this;
- prependListener(event: string | symbol, listener: (...args: any[]) => void): this;
- prependOnceListener(event: 'close', listener: () => void): this;
- prependOnceListener(event: 'drain', listener: () => void): this;
- prependOnceListener(event: 'error', listener: (err: Error) => void): this;
- prependOnceListener(event: 'finish', listener: () => void): this;
- prependOnceListener(event: 'open', listener: (fd: number) => void): this;
- prependOnceListener(event: 'pipe', listener: (src: stream.Readable) => void): this;
- prependOnceListener(event: 'ready', listener: () => void): this;
- prependOnceListener(event: 'unpipe', listener: (src: stream.Readable) => void): this;
- prependOnceListener(event: string | symbol, listener: (...args: any[]) => void): this;
- }
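- /**
- * Usage sketch (illustrative addition, not part of the upstream typings; the
- * file name 'out.log' is assumed): a `WriteStream` opened with the `'a'` flag
- * appends, and `bytesWritten` excludes data still queued for writing.
- *
- * ```js
- * import { createWriteStream } from 'fs';
- *
- * const out = createWriteStream('out.log', { flags: 'a' });
- * out.write('first line\n');
- * out.end('last line\n', () => {
- *   console.log(`bytesWritten: ${out.bytesWritten}`);
- * });
- * ```
- */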
- /**
- * Asynchronously rename file at `oldPath` to the pathname provided
- * as `newPath`. In the case that `newPath` already exists, it will
- * be overwritten. If there is a directory at `newPath`, an error will
- * be raised instead. No arguments other than a possible exception are
- * given to the completion callback.
- *
- * See also: [`rename(2)`](http://man7.org/linux/man-pages/man2/rename.2.html).
- *
- * ```js
- * import { rename } from 'fs';
- *
- * rename('oldFile.txt', 'newFile.txt', (err) => {
- * if (err) throw err;
- * console.log('Rename complete!');
- * });
- * ```
- * @since v0.0.2
- */
- export function rename(oldPath: PathLike, newPath: PathLike, callback: NoParamCallback): void;
- export namespace rename {
- /**
- * Asynchronous rename(2) - Change the name or location of a file or directory.
- * @param oldPath A path to a file. If a URL is provided, it must use the `file:` protocol.
- * URL support is _experimental_.
- * @param newPath A path to a file. If a URL is provided, it must use the `file:` protocol.
- * URL support is _experimental_.
- */
- function __promisify__(oldPath: PathLike, newPath: PathLike): Promise<void>;
- }
- /**
- * Renames the file from `oldPath` to `newPath`. Returns `undefined`.
- *
- * See the POSIX [`rename(2)`](http://man7.org/linux/man-pages/man2/rename.2.html) documentation for more details.
- * @since v0.1.21
- */
- export function renameSync(oldPath: PathLike, newPath: PathLike): void;
- /**
- * Truncates the file. No arguments other than a possible exception are
- * given to the completion callback. A file descriptor can also be passed as the
- * first argument. In this case, `fs.ftruncate()` is called.
- *
- * ```js
- * import { truncate } from 'fs';
- * // Assuming that 'path/file.txt' is a regular file.
- * truncate('path/file.txt', (err) => {
- * if (err) throw err;
- * console.log('path/file.txt was truncated');
- * });
- * ```
- *
- * Passing a file descriptor is deprecated and may result in an error being thrown
- * in the future.
- *
- * See the POSIX [`truncate(2)`](http://man7.org/linux/man-pages/man2/truncate.2.html) documentation for more details.
- * @since v0.8.6
- * @param [len=0]
- */
- export function truncate(path: PathLike, len: number | undefined | null, callback: NoParamCallback): void;
- /**
- * Asynchronous truncate(2) - Truncate a file to a specified length.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- */
- export function truncate(path: PathLike, callback: NoParamCallback): void;
- export namespace truncate {
- /**
- * Asynchronous truncate(2) - Truncate a file to a specified length.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- * @param len If not specified, defaults to `0`.
- */
- function __promisify__(path: PathLike, len?: number | null): Promise<void>;
- }
- /**
- * Truncates the file. Returns `undefined`. A file descriptor can also be
- * passed as the first argument. In this case, `fs.ftruncateSync()` is called.
- *
- * Passing a file descriptor is deprecated and may result in an error being thrown
- * in the future.
- * @since v0.8.6
- * @param [len=0]
- */
- export function truncateSync(path: PathLike, len?: number | null): void;
- /**
- * Truncates the file descriptor. No arguments other than a possible exception are
- * given to the completion callback.
- *
- * See the POSIX [`ftruncate(2)`](http://man7.org/linux/man-pages/man2/ftruncate.2.html) documentation for more detail.
- *
- * If the file referred to by the file descriptor was larger than `len` bytes, only
- * the first `len` bytes will be retained in the file.
- *
- * For example, the following program retains only the first four bytes of the
- * file:
- *
- * ```js
- * import { open, close, ftruncate } from 'fs';
- *
- * function closeFd(fd) {
- * close(fd, (err) => {
- * if (err) throw err;
- * });
- * }
- *
- * open('temp.txt', 'r+', (err, fd) => {
- * if (err) throw err;
- *
- * try {
- * ftruncate(fd, 4, (err) => {
- * closeFd(fd);
- * if (err) throw err;
- * });
- * } catch (err) {
- * closeFd(fd);
- * if (err) throw err;
- * }
- * });
- * ```
- *
- * If the file previously was shorter than `len` bytes, it is extended, and the
- * extended part is filled with null bytes (`'\0'`):
- *
- * If `len` is negative then `0` will be used.
- * @since v0.8.6
- * @param [len=0]
- */
- export function ftruncate(fd: number, len: number | undefined | null, callback: NoParamCallback): void;
- /**
- * Asynchronous ftruncate(2) - Truncate a file to a specified length.
- * @param fd A file descriptor.
- */
- export function ftruncate(fd: number, callback: NoParamCallback): void;
- export namespace ftruncate {
- /**
- * Asynchronous ftruncate(2) - Truncate a file to a specified length.
- * @param fd A file descriptor.
- * @param len If not specified, defaults to `0`.
- */
- function __promisify__(fd: number, len?: number | null): Promise<void>;
- }
- /**
- * Truncates the file descriptor. Returns `undefined`.
- *
- * For detailed information, see the documentation of the asynchronous version of
- * this API: {@link ftruncate}.
- * @since v0.8.6
- * @param [len=0]
- */
- export function ftruncateSync(fd: number, len?: number | null): void;
- /**
- * Asynchronously changes owner and group of a file. No arguments other than a
- * possible exception are given to the completion callback.
- *
- * See the POSIX [`chown(2)`](http://man7.org/linux/man-pages/man2/chown.2.html) documentation for more detail.
- * @since v0.1.97
- */
- export function chown(path: PathLike, uid: number, gid: number, callback: NoParamCallback): void;
- export namespace chown {
- /**
- * Asynchronous chown(2) - Change ownership of a file.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- */
- function __promisify__(path: PathLike, uid: number, gid: number): Promise<void>;
- }
- /**
- * Synchronously changes owner and group of a file. Returns `undefined`.
- * This is the synchronous version of {@link chown}.
- *
- * See the POSIX [`chown(2)`](http://man7.org/linux/man-pages/man2/chown.2.html) documentation for more detail.
- * @since v0.1.97
- */
- export function chownSync(path: PathLike, uid: number, gid: number): void;
- /**
- * Sets the owner of the file. No arguments other than a possible exception are
- * given to the completion callback.
- *
- * See the POSIX [`fchown(2)`](http://man7.org/linux/man-pages/man2/fchown.2.html) documentation for more detail.
- * @since v0.4.7
- */
- export function fchown(fd: number, uid: number, gid: number, callback: NoParamCallback): void;
- export namespace fchown {
- /**
- * Asynchronous fchown(2) - Change ownership of a file.
- * @param fd A file descriptor.
- */
- function __promisify__(fd: number, uid: number, gid: number): Promise<void>;
- }
- /**
- * Sets the owner of the file. Returns `undefined`.
- *
- * See the POSIX [`fchown(2)`](http://man7.org/linux/man-pages/man2/fchown.2.html) documentation for more detail.
- * @since v0.4.7
- * @param uid The file's new owner's user id.
- * @param gid The file's new group's group id.
- */
- export function fchownSync(fd: number, uid: number, gid: number): void;
- /**
- * Set the owner of the symbolic link. No arguments other than a possible
- * exception are given to the completion callback.
- *
- * See the POSIX [`lchown(2)`](http://man7.org/linux/man-pages/man2/lchown.2.html) documentation for more detail.
- */
- export function lchown(path: PathLike, uid: number, gid: number, callback: NoParamCallback): void;
- export namespace lchown {
- /**
- * Asynchronous lchown(2) - Change ownership of a file. Does not dereference symbolic links.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- */
- function __promisify__(path: PathLike, uid: number, gid: number): Promise<void>;
- }
- /**
- * Set the owner for the path. Returns `undefined`.
- *
- * See the POSIX [`lchown(2)`](http://man7.org/linux/man-pages/man2/lchown.2.html) documentation for more details.
- * @param uid The file's new owner's user id.
- * @param gid The file's new group's group id.
- */
- export function lchownSync(path: PathLike, uid: number, gid: number): void;
- /**
- * Changes the access and modification times of a file in the same way as {@link utimes}, with the difference that if the path refers to a symbolic
- * link, then the link is not dereferenced: instead, the timestamps of the
- * symbolic link itself are changed.
- *
- * No arguments other than a possible exception are given to the completion
- * callback.
- * @since v14.5.0, v12.19.0
- */
- export function lutimes(path: PathLike, atime: TimeLike, mtime: TimeLike, callback: NoParamCallback): void;
- export namespace lutimes {
- /**
- * Changes the access and modification times of a file in the same way as `fsPromises.utimes()`,
- * with the difference that if the path refers to a symbolic link, then the link is not
- * dereferenced: instead, the timestamps of the symbolic link itself are changed.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- * @param atime The last access time. If a string is provided, it will be coerced to number.
- * @param mtime The last modified time. If a string is provided, it will be coerced to number.
- */
- function __promisify__(path: PathLike, atime: TimeLike, mtime: TimeLike): Promise<void>;
- }
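- /**
- * Usage sketch (illustrative addition, not part of the upstream typings; the
- * path './some-symlink' is assumed): `lutimes()` updates the symbolic link's own
- * timestamps without dereferencing it.
- *
- * ```js
- * import { lutimes } from 'fs';
- *
- * lutimes('./some-symlink', new Date(), new Date(), (err) => {
- *   if (err) throw err;
- *   console.log('symlink timestamps updated');
- * });
- * ```
- */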
- /**
- * Change the file system timestamps of the symbolic link referenced by `path`.
- * Returns `undefined`, or throws an exception when parameters are incorrect or
- * the operation fails. This is the synchronous version of {@link lutimes}.
- * @since v14.5.0, v12.19.0
- */
- export function lutimesSync(path: PathLike, atime: TimeLike, mtime: TimeLike): void;
- /**
- * Asynchronously changes the permissions of a file. No arguments other than a
- * possible exception are given to the completion callback.
- *
- * See the POSIX [`chmod(2)`](http://man7.org/linux/man-pages/man2/chmod.2.html) documentation for more detail.
- *
- * ```js
- * import { chmod } from 'fs';
- *
- * chmod('my_file.txt', 0o775, (err) => {
- * if (err) throw err;
- * console.log('The permissions for file "my_file.txt" have been changed!');
- * });
- * ```
- * @since v0.1.30
- */
- export function chmod(path: PathLike, mode: Mode, callback: NoParamCallback): void;
- export namespace chmod {
- /**
- * Asynchronous chmod(2) - Change permissions of a file.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- * @param mode A file mode. If a string is passed, it is parsed as an octal integer.
- */
- function __promisify__(path: PathLike, mode: Mode): Promise<void>;
- }
- /**
- * For detailed information, see the documentation of the asynchronous version of
- * this API: {@link chmod}.
- *
- * See the POSIX [`chmod(2)`](http://man7.org/linux/man-pages/man2/chmod.2.html) documentation for more detail.
- * @since v0.6.7
- */
- export function chmodSync(path: PathLike, mode: Mode): void;
- /**
- * Sets the permissions on the file. No arguments other than a possible exception
- * are given to the completion callback.
- *
- * See the POSIX [`fchmod(2)`](http://man7.org/linux/man-pages/man2/fchmod.2.html) documentation for more detail.
- * @since v0.4.7
- */
- export function fchmod(fd: number, mode: Mode, callback: NoParamCallback): void;
- export namespace fchmod {
- /**
- * Asynchronous fchmod(2) - Change permissions of a file.
- * @param fd A file descriptor.
- * @param mode A file mode. If a string is passed, it is parsed as an octal integer.
- */
- function __promisify__(fd: number, mode: Mode): Promise<void>;
- }
- /**
- * Sets the permissions on the file. Returns `undefined`.
- *
- * See the POSIX [`fchmod(2)`](http://man7.org/linux/man-pages/man2/fchmod.2.html) documentation for more detail.
- * @since v0.4.7
- */
- export function fchmodSync(fd: number, mode: Mode): void;
- /**
- * Changes the permissions on a symbolic link. No arguments other than a possible
- * exception are given to the completion callback.
- *
- * This method is only implemented on macOS.
- *
- * See the POSIX [`lchmod(2)`](https://www.freebsd.org/cgi/man.cgi?query=lchmod&sektion=2) documentation for more detail.
- * @deprecated Since v0.4.7
- */
- export function lchmod(path: PathLike, mode: Mode, callback: NoParamCallback): void;
- /** @deprecated */
- export namespace lchmod {
- /**
- * Asynchronous lchmod(2) - Change permissions of a file. Does not dereference symbolic links.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- * @param mode A file mode. If a string is passed, it is parsed as an octal integer.
- */
- function __promisify__(path: PathLike, mode: Mode): Promise<void>;
- }
- /**
- * Changes the permissions on a symbolic link. Returns `undefined`.
- *
- * This method is only implemented on macOS.
- *
- * See the POSIX [`lchmod(2)`](https://www.freebsd.org/cgi/man.cgi?query=lchmod&sektion=2) documentation for more detail.
- * @deprecated Since v0.4.7
- */
- export function lchmodSync(path: PathLike, mode: Mode): void;
- /**
- * Asynchronous [`stat(2)`](http://man7.org/linux/man-pages/man2/stat.2.html). The callback gets two arguments `(err, stats)` where`stats` is an `fs.Stats` object.
- *
- * In case of an error, the `err.code` will be one of `Common System Errors`.
- *
- * Using `fs.stat()` to check for the existence of a file before calling`fs.open()`, `fs.readFile()` or `fs.writeFile()` is not recommended.
- * Instead, user code should open/read/write the file directly and handle the
- * error raised if the file is not available.
- *
- * To check if a file exists without manipulating it afterwards, {@link access} is recommended.
- *
- * For example, given the following directory structure:
- *
- * ```text
- * - txtDir
- * -- file.txt
- * - app.js
- * ```
- *
- * The next program will check for the stats of the given paths:
- *
- * ```js
- * import { stat } from 'fs';
- *
- * const pathsToCheck = ['./txtDir', './txtDir/file.txt'];
- *
- * for (let i = 0; i < pathsToCheck.length; i++) {
- * stat(pathsToCheck[i], (err, stats) => {
- * console.log(stats.isDirectory());
- * console.log(stats);
- * });
- * }
- * ```
- *
- * The resulting output will resemble:
- *
- * ```console
- * true
- * Stats {
- * dev: 16777220,
- * mode: 16877,
- * nlink: 3,
- * uid: 501,
- * gid: 20,
- * rdev: 0,
- * blksize: 4096,
- * ino: 14214262,
- * size: 96,
- * blocks: 0,
- * atimeMs: 1561174653071.963,
- * mtimeMs: 1561174614583.3518,
- * ctimeMs: 1561174626623.5366,
- * birthtimeMs: 1561174126937.2893,
- * atime: 2019-06-22T03:37:33.072Z,
- * mtime: 2019-06-22T03:36:54.583Z,
- * ctime: 2019-06-22T03:37:06.624Z,
- * birthtime: 2019-06-22T03:28:46.937Z
- * }
- * false
- * Stats {
- * dev: 16777220,
- * mode: 33188,
- * nlink: 1,
- * uid: 501,
- * gid: 20,
- * rdev: 0,
- * blksize: 4096,
- * ino: 14214074,
- * size: 8,
- * blocks: 8,
- * atimeMs: 1561174616618.8555,
- * mtimeMs: 1561174614584,
- * ctimeMs: 1561174614583.8145,
- * birthtimeMs: 1561174007710.7478,
- * atime: 2019-06-22T03:36:56.619Z,
- * mtime: 2019-06-22T03:36:54.584Z,
- * ctime: 2019-06-22T03:36:54.584Z,
- * birthtime: 2019-06-22T03:26:47.711Z
- * }
- * ```
- * @since v0.0.2
- */
- export function stat(path: PathLike, callback: (err: NodeJS.ErrnoException | null, stats: Stats) => void): void;
- export function stat(
- path: PathLike,
- options:
- | (StatOptions & {
- bigint?: false | undefined;
- })
- | undefined,
- callback: (err: NodeJS.ErrnoException | null, stats: Stats) => void
- ): void;
- export function stat(
- path: PathLike,
- options: StatOptions & {
- bigint: true;
- },
- callback: (err: NodeJS.ErrnoException | null, stats: BigIntStats) => void
- ): void;
- export function stat(path: PathLike, options: StatOptions | undefined, callback: (err: NodeJS.ErrnoException | null, stats: Stats | BigIntStats) => void): void;
- export namespace stat {
- /**
- * Asynchronous stat(2) - Get file status.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- */
- function __promisify__(
- path: PathLike,
- options?: StatOptions & {
- bigint?: false | undefined;
- }
- ): Promise<Stats>;
- function __promisify__(
- path: PathLike,
- options: StatOptions & {
- bigint: true;
- }
- ): Promise<BigIntStats>;
- function __promisify__(path: PathLike, options?: StatOptions): Promise<Stats | BigIntStats>;
- }
- export interface StatSyncFn extends Function {
- (path: PathLike, options?: undefined): Stats;
- (
- path: PathLike,
- options?: StatSyncOptions & {
- bigint?: false | undefined;
- throwIfNoEntry: false;
- }
- ): Stats | undefined;
- (
- path: PathLike,
- options: StatSyncOptions & {
- bigint: true;
- throwIfNoEntry: false;
- }
- ): BigIntStats | undefined;
- (
- path: PathLike,
- options?: StatSyncOptions & {
- bigint?: false | undefined;
- }
- ): Stats;
- (
- path: PathLike,
- options: StatSyncOptions & {
- bigint: true;
- }
- ): BigIntStats;
- (
- path: PathLike,
- options: StatSyncOptions & {
- bigint: boolean;
- throwIfNoEntry?: false | undefined;
- }
- ): Stats | BigIntStats;
- (path: PathLike, options?: StatSyncOptions): Stats | BigIntStats | undefined;
- }
- /**
- * Synchronous stat(2) - Get file status.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- */
- export const statSync: StatSyncFn;
- /**
- * Invokes the callback with the `fs.Stats` for the file descriptor.
- *
- * See the POSIX [`fstat(2)`](http://man7.org/linux/man-pages/man2/fstat.2.html) documentation for more detail.
- * @since v0.1.95
- */
- export function fstat(fd: number, callback: (err: NodeJS.ErrnoException | null, stats: Stats) => void): void;
- export function fstat(
- fd: number,
- options:
- | (StatOptions & {
- bigint?: false | undefined;
- })
- | undefined,
- callback: (err: NodeJS.ErrnoException | null, stats: Stats) => void
- ): void;
- export function fstat(
- fd: number,
- options: StatOptions & {
- bigint: true;
- },
- callback: (err: NodeJS.ErrnoException | null, stats: BigIntStats) => void
- ): void;
- export function fstat(fd: number, options: StatOptions | undefined, callback: (err: NodeJS.ErrnoException | null, stats: Stats | BigIntStats) => void): void;
- export namespace fstat {
- /**
- * Asynchronous fstat(2) - Get file status.
- * @param fd A file descriptor.
- */
- function __promisify__(
- fd: number,
- options?: StatOptions & {
- bigint?: false | undefined;
- }
- ): Promise<Stats>;
- function __promisify__(
- fd: number,
- options: StatOptions & {
- bigint: true;
- }
- ): Promise<BigIntStats>;
- function __promisify__(fd: number, options?: StatOptions): Promise<Stats | BigIntStats>;
- }
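- /**
- * Usage sketch (illustrative addition, not part of the upstream typings; the
- * file name 'temp.txt' is assumed): `fstat()` takes a file descriptor obtained
- * from `open()` rather than a path.
- *
- * ```js
- * import { open, fstat, close } from 'fs';
- *
- * open('temp.txt', 'r', (err, fd) => {
- *   if (err) throw err;
- *   fstat(fd, (err, stats) => {
- *     if (err) throw err;
- *     console.log(`size: ${stats.size} bytes`);
- *     close(fd, (err) => {
- *       if (err) throw err;
- *     });
- *   });
- * });
- * ```
- */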
- /**
- * Retrieves the `fs.Stats` for the file descriptor.
- *
- * See the POSIX [`fstat(2)`](http://man7.org/linux/man-pages/man2/fstat.2.html) documentation for more detail.
- * @since v0.1.95
- */
- export function fstatSync(
- fd: number,
- options?: StatOptions & {
- bigint?: false | undefined;
- }
- ): Stats;
- export function fstatSync(
- fd: number,
- options: StatOptions & {
- bigint: true;
- }
- ): BigIntStats;
- export function fstatSync(fd: number, options?: StatOptions): Stats | BigIntStats;
- /**
- * Retrieves the `fs.Stats` for the symbolic link referred to by the path.
- * The callback gets two arguments `(err, stats)` where `stats` is a `fs.Stats` object. `lstat()` is identical to `stat()`, except that if `path` is a symbolic
- * link, then the link itself is stat-ed, not the file that it refers to.
- *
- * See the POSIX [`lstat(2)`](http://man7.org/linux/man-pages/man2/lstat.2.html) documentation for more details.
- * @since v0.1.30
- */
- export function lstat(path: PathLike, callback: (err: NodeJS.ErrnoException | null, stats: Stats) => void): void;
- export function lstat(
- path: PathLike,
- options:
- | (StatOptions & {
- bigint?: false | undefined;
- })
- | undefined,
- callback: (err: NodeJS.ErrnoException | null, stats: Stats) => void
- ): void;
- export function lstat(
- path: PathLike,
- options: StatOptions & {
- bigint: true;
- },
- callback: (err: NodeJS.ErrnoException | null, stats: BigIntStats) => void
- ): void;
- export function lstat(path: PathLike, options: StatOptions | undefined, callback: (err: NodeJS.ErrnoException | null, stats: Stats | BigIntStats) => void): void;
- export namespace lstat {
- /**
- * Asynchronous lstat(2) - Get file status. Does not dereference symbolic links.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- */
- function __promisify__(
- path: PathLike,
- options?: StatOptions & {
- bigint?: false | undefined;
- }
- ): Promise<Stats>;
- function __promisify__(
- path: PathLike,
- options: StatOptions & {
- bigint: true;
- }
- ): Promise<BigIntStats>;
- function __promisify__(path: PathLike, options?: StatOptions): Promise<Stats | BigIntStats>;
- }
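- /**
- * Usage sketch (illustrative addition, not part of the upstream typings; the
- * path './mewtwo' reuses the symlink from the {@link symlink} example below):
- * `lstat()` reports on the link itself rather than its target.
- *
- * ```js
- * import { lstat } from 'fs';
- *
- * lstat('./mewtwo', (err, stats) => {
- *   if (err) throw err;
- *   console.log(stats.isSymbolicLink()); // true for a symbolic link
- * });
- * ```
- */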
- /**
- * Synchronous lstat(2) - Get file status. Does not dereference symbolic links.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- */
- export const lstatSync: StatSyncFn;
- /**
- * Creates a new link from the `existingPath` to the `newPath`. See the POSIX [`link(2)`](http://man7.org/linux/man-pages/man2/link.2.html) documentation for more detail. No arguments other than
- * a possible
- * exception are given to the completion callback.
- * @since v0.1.31
- */
- export function link(existingPath: PathLike, newPath: PathLike, callback: NoParamCallback): void;
- export namespace link {
- /**
- * Asynchronous link(2) - Create a new link (also known as a hard link) to an existing file.
- * @param existingPath A path to a file. If a URL is provided, it must use the `file:` protocol.
- * @param newPath A path to a file. If a URL is provided, it must use the `file:` protocol.
- */
- function __promisify__(existingPath: PathLike, newPath: PathLike): Promise<void>;
- }
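- /**
- * Usage sketch (illustrative addition, not part of the upstream typings; the
- * file names 'original.txt' and 'copy.txt' are assumed): a hard link makes both
- * paths refer to the same underlying file.
- *
- * ```js
- * import { link } from 'fs';
- *
- * link('original.txt', 'copy.txt', (err) => {
- *   if (err) throw err;
- *   console.log('hard link created');
- * });
- * ```
- */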
- /**
- * Creates a new link from the `existingPath` to the `newPath`. See the POSIX [`link(2)`](http://man7.org/linux/man-pages/man2/link.2.html) documentation for more detail. Returns `undefined`.
- * @since v0.1.31
- */
- export function linkSync(existingPath: PathLike, newPath: PathLike): void;
- /**
- * Creates the link called `path` pointing to `target`. No arguments other than a
- * possible exception are given to the completion callback.
- *
- * See the POSIX [`symlink(2)`](http://man7.org/linux/man-pages/man2/symlink.2.html) documentation for more details.
- *
- * The `type` argument is only available on Windows and ignored on other platforms.
- * It can be set to `'dir'`, `'file'`, or `'junction'`. If the `type` argument is
- * not set, Node.js will autodetect `target` type and use `'file'` or `'dir'`. If
- * the `target` does not exist, `'file'` will be used. Windows junction points
- * require the destination path to be absolute. When using `'junction'`, the`target` argument will automatically be normalized to absolute path.
- *
- * Relative targets are relative to the link’s parent directory.
- *
- * ```js
- * import { symlink } from 'fs';
- *
- * symlink('./mew', './mewtwo', callback);
- * ```
- *
- * The above example creates a symbolic link `mewtwo` which points to `mew` in the
- * same directory:
- *
- * ```bash
- * $ tree .
- * .
- * ├── mew
- * └── mewtwo -> ./mew
- * ```
- * @since v0.1.31
- */
- export function symlink(target: PathLike, path: PathLike, type: symlink.Type | undefined | null, callback: NoParamCallback): void;
- /**
- * Asynchronous symlink(2) - Create a new symbolic link to an existing file.
- * @param target A path to an existing file. If a URL is provided, it must use the `file:` protocol.
- * @param path A path to the new symlink. If a URL is provided, it must use the `file:` protocol.
- */
- export function symlink(target: PathLike, path: PathLike, callback: NoParamCallback): void;
- export namespace symlink {
- /**
- * Asynchronous symlink(2) - Create a new symbolic link to an existing file.
- * @param target A path to an existing file. If a URL is provided, it must use the `file:` protocol.
- * @param path A path to the new symlink. If a URL is provided, it must use the `file:` protocol.
- * @param type May be set to `'dir'`, `'file'`, or `'junction'` (default is `'file'`) and is only available on Windows (ignored on other platforms).
- * When using `'junction'`, the `target` argument will automatically be normalized to an absolute path.
- */
- function __promisify__(target: PathLike, path: PathLike, type?: string | null): Promise<void>;
- type Type = 'dir' | 'file' | 'junction';
- }
- /**
- * Returns `undefined`.
- *
- * For detailed information, see the documentation of the asynchronous version of
- * this API: {@link symlink}.
- * @since v0.1.31
- */
- export function symlinkSync(target: PathLike, path: PathLike, type?: symlink.Type | null): void;
- /**
- * Reads the contents of the symbolic link referred to by `path`. The callback gets
- * two arguments `(err, linkString)`.
- *
- * See the POSIX [`readlink(2)`](http://man7.org/linux/man-pages/man2/readlink.2.html) documentation for more details.
- *
- * The optional `options` argument can be a string specifying an encoding, or an
- * object with an `encoding` property specifying the character encoding to use for
- * the link path passed to the callback. If the `encoding` is set to `'buffer'`,
- * the link path returned will be passed as a `Buffer` object.
- * @since v0.1.31
- */
- export function readlink(path: PathLike, options: EncodingOption, callback: (err: NodeJS.ErrnoException | null, linkString: string) => void): void;
- /**
- * Asynchronous readlink(2) - read value of a symbolic link.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- * @param options The encoding (or an object specifying the encoding), used as the encoding of the result. If not provided, `'utf8'` is used.
- */
- export function readlink(path: PathLike, options: BufferEncodingOption, callback: (err: NodeJS.ErrnoException | null, linkString: Buffer) => void): void;
- /**
- * Asynchronous readlink(2) - read value of a symbolic link.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- * @param options The encoding (or an object specifying the encoding), used as the encoding of the result. If not provided, `'utf8'` is used.
- */
- export function readlink(path: PathLike, options: EncodingOption, callback: (err: NodeJS.ErrnoException | null, linkString: string | Buffer) => void): void;
- /**
- * Asynchronous readlink(2) - read value of a symbolic link.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- */
- export function readlink(path: PathLike, callback: (err: NodeJS.ErrnoException | null, linkString: string) => void): void;
- export namespace readlink {
- /**
- * Asynchronous readlink(2) - read value of a symbolic link.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- * @param options The encoding (or an object specifying the encoding), used as the encoding of the result. If not provided, `'utf8'` is used.
- */
- function __promisify__(path: PathLike, options?: EncodingOption): Promise<string>;
- /**
- * Asynchronous readlink(2) - read value of a symbolic link.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- * @param options The encoding (or an object specifying the encoding), used as the encoding of the result. If not provided, `'utf8'` is used.
- */
- function __promisify__(path: PathLike, options: BufferEncodingOption): Promise<Buffer>;
- /**
- * Asynchronous readlink(2) - read value of a symbolic link.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- * @param options The encoding (or an object specifying the encoding), used as the encoding of the result. If not provided, `'utf8'` is used.
- */
- function __promisify__(path: PathLike, options?: EncodingOption): Promise<string | Buffer>;
- }
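- /**
- * Usage sketch (illustrative addition, not part of the upstream typings; the
- * path './mewtwo' reuses the symlink from the {@link symlink} example above):
- * `readlink()` passes the link's target string to the callback.
- *
- * ```js
- * import { readlink } from 'fs';
- *
- * readlink('./mewtwo', (err, linkString) => {
- *   if (err) throw err;
- *   console.log(linkString); // e.g. './mew'
- * });
- * ```
- */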
- /**
- * Returns the symbolic link's string value.
- *
- * See the POSIX [`readlink(2)`](http://man7.org/linux/man-pages/man2/readlink.2.html) documentation for more details.
- *
- * The optional `options` argument can be a string specifying an encoding, or an
- * object with an `encoding` property specifying the character encoding to use for
- * the link path returned. If the `encoding` is set to `'buffer'`,
- * the link path returned will be passed as a `Buffer` object.
- * @since v0.1.31
- */
- export function readlinkSync(path: PathLike, options?: EncodingOption): string;
- /**
- * Synchronous readlink(2) - read value of a symbolic link.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- * @param options The encoding (or an object specifying the encoding), used as the encoding of the result. If not provided, `'utf8'` is used.
- */
- export function readlinkSync(path: PathLike, options: BufferEncodingOption): Buffer;
- /**
- * Synchronous readlink(2) - read value of a symbolic link.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- * @param options The encoding (or an object specifying the encoding), used as the encoding of the result. If not provided, `'utf8'` is used.
- */
- export function readlinkSync(path: PathLike, options?: EncodingOption): string | Buffer;
- /**
- * Asynchronously computes the canonical pathname by resolving `.`, `..` and
- * symbolic links.
- *
- * A canonical pathname is not necessarily unique. Hard links and bind mounts can
- * expose a file system entity through many pathnames.
- *
- * This function behaves like [`realpath(3)`](http://man7.org/linux/man-pages/man3/realpath.3.html), with some exceptions:
- *
- * 1. No case conversion is performed on case-insensitive file systems.
- * 2. The maximum number of symbolic links is platform-independent and generally
- * (much) higher than what the native [`realpath(3)`](http://man7.org/linux/man-pages/man3/realpath.3.html) implementation supports.
- *
- * The `callback` gets two arguments `(err, resolvedPath)`. May use `process.cwd`to resolve relative paths.
- *
- * Only paths that can be converted to UTF8 strings are supported.
- *
- * The optional `options` argument can be a string specifying an encoding, or an
- * object with an `encoding` property specifying the character encoding to use for
- * the path passed to the callback. If the `encoding` is set to `'buffer'`,
- * the path returned will be passed as a `Buffer` object.
- *
- * If `path` resolves to a socket or a pipe, the function will return a system
- * dependent name for that object.
- * @since v0.1.31
- */
- export function realpath(path: PathLike, options: EncodingOption, callback: (err: NodeJS.ErrnoException | null, resolvedPath: string) => void): void;
- /**
- * Asynchronous realpath(3) - return the canonicalized absolute pathname.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- * @param options The encoding (or an object specifying the encoding), used as the encoding of the result. If not provided, `'utf8'` is used.
- */
- export function realpath(path: PathLike, options: BufferEncodingOption, callback: (err: NodeJS.ErrnoException | null, resolvedPath: Buffer) => void): void;
- /**
- * Asynchronous realpath(3) - return the canonicalized absolute pathname.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- * @param options The encoding (or an object specifying the encoding), used as the encoding of the result. If not provided, `'utf8'` is used.
- */
- export function realpath(path: PathLike, options: EncodingOption, callback: (err: NodeJS.ErrnoException | null, resolvedPath: string | Buffer) => void): void;
- /**
- * Asynchronous realpath(3) - return the canonicalized absolute pathname.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- */
- export function realpath(path: PathLike, callback: (err: NodeJS.ErrnoException | null, resolvedPath: string) => void): void;
- export namespace realpath {
- /**
- * Asynchronous realpath(3) - return the canonicalized absolute pathname.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- * @param options The encoding (or an object specifying the encoding), used as the encoding of the result. If not provided, `'utf8'` is used.
- */
- function __promisify__(path: PathLike, options?: EncodingOption): Promise<string>;
- /**
- * Asynchronous realpath(3) - return the canonicalized absolute pathname.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- * @param options The encoding (or an object specifying the encoding), used as the encoding of the result. If not provided, `'utf8'` is used.
- */
- function __promisify__(path: PathLike, options: BufferEncodingOption): Promise<Buffer>;
- /**
- * Asynchronous realpath(3) - return the canonicalized absolute pathname.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- * @param options The encoding (or an object specifying the encoding), used as the encoding of the result. If not provided, `'utf8'` is used.
- */
- function __promisify__(path: PathLike, options?: EncodingOption): Promise<string | Buffer>;
- /**
- * Asynchronous [`realpath(3)`](http://man7.org/linux/man-pages/man3/realpath.3.html).
- *
- * The `callback` gets two arguments `(err, resolvedPath)`.
- *
- * Only paths that can be converted to UTF8 strings are supported.
- *
- * The optional `options` argument can be a string specifying an encoding, or an
- * object with an `encoding` property specifying the character encoding to use for
- * the path passed to the callback. If the `encoding` is set to `'buffer'`,
- * the path returned will be passed as a `Buffer` object.
- *
- * On Linux, when Node.js is linked against musl libc, the procfs file system must
- * be mounted on `/proc` in order for this function to work. Glibc does not have
- * this restriction.
- * @since v9.2.0
- */
- function native(path: PathLike, options: EncodingOption, callback: (err: NodeJS.ErrnoException | null, resolvedPath: string) => void): void;
- function native(path: PathLike, options: BufferEncodingOption, callback: (err: NodeJS.ErrnoException | null, resolvedPath: Buffer) => void): void;
- function native(path: PathLike, options: EncodingOption, callback: (err: NodeJS.ErrnoException | null, resolvedPath: string | Buffer) => void): void;
- function native(path: PathLike, callback: (err: NodeJS.ErrnoException | null, resolvedPath: string) => void): void;
- }
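- /**
- * Usage sketch (illustrative addition, not part of the upstream typings; the
- * path reuses the 'txtDir' layout from the {@link stat} example above):
- * `realpath()` resolves '.' and '..' segments and symbolic links.
- *
- * ```js
- * import { realpath } from 'fs';
- *
- * realpath('./txtDir/../txtDir/file.txt', (err, resolvedPath) => {
- *   if (err) throw err;
- *   console.log(resolvedPath); // absolute, canonical path
- * });
- * ```
- */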
- /**
- * Returns the resolved pathname.
- *
- * For detailed information, see the documentation of the asynchronous version of
- * this API: {@link realpath}.
- * @since v0.1.31
- */
- export function realpathSync(path: PathLike, options?: EncodingOption): string;
- /**
- * Synchronous realpath(3) - return the canonicalized absolute pathname.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- * @param options The encoding (or an object specifying the encoding), used as the encoding of the result. If not provided, `'utf8'` is used.
- */
- export function realpathSync(path: PathLike, options: BufferEncodingOption): Buffer;
- /**
- * Synchronous realpath(3) - return the canonicalized absolute pathname.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- * @param options The encoding (or an object specifying the encoding), used as the encoding of the result. If not provided, `'utf8'` is used.
- */
- export function realpathSync(path: PathLike, options?: EncodingOption): string | Buffer;
- export namespace realpathSync {
- function native(path: PathLike, options?: EncodingOption): string;
- function native(path: PathLike, options: BufferEncodingOption): Buffer;
- function native(path: PathLike, options?: EncodingOption): string | Buffer;
- }
- /**
- * Asynchronously removes a file or symbolic link. No arguments other than a
- * possible exception are given to the completion callback.
- *
- * ```js
- * import { unlink } from 'fs';
- * // Assuming that 'path/file.txt' is a regular file.
- * unlink('path/file.txt', (err) => {
- * if (err) throw err;
- * console.log('path/file.txt was deleted');
- * });
- * ```
- *
- * `fs.unlink()` will not work on a directory, empty or otherwise. To remove a
- * directory, use {@link rmdir}.
- *
- * See the POSIX [`unlink(2)`](http://man7.org/linux/man-pages/man2/unlink.2.html) documentation for more details.
- * @since v0.0.2
- */
- export function unlink(path: PathLike, callback: NoParamCallback): void;
- export namespace unlink {
- /**
- * Asynchronous unlink(2) - delete a name and possibly the file it refers to.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- */
- function __promisify__(path: PathLike): Promise<void>;
- }
- /**
- * Synchronous [`unlink(2)`](http://man7.org/linux/man-pages/man2/unlink.2.html). Returns `undefined`.
- * @since v0.1.21
- */
- export function unlinkSync(path: PathLike): void;
- export interface RmDirOptions {
- /**
- * If an `EBUSY`, `EMFILE`, `ENFILE`, `ENOTEMPTY`, or
- * `EPERM` error is encountered, Node.js will retry the operation with a linear
- * backoff wait of `retryDelay` ms longer on each try. This option represents the
- * number of retries. This option is ignored if the `recursive` option is not
- * `true`.
- * @default 0
- */
- maxRetries?: number | undefined;
- /**
- * @deprecated since v14.14.0 In future versions of Node.js and will trigger a warning
- * `fs.rmdir(path, { recursive: true })` will throw if `path` does not exist or is a file.
- * Use `fs.rm(path, { recursive: true, force: true })` instead.
- *
- * If `true`, perform a recursive directory removal. In
- * recursive mode, operations are retried on failure.
- * @default false
- */
- recursive?: boolean | undefined;
- /**
- * The amount of time in milliseconds to wait between retries.
- * This option is ignored if the `recursive` option is not `true`.
- * @default 100
- */
- retryDelay?: number | undefined;
- }
- /**
- * Asynchronous [`rmdir(2)`](http://man7.org/linux/man-pages/man2/rmdir.2.html). No arguments other than a possible exception are given
- * to the completion callback.
- *
- * Using `fs.rmdir()` on a file (not a directory) results in an `ENOENT` error on
- * Windows and an `ENOTDIR` error on POSIX.
- *
- * To get a behavior similar to the `rm -rf` Unix command, use {@link rm} with options `{ recursive: true, force: true }`.
- * @since v0.0.2
- */
- export function rmdir(path: PathLike, callback: NoParamCallback): void;
- export function rmdir(path: PathLike, options: RmDirOptions, callback: NoParamCallback): void;
- export namespace rmdir {
- /**
- * Asynchronous rmdir(2) - delete a directory.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- */
- function __promisify__(path: PathLike, options?: RmDirOptions): Promise<void>;
- }
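- /**
- * Usage sketch (illustrative addition, not part of the upstream typings; the
- * path './emptyDir' is assumed): `rmdir()` only removes directories, and fails
- * with `ENOTEMPTY` if the directory still has entries.
- *
- * ```js
- * import { rmdir } from 'fs';
- *
- * rmdir('./emptyDir', (err) => {
- *   if (err) throw err;
- *   console.log('./emptyDir was removed');
- * });
- * ```
- */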
- /**
- * Synchronous [`rmdir(2)`](http://man7.org/linux/man-pages/man2/rmdir.2.html). Returns `undefined`.
- *
- * Using `fs.rmdirSync()` on a file (not a directory) results in an `ENOENT` error
- * on Windows and an `ENOTDIR` error on POSIX.
- *
- * To get a behavior similar to the `rm -rf` Unix command, use {@link rmSync} with options `{ recursive: true, force: true }`.
- * @since v0.1.21
- */
- export function rmdirSync(path: PathLike, options?: RmDirOptions): void;
- export interface RmOptions {
- /**
- * When `true`, exceptions will be ignored if `path` does not exist.
- * @default false
- */
- force?: boolean | undefined;
- /**
- * If an `EBUSY`, `EMFILE`, `ENFILE`, `ENOTEMPTY`, or
- * `EPERM` error is encountered, Node.js will retry the operation with a linear
- * backoff wait of `retryDelay` ms longer on each try. This option represents the
- * number of retries. This option is ignored if the `recursive` option is not
- * `true`.
- * @default 0
- */
- maxRetries?: number | undefined;
- /**
- * If `true`, perform a recursive directory removal. In
- * recursive mode, operations are retried on failure.
- * @default false
- */
- recursive?: boolean | undefined;
- /**
- * The amount of time in milliseconds to wait between retries.
- * This option is ignored if the `recursive` option is not `true`.
- * @default 100
- */
- retryDelay?: number | undefined;
- }
- /**
- * Asynchronously removes files and directories (modeled on the standard POSIX `rm` utility). No arguments other than a possible exception are given to the
- * completion callback.
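- *
- * A minimal usage sketch of a recursive, forced removal (the path below is illustrative):
- *
- * ```js
- * import { rm } from 'fs';
- *
- * // Removes 'path/to/dir' and everything under it; `force` suppresses errors if the path is missing.
- * rm('path/to/dir', { recursive: true, force: true }, (err) => {
- * if (err) throw err;
- * console.log('path/to/dir was removed');
- * });
- * ```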
- * @since v14.14.0
- */
- export function rm(path: PathLike, callback: NoParamCallback): void;
- export function rm(path: PathLike, options: RmOptions, callback: NoParamCallback): void;
- export namespace rm {
- /**
- * Asynchronously removes files and directories (modeled on the standard POSIX `rm` utility).
- */
- function __promisify__(path: PathLike, options?: RmOptions): Promise<void>;
- }
- /**
- * Synchronously removes files and directories (modeled on the standard POSIX `rm` utility). Returns `undefined`.
- * @since v14.14.0
- */
- export function rmSync(path: PathLike, options?: RmOptions): void;
- export interface MakeDirectoryOptions {
- /**
- * Indicates whether parent folders should be created.
- * If a folder was created, the path to the first created folder will be returned.
- * @default false
- */
- recursive?: boolean | undefined;
- /**
- * A file mode. If a string is passed, it is parsed as an octal integer. If not specified, defaults to `0o777`.
- * @default 0o777
- */
- mode?: Mode | undefined;
- }
- /**
- * Asynchronously creates a directory.
- *
- * The callback is given a possible exception and, if `recursive` is `true`, the
- * first directory path created, `(err[, path])`. `path` can still be `undefined` when `recursive` is `true`, if no directory was
- * created.
- *
- * The optional `options` argument can be an integer specifying `mode` (permission
- * and sticky bits), or an object with a `mode` property and a `recursive` property indicating whether parent
- * directories should be created. Calling `fs.mkdir()` when `path` is a directory that exists results in an
- * error only when `recursive` is false.
- *
- * ```js
- * import { mkdir } from 'fs';
- *
- * // Creates /tmp/a/apple, regardless of whether `/tmp` and /tmp/a exist.
- * mkdir('/tmp/a/apple', { recursive: true }, (err) => {
- * if (err) throw err;
- * });
- * ```
- *
- * On Windows, using `fs.mkdir()` on the root directory even with recursion will
- * result in an error:
- *
- * ```js
- * import { mkdir } from 'fs';
- *
- * mkdir('/', { recursive: true }, (err) => {
- * // => [Error: EPERM: operation not permitted, mkdir 'C:\']
- * });
- * ```
- *
- * See the POSIX [`mkdir(2)`](http://man7.org/linux/man-pages/man2/mkdir.2.html) documentation for more details.
- * @since v0.1.8
- */
- export function mkdir(
- path: PathLike,
- options: MakeDirectoryOptions & {
- recursive: true;
- },
- callback: (err: NodeJS.ErrnoException | null, path?: string) => void
- ): void;
- /**
- * Asynchronous mkdir(2) - create a directory.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- * @param options Either the file mode, or an object optionally specifying the file mode and whether parent folders
- * should be created. If a string is passed, it is parsed as an octal integer. If not specified, defaults to `0o777`.
- */
- export function mkdir(
- path: PathLike,
- options:
- | Mode
- | (MakeDirectoryOptions & {
- recursive?: false | undefined;
- })
- | null
- | undefined,
- callback: NoParamCallback
- ): void;
- /**
- * Asynchronous mkdir(2) - create a directory.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- * @param options Either the file mode, or an object optionally specifying the file mode and whether parent folders
- * should be created. If a string is passed, it is parsed as an octal integer. If not specified, defaults to `0o777`.
- */
- export function mkdir(path: PathLike, options: Mode | MakeDirectoryOptions | null | undefined, callback: (err: NodeJS.ErrnoException | null, path?: string) => void): void;
- /**
- * Asynchronous mkdir(2) - create a directory with a mode of `0o777`.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- */
- export function mkdir(path: PathLike, callback: NoParamCallback): void;
- export namespace mkdir {
- /**
- * Asynchronous mkdir(2) - create a directory.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- * @param options Either the file mode, or an object optionally specifying the file mode and whether parent folders
- * should be created. If a string is passed, it is parsed as an octal integer. If not specified, defaults to `0o777`.
- */
- function __promisify__(
- path: PathLike,
- options: MakeDirectoryOptions & {
- recursive: true;
- }
- ): Promise<string | undefined>;
- /**
- * Asynchronous mkdir(2) - create a directory.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- * @param options Either the file mode, or an object optionally specifying the file mode and whether parent folders
- * should be created. If a string is passed, it is parsed as an octal integer. If not specified, defaults to `0o777`.
- */
- function __promisify__(
- path: PathLike,
- options?:
- | Mode
- | (MakeDirectoryOptions & {
- recursive?: false | undefined;
- })
- | null
- ): Promise<void>;
- /**
- * Asynchronous mkdir(2) - create a directory.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- * @param options Either the file mode, or an object optionally specifying the file mode and whether parent folders
- * should be created. If a string is passed, it is parsed as an octal integer. If not specified, defaults to `0o777`.
- */
- function __promisify__(path: PathLike, options?: Mode | MakeDirectoryOptions | null): Promise<string | undefined>;
- }
- /**
- * Synchronously creates a directory. Returns `undefined`, or if `recursive` is `true`, the first directory path created.
- * This is the synchronous version of {@link mkdir}.
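- *
- * A minimal usage sketch (the path below is illustrative):
- *
- * ```js
- * import { mkdirSync } from 'fs';
- *
- * // Creates /tmp/a/apple; with `recursive: true` the first directory actually
- * // created is returned, e.g. '/tmp/a' if only /tmp existed beforehand.
- * const first = mkdirSync('/tmp/a/apple', { recursive: true });
- * console.log(first);
- * ```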
- *
- * See the POSIX [`mkdir(2)`](http://man7.org/linux/man-pages/man2/mkdir.2.html) documentation for more details.
- * @since v0.1.21
- */
- export function mkdirSync(
- path: PathLike,
- options: MakeDirectoryOptions & {
- recursive: true;
- }
- ): string | undefined;
- /**
- * Synchronous mkdir(2) - create a directory.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- * @param options Either the file mode, or an object optionally specifying the file mode and whether parent folders
- * should be created. If a string is passed, it is parsed as an octal integer. If not specified, defaults to `0o777`.
- */
- export function mkdirSync(
- path: PathLike,
- options?:
- | Mode
- | (MakeDirectoryOptions & {
- recursive?: false | undefined;
- })
- | null
- ): void;
- /**
- * Synchronous mkdir(2) - create a directory.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- * @param options Either the file mode, or an object optionally specifying the file mode and whether parent folders
- * should be created. If a string is passed, it is parsed as an octal integer. If not specified, defaults to `0o777`.
- */
- export function mkdirSync(path: PathLike, options?: Mode | MakeDirectoryOptions | null): string | undefined;
- /**
- * Creates a unique temporary directory.
- *
- * Generates six random characters to be appended behind a required `prefix` to create a unique temporary directory. Due to platform
- * inconsistencies, avoid trailing `X` characters in `prefix`. Some platforms,
- * notably the BSDs, can return more than six random characters, and replace
- * trailing `X` characters in `prefix` with random characters.
- *
- * The created directory path is passed as a string to the callback's second
- * parameter.
- *
- * The optional `options` argument can be a string specifying an encoding, or an
- * object with an `encoding` property specifying the character encoding to use.
- *
- * ```js
- * import { mkdtemp } from 'fs';
- * import { join } from 'path';
- * import { tmpdir } from 'os';
- *
- * mkdtemp(join(tmpdir(), 'foo-'), (err, directory) => {
- * if (err) throw err;
- * console.log(directory);
- * // Prints: /tmp/foo-itXde2 or C:\Users\...\AppData\Local\Temp\foo-itXde2
- * });
- * ```
- *
- * The `fs.mkdtemp()` method will append the six randomly selected characters
- * directly to the `prefix` string. For instance, given a directory `/tmp`, if the
- * intention is to create a temporary directory _within_ `/tmp`, the `prefix` must end with a trailing platform-specific path separator
- * (`require('path').sep`).
- *
- * ```js
- * import { tmpdir } from 'os';
- * import { mkdtemp } from 'fs';
- *
- * // The parent directory for the new temporary directory
- * const tmpDir = tmpdir();
- *
- * // This method is *INCORRECT*:
- * mkdtemp(tmpDir, (err, directory) => {
- * if (err) throw err;
- * console.log(directory);
- * // Will print something similar to `/tmpabc123`.
- * // A new temporary directory is created at the file system root
- * // rather than *within* the /tmp directory.
- * });
- *
- * // This method is *CORRECT*:
- * import { sep } from 'path';
- * mkdtemp(`${tmpDir}${sep}`, (err, directory) => {
- * if (err) throw err;
- * console.log(directory);
- * // Will print something similar to `/tmp/abc123`.
- * // A new temporary directory is created within
- * // the /tmp directory.
- * });
- * ```
- * @since v5.10.0
- */
- export function mkdtemp(prefix: string, options: EncodingOption, callback: (err: NodeJS.ErrnoException | null, folder: string) => void): void;
- /**
- * Asynchronously creates a unique temporary directory.
- * Generates six random characters to be appended behind a required prefix to create a unique temporary directory.
- * @param options The encoding (or an object specifying the encoding), used as the encoding of the result. If not provided, `'utf8'` is used.
- */
- export function mkdtemp(
- prefix: string,
- options:
- | 'buffer'
- | {
- encoding: 'buffer';
- },
- callback: (err: NodeJS.ErrnoException | null, folder: Buffer) => void
- ): void;
- /**
- * Asynchronously creates a unique temporary directory.
- * Generates six random characters to be appended behind a required prefix to create a unique temporary directory.
- * @param options The encoding (or an object specifying the encoding), used as the encoding of the result. If not provided, `'utf8'` is used.
- */
- export function mkdtemp(prefix: string, options: EncodingOption, callback: (err: NodeJS.ErrnoException | null, folder: string | Buffer) => void): void;
- /**
- * Asynchronously creates a unique temporary directory.
- * Generates six random characters to be appended behind a required prefix to create a unique temporary directory.
- */
- export function mkdtemp(prefix: string, callback: (err: NodeJS.ErrnoException | null, folder: string) => void): void;
- export namespace mkdtemp {
- /**
- * Asynchronously creates a unique temporary directory.
- * Generates six random characters to be appended behind a required prefix to create a unique temporary directory.
- * @param options The encoding (or an object specifying the encoding), used as the encoding of the result. If not provided, `'utf8'` is used.
- */
- function __promisify__(prefix: string, options?: EncodingOption): Promise<string>;
- /**
- * Asynchronously creates a unique temporary directory.
- * Generates six random characters to be appended behind a required prefix to create a unique temporary directory.
- * @param options The encoding (or an object specifying the encoding), used as the encoding of the result. If not provided, `'utf8'` is used.
- */
- function __promisify__(prefix: string, options: BufferEncodingOption): Promise<Buffer>;
- /**
- * Asynchronously creates a unique temporary directory.
- * Generates six random characters to be appended behind a required prefix to create a unique temporary directory.
- * @param options The encoding (or an object specifying the encoding), used as the encoding of the result. If not provided, `'utf8'` is used.
- */
- function __promisify__(prefix: string, options?: EncodingOption): Promise<string | Buffer>;
- }
- /**
- * Returns the created directory path.
- *
- * For detailed information, see the documentation of the asynchronous version of
- * this API: {@link mkdtemp}.
- *
- * The optional `options` argument can be a string specifying an encoding, or an
- * object with an `encoding` property specifying the character encoding to use.
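- *
- * A minimal usage sketch (the 'foo-' prefix is illustrative):
- *
- * ```js
- * import { mkdtempSync } from 'fs';
- * import { tmpdir } from 'os';
- * import { join } from 'path';
- *
- * // Creates a directory such as /tmp/foo-abc123 and returns its path.
- * const dir = mkdtempSync(join(tmpdir(), 'foo-'));
- * console.log(dir);
- * ```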
- * @since v5.10.0
- */
- export function mkdtempSync(prefix: string, options?: EncodingOption): string;
- /**
- * Synchronously creates a unique temporary directory.
- * Generates six random characters to be appended behind a required prefix to create a unique temporary directory.
- * @param options The encoding (or an object specifying the encoding), used as the encoding of the result. If not provided, `'utf8'` is used.
- */
- export function mkdtempSync(prefix: string, options: BufferEncodingOption): Buffer;
- /**
- * Synchronously creates a unique temporary directory.
- * Generates six random characters to be appended behind a required prefix to create a unique temporary directory.
- * @param options The encoding (or an object specifying the encoding), used as the encoding of the result. If not provided, `'utf8'` is used.
- */
- export function mkdtempSync(prefix: string, options?: EncodingOption): string | Buffer;
- /**
- * Reads the contents of a directory. The callback gets two arguments `(err, files)` where `files` is an array of the names of the files in the directory excluding `'.'` and `'..'`.
- *
- * See the POSIX [`readdir(3)`](http://man7.org/linux/man-pages/man3/readdir.3.html) documentation for more details.
- *
- * The optional `options` argument can be a string specifying an encoding, or an
- * object with an `encoding` property specifying the character encoding to use for
- * the filenames passed to the callback. If the `encoding` is set to `'buffer'`,
- * the filenames returned will be passed as `Buffer` objects.
- *
- * If `options.withFileTypes` is set to `true`, the `files` array will contain `fs.Dirent` objects.
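- *
- * A minimal usage sketch listing the current directory with `Dirent` entries (the path is illustrative):
- *
- * ```js
- * import { readdir } from 'fs';
- *
- * readdir('.', { withFileTypes: true }, (err, entries) => {
- * if (err) throw err;
- * for (const entry of entries) {
- * console.log(entry.name, entry.isDirectory());
- * }
- * });
- * ```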
- * @since v0.1.8
- */
- export function readdir(
- path: PathLike,
- options:
- | {
- encoding: BufferEncoding | null;
- withFileTypes?: false | undefined;
- }
- | BufferEncoding
- | undefined
- | null,
- callback: (err: NodeJS.ErrnoException | null, files: string[]) => void
- ): void;
- /**
- * Asynchronous readdir(3) - read a directory.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- * @param options The encoding (or an object specifying the encoding), used as the encoding of the result. If not provided, `'utf8'` is used.
- */
- export function readdir(
- path: PathLike,
- options:
- | {
- encoding: 'buffer';
- withFileTypes?: false | undefined;
- }
- | 'buffer',
- callback: (err: NodeJS.ErrnoException | null, files: Buffer[]) => void
- ): void;
- /**
- * Asynchronous readdir(3) - read a directory.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- * @param options The encoding (or an object specifying the encoding), used as the encoding of the result. If not provided, `'utf8'` is used.
- */
- export function readdir(
- path: PathLike,
- options:
- | (ObjectEncodingOptions & {
- withFileTypes?: false | undefined;
- })
- | BufferEncoding
- | undefined
- | null,
- callback: (err: NodeJS.ErrnoException | null, files: string[] | Buffer[]) => void
- ): void;
- /**
- * Asynchronous readdir(3) - read a directory.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- */
- export function readdir(path: PathLike, callback: (err: NodeJS.ErrnoException | null, files: string[]) => void): void;
- /**
- * Asynchronous readdir(3) - read a directory.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- * @param options If called with `withFileTypes: true` the result data will be an array of Dirent.
- */
- export function readdir(
- path: PathLike,
- options: ObjectEncodingOptions & {
- withFileTypes: true;
- },
- callback: (err: NodeJS.ErrnoException | null, files: Dirent[]) => void
- ): void;
- export namespace readdir {
- /**
- * Asynchronous readdir(3) - read a directory.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- * @param options The encoding (or an object specifying the encoding), used as the encoding of the result. If not provided, `'utf8'` is used.
- */
- function __promisify__(
- path: PathLike,
- options?:
- | {
- encoding: BufferEncoding | null;
- withFileTypes?: false | undefined;
- }
- | BufferEncoding
- | null
- ): Promise<string[]>;
- /**
- * Asynchronous readdir(3) - read a directory.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- * @param options The encoding (or an object specifying the encoding), used as the encoding of the result. If not provided, `'utf8'` is used.
- */
- function __promisify__(
- path: PathLike,
- options:
- | 'buffer'
- | {
- encoding: 'buffer';
- withFileTypes?: false | undefined;
- }
- ): Promise<Buffer[]>;
- /**
- * Asynchronous readdir(3) - read a directory.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- * @param options The encoding (or an object specifying the encoding), used as the encoding of the result. If not provided, `'utf8'` is used.
- */
- function __promisify__(
- path: PathLike,
- options?:
- | (ObjectEncodingOptions & {
- withFileTypes?: false | undefined;
- })
- | BufferEncoding
- | null
- ): Promise<string[] | Buffer[]>;
- /**
- * Asynchronous readdir(3) - read a directory.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- * @param options If called with `withFileTypes: true` the result data will be an array of Dirent
- */
- function __promisify__(
- path: PathLike,
- options: ObjectEncodingOptions & {
- withFileTypes: true;
- }
- ): Promise<Dirent[]>;
- }
- /**
- * Reads the contents of the directory.
- *
- * See the POSIX [`readdir(3)`](http://man7.org/linux/man-pages/man3/readdir.3.html) documentation for more details.
- *
- * The optional `options` argument can be a string specifying an encoding, or an
- * object with an `encoding` property specifying the character encoding to use for
- * the filenames returned. If the `encoding` is set to `'buffer'`,
- * the filenames returned will be passed as `Buffer` objects.
- *
- * If `options.withFileTypes` is set to `true`, the result will contain `fs.Dirent` objects.
- * @since v0.1.21
- */
- export function readdirSync(
- path: PathLike,
- options?:
- | {
- encoding: BufferEncoding | null;
- withFileTypes?: false | undefined;
- }
- | BufferEncoding
- | null
- ): string[];
- /**
- * Synchronous readdir(3) - read a directory.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- * @param options The encoding (or an object specifying the encoding), used as the encoding of the result. If not provided, `'utf8'` is used.
- */
- export function readdirSync(
- path: PathLike,
- options:
- | {
- encoding: 'buffer';
- withFileTypes?: false | undefined;
- }
- | 'buffer'
- ): Buffer[];
- /**
- * Synchronous readdir(3) - read a directory.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- * @param options The encoding (or an object specifying the encoding), used as the encoding of the result. If not provided, `'utf8'` is used.
- */
- export function readdirSync(
- path: PathLike,
- options?:
- | (ObjectEncodingOptions & {
- withFileTypes?: false | undefined;
- })
- | BufferEncoding
- | null
- ): string[] | Buffer[];
- /**
- * Synchronous readdir(3) - read a directory.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- * @param options If called with `withFileTypes: true` the result data will be an array of Dirent.
- */
- export function readdirSync(
- path: PathLike,
- options: ObjectEncodingOptions & {
- withFileTypes: true;
- }
- ): Dirent[];
- /**
- * Closes the file descriptor. No arguments other than a possible exception are
- * given to the completion callback.
- *
- * Calling `fs.close()` on any file descriptor (`fd`) that is currently in use
- * through any other `fs` operation may lead to undefined behavior.
- *
- * See the POSIX [`close(2)`](http://man7.org/linux/man-pages/man2/close.2.html) documentation for more detail.
- * @since v0.0.2
- */
- export function close(fd: number, callback?: NoParamCallback): void;
- export namespace close {
- /**
- * Asynchronous close(2) - close a file descriptor.
- * @param fd A file descriptor.
- */
- function __promisify__(fd: number): Promise<void>;
- }
- /**
- * Closes the file descriptor. Returns `undefined`.
- *
- * Calling `fs.closeSync()` on any file descriptor (`fd`) that is currently in use
- * through any other `fs` operation may lead to undefined behavior.
- *
- * See the POSIX [`close(2)`](http://man7.org/linux/man-pages/man2/close.2.html) documentation for more detail.
- * @since v0.1.21
- */
- export function closeSync(fd: number): void;
- /**
- * Asynchronous file open. See the POSIX [`open(2)`](http://man7.org/linux/man-pages/man2/open.2.html) documentation for more details.
- *
- * `mode` sets the file mode (permission and sticky bits), but only if the file was
- * created. On Windows, only the write permission can be manipulated; see {@link chmod}.
- *
- * The callback gets two arguments `(err, fd)`.
- *
- * Some characters (`< > : " / \ | ? *`) are reserved under Windows as documented
- * by [Naming Files, Paths, and Namespaces](https://docs.microsoft.com/en-us/windows/desktop/FileIO/naming-a-file). Under NTFS, if the filename contains
- * a colon, Node.js will open a file system stream, as described by [this MSDN page](https://docs.microsoft.com/en-us/windows/desktop/FileIO/using-streams).
- *
- * Functions based on `fs.open()` exhibit this behavior as well: `fs.writeFile()`, `fs.readFile()`, etc.
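- *
- * A minimal usage sketch that opens a file for reading and closes the descriptor afterwards (the file name is illustrative):
- *
- * ```js
- * import { open, close } from 'fs';
- *
- * open('file.txt', 'r', (err, fd) => {
- * if (err) throw err;
- * // ... use the file descriptor ...
- * close(fd, (err) => {
- * if (err) throw err;
- * });
- * });
- * ```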
- * @since v0.0.2
- * @param [flags='r'] See `support of file system flags`.
- * @param [mode=0o666]
- */
- export function open(path: PathLike, flags: OpenMode | undefined, mode: Mode | undefined | null, callback: (err: NodeJS.ErrnoException | null, fd: number) => void): void;
- /**
- * Asynchronous open(2) - open and possibly create a file. If the file is created, its mode will be `0o666`.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- * @param [flags='r'] See `support of file system flags`.
- */
- export function open(path: PathLike, flags: OpenMode | undefined, callback: (err: NodeJS.ErrnoException | null, fd: number) => void): void;
- /**
- * Asynchronous open(2) - open and possibly create a file. If the file is created, its mode will be `0o666`.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- */
- export function open(path: PathLike, callback: (err: NodeJS.ErrnoException | null, fd: number) => void): void;
-
- export namespace open {
- /**
- * Asynchronous open(2) - open and possibly create a file.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- * @param mode A file mode. If a string is passed, it is parsed as an octal integer. If not supplied, defaults to `0o666`.
- */
- function __promisify__(path: PathLike, flags: OpenMode, mode?: Mode | null): Promise<number>;
- }
- /**
- * Returns an integer representing the file descriptor.
- *
- * For detailed information, see the documentation of the asynchronous version of
- * this API: {@link open}.
- * @since v0.1.21
- * @param [flags='r']
- * @param [mode=0o666]
- */
- export function openSync(path: PathLike, flags: OpenMode, mode?: Mode | null): number;
- /**
- * Change the file system timestamps of the object referenced by `path`.
- *
- * The `atime` and `mtime` arguments follow these rules:
- *
- * * Values can be either numbers representing Unix epoch time in seconds, `Date`s, or a numeric string like `'123456789.0'`.
- * * If the value can not be converted to a number, or is `NaN`, `Infinity`, or `-Infinity`, an `Error` will be thrown.
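- *
- * A minimal usage sketch that sets both timestamps to the current time (the file name is illustrative):
- *
- * ```js
- * import { utimes } from 'fs';
- *
- * const now = new Date();
- * utimes('file.txt', now, now, (err) => {
- * if (err) throw err;
- * });
- * ```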
- * @since v0.4.2
- */
- export function utimes(path: PathLike, atime: TimeLike, mtime: TimeLike, callback: NoParamCallback): void;
- export namespace utimes {
- /**
- * Asynchronously change file timestamps of the file referenced by the supplied path.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- * @param atime The last access time. If a string is provided, it will be coerced to number.
- * @param mtime The last modified time. If a string is provided, it will be coerced to number.
- */
- function __promisify__(path: PathLike, atime: TimeLike, mtime: TimeLike): Promise<void>;
- }
- /**
- * Returns `undefined`.
- *
- * For detailed information, see the documentation of the asynchronous version of
- * this API: {@link utimes}.
- * @since v0.4.2
- */
- export function utimesSync(path: PathLike, atime: TimeLike, mtime: TimeLike): void;
- /**
- * Change the file system timestamps of the object referenced by the supplied file
- * descriptor. See {@link utimes}.
- * @since v0.4.2
- */
- export function futimes(fd: number, atime: TimeLike, mtime: TimeLike, callback: NoParamCallback): void;
- export namespace futimes {
- /**
- * Asynchronously change file timestamps of the file referenced by the supplied file descriptor.
- * @param fd A file descriptor.
- * @param atime The last access time. If a string is provided, it will be coerced to number.
- * @param mtime The last modified time. If a string is provided, it will be coerced to number.
- */
- function __promisify__(fd: number, atime: TimeLike, mtime: TimeLike): Promise<void>;
- }
- /**
- * Synchronous version of {@link futimes}. Returns `undefined`.
- * @since v0.4.2
- */
- export function futimesSync(fd: number, atime: TimeLike, mtime: TimeLike): void;
- /**
- * Request that all data for the open file descriptor is flushed to the storage
- * device. The specific implementation is operating system and device specific.
- * Refer to the POSIX [`fsync(2)`](http://man7.org/linux/man-pages/man2/fsync.2.html) documentation for more detail. No arguments other
- * than a possible exception are given to the completion callback.
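- *
- * A minimal usage sketch flushing pending writes on an open descriptor (the file name is illustrative):
- *
- * ```js
- * import { open, fsync, close } from 'fs';
- *
- * open('file.txt', 'r+', (err, fd) => {
- * if (err) throw err;
- * fsync(fd, (err) => {
- * if (err) throw err;
- * close(fd, (err) => {
- * if (err) throw err;
- * });
- * });
- * });
- * ```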
- * @since v0.1.96
- */
- export function fsync(fd: number, callback: NoParamCallback): void;
- export namespace fsync {
- /**
- * Asynchronous fsync(2) - synchronize a file's in-core state with the underlying storage device.
- * @param fd A file descriptor.
- */
- function __promisify__(fd: number): Promise<void>;
- }
- /**
- * Request that all data for the open file descriptor is flushed to the storage
- * device. The specific implementation is operating system and device specific.
- * Refer to the POSIX [`fsync(2)`](http://man7.org/linux/man-pages/man2/fsync.2.html) documentation for more detail. Returns `undefined`.
- * @since v0.1.96
- */
- export function fsyncSync(fd: number): void;
- /**
- * Write `buffer` to the file specified by `fd`.
- *
- * `offset` determines the part of the buffer to be written, and `length` is
- * an integer specifying the number of bytes to write.
- *
- * `position` refers to the offset from the beginning of the file where this data
- * should be written. If `typeof position !== 'number'`, the data will be written
- * at the current position. See [`pwrite(2)`](http://man7.org/linux/man-pages/man2/pwrite.2.html).
- *
- * The callback will be given three arguments `(err, bytesWritten, buffer)` where `bytesWritten` specifies how many _bytes_ were written from `buffer`.
- *
- * If this method is invoked as its `util.promisify()` ed version, it returns
- * a promise for an `Object` with `bytesWritten` and `buffer` properties.
- *
- * It is unsafe to use `fs.write()` multiple times on the same file without waiting
- * for the callback. For this scenario, {@link createWriteStream} is
- * recommended.
- *
- * On Linux, positional writes don't work when the file is opened in append mode.
- * The kernel ignores the position argument and always appends the data to
- * the end of the file.
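- *
- * A minimal usage sketch writing a buffer at the start of a file (the file name is illustrative):
- *
- * ```js
- * import { open, write, close } from 'fs';
- * import { Buffer } from 'buffer';
- *
- * open('file.txt', 'w', (err, fd) => {
- * if (err) throw err;
- * const data = Buffer.from('Hello Node.js');
- * write(fd, data, 0, data.length, 0, (err, bytesWritten) => {
- * if (err) throw err;
- * console.log(`wrote ${bytesWritten} bytes`);
- * close(fd, (err) => {
- * if (err) throw err;
- * });
- * });
- * });
- * ```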
- * @since v0.0.2
- */
- export function write<TBuffer extends NodeJS.ArrayBufferView>(
- fd: number,
- buffer: TBuffer,
- offset: number | undefined | null,
- length: number | undefined | null,
- position: number | undefined | null,
- callback: (err: NodeJS.ErrnoException | null, written: number, buffer: TBuffer) => void
- ): void;
- /**
- * Asynchronously writes `buffer` to the file referenced by the supplied file descriptor.
- * @param fd A file descriptor.
- * @param offset The part of the buffer to be written. If not supplied, defaults to `0`.
- * @param length The number of bytes to write. If not supplied, defaults to `buffer.length - offset`.
- */
- export function write<TBuffer extends NodeJS.ArrayBufferView>(
- fd: number,
- buffer: TBuffer,
- offset: number | undefined | null,
- length: number | undefined | null,
- callback: (err: NodeJS.ErrnoException | null, written: number, buffer: TBuffer) => void
- ): void;
- /**
- * Asynchronously writes `buffer` to the file referenced by the supplied file descriptor.
- * @param fd A file descriptor.
- * @param offset The part of the buffer to be written. If not supplied, defaults to `0`.
- */
- export function write<TBuffer extends NodeJS.ArrayBufferView>(
- fd: number,
- buffer: TBuffer,
- offset: number | undefined | null,
- callback: (err: NodeJS.ErrnoException | null, written: number, buffer: TBuffer) => void
- ): void;
- /**
- * Asynchronously writes `buffer` to the file referenced by the supplied file descriptor.
- * @param fd A file descriptor.
- */
- export function write<TBuffer extends NodeJS.ArrayBufferView>(fd: number, buffer: TBuffer, callback: (err: NodeJS.ErrnoException | null, written: number, buffer: TBuffer) => void): void;
- /**
- * Asynchronously writes `string` to the file referenced by the supplied file descriptor.
- * @param fd A file descriptor.
- * @param string A string to write.
- * @param position The offset from the beginning of the file where this data should be written. If not supplied, defaults to the current position.
- * @param encoding The expected string encoding.
- */
- export function write(
- fd: number,
- string: string,
- position: number | undefined | null,
- encoding: BufferEncoding | undefined | null,
- callback: (err: NodeJS.ErrnoException | null, written: number, str: string) => void
- ): void;
- /**
- * Asynchronously writes `string` to the file referenced by the supplied file descriptor.
- * @param fd A file descriptor.
- * @param string A string to write.
- * @param position The offset from the beginning of the file where this data should be written. If not supplied, defaults to the current position.
- */
- export function write(fd: number, string: string, position: number | undefined | null, callback: (err: NodeJS.ErrnoException | null, written: number, str: string) => void): void;
- /**
- * Asynchronously writes `string` to the file referenced by the supplied file descriptor.
- * @param fd A file descriptor.
- * @param string A string to write.
- */
- export function write(fd: number, string: string, callback: (err: NodeJS.ErrnoException | null, written: number, str: string) => void): void;
- export namespace write {
- /**
- * Asynchronously writes `buffer` to the file referenced by the supplied file descriptor.
- * @param fd A file descriptor.
- * @param offset The part of the buffer to be written. If not supplied, defaults to `0`.
- * @param length The number of bytes to write. If not supplied, defaults to `buffer.length - offset`.
- * @param position The offset from the beginning of the file where this data should be written. If not supplied, defaults to the current position.
- */
- function __promisify__<TBuffer extends NodeJS.ArrayBufferView>(
- fd: number,
- buffer?: TBuffer,
- offset?: number,
- length?: number,
- position?: number | null
- ): Promise<{
- bytesWritten: number;
- buffer: TBuffer;
- }>;
- /**
- * Asynchronously writes `string` to the file referenced by the supplied file descriptor.
- * @param fd A file descriptor.
- * @param string A string to write.
- * @param position The offset from the beginning of the file where this data should be written. If not supplied, defaults to the current position.
- * @param encoding The expected string encoding.
- */
- function __promisify__(
- fd: number,
- string: string,
- position?: number | null,
- encoding?: BufferEncoding | null
- ): Promise<{
- bytesWritten: number;
- buffer: string;
- }>;
- }
- /**
- * For detailed information, see the documentation of the asynchronous version of
- * this API: {@link write}.
- * @since v0.1.21
- * @return The number of bytes written.
- */
- export function writeSync(fd: number, buffer: NodeJS.ArrayBufferView, offset?: number | null, length?: number | null, position?: number | null): number;
- /**
- * Synchronously writes `string` to the file referenced by the supplied file descriptor, returning the number of bytes written.
- * @param fd A file descriptor.
- * @param string A string to write.
- * @param position The offset from the beginning of the file where this data should be written. If not supplied, defaults to the current position.
- * @param encoding The expected string encoding.
- */
- export function writeSync(fd: number, string: string, position?: number | null, encoding?: BufferEncoding | null): number;
- export type ReadPosition = number | bigint;
- export interface ReadSyncOptions {
- /**
- * @default 0
- */
- offset?: number | undefined;
- /**
- * @default `length of buffer`
- */
- length?: number | undefined;
- /**
- * @default null
- */
- position?: ReadPosition | null | undefined;
- }
- export interface ReadAsyncOptions<TBuffer extends NodeJS.ArrayBufferView> extends ReadSyncOptions {
- buffer?: TBuffer;
- }
- /**
- * Read data from the file specified by `fd`.
- *
- * The callback is given the three arguments, `(err, bytesRead, buffer)`.
- *
- * If the file is not modified concurrently, the end-of-file is reached when the
- * number of bytes read is zero.
- *
- * If this method is invoked as its `util.promisify()` ed version, it returns
- * a promise for an `Object` with `bytesRead` and `buffer` properties.
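- *
- * A minimal usage sketch reading the first bytes of a file into a buffer (the file name is illustrative):
- *
- * ```js
- * import { open, read, close } from 'fs';
- * import { Buffer } from 'buffer';
- *
- * open('file.txt', 'r', (err, fd) => {
- * if (err) throw err;
- * const buffer = Buffer.alloc(16);
- * read(fd, buffer, 0, buffer.length, 0, (err, bytesRead, buf) => {
- * if (err) throw err;
- * console.log(buf.subarray(0, bytesRead).toString());
- * close(fd, (err) => {
- * if (err) throw err;
- * });
- * });
- * });
- * ```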
- * @since v0.0.2
- * @param buffer The buffer that the data will be written to.
- * @param offset The position in `buffer` to write the data to.
- * @param length The number of bytes to read.
- * @param position Specifies where to begin reading from in the file. If `position` is `null` or `-1`, data will be read from the current file position, and the file position will be updated. If
- * `position` is an integer, the file position will be unchanged.
- */
- export function read<TBuffer extends NodeJS.ArrayBufferView>(
- fd: number,
- buffer: TBuffer,
- offset: number,
- length: number,
- position: ReadPosition | null,
- callback: (err: NodeJS.ErrnoException | null, bytesRead: number, buffer: TBuffer) => void
- ): void;
- /**
- * Similar to the above `fs.read` function, this version takes an optional `options` object.
- * If not otherwise specified in an `options` object,
- * `buffer` defaults to `Buffer.alloc(16384)`,
- * `offset` defaults to `0`,
- * `length` defaults to `buffer.byteLength - offset` (as of Node 17.6.0),
- * `position` defaults to `null`
- * @since v12.17.0, 13.11.0
- */
- export function read<TBuffer extends NodeJS.ArrayBufferView>(
- fd: number,
- options: ReadAsyncOptions<TBuffer>,
- callback: (err: NodeJS.ErrnoException | null, bytesRead: number, buffer: TBuffer) => void
- ): void;
- export function read(fd: number, callback: (err: NodeJS.ErrnoException | null, bytesRead: number, buffer: NodeJS.ArrayBufferView) => void): void;
- export namespace read {
- /**
- * @param fd A file descriptor.
- * @param buffer The buffer that the data will be written to.
- * @param offset The offset in the buffer at which to start writing.
- * @param length The number of bytes to read.
- * @param position The offset from the beginning of the file from which data should be read. If `null`, data will be read from the current position.
- */
- function __promisify__<TBuffer extends NodeJS.ArrayBufferView>(
- fd: number,
- buffer: TBuffer,
- offset: number,
- length: number,
- position: number | null
- ): Promise<{
- bytesRead: number;
- buffer: TBuffer;
- }>;
- function __promisify__<TBuffer extends NodeJS.ArrayBufferView>(
- fd: number,
- options: ReadAsyncOptions<TBuffer>
- ): Promise<{
- bytesRead: number;
- buffer: TBuffer;
- }>;
- function __promisify__(fd: number): Promise<{
- bytesRead: number;
- buffer: NodeJS.ArrayBufferView;
- }>;
- }
- /**
- * Returns the number of `bytesRead`.
- *
- * For detailed information, see the documentation of the asynchronous version of
- * this API: {@link read}.
- * @since v0.1.21
- */
- export function readSync(fd: number, buffer: NodeJS.ArrayBufferView, offset: number, length: number, position: ReadPosition | null): number;
- /**
- * Similar to the above `fs.readSync` function, this version takes an optional `options` object.
- * If no `options` object is specified, it will default with the above values.
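- *
- * A minimal usage sketch (the file name is illustrative):
- *
- * ```js
- * import { openSync, readSync, closeSync } from 'fs';
- * import { Buffer } from 'buffer';
- *
- * const fd = openSync('file.txt', 'r');
- * const buffer = Buffer.alloc(16);
- * const bytesRead = readSync(fd, buffer, { offset: 0, length: buffer.length, position: 0 });
- * closeSync(fd);
- * console.log(buffer.subarray(0, bytesRead).toString());
- * ```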
- */
- export function readSync(fd: number, buffer: NodeJS.ArrayBufferView, opts?: ReadSyncOptions): number;
- /**
- * Asynchronously reads the entire contents of a file.
- *
- * ```js
- * import { readFile } from 'fs';
- *
- * readFile('/etc/passwd', (err, data) => {
- * if (err) throw err;
- * console.log(data);
- * });
- * ```
- *
- * The callback is passed two arguments `(err, data)`, where `data` is the
- * contents of the file.
- *
- * If no encoding is specified, then the raw buffer is returned.
- *
- * If `options` is a string, then it specifies the encoding:
- *
- * ```js
- * import { readFile } from 'fs';
- *
- * readFile('/etc/passwd', 'utf8', callback);
- * ```
- *
- * When the path is a directory, the behavior of `fs.readFile()` and {@link readFileSync} is platform-specific. On macOS, Linux, and Windows, an
- * error will be returned. On FreeBSD, a representation of the directory's contents
- * will be returned.
- *
- * ```js
- * import { readFile } from 'fs';
- *
- * // macOS, Linux, and Windows
- * readFile('<directory>', (err, data) => {
- * // => [Error: EISDIR: illegal operation on a directory, read <directory>]
- * });
- *
- * // FreeBSD
- * readFile('<directory>', (err, data) => {
- * // => null, <data>
- * });
- * ```
- *
- * It is possible to abort an ongoing request using an `AbortSignal`. If a
- * request is aborted the callback is called with an `AbortError`:
- *
- * ```js
- * import { readFile } from 'fs';
- *
- * const controller = new AbortController();
- * const signal = controller.signal;
- * readFile(fileInfo[0].name, { signal }, (err, buf) => {
- * // ...
- * });
- * // When you want to abort the request
- * controller.abort();
- * ```
- *
- * The `fs.readFile()` function buffers the entire file. To minimize memory costs,
- * when possible prefer streaming via `fs.createReadStream()`.
- *
- * Aborting an ongoing request does not abort individual operating
- * system requests but rather the internal buffering `fs.readFile` performs.
- * @since v0.1.29
- * @param path filename or file descriptor
- */
- export function readFile(
- path: PathOrFileDescriptor,
- options:
- | ({
- encoding?: null | undefined;
- flag?: string | undefined;
- } & Abortable)
- | undefined
- | null,
- callback: (err: NodeJS.ErrnoException | null, data: Buffer) => void
- ): void;
- /**
- * Asynchronously reads the entire contents of a file.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- * If a file descriptor is provided, the underlying file will _not_ be closed automatically.
- * @param options Either the encoding for the result, or an object that contains the encoding and an optional flag.
- * If a flag is not provided, it defaults to `'r'`.
- */
- export function readFile(
- path: PathOrFileDescriptor,
- options:
- | ({
- encoding: BufferEncoding;
- flag?: string | undefined;
- } & Abortable)
- | BufferEncoding,
- callback: (err: NodeJS.ErrnoException | null, data: string) => void
- ): void;
- /**
- * Asynchronously reads the entire contents of a file.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- * If a file descriptor is provided, the underlying file will _not_ be closed automatically.
- * @param options Either the encoding for the result, or an object that contains the encoding and an optional flag.
- * If a flag is not provided, it defaults to `'r'`.
- */
- export function readFile(
- path: PathOrFileDescriptor,
- options:
- | (ObjectEncodingOptions & {
- flag?: string | undefined;
- } & Abortable)
- | BufferEncoding
- | undefined
- | null,
- callback: (err: NodeJS.ErrnoException | null, data: string | Buffer) => void
- ): void;
- /**
- * Asynchronously reads the entire contents of a file.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- * If a file descriptor is provided, the underlying file will _not_ be closed automatically.
- */
- export function readFile(path: PathOrFileDescriptor, callback: (err: NodeJS.ErrnoException | null, data: Buffer) => void): void;
- export namespace readFile {
- /**
- * Asynchronously reads the entire contents of a file.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- * If a file descriptor is provided, the underlying file will _not_ be closed automatically.
- * @param options An object that may contain an optional flag.
- * If a flag is not provided, it defaults to `'r'`.
- */
- function __promisify__(
- path: PathOrFileDescriptor,
- options?: {
- encoding?: null | undefined;
- flag?: string | undefined;
- } | null
- ): Promise<Buffer>;
- /**
- * Asynchronously reads the entire contents of a file.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- * URL support is _experimental_.
- * If a file descriptor is provided, the underlying file will _not_ be closed automatically.
- * @param options Either the encoding for the result, or an object that contains the encoding and an optional flag.
- * If a flag is not provided, it defaults to `'r'`.
- */
- function __promisify__(
- path: PathOrFileDescriptor,
- options:
- | {
- encoding: BufferEncoding;
- flag?: string | undefined;
- }
- | BufferEncoding
- ): Promise<string>;
- /**
- * Asynchronously reads the entire contents of a file.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- * URL support is _experimental_.
- * If a file descriptor is provided, the underlying file will _not_ be closed automatically.
- * @param options Either the encoding for the result, or an object that contains the encoding and an optional flag.
- * If a flag is not provided, it defaults to `'r'`.
- */
- function __promisify__(
- path: PathOrFileDescriptor,
- options?:
- | (ObjectEncodingOptions & {
- flag?: string | undefined;
- })
- | BufferEncoding
- | null
- ): Promise<string | Buffer>;
- }
- /**
- * Returns the contents of the `path`.
- *
- * For detailed information, see the documentation of the asynchronous version of
- * this API: {@link readFile}.
- *
- * If the `encoding` option is specified then this function returns a
- * string. Otherwise it returns a buffer.
- *
- * Similar to {@link readFile}, when the path is a directory, the behavior of `fs.readFileSync()` is platform-specific.
- *
- * ```js
- * import { readFileSync } from 'fs';
- *
- * // macOS, Linux, and Windows
- * readFileSync('<directory>');
- * // => [Error: EISDIR: illegal operation on a directory, read <directory>]
- *
- * // FreeBSD
- * readFileSync('<directory>'); // => <data>
- * ```
- * @since v0.1.8
- * @param path filename or file descriptor
- */
- export function readFileSync(
- path: PathOrFileDescriptor,
- options?: {
- encoding?: null | undefined;
- flag?: string | undefined;
- } | null
- ): Buffer;
- /**
- * Synchronously reads the entire contents of a file.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- * If a file descriptor is provided, the underlying file will _not_ be closed automatically.
- * @param options Either the encoding for the result, or an object that contains the encoding and an optional flag.
- * If a flag is not provided, it defaults to `'r'`.
- */
- export function readFileSync(
- path: PathOrFileDescriptor,
- options:
- | {
- encoding: BufferEncoding;
- flag?: string | undefined;
- }
- | BufferEncoding
- ): string;
- /**
- * Synchronously reads the entire contents of a file.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- * If a file descriptor is provided, the underlying file will _not_ be closed automatically.
- * @param options Either the encoding for the result, or an object that contains the encoding and an optional flag.
- * If a flag is not provided, it defaults to `'r'`.
- */
- export function readFileSync(
- path: PathOrFileDescriptor,
- options?:
- | (ObjectEncodingOptions & {
- flag?: string | undefined;
- })
- | BufferEncoding
- | null
- ): string | Buffer;
- export type WriteFileOptions =
- | (ObjectEncodingOptions &
- Abortable & {
- mode?: Mode | undefined;
- flag?: string | undefined;
- })
- | BufferEncoding
- | null;
- /**
- * When `file` is a filename, asynchronously writes data to the file, replacing the
- * file if it already exists. `data` can be a string or a buffer.
- *
- * When `file` is a file descriptor, the behavior is similar to calling `fs.write()` directly (which is recommended). See the notes below on using
- * a file descriptor.
- *
- * The `encoding` option is ignored if `data` is a buffer.
- *
- * The `mode` option only affects the newly created file. See {@link open} for more details.
- *
- * ```js
- * import { writeFile } from 'fs';
- * import { Buffer } from 'buffer';
- *
- * const data = new Uint8Array(Buffer.from('Hello Node.js'));
- * writeFile('message.txt', data, (err) => {
- * if (err) throw err;
- * console.log('The file has been saved!');
- * });
- * ```
- *
- * If `options` is a string, then it specifies the encoding:
- *
- * ```js
- * import { writeFile } from 'fs';
- *
- * writeFile('message.txt', 'Hello Node.js', 'utf8', callback);
- * ```
- *
- * It is unsafe to use `fs.writeFile()` multiple times on the same file without
- * waiting for the callback. For this scenario, {@link createWriteStream} is
- * recommended.
- *
- * Similarly to `fs.readFile`, `fs.writeFile` is a convenience method that
- * performs multiple `write` calls internally to write the buffer passed to it.
- * For performance sensitive code consider using {@link createWriteStream}.
- *
- * It is possible to use an `AbortSignal` to cancel an `fs.writeFile()`.
- * Cancelation is "best effort", and some amount of data is likely still
- * to be written.
- *
- * ```js
- * import { writeFile } from 'fs';
- * import { Buffer } from 'buffer';
- *
- * const controller = new AbortController();
- * const { signal } = controller;
- * const data = new Uint8Array(Buffer.from('Hello Node.js'));
- * writeFile('message.txt', data, { signal }, (err) => {
- * // When a request is aborted - the callback is called with an AbortError
- * });
- * // When the request should be aborted
- * controller.abort();
- * ```
- *
- * Aborting an ongoing request does not abort individual operating
- * system requests but rather the internal buffering `fs.writeFile` performs.
- * @since v0.1.29
- * @param file filename or file descriptor
- */
- export function writeFile(file: PathOrFileDescriptor, data: string | NodeJS.ArrayBufferView, options: WriteFileOptions, callback: NoParamCallback): void;
- /**
- * Asynchronously writes data to a file, replacing the file if it already exists.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- * If a file descriptor is provided, the underlying file will _not_ be closed automatically.
- * @param data The data to write. If something other than a Buffer or Uint8Array is provided, the value is coerced to a string.
- */
- export function writeFile(path: PathOrFileDescriptor, data: string | NodeJS.ArrayBufferView, callback: NoParamCallback): void;
- export namespace writeFile {
- /**
- * Asynchronously writes data to a file, replacing the file if it already exists.
- * @param path A path to a file. If a URL is provided, it must use the `file:` protocol.
- * URL support is _experimental_.
- * If a file descriptor is provided, the underlying file will _not_ be closed automatically.
- * @param data The data to write. If something other than a Buffer or Uint8Array is provided, the value is coerced to a string.
- * @param options Either the encoding for the file, or an object optionally specifying the encoding, file mode, and flag.
- * If `encoding` is not supplied, the default of `'utf8'` is used.
- * If `mode` is not supplied, the default of `0o666` is used.
- * If `mode` is a string, it is parsed as an octal integer.
- * If `flag` is not supplied, the default of `'w'` is used.
- */
- function __promisify__(path: PathOrFileDescriptor, data: string | NodeJS.ArrayBufferView, options?: WriteFileOptions): Promise<void>;
- }
- /**
- * Returns `undefined`.
- *
- * The `mode` option only affects the newly created file. See {@link open} for more details.
- *
- * For detailed information, see the documentation of the asynchronous version of
- * this API: {@link writeFile}.
- * @since v0.1.29
- * @param file filename or file descriptor
- */
- export function writeFileSync(file: PathOrFileDescriptor, data: string | NodeJS.ArrayBufferView, options?: WriteFileOptions): void;
- /**
- * Asynchronously append data to a file, creating the file if it does not yet
- * exist. `data` can be a string or a `Buffer`.
- *
- * The `mode` option only affects the newly created file. See {@link open} for more details.
- *
- * ```js
- * import { appendFile } from 'fs';
- *
- * appendFile('message.txt', 'data to append', (err) => {
- * if (err) throw err;
- * console.log('The "data to append" was appended to file!');
- * });
- * ```
- *
- * If `options` is a string, then it specifies the encoding:
- *
- * ```js
- * import { appendFile } from 'fs';
- *
- * appendFile('message.txt', 'data to append', 'utf8', callback);
- * ```
- *
- * The `path` may be specified as a numeric file descriptor that has been opened
- * for appending (using `fs.open()` or `fs.openSync()`). The file descriptor will
- * not be closed automatically.
- *
- * ```js
- * import { open, close, appendFile } from 'fs';
- *
- * function closeFd(fd) {
- * close(fd, (err) => {
- * if (err) throw err;
- * });
- * }
- *
- * open('message.txt', 'a', (err, fd) => {
- * if (err) throw err;
- *
- * try {
- * appendFile(fd, 'data to append', 'utf8', (err) => {
- * closeFd(fd);
- * if (err) throw err;
- * });
- * } catch (err) {
- * closeFd(fd);
- * throw err;
- * }
- * });
- * ```
- * @since v0.6.7
- * @param path filename or file descriptor
- */
- export function appendFile(path: PathOrFileDescriptor, data: string | Uint8Array, options: WriteFileOptions, callback: NoParamCallback): void;
- /**
- * Asynchronously append data to a file, creating the file if it does not exist.
- * @param file A path to a file. If a URL is provided, it must use the `file:` protocol.
- * If a file descriptor is provided, the underlying file will _not_ be closed automatically.
- * @param data The data to write. If something other than a Buffer or Uint8Array is provided, the value is coerced to a string.
- */
- export function appendFile(file: PathOrFileDescriptor, data: string | Uint8Array, callback: NoParamCallback): void;
- export namespace appendFile {
- /**
- * Asynchronously append data to a file, creating the file if it does not exist.
- * @param file A path to a file. If a URL is provided, it must use the `file:` protocol.
- * URL support is _experimental_.
- * If a file descriptor is provided, the underlying file will _not_ be closed automatically.
- * @param data The data to write. If something other than a Buffer or Uint8Array is provided, the value is coerced to a string.
- * @param options Either the encoding for the file, or an object optionally specifying the encoding, file mode, and flag.
- * If `encoding` is not supplied, the default of `'utf8'` is used.
- * If `mode` is not supplied, the default of `0o666` is used.
- * If `mode` is a string, it is parsed as an octal integer.
- * If `flag` is not supplied, the default of `'a'` is used.
- */
- function __promisify__(file: PathOrFileDescriptor, data: string | Uint8Array, options?: WriteFileOptions): Promise<void>;
- }
- /**
- * Synchronously append data to a file, creating the file if it does not yet
- * exist. `data` can be a string or a `Buffer`.
- *
- * The `mode` option only affects the newly created file. See {@link open} for more details.
- *
- * ```js
- * import { appendFileSync } from 'fs';
- *
- * try {
- * appendFileSync('message.txt', 'data to append');
- * console.log('The "data to append" was appended to file!');
- * } catch (err) {
- * // Handle the error
- * }
- * ```
- *
- * If `options` is a string, then it specifies the encoding:
- *
- * ```js
- * import { appendFileSync } from 'fs';
- *
- * appendFileSync('message.txt', 'data to append', 'utf8');
- * ```
- *
- * The `path` may be specified as a numeric file descriptor that has been opened
- * for appending (using `fs.open()` or `fs.openSync()`). The file descriptor will
- * not be closed automatically.
- *
- * ```js
- * import { openSync, closeSync, appendFileSync } from 'fs';
- *
- * let fd;
- *
- * try {
- * fd = openSync('message.txt', 'a');
- * appendFileSync(fd, 'data to append', 'utf8');
- * } catch (err) {
- * // Handle the error
- * } finally {
- * if (fd !== undefined)
- * closeSync(fd);
- * }
- * ```
- * @since v0.6.7
- * @param path filename or file descriptor
- */
- export function appendFileSync(path: PathOrFileDescriptor, data: string | Uint8Array, options?: WriteFileOptions): void;
- export interface WatchFileOptions {
- bigint?: boolean | undefined;
- persistent?: boolean | undefined;
- interval?: number | undefined;
- }
- /**
- * Watch for changes on `filename`. The callback `listener` will be called each
- * time the file is accessed.
- *
- * The `options` argument may be omitted. If provided, it should be an object. The `options` object may contain a boolean named `persistent` that indicates
- * whether the process should continue to run as long as files are being watched.
- * The `options` object may specify an `interval` property indicating how often the
- * target should be polled in milliseconds.
- *
- * The `listener` gets two arguments, the current stat object and the previous
- * stat object:
- *
- * ```js
- * import { watchFile } from 'fs';
- *
- * watchFile('message.text', (curr, prev) => {
- * console.log(`the current mtime is: ${curr.mtime}`);
- * console.log(`the previous mtime was: ${prev.mtime}`);
- * });
- * ```
- *
- * These stat objects are instances of `fs.Stats`. If the `bigint` option is `true`,
- * the numeric values in these objects are specified as `BigInt`s.
- *
- * To be notified when the file was modified, not just accessed, it is necessary
- * to compare `curr.mtimeMs` and `prev.mtimeMs`.
- *
- * When an `fs.watchFile` operation results in an `ENOENT` error, it
- * will invoke the listener once, with all the fields zeroed (or, for dates, the
- * Unix Epoch). If the file is created later on, the listener will be called
- * again, with the latest stat objects. This is a change in functionality since
- * v0.10.
- *
- * Using {@link watch} is more efficient than `fs.watchFile` and `fs.unwatchFile`. `fs.watch` should be used instead of `fs.watchFile` and `fs.unwatchFile` when possible.
- *
- * When a file being watched by `fs.watchFile()` disappears and reappears,
- * then the contents of `previous` in the second callback event (the file's
- * reappearance) will be the same as the contents of `previous` in the first
- * callback event (its disappearance).
- *
- * This happens when:
- *
- * * the file is deleted, followed by a restore
- * * the file is renamed and then renamed a second time back to its original name
- * @since v0.1.31
- */
- export function watchFile(
- filename: PathLike,
- options:
- | (WatchFileOptions & {
- bigint?: false | undefined;
- })
- | undefined,
- listener: (curr: Stats, prev: Stats) => void
- ): StatWatcher;
- export function watchFile(
- filename: PathLike,
- options:
- | (WatchFileOptions & {
- bigint: true;
- })
- | undefined,
- listener: (curr: BigIntStats, prev: BigIntStats) => void
- ): StatWatcher;
- /**
- * Watch for changes on `filename`. The callback `listener` will be called each time the file is accessed.
- * @param filename A path to a file or directory. If a URL is provided, it must use the `file:` protocol.
- */
- export function watchFile(filename: PathLike, listener: (curr: Stats, prev: Stats) => void): StatWatcher;
- /**
- * Stop watching for changes on `filename`. If `listener` is specified, only that
- * particular listener is removed. Otherwise, _all_ listeners are removed,
- * effectively stopping watching of `filename`.
- *
- * Calling `fs.unwatchFile()` with a filename that is not being watched is a
- * no-op, not an error.
- *
- * Using {@link watch} is more efficient than `fs.watchFile()` and `fs.unwatchFile()`. `fs.watch()` should be used instead of `fs.watchFile()` and `fs.unwatchFile()` when possible.
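- *
- * A brief sketch (hypothetical file name); the same `listener` reference passed to `fs.watchFile()` is reused so only that listener is removed:
- *
- * ```js
- * import { watchFile, unwatchFile } from 'fs';
- *
- * const listener = (curr, prev) => {
- *   console.log(`mtime changed from ${prev.mtime} to ${curr.mtime}`);
- * };
- * watchFile('message.text', listener);
- *
- * // Later, stop watching with that specific listener:
- * unwatchFile('message.text', listener);
- * ```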
- * @since v0.1.31
- * @param listener Optional, a listener previously attached using `fs.watchFile()`
- */
- export function unwatchFile(filename: PathLike, listener?: (curr: Stats, prev: Stats) => void): void;
- export interface WatchOptions extends Abortable {
- encoding?: BufferEncoding | 'buffer' | undefined;
- persistent?: boolean | undefined;
- recursive?: boolean | undefined;
- }
- export type WatchEventType = 'rename' | 'change';
- export type WatchListener<T> = (event: WatchEventType, filename: T) => void;
- /**
- * Watch for changes on `filename`, where `filename` is either a file or a
- * directory.
- *
- * The second argument is optional. If `options` is provided as a string, it
- * specifies the `encoding`. Otherwise `options` should be passed as an object.
- *
- * The listener callback gets two arguments `(eventType, filename)`. `eventType` is either `'rename'` or `'change'`, and `filename` is the name of the file
- * which triggered the event.
- *
- * On most platforms, `'rename'` is emitted whenever a filename appears or
- * disappears in the directory.
- *
- * The listener callback is attached to the `'change'` event fired by `fs.FSWatcher`, but it is not the same thing as the `'change'` value of `eventType`.
- *
- * If a `signal` is passed, aborting the corresponding AbortController will close
- * the returned `fs.FSWatcher`.
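- *
- * A minimal usage sketch; `'somedir'` is a placeholder path:
- *
- * ```js
- * import { watch } from 'fs';
- *
- * const watcher = watch('somedir', (eventType, filename) => {
- *   console.log(`event type: ${eventType}`);
- *   if (filename) {
- *     console.log(`filename provided: ${filename}`);
- *   }
- * });
- * // Call watcher.close() when the watcher is no longer needed.
- * ```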
- * @since v0.5.10
- * @param listener
- */
- export function watch(
- filename: PathLike,
- options:
- | (WatchOptions & {
- encoding: 'buffer';
- })
- | 'buffer',
- listener?: WatchListener<Buffer>
- ): FSWatcher;
- /**
- * Watch for changes on `filename`, where `filename` is either a file or a directory, returning an `FSWatcher`.
- * @param filename A path to a file or directory. If a URL is provided, it must use the `file:` protocol.
- * @param options Either the encoding for the filename provided to the listener, or an object optionally specifying encoding, persistent, and recursive options.
- * If `encoding` is not supplied, the default of `'utf8'` is used.
- * If `persistent` is not supplied, the default of `true` is used.
- * If `recursive` is not supplied, the default of `false` is used.
- */
- export function watch(filename: PathLike, options?: WatchOptions | BufferEncoding | null, listener?: WatchListener<string>): FSWatcher;
- /**
- * Watch for changes on `filename`, where `filename` is either a file or a directory, returning an `FSWatcher`.
- * @param filename A path to a file or directory. If a URL is provided, it must use the `file:` protocol.
- * @param options Either the encoding for the filename provided to the listener, or an object optionally specifying encoding, persistent, and recursive options.
- * If `encoding` is not supplied, the default of `'utf8'` is used.
- * If `persistent` is not supplied, the default of `true` is used.
- * If `recursive` is not supplied, the default of `false` is used.
- */
- export function watch(filename: PathLike, options: WatchOptions | string, listener?: WatchListener<string | Buffer>): FSWatcher;
- /**
- * Watch for changes on `filename`, where `filename` is either a file or a directory, returning an `FSWatcher`.
- * @param filename A path to a file or directory. If a URL is provided, it must use the `file:` protocol.
- */
- export function watch(filename: PathLike, listener?: WatchListener<string>): FSWatcher;
- /**
- * Test whether or not the given path exists by checking with the file system.
- * Then call the `callback` argument with either true or false:
- *
- * ```js
- * import { exists } from 'fs';
- *
- * exists('/etc/passwd', (e) => {
- * console.log(e ? 'it exists' : 'no passwd!');
- * });
- * ```
- *
- * **The parameters for this callback are not consistent with other Node.js**
- * **callbacks.** Normally, the first parameter to a Node.js callback is an `err` parameter, optionally followed by other parameters. The `fs.exists()` callback
- * has only one boolean parameter. This is one reason `fs.access()` is recommended
- * instead of `fs.exists()`.
- *
- * Using `fs.exists()` to check for the existence of a file before calling `fs.open()`, `fs.readFile()` or `fs.writeFile()` is not recommended. Doing
- * so introduces a race condition, since other processes may change the file's
- * state between the two calls. Instead, user code should open/read/write the
- * file directly and handle the error raised if the file does not exist.
- *
- * **write (NOT RECOMMENDED)**
- *
- * ```js
- * import { exists, open, close } from 'fs';
- *
- * exists('myfile', (e) => {
- * if (e) {
- * console.error('myfile already exists');
- * } else {
- * open('myfile', 'wx', (err, fd) => {
- * if (err) throw err;
- *
- * try {
- * writeMyData(fd);
- * } finally {
- * close(fd, (err) => {
- * if (err) throw err;
- * });
- * }
- * });
- * }
- * });
- * ```
- *
- * **write (RECOMMENDED)**
- *
- * ```js
- * import { open, close } from 'fs';
- * open('myfile', 'wx', (err, fd) => {
- * if (err) {
- * if (err.code === 'EEXIST') {
- * console.error('myfile already exists');
- * return;
- * }
- *
- * throw err;
- * }
- *
- * try {
- * writeMyData(fd);
- * } finally {
- * close(fd, (err) => {
- * if (err) throw err;
- * });
- * }
- * });
- * ```
- *
- * **read (NOT RECOMMENDED)**
- *
- * ```js
- * import { open, close, exists } from 'fs';
- *
- * exists('myfile', (e) => {
- * if (e) {
- * open('myfile', 'r', (err, fd) => {
- * if (err) throw err;
- *
- * try {
- * readMyData(fd);
- * } finally {
- * close(fd, (err) => {
- * if (err) throw err;
- * });
- * }
- * });
- * } else {
- * console.error('myfile does not exist');
- * }
- * });
- * ```
- *
- * **read (RECOMMENDED)**
- *
- * ```js
- * import { open, close } from 'fs';
- *
- * open('myfile', 'r', (err, fd) => {
- * if (err) {
- * if (err.code === 'ENOENT') {
- * console.error('myfile does not exist');
- * return;
- * }
- *
- * throw err;
- * }
- *
- * try {
- * readMyData(fd);
- * } finally {
- * close(fd, (err) => {
- * if (err) throw err;
- * });
- * }
- * });
- * ```
- *
- * The "not recommended" examples above check for existence and then use the
- * file; the "recommended" examples are better because they use the file directly
- * and handle the error, if any.
- *
- * In general, check for the existence of a file only if the file won’t be
- * used directly, for example when its existence is a signal from another
- * process.
- * @since v0.0.2
- * @deprecated Since v1.0.0 - Use {@link stat} or {@link access} instead.
- */
- export function exists(path: PathLike, callback: (exists: boolean) => void): void;
- /** @deprecated */
- export namespace exists {
- /**
- * @param path A path to a file or directory. If a URL is provided, it must use the `file:` protocol.
- * URL support is _experimental_.
- */
- function __promisify__(path: PathLike): Promise<boolean>;
- }
- /**
- * Returns `true` if the path exists, `false` otherwise.
- *
- * For detailed information, see the documentation of the asynchronous version of
- * this API: {@link exists}.
- *
- * `fs.exists()` is deprecated, but `fs.existsSync()` is not. The `callback` parameter to `fs.exists()` accepts parameters that are inconsistent with other
- * Node.js callbacks. `fs.existsSync()` does not use a callback.
- *
- * ```js
- * import { existsSync } from 'fs';
- *
- * if (existsSync('/etc/passwd'))
- * console.log('The path exists.');
- * ```
- * @since v0.1.21
- */
- export function existsSync(path: PathLike): boolean;
- export namespace constants {
- // File Access Constants
- /** Constant for fs.access(). File is visible to the calling process. */
- const F_OK: number;
- /** Constant for fs.access(). File can be read by the calling process. */
- const R_OK: number;
- /** Constant for fs.access(). File can be written by the calling process. */
- const W_OK: number;
- /** Constant for fs.access(). File can be executed by the calling process. */
- const X_OK: number;
- // File Copy Constants
- /** Constant for fs.copyFile. Flag indicating the destination file should not be overwritten if it already exists. */
- const COPYFILE_EXCL: number;
- /**
- * Constant for fs.copyFile. copy operation will attempt to create a copy-on-write reflink.
- * If the underlying platform does not support copy-on-write, then a fallback copy mechanism is used.
- */
- const COPYFILE_FICLONE: number;
- /**
- * Constant for fs.copyFile. Copy operation will attempt to create a copy-on-write reflink.
- * If the underlying platform does not support copy-on-write, then the operation will fail with an error.
- */
- const COPYFILE_FICLONE_FORCE: number;
- // File Open Constants
- /** Constant for fs.open(). Flag indicating to open a file for read-only access. */
- const O_RDONLY: number;
- /** Constant for fs.open(). Flag indicating to open a file for write-only access. */
- const O_WRONLY: number;
- /** Constant for fs.open(). Flag indicating to open a file for read-write access. */
- const O_RDWR: number;
- /** Constant for fs.open(). Flag indicating to create the file if it does not already exist. */
- const O_CREAT: number;
- /** Constant for fs.open(). Flag indicating that opening a file should fail if the O_CREAT flag is set and the file already exists. */
- const O_EXCL: number;
- /**
- * Constant for fs.open(). Flag indicating that if path identifies a terminal device,
- * opening the path shall not cause that terminal to become the controlling terminal for the process
- * (if the process does not already have one).
- */
- const O_NOCTTY: number;
- /** Constant for fs.open(). Flag indicating that if the file exists and is a regular file, and the file is opened successfully for write access, its length shall be truncated to zero. */
- const O_TRUNC: number;
- /** Constant for fs.open(). Flag indicating that data will be appended to the end of the file. */
- const O_APPEND: number;
- /** Constant for fs.open(). Flag indicating that the open should fail if the path is not a directory. */
- const O_DIRECTORY: number;
- /**
- * constant for fs.open().
- * Flag indicating reading accesses to the file system will no longer result in
- * an update to the atime information associated with the file.
- * This flag is available on Linux operating systems only.
- */
- const O_NOATIME: number;
- /** Constant for fs.open(). Flag indicating that the open should fail if the path is a symbolic link. */
- const O_NOFOLLOW: number;
- /** Constant for fs.open(). Flag indicating that the file is opened for synchronous I/O. */
- const O_SYNC: number;
- /** Constant for fs.open(). Flag indicating that the file is opened for synchronous I/O with write operations waiting for data integrity. */
- const O_DSYNC: number;
- /** Constant for fs.open(). Flag indicating to open the symbolic link itself rather than the resource it is pointing to. */
- const O_SYMLINK: number;
- /** Constant for fs.open(). When set, an attempt will be made to minimize caching effects of file I/O. */
- const O_DIRECT: number;
- /** Constant for fs.open(). Flag indicating to open the file in nonblocking mode when possible. */
- const O_NONBLOCK: number;
- // File Type Constants
- /** Constant for fs.Stats mode property for determining a file's type. Bit mask used to extract the file type code. */
- const S_IFMT: number;
- /** Constant for fs.Stats mode property for determining a file's type. File type constant for a regular file. */
- const S_IFREG: number;
- /** Constant for fs.Stats mode property for determining a file's type. File type constant for a directory. */
- const S_IFDIR: number;
- /** Constant for fs.Stats mode property for determining a file's type. File type constant for a character-oriented device file. */
- const S_IFCHR: number;
- /** Constant for fs.Stats mode property for determining a file's type. File type constant for a block-oriented device file. */
- const S_IFBLK: number;
- /** Constant for fs.Stats mode property for determining a file's type. File type constant for a FIFO/pipe. */
- const S_IFIFO: number;
- /** Constant for fs.Stats mode property for determining a file's type. File type constant for a symbolic link. */
- const S_IFLNK: number;
- /** Constant for fs.Stats mode property for determining a file's type. File type constant for a socket. */
- const S_IFSOCK: number;
- // File Mode Constants
- /** Constant for fs.Stats mode property for determining access permissions for a file. File mode indicating readable, writable and executable by owner. */
- const S_IRWXU: number;
- /** Constant for fs.Stats mode property for determining access permissions for a file. File mode indicating readable by owner. */
- const S_IRUSR: number;
- /** Constant for fs.Stats mode property for determining access permissions for a file. File mode indicating writable by owner. */
- const S_IWUSR: number;
- /** Constant for fs.Stats mode property for determining access permissions for a file. File mode indicating executable by owner. */
- const S_IXUSR: number;
- /** Constant for fs.Stats mode property for determining access permissions for a file. File mode indicating readable, writable and executable by group. */
- const S_IRWXG: number;
- /** Constant for fs.Stats mode property for determining access permissions for a file. File mode indicating readable by group. */
- const S_IRGRP: number;
- /** Constant for fs.Stats mode property for determining access permissions for a file. File mode indicating writable by group. */
- const S_IWGRP: number;
- /** Constant for fs.Stats mode property for determining access permissions for a file. File mode indicating executable by group. */
- const S_IXGRP: number;
- /** Constant for fs.Stats mode property for determining access permissions for a file. File mode indicating readable, writable and executable by others. */
- const S_IRWXO: number;
- /** Constant for fs.Stats mode property for determining access permissions for a file. File mode indicating readable by others. */
- const S_IROTH: number;
- /** Constant for fs.Stats mode property for determining access permissions for a file. File mode indicating writable by others. */
- const S_IWOTH: number;
- /** Constant for fs.Stats mode property for determining access permissions for a file. File mode indicating executable by others. */
- const S_IXOTH: number;
- /**
- * When set, a memory file mapping is used to access the file. This flag
- * is available on Windows operating systems only. On other operating systems,
- * this flag is ignored.
- */
- const UV_FS_O_FILEMAP: number;
- }
- /**
- * Tests a user's permissions for the file or directory specified by `path`.
- * The `mode` argument is an optional integer that specifies the accessibility
- * checks to be performed. `mode` should be either the value `fs.constants.F_OK` or a mask consisting of the bitwise OR of any of `fs.constants.R_OK`, `fs.constants.W_OK`, and `fs.constants.X_OK`
- * (e.g. `fs.constants.W_OK | fs.constants.R_OK`). Check `File access constants` for
- * possible values of `mode`.
- *
- * The final argument, `callback`, is a callback function that is invoked with
- * a possible error argument. If any of the accessibility checks fail, the error
- * argument will be an `Error` object. The following examples check if `package.json` exists, and if it is readable or writable.
- *
- * ```js
- * import { access, constants } from 'fs';
- *
- * const file = 'package.json';
- *
- * // Check if the file exists in the current directory.
- * access(file, constants.F_OK, (err) => {
- * console.log(`${file} ${err ? 'does not exist' : 'exists'}`);
- * });
- *
- * // Check if the file is readable.
- * access(file, constants.R_OK, (err) => {
- * console.log(`${file} ${err ? 'is not readable' : 'is readable'}`);
- * });
- *
- * // Check if the file is writable.
- * access(file, constants.W_OK, (err) => {
- * console.log(`${file} ${err ? 'is not writable' : 'is writable'}`);
- * });
- *
- * // Check if the file is readable and writable.
- * access(file, constants.R_OK | constants.W_OK, (err) => {
- * console.log(`${file} ${err ? 'is not' : 'is'} readable and writable`);
- * });
- * ```
- *
- * Do not use `fs.access()` to check for the accessibility of a file before calling `fs.open()`, `fs.readFile()` or `fs.writeFile()`. Doing
- * so introduces a race condition, since other processes may change the file's
- * state between the two calls. Instead, user code should open/read/write the
- * file directly and handle the error raised if the file is not accessible.
- *
- * **write (NOT RECOMMENDED)**
- *
- * ```js
- * import { access, open, close } from 'fs';
- *
- * access('myfile', (err) => {
- * if (!err) {
- * console.error('myfile already exists');
- * return;
- * }
- *
- * open('myfile', 'wx', (err, fd) => {
- * if (err) throw err;
- *
- * try {
- * writeMyData(fd);
- * } finally {
- * close(fd, (err) => {
- * if (err) throw err;
- * });
- * }
- * });
- * });
- * ```
- *
- * **write (RECOMMENDED)**
- *
- * ```js
- * import { open, close } from 'fs';
- *
- * open('myfile', 'wx', (err, fd) => {
- * if (err) {
- * if (err.code === 'EEXIST') {
- * console.error('myfile already exists');
- * return;
- * }
- *
- * throw err;
- * }
- *
- * try {
- * writeMyData(fd);
- * } finally {
- * close(fd, (err) => {
- * if (err) throw err;
- * });
- * }
- * });
- * ```
- *
- * **read (NOT RECOMMENDED)**
- *
- * ```js
- * import { access, open, close } from 'fs';
- * access('myfile', (err) => {
- * if (err) {
- * if (err.code === 'ENOENT') {
- * console.error('myfile does not exist');
- * return;
- * }
- *
- * throw err;
- * }
- *
- * open('myfile', 'r', (err, fd) => {
- * if (err) throw err;
- *
- * try {
- * readMyData(fd);
- * } finally {
- * close(fd, (err) => {
- * if (err) throw err;
- * });
- * }
- * });
- * });
- * ```
- *
- * **read (RECOMMENDED)**
- *
- * ```js
- * import { open, close } from 'fs';
- *
- * open('myfile', 'r', (err, fd) => {
- * if (err) {
- * if (err.code === 'ENOENT') {
- * console.error('myfile does not exist');
- * return;
- * }
- *
- * throw err;
- * }
- *
- * try {
- * readMyData(fd);
- * } finally {
- * close(fd, (err) => {
- * if (err) throw err;
- * });
- * }
- * });
- * ```
- *
- * The "not recommended" examples above check for accessibility and then use the
- * file; the "recommended" examples are better because they use the file directly
- * and handle the error, if any.
- *
- * In general, check for the accessibility of a file only if the file will not be
- * used directly, for example when its accessibility is a signal from another
- * process.
- *
- * On Windows, access-control policies (ACLs) on a directory may limit access to
- * a file or directory. The `fs.access()` function, however, does not check the
- * ACL and therefore may report that a path is accessible even if the ACL restricts
- * the user from reading or writing to it.
- * @since v0.11.15
- * @param [mode=fs.constants.F_OK]
- */
- export function access(path: PathLike, mode: number | undefined, callback: NoParamCallback): void;
- /**
- * Asynchronously tests a user's permissions for the file specified by path.
- * @param path A path to a file or directory. If a URL is provided, it must use the `file:` protocol.
- */
- export function access(path: PathLike, callback: NoParamCallback): void;
- export namespace access {
- /**
- * Asynchronously tests a user's permissions for the file specified by path.
- * @param path A path to a file or directory. If a URL is provided, it must use the `file:` protocol.
- * URL support is _experimental_.
- */
- function __promisify__(path: PathLike, mode?: number): Promise<void>;
- }
- /**
- * Synchronously tests a user's permissions for the file or directory specified
- * by `path`. The `mode` argument is an optional integer that specifies the
- * accessibility checks to be performed. `mode` should be either the value `fs.constants.F_OK` or a mask consisting of the bitwise OR of any of `fs.constants.R_OK`, `fs.constants.W_OK`, and
- * `fs.constants.X_OK` (e.g. `fs.constants.W_OK | fs.constants.R_OK`). Check `File access constants` for
- * possible values of `mode`.
- *
- * If any of the accessibility checks fail, an `Error` will be thrown. Otherwise,
- * the method will return `undefined`.
- *
- * ```js
- * import { accessSync, constants } from 'fs';
- *
- * try {
- * accessSync('etc/passwd', constants.R_OK | constants.W_OK);
- * console.log('can read/write');
- * } catch (err) {
- * console.error('no access!');
- * }
- * ```
- * @since v0.11.15
- * @param [mode=fs.constants.F_OK]
- */
- export function accessSync(path: PathLike, mode?: number): void;
- interface StreamOptions {
- flags?: string | undefined;
- encoding?: BufferEncoding | undefined;
- fd?: number | promises.FileHandle | undefined;
- mode?: number | undefined;
- autoClose?: boolean | undefined;
- /**
- * @default false
- */
- emitClose?: boolean | undefined;
- start?: number | undefined;
- highWaterMark?: number | undefined;
- }
- interface ReadStreamOptions extends StreamOptions {
- end?: number | undefined;
- }
- /**
- * Unlike the 16 kb default `highWaterMark` for a `stream.Readable`, the stream
- * returned by this method has a default `highWaterMark` of 64 kb.
- *
- * `options` can include `start` and `end` values to read a range of bytes from
- * the file instead of the entire file. Both `start` and `end` are inclusive and
- * start counting at 0, allowed values are in the
- * \[0, [`Number.MAX_SAFE_INTEGER`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number/MAX_SAFE_INTEGER)\] range. If `fd` is specified and `start` is
- * omitted or `undefined`, `fs.createReadStream()` reads sequentially from the
- * current file position. The `encoding` can be any one of those accepted by `Buffer`.
- *
- * If `fd` is specified, `ReadStream` will ignore the `path` argument and will use
- * the specified file descriptor. This means that no `'open'` event will be
- * emitted. `fd` should be blocking; non-blocking `fd`s should be passed to `net.Socket`.
- *
- * If `fd` points to a character device that only supports blocking reads
- * (such as keyboard or sound card), read operations do not finish until data is
- * available. This can prevent the process from exiting and the stream from
- * closing naturally.
- *
- * By default, the stream will emit a `'close'` event after it has been
- * destroyed. Set the `emitClose` option to `false` to change this behavior.
- *
- * By providing the `fs` option, it is possible to override the corresponding `fs` implementations for `open`, `read`, and `close`. When providing the `fs` option,
- * an override for `read` is required. If no `fd` is provided, an override for `open` is also required. If `autoClose` is `true`, an override for `close` is
- * also required.
- *
- * ```js
- * import { createReadStream } from 'fs';
- *
- * // Create a stream from some character device.
- * const stream = createReadStream('/dev/input/event0');
- * setTimeout(() => {
- * stream.close(); // This may not close the stream.
- * // Artificially marking end-of-stream, as if the underlying resource had
- * // indicated end-of-file by itself, allows the stream to close.
- * // This does not cancel pending read operations, and if there is such an
- * // operation, the process may still not be able to exit successfully
- * // until it finishes.
- * stream.push(null);
- * stream.read(0);
- * }, 100);
- * ```
- *
- * If `autoClose` is false, then the file descriptor won't be closed, even if
- * there's an error. It is the application's responsibility to close it and make
- * sure there's no file descriptor leak. If `autoClose` is set to true (default
- * behavior), on `'error'` or `'end'` the file descriptor will be closed
- * automatically.
- *
- * `mode` sets the file mode (permission and sticky bits), but only if the
- * file was created.
- *
- * An example to read the last 10 bytes of a file which is 100 bytes long:
- *
- * ```js
- * import { createReadStream } from 'fs';
- *
- * createReadStream('sample.txt', { start: 90, end: 99 });
- * ```
- *
- * If `options` is a string, then it specifies the encoding.
- * @since v0.1.31
- */
- export function createReadStream(path: PathLike, options?: BufferEncoding | ReadStreamOptions): ReadStream;
- /**
- * `options` may also include a `start` option to allow writing data at some
- * position past the beginning of the file, allowed values are in the
- * \[0, [`Number.MAX_SAFE_INTEGER`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number/MAX_SAFE_INTEGER)\] range. Modifying a file rather than
- * replacing it may require the `flags` option to be set to `r+` rather than the
- * default `w`. The `encoding` can be any one of those accepted by `Buffer`.
- *
- * If `autoClose` is set to true (default behavior), on `'error'` or `'finish'` the file descriptor will be closed automatically. If `autoClose` is false,
- * then the file descriptor won't be closed, even if there's an error.
- * It is the application's responsibility to close it and make sure there's no
- * file descriptor leak.
- *
- * By default, the stream will emit a `'close'` event after it has been
- * destroyed. Set the `emitClose` option to `false` to change this behavior.
- *
- * By providing the `fs` option it is possible to override the corresponding `fs` implementations for `open`, `write`, `writev` and `close`. Overriding `write()` without `writev()` can reduce
- * performance as some optimizations (`_writev()`)
- * will be disabled. When providing the `fs` option, overrides for at least one of `write` and `writev` are required. If no `fd` option is supplied, an override
- * for `open` is also required. If `autoClose` is `true`, an override for `close` is also required.
- *
- * Like `fs.ReadStream`, if `fd` is specified, `fs.WriteStream` will ignore the `path` argument and will use the specified file descriptor. This means that no `'open'` event will be
- * emitted. `fd` should be blocking; non-blocking `fd`s
- * should be passed to `net.Socket`.
- *
- * If `options` is a string, then it specifies the encoding.
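- *
- * A minimal sketch; `'output.txt'` is a placeholder and the `'a'` flag is only needed when appending:
- *
- * ```js
- * import { createWriteStream } from 'fs';
- *
- * const stream = createWriteStream('output.txt', { flags: 'a', encoding: 'utf8' });
- * stream.write('appended line\n');
- * stream.end();
- * ```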
- * @since v0.1.31
- */
- export function createWriteStream(path: PathLike, options?: BufferEncoding | StreamOptions): WriteStream;
- /**
- * Forces all currently queued I/O operations associated with the file to the
- * operating system's synchronized I/O completion state. Refer to the POSIX [`fdatasync(2)`](http://man7.org/linux/man-pages/man2/fdatasync.2.html) documentation for details. No arguments other
- * than a possible
- * exception are given to the completion callback.
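- *
- * A sketch of flushing pending writes before closing; `'message.txt'` is a placeholder path:
- *
- * ```js
- * import { open, write, fdatasync, close } from 'fs';
- *
- * open('message.txt', 'a', (err, fd) => {
- *   if (err) throw err;
- *   write(fd, 'buffered data', (err) => {
- *     if (err) throw err;
- *     fdatasync(fd, (err) => {
- *       if (err) throw err;
- *       close(fd, (err) => {
- *         if (err) throw err;
- *       });
- *     });
- *   });
- * });
- * ```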
- * @since v0.1.96
- */
- export function fdatasync(fd: number, callback: NoParamCallback): void;
- export namespace fdatasync {
- /**
- * Asynchronous fdatasync(2) - synchronize a file's in-core state with storage device.
- * @param fd A file descriptor.
- */
- function __promisify__(fd: number): Promise<void>;
- }
- /**
- * Forces all currently queued I/O operations associated with the file to the
- * operating system's synchronized I/O completion state. Refer to the POSIX [`fdatasync(2)`](http://man7.org/linux/man-pages/man2/fdatasync.2.html) documentation for details. Returns `undefined`.
- * @since v0.1.96
- */
- export function fdatasyncSync(fd: number): void;
- /**
- * Asynchronously copies `src` to `dest`. By default, `dest` is overwritten if it
- * already exists. No arguments other than a possible exception are given to the
- * callback function. Node.js makes no guarantees about the atomicity of the copy
- * operation. If an error occurs after the destination file has been opened for
- * writing, Node.js will attempt to remove the destination.
- *
- * `mode` is an optional integer that specifies the behavior
- * of the copy operation. It is possible to create a mask consisting of the bitwise
- * OR of two or more values (e.g. `fs.constants.COPYFILE_EXCL | fs.constants.COPYFILE_FICLONE`).
- *
- * * `fs.constants.COPYFILE_EXCL`: The copy operation will fail if `dest` already
- * exists.
- * * `fs.constants.COPYFILE_FICLONE`: The copy operation will attempt to create a
- * copy-on-write reflink. If the platform does not support copy-on-write, then a
- * fallback copy mechanism is used.
- * * `fs.constants.COPYFILE_FICLONE_FORCE`: The copy operation will attempt to
- * create a copy-on-write reflink. If the platform does not support
- * copy-on-write, then the operation will fail.
- *
- * ```js
- * import { copyFile, constants } from 'fs';
- *
- * function callback(err) {
- * if (err) throw err;
- * console.log('source.txt was copied to destination.txt');
- * }
- *
- * // destination.txt will be created or overwritten by default.
- * copyFile('source.txt', 'destination.txt', callback);
- *
- * // By using COPYFILE_EXCL, the operation will fail if destination.txt exists.
- * copyFile('source.txt', 'destination.txt', constants.COPYFILE_EXCL, callback);
- * ```
- * @since v8.5.0
- * @param src source filename to copy
- * @param dest destination filename of the copy operation
- * @param [mode=0] modifiers for copy operation.
- */
- export function copyFile(src: PathLike, dest: PathLike, callback: NoParamCallback): void;
- export function copyFile(src: PathLike, dest: PathLike, mode: number, callback: NoParamCallback): void;
- export namespace copyFile {
- function __promisify__(src: PathLike, dst: PathLike, mode?: number): Promise<void>;
- }
- /**
- * Synchronously copies `src` to `dest`. By default, `dest` is overwritten if it
- * already exists. Returns `undefined`. Node.js makes no guarantees about the
- * atomicity of the copy operation. If an error occurs after the destination file
- * has been opened for writing, Node.js will attempt to remove the destination.
- *
- * `mode` is an optional integer that specifies the behavior
- * of the copy operation. It is possible to create a mask consisting of the bitwise
- * OR of two or more values (e.g. `fs.constants.COPYFILE_EXCL | fs.constants.COPYFILE_FICLONE`).
- *
- * * `fs.constants.COPYFILE_EXCL`: The copy operation will fail if `dest` already
- * exists.
- * * `fs.constants.COPYFILE_FICLONE`: The copy operation will attempt to create a
- * copy-on-write reflink. If the platform does not support copy-on-write, then a
- * fallback copy mechanism is used.
- * * `fs.constants.COPYFILE_FICLONE_FORCE`: The copy operation will attempt to
- * create a copy-on-write reflink. If the platform does not support
- * copy-on-write, then the operation will fail.
- *
- * ```js
- * import { copyFileSync, constants } from 'fs';
- *
- * // destination.txt will be created or overwritten by default.
- * copyFileSync('source.txt', 'destination.txt');
- * console.log('source.txt was copied to destination.txt');
- *
- * // By using COPYFILE_EXCL, the operation will fail if destination.txt exists.
- * copyFileSync('source.txt', 'destination.txt', constants.COPYFILE_EXCL);
- * ```
- * @since v8.5.0
- * @param src source filename to copy
- * @param dest destination filename of the copy operation
- * @param [mode=0] modifiers for copy operation.
- */
- export function copyFileSync(src: PathLike, dest: PathLike, mode?: number): void;
- /**
- * Write an array of `ArrayBufferView`s to the file specified by `fd` using `writev()`.
- *
- * `position` is the offset from the beginning of the file where this data
- * should be written. If `typeof position !== 'number'`, the data will be written
- * at the current position.
- *
- * The callback will be given three arguments: `err`, `bytesWritten`, and `buffers`. `bytesWritten` is how many bytes were written from `buffers`.
- *
- * If this method is `util.promisify()`ed, it returns a promise for an `Object` with `bytesWritten` and `buffers` properties.
- *
- * It is unsafe to use `fs.writev()` multiple times on the same file without
- * waiting for the callback. For this scenario, use {@link createWriteStream}.
- *
- * On Linux, positional writes don't work when the file is opened in append mode.
- * The kernel ignores the position argument and always appends the data to
- * the end of the file.
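- *
- * A minimal sketch; `'output.bin'` is a placeholder path:
- *
- * ```js
- * import { open, writev, close } from 'fs';
- *
- * open('output.bin', 'w', (err, fd) => {
- *   if (err) throw err;
- *   const buffers = [Buffer.from('beep '), Buffer.from('boop')];
- *   writev(fd, buffers, (err, bytesWritten) => {
- *     if (err) throw err;
- *     console.log(`wrote ${bytesWritten} bytes`);
- *     close(fd, (err) => {
- *       if (err) throw err;
- *     });
- *   });
- * });
- * ```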
- * @since v12.9.0
- */
- export function writev(fd: number, buffers: ReadonlyArray<NodeJS.ArrayBufferView>, cb: (err: NodeJS.ErrnoException | null, bytesWritten: number, buffers: NodeJS.ArrayBufferView[]) => void): void;
- export function writev(
- fd: number,
- buffers: ReadonlyArray<NodeJS.ArrayBufferView>,
- position: number,
- cb: (err: NodeJS.ErrnoException | null, bytesWritten: number, buffers: NodeJS.ArrayBufferView[]) => void
- ): void;
- export interface WriteVResult {
- bytesWritten: number;
- buffers: NodeJS.ArrayBufferView[];
- }
- export namespace writev {
- function __promisify__(fd: number, buffers: ReadonlyArray<NodeJS.ArrayBufferView>, position?: number): Promise<WriteVResult>;
- }
- /**
- * For detailed information, see the documentation of the asynchronous version of
- * this API: {@link writev}.
- * @since v12.9.0
- * @return The number of bytes written.
- */
- export function writevSync(fd: number, buffers: ReadonlyArray<NodeJS.ArrayBufferView>, position?: number): number;
- /**
- * Read from a file specified by `fd` and write to an array of `ArrayBufferView`s
- * using `readv()`.
- *
- * `position` is the offset from the beginning of the file from where data
- * should be read. If `typeof position !== 'number'`, the data will be read
- * from the current position.
- *
- * The callback will be given three arguments: `err`, `bytesRead`, and `buffers`. `bytesRead` is how many bytes were read from the file.
- *
- * If this method is invoked as its `util.promisify()`ed version, it returns
- * a promise for an `Object` with `bytesRead` and `buffers` properties.
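- *
- * A minimal sketch; `'input.bin'` is a placeholder and the buffer sizes are arbitrary:
- *
- * ```js
- * import { open, readv, close } from 'fs';
- *
- * open('input.bin', 'r', (err, fd) => {
- *   if (err) throw err;
- *   const buffers = [Buffer.alloc(32), Buffer.alloc(32)];
- *   readv(fd, buffers, (err, bytesRead) => {
- *     if (err) throw err;
- *     console.log(`read ${bytesRead} bytes into the buffers`);
- *     close(fd, (err) => {
- *       if (err) throw err;
- *     });
- *   });
- * });
- * ```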
- * @since v13.13.0, v12.17.0
- */
- export function readv(fd: number, buffers: ReadonlyArray<NodeJS.ArrayBufferView>, cb: (err: NodeJS.ErrnoException | null, bytesRead: number, buffers: NodeJS.ArrayBufferView[]) => void): void;
- export function readv(
- fd: number,
- buffers: ReadonlyArray<NodeJS.ArrayBufferView>,
- position: number,
- cb: (err: NodeJS.ErrnoException | null, bytesRead: number, buffers: NodeJS.ArrayBufferView[]) => void
- ): void;
- export interface ReadVResult {
- bytesRead: number;
- buffers: NodeJS.ArrayBufferView[];
- }
- export namespace readv {
- function __promisify__(fd: number, buffers: ReadonlyArray<NodeJS.ArrayBufferView>, position?: number): Promise<ReadVResult>;
- }
- /**
- * For detailed information, see the documentation of the asynchronous version of
- * this API: {@link readv}.
- * @since v13.13.0, v12.17.0
- * @return The number of bytes read.
- */
- export function readvSync(fd: number, buffers: ReadonlyArray<NodeJS.ArrayBufferView>, position?: number): number;
- export interface OpenDirOptions {
- encoding?: BufferEncoding | undefined;
- /**
- * Number of directory entries that are buffered
- * internally when reading from the directory. Higher values lead to better
- * performance but higher memory usage.
- * @default 32
- */
- bufferSize?: number | undefined;
- }
- /**
- * Synchronously open a directory. See [`opendir(3)`](http://man7.org/linux/man-pages/man3/opendir.3.html).
- *
- * Creates an `fs.Dir`, which contains all further functions for reading from
- * and cleaning up the directory.
- *
- * The `encoding` option sets the encoding for the `path` while opening the
- * directory and subsequent read operations.
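- *
- * A brief sketch of iterating a directory synchronously; `'./some-dir'` is a placeholder path:
- *
- * ```js
- * import { opendirSync } from 'fs';
- *
- * const dir = opendirSync('./some-dir');
- * try {
- *   let dirent;
- *   while ((dirent = dir.readSync()) !== null) {
- *     console.log(dirent.name);
- *   }
- * } finally {
- *   dir.closeSync();
- * }
- * ```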
- * @since v12.12.0
- */
- export function opendirSync(path: PathLike, options?: OpenDirOptions): Dir;
- /**
- * Asynchronously open a directory. See the POSIX [`opendir(3)`](http://man7.org/linux/man-pages/man3/opendir.3.html) documentation for
- * more details.
- *
- * Creates an `fs.Dir`, which contains all further functions for reading from
- * and cleaning up the directory.
- *
- * The `encoding` option sets the encoding for the `path` while opening the
- * directory and subsequent read operations.
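- *
- * A brief sketch reading a single entry; `'./some-dir'` is a placeholder path:
- *
- * ```js
- * import { opendir } from 'fs';
- *
- * opendir('./some-dir', (err, dir) => {
- *   if (err) throw err;
- *   dir.read((err, dirent) => {
- *     if (err) throw err;
- *     console.log(dirent ? dirent.name : 'directory is empty');
- *     dir.close((err) => {
- *       if (err) throw err;
- *     });
- *   });
- * });
- * ```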
- * @since v12.12.0
- */
- export function opendir(path: PathLike, cb: (err: NodeJS.ErrnoException | null, dir: Dir) => void): void;
- export function opendir(path: PathLike, options: OpenDirOptions, cb: (err: NodeJS.ErrnoException | null, dir: Dir) => void): void;
- export namespace opendir {
- function __promisify__(path: PathLike, options?: OpenDirOptions): Promise<Dir>;
- }
- export interface BigIntStats extends StatsBase<bigint> {
- atimeNs: bigint;
- mtimeNs: bigint;
- ctimeNs: bigint;
- birthtimeNs: bigint;
- }
- export interface BigIntOptions {
- bigint: true;
- }
- export interface StatOptions {
- bigint?: boolean | undefined;
- }
- export interface StatSyncOptions extends StatOptions {
- throwIfNoEntry?: boolean | undefined;
- }
- interface CopyOptionsBase {
- /**
- * Dereference symlinks
- * @default false
- */
- dereference?: boolean;
- /**
- * When `force` is `false`, and the destination
- * exists, throw an error.
- * @default false
- */
- errorOnExist?: boolean;
- /**
- * Overwrite existing file or directory. The copy
- * operation will ignore errors if you set this to false and the destination
- * exists. Use the `errorOnExist` option to change this behavior.
- * @default true
- */
- force?: boolean;
- /**
- * When `true` timestamps from `src` will
- * be preserved.
- * @default false
- */
- preserveTimestamps?: boolean;
- /**
- * Copy directories recursively.
- * @default false
- */
- recursive?: boolean;
- /**
- * When true, path resolution for symlinks will be skipped
- * @default false
- */
- verbatimSymlinks?: boolean;
- }
- export interface CopyOptions extends CopyOptionsBase {
- /**
- * Function to filter copied files/directories. Return
- * `true` to copy the item, `false` to ignore it.
- */
- filter?(source: string, destination: string): boolean | Promise<boolean>;
- }
- export interface CopySyncOptions extends CopyOptionsBase {
- /**
- * Function to filter copied files/directories. Return
- * `true` to copy the item, `false` to ignore it.
- */
- filter?(source: string, destination: string): boolean;
- }
- /**
- * Asynchronously copies the entire directory structure from `src` to `dest`,
- * including subdirectories and files.
- *
- * When copying a directory to another directory, globs are not supported and
- * behavior is similar to `cp dir1/ dir2/`.
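- *
- * A minimal sketch; `'src-dir'` and `'dest-dir'` are placeholder paths:
- *
- * ```js
- * import { cp } from 'fs';
- *
- * cp('src-dir', 'dest-dir', { recursive: true }, (err) => {
- *   if (err) throw err;
- *   console.log('directory tree copied');
- * });
- * ```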
- * @since v16.7.0
- * @experimental
- * @param src source path to copy.
- * @param dest destination path to copy to.
- */
- export function cp(source: string | URL, destination: string | URL, callback: (err: NodeJS.ErrnoException | null) => void): void;
- export function cp(source: string | URL, destination: string | URL, opts: CopyOptions, callback: (err: NodeJS.ErrnoException | null) => void): void;
- /**
- * Synchronously copies the entire directory structure from `src` to `dest`,
- * including subdirectories and files.
- *
- * When copying a directory to another directory, globs are not supported and
- * behavior is similar to `cp dir1/ dir2/`.
- * @since v16.7.0
- * @experimental
- * @param src source path to copy.
- * @param dest destination path to copy to.
- */
- export function cpSync(source: string | URL, destination: string | URL, opts?: CopySyncOptions): void;
-}
-declare module 'node:fs' {
- export * from 'fs';
-}
diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/socket.io/node_modules/debug/src/index.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/socket.io/node_modules/debug/src/index.js
deleted file mode 100644
index bf4c57f259df2e16761b45e2636db307c89ba419..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/socket.io/node_modules/debug/src/index.js
+++ /dev/null
@@ -1,10 +0,0 @@
-/**
- * Detect Electron renderer / nwjs process, which is node, but we should
- * treat as a browser.
- */
-
-if (typeof process === 'undefined' || process.type === 'renderer' || process.browser === true || process.__nwjs) {
- module.exports = require('./browser.js');
-} else {
- module.exports = require('./node.js');
-}
diff --git a/spaces/freddyaboulton/3.1.4.9-all-demos/demos/calculator/run.py b/spaces/freddyaboulton/3.1.4.9-all-demos/demos/calculator/run.py
deleted file mode 100644
index ac1bd12235f9f4be5feced048d571bde8f6b9be3..0000000000000000000000000000000000000000
--- a/spaces/freddyaboulton/3.1.4.9-all-demos/demos/calculator/run.py
+++ /dev/null
@@ -1,31 +0,0 @@
-import gradio as gr
-
-def calculator(num1, operation, num2):
- if operation == "add":
- return num1 + num2
- elif operation == "subtract":
- return num1 - num2
- elif operation == "multiply":
- return num1 * num2
- elif operation == "divide":
- return num1 / num2
-
-demo = gr.Interface(
- calculator,
- [
- "number",
- gr.Radio(["add", "subtract", "multiply", "divide"]),
- "number"
- ],
- "number",
- examples=[
- [5, "add", 3],
- [4, "divide", 2],
- [-4, "multiply", 2.5],
- [0, "subtract", 1.2],
- ],
- title="Toy Calculator",
- description="Here's a sample toy calculator. Enjoy!",
-)
-if __name__ == "__main__":
- demo.launch()
diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/parallel/__init__.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/parallel/__init__.py
deleted file mode 100644
index 2ed2c17ad357742e423beeaf4d35db03fe9af469..0000000000000000000000000000000000000000
--- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/parallel/__init__.py
+++ /dev/null
@@ -1,13 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from .collate import collate
-from .data_container import DataContainer
-from .data_parallel import MMDataParallel
-from .distributed import MMDistributedDataParallel
-from .registry import MODULE_WRAPPERS
-from .scatter_gather import scatter, scatter_kwargs
-from .utils import is_module_wrapper
-
-__all__ = [
- 'collate', 'DataContainer', 'MMDataParallel', 'MMDistributedDataParallel',
- 'scatter', 'scatter_kwargs', 'is_module_wrapper', 'MODULE_WRAPPERS'
-]
diff --git a/spaces/givkashi/SwinIR-Super-resolution/predict.py b/spaces/givkashi/SwinIR-Super-resolution/predict.py
deleted file mode 100644
index c0f6b715d6355cba970c5e84659b2b21c3f22a92..0000000000000000000000000000000000000000
--- a/spaces/givkashi/SwinIR-Super-resolution/predict.py
+++ /dev/null
@@ -1,159 +0,0 @@
-import cog
-import tempfile
-from pathlib import Path
-import argparse
-import shutil
-import os
-import cv2
-import glob
-import torch
-from collections import OrderedDict
-import numpy as np
-from main_test_swinir import define_model, setup, get_image_pair
-
-
-class Predictor(cog.Predictor):
- def setup(self):
- model_dir = 'experiments/pretrained_models'
-
- self.model_zoo = {
- 'real_sr': {
- 4: os.path.join(model_dir, '003_realSR_BSRGAN_DFO_s64w8_SwinIR-M_x4_GAN.pth')
- },
- 'gray_dn': {
- 15: os.path.join(model_dir, '004_grayDN_DFWB_s128w8_SwinIR-M_noise15.pth'),
- 25: os.path.join(model_dir, '004_grayDN_DFWB_s128w8_SwinIR-M_noise25.pth'),
- 50: os.path.join(model_dir, '004_grayDN_DFWB_s128w8_SwinIR-M_noise50.pth')
- },
- 'color_dn': {
- 15: os.path.join(model_dir, '005_colorDN_DFWB_s128w8_SwinIR-M_noise15.pth'),
- 25: os.path.join(model_dir, '005_colorDN_DFWB_s128w8_SwinIR-M_noise25.pth'),
- 50: os.path.join(model_dir, '005_colorDN_DFWB_s128w8_SwinIR-M_noise50.pth')
- },
- 'jpeg_car': {
- 10: os.path.join(model_dir, '006_CAR_DFWB_s126w7_SwinIR-M_jpeg10.pth'),
- 20: os.path.join(model_dir, '006_CAR_DFWB_s126w7_SwinIR-M_jpeg20.pth'),
- 30: os.path.join(model_dir, '006_CAR_DFWB_s126w7_SwinIR-M_jpeg30.pth'),
- 40: os.path.join(model_dir, '006_CAR_DFWB_s126w7_SwinIR-M_jpeg40.pth')
- }
- }
-
- parser = argparse.ArgumentParser()
- parser.add_argument('--task', type=str, default='real_sr', help='classical_sr, lightweight_sr, real_sr, '
- 'gray_dn, color_dn, jpeg_car')
- parser.add_argument('--scale', type=int, default=1, help='scale factor: 1, 2, 3, 4, 8') # 1 for dn and jpeg car
- parser.add_argument('--noise', type=int, default=15, help='noise level: 15, 25, 50')
- parser.add_argument('--jpeg', type=int, default=40, help='jpeg compression quality: 10, 20, 30, 40')
- parser.add_argument('--training_patch_size', type=int, default=128, help='patch size used in training SwinIR. '
- 'Just used to differentiate two different settings in Table 2 of the paper. '
- 'Images are NOT tested patch by patch.')
- parser.add_argument('--large_model', action='store_true',
- help='use large model, only provided for real image sr')
- parser.add_argument('--model_path', type=str,
- default=self.model_zoo['real_sr'][4])
- parser.add_argument('--folder_lq', type=str, default=None, help='input low-quality test image folder')
- parser.add_argument('--folder_gt', type=str, default=None, help='input ground-truth test image folder')
-
- self.args = parser.parse_args('')
-
- self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
-
- self.tasks = {
- 'Real-World Image Super-Resolution': 'real_sr',
- 'Grayscale Image Denoising': 'gray_dn',
- 'Color Image Denoising': 'color_dn',
- 'JPEG Compression Artifact Reduction': 'jpeg_car'
- }
-
- @cog.input("image", type=Path, help="input image")
- @cog.input("task_type", type=str, default='Real-World Image Super-Resolution',
- options=['Real-World Image Super-Resolution', 'Grayscale Image Denoising', 'Color Image Denoising',
- 'JPEG Compression Artifact Reduction'],
- help="image restoration task type")
- @cog.input("noise", type=int, default=15, options=[15, 25, 50],
- help='noise level, activated for Grayscale Image Denoising and Color Image Denoising. '
- 'Leave it as default or arbitrary if other tasks are selected')
- @cog.input("jpeg", type=int, default=40, options=[10, 20, 30, 40],
- help='scale factor, activated for JPEG Compression Artifact Reduction. '
- 'Leave it as default or arbitrary if other tasks are selected')
- def predict(self, image, task_type='Real-World Image Super-Resolution', jpeg=40, noise=15):
-
- self.args.task = self.tasks[task_type]
- self.args.noise = noise
- self.args.jpeg = jpeg
-
- # set model path
- if self.args.task == 'real_sr':
- self.args.scale = 4
- self.args.model_path = self.model_zoo[self.args.task][4]
- elif self.args.task in ['gray_dn', 'color_dn']:
- self.args.model_path = self.model_zoo[self.args.task][noise]
- else:
- self.args.model_path = self.model_zoo[self.args.task][jpeg]
-
- try:
- # set input folder
- input_dir = 'input_cog_temp'
- os.makedirs(input_dir, exist_ok=True)
- input_path = os.path.join(input_dir, os.path.basename(image))
- shutil.copy(str(image), input_path)
- if self.args.task == 'real_sr':
- self.args.folder_lq = input_dir
- else:
- self.args.folder_gt = input_dir
-
- model = define_model(self.args)
- model.eval()
- model = model.to(self.device)
-
- # setup folder and path
- folder, save_dir, border, window_size = setup(self.args)
- os.makedirs(save_dir, exist_ok=True)
- test_results = OrderedDict()
- test_results['psnr'] = []
- test_results['ssim'] = []
- test_results['psnr_y'] = []
- test_results['ssim_y'] = []
- test_results['psnr_b'] = []
- # psnr, ssim, psnr_y, ssim_y, psnr_b = 0, 0, 0, 0, 0
- out_path = Path(tempfile.mkdtemp()) / "out.png"
-
- for idx, path in enumerate(sorted(glob.glob(os.path.join(folder, '*')))):
- # read image
- imgname, img_lq, img_gt = get_image_pair(self.args, path) # image to HWC-BGR, float32
- img_lq = np.transpose(img_lq if img_lq.shape[2] == 1 else img_lq[:, :, [2, 1, 0]],
- (2, 0, 1)) # HCW-BGR to CHW-RGB
- img_lq = torch.from_numpy(img_lq).float().unsqueeze(0).to(self.device) # CHW-RGB to NCHW-RGB
-
- # inference
- with torch.no_grad():
- # pad input image to be a multiple of window_size
- _, _, h_old, w_old = img_lq.size()
- h_pad = (h_old // window_size + 1) * window_size - h_old
- w_pad = (w_old // window_size + 1) * window_size - w_old
- img_lq = torch.cat([img_lq, torch.flip(img_lq, [2])], 2)[:, :, :h_old + h_pad, :]
- img_lq = torch.cat([img_lq, torch.flip(img_lq, [3])], 3)[:, :, :, :w_old + w_pad]
- output = model(img_lq)
- output = output[..., :h_old * self.args.scale, :w_old * self.args.scale]
-
- # save image
- output = output.data.squeeze().float().cpu().clamp_(0, 1).numpy()
- if output.ndim == 3:
- output = np.transpose(output[[2, 1, 0], :, :], (1, 2, 0)) # CHW-RGB to HCW-BGR
- output = (output * 255.0).round().astype(np.uint8) # float32 to uint8
- cv2.imwrite(str(out_path), output)
- finally:
- clean_folder(input_dir)
- return out_path
-
-
-def clean_folder(folder):
- for filename in os.listdir(folder):
- file_path = os.path.join(folder, filename)
- try:
- if os.path.isfile(file_path) or os.path.islink(file_path):
- os.unlink(file_path)
- elif os.path.isdir(file_path):
- shutil.rmtree(file_path)
- except Exception as e:
- print('Failed to delete %s. Reason: %s' % (file_path, e))
diff --git a/spaces/glyszt/vt/vtoonify/model/raft/core/extractor.py b/spaces/glyszt/vt/vtoonify/model/raft/core/extractor.py
deleted file mode 100644
index 9a9c759d1243d4694e8656c2f6f8a37e53edd009..0000000000000000000000000000000000000000
--- a/spaces/glyszt/vt/vtoonify/model/raft/core/extractor.py
+++ /dev/null
@@ -1,267 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-
-class ResidualBlock(nn.Module):
- def __init__(self, in_planes, planes, norm_fn='group', stride=1):
- super(ResidualBlock, self).__init__()
-
- self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=3, padding=1, stride=stride)
- self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, padding=1)
- self.relu = nn.ReLU(inplace=True)
-
- num_groups = planes // 8
-
- if norm_fn == 'group':
- self.norm1 = nn.GroupNorm(num_groups=num_groups, num_channels=planes)
- self.norm2 = nn.GroupNorm(num_groups=num_groups, num_channels=planes)
- if not stride == 1:
- self.norm3 = nn.GroupNorm(num_groups=num_groups, num_channels=planes)
-
- elif norm_fn == 'batch':
- self.norm1 = nn.BatchNorm2d(planes)
- self.norm2 = nn.BatchNorm2d(planes)
- if not stride == 1:
- self.norm3 = nn.BatchNorm2d(planes)
-
- elif norm_fn == 'instance':
- self.norm1 = nn.InstanceNorm2d(planes)
- self.norm2 = nn.InstanceNorm2d(planes)
- if not stride == 1:
- self.norm3 = nn.InstanceNorm2d(planes)
-
- elif norm_fn == 'none':
- self.norm1 = nn.Sequential()
- self.norm2 = nn.Sequential()
- if not stride == 1:
- self.norm3 = nn.Sequential()
-
- if stride == 1:
- self.downsample = None
-
- else:
- self.downsample = nn.Sequential(
- nn.Conv2d(in_planes, planes, kernel_size=1, stride=stride), self.norm3)
-
-
- def forward(self, x):
- y = x
- y = self.relu(self.norm1(self.conv1(y)))
- y = self.relu(self.norm2(self.conv2(y)))
-
- if self.downsample is not None:
- x = self.downsample(x)
-
- return self.relu(x+y)
-
-
-
-class BottleneckBlock(nn.Module):
- def __init__(self, in_planes, planes, norm_fn='group', stride=1):
- super(BottleneckBlock, self).__init__()
-
- self.conv1 = nn.Conv2d(in_planes, planes//4, kernel_size=1, padding=0)
- self.conv2 = nn.Conv2d(planes//4, planes//4, kernel_size=3, padding=1, stride=stride)
- self.conv3 = nn.Conv2d(planes//4, planes, kernel_size=1, padding=0)
- self.relu = nn.ReLU(inplace=True)
-
- num_groups = planes // 8
-
- if norm_fn == 'group':
- self.norm1 = nn.GroupNorm(num_groups=num_groups, num_channels=planes//4)
- self.norm2 = nn.GroupNorm(num_groups=num_groups, num_channels=planes//4)
- self.norm3 = nn.GroupNorm(num_groups=num_groups, num_channels=planes)
- if not stride == 1:
- self.norm4 = nn.GroupNorm(num_groups=num_groups, num_channels=planes)
-
- elif norm_fn == 'batch':
- self.norm1 = nn.BatchNorm2d(planes//4)
- self.norm2 = nn.BatchNorm2d(planes//4)
- self.norm3 = nn.BatchNorm2d(planes)
- if not stride == 1:
- self.norm4 = nn.BatchNorm2d(planes)
-
- elif norm_fn == 'instance':
- self.norm1 = nn.InstanceNorm2d(planes//4)
- self.norm2 = nn.InstanceNorm2d(planes//4)
- self.norm3 = nn.InstanceNorm2d(planes)
- if not stride == 1:
- self.norm4 = nn.InstanceNorm2d(planes)
-
- elif norm_fn == 'none':
- self.norm1 = nn.Sequential()
- self.norm2 = nn.Sequential()
- self.norm3 = nn.Sequential()
- if not stride == 1:
- self.norm4 = nn.Sequential()
-
- if stride == 1:
- self.downsample = None
-
- else:
- self.downsample = nn.Sequential(
- nn.Conv2d(in_planes, planes, kernel_size=1, stride=stride), self.norm4)
-
-
- def forward(self, x):
- y = x
- y = self.relu(self.norm1(self.conv1(y)))
- y = self.relu(self.norm2(self.conv2(y)))
- y = self.relu(self.norm3(self.conv3(y)))
-
- if self.downsample is not None:
- x = self.downsample(x)
-
- return self.relu(x+y)
-
-class BasicEncoder(nn.Module):
- def __init__(self, output_dim=128, norm_fn='batch', dropout=0.0):
- super(BasicEncoder, self).__init__()
- self.norm_fn = norm_fn
-
- if self.norm_fn == 'group':
- self.norm1 = nn.GroupNorm(num_groups=8, num_channels=64)
-
- elif self.norm_fn == 'batch':
- self.norm1 = nn.BatchNorm2d(64)
-
- elif self.norm_fn == 'instance':
- self.norm1 = nn.InstanceNorm2d(64)
-
- elif self.norm_fn == 'none':
- self.norm1 = nn.Sequential()
-
- self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3)
- self.relu1 = nn.ReLU(inplace=True)
-
- self.in_planes = 64
- self.layer1 = self._make_layer(64, stride=1)
- self.layer2 = self._make_layer(96, stride=2)
- self.layer3 = self._make_layer(128, stride=2)
-
- # output convolution
- self.conv2 = nn.Conv2d(128, output_dim, kernel_size=1)
-
- self.dropout = None
- if dropout > 0:
- self.dropout = nn.Dropout2d(p=dropout)
-
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
- elif isinstance(m, (nn.BatchNorm2d, nn.InstanceNorm2d, nn.GroupNorm)):
- if m.weight is not None:
- nn.init.constant_(m.weight, 1)
- if m.bias is not None:
- nn.init.constant_(m.bias, 0)
-
- def _make_layer(self, dim, stride=1):
- layer1 = ResidualBlock(self.in_planes, dim, self.norm_fn, stride=stride)
- layer2 = ResidualBlock(dim, dim, self.norm_fn, stride=1)
- layers = (layer1, layer2)
-
- self.in_planes = dim
- return nn.Sequential(*layers)
-
-
- def forward(self, x):
-
- # if input is list, combine batch dimension
- is_list = isinstance(x, tuple) or isinstance(x, list)
- if is_list:
- batch_dim = x[0].shape[0]
- x = torch.cat(x, dim=0)
-
- x = self.conv1(x)
- x = self.norm1(x)
- x = self.relu1(x)
-
- x = self.layer1(x)
- x = self.layer2(x)
- x = self.layer3(x)
-
- x = self.conv2(x)
-
- if self.training and self.dropout is not None:
- x = self.dropout(x)
-
- if is_list:
- x = torch.split(x, [batch_dim, batch_dim], dim=0)
-
- return x
-
-
-class SmallEncoder(nn.Module):
- def __init__(self, output_dim=128, norm_fn='batch', dropout=0.0):
- super(SmallEncoder, self).__init__()
- self.norm_fn = norm_fn
-
- if self.norm_fn == 'group':
- self.norm1 = nn.GroupNorm(num_groups=8, num_channels=32)
-
- elif self.norm_fn == 'batch':
- self.norm1 = nn.BatchNorm2d(32)
-
- elif self.norm_fn == 'instance':
- self.norm1 = nn.InstanceNorm2d(32)
-
- elif self.norm_fn == 'none':
- self.norm1 = nn.Sequential()
-
- self.conv1 = nn.Conv2d(3, 32, kernel_size=7, stride=2, padding=3)
- self.relu1 = nn.ReLU(inplace=True)
-
- self.in_planes = 32
- self.layer1 = self._make_layer(32, stride=1)
- self.layer2 = self._make_layer(64, stride=2)
- self.layer3 = self._make_layer(96, stride=2)
-
- self.dropout = None
- if dropout > 0:
- self.dropout = nn.Dropout2d(p=dropout)
-
- self.conv2 = nn.Conv2d(96, output_dim, kernel_size=1)
-
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
- elif isinstance(m, (nn.BatchNorm2d, nn.InstanceNorm2d, nn.GroupNorm)):
- if m.weight is not None:
- nn.init.constant_(m.weight, 1)
- if m.bias is not None:
- nn.init.constant_(m.bias, 0)
-
- def _make_layer(self, dim, stride=1):
- layer1 = BottleneckBlock(self.in_planes, dim, self.norm_fn, stride=stride)
- layer2 = BottleneckBlock(dim, dim, self.norm_fn, stride=1)
- layers = (layer1, layer2)
-
- self.in_planes = dim
- return nn.Sequential(*layers)
-
-
- def forward(self, x):
-
- # if input is list, combine batch dimension
- is_list = isinstance(x, tuple) or isinstance(x, list)
- if is_list:
- batch_dim = x[0].shape[0]
- x = torch.cat(x, dim=0)
-
- x = self.conv1(x)
- x = self.norm1(x)
- x = self.relu1(x)
-
- x = self.layer1(x)
- x = self.layer2(x)
- x = self.layer3(x)
- x = self.conv2(x)
-
- if self.training and self.dropout is not None:
- x = self.dropout(x)
-
- if is_list:
- x = torch.split(x, [batch_dim, batch_dim], dim=0)
-
- return x
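A minimal usage sketch for the feature encoder above, assuming BasicEncoder and its ResidualBlock dependency are in scope (this is not part of the deleted file); the shapes are illustrative only, and the 1/8 output resolution follows from the stride-2 stem plus two stride-2 layers.

# Hypothetical example, not part of the deleted file.
import torch

fnet = BasicEncoder(output_dim=256, norm_fn='instance', dropout=0.0)
frame1 = torch.randn(1, 3, 384, 512)
frame2 = torch.randn(1, 3, 384, 512)
fmap1, fmap2 = fnet([frame1, frame2])  # list inputs are concatenated, encoded, then split again
print(fmap1.shape)                     # torch.Size([1, 256, 48, 64])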
diff --git a/spaces/gotiQspiryo/whisper-ui/examples/CALD 4 Visual Thesaurus 3.01.md b/spaces/gotiQspiryo/whisper-ui/examples/CALD 4 Visual Thesaurus 3.01.md
deleted file mode 100644
index a6e612e85061b18957b379c1597d6033366a11da..0000000000000000000000000000000000000000
--- a/spaces/gotiQspiryo/whisper-ui/examples/CALD 4 Visual Thesaurus 3.01.md
+++ /dev/null
@@ -1,34 +0,0 @@
-
-
How to Use CALD 4 Visual Thesaurus 3.01 to Improve Your Vocabulary and Writing Skills
-
-
If you are looking for a powerful and easy-to-use tool to enhance your English language learning, you might want to try CALD 4 Visual Thesaurus 3.01. This software combines the features of a dictionary and a thesaurus, with a unique visual interface that helps you explore the meanings and relationships of words.
-
-
CALD stands for Cambridge Advanced Learner's Dictionary, which is one of the most comprehensive and authoritative dictionaries for learners of English. It provides clear definitions, examples, synonyms, antonyms, collocations, idioms, phrasal verbs, and more for over 170,000 words and phrases.
Visual Thesaurus is software that displays words and their meanings in an interactive map, where you can see how they are connected by synonyms, antonyms, hyponyms, hypernyms, meronyms, holonyms, and other semantic relations. You can also hear the pronunciation of each word, see its usage frequency, and find related images and videos on the web.
-
-
CALD 4 Visual Thesaurus 3.01 is the latest version of this software, which integrates the content of CALD 4 with the functionality of Visual Thesaurus 3.01. You can access both the dictionary and the thesaurus from the same interface, and switch between them with a simple click. You can also customize the appearance and behavior of the software according to your preferences and needs.
-
-
Some of the benefits of using CALD 4 Visual Thesaurus 3.01 are:
-
-
-
It helps you expand your vocabulary by showing you synonyms, antonyms, and related words for any word you look up.
-
It helps you improve your writing skills by suggesting alternative words and expressions that fit your context and style.
-
It helps you avoid repetition and redundancy by showing you different ways to say the same thing.
-
It helps you learn new words and concepts by visualizing their meanings and associations in a graphical way.
-
It helps you understand complex words and phrases by breaking them down into simpler components.
CALD 4 Visual Thesaurus 3.01 is a great tool for anyone who wants to improve their English language skills in a fun and effective way. Try it today and see the difference it can make!
-
-
One of the most useful features of CALD 4 Visual Thesaurus 3.01 is the word filter, which allows you to narrow down your search results by part of speech, level of difficulty, domain, register, and usage. For example, you can filter out words that are too formal, informal, technical, slang, or archaic for your purpose. You can also filter out words that are too easy or too difficult for your level of proficiency. This way, you can find the most appropriate and relevant words for your situation.
-
-
-
Another feature that makes CALD 4 Visual Thesaurus 3.01 stand out from other dictionaries and thesauruses is the word explorer, which lets you discover new words and concepts by following the links on the map. You can click on any word to see its definition and examples in the dictionary panel, or double-click on it to see its synonyms and antonyms in the thesaurus panel. You can also drag any word to the center of the map to see its related words and meanings. You can explore as many words as you want, and see how they are connected in a network of semantic relations.
-
-
A third feature that enhances the learning experience of CALD 4 Visual Thesaurus 3.01 is the quiz mode, which tests your knowledge of words and their meanings in a fun and interactive way. You can choose from three types of quizzes: meaning, spelling, and reverse spelling. In each quiz, you will see a word on the map and four possible answers in the quiz panel. You have to select the correct answer before the time runs out. You can also adjust the difficulty level and the number of questions according to your preference. The quiz mode is a great way to review what you have learned and challenge yourself.
-
-
\ No newline at end of file
diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Euro Truck Simulator 2 V1.27.2.9s Incl ALL DLC With Lucky Patcher.md b/spaces/gotiQspiryo/whisper-ui/examples/Euro Truck Simulator 2 V1.27.2.9s Incl ALL DLC With Lucky Patcher.md
deleted file mode 100644
index 112627b911e4b6a02076c9745b067c5355571ea3..0000000000000000000000000000000000000000
--- a/spaces/gotiQspiryo/whisper-ui/examples/Euro Truck Simulator 2 V1.27.2.9s Incl ALL DLC With Lucky Patcher.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
Euro Truck Simulator 2 v1.27.2.9s Incl ALL DLC with lucky patcher
-
-May 05, 2018 c11361aded Ice Age: Collision Course Full Movie Online Free movierulz Download Telugu Dubbed Full Movie Torrent Mp4 HD Videos Upload ... Ice Age 4: Continental Drift (2012)Tamil & Eng & Hind & Telugu. 4d29de3e1b
-
-
-
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Mercedes Benz Navigation Cd Ntg2 Audio 50 Aps Europa Version 14 _TOP_.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Mercedes Benz Navigation Cd Ntg2 Audio 50 Aps Europa Version 14 _TOP_.md
deleted file mode 100644
index 896f8f4bd5032f6cd21307dea1620643278327e8..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/Mercedes Benz Navigation Cd Ntg2 Audio 50 Aps Europa Version 14 _TOP_.md
+++ /dev/null
@@ -1,9 +0,0 @@
-
Mercedes Benz Navigation Cd Ntg2 Audio 50 Aps Europa Version 14
-
-Jun 30, 2020 - Mercedes Benz Navigation Cd Ntg2 Audio 50 Aps Europa Version 14 ○ 14, 2014 - Mercedes NTG2 Audio 50 APS CD v. 15 2013-2014.txt. 226. 3.89 KB. Download. Models: A-Klasse >09/04, 07/08. B-Klasse <07/08. C-Klasse >04/04, . B-Klasse - NTG2 - NTG2 (CD V1 + 2) - Audio 40 Aps.
-Mercedes-Benz B-Klasse (W-244) NTG2 CD V1 2. pdf; Mercedes-Benz A-Klasse.
-In this section of the site you can download Mercedes-Benz repair and operation manuals without any problems; there is a wide range of literature to help you solve your problems with the car.
-Format: PDF Language: Russian Size: 1.35 Gb 8a78ff9644
-
-
-
diff --git a/spaces/inreVtussa/clothingai/Examples/Cytomic The Glue V1 2 1 WiN OSX VERIFIED.md b/spaces/inreVtussa/clothingai/Examples/Cytomic The Glue V1 2 1 WiN OSX VERIFIED.md
deleted file mode 100644
index 517c3d35924330db0afe7e5b20ad3e85df78c64d..0000000000000000000000000000000000000000
--- a/spaces/inreVtussa/clothingai/Examples/Cytomic The Glue V1 2 1 WiN OSX VERIFIED.md
+++ /dev/null
@@ -1,8 +0,0 @@
-
-
At first, I wondered why so many of the controls were represented by buttons - especially the ratio knob, which is a classic filter - but then it dawned on me that this is a digital compressor. Further, The Glue's rate knob is actually an audio-rate filter. The buttons let you jump directly to a setting without having to move the knob. If you search hard enough, you'll find a video tutorial that explains the functions of each control.
-
Although the audio signal is being compressed, The Glue's compressor is also able to offer super clean transient response and low distortion, even with so much compression applied. It's important to note that The Glue only applies compression to the signal, so you don't need to worry about carefully fiddling with the levels to avoid clipping. While this is sometimes a concern when using audio plugins, with The Glue it's easy to manage because there are no peaks and no extreme levels. There are also no sidechain or auxiliary inputs, so the only things that The Glue will affect are the main outputs. The Glue is not particularly transparent, but it's certainly not overbearing.
In some ways, The Glue is more like a compressor than a filter, but the combination of compression and filtering is really nice. Of the plugins that I've tried, I think The Glue is the easiest to understand. Although it has a lot of controls, the interface is intuitive, and it doesn't take long to figure out how to use the plugin. Overall, it's a plugin that is well suited to soft-limiting guitar or vocals.
-
Compression: use one of the three options (normal, fast, slow), which will affect the processing speed. The normal option allows you to take advantage of the professional algorithms that are included. The slow option allows you to use a mellower algorithm that gives you a lot of control over the settings. The fast option will give you a really fast processing speed, but at a lower quality. The normal setting is the one used primarily, as the slow setting will be too slow to really give you a lot of control over the settings.
-
-
\ No newline at end of file
diff --git a/spaces/isididiidid/chatgpt-next-webiii/README.md b/spaces/isididiidid/chatgpt-next-webiii/README.md
deleted file mode 100644
index 35c5cc38c62d43cd952bbc84188a496af59deed5..0000000000000000000000000000000000000000
--- a/spaces/isididiidid/chatgpt-next-webiii/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: Chatgpt Next Web
-emoji: 📈
-colorFrom: green
-colorTo: green
-sdk: docker
-pinned: false
-app_port: 3000
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/ivotai/VITS-Umamusume-voice-synthesizer/ONNXVITS_models.py b/spaces/ivotai/VITS-Umamusume-voice-synthesizer/ONNXVITS_models.py
deleted file mode 100644
index acd00238895d57ba878fd0211d5654250fb10061..0000000000000000000000000000000000000000
--- a/spaces/ivotai/VITS-Umamusume-voice-synthesizer/ONNXVITS_models.py
+++ /dev/null
@@ -1,509 +0,0 @@
-import copy
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import commons
-import ONNXVITS_modules as modules
-import attentions
-import monotonic_align
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-from commons import init_weights, get_padding
-
-
-class StochasticDurationPredictor(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0):
- super().__init__()
- filter_channels = in_channels # it needs to be removed from future version.
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.log_flow = modules.Log()
- self.flows = nn.ModuleList()
- self.flows.append(modules.ElementwiseAffine(2))
- for i in range(n_flows):
- self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3))
- self.flows.append(modules.Flip())
-
- self.post_pre = nn.Conv1d(1, filter_channels, 1)
- self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1)
- self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout)
- self.post_flows = nn.ModuleList()
- self.post_flows.append(modules.ElementwiseAffine(2))
- for i in range(4):
- self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3))
- self.post_flows.append(modules.Flip())
-
- self.pre = nn.Conv1d(in_channels, filter_channels, 1)
- self.proj = nn.Conv1d(filter_channels, filter_channels, 1)
- self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout)
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, filter_channels, 1)
-
- self.w = None
- self.reverse = None
- self.noise_scale = None
- def forward(self, x, x_mask, g=None):
- w = self.w
- reverse = self.reverse
- noise_scale = self.noise_scale
-
- x = torch.detach(x)
- x = self.pre(x)
- if g is not None:
- g = torch.detach(g)
- x = x + self.cond(g)
- x = self.convs(x, x_mask)
- x = self.proj(x) * x_mask
-
- if not reverse:
- flows = self.flows
- assert w is not None
-
- logdet_tot_q = 0
- h_w = self.post_pre(w)
- h_w = self.post_convs(h_w, x_mask)
- h_w = self.post_proj(h_w) * x_mask
- e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask
- z_q = e_q
- for flow in self.post_flows:
- z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w))
- logdet_tot_q += logdet_q
- z_u, z1 = torch.split(z_q, [1, 1], 1)
- u = torch.sigmoid(z_u) * x_mask
- z0 = (w - u) * x_mask
- logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1,2])
- logq = torch.sum(-0.5 * (math.log(2*math.pi) + (e_q**2)) * x_mask, [1,2]) - logdet_tot_q
-
- logdet_tot = 0
- z0, logdet = self.log_flow(z0, x_mask)
- logdet_tot += logdet
- z = torch.cat([z0, z1], 1)
- for flow in flows:
- z, logdet = flow(z, x_mask, g=x, reverse=reverse)
- logdet_tot = logdet_tot + logdet
- nll = torch.sum(0.5 * (math.log(2*math.pi) + (z**2)) * x_mask, [1,2]) - logdet_tot
- return nll + logq # [b]
- else:
- flows = list(reversed(self.flows))
- flows = flows[:-2] + [flows[-1]] # remove a useless vflow
- z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale
- for flow in flows:
- z = flow(z, x_mask, g=x, reverse=reverse)
- z0, z1 = torch.split(z, [1, 1], 1)
- logw = z0
- return logw
-
-
-class TextEncoder(nn.Module):
- def __init__(self,
- n_vocab,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout):
- super().__init__()
- self.n_vocab = n_vocab
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
-
- self.emb = nn.Embedding(n_vocab, hidden_channels)
- nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5)
-
- self.encoder = attentions.Encoder(
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout)
- self.proj= nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths):
- x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h]
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
-
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return x, m, logs, x_mask
-
-
-class ResidualCouplingBlock(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- n_flows=4,
- gin_channels=0):
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
- for i in range(n_flows):
- self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True))
- self.flows.append(modules.Flip())
-
- self.reverse = None
- def forward(self, x, x_mask, g=None):
- reverse = self.reverse
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
-
-class PosteriorEncoder(nn.Module):
- def __init__(self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
-
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels)
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, g=None):
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
- x = self.pre(x) * x_mask # x_in : [b, c, t] -> [b, h, t]
- x = self.enc(x, x_mask, g=g) # x_in : [b, h, t], g : [b, h, 1], x = x_in + g
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask # z, m, logs : [b, h, t]
-
-
-class Generator(torch.nn.Module):
- def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0):
- super(Generator, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
- self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3)
- resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- self.ups.append(weight_norm(
- ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)),
- k, u, padding=(k-u)//2)))
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel//(2**(i+1))
- for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- def forward(self, x, g=None):
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i*self.num_kernels+j](x)
- else:
- xs += self.resblocks[i*self.num_kernels+j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- print('Removing weight norm...')
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- self.use_spectral_norm = use_spectral_norm
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList([
- norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))),
- ])
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList([
- norm_f(Conv1d(1, 16, 15, 1, padding=7)),
- norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
- norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ])
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminator, self).__init__()
- periods = [2,3,5,7,11]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = []
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-
-class SynthesizerTrn(nn.Module):
- """
- Synthesizer for Training
- """
-
- def __init__(self,
- n_vocab,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- n_speakers=0,
- gin_channels=0,
- use_sdp=True,
- **kwargs):
-
- super().__init__()
- self.n_vocab = n_vocab
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.n_speakers = n_speakers
- self.gin_channels = gin_channels
-
- self.use_sdp = use_sdp
-
- self.enc_p = TextEncoder(n_vocab,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout)
- self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels)
- self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels)
- self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels)
-
- self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels)
-
- if n_speakers > 0:
- self.emb_g = nn.Embedding(n_speakers, gin_channels)
-
- def forward(self, x, x_lengths, sid=None, noise_scale=.667, length_scale=1, noise_scale_w=.8, max_len=None):
- torch.onnx.export(
- self.enc_p,
- (x, x_lengths),
- "ONNX_net/enc_p.onnx",
- input_names=["x", "x_lengths"],
- output_names=["xout", "m_p", "logs_p", "x_mask"],
- dynamic_axes={
- "x" : [1],
- "xout" : [2],
- "m_p" : [2],
- "logs_p" : [2],
- "x_mask" : [2]
- },
- verbose=True,
- )
- x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths)
-
- if self.n_speakers > 0:
- g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
- else:
- g = None
-
- self.dp.reverse = True
- self.dp.noise_scale = noise_scale_w
- torch.onnx.export(
- self.dp,
- (x, x_mask, g),
- "ONNX_net/dp.onnx",
- input_names=["x", "x_mask", "g"],
- output_names=["logw"],
- dynamic_axes={
- "x" : [2],
- "x_mask" : [2],
- "logw" : [2]
- },
- verbose=True,
- )
- logw = self.dp(x, x_mask, g=g)
- w = torch.exp(logw) * x_mask * length_scale
- w_ceil = torch.ceil(w)
- y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long()
- y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype)
- attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
- attn = commons.generate_path(w_ceil, attn_mask)
-
- m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t']
- logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t']
-
- z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale
-
- self.flow.reverse = True
- torch.onnx.export(
- self.flow,
- (z_p, y_mask, g),
- "ONNX_net/flow.onnx",
- input_names=["z_p", "y_mask", "g"],
- output_names=["z"],
- dynamic_axes={
- "z_p" : [2],
- "y_mask" : [2],
- "z" : [2]
- },
- verbose=True,
- )
- z = self.flow(z_p, y_mask, g=g)
- z_in = (z * y_mask)[:,:,:max_len]
-
- torch.onnx.export(
- self.dec,
- (z_in, g),
- "ONNX_net/dec.onnx",
- input_names=["z_in", "g"],
- output_names=["o"],
- dynamic_axes={
- "z_in" : [2],
- "o" : [2]
- },
- verbose=True,
- )
- o = self.dec(z_in, g=g)
- return o
diff --git a/spaces/ivy-1911/vits-uma-genshin-honkai/mel_processing.py b/spaces/ivy-1911/vits-uma-genshin-honkai/mel_processing.py
deleted file mode 100644
index 3e252e76320522a8a4195a60665168f22769aec2..0000000000000000000000000000000000000000
--- a/spaces/ivy-1911/vits-uma-genshin-honkai/mel_processing.py
+++ /dev/null
@@ -1,101 +0,0 @@
-import torch
-import torch.utils.data
-from librosa.filters import mel as librosa_mel_fn
-
-MAX_WAV_VALUE = 32768.0
-
-
-def dynamic_range_compression_torch(x, C=1, clip_val=1e-5):
- """
- PARAMS
- ------
- C: compression factor
- """
- return torch.log(torch.clamp(x, min=clip_val) * C)
-
-
-def dynamic_range_decompression_torch(x, C=1):
- """
- PARAMS
- ------
- C: compression factor used to compress
- """
- return torch.exp(x) / C
-
-
-def spectral_normalize_torch(magnitudes):
- output = dynamic_range_compression_torch(magnitudes)
- return output
-
-
-def spectral_de_normalize_torch(magnitudes):
- output = dynamic_range_decompression_torch(magnitudes)
- return output
-
-
-mel_basis = {}
-hann_window = {}
-
-
-def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False):
- if torch.min(y) < -1.:
- print('min value is ', torch.min(y))
- if torch.max(y) > 1.:
- print('max value is ', torch.max(y))
-
- global hann_window
- dtype_device = str(y.dtype) + '_' + str(y.device)
- wnsize_dtype_device = str(win_size) + '_' + dtype_device
- if wnsize_dtype_device not in hann_window:
- hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device)
-
- y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect')
- y = y.squeeze(1)
-
- spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device],
- center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False)
-
- spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
- return spec
-
-
-def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax):
- global mel_basis
- dtype_device = str(spec.dtype) + '_' + str(spec.device)
- fmax_dtype_device = str(fmax) + '_' + dtype_device
- if fmax_dtype_device not in mel_basis:
- mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax)
- mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device)
- spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
- spec = spectral_normalize_torch(spec)
- return spec
-
-
-def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False):
- if torch.min(y) < -1.:
- print('min value is ', torch.min(y))
- if torch.max(y) > 1.:
- print('max value is ', torch.max(y))
-
- global mel_basis, hann_window
- dtype_device = str(y.dtype) + '_' + str(y.device)
- fmax_dtype_device = str(fmax) + '_' + dtype_device
- wnsize_dtype_device = str(win_size) + '_' + dtype_device
- if fmax_dtype_device not in mel_basis:
- mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax)
- mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device)
- if wnsize_dtype_device not in hann_window:
- hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device)
-
- y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect')
- y = y.squeeze(1)
-
-    spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device],
-                      center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False)
-
- spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
-
- spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
- spec = spectral_normalize_torch(spec)
-
- return spec
diff --git a/spaces/jaisidhsingh/cluster-summ/configs/model_config.py b/spaces/jaisidhsingh/cluster-summ/configs/model_config.py
deleted file mode 100644
index 51337d15bb42a5acae0707669fbe31072fc31fb7..0000000000000000000000000000000000000000
--- a/spaces/jaisidhsingh/cluster-summ/configs/model_config.py
+++ /dev/null
@@ -1,11 +0,0 @@
-from types import SimpleNamespace
-
-
-cfg = SimpleNamespace(**{})
-
-# sentence embedding model configs
-cfg.sent_model_name = "sentence-transformers/all-MiniLM-L6-v2"
-cfg.sent_model_seq_limit = 256
-
-# summarization model configs
-
diff --git a/spaces/jhj0517/Whisper-WebUI-Easy-Subtitle-Generator/ui/htmls.py b/spaces/jhj0517/Whisper-WebUI-Easy-Subtitle-Generator/ui/htmls.py
deleted file mode 100644
index 57791d5a67b093f9a65fd398175e8a9d314b266f..0000000000000000000000000000000000000000
--- a/spaces/jhj0517/Whisper-WebUI-Easy-Subtitle-Generator/ui/htmls.py
+++ /dev/null
@@ -1,55 +0,0 @@
-CSS = """
-.bmc-button {
- padding: 2px 5px;
- border-radius: 5px;
- background-color: #FF813F;
- color: white;
- box-shadow: 0px 1px 2px rgba(0, 0, 0, 0.3);
- text-decoration: none;
- display: inline-block;
- font-size: 20px;
- margin: 2px;
- cursor: pointer;
- -webkit-transition: background-color 0.3s ease;
- -ms-transition: background-color 0.3s ease;
- transition: background-color 0.3s ease;
-}
-.bmc-button:hover,
-.bmc-button:active,
-.bmc-button:focus {
- background-color: #FF5633;
-}
-.markdown {
- margin-bottom: 0;
- padding-bottom: 0;
-}
-.tabs {
- margin-top: 0;
- padding-top: 0;
-}
-
-#md_project a {
- color: black;
- text-decoration: none;
-}
-#md_project a:hover {
- text-decoration: underline;
-}
-
-#md_note a {
- color: black;
-}
-#md_note a:hover {
- text-decoration: underline;
-}
-"""
-
-MARKDOWN_TITLE = """
-### [Whisper Web-UI](https://github.com/jhj0517/Whsiper-WebUI)
-"""
-MARKDOWN_NOTE = """
-### To run it locally, check [here !](https://github.com/jhj0517/Whsiper-WebUI)
-Note: Only the **"tiny"** and **"tiny.en"** models are supported in this space due to the free CPU provided by Hugging Face.
-
-If you want to run every Whisper model, you should do it in a local environment.
-"""
\ No newline at end of file
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/encodings/codecs.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/encodings/codecs.py
deleted file mode 100644
index 3ac0268d6a11a1be99bb2cf7fde5979da2853d4a..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/encodings/codecs.py
+++ /dev/null
@@ -1,135 +0,0 @@
-"""Extend the Python codecs module with a few encodings that are used in OpenType (name table)
-but missing from Python. See https://github.com/fonttools/fonttools/issues/236 for details."""
-
-import codecs
-import encodings
-
-
-class ExtendCodec(codecs.Codec):
- def __init__(self, name, base_encoding, mapping):
- self.name = name
- self.base_encoding = base_encoding
- self.mapping = mapping
- self.reverse = {v: k for k, v in mapping.items()}
- self.max_len = max(len(v) for v in mapping.values())
- self.info = codecs.CodecInfo(
- name=self.name, encode=self.encode, decode=self.decode
- )
- codecs.register_error(name, self.error)
-
- def _map(self, mapper, output_type, exc_type, input, errors):
- base_error_handler = codecs.lookup_error(errors)
- length = len(input)
- out = output_type()
- while input:
- # first try to use self.error as the error handler
- try:
- part = mapper(input, self.base_encoding, errors=self.name)
- out += part
- break # All converted
- except exc_type as e:
- # else convert the correct part, handle error as requested and continue
- out += mapper(input[: e.start], self.base_encoding, self.name)
- replacement, pos = base_error_handler(e)
- out += replacement
- input = input[pos:]
- return out, length
-
- def encode(self, input, errors="strict"):
- return self._map(codecs.encode, bytes, UnicodeEncodeError, input, errors)
-
- def decode(self, input, errors="strict"):
- return self._map(codecs.decode, str, UnicodeDecodeError, input, errors)
-
- def error(self, e):
- if isinstance(e, UnicodeDecodeError):
- for end in range(e.start + 1, e.end + 1):
- s = e.object[e.start : end]
- if s in self.mapping:
- return self.mapping[s], end
- elif isinstance(e, UnicodeEncodeError):
- for end in range(e.start + 1, e.start + self.max_len + 1):
- s = e.object[e.start : end]
- if s in self.reverse:
- return self.reverse[s], end
- e.encoding = self.name
- raise e
-
-
-_extended_encodings = {
- "x_mac_japanese_ttx": (
- "shift_jis",
- {
- b"\xFC": chr(0x007C),
- b"\x7E": chr(0x007E),
- b"\x80": chr(0x005C),
- b"\xA0": chr(0x00A0),
- b"\xFD": chr(0x00A9),
- b"\xFE": chr(0x2122),
- b"\xFF": chr(0x2026),
- },
- ),
- "x_mac_trad_chinese_ttx": (
- "big5",
- {
- b"\x80": chr(0x005C),
- b"\xA0": chr(0x00A0),
- b"\xFD": chr(0x00A9),
- b"\xFE": chr(0x2122),
- b"\xFF": chr(0x2026),
- },
- ),
- "x_mac_korean_ttx": (
- "euc_kr",
- {
- b"\x80": chr(0x00A0),
- b"\x81": chr(0x20A9),
- b"\x82": chr(0x2014),
- b"\x83": chr(0x00A9),
- b"\xFE": chr(0x2122),
- b"\xFF": chr(0x2026),
- },
- ),
- "x_mac_simp_chinese_ttx": (
- "gb2312",
- {
- b"\x80": chr(0x00FC),
- b"\xA0": chr(0x00A0),
- b"\xFD": chr(0x00A9),
- b"\xFE": chr(0x2122),
- b"\xFF": chr(0x2026),
- },
- ),
-}
-
-_cache = {}
-
-
-def search_function(name):
- name = encodings.normalize_encoding(name) # Rather undocumented...
- if name in _extended_encodings:
- if name not in _cache:
- base_encoding, mapping = _extended_encodings[name]
- assert name[-4:] == "_ttx"
- # Python 2 didn't have any of the encodings that we are implementing
- # in this file. Python 3 added aliases for the East Asian ones, mapping
- # them "temporarily" to the same base encoding as us, with a comment
- # suggesting that full implementation will appear some time later.
- # As such, try the Python version of the x_mac_... first, if that is found,
- # use *that* as our base encoding. This would make our encoding upgrade
- # to the full encoding when and if Python finally implements that.
- # http://bugs.python.org/issue24041
- base_encodings = [name[:-4], base_encoding]
- for base_encoding in base_encodings:
- try:
- codecs.lookup(base_encoding)
- except LookupError:
- continue
- _cache[name] = ExtendCodec(name, base_encoding, mapping)
- break
- return _cache[name].info
-
- return None
-
-
-codecs.register(search_function)
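A rough usage sketch for the codec extension above, assuming fontTools is installed (this is not part of the deleted file): importing the module runs codecs.register(search_function), after which the "_ttx" encodings resolve by name, and the byte 0xFE, which is not valid Shift JIS on its own, is mapped to U+2122 by the error handler per the x_mac_japanese_ttx table above.

# Hypothetical example, not part of the deleted file.
from fontTools.encodings import codecs as _ft_codecs  # noqa: F401  (importing registers the codecs)

text = b"\x83R\x83\x93\xfe".decode("x_mac_japanese_ttx")
print(text)  # 'コン™'  (Shift JIS bytes plus the extended 0xFE -> TRADE MARK SIGN mapping)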
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttx.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttx.py
deleted file mode 100644
index 65a3c7a808b41fc571d59bac80f7b1085abc6b9b..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttx.py
+++ /dev/null
@@ -1,469 +0,0 @@
-"""\
-usage: ttx [options] inputfile1 [... inputfileN]
-
-TTX -- From OpenType To XML And Back
-
-If an input file is a TrueType or OpenType font file, it will be
-decompiled to a TTX file (an XML-based text format).
-If an input file is a TTX file, it will be compiled to whatever
-format the data is in, a TrueType or OpenType/CFF font file.
-A special input value of - means read from the standard input.
-
-Output files are created so they are unique: an existing file is
-never overwritten.
-
-General options
-===============
-
--h Help print this message.
---version show version and exit.
--d <outputfolder> Specify a directory where the output files are
- to be created.
--o <file> Specify a file to write the output to. A special
- value of - would use the standard output.
--f Overwrite existing output file(s), ie. don't append
- numbers.
--v Verbose: more messages will be written to stdout
- about what is being done.
--q Quiet: No messages will be written to stdout about
- what is being done.
--a allow virtual glyphs ID's on compile or decompile.
-
-Dump options
-============
-
--l List table info: instead of dumping to a TTX file, list
- some minimal info about each table.
--t <table> Specify a table to dump. Multiple -t options
- are allowed. When no -t option is specified, all tables
- will be dumped.
--x <table> Specify a table to exclude from the dump. Multiple
- -x options are allowed. -t and -x are mutually exclusive.
--s Split tables: save the TTX data into separate TTX files per
- table and write one small TTX file that contains references
- to the individual table dumps. This file can be used as
- input to ttx, as long as the table files are in the
- same directory.
--g Split glyf table: Save the glyf data into separate TTX files
- per glyph and write a small TTX for the glyf table which
- contains references to the individual TTGlyph elements.
- NOTE: specifying -g implies -s (no need for -s together
- with -g)
--i Do NOT disassemble TT instructions: when this option is
- given, all TrueType programs (glyph programs, the font
- program and the pre-program) will be written to the TTX
- file as hex data instead of assembly. This saves some time
- and makes the TTX file smaller.
--z <format> Specify a bitmap data export option for EBDT:
- {'raw', 'row', 'bitwise', 'extfile'} or for the CBDT:
- {'raw', 'extfile'} Each option does one of the following:
-
- -z raw
- export the bitmap data as a hex dump
- -z row
- export each row as hex data
- -z bitwise
- export each row as binary in an ASCII art style
- -z extfile
- export the data as external files with XML references
-
- If no export format is specified 'raw' format is used.
--e Don't ignore decompilation errors, but show a full traceback
- and abort.
--y Select font number for TrueType Collection (.ttc/.otc),
- starting from 0.
---unicodedata <UnicodeData.txt>
- Use custom database file to write character names in the
- comments of the cmap TTX output.
---newline <value>
- Control how line endings are written in the XML file. It
- can be 'LF', 'CR', or 'CRLF'. If not specified, the
- default platform-specific line endings are used.
-
-Compile options
-===============
-
--m Merge with TrueType-input-file: specify a TrueType or
- OpenType font file to be merged with the TTX file. This
- option is only valid when at most one TTX file is specified.
--b Don't recalc glyph bounding boxes: use the values in the
- TTX file as-is.
---recalc-timestamp
- Set font 'modified' timestamp to current time.
- By default, the modification time of the TTX file will be
- used.
---no-recalc-timestamp
- Keep the original font 'modified' timestamp.
---flavor <type>
- Specify flavor of output font file. May be 'woff' or 'woff2'.
- Note that WOFF2 requires the Brotli Python extension,
- available at https://github.com/google/brotli
---with-zopfli
- Use Zopfli instead of Zlib to compress WOFF. The Python
- extension is available at https://pypi.python.org/pypi/zopfli
-"""
-
-
-from fontTools.ttLib import TTFont, TTLibError
-from fontTools.misc.macCreatorType import getMacCreatorAndType
-from fontTools.unicode import setUnicodeData
-from fontTools.misc.textTools import Tag, tostr
-from fontTools.misc.timeTools import timestampSinceEpoch
-from fontTools.misc.loggingTools import Timer
-from fontTools.misc.cliTools import makeOutputFileName
-import os
-import sys
-import getopt
-import re
-import logging
-
-
-log = logging.getLogger("fontTools.ttx")
-
-opentypeheaderRE = re.compile("""sfntVersion=['"]OTTO["']""")
-
-
-class Options(object):
-
- listTables = False
- outputDir = None
- outputFile = None
- overWrite = False
- verbose = False
- quiet = False
- splitTables = False
- splitGlyphs = False
- disassembleInstructions = True
- mergeFile = None
- recalcBBoxes = True
- ignoreDecompileErrors = True
- bitmapGlyphDataFormat = "raw"
- unicodedata = None
- newlinestr = "\n"
- recalcTimestamp = None
- flavor = None
- useZopfli = False
-
- def __init__(self, rawOptions, numFiles):
- self.onlyTables = []
- self.skipTables = []
- self.fontNumber = -1
- for option, value in rawOptions:
- # general options
- if option == "-h":
- print(__doc__)
- sys.exit(0)
- elif option == "--version":
- from fontTools import version
-
- print(version)
- sys.exit(0)
- elif option == "-d":
- if not os.path.isdir(value):
- raise getopt.GetoptError(
- "The -d option value must be an existing directory"
- )
- self.outputDir = value
- elif option == "-o":
- self.outputFile = value
- elif option == "-f":
- self.overWrite = True
- elif option == "-v":
- self.verbose = True
- elif option == "-q":
- self.quiet = True
- # dump options
- elif option == "-l":
- self.listTables = True
- elif option == "-t":
- # pad with space if table tag length is less than 4
- value = value.ljust(4)
- self.onlyTables.append(value)
- elif option == "-x":
- # pad with space if table tag length is less than 4
- value = value.ljust(4)
- self.skipTables.append(value)
- elif option == "-s":
- self.splitTables = True
- elif option == "-g":
- # -g implies (and forces) splitTables
- self.splitGlyphs = True
- self.splitTables = True
- elif option == "-i":
- self.disassembleInstructions = False
- elif option == "-z":
- validOptions = ("raw", "row", "bitwise", "extfile")
- if value not in validOptions:
- raise getopt.GetoptError(
- "-z does not allow %s as a format. Use %s"
- % (option, validOptions)
- )
- self.bitmapGlyphDataFormat = value
- elif option == "-y":
- self.fontNumber = int(value)
- # compile options
- elif option == "-m":
- self.mergeFile = value
- elif option == "-b":
- self.recalcBBoxes = False
- elif option == "-e":
- self.ignoreDecompileErrors = False
- elif option == "--unicodedata":
- self.unicodedata = value
- elif option == "--newline":
- validOptions = ("LF", "CR", "CRLF")
- if value == "LF":
- self.newlinestr = "\n"
- elif value == "CR":
- self.newlinestr = "\r"
- elif value == "CRLF":
- self.newlinestr = "\r\n"
- else:
- raise getopt.GetoptError(
- "Invalid choice for --newline: %r (choose from %s)"
- % (value, ", ".join(map(repr, validOptions)))
- )
- elif option == "--recalc-timestamp":
- self.recalcTimestamp = True
- elif option == "--no-recalc-timestamp":
- self.recalcTimestamp = False
- elif option == "--flavor":
- self.flavor = value
- elif option == "--with-zopfli":
- self.useZopfli = True
- if self.verbose and self.quiet:
- raise getopt.GetoptError("-q and -v options are mutually exclusive")
- if self.verbose:
- self.logLevel = logging.DEBUG
- elif self.quiet:
- self.logLevel = logging.WARNING
- else:
- self.logLevel = logging.INFO
- if self.mergeFile and self.flavor:
- raise getopt.GetoptError("-m and --flavor options are mutually exclusive")
- if self.onlyTables and self.skipTables:
- raise getopt.GetoptError("-t and -x options are mutually exclusive")
- if self.mergeFile and numFiles > 1:
- raise getopt.GetoptError(
- "Must specify exactly one TTX source file when using -m"
- )
- if self.flavor != "woff" and self.useZopfli:
- raise getopt.GetoptError("--with-zopfli option requires --flavor 'woff'")
-
-
-def ttList(input, output, options):
- ttf = TTFont(input, fontNumber=options.fontNumber, lazy=True)
- reader = ttf.reader
- tags = sorted(reader.keys())
- print('Listing table info for "%s":' % input)
- format = " %4s %10s %8s %8s"
- print(format % ("tag ", " checksum", " length", " offset"))
- print(format % ("----", "----------", "--------", "--------"))
- for tag in tags:
- entry = reader.tables[tag]
- if ttf.flavor == "woff2":
- # WOFF2 doesn't store table checksums, so they must be calculated
- from fontTools.ttLib.sfnt import calcChecksum
-
- data = entry.loadData(reader.transformBuffer)
- checkSum = calcChecksum(data)
- else:
- checkSum = int(entry.checkSum)
- if checkSum < 0:
- checkSum = checkSum + 0x100000000
- checksum = "0x%08X" % checkSum
- print(format % (tag, checksum, entry.length, entry.offset))
- print()
- ttf.close()
-
-
-@Timer(log, "Done dumping TTX in %(time).3f seconds")
-def ttDump(input, output, options):
- input_name = input
- if input == "-":
- input, input_name = sys.stdin.buffer, sys.stdin.name
- output_name = output
- if output == "-":
- output, output_name = sys.stdout, sys.stdout.name
- log.info('Dumping "%s" to "%s"...', input_name, output_name)
- if options.unicodedata:
- setUnicodeData(options.unicodedata)
- ttf = TTFont(
- input,
- 0,
- ignoreDecompileErrors=options.ignoreDecompileErrors,
- fontNumber=options.fontNumber,
- )
- ttf.saveXML(
- output,
- tables=options.onlyTables,
- skipTables=options.skipTables,
- splitTables=options.splitTables,
- splitGlyphs=options.splitGlyphs,
- disassembleInstructions=options.disassembleInstructions,
- bitmapGlyphDataFormat=options.bitmapGlyphDataFormat,
- newlinestr=options.newlinestr,
- )
- ttf.close()
-
-
-@Timer(log, "Done compiling TTX in %(time).3f seconds")
-def ttCompile(input, output, options):
- input_name = input
- if input == "-":
- input, input_name = sys.stdin, sys.stdin.name
- output_name = output
- if output == "-":
- output, output_name = sys.stdout.buffer, sys.stdout.name
- log.info('Compiling "%s" to "%s"...' % (input_name, output))
- if options.useZopfli:
- from fontTools.ttLib import sfnt
-
- sfnt.USE_ZOPFLI = True
- ttf = TTFont(
- options.mergeFile,
- flavor=options.flavor,
- recalcBBoxes=options.recalcBBoxes,
- recalcTimestamp=options.recalcTimestamp,
- )
- ttf.importXML(input)
-
- if options.recalcTimestamp is None and "head" in ttf and input is not sys.stdin:
- # use TTX file modification time for head "modified" timestamp
- mtime = os.path.getmtime(input)
- ttf["head"].modified = timestampSinceEpoch(mtime)
-
- ttf.save(output)
-
-
-def guessFileType(fileName):
- if fileName == "-":
- header = sys.stdin.buffer.peek(256)
- ext = ""
- else:
- base, ext = os.path.splitext(fileName)
- try:
- with open(fileName, "rb") as f:
- header = f.read(256)
- except IOError:
- return None
-
-    if header.startswith(b"\xef\xbb\xbf<?xml"):
-        header = header.lstrip(b"\xef\xbb\xbf")
- with gr.Row():
- gr.HTML('')
- gr.HTML('')
- gr.HTML('')
- gr.HTML('')
-
- source.change(
- fn = image_changed,
- inputs = [source],
- outputs = [info, json, mask, frontImage, backImage])
-
- btn.click(
- fn = None,
- inputs = [frontImage, backImage, json],
- outputs = [],
- _js="(frontImage, backImage, json) => { initializeEditor(); importPose(json); importPicture(frontImage); importBackground(backImage); return []; }")
-
- demo.load(fn=None, inputs=[], outputs=[], _js="() => { initializeEditor(); importPose(); return []; }")
-
-print("mount")
-app.mount("/static", StaticFiles(directory="static"), name="static")
-app.mount("/js", StaticFiles(directory="js"), name="js")
-gr.mount_gradio_app(app, demo, path="/")
diff --git a/spaces/jordonpeter01/MusicGen2/audiocraft/utils/utils.py b/spaces/jordonpeter01/MusicGen2/audiocraft/utils/utils.py
deleted file mode 100644
index 86e1448d065fa182ca69aae00d2f2a7eea55d8a4..0000000000000000000000000000000000000000
--- a/spaces/jordonpeter01/MusicGen2/audiocraft/utils/utils.py
+++ /dev/null
@@ -1,234 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from concurrent.futures import ProcessPoolExecutor
-from functools import wraps
-import hashlib
-import logging
-import typing as tp
-
-import flashy
-import flashy.distrib
-import omegaconf
-import torch
-from torch.nn.utils.rnn import pad_sequence
-
-
-logger = logging.getLogger(__name__)
-
-
-def dict_from_config(cfg: omegaconf.DictConfig) -> dict:
- """Convenience function to map an omegaconf configuration to a dictionary.
-
- Args:
- cfg (omegaconf.DictConfig): Original configuration to map to dict.
- Returns:
- dict: Config as dictionary object.
- """
- dct = omegaconf.OmegaConf.to_container(cfg, resolve=True)
- assert isinstance(dct, dict)
- return dct
-
-
-def random_subset(dataset, max_samples: int, seed: int = 42) -> torch.utils.data.Subset:
- if max_samples >= len(dataset):
- return dataset
-
- generator = torch.Generator().manual_seed(seed)
- perm = torch.randperm(len(dataset), generator=generator)
- return torch.utils.data.Subset(dataset, perm[:max_samples].tolist())
-
-
-def get_loader(dataset, num_samples: tp.Optional[int], batch_size: int,
- num_workers: int, seed: int, **kwargs) -> torch.utils.data.DataLoader:
- """Convenience function to load dataset into a dataloader with optional subset sampling.
-
- Args:
- dataset: Dataset to load.
- num_samples (Optional[int]): Number of samples to limit subset size.
- batch_size (int): Batch size.
- num_workers (int): Number of workers for data loading.
- seed (int): Random seed.
- """
- if num_samples is not None:
- dataset = random_subset(dataset, num_samples, seed)
-
- dataloader = flashy.distrib.loader(
- dataset,
- batch_size=batch_size,
- num_workers=num_workers,
- **kwargs
- )
- return dataloader
-
-
-def get_dataset_from_loader(dataloader):
- dataset = dataloader.dataset
- if isinstance(dataset, torch.utils.data.Subset):
- return dataset.dataset
- else:
- return dataset
-
-
-def multinomial(input: torch.Tensor, num_samples: int, replacement=False, *, generator=None):
- """torch.multinomial with arbitrary number of dimensions, and number of candidates on the last dimension.
-
- Args:
- input (torch.Tensor): The input tensor containing probabilities.
- num_samples (int): Number of samples to draw.
- replacement (bool): Whether to draw with replacement or not.
- Keywords args:
- generator (torch.Generator): A pseudorandom number generator for sampling.
- Returns:
- torch.Tensor: Last dimension contains num_samples indices
- sampled from the multinomial probability distribution
- located in the last dimension of tensor input.
- """
- input_ = input.reshape(-1, input.shape[-1])
- output_ = torch.multinomial(input_, num_samples=num_samples, replacement=replacement, generator=generator)
- output = output_.reshape(*list(input.shape[:-1]), -1)
- return output
-
-
-def sample_top_k(probs: torch.Tensor, k: int) -> torch.Tensor:
- """Sample next token from top K values along the last dimension of the input probs tensor.
-
- Args:
- probs (torch.Tensor): Input probabilities with token candidates on the last dimension.
- k (int): The k in “top-k”.
- Returns:
- torch.Tensor: Sampled tokens.
- """
- top_k_value, _ = torch.topk(probs, k, dim=-1)
- min_value_top_k = top_k_value[..., [-1]]
- probs *= (probs >= min_value_top_k).float()
- probs.div_(probs.sum(dim=-1, keepdim=True))
- next_token = multinomial(probs, num_samples=1)
- return next_token
-
-
-def sample_top_p(probs: torch.Tensor, p: float) -> torch.Tensor:
- """Sample next token from top P probabilities along the last dimension of the input probs tensor.
-
- Args:
- probs (torch.Tensor): Input probabilities with token candidates on the last dimension.
- p (int): The p in “top-p”.
- Returns:
- torch.Tensor: Sampled tokens.
- """
- probs_sort, probs_idx = torch.sort(probs, dim=-1, descending=True)
- probs_sum = torch.cumsum(probs_sort, dim=-1)
- mask = probs_sum - probs_sort > p
- probs_sort *= (~mask).float()
- probs_sort.div_(probs_sort.sum(dim=-1, keepdim=True))
- next_token = multinomial(probs_sort, num_samples=1)
- next_token = torch.gather(probs_idx, -1, next_token)
- return next_token
-
-
-class DummyPoolExecutor:
- """Dummy pool executor to use when we actually have only 1 worker.
- (e.g. instead of ProcessPoolExecutor).
- """
- class DummyResult:
- def __init__(self, func, *args, **kwargs):
- self.func = func
- self.args = args
- self.kwargs = kwargs
-
- def result(self):
- return self.func(*self.args, **self.kwargs)
-
- def __init__(self, workers, mp_context=None):
- pass
-
- def submit(self, func, *args, **kwargs):
- return DummyPoolExecutor.DummyResult(func, *args, **kwargs)
-
- def __enter__(self):
- return self
-
- def __exit__(self, exc_type, exc_value, exc_tb):
- return
-
-
-def get_pool_executor(num_workers: int, mp_context=None):
- return ProcessPoolExecutor(num_workers, mp_context) if num_workers > 1 else DummyPoolExecutor(1)
-
-
-def length_to_mask(lengths: torch.Tensor, max_len: tp.Optional[int] = None) -> torch.Tensor:
- """Utility function to convert a tensor of sequence lengths to a mask (useful when working on padded sequences).
- For example: [3, 5] => [[1, 1, 1, 0, 0], [1, 1, 1, 1, 1]]
-
- Args:
- lengths (torch.Tensor): tensor with lengths
- max_len (int): can set the max length manually. Defaults to None.
- Returns:
- torch.Tensor: mask with 0s where there is pad tokens else 1s
- """
- assert len(lengths.shape) == 1, "Length shape should be 1 dimensional."
- final_length = lengths.max().item() if not max_len else max_len
- final_length = max(final_length, 1) # if all seqs are of len zero we don't want a zero-size tensor
- return torch.arange(final_length)[None, :].to(lengths.device) < lengths[:, None]
-
-
-def hash_trick(word: str, vocab_size: int) -> int:
- """Hash trick to pair each word with an index
-
- Args:
- word (str): word we wish to convert to an index
- vocab_size (int): size of the vocabulary
- Returns:
- int: index of the word in the embedding LUT
- """
- hash = int(hashlib.sha256(word.encode("utf-8")).hexdigest(), 16)
- return hash % vocab_size
-
-
-def with_rank_rng(base_seed: int = 1234):
- """Decorator for a function so that the function will use a Random Number Generator
- whose state depend on the GPU rank. The original RNG state is restored upon returning.
-
- Args:
- base_seed (int): Random seed.
- """
- def _decorator(fun: tp.Callable):
- @wraps(fun)
- def _decorated(*args, **kwargs):
- state = torch.get_rng_state()
- seed = base_seed ^ flashy.distrib.rank()
- torch.manual_seed(seed)
- logger.debug('Rank dependent seed set to %d', seed)
- try:
- return fun(*args, **kwargs)
- finally:
- torch.set_rng_state(state)
- logger.debug('RNG state restored.')
- return _decorated
- return _decorator
-
-
-def collate(tensors: tp.List[torch.Tensor], dim: int = 0) -> tp.Tuple[torch.Tensor, torch.Tensor]:
- """Get a list of tensors and collate them to a single tensor. according to the following logic:
- - `dim` specifies the time dimension which will be stacked and padded.
- - The output will contain 1 new dimension (dimension index 0) which will be the size of
- of the original list.
-
- Args:
- tensors (tp.List[torch.Tensor]): List of tensors to collate.
- dim (int): Dimension which will be stacked and padded.
- Returns:
- tp.Tuple[torch.Tensor, torch.Tensor]:
- torch.Tensor: Stacked and padded tensor. The output will contain 1 new dimension
- (dimension index 0) which will be the size of the original list.
- torch.Tensor: Tensor containing length of original tensor sizes (without padding).
- """
- tensors = [x.transpose(0, dim) for x in tensors]
- lens = torch.LongTensor([len(x) for x in tensors])
- padded_tensors = pad_sequence(tensors)
- padded_tensors = padded_tensors.transpose(0, 1)
- padded_tensors = padded_tensors.transpose(1, dim + 1)
- return padded_tensors, lens
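For reference, a minimal usage sketch of the padding helpers above (collate and length_to_mask); it assumes only that torch is installed, and the shapes are illustrative:

import torch

# Two variable-length sequences along the time dimension.
a = torch.ones(3, 2)   # length 3
b = torch.ones(5, 2)   # length 5

padded, lens = collate([a, b], dim=0)    # padded: [2, 5, 2], lens: tensor([3, 5])
mask = length_to_mask(lens)              # [[1, 1, 1, 0, 0], [1, 1, 1, 1, 1]] as booleans

# Zero out padded positions before averaging over time.
pooled = (padded * mask[..., None]).sum(dim=1) / lens[..., None]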
diff --git a/spaces/jpfearnworks/ai_agents/modules/knowledge_retrieval/__init__.py b/spaces/jpfearnworks/ai_agents/modules/knowledge_retrieval/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/jsjuan/PlateNumberRecognition/README.md b/spaces/jsjuan/PlateNumberRecognition/README.md
deleted file mode 100644
index e33ff5002025f14a581a8026c431d2e1ed2f07d1..0000000000000000000000000000000000000000
--- a/spaces/jsjuan/PlateNumberRecognition/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: PlateNumberRecognition
-emoji: 🐠
-colorFrom: gray
-colorTo: indigo
-sdk: gradio
-sdk_version: 2.8.12
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/kdrkdrkdr/ProsekaTTS/text/__init__.py b/spaces/kdrkdrkdr/ProsekaTTS/text/__init__.py
deleted file mode 100644
index 4e69c354dd24e3243980236eca962cd5945a92fc..0000000000000000000000000000000000000000
--- a/spaces/kdrkdrkdr/ProsekaTTS/text/__init__.py
+++ /dev/null
@@ -1,32 +0,0 @@
-""" from https://github.com/keithito/tacotron """
-from text import cleaners
-
-
-def text_to_sequence(text, symbols, cleaner_names):
- '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text.
- Args:
- text: string to convert to a sequence
- symbols: list of symbols; their indices define the ID mapping
- cleaner_names: names of the cleaner functions to run the text through
- Returns:
- List of integers corresponding to the symbols in the text
- '''
- _symbol_to_id = {s: i for i, s in enumerate(symbols)}
-
- sequence = []
-
- clean_text = _clean_text(text, cleaner_names)
- for symbol in clean_text:
- if symbol not in _symbol_to_id.keys():
- continue
- symbol_id = _symbol_to_id[symbol]
- sequence += [symbol_id]
- return sequence
-
-
-def _clean_text(text, cleaner_names):
- for name in cleaner_names:
- cleaner = getattr(cleaners, name)
- if not cleaner:
- raise Exception('Unknown cleaner: %s' % name)
- text = cleaner(text)
- return text
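A short sketch of how text_to_sequence above is typically called; the symbol list and the "basic_cleaners" name are placeholders here (the real ones come from this repo's symbols and cleaners modules):

# Hypothetical symbol inventory; the repo defines the real one.
symbols = list("_abcdefghijklmnopqrstuvwxyz !?,.")

ids = text_to_sequence("Hello world!", symbols, ["basic_cleaners"])
print(ids)  # one integer per cleaned character that appears in `symbols`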
diff --git a/spaces/keras-io/keras-image-classifier/README.md b/spaces/keras-io/keras-image-classifier/README.md
deleted file mode 100644
index 7be7ce7a501244647e6260dcb8f44a6697d763e9..0000000000000000000000000000000000000000
--- a/spaces/keras-io/keras-image-classifier/README.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: Keras Inception Classifier
-emoji: ❤️
-colorFrom: pink
-colorTo: red
-sdk: gradio
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/keras-io/shiftvit/utils/predict.py b/spaces/keras-io/shiftvit/utils/predict.py
deleted file mode 100644
index 211073367addf506f705b7dce1bf8f6a361f7b14..0000000000000000000000000000000000000000
--- a/spaces/keras-io/shiftvit/utils/predict.py
+++ /dev/null
@@ -1,87 +0,0 @@
-import numpy as np
-import tensorflow as tf
-from tensorflow import keras
-from huggingface_hub import from_pretrained_keras
-from .lr_schedule import WarmUpCosine
-from .constants import Config, class_vocab
-from keras.utils import load_img, img_to_array
-from tensorflow_addons.optimizers import AdamW
-import matplotlib.pyplot as plt
-import pandas as pd
-import random
-config = Config()
-
-##Load Model
-model = from_pretrained_keras("keras-io/shiftvit", custom_objects={"WarmUpCosine":WarmUpCosine, "AdamW": AdamW})
-
-(x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data()
-
-
-AUTO = tf.data.AUTOTUNE
-
-def predict(image_path):
- """
- This function is used for fetching predictions corresponding to input_image.
-
- It outputs confidence scores corresponding to each class on which the model was trained
- """
-
- test_image1 = load_img(image_path,target_size =(32,32))
- test_image = img_to_array(test_image1)
- test_image = np.expand_dims(test_image, axis =0)
- test_image = test_image.astype('uint8')
-
-
- predict_ds = tf.data.Dataset.from_tensor_slices(test_image)
- predict_ds = predict_ds.shuffle(config.buffer_size).batch(config.batch_size).prefetch(AUTO)
- logits = model.predict(predict_ds)
- prob = tf.nn.softmax(logits)
-
- confidences = {}
- prob_list = prob.numpy().flatten().tolist()
- sorted_prob = np.argsort(prob)[::-1].flatten()
- for i in sorted_prob:
- confidences[class_vocab[i]] = float(prob_list[i])
-
- return confidences
-
-
-def predict_batch(image_path):
-
- test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test))
- test_ds = test_ds.batch(config.batch_size).prefetch(AUTO)
- slice = test_ds.take(1)
-
- slice_pred = model.predict(slice)
- slice_pred = tf.nn.softmax(slice_pred)
-
- saved_plot = "plot.jpg"
- fig = plt.figure()
-
- predictions_df = pd.DataFrame()
- num = random.randint(0,50)
- for images, labels in slice:
- for i, j in zip(range(num,num+6), range(6)):
- ax = plt.subplot(3, 3, j + 1)
- plt.imshow(images[i].numpy().astype("uint8"))
- output = np.argmax(slice_pred[i])
-
- prob_list = slice_pred[i].numpy().flatten().tolist()
- sorted_prob = np.argsort(slice_pred[i])[::-1].flatten()
- prob_scores = {"image": "image "+ str(j), "first": f"predicted {class_vocab[sorted_prob[0]]} with {round(prob_list[sorted_prob[0]] * 100,2)}% confidence",
- "second": f"predicted {class_vocab[sorted_prob[1]]} is {round(prob_list[sorted_prob[1]] * 100,2)}% confidence",
- "third": f"predicted {class_vocab[sorted_prob[2]]} is {round(prob_list[sorted_prob[2]] * 100,2)}% confidence"}
- predictions_df = predictions_df.append(prob_scores,ignore_index=True)
-
- plt.title(f"image {j} : {class_vocab[output]}")
- plt.axis("off")
- plt.savefig(saved_plot,bbox_inches='tight')
-
- return saved_plot, predictions_df
-
-
-
-
-
-
-
diff --git a/spaces/kevinwang676/ChatGLM2-SadTalker/src/utils/text2speech.py b/spaces/kevinwang676/ChatGLM2-SadTalker/src/utils/text2speech.py
deleted file mode 100644
index 00d165b6cc7774fd200929aafa0ff3b15916111e..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/ChatGLM2-SadTalker/src/utils/text2speech.py
+++ /dev/null
@@ -1,20 +0,0 @@
-import os
-import tempfile
-from TTS.api import TTS
-
-
-class TTSTalker():
- def __init__(self) -> None:
- model_name = TTS.list_models()[0]
- self.tts = TTS(model_name)
-
- def test(self, text, language='en'):
-
- tempf = tempfile.NamedTemporaryFile(
- delete = False,
- suffix = ('.'+'wav'),
- )
-
- self.tts.tts_to_file(text, speaker=self.tts.speakers[0], language=language, file_path=tempf.name)
-
- return tempf.name
\ No newline at end of file
diff --git a/spaces/kevinwang676/VoiceChanger/src/face3d/models/arcface_torch/losses.py b/spaces/kevinwang676/VoiceChanger/src/face3d/models/arcface_torch/losses.py
deleted file mode 100644
index 87aeaa107af4d53f5a6132b3739d5cafdcded7fc..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/VoiceChanger/src/face3d/models/arcface_torch/losses.py
+++ /dev/null
@@ -1,42 +0,0 @@
-import torch
-from torch import nn
-
-
-def get_loss(name):
- if name == "cosface":
- return CosFace()
- elif name == "arcface":
- return ArcFace()
- else:
- raise ValueError()
-
-
-class CosFace(nn.Module):
- def __init__(self, s=64.0, m=0.40):
- super(CosFace, self).__init__()
- self.s = s
- self.m = m
-
- def forward(self, cosine, label):
- index = torch.where(label != -1)[0]
- m_hot = torch.zeros(index.size()[0], cosine.size()[1], device=cosine.device)
- m_hot.scatter_(1, label[index, None], self.m)
- cosine[index] -= m_hot
- ret = cosine * self.s
- return ret
-
-
-class ArcFace(nn.Module):
- def __init__(self, s=64.0, m=0.5):
- super(ArcFace, self).__init__()
- self.s = s
- self.m = m
-
- def forward(self, cosine: torch.Tensor, label):
- index = torch.where(label != -1)[0]
- m_hot = torch.zeros(index.size()[0], cosine.size()[1], device=cosine.device)
- m_hot.scatter_(1, label[index, None], self.m)
- cosine.acos_()
- cosine[index] += m_hot
- cosine.cos_().mul_(self.s)
- return cosine
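A sketch of how these margin losses are usually combined with a standard cross-entropy head; the cosine logits here are random stand-ins for real embedding-to-class-centre similarities:

import torch
import torch.nn.functional as F

batch, num_classes = 4, 10
cosine = torch.rand(batch, num_classes) * 2 - 1          # cosine similarities in [-1, 1]
label = torch.randint(0, num_classes, (batch,))

margin_fn = get_loss("arcface")                          # or "cosface"
logits = margin_fn(cosine, label)                        # margin-adjusted, scaled logits
loss = F.cross_entropy(logits, label)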
diff --git a/spaces/kevinwang676/VoiceChangers/src/utils/hparams.py b/spaces/kevinwang676/VoiceChangers/src/utils/hparams.py
deleted file mode 100644
index 743c5c7d5a5a9e686f1ccd6fb3c2fb5cb382d62b..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/VoiceChangers/src/utils/hparams.py
+++ /dev/null
@@ -1,160 +0,0 @@
-from glob import glob
-import os
-
-class HParams:
- def __init__(self, **kwargs):
- self.data = {}
-
- for key, value in kwargs.items():
- self.data[key] = value
-
- def __getattr__(self, key):
- if key not in self.data:
- raise AttributeError("'HParams' object has no attribute %s" % key)
- return self.data[key]
-
- def set_hparam(self, key, value):
- self.data[key] = value
-
-
-# Default hyperparameters
-hparams = HParams(
- num_mels=80, # Number of mel-spectrogram channels and local conditioning dimensionality
- # network
- rescale=True, # Whether to rescale audio prior to preprocessing
- rescaling_max=0.9, # Rescaling value
-
- # Use LWS (https://github.com/Jonathan-LeRoux/lws) for STFT and phase reconstruction
- # It"s preferred to set True to use with https://github.com/r9y9/wavenet_vocoder
- # Does not work if n_ffit is not multiple of hop_size!!
- use_lws=False,
-
- n_fft=800, # Extra window size is filled with 0 paddings to match this parameter
- hop_size=200, # For 16000Hz, 200 = 12.5 ms (0.0125 * sample_rate)
- win_size=800, # For 16000Hz, 800 = 50 ms (If None, win_size = n_fft) (0.05 * sample_rate)
- sample_rate=16000, # 16000Hz (corresponding to librispeech) (sox --i )
-
- frame_shift_ms=None, # Can replace hop_size parameter. (Recommended: 12.5)
-
- # Mel and Linear spectrograms normalization/scaling and clipping
- signal_normalization=True,
- # Whether to normalize mel spectrograms to some predefined range (following below parameters)
- allow_clipping_in_normalization=True, # Only relevant if mel_normalization = True
- symmetric_mels=True,
- # Whether to scale the data to be symmetric around 0. (Also multiplies the output range by 2,
- # faster and cleaner convergence)
- max_abs_value=4.,
- # max absolute value of data. If symmetric, data will be [-max, max] else [0, max] (Must not
- # be too big to avoid gradient explosion,
- # not too small for fast convergence)
- # Contribution by @begeekmyfriend
- # Spectrogram Pre-Emphasis (Lfilter: Reduce spectrogram noise and helps model certitude
- # levels. Also allows for better G&L phase reconstruction)
- preemphasize=True, # whether to apply filter
- preemphasis=0.97, # filter coefficient.
-
- # Limits
- min_level_db=-100,
- ref_level_db=20,
- fmin=55,
- # Set this to 55 if your speaker is male! if female, 95 should help taking off noise. (To
- # test depending on dataset. Pitch info: male~[65, 260], female~[100, 525])
- fmax=7600, # To be increased/reduced depending on data.
-
- ###################### Our training parameters #################################
- img_size=96,
- fps=25,
-
- batch_size=16,
- initial_learning_rate=1e-4,
- nepochs=300000, ### ctrl + c, stop whenever eval loss is consistently greater than train loss for ~10 epochs
- num_workers=20,
- checkpoint_interval=3000,
- eval_interval=3000,
- writer_interval=300,
- save_optimizer_state=True,
-
- syncnet_wt=0.0, # is initially zero, will be set automatically to 0.03 later. Leads to faster convergence.
- syncnet_batch_size=64,
- syncnet_lr=1e-4,
- syncnet_eval_interval=1000,
- syncnet_checkpoint_interval=10000,
-
- disc_wt=0.07,
- disc_initial_learning_rate=1e-4,
-)
-
-
-
-# Default hyperparameters
-hparamsdebug = HParams(
- num_mels=80, # Number of mel-spectrogram channels and local conditioning dimensionality
- # network
- rescale=True, # Whether to rescale audio prior to preprocessing
- rescaling_max=0.9, # Rescaling value
-
- # Use LWS (https://github.com/Jonathan-LeRoux/lws) for STFT and phase reconstruction
- # It"s preferred to set True to use with https://github.com/r9y9/wavenet_vocoder
- # Does not work if n_ffit is not multiple of hop_size!!
- use_lws=False,
-
- n_fft=800, # Extra window size is filled with 0 paddings to match this parameter
- hop_size=200, # For 16000Hz, 200 = 12.5 ms (0.0125 * sample_rate)
- win_size=800, # For 16000Hz, 800 = 50 ms (If None, win_size = n_fft) (0.05 * sample_rate)
- sample_rate=16000, # 16000Hz (corresponding to librispeech) (sox --i )
-
- frame_shift_ms=None, # Can replace hop_size parameter. (Recommended: 12.5)
-
- # Mel and Linear spectrograms normalization/scaling and clipping
- signal_normalization=True,
- # Whether to normalize mel spectrograms to some predefined range (following below parameters)
- allow_clipping_in_normalization=True, # Only relevant if mel_normalization = True
- symmetric_mels=True,
- # Whether to scale the data to be symmetric around 0. (Also multiplies the output range by 2,
- # faster and cleaner convergence)
- max_abs_value=4.,
- # max absolute value of data. If symmetric, data will be [-max, max] else [0, max] (Must not
- # be too big to avoid gradient explosion,
- # not too small for fast convergence)
- # Contribution by @begeekmyfriend
- # Spectrogram Pre-Emphasis (Lfilter: Reduce spectrogram noise and helps model certitude
- # levels. Also allows for better G&L phase reconstruction)
- preemphasize=True, # whether to apply filter
- preemphasis=0.97, # filter coefficient.
-
- # Limits
- min_level_db=-100,
- ref_level_db=20,
- fmin=55,
- # Set this to 55 if your speaker is male! if female, 95 should help taking off noise. (To
- # test depending on dataset. Pitch info: male~[65, 260], female~[100, 525])
- fmax=7600, # To be increased/reduced depending on data.
-
- ###################### Our training parameters #################################
- img_size=96,
- fps=25,
-
- batch_size=2,
- initial_learning_rate=1e-3,
- nepochs=100000, ### ctrl + c, stop whenever eval loss is consistently greater than train loss for ~10 epochs
- num_workers=0,
- checkpoint_interval=10000,
- eval_interval=10,
- writer_interval=5,
- save_optimizer_state=True,
-
- syncnet_wt=0.0, # is initially zero, will be set automatically to 0.03 later. Leads to faster convergence.
- syncnet_batch_size=64,
- syncnet_lr=1e-4,
- syncnet_eval_interval=10000,
- syncnet_checkpoint_interval=10000,
-
- disc_wt=0.07,
- disc_initial_learning_rate=1e-4,
-)
-
-
-def hparams_debug_string():
- values = hparams.data  # HParams keeps its fields in this dict; it defines no values() method
- hp = [" %s: %s" % (name, values[name]) for name in sorted(values) if name != "sentences"]
- return "Hyperparameters:\n" + "\n".join(hp)
diff --git a/spaces/khizon/emotion-classifier-demo/app.py b/spaces/khizon/emotion-classifier-demo/app.py
deleted file mode 100644
index ff45f09a5923e5af329d5fc9d79e6786643fa782..0000000000000000000000000000000000000000
--- a/spaces/khizon/emotion-classifier-demo/app.py
+++ /dev/null
@@ -1,83 +0,0 @@
-import numpy as np
-import pandas as pd
-from main import SpeechClassifierOutput, Wav2Vec2ForSpeechClassification
-from datasets import load_dataset
-from transformers import AutoConfig, Wav2Vec2Processor
-import torchaudio
-import torch
-import torch.nn.functional as F
-import seaborn as sns
-import matplotlib.pyplot as plt
-import streamlit as st
-import os
-
-sns.set_theme(style="darkgrid", palette="pastel")
-
-def demo_speech_file_to_array_fn(path):
- speech_array, _sampling_rate = torchaudio.load(path, normalize=True)
- resampler = torchaudio.transforms.Resample(_sampling_rate, 16_000)
- speech = resampler(speech_array).squeeze().numpy()
- return speech
-
-def demo_predict(df_row):
- path, emotion = df_row["path"], df_row["emotion"]
- speech = demo_speech_file_to_array_fn(path)
- features = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
-
- input_values = features.input_values.to(device)
- attention_mask = features.attention_mask.to(device)
-
- with torch.no_grad():
- logits = model(input_values, attention_mask=attention_mask).logits
-
- scores = F.softmax(logits, dim=1).detach().cpu().numpy()[0]
- outputs = [{"Emotion": config.id2label[i], "Score": round(score * 100, 3)} for i, score in enumerate(scores)]
- return outputs
-
-@st.cache(allow_output_mutation=True)
-def cache_model():
- device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
- model_name_or_path = 'khizon/greek-speech-emotion-classifier-demo'
- generic_greek_model = 'lighteternal/wav2vec2-large-xlsr-53-greek'
- config = AutoConfig.from_pretrained(model_name_or_path)
- processor = Wav2Vec2Processor.from_pretrained(generic_greek_model)
- model = Wav2Vec2ForSpeechClassification.from_pretrained(model_name_or_path).to(device)
- return config, processor, model, device
-
-@st.cache
-def load_data():
- return pd.read_csv('data/test.csv', delimiter = '\t')
-
-def bar_plot(df):
- fig = plt.figure(figsize=(8, 6))
- plt.title("Prediction Scores")
- plt.xticks(fontsize=12)
- plt.xlim(0,100)
- sns.barplot(x="Score", y="Emotion", data=df)
- st.pyplot(fig)
-
-if __name__ == '__main__':
- if not os.path.exists('/home/user/app/aesdd.zip'):
- os.system('python download_dataset.py')
-
- test = load_data()
-
- config, processor, model, device = cache_model()
- print('Model loaded')
-
- st.title("Emotion Classifier for Greek Speech Audio Demo")
- if st.button("Classify Random Audio"):
- # Load demo file
- idx = np.random.randint(0, len(test))
- sample = test.iloc[idx]
-
- audio_file = open(sample['path'], 'rb')
- audio_bytes = audio_file.read()
-
- st.success(f'Label: {sample["emotion"]}')
- st.audio(audio_bytes, format='audio/ogg')
-
- outputs = demo_predict(sample)
- r = pd.DataFrame(outputs)
- # st.dataframe(r)
- bar_plot(r)
\ No newline at end of file
diff --git a/spaces/kira4424/Tacotron-zero-short-voice-clone/web/api/__init__.py b/spaces/kira4424/Tacotron-zero-short-voice-clone/web/api/__init__.py
deleted file mode 100644
index a0c8726d6b4456830e947b7165cf77ff1879361f..0000000000000000000000000000000000000000
--- a/spaces/kira4424/Tacotron-zero-short-voice-clone/web/api/__init__.py
+++ /dev/null
@@ -1,16 +0,0 @@
-from flask import Blueprint
-from flask_restx import Api
-from .audio import api as audio
-from .synthesizer import api as synthesizer
-
-api_blueprint = Blueprint('api', __name__, url_prefix='/api')
-
-api = Api(
- app=api_blueprint,
- title='Mocking Bird',
- version='1.0',
- description='My API'
-)
-
-api.add_namespace(audio)
-api.add_namespace(synthesizer)
\ No newline at end of file
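A sketch of how this blueprint would normally be mounted on a Flask application (the import path matches this repo; flask and flask_restx must be installed):

from flask import Flask
from web.api import api_blueprint

app = Flask(__name__)
app.register_blueprint(api_blueprint)   # exposes the REST endpoints under /api

if __name__ == "__main__":
    app.run(port=8080)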
diff --git a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/image/io.py b/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/image/io.py
deleted file mode 100644
index d3fa2e8cc06b1a7b0b69de6406980b15d61a1e5d..0000000000000000000000000000000000000000
--- a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/image/io.py
+++ /dev/null
@@ -1,258 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import io
-import os.path as osp
-from pathlib import Path
-
-import cv2
-import numpy as np
-from cv2 import (IMREAD_COLOR, IMREAD_GRAYSCALE, IMREAD_IGNORE_ORIENTATION,
- IMREAD_UNCHANGED)
-
-from annotator.uniformer.mmcv.utils import check_file_exist, is_str, mkdir_or_exist
-
-try:
- from turbojpeg import TJCS_RGB, TJPF_BGR, TJPF_GRAY, TurboJPEG
-except ImportError:
- TJCS_RGB = TJPF_GRAY = TJPF_BGR = TurboJPEG = None
-
-try:
- from PIL import Image, ImageOps
-except ImportError:
- Image = None
-
-try:
- import tifffile
-except ImportError:
- tifffile = None
-
-jpeg = None
-supported_backends = ['cv2', 'turbojpeg', 'pillow', 'tifffile']
-
-imread_flags = {
- 'color': IMREAD_COLOR,
- 'grayscale': IMREAD_GRAYSCALE,
- 'unchanged': IMREAD_UNCHANGED,
- 'color_ignore_orientation': IMREAD_IGNORE_ORIENTATION | IMREAD_COLOR,
- 'grayscale_ignore_orientation':
- IMREAD_IGNORE_ORIENTATION | IMREAD_GRAYSCALE
-}
-
-imread_backend = 'cv2'
-
-
-def use_backend(backend):
- """Select a backend for image decoding.
-
- Args:
- backend (str): The image decoding backend type. Options are `cv2`,
- `pillow`, `turbojpeg` (see https://github.com/lilohuang/PyTurboJPEG)
- and `tifffile`. `turbojpeg` is faster but it only supports `.jpeg`
- file format.
- """
- assert backend in supported_backends
- global imread_backend
- imread_backend = backend
- if imread_backend == 'turbojpeg':
- if TurboJPEG is None:
- raise ImportError('`PyTurboJPEG` is not installed')
- global jpeg
- if jpeg is None:
- jpeg = TurboJPEG()
- elif imread_backend == 'pillow':
- if Image is None:
- raise ImportError('`Pillow` is not installed')
- elif imread_backend == 'tifffile':
- if tifffile is None:
- raise ImportError('`tifffile` is not installed')
-
-
-def _jpegflag(flag='color', channel_order='bgr'):
- channel_order = channel_order.lower()
- if channel_order not in ['rgb', 'bgr']:
- raise ValueError('channel order must be either "rgb" or "bgr"')
-
- if flag == 'color':
- if channel_order == 'bgr':
- return TJPF_BGR
- elif channel_order == 'rgb':
- return TJCS_RGB
- elif flag == 'grayscale':
- return TJPF_GRAY
- else:
- raise ValueError('flag must be "color" or "grayscale"')
-
-
-def _pillow2array(img, flag='color', channel_order='bgr'):
- """Convert a pillow image to numpy array.
-
- Args:
- img (:obj:`PIL.Image.Image`): The image loaded using PIL
- flag (str): Flags specifying the color type of a loaded image,
- candidates are 'color', 'grayscale' and 'unchanged'.
- Default to 'color'.
- channel_order (str): The channel order of the output image array,
- candidates are 'bgr' and 'rgb'. Default to 'bgr'.
-
- Returns:
- np.ndarray: The converted numpy array
- """
- channel_order = channel_order.lower()
- if channel_order not in ['rgb', 'bgr']:
- raise ValueError('channel order must be either "rgb" or "bgr"')
-
- if flag == 'unchanged':
- array = np.array(img)
- if array.ndim >= 3 and array.shape[2] >= 3: # color image
- array[:, :, :3] = array[:, :, (2, 1, 0)] # RGB to BGR
- else:
- # Handle exif orientation tag
- if flag in ['color', 'grayscale']:
- img = ImageOps.exif_transpose(img)
- # If the image mode is not 'RGB', convert it to 'RGB' first.
- if img.mode != 'RGB':
- if img.mode != 'LA':
- # Most formats except 'LA' can be directly converted to RGB
- img = img.convert('RGB')
- else:
- # When the mode is 'LA', the default conversion will fill in
- # the canvas with black, which sometimes shadows black objects
- # in the foreground.
- #
- # Therefore, a random color (124, 117, 104) is used for canvas
- img_rgba = img.convert('RGBA')
- img = Image.new('RGB', img_rgba.size, (124, 117, 104))
- img.paste(img_rgba, mask=img_rgba.split()[3]) # 3 is alpha
- if flag in ['color', 'color_ignore_orientation']:
- array = np.array(img)
- if channel_order != 'rgb':
- array = array[:, :, ::-1] # RGB to BGR
- elif flag in ['grayscale', 'grayscale_ignore_orientation']:
- img = img.convert('L')
- array = np.array(img)
- else:
- raise ValueError(
- 'flag must be "color", "grayscale", "unchanged", '
- f'"color_ignore_orientation" or "grayscale_ignore_orientation"'
- f' but got {flag}')
- return array
-
-
-def imread(img_or_path, flag='color', channel_order='bgr', backend=None):
- """Read an image.
-
- Args:
- img_or_path (ndarray or str or Path): Either a numpy array or str or
- pathlib.Path. If it is a numpy array (loaded image), then
- it will be returned as is.
- flag (str): Flags specifying the color type of a loaded image,
- candidates are `color`, `grayscale`, `unchanged`,
- `color_ignore_orientation` and `grayscale_ignore_orientation`.
- By default, `cv2` and `pillow` backend would rotate the image
- according to its EXIF info unless called with `unchanged` or
- `*_ignore_orientation` flags. `turbojpeg` and `tifffile` backend
- always ignore image's EXIF info regardless of the flag.
- The `turbojpeg` backend only supports `color` and `grayscale`.
- channel_order (str): Order of channel, candidates are `bgr` and `rgb`.
- backend (str | None): The image decoding backend type. Options are
- `cv2`, `pillow`, `turbojpeg`, `tifffile`, `None`.
- If backend is None, the global imread_backend specified by
- ``mmcv.use_backend()`` will be used. Default: None.
-
- Returns:
- ndarray: Loaded image array.
- """
-
- if backend is None:
- backend = imread_backend
- if backend not in supported_backends:
- raise ValueError(f'backend: {backend} is not supported. Supported '
- "backends are 'cv2', 'turbojpeg', 'pillow'")
- if isinstance(img_or_path, Path):
- img_or_path = str(img_or_path)
-
- if isinstance(img_or_path, np.ndarray):
- return img_or_path
- elif is_str(img_or_path):
- check_file_exist(img_or_path,
- f'img file does not exist: {img_or_path}')
- if backend == 'turbojpeg':
- with open(img_or_path, 'rb') as in_file:
- img = jpeg.decode(in_file.read(),
- _jpegflag(flag, channel_order))
- if img.shape[-1] == 1:
- img = img[:, :, 0]
- return img
- elif backend == 'pillow':
- img = Image.open(img_or_path)
- img = _pillow2array(img, flag, channel_order)
- return img
- elif backend == 'tifffile':
- img = tifffile.imread(img_or_path)
- return img
- else:
- flag = imread_flags[flag] if is_str(flag) else flag
- img = cv2.imread(img_or_path, flag)
- if flag == IMREAD_COLOR and channel_order == 'rgb':
- cv2.cvtColor(img, cv2.COLOR_BGR2RGB, img)
- return img
- else:
- raise TypeError('"img" must be a numpy array or a str or '
- 'a pathlib.Path object')
-
-
-def imfrombytes(content, flag='color', channel_order='bgr', backend=None):
- """Read an image from bytes.
-
- Args:
- content (bytes): Image bytes got from files or other streams.
- flag (str): Same as :func:`imread`.
- backend (str | None): The image decoding backend type. Options are
- `cv2`, `pillow`, `turbojpeg`, `None`. If backend is None, the
- global imread_backend specified by ``mmcv.use_backend()`` will be
- used. Default: None.
-
- Returns:
- ndarray: Loaded image array.
- """
-
- if backend is None:
- backend = imread_backend
- if backend not in supported_backends:
- raise ValueError(f'backend: {backend} is not supported. Supported '
- "backends are 'cv2', 'turbojpeg', 'pillow'")
- if backend == 'turbojpeg':
- img = jpeg.decode(content, _jpegflag(flag, channel_order))
- if img.shape[-1] == 1:
- img = img[:, :, 0]
- return img
- elif backend == 'pillow':
- buff = io.BytesIO(content)
- img = Image.open(buff)
- img = _pillow2array(img, flag, channel_order)
- return img
- else:
- img_np = np.frombuffer(content, np.uint8)
- flag = imread_flags[flag] if is_str(flag) else flag
- img = cv2.imdecode(img_np, flag)
- if flag == IMREAD_COLOR and channel_order == 'rgb':
- cv2.cvtColor(img, cv2.COLOR_BGR2RGB, img)
- return img
-
-
-def imwrite(img, file_path, params=None, auto_mkdir=True):
- """Write image to file.
-
- Args:
- img (ndarray): Image array to be written.
- file_path (str): Image file path.
- params (None or list): Same as opencv :func:`imwrite` interface.
- auto_mkdir (bool): If the parent folder of `file_path` does not exist,
- whether to create it automatically.
-
- Returns:
- bool: Successful or not.
- """
- if auto_mkdir:
- dir_name = osp.abspath(osp.dirname(file_path))
- mkdir_or_exist(dir_name)
- return cv2.imwrite(file_path, img, params)
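A minimal usage sketch for the I/O helpers above; the file paths are placeholders:

import numpy as np

# use_backend('pillow')                        # optionally switch the global decoder
img = imread('example.jpg', flag='color', channel_order='rgb')
assert isinstance(img, np.ndarray)

gray = imread('example.jpg', flag='grayscale')
imwrite(gray, 'out/example_gray.png')          # the parent folder is created automatically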
diff --git a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/models/necks/fpn.py b/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/models/necks/fpn.py
deleted file mode 100644
index a53b2a69500f8c2edb835abc3ff0ccc2173d1fb1..0000000000000000000000000000000000000000
--- a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/models/necks/fpn.py
+++ /dev/null
@@ -1,212 +0,0 @@
-import torch.nn as nn
-import torch.nn.functional as F
-from annotator.uniformer.mmcv.cnn import ConvModule, xavier_init
-
-from ..builder import NECKS
-
-
-@NECKS.register_module()
-class FPN(nn.Module):
- """Feature Pyramid Network.
-
- This is an implementation of - Feature Pyramid Networks for Object
- Detection (https://arxiv.org/abs/1612.03144)
-
- Args:
- in_channels (List[int]): Number of input channels per scale.
- out_channels (int): Number of output channels (used at each scale)
- num_outs (int): Number of output scales.
- start_level (int): Index of the start input backbone level used to
- build the feature pyramid. Default: 0.
- end_level (int): Index of the end input backbone level (exclusive) to
- build the feature pyramid. Default: -1, which means the last level.
- add_extra_convs (bool | str): If bool, it decides whether to add conv
- layers on top of the original feature maps. Default to False.
- If True, its actual mode is specified by `extra_convs_on_inputs`.
- If str, it specifies the source feature map of the extra convs.
- Only the following options are allowed
-
- - 'on_input': Last feat map of neck inputs (i.e. backbone feature).
- - 'on_lateral': Last feature map after lateral convs.
- - 'on_output': The last output feature map after fpn convs.
- extra_convs_on_inputs (bool, deprecated): Whether to apply extra convs
- on the original feature from the backbone. If True,
- it is equivalent to `add_extra_convs='on_input'`. If False, it is
- equivalent to set `add_extra_convs='on_output'`. Default to True.
- relu_before_extra_convs (bool): Whether to apply relu before the extra
- conv. Default: False.
- no_norm_on_lateral (bool): Whether to apply norm on lateral.
- Default: False.
- conv_cfg (dict): Config dict for convolution layer. Default: None.
- norm_cfg (dict): Config dict for normalization layer. Default: None.
- act_cfg (str): Config dict for activation layer in ConvModule.
- Default: None.
- upsample_cfg (dict): Config dict for interpolate layer.
- Default: `dict(mode='nearest')`
-
- Example:
- >>> import torch
- >>> in_channels = [2, 3, 5, 7]
- >>> scales = [340, 170, 84, 43]
- >>> inputs = [torch.rand(1, c, s, s)
- ... for c, s in zip(in_channels, scales)]
- >>> self = FPN(in_channels, 11, len(in_channels)).eval()
- >>> outputs = self.forward(inputs)
- >>> for i in range(len(outputs)):
- ... print(f'outputs[{i}].shape = {outputs[i].shape}')
- outputs[0].shape = torch.Size([1, 11, 340, 340])
- outputs[1].shape = torch.Size([1, 11, 170, 170])
- outputs[2].shape = torch.Size([1, 11, 84, 84])
- outputs[3].shape = torch.Size([1, 11, 43, 43])
- """
-
- def __init__(self,
- in_channels,
- out_channels,
- num_outs,
- start_level=0,
- end_level=-1,
- add_extra_convs=False,
- extra_convs_on_inputs=False,
- relu_before_extra_convs=False,
- no_norm_on_lateral=False,
- conv_cfg=None,
- norm_cfg=None,
- act_cfg=None,
- upsample_cfg=dict(mode='nearest')):
- super(FPN, self).__init__()
- assert isinstance(in_channels, list)
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.num_ins = len(in_channels)
- self.num_outs = num_outs
- self.relu_before_extra_convs = relu_before_extra_convs
- self.no_norm_on_lateral = no_norm_on_lateral
- self.fp16_enabled = False
- self.upsample_cfg = upsample_cfg.copy()
-
- if end_level == -1:
- self.backbone_end_level = self.num_ins
- assert num_outs >= self.num_ins - start_level
- else:
- # if end_level < inputs, no extra level is allowed
- self.backbone_end_level = end_level
- assert end_level <= len(in_channels)
- assert num_outs == end_level - start_level
- self.start_level = start_level
- self.end_level = end_level
- self.add_extra_convs = add_extra_convs
- assert isinstance(add_extra_convs, (str, bool))
- if isinstance(add_extra_convs, str):
- # Extra_convs_source choices: 'on_input', 'on_lateral', 'on_output'
- assert add_extra_convs in ('on_input', 'on_lateral', 'on_output')
- elif add_extra_convs: # True
- if extra_convs_on_inputs:
- # For compatibility with previous release
- # TODO: deprecate `extra_convs_on_inputs`
- self.add_extra_convs = 'on_input'
- else:
- self.add_extra_convs = 'on_output'
-
- self.lateral_convs = nn.ModuleList()
- self.fpn_convs = nn.ModuleList()
-
- for i in range(self.start_level, self.backbone_end_level):
- l_conv = ConvModule(
- in_channels[i],
- out_channels,
- 1,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg if not self.no_norm_on_lateral else None,
- act_cfg=act_cfg,
- inplace=False)
- fpn_conv = ConvModule(
- out_channels,
- out_channels,
- 3,
- padding=1,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg,
- inplace=False)
-
- self.lateral_convs.append(l_conv)
- self.fpn_convs.append(fpn_conv)
-
- # add extra conv layers (e.g., RetinaNet)
- extra_levels = num_outs - self.backbone_end_level + self.start_level
- if self.add_extra_convs and extra_levels >= 1:
- for i in range(extra_levels):
- if i == 0 and self.add_extra_convs == 'on_input':
- in_channels = self.in_channels[self.backbone_end_level - 1]
- else:
- in_channels = out_channels
- extra_fpn_conv = ConvModule(
- in_channels,
- out_channels,
- 3,
- stride=2,
- padding=1,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg,
- inplace=False)
- self.fpn_convs.append(extra_fpn_conv)
-
- # default init_weights for conv(msra) and norm in ConvModule
- def init_weights(self):
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- xavier_init(m, distribution='uniform')
-
- def forward(self, inputs):
- assert len(inputs) == len(self.in_channels)
-
- # build laterals
- laterals = [
- lateral_conv(inputs[i + self.start_level])
- for i, lateral_conv in enumerate(self.lateral_convs)
- ]
-
- # build top-down path
- used_backbone_levels = len(laterals)
- for i in range(used_backbone_levels - 1, 0, -1):
- # In some cases, fixing `scale factor` (e.g. 2) is preferred, but
- # it cannot co-exist with `size` in `F.interpolate`.
- if 'scale_factor' in self.upsample_cfg:
- laterals[i - 1] += F.interpolate(laterals[i],
- **self.upsample_cfg)
- else:
- prev_shape = laterals[i - 1].shape[2:]
- laterals[i - 1] += F.interpolate(
- laterals[i], size=prev_shape, **self.upsample_cfg)
-
- # build outputs
- # part 1: from original levels
- outs = [
- self.fpn_convs[i](laterals[i]) for i in range(used_backbone_levels)
- ]
- # part 2: add extra levels
- if self.num_outs > len(outs):
- # use max pool to get more levels on top of outputs
- # (e.g., Faster R-CNN, Mask R-CNN)
- if not self.add_extra_convs:
- for i in range(self.num_outs - used_backbone_levels):
- outs.append(F.max_pool2d(outs[-1], 1, stride=2))
- # add conv layers on top of original feature maps (RetinaNet)
- else:
- if self.add_extra_convs == 'on_input':
- extra_source = inputs[self.backbone_end_level - 1]
- elif self.add_extra_convs == 'on_lateral':
- extra_source = laterals[-1]
- elif self.add_extra_convs == 'on_output':
- extra_source = outs[-1]
- else:
- raise NotImplementedError
- outs.append(self.fpn_convs[used_backbone_levels](extra_source))
- for i in range(used_backbone_levels + 1, self.num_outs):
- if self.relu_before_extra_convs:
- outs.append(self.fpn_convs[i](F.relu(outs[-1])))
- else:
- outs.append(self.fpn_convs[i](outs[-1]))
- return tuple(outs)
diff --git a/spaces/koajoel/PolyFormer/fairseq/examples/backtranslation/sacrebleu.sh b/spaces/koajoel/PolyFormer/fairseq/examples/backtranslation/sacrebleu.sh
deleted file mode 100644
index a70da23f48e2699297799611412783d4560dc45a..0000000000000000000000000000000000000000
--- a/spaces/koajoel/PolyFormer/fairseq/examples/backtranslation/sacrebleu.sh
+++ /dev/null
@@ -1,37 +0,0 @@
-#!/bin/bash
-
-if [ $# -ne 5 ]; then
- echo "usage: $0 [dataset=wmt14/full] [langpair=en-de] [databin] [bpecode] [model]"
- exit
-fi
-
-
-DATASET=$1
-LANGPAIR=$2
-DATABIN=$3
-BPECODE=$4
-MODEL=$5
-
-SRCLANG=$(echo $LANGPAIR | cut -d '-' -f 1)
-TGTLANG=$(echo $LANGPAIR | cut -d '-' -f 2)
-
-
-BPEROOT=examples/backtranslation/subword-nmt/subword_nmt
-if [ ! -e $BPEROOT ]; then
- BPEROOT=subword-nmt/subword_nmt
- if [ ! -e $BPEROOT ]; then
- echo 'Cloning Subword NMT repository (for BPE pre-processing)...'
- git clone https://github.com/rsennrich/subword-nmt.git
- fi
-fi
-
-
-sacrebleu -t $DATASET -l $LANGPAIR --echo src \
-| sacremoses tokenize -a -l $SRCLANG -q \
-| python $BPEROOT/apply_bpe.py -c $BPECODE \
-| fairseq-interactive $DATABIN --path $MODEL \
- -s $SRCLANG -t $TGTLANG \
- --beam 5 --remove-bpe --buffer-size 1024 --max-tokens 8000 \
-| grep ^H- | cut -f 3- \
-| sacremoses detokenize -l $TGTLANG -q \
-| sacrebleu -t $DATASET -l $LANGPAIR
diff --git a/spaces/kowsik/MygenAIApps/app.py b/spaces/kowsik/MygenAIApps/app.py
deleted file mode 100644
index b81bf231e223eb1c9eb3da9d54ed240adfac4297..0000000000000000000000000000000000000000
--- a/spaces/kowsik/MygenAIApps/app.py
+++ /dev/null
@@ -1,34 +0,0 @@
-import os
-import gradio as gr
-from langchain.chat_models import ChatOpenAI
-from langchain import LLMChain, PromptTemplate
-from langchain.memory import ConversationBufferMemory
-
-OPENAI_API_KEY=os.getenv('OPENAI_API_KEY')
-
-template = """You are an enthusiastic high school student passionate about science and exploration. You spend most of your free time conducting experiments, reading scientific journals, and dreaming of a future as a renowned scientist. Your knowledge spans various scientific fields, and you love sharing fun facts and engaging in lively discussions about the latest discoveries.
-{chat_history}
-User: {user_message}
-Chatbot:"""
-
-prompt = PromptTemplate(
- input_variables=["chat_history", "user_message"], template=template
-)
-
-memory = ConversationBufferMemory(memory_key="chat_history")
-
-llm_chain = LLMChain(
- llm=ChatOpenAI(temperature=0.5, model_name="gpt-3.5-turbo"),
- prompt=prompt,
- verbose=True,
- memory=memory,
-)
-
-def get_text_response(user_message,history):
- response = llm_chain.predict(user_message = user_message)
- return response
-
-demo = gr.ChatInterface(get_text_response)
-
-if __name__ == "__main__":
- demo.launch() #To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`.
diff --git a/spaces/kukuhtw/AutoGPT/autogpt/commands/image_gen.py b/spaces/kukuhtw/AutoGPT/autogpt/commands/image_gen.py
deleted file mode 100644
index 0809fcdd3e38b52a2ce09ca1444f2574813d40f9..0000000000000000000000000000000000000000
--- a/spaces/kukuhtw/AutoGPT/autogpt/commands/image_gen.py
+++ /dev/null
@@ -1,163 +0,0 @@
-""" Image Generation Module for AutoGPT."""
-import io
-import os.path
-import uuid
-from base64 import b64decode
-
-import openai
-import requests
-from PIL import Image
-
-from autogpt.config import Config
-from autogpt.workspace import path_in_workspace
-
-CFG = Config()
-
-
-def generate_image(prompt: str, size: int = 256) -> str:
- """Generate an image from a prompt.
-
- Args:
- prompt (str): The prompt to use
- size (int, optional): The size of the image. Defaults to 256. (Not supported by HuggingFace)
-
- Returns:
- str: The filename of the image
- """
- filename = f"{str(uuid.uuid4())}.jpg"
-
- # DALL-E
- if CFG.image_provider == "dalle":
- return generate_image_with_dalle(prompt, filename, size)
- # HuggingFace
- elif CFG.image_provider == "huggingface":
- return generate_image_with_hf(prompt, filename)
- # SD WebUI
- elif CFG.image_provider == "sdwebui":
- return generate_image_with_sd_webui(prompt, filename, size)
- return "No Image Provider Set"
-
-
-def generate_image_with_hf(prompt: str, filename: str) -> str:
- """Generate an image with HuggingFace's API.
-
- Args:
- prompt (str): The prompt to use
- filename (str): The filename to save the image to
-
- Returns:
- str: The filename of the image
- """
- API_URL = (
- f"https://api-inference.huggingface.co/models/{CFG.huggingface_image_model}"
- )
- if CFG.huggingface_api_token is None:
- raise ValueError(
- "You need to set your Hugging Face API token in the config file."
- )
- headers = {
- "Authorization": f"Bearer {CFG.huggingface_api_token}",
- "X-Use-Cache": "false",
- }
-
- response = requests.post(
- API_URL,
- headers=headers,
- json={
- "inputs": prompt,
- },
- )
-
- image = Image.open(io.BytesIO(response.content))
- print(f"Image Generated for prompt:{prompt}")
-
- image.save(path_in_workspace(filename))
-
- return f"Saved to disk:{filename}"
-
-
-def generate_image_with_dalle(prompt: str, filename: str, size: int = 256) -> str:
- """Generate an image with DALL-E.
-
- Args:
- prompt (str): The prompt to use
- filename (str): The filename to save the image to
- size (int, optional): The size of the image; snapped to 256, 512 or 1024 below. Defaults to 256.
-
- Returns:
- str: The filename of the image
- """
- openai.api_key = CFG.openai_api_key
-
- # Check for supported image sizes
- if size not in [256, 512, 1024]:
- closest = min([256, 512, 1024], key=lambda x: abs(x - size))
- print(
- f"DALL-E only supports image sizes of 256x256, 512x512, or 1024x1024. Setting to {closest}, was {size}."
- )
- size = closest
-
- response = openai.Image.create(
- prompt=prompt,
- n=1,
- size=f"{size}x{size}",
- response_format="b64_json",
- )
-
- print(f"Image Generated for prompt:{prompt}")
-
- image_data = b64decode(response["data"][0]["b64_json"])
-
- with open(path_in_workspace(filename), mode="wb") as png:
- png.write(image_data)
-
- return f"Saved to disk:{filename}"
-
-
-def generate_image_with_sd_webui(
- prompt: str,
- filename: str,
- size: int = 512,
- negative_prompt: str = "",
- extra: dict = {},
-) -> str:
- """Generate an image with Stable Diffusion webui.
- Args:
- prompt (str): The prompt to use
- filename (str): The filename to save the image to
- size (int, optional): The size of the image. Defaults to 512.
- negative_prompt (str, optional): The negative prompt to use. Defaults to "".
- extra (dict, optional): Extra parameters to pass to the API. Defaults to {}.
- Returns:
- str: The filename of the image
- """
- # Create a session and set the basic auth if needed
- s = requests.Session()
- if CFG.sd_webui_auth:
- username, password = CFG.sd_webui_auth.split(":")
- s.auth = (username, password or "")
-
- # Generate the images
- response = s.post(  # use the session so the optional basic-auth credentials are applied
- f"{CFG.sd_webui_url}/sdapi/v1/txt2img",
- json={
- "prompt": prompt,
- "negative_prompt": negative_prompt,
- "sampler_index": "DDIM",
- "steps": 20,
- "cfg_scale": 7.0,
- "width": size,
- "height": size,
- "n_iter": 1,
- **extra,
- },
- )
-
- print(f"Image Generated for prompt:{prompt}")
-
- # Save the image to disk
- response = response.json()
- b64 = b64decode(response["images"][0].split(",", 1)[0])
- image = Image.open(io.BytesIO(b64))
- image.save(path_in_workspace(filename))
-
- return f"Saved to disk:{filename}"
diff --git a/spaces/kukuhtw/VToonify/vtoonify/model/encoder/encoders/model_irse.py b/spaces/kukuhtw/VToonify/vtoonify/model/encoder/encoders/model_irse.py
deleted file mode 100644
index 6698d9705321dd4a27681ea15204e9ffaa51f62a..0000000000000000000000000000000000000000
--- a/spaces/kukuhtw/VToonify/vtoonify/model/encoder/encoders/model_irse.py
+++ /dev/null
@@ -1,84 +0,0 @@
-from torch.nn import Linear, Conv2d, BatchNorm1d, BatchNorm2d, PReLU, Dropout, Sequential, Module
-from model.encoder.encoders.helpers import get_blocks, Flatten, bottleneck_IR, bottleneck_IR_SE, l2_norm
-
-"""
-Modified Backbone implementation from [TreB1eN](https://github.com/TreB1eN/InsightFace_Pytorch)
-"""
-
-
-class Backbone(Module):
- def __init__(self, input_size, num_layers, mode='ir', drop_ratio=0.4, affine=True):
- super(Backbone, self).__init__()
- assert input_size in [112, 224], "input_size should be 112 or 224"
- assert num_layers in [50, 100, 152], "num_layers should be 50, 100 or 152"
- assert mode in ['ir', 'ir_se'], "mode should be ir or ir_se"
- blocks = get_blocks(num_layers)
- if mode == 'ir':
- unit_module = bottleneck_IR
- elif mode == 'ir_se':
- unit_module = bottleneck_IR_SE
- self.input_layer = Sequential(Conv2d(3, 64, (3, 3), 1, 1, bias=False),
- BatchNorm2d(64),
- PReLU(64))
- if input_size == 112:
- self.output_layer = Sequential(BatchNorm2d(512),
- Dropout(drop_ratio),
- Flatten(),
- Linear(512 * 7 * 7, 512),
- BatchNorm1d(512, affine=affine))
- else:
- self.output_layer = Sequential(BatchNorm2d(512),
- Dropout(drop_ratio),
- Flatten(),
- Linear(512 * 14 * 14, 512),
- BatchNorm1d(512, affine=affine))
-
- modules = []
- for block in blocks:
- for bottleneck in block:
- modules.append(unit_module(bottleneck.in_channel,
- bottleneck.depth,
- bottleneck.stride))
- self.body = Sequential(*modules)
-
- def forward(self, x):
- x = self.input_layer(x)
- x = self.body(x)
- x = self.output_layer(x)
- return l2_norm(x)
-
-
-def IR_50(input_size):
- """Constructs a ir-50 model."""
- model = Backbone(input_size, num_layers=50, mode='ir', drop_ratio=0.4, affine=False)
- return model
-
-
-def IR_101(input_size):
- """Constructs a ir-101 model."""
- model = Backbone(input_size, num_layers=100, mode='ir', drop_ratio=0.4, affine=False)
- return model
-
-
-def IR_152(input_size):
- """Constructs a ir-152 model."""
- model = Backbone(input_size, num_layers=152, mode='ir', drop_ratio=0.4, affine=False)
- return model
-
-
-def IR_SE_50(input_size):
- """Constructs a ir_se-50 model."""
- model = Backbone(input_size, num_layers=50, mode='ir_se', drop_ratio=0.4, affine=False)
- return model
-
-
-def IR_SE_101(input_size):
- """Constructs a ir_se-101 model."""
- model = Backbone(input_size, num_layers=100, mode='ir_se', drop_ratio=0.4, affine=False)
- return model
-
-
-def IR_SE_152(input_size):
- """Constructs a ir_se-152 model."""
- model = Backbone(input_size, num_layers=152, mode='ir_se', drop_ratio=0.4, affine=False)
- return model
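A small sketch of running the IR-SE backbone above on a batch of (here random) 112x112 face crops:

import torch

model = IR_SE_50(input_size=112).eval()
faces = torch.randn(2, 3, 112, 112)      # stand-ins for aligned RGB face crops
with torch.no_grad():
    embeddings = model(faces)            # L2-normalised identity embeddings, shape [2, 512]
print(embeddings.norm(dim=1))            # ~1.0 for every row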
diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/PIL/ExifTags.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/PIL/ExifTags.py
deleted file mode 100644
index 2347c6d4c2768b6c946a386bba9f1325ed91193f..0000000000000000000000000000000000000000
--- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/PIL/ExifTags.py
+++ /dev/null
@@ -1,380 +0,0 @@
-#
-# The Python Imaging Library.
-# $Id$
-#
-# EXIF tags
-#
-# Copyright (c) 2003 by Secret Labs AB
-#
-# See the README file for information on usage and redistribution.
-#
-
-"""
-This module provides constants and clear-text names for various
-well-known EXIF tags.
-"""
-
-from enum import IntEnum
-
-
-class Base(IntEnum):
- # possibly incomplete
- InteropIndex = 0x0001
- ProcessingSoftware = 0x000B
- NewSubfileType = 0x00FE
- SubfileType = 0x00FF
- ImageWidth = 0x0100
- ImageLength = 0x0101
- BitsPerSample = 0x0102
- Compression = 0x0103
- PhotometricInterpretation = 0x0106
- Thresholding = 0x0107
- CellWidth = 0x0108
- CellLength = 0x0109
- FillOrder = 0x010A
- DocumentName = 0x010D
- ImageDescription = 0x010E
- Make = 0x010F
- Model = 0x0110
- StripOffsets = 0x0111
- Orientation = 0x0112
- SamplesPerPixel = 0x0115
- RowsPerStrip = 0x0116
- StripByteCounts = 0x0117
- MinSampleValue = 0x0118
- MaxSampleValue = 0x0119
- XResolution = 0x011A
- YResolution = 0x011B
- PlanarConfiguration = 0x011C
- PageName = 0x011D
- FreeOffsets = 0x0120
- FreeByteCounts = 0x0121
- GrayResponseUnit = 0x0122
- GrayResponseCurve = 0x0123
- T4Options = 0x0124
- T6Options = 0x0125
- ResolutionUnit = 0x0128
- PageNumber = 0x0129
- TransferFunction = 0x012D
- Software = 0x0131
- DateTime = 0x0132
- Artist = 0x013B
- HostComputer = 0x013C
- Predictor = 0x013D
- WhitePoint = 0x013E
- PrimaryChromaticities = 0x013F
- ColorMap = 0x0140
- HalftoneHints = 0x0141
- TileWidth = 0x0142
- TileLength = 0x0143
- TileOffsets = 0x0144
- TileByteCounts = 0x0145
- SubIFDs = 0x014A
- InkSet = 0x014C
- InkNames = 0x014D
- NumberOfInks = 0x014E
- DotRange = 0x0150
- TargetPrinter = 0x0151
- ExtraSamples = 0x0152
- SampleFormat = 0x0153
- SMinSampleValue = 0x0154
- SMaxSampleValue = 0x0155
- TransferRange = 0x0156
- ClipPath = 0x0157
- XClipPathUnits = 0x0158
- YClipPathUnits = 0x0159
- Indexed = 0x015A
- JPEGTables = 0x015B
- OPIProxy = 0x015F
- JPEGProc = 0x0200
- JpegIFOffset = 0x0201
- JpegIFByteCount = 0x0202
- JpegRestartInterval = 0x0203
- JpegLosslessPredictors = 0x0205
- JpegPointTransforms = 0x0206
- JpegQTables = 0x0207
- JpegDCTables = 0x0208
- JpegACTables = 0x0209
- YCbCrCoefficients = 0x0211
- YCbCrSubSampling = 0x0212
- YCbCrPositioning = 0x0213
- ReferenceBlackWhite = 0x0214
- XMLPacket = 0x02BC
- RelatedImageFileFormat = 0x1000
- RelatedImageWidth = 0x1001
- RelatedImageLength = 0x1002
- Rating = 0x4746
- RatingPercent = 0x4749
- ImageID = 0x800D
- CFARepeatPatternDim = 0x828D
- BatteryLevel = 0x828F
- Copyright = 0x8298
- ExposureTime = 0x829A
- FNumber = 0x829D
- IPTCNAA = 0x83BB
- ImageResources = 0x8649
- ExifOffset = 0x8769
- InterColorProfile = 0x8773
- ExposureProgram = 0x8822
- SpectralSensitivity = 0x8824
- GPSInfo = 0x8825
- ISOSpeedRatings = 0x8827
- OECF = 0x8828
- Interlace = 0x8829
- TimeZoneOffset = 0x882A
- SelfTimerMode = 0x882B
- SensitivityType = 0x8830
- StandardOutputSensitivity = 0x8831
- RecommendedExposureIndex = 0x8832
- ISOSpeed = 0x8833
- ISOSpeedLatitudeyyy = 0x8834
- ISOSpeedLatitudezzz = 0x8835
- ExifVersion = 0x9000
- DateTimeOriginal = 0x9003
- DateTimeDigitized = 0x9004
- OffsetTime = 0x9010
- OffsetTimeOriginal = 0x9011
- OffsetTimeDigitized = 0x9012
- ComponentsConfiguration = 0x9101
- CompressedBitsPerPixel = 0x9102
- ShutterSpeedValue = 0x9201
- ApertureValue = 0x9202
- BrightnessValue = 0x9203
- ExposureBiasValue = 0x9204
- MaxApertureValue = 0x9205
- SubjectDistance = 0x9206
- MeteringMode = 0x9207
- LightSource = 0x9208
- Flash = 0x9209
- FocalLength = 0x920A
- Noise = 0x920D
- ImageNumber = 0x9211
- SecurityClassification = 0x9212
- ImageHistory = 0x9213
- TIFFEPStandardID = 0x9216
- MakerNote = 0x927C
- UserComment = 0x9286
- SubsecTime = 0x9290
- SubsecTimeOriginal = 0x9291
- SubsecTimeDigitized = 0x9292
- AmbientTemperature = 0x9400
- Humidity = 0x9401
- Pressure = 0x9402
- WaterDepth = 0x9403
- Acceleration = 0x9404
- CameraElevationAngle = 0x9405
- XPTitle = 0x9C9B
- XPComment = 0x9C9C
- XPAuthor = 0x9C9D
- XPKeywords = 0x9C9E
- XPSubject = 0x9C9F
- FlashPixVersion = 0xA000
- ColorSpace = 0xA001
- ExifImageWidth = 0xA002
- ExifImageHeight = 0xA003
- RelatedSoundFile = 0xA004
- ExifInteroperabilityOffset = 0xA005
- FlashEnergy = 0xA20B
- SpatialFrequencyResponse = 0xA20C
- FocalPlaneXResolution = 0xA20E
- FocalPlaneYResolution = 0xA20F
- FocalPlaneResolutionUnit = 0xA210
- SubjectLocation = 0xA214
- ExposureIndex = 0xA215
- SensingMethod = 0xA217
- FileSource = 0xA300
- SceneType = 0xA301
- CFAPattern = 0xA302
- CustomRendered = 0xA401
- ExposureMode = 0xA402
- WhiteBalance = 0xA403
- DigitalZoomRatio = 0xA404
- FocalLengthIn35mmFilm = 0xA405
- SceneCaptureType = 0xA406
- GainControl = 0xA407
- Contrast = 0xA408
- Saturation = 0xA409
- Sharpness = 0xA40A
- DeviceSettingDescription = 0xA40B
- SubjectDistanceRange = 0xA40C
- ImageUniqueID = 0xA420
- CameraOwnerName = 0xA430
- BodySerialNumber = 0xA431
- LensSpecification = 0xA432
- LensMake = 0xA433
- LensModel = 0xA434
- LensSerialNumber = 0xA435
- CompositeImage = 0xA460
- CompositeImageCount = 0xA461
- CompositeImageExposureTimes = 0xA462
- Gamma = 0xA500
- PrintImageMatching = 0xC4A5
- DNGVersion = 0xC612
- DNGBackwardVersion = 0xC613
- UniqueCameraModel = 0xC614
- LocalizedCameraModel = 0xC615
- CFAPlaneColor = 0xC616
- CFALayout = 0xC617
- LinearizationTable = 0xC618
- BlackLevelRepeatDim = 0xC619
- BlackLevel = 0xC61A
- BlackLevelDeltaH = 0xC61B
- BlackLevelDeltaV = 0xC61C
- WhiteLevel = 0xC61D
- DefaultScale = 0xC61E
- DefaultCropOrigin = 0xC61F
- DefaultCropSize = 0xC620
- ColorMatrix1 = 0xC621
- ColorMatrix2 = 0xC622
- CameraCalibration1 = 0xC623
- CameraCalibration2 = 0xC624
- ReductionMatrix1 = 0xC625
- ReductionMatrix2 = 0xC626
- AnalogBalance = 0xC627
- AsShotNeutral = 0xC628
- AsShotWhiteXY = 0xC629
- BaselineExposure = 0xC62A
- BaselineNoise = 0xC62B
- BaselineSharpness = 0xC62C
- BayerGreenSplit = 0xC62D
- LinearResponseLimit = 0xC62E
- CameraSerialNumber = 0xC62F
- LensInfo = 0xC630
- ChromaBlurRadius = 0xC631
- AntiAliasStrength = 0xC632
- ShadowScale = 0xC633
- DNGPrivateData = 0xC634
- MakerNoteSafety = 0xC635
- CalibrationIlluminant1 = 0xC65A
- CalibrationIlluminant2 = 0xC65B
- BestQualityScale = 0xC65C
- RawDataUniqueID = 0xC65D
- OriginalRawFileName = 0xC68B
- OriginalRawFileData = 0xC68C
- ActiveArea = 0xC68D
- MaskedAreas = 0xC68E
- AsShotICCProfile = 0xC68F
- AsShotPreProfileMatrix = 0xC690
- CurrentICCProfile = 0xC691
- CurrentPreProfileMatrix = 0xC692
- ColorimetricReference = 0xC6BF
- CameraCalibrationSignature = 0xC6F3
- ProfileCalibrationSignature = 0xC6F4
- AsShotProfileName = 0xC6F6
- NoiseReductionApplied = 0xC6F7
- ProfileName = 0xC6F8
- ProfileHueSatMapDims = 0xC6F9
- ProfileHueSatMapData1 = 0xC6FA
- ProfileHueSatMapData2 = 0xC6FB
- ProfileToneCurve = 0xC6FC
- ProfileEmbedPolicy = 0xC6FD
- ProfileCopyright = 0xC6FE
- ForwardMatrix1 = 0xC714
- ForwardMatrix2 = 0xC715
- PreviewApplicationName = 0xC716
- PreviewApplicationVersion = 0xC717
- PreviewSettingsName = 0xC718
- PreviewSettingsDigest = 0xC719
- PreviewColorSpace = 0xC71A
- PreviewDateTime = 0xC71B
- RawImageDigest = 0xC71C
- OriginalRawFileDigest = 0xC71D
- SubTileBlockSize = 0xC71E
- RowInterleaveFactor = 0xC71F
- ProfileLookTableDims = 0xC725
- ProfileLookTableData = 0xC726
- OpcodeList1 = 0xC740
- OpcodeList2 = 0xC741
- OpcodeList3 = 0xC74E
- NoiseProfile = 0xC761
-
-
-"""Maps EXIF tags to tag names."""
-TAGS = {
- **{i.value: i.name for i in Base},
- 0x920C: "SpatialFrequencyResponse",
- 0x9214: "SubjectLocation",
- 0x9215: "ExposureIndex",
- 0x828E: "CFAPattern",
- 0x920B: "FlashEnergy",
- 0x9216: "TIFF/EPStandardID",
-}
-
-
-class GPS(IntEnum):
- GPSVersionID = 0
- GPSLatitudeRef = 1
- GPSLatitude = 2
- GPSLongitudeRef = 3
- GPSLongitude = 4
- GPSAltitudeRef = 5
- GPSAltitude = 6
- GPSTimeStamp = 7
- GPSSatellites = 8
- GPSStatus = 9
- GPSMeasureMode = 10
- GPSDOP = 11
- GPSSpeedRef = 12
- GPSSpeed = 13
- GPSTrackRef = 14
- GPSTrack = 15
- GPSImgDirectionRef = 16
- GPSImgDirection = 17
- GPSMapDatum = 18
- GPSDestLatitudeRef = 19
- GPSDestLatitude = 20
- GPSDestLongitudeRef = 21
- GPSDestLongitude = 22
- GPSDestBearingRef = 23
- GPSDestBearing = 24
- GPSDestDistanceRef = 25
- GPSDestDistance = 26
- GPSProcessingMethod = 27
- GPSAreaInformation = 28
- GPSDateStamp = 29
- GPSDifferential = 30
- GPSHPositioningError = 31
-
-
-"""Maps EXIF GPS tags to tag names."""
-GPSTAGS = {i.value: i.name for i in GPS}
-
-
-class Interop(IntEnum):
- InteropIndex = 1
- InteropVersion = 2
- RelatedImageFileFormat = 4096
- RelatedImageWidth = 4097
- RleatedImageHeight = 4098
-
-
-class IFD(IntEnum):
- Exif = 34665
- GPSInfo = 34853
- Makernote = 37500
- Interop = 40965
- IFD1 = -1
-
-
-class LightSource(IntEnum):
- Unknown = 0
- Daylight = 1
- Fluorescent = 2
- Tungsten = 3
- Flash = 4
- Fine = 9
- Cloudy = 10
- Shade = 11
- DaylightFluorescent = 12
- DayWhiteFluorescent = 13
- CoolWhiteFluorescent = 14
- WhiteFluorescent = 15
- StandardLightA = 17
- StandardLightB = 18
- StandardLightC = 19
- D55 = 20
- D65 = 21
- D75 = 22
- D50 = 23
- ISO = 24
- Other = 255
diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/anyio/_core/_testing.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/anyio/_core/_testing.py
deleted file mode 100644
index c8191b3866f7104d2d02d32da9826c68ca17ac95..0000000000000000000000000000000000000000
--- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/anyio/_core/_testing.py
+++ /dev/null
@@ -1,82 +0,0 @@
-from __future__ import annotations
-
-from typing import Any, Awaitable, Generator
-
-from ._compat import DeprecatedAwaitableList, _warn_deprecation
-from ._eventloop import get_asynclib
-
-
-class TaskInfo:
- """
- Represents an asynchronous task.
-
- :ivar int id: the unique identifier of the task
- :ivar parent_id: the identifier of the parent task, if any
- :vartype parent_id: Optional[int]
- :ivar str name: the description of the task (if any)
- :ivar ~collections.abc.Coroutine coro: the coroutine object of the task
- """
-
- __slots__ = "_name", "id", "parent_id", "name", "coro"
-
- def __init__(
- self,
- id: int,
- parent_id: int | None,
- name: str | None,
- coro: Generator[Any, Any, Any] | Awaitable[Any],
- ):
- func = get_current_task
- self._name = f"{func.__module__}.{func.__qualname__}"
- self.id: int = id
- self.parent_id: int | None = parent_id
- self.name: str | None = name
- self.coro: Generator[Any, Any, Any] | Awaitable[Any] = coro
-
- def __eq__(self, other: object) -> bool:
- if isinstance(other, TaskInfo):
- return self.id == other.id
-
- return NotImplemented
-
- def __hash__(self) -> int:
- return hash(self.id)
-
- def __repr__(self) -> str:
- return f"{self.__class__.__name__}(id={self.id!r}, name={self.name!r})"
-
- def __await__(self) -> Generator[None, None, TaskInfo]:
- _warn_deprecation(self)
- if False:
- yield
-
- return self
-
- def _unwrap(self) -> TaskInfo:
- return self
-
-
-def get_current_task() -> TaskInfo:
- """
- Return the current task.
-
- :return: a representation of the current task
-
- """
- return get_asynclib().get_current_task()
-
-
-def get_running_tasks() -> DeprecatedAwaitableList[TaskInfo]:
- """
- Return a list of running tasks in the current event loop.
-
- :return: a list of task info objects
-
- """
- tasks = get_asynclib().get_running_tasks()
- return DeprecatedAwaitableList(tasks, func=get_running_tasks)
-
-
-async def wait_all_tasks_blocked() -> None:
- """Wait until all other tasks are waiting for something."""
- await get_asynclib().wait_all_tasks_blocked()
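The helpers in the deleted `_testing.py` back anyio's public task-introspection API. A minimal usage sketch, assuming anyio 3.x is installed and using the re-exported top-level names, might look like this:

```python
# Sketch of the public API that wraps the helpers above (assumes anyio >= 3.x).
import anyio


async def main() -> None:
    me = anyio.get_current_task()        # TaskInfo for the running task
    print(me.id, me.name)

    for task in anyio.get_running_tasks():
        print("running:", task)

    # Handy in tests: resume once every other task is blocked on something.
    await anyio.wait_all_tasks_blocked()


anyio.run(main)
```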
diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/_s_b_i_x.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/_s_b_i_x.py
deleted file mode 100644
index 29b82c3e43e8bd199a841c577774885d92499aba..0000000000000000000000000000000000000000
--- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/_s_b_i_x.py
+++ /dev/null
@@ -1,119 +0,0 @@
-from fontTools.misc import sstruct
-from fontTools.misc.textTools import safeEval, num2binary, binary2num
-from . import DefaultTable
-from .sbixStrike import Strike
-
-
-sbixHeaderFormat = """
- >
- version: H # Version number (set to 1)
- flags: H # The only two bits used in the flags field are bits 0
- # and 1. For historical reasons, bit 0 must always be 1.
- # Bit 1 is a sbixDrawOutlines flag and is interpreted as
- # follows:
- # 0: Draw only 'sbix' bitmaps
- # 1: Draw both 'sbix' bitmaps and outlines, in that
- # order
- numStrikes: L # Number of bitmap strikes to follow
-"""
-sbixHeaderFormatSize = sstruct.calcsize(sbixHeaderFormat)
-
-
-sbixStrikeOffsetFormat = """
- >
-    strikeOffset:   L  # Offset from beginning of table to data for the
- # individual strike
-"""
-sbixStrikeOffsetFormatSize = sstruct.calcsize(sbixStrikeOffsetFormat)
-
-
-class table__s_b_i_x(DefaultTable.DefaultTable):
- def __init__(self, tag=None):
- DefaultTable.DefaultTable.__init__(self, tag)
- self.version = 1
- self.flags = 1
- self.numStrikes = 0
- self.strikes = {}
- self.strikeOffsets = []
-
- def decompile(self, data, ttFont):
- # read table header
- sstruct.unpack(sbixHeaderFormat, data[:sbixHeaderFormatSize], self)
- # collect offsets to individual strikes in self.strikeOffsets
- for i in range(self.numStrikes):
- current_offset = sbixHeaderFormatSize + i * sbixStrikeOffsetFormatSize
- offset_entry = sbixStrikeOffset()
- sstruct.unpack(
- sbixStrikeOffsetFormat,
- data[current_offset : current_offset + sbixStrikeOffsetFormatSize],
- offset_entry,
- )
- self.strikeOffsets.append(offset_entry.strikeOffset)
-
- # decompile Strikes
- for i in range(self.numStrikes - 1, -1, -1):
- current_strike = Strike(rawdata=data[self.strikeOffsets[i] :])
- data = data[: self.strikeOffsets[i]]
- current_strike.decompile(ttFont)
- # print " Strike length: %xh" % len(bitmapSetData)
- # print "Number of Glyph entries:", len(current_strike.glyphs)
- if current_strike.ppem in self.strikes:
- from fontTools import ttLib
-
- raise ttLib.TTLibError("Pixel 'ppem' must be unique for each Strike")
- self.strikes[current_strike.ppem] = current_strike
-
- # after the glyph data records have been extracted, we don't need the offsets anymore
- del self.strikeOffsets
- del self.numStrikes
-
- def compile(self, ttFont):
- sbixData = b""
- self.numStrikes = len(self.strikes)
- sbixHeader = sstruct.pack(sbixHeaderFormat, self)
-
- # calculate offset to start of first strike
- setOffset = sbixHeaderFormatSize + sbixStrikeOffsetFormatSize * self.numStrikes
-
- for si in sorted(self.strikes.keys()):
- current_strike = self.strikes[si]
- current_strike.compile(ttFont)
- # append offset to this strike to table header
- current_strike.strikeOffset = setOffset
- sbixHeader += sstruct.pack(sbixStrikeOffsetFormat, current_strike)
- setOffset += len(current_strike.data)
- sbixData += current_strike.data
-
- return sbixHeader + sbixData
-
- def toXML(self, xmlWriter, ttFont):
- xmlWriter.simpletag("version", value=self.version)
- xmlWriter.newline()
- xmlWriter.simpletag("flags", value=num2binary(self.flags, 16))
- xmlWriter.newline()
- for i in sorted(self.strikes.keys()):
- self.strikes[i].toXML(xmlWriter, ttFont)
-
- def fromXML(self, name, attrs, content, ttFont):
- if name == "version":
- setattr(self, name, safeEval(attrs["value"]))
- elif name == "flags":
- setattr(self, name, binary2num(attrs["value"]))
- elif name == "strike":
- current_strike = Strike()
- for element in content:
- if isinstance(element, tuple):
- name, attrs, content = element
- current_strike.fromXML(name, attrs, content, ttFont)
- self.strikes[current_strike.ppem] = current_strike
- else:
- from fontTools import ttLib
-
- raise ttLib.TTLibError("can't handle '%s' element" % name)
-
-
-# Helper classes
-
-
-class sbixStrikeOffset(object):
- pass
diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/routes.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/routes.py
deleted file mode 100644
index 03c0d878cec47fd2f0977b98a6532bd1c0508955..0000000000000000000000000000000000000000
--- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/routes.py
+++ /dev/null
@@ -1,812 +0,0 @@
-"""Implements a FastAPI server to run the gradio interface. Note that some types in this
-module use the Optional/Union notation so that they work correctly with pydantic."""
-
-from __future__ import annotations
-
-import asyncio
-import inspect
-import json
-import mimetypes
-import os
-import posixpath
-import secrets
-import tempfile
-import traceback
-from asyncio import TimeoutError as AsyncTimeOutError
-from collections import defaultdict
-from copy import deepcopy
-from pathlib import Path
-from typing import Any, Dict, List, Optional, Type
-from urllib.parse import urlparse
-
-import fastapi
-import httpx
-import markupsafe
-import orjson
-import pkg_resources
-from fastapi import Depends, FastAPI, File, HTTPException, UploadFile, WebSocket, status
-from fastapi.middleware.cors import CORSMiddleware
-from fastapi.responses import (
- FileResponse,
- HTMLResponse,
- JSONResponse,
- PlainTextResponse,
-)
-from fastapi.security import OAuth2PasswordRequestForm
-from fastapi.templating import Jinja2Templates
-from gradio_client.documentation import document, set_documentation_group
-from jinja2.exceptions import TemplateNotFound
-from starlette.background import BackgroundTask
-from starlette.responses import RedirectResponse, StreamingResponse
-from starlette.websockets import WebSocketState
-
-import gradio
-import gradio.ranged_response as ranged_response
-from gradio import utils
-from gradio.context import Context
-from gradio.data_classes import PredictBody, ResetBody
-from gradio.exceptions import Error
-from gradio.helpers import EventData
-from gradio.queueing import Estimation, Event
-from gradio.utils import cancel_tasks, run_coro_in_background, set_task_name
-
-mimetypes.init()
-
-STATIC_TEMPLATE_LIB = pkg_resources.resource_filename("gradio", "templates/")
-STATIC_PATH_LIB = pkg_resources.resource_filename("gradio", "templates/frontend/static")
-BUILD_PATH_LIB = pkg_resources.resource_filename("gradio", "templates/frontend/assets")
-VERSION_FILE = pkg_resources.resource_filename("gradio", "version.txt")
-with open(VERSION_FILE) as version_file:
- VERSION = version_file.read()
-
-
-class ORJSONResponse(JSONResponse):
- media_type = "application/json"
-
- @staticmethod
- def _render(content: Any) -> bytes:
- return orjson.dumps(
- content,
- option=orjson.OPT_SERIALIZE_NUMPY | orjson.OPT_PASSTHROUGH_DATETIME,
- default=str,
- )
-
- def render(self, content: Any) -> bytes:
- return ORJSONResponse._render(content)
-
- @staticmethod
- def _render_str(content: Any) -> str:
- return ORJSONResponse._render(content).decode("utf-8")
-
-
-def toorjson(value):
- return markupsafe.Markup(
- ORJSONResponse._render_str(value)
- .replace("<", "\\u003c")
- .replace(">", "\\u003e")
- .replace("&", "\\u0026")
- .replace("'", "\\u0027")
- )
-
-
-templates = Jinja2Templates(directory=STATIC_TEMPLATE_LIB)
-templates.env.filters["toorjson"] = toorjson
-
-client = httpx.AsyncClient()
-
-###########
-# Auth
-###########
-
-
-class App(FastAPI):
- """
- FastAPI App Wrapper
- """
-
- def __init__(self, **kwargs):
- self.tokens = {}
- self.auth = None
- self.blocks: gradio.Blocks | None = None
- self.state_holder = {}
- self.iterators = defaultdict(dict)
- self.lock = asyncio.Lock()
- self.queue_token = secrets.token_urlsafe(32)
- self.startup_events_triggered = False
- self.uploaded_file_dir = os.environ.get("GRADIO_TEMP_DIR") or str(
- Path(tempfile.gettempdir()) / "gradio"
- )
- # Allow user to manually set `docs_url` and `redoc_url`
- # when instantiating an App; when they're not set, disable docs and redoc.
- kwargs.setdefault("docs_url", None)
- kwargs.setdefault("redoc_url", None)
- super().__init__(**kwargs)
-
- def configure_app(self, blocks: gradio.Blocks) -> None:
- auth = blocks.auth
- if auth is not None:
- if not callable(auth):
- self.auth = {account[0]: account[1] for account in auth}
- else:
- self.auth = auth
- else:
- self.auth = None
-
- self.blocks = blocks
- if hasattr(self.blocks, "_queue"):
- self.blocks._queue.set_access_token(self.queue_token)
- self.cwd = os.getcwd()
- self.favicon_path = blocks.favicon_path
- self.tokens = {}
- self.root_path = blocks.root_path
-
- def get_blocks(self) -> gradio.Blocks:
- if self.blocks is None:
- raise ValueError("No Blocks has been configured for this app.")
- return self.blocks
-
- @staticmethod
- def build_proxy_request(url_path):
- url = httpx.URL(url_path)
- is_hf_url = url.host.endswith(".hf.space")
- headers = {}
- if Context.hf_token is not None and is_hf_url:
- headers["Authorization"] = f"Bearer {Context.hf_token}"
- rp_req = client.build_request("GET", url, headers=headers)
- return rp_req
-
- @staticmethod
- def create_app(
- blocks: gradio.Blocks, app_kwargs: Dict[str, Any] | None = None
- ) -> App:
- app_kwargs = app_kwargs or {}
- app_kwargs.setdefault("default_response_class", ORJSONResponse)
- app = App(**app_kwargs)
- app.configure_app(blocks)
-
- app.add_middleware(
- CORSMiddleware,
- allow_origins=["*"],
- allow_methods=["*"],
- allow_headers=["*"],
- )
-
- @app.get("/user")
- @app.get("/user/")
- def get_current_user(request: fastapi.Request) -> Optional[str]:
- token = request.cookies.get("access-token") or request.cookies.get(
- "access-token-unsecure"
- )
- return app.tokens.get(token)
-
- @app.get("/login_check")
- @app.get("/login_check/")
- def login_check(user: str = Depends(get_current_user)):
- if app.auth is None or user is not None:
- return
- raise HTTPException(
- status_code=status.HTTP_401_UNAUTHORIZED, detail="Not authenticated"
- )
-
- async def ws_login_check(websocket: WebSocket) -> Optional[str]:
- token = websocket.cookies.get("access-token") or websocket.cookies.get(
- "access-token-unsecure"
- )
- return token # token is returned to allow request in queue
-
- @app.get("/token")
- @app.get("/token/")
- def get_token(request: fastapi.Request) -> dict:
- token = request.cookies.get("access-token")
- return {"token": token, "user": app.tokens.get(token)}
-
- @app.get("/app_id")
- @app.get("/app_id/")
- def app_id(request: fastapi.Request) -> dict:
- return {"app_id": app.get_blocks().app_id}
-
- @app.post("/login")
- @app.post("/login/")
- def login(form_data: OAuth2PasswordRequestForm = Depends()):
- username, password = form_data.username, form_data.password
- if app.auth is None:
- return RedirectResponse(url="/", status_code=status.HTTP_302_FOUND)
- if (
- not callable(app.auth)
- and username in app.auth
- and app.auth[username] == password
- ) or (callable(app.auth) and app.auth.__call__(username, password)):
- token = secrets.token_urlsafe(16)
- app.tokens[token] = username
- response = JSONResponse(content={"success": True})
- response.set_cookie(
- key="access-token",
- value=token,
- httponly=True,
- samesite="none",
- secure=True,
- )
- response.set_cookie(
- key="access-token-unsecure", value=token, httponly=True
- )
- return response
- else:
- raise HTTPException(status_code=400, detail="Incorrect credentials.")
-
- ###############
- # Main Routes
- ###############
-
- @app.head("/", response_class=HTMLResponse)
- @app.get("/", response_class=HTMLResponse)
- def main(request: fastapi.Request, user: str = Depends(get_current_user)):
- mimetypes.add_type("application/javascript", ".js")
- blocks = app.get_blocks()
- root_path = request.scope.get("root_path", "")
-
- if app.auth is None or user is not None:
- config = app.get_blocks().config
- config["root"] = root_path
- else:
- config = {
- "auth_required": True,
- "auth_message": blocks.auth_message,
- "is_space": app.get_blocks().is_space,
- "root": root_path,
- }
-
- try:
- template = (
- "frontend/share.html" if blocks.share else "frontend/index.html"
- )
- return templates.TemplateResponse(
- template,
- {"request": request, "config": config},
- )
- except TemplateNotFound as err:
- if blocks.share:
- raise ValueError(
- "Did you install Gradio from source files? Share mode only "
- "works when Gradio is installed through the pip package."
- ) from err
- else:
- raise ValueError(
- "Did you install Gradio from source files? You need to build "
- "the frontend by running /scripts/build_frontend.sh"
- ) from err
-
- @app.get("/info/", dependencies=[Depends(login_check)])
- @app.get("/info", dependencies=[Depends(login_check)])
- def api_info(serialize: bool = True):
- config = app.get_blocks().config
- return gradio.blocks.get_api_info(config, serialize) # type: ignore
-
- @app.get("/config/", dependencies=[Depends(login_check)])
- @app.get("/config", dependencies=[Depends(login_check)])
- def get_config(request: fastapi.Request):
- root_path = request.scope.get("root_path", "")
- config = app.get_blocks().config
- config["root"] = root_path
- return config
-
- @app.get("/static/{path:path}")
- def static_resource(path: str):
- static_file = safe_join(STATIC_PATH_LIB, path)
- return FileResponse(static_file)
-
- @app.get("/assets/{path:path}")
- def build_resource(path: str):
- build_file = safe_join(BUILD_PATH_LIB, path)
- return FileResponse(build_file)
-
- @app.get("/favicon.ico")
- async def favicon():
- blocks = app.get_blocks()
- if blocks.favicon_path is None:
- return static_resource("img/logo.svg")
- else:
- return FileResponse(blocks.favicon_path)
-
- @app.head("/proxy={url_path:path}", dependencies=[Depends(login_check)])
- @app.get("/proxy={url_path:path}", dependencies=[Depends(login_check)])
- async def reverse_proxy(url_path: str):
- # Adapted from: https://github.com/tiangolo/fastapi/issues/1788
- rp_req = app.build_proxy_request(url_path)
- rp_resp = await client.send(rp_req, stream=True)
- return StreamingResponse(
- rp_resp.aiter_raw(),
- status_code=rp_resp.status_code,
- headers=rp_resp.headers, # type: ignore
- background=BackgroundTask(rp_resp.aclose),
- )
-
- @app.head("/file={path_or_url:path}", dependencies=[Depends(login_check)])
- @app.get("/file={path_or_url:path}", dependencies=[Depends(login_check)])
- async def file(path_or_url: str, request: fastapi.Request):
- blocks = app.get_blocks()
- if utils.validate_url(path_or_url):
- return RedirectResponse(
- url=path_or_url, status_code=status.HTTP_302_FOUND
- )
-
- abs_path = utils.abspath(path_or_url)
-
- in_blocklist = any(
- utils.is_in_or_equal(abs_path, blocked_path)
- for blocked_path in blocks.blocked_paths
- )
- is_dotfile = any(part.startswith(".") for part in abs_path.parts)
- is_dir = abs_path.is_dir()
-
- if in_blocklist or is_dotfile or is_dir:
- raise HTTPException(403, f"File not allowed: {path_or_url}.")
- if not abs_path.exists():
- raise HTTPException(404, f"File not found: {path_or_url}.")
-
- in_app_dir = utils.is_in_or_equal(abs_path, app.cwd)
- created_by_app = str(abs_path) in set().union(*blocks.temp_file_sets)
- in_allowlist = any(
- utils.is_in_or_equal(abs_path, allowed_path)
- for allowed_path in blocks.allowed_paths
- )
- was_uploaded = utils.is_in_or_equal(abs_path, app.uploaded_file_dir)
-
- if not (in_app_dir or created_by_app or in_allowlist or was_uploaded):
- raise HTTPException(403, f"File not allowed: {path_or_url}.")
-
- range_val = request.headers.get("Range", "").strip()
- if range_val.startswith("bytes=") and "-" in range_val:
- range_val = range_val[6:]
- start, end = range_val.split("-")
- if start.isnumeric() and end.isnumeric():
- start = int(start)
- end = int(end)
- response = ranged_response.RangedFileResponse(
- abs_path,
- ranged_response.OpenRange(start, end),
- dict(request.headers),
- stat_result=os.stat(abs_path),
- )
- return response
- return FileResponse(abs_path, headers={"Accept-Ranges": "bytes"})
-
- @app.get("/file/{path:path}", dependencies=[Depends(login_check)])
- async def file_deprecated(path: str, request: fastapi.Request):
- return await file(path, request)
-
- @app.post("/reset/")
- @app.post("/reset")
- async def reset_iterator(body: ResetBody):
- if body.session_hash not in app.iterators:
- return {"success": False}
- async with app.lock:
- app.iterators[body.session_hash][body.fn_index] = None
- app.iterators[body.session_hash]["should_reset"].add(body.fn_index)
- return {"success": True}
-
- async def run_predict(
- body: PredictBody,
- request: Request | List[Request],
- fn_index_inferred: int,
- ):
- if hasattr(body, "session_hash"):
- if body.session_hash not in app.state_holder:
- app.state_holder[body.session_hash] = {
- _id: deepcopy(getattr(block, "value", None))
- for _id, block in app.get_blocks().blocks.items()
- if getattr(block, "stateful", False)
- }
- session_state = app.state_holder[body.session_hash]
- iterators = app.iterators[body.session_hash]
- # The should_reset set keeps track of the fn_indices
- # that have been cancelled. When a job is cancelled,
- # the /reset route will mark the jobs as having been reset.
- # That way if the cancel job finishes BEFORE the job being cancelled
- # the job being cancelled will not overwrite the state of the iterator.
- # In all cases, should_reset will be the empty set the next time
- # the fn_index is run.
- app.iterators[body.session_hash]["should_reset"] = set()
- else:
- session_state = {}
- iterators = {}
- event_id = getattr(body, "event_id", None)
- raw_input = body.data
- fn_index = body.fn_index
-
- dependency = app.get_blocks().dependencies[fn_index_inferred]
- target = dependency["targets"][0] if len(dependency["targets"]) else None
- event_data = EventData(
- app.get_blocks().blocks.get(target) if target else None,
- body.event_data,
- )
- batch = dependency["batch"]
- if not (body.batched) and batch:
- raw_input = [raw_input]
- try:
- with utils.MatplotlibBackendMananger():
- output = await app.get_blocks().process_api(
- fn_index=fn_index_inferred,
- inputs=raw_input,
- request=request,
- state=session_state,
- iterators=iterators,
- event_id=event_id,
- event_data=event_data,
- )
- iterator = output.pop("iterator", None)
- if hasattr(body, "session_hash"):
- if fn_index in app.iterators[body.session_hash]["should_reset"]:
- app.iterators[body.session_hash][fn_index] = None
- else:
- app.iterators[body.session_hash][fn_index] = iterator
- if isinstance(output, Error):
- raise output
- except BaseException as error:
- show_error = app.get_blocks().show_error or isinstance(error, Error)
- traceback.print_exc()
- return JSONResponse(
- content={"error": str(error) if show_error else None},
- status_code=500,
- )
-
- if not (body.batched) and batch:
- output["data"] = output["data"][0]
- return output
-
- # had to use '/run' endpoint for Colab compatibility, '/api' supported for backwards compatibility
- @app.post("/run/{api_name}", dependencies=[Depends(login_check)])
- @app.post("/run/{api_name}/", dependencies=[Depends(login_check)])
- @app.post("/api/{api_name}", dependencies=[Depends(login_check)])
- @app.post("/api/{api_name}/", dependencies=[Depends(login_check)])
- async def predict(
- api_name: str,
- body: PredictBody,
- request: fastapi.Request,
- username: str = Depends(get_current_user),
- ):
- fn_index_inferred = None
- if body.fn_index is None:
- for i, fn in enumerate(app.get_blocks().dependencies):
- if fn["api_name"] == api_name:
- fn_index_inferred = i
- break
- if fn_index_inferred is None:
- return JSONResponse(
- content={
- "error": f"This app has no endpoint /api/{api_name}/."
- },
- status_code=500,
- )
- else:
- fn_index_inferred = body.fn_index
- if (
- not app.get_blocks().api_open
- and app.get_blocks().queue_enabled_for_fn(fn_index_inferred)
- and f"Bearer {app.queue_token}" != request.headers.get("Authorization")
- ):
- raise HTTPException(
- status_code=status.HTTP_401_UNAUTHORIZED,
- detail="Not authorized to skip the queue",
- )
-
- # If this fn_index cancels jobs, then the only input we need is the
- # current session hash
- if app.get_blocks().dependencies[fn_index_inferred]["cancels"]:
- body.data = [body.session_hash]
- if body.request:
- if body.batched:
- gr_request = [
- Request(username=username, **req) for req in body.request
- ]
- else:
- assert isinstance(body.request, dict)
- gr_request = Request(username=username, **body.request)
- else:
- gr_request = Request(username=username, request=request)
- result = await run_predict(
- body=body,
- fn_index_inferred=fn_index_inferred,
- request=gr_request,
- )
- return result
-
- @app.websocket("/queue/join")
- async def join_queue(
- websocket: WebSocket,
- token: Optional[str] = Depends(ws_login_check),
- ):
- blocks = app.get_blocks()
- if app.auth is not None and token is None:
- await websocket.close(code=status.WS_1008_POLICY_VIOLATION)
- return
- if blocks._queue.server_path is None:
- app_url = get_server_url_from_ws_url(str(websocket.url))
- blocks._queue.set_url(app_url)
- await websocket.accept()
- # In order to cancel jobs, we need the session_hash and fn_index
- # to create a unique id for each job
- try:
- await asyncio.wait_for(
- websocket.send_json({"msg": "send_hash"}), timeout=5
- )
- except AsyncTimeOutError:
- return
-
- try:
- session_info = await asyncio.wait_for(
- websocket.receive_json(), timeout=5
- )
- except AsyncTimeOutError:
- return
-
- event = Event(
- websocket, session_info["session_hash"], session_info["fn_index"]
- )
- # set the token into Event to allow using the same token for call_prediction
- event.token = token
- event.session_hash = session_info["session_hash"]
-
- # Continuous events are not put in the queue so that they do not
- # occupy the queue's resource as they are expected to run forever
- if blocks.dependencies[event.fn_index].get("every", 0):
- await cancel_tasks({f"{event.session_hash}_{event.fn_index}"})
- await blocks._queue.reset_iterators(event.session_hash, event.fn_index)
- task = run_coro_in_background(
- blocks._queue.process_events, [event], False
- )
- set_task_name(task, event.session_hash, event.fn_index, batch=False)
- else:
- rank = blocks._queue.push(event)
-
- if rank is None:
- await blocks._queue.send_message(event, {"msg": "queue_full"})
- await event.disconnect()
- return
- estimation = blocks._queue.get_estimation()
- await blocks._queue.send_estimation(event, estimation, rank)
- while True:
- await asyncio.sleep(1)
- if websocket.application_state == WebSocketState.DISCONNECTED:
- return
-
- @app.get(
- "/queue/status",
- dependencies=[Depends(login_check)],
- response_model=Estimation,
- )
- async def get_queue_status():
- return app.get_blocks()._queue.get_estimation()
-
- @app.post("/upload", dependencies=[Depends(login_check)])
- async def upload_file(
- files: List[UploadFile] = File(...),
- ):
- output_files = []
- file_manager = gradio.File()
- for input_file in files:
- output_files.append(
- await file_manager.save_uploaded_file(
- input_file, app.uploaded_file_dir
- )
- )
- return output_files
-
- @app.on_event("startup")
- @app.get("/startup-events")
- async def startup_events():
- if not app.startup_events_triggered:
- app.get_blocks().startup_events()
- app.startup_events_triggered = True
- return True
- return False
-
- @app.get("/theme.css", response_class=PlainTextResponse)
- def theme_css():
- return PlainTextResponse(app.get_blocks().theme_css, media_type="text/css")
-
- @app.get("/robots.txt", response_class=PlainTextResponse)
- def robots_txt():
- if app.get_blocks().share:
- return "User-agent: *\nDisallow: /"
- else:
- return "User-agent: *\nDisallow: "
-
- return app
-
-
-########
-# Helper functions
-########
-
-
-def safe_join(directory: str, path: str) -> str:
-    """Safely join a path to a base directory to avoid escaping the base directory.
- Borrowed from: werkzeug.security.safe_join"""
- _os_alt_seps: List[str] = [
- sep for sep in [os.path.sep, os.path.altsep] if sep is not None and sep != "/"
- ]
-
- if path == "":
- raise HTTPException(400)
-
- filename = posixpath.normpath(path)
- fullpath = os.path.join(directory, filename)
- if (
- any(sep in filename for sep in _os_alt_seps)
- or os.path.isabs(filename)
- or filename == ".."
- or filename.startswith("../")
- or os.path.isdir(fullpath)
- ):
- raise HTTPException(403)
-
- if not os.path.exists(fullpath):
- raise HTTPException(404, "File not found")
-
- return fullpath
-
-
-def get_types(cls_set: List[Type]):
- docset = []
- types = []
- for cls in cls_set:
- doc = inspect.getdoc(cls) or ""
- doc_lines = doc.split("\n")
- for line in doc_lines:
- if "value (" in line:
- types.append(line.split("value (")[1].split(")")[0])
- docset.append(doc_lines[1].split(":")[-1])
- return docset, types
-
-
-def get_server_url_from_ws_url(ws_url: str):
- ws_url_parsed = urlparse(ws_url)
- scheme = "http" if ws_url_parsed.scheme == "ws" else "https"
- port = f":{ws_url_parsed.port}" if ws_url_parsed.port else ""
- return f"{scheme}://{ws_url_parsed.hostname}{port}{ws_url_parsed.path.replace('queue/join', '')}"
-
-
-set_documentation_group("routes")
-
-
-class Obj:
- """
- Using a class to convert dictionaries into objects. Used by the `Request` class.
- Credit: https://www.geeksforgeeks.org/convert-nested-python-dictionary-to-object/
- """
-
- def __init__(self, dict_):
- self.__dict__.update(dict_)
- for key, value in dict_.items():
- if isinstance(value, (dict, list)):
- value = Obj(value)
- setattr(self, key, value)
-
- def __getitem__(self, item):
- return self.__dict__[item]
-
- def __setitem__(self, item, value):
- self.__dict__[item] = value
-
- def __iter__(self):
- for key, value in self.__dict__.items():
- if isinstance(value, Obj):
- yield (key, dict(value))
- else:
- yield (key, value)
-
- def __contains__(self, item) -> bool:
- if item in self.__dict__:
- return True
- for value in self.__dict__.values():
- if isinstance(value, Obj) and item in value:
- return True
- return False
-
- def keys(self):
- return self.__dict__.keys()
-
- def values(self):
- return self.__dict__.values()
-
- def items(self):
- return self.__dict__.items()
-
- def __str__(self) -> str:
- return str(self.__dict__)
-
- def __repr__(self) -> str:
- return str(self.__dict__)
-
-
-@document()
-class Request:
- """
- A Gradio request object that can be used to access the request headers, cookies,
- query parameters and other information about the request from within the prediction
- function. The class is a thin wrapper around the fastapi.Request class. Attributes
- of this class include: `headers`, `client`, `query_params`, and `path_params`. If
- auth is enabled, the `username` attribute can be used to get the logged in user.
- Example:
- import gradio as gr
- def echo(name, request: gr.Request):
- print("Request headers dictionary:", request.headers)
- print("IP address:", request.client.host)
- return name
- io = gr.Interface(echo, "textbox", "textbox").launch()
- """
-
- def __init__(
- self,
- request: fastapi.Request | None = None,
- username: str | None = None,
- **kwargs,
- ):
- """
- Can be instantiated with either a fastapi.Request or by manually passing in
- attributes (needed for websocket-based queueing).
- Parameters:
- request: A fastapi.Request
- """
- self.request = request
- self.username = username
- self.kwargs: Dict = kwargs
-
- def dict_to_obj(self, d):
- if isinstance(d, dict):
- return json.loads(json.dumps(d), object_hook=Obj)
- else:
- return d
-
- def __getattr__(self, name):
- if self.request:
- return self.dict_to_obj(getattr(self.request, name))
- else:
- try:
- obj = self.kwargs[name]
- except KeyError as ke:
- raise AttributeError(
- f"'Request' object has no attribute '{name}'"
- ) from ke
- return self.dict_to_obj(obj)
-
-
-@document()
-def mount_gradio_app(
- app: fastapi.FastAPI,
- blocks: gradio.Blocks,
- path: str,
- gradio_api_url: str | None = None,
-) -> fastapi.FastAPI:
- """Mount a gradio.Blocks to an existing FastAPI application.
-
- Parameters:
- app: The parent FastAPI application.
- blocks: The blocks object we want to mount to the parent app.
- path: The path at which the gradio application will be mounted.
-        gradio_api_url: The full url at which the gradio app will run. This is only needed if deploying to Huggingface spaces or if the websocket endpoints of your deployed app are on a different network location than the gradio app. If deploying to spaces, set gradio_api_url to 'http://localhost:7860/'
- Example:
- from fastapi import FastAPI
- import gradio as gr
- app = FastAPI()
- @app.get("/")
- def read_main():
- return {"message": "This is your main app"}
- io = gr.Interface(lambda x: "Hello, " + x + "!", "textbox", "textbox")
- app = gr.mount_gradio_app(app, io, path="/gradio")
- # Then run `uvicorn run:app` from the terminal and navigate to http://localhost:8000/gradio.
- """
- blocks.dev_mode = False
- blocks.config = blocks.get_config_file()
- blocks.validate_queue_settings()
- gradio_app = App.create_app(blocks)
-
- @app.on_event("startup")
- async def start_queue():
- if gradio_app.get_blocks().enable_queue:
- if gradio_api_url:
- gradio_app.get_blocks()._queue.set_url(gradio_api_url)
- gradio_app.get_blocks().startup_events()
-
- app.mount(path, gradio_app)
- return app
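Once `App.create_app` has registered these routes, they can be exercised over plain HTTP. The sketch below is illustrative only and not part of the deleted module; it assumes gradio 3.x and httpx are installed and that the default local launch settings are used.

```python
# Illustrative client-side sketch against the routes defined above.
import gradio as gr
import httpx

demo = gr.Interface(lambda text: text[::-1], "textbox", "textbox")
app, local_url, _ = demo.launch(prevent_thread_lock=True)

# Served by the get_config route
print(httpx.get(f"{local_url}config").status_code)

# Served by the predict route at /api/{api_name}; an Interface's default api_name is "predict"
resp = httpx.post(f"{local_url}api/predict", json={"data": ["hello"]})
print(resp.json())

demo.close()
```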
diff --git a/spaces/lambdalabs/generative-music-visualizer/torch_utils/ops/upfirdn2d.cpp b/spaces/lambdalabs/generative-music-visualizer/torch_utils/ops/upfirdn2d.cpp
deleted file mode 100644
index 44fa337d8d4c34dfa010a59cd27d86857db671aa..0000000000000000000000000000000000000000
--- a/spaces/lambdalabs/generative-music-visualizer/torch_utils/ops/upfirdn2d.cpp
+++ /dev/null
@@ -1,107 +0,0 @@
-// Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-//
-// NVIDIA CORPORATION and its licensors retain all intellectual property
-// and proprietary rights in and to this software, related documentation
-// and any modifications thereto. Any use, reproduction, disclosure or
-// distribution of this software and related documentation without an express
-// license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-#include <torch/extension.h>
-#include <ATen/cuda/CUDAContext.h>
-#include <c10/cuda/CUDAGuard.h>
-#include "upfirdn2d.h"
-
-//------------------------------------------------------------------------
-
-static torch::Tensor upfirdn2d(torch::Tensor x, torch::Tensor f, int upx, int upy, int downx, int downy, int padx0, int padx1, int pady0, int pady1, bool flip, float gain)
-{
- // Validate arguments.
- TORCH_CHECK(x.is_cuda(), "x must reside on CUDA device");
- TORCH_CHECK(f.device() == x.device(), "f must reside on the same device as x");
- TORCH_CHECK(f.dtype() == torch::kFloat, "f must be float32");
- TORCH_CHECK(x.numel() <= INT_MAX, "x is too large");
- TORCH_CHECK(f.numel() <= INT_MAX, "f is too large");
- TORCH_CHECK(x.numel() > 0, "x has zero size");
- TORCH_CHECK(f.numel() > 0, "f has zero size");
- TORCH_CHECK(x.dim() == 4, "x must be rank 4");
- TORCH_CHECK(f.dim() == 2, "f must be rank 2");
- TORCH_CHECK((x.size(0)-1)*x.stride(0) + (x.size(1)-1)*x.stride(1) + (x.size(2)-1)*x.stride(2) + (x.size(3)-1)*x.stride(3) <= INT_MAX, "x memory footprint is too large");
- TORCH_CHECK(f.size(0) >= 1 && f.size(1) >= 1, "f must be at least 1x1");
- TORCH_CHECK(upx >= 1 && upy >= 1, "upsampling factor must be at least 1");
- TORCH_CHECK(downx >= 1 && downy >= 1, "downsampling factor must be at least 1");
-
- // Create output tensor.
- const at::cuda::OptionalCUDAGuard device_guard(device_of(x));
- int outW = ((int)x.size(3) * upx + padx0 + padx1 - (int)f.size(1) + downx) / downx;
- int outH = ((int)x.size(2) * upy + pady0 + pady1 - (int)f.size(0) + downy) / downy;
- TORCH_CHECK(outW >= 1 && outH >= 1, "output must be at least 1x1");
- torch::Tensor y = torch::empty({x.size(0), x.size(1), outH, outW}, x.options(), x.suggest_memory_format());
- TORCH_CHECK(y.numel() <= INT_MAX, "output is too large");
- TORCH_CHECK((y.size(0)-1)*y.stride(0) + (y.size(1)-1)*y.stride(1) + (y.size(2)-1)*y.stride(2) + (y.size(3)-1)*y.stride(3) <= INT_MAX, "output memory footprint is too large");
-
- // Initialize CUDA kernel parameters.
- upfirdn2d_kernel_params p;
- p.x = x.data_ptr();
- p.f = f.data_ptr();
- p.y = y.data_ptr();
- p.up = make_int2(upx, upy);
- p.down = make_int2(downx, downy);
- p.pad0 = make_int2(padx0, pady0);
- p.flip = (flip) ? 1 : 0;
- p.gain = gain;
- p.inSize = make_int4((int)x.size(3), (int)x.size(2), (int)x.size(1), (int)x.size(0));
- p.inStride = make_int4((int)x.stride(3), (int)x.stride(2), (int)x.stride(1), (int)x.stride(0));
- p.filterSize = make_int2((int)f.size(1), (int)f.size(0));
- p.filterStride = make_int2((int)f.stride(1), (int)f.stride(0));
- p.outSize = make_int4((int)y.size(3), (int)y.size(2), (int)y.size(1), (int)y.size(0));
- p.outStride = make_int4((int)y.stride(3), (int)y.stride(2), (int)y.stride(1), (int)y.stride(0));
- p.sizeMajor = (p.inStride.z == 1) ? p.inSize.w : p.inSize.w * p.inSize.z;
- p.sizeMinor = (p.inStride.z == 1) ? p.inSize.z : 1;
-
- // Choose CUDA kernel.
- upfirdn2d_kernel_spec spec;
- AT_DISPATCH_FLOATING_TYPES_AND_HALF(x.scalar_type(), "upfirdn2d_cuda", [&]
- {
- spec = choose_upfirdn2d_kernel(p);
- });
-
- // Set looping options.
- p.loopMajor = (p.sizeMajor - 1) / 16384 + 1;
- p.loopMinor = spec.loopMinor;
- p.loopX = spec.loopX;
- p.launchMinor = (p.sizeMinor - 1) / p.loopMinor + 1;
- p.launchMajor = (p.sizeMajor - 1) / p.loopMajor + 1;
-
- // Compute grid size.
- dim3 blockSize, gridSize;
- if (spec.tileOutW < 0) // large
- {
- blockSize = dim3(4, 32, 1);
- gridSize = dim3(
- ((p.outSize.y - 1) / blockSize.x + 1) * p.launchMinor,
- (p.outSize.x - 1) / (blockSize.y * p.loopX) + 1,
- p.launchMajor);
- }
- else // small
- {
- blockSize = dim3(256, 1, 1);
- gridSize = dim3(
- ((p.outSize.y - 1) / spec.tileOutH + 1) * p.launchMinor,
- (p.outSize.x - 1) / (spec.tileOutW * p.loopX) + 1,
- p.launchMajor);
- }
-
- // Launch CUDA kernel.
- void* args[] = {&p};
- AT_CUDA_CHECK(cudaLaunchKernel(spec.kernel, gridSize, blockSize, args, 0, at::cuda::getCurrentCUDAStream()));
- return y;
-}
-
-//------------------------------------------------------------------------
-
-PYBIND11_MODULE(TORCH_EXTENSION_NAME, m)
-{
- m.def("upfirdn2d", &upfirdn2d);
-}
-
-//------------------------------------------------------------------------
diff --git a/spaces/lewiswu1209/MockingBird/vocoder_preprocess.py b/spaces/lewiswu1209/MockingBird/vocoder_preprocess.py
deleted file mode 100644
index 95f9e5a0f80edc566cfb31bb77736a1c58573d47..0000000000000000000000000000000000000000
--- a/spaces/lewiswu1209/MockingBird/vocoder_preprocess.py
+++ /dev/null
@@ -1,59 +0,0 @@
-from synthesizer.synthesize import run_synthesis
-from synthesizer.hparams import hparams
-from utils.argutils import print_args
-import argparse
-import os
-
-
-if __name__ == "__main__":
- class MyFormatter(argparse.ArgumentDefaultsHelpFormatter, argparse.RawDescriptionHelpFormatter):
- pass
-
- parser = argparse.ArgumentParser(
- description="Creates ground-truth aligned (GTA) spectrograms from the vocoder.",
- formatter_class=MyFormatter
- )
- parser.add_argument("datasets_root", type=str, help=\
- "Path to the directory containing your SV2TTS directory. If you specify both --in_dir and "
- "--out_dir, this argument won't be used.")
- parser.add_argument("-m", "--model_dir", type=str,
- default="synthesizer/saved_models/mandarin/", help=\
- "Path to the pretrained model directory.")
- parser.add_argument("-i", "--in_dir", type=str, default=argparse.SUPPRESS, help= \
- "Path to the synthesizer directory that contains the mel spectrograms, the wavs and the "
- "embeds. Defaults to /SV2TTS/synthesizer/.")
- parser.add_argument("-o", "--out_dir", type=str, default=argparse.SUPPRESS, help= \
- "Path to the output vocoder directory that will contain the ground truth aligned mel "
- "spectrograms. Defaults to /SV2TTS/vocoder/.")
- parser.add_argument("--hparams", default="",
- help="Hyperparameter overrides as a comma-separated list of name=value "
- "pairs")
- parser.add_argument("--no_trim", action="store_true", help=\
- "Preprocess audio without trimming silences (not recommended).")
- parser.add_argument("--cpu", action="store_true", help=\
- "If True, processing is done on CPU, even when a GPU is available.")
- args = parser.parse_args()
- print_args(args, parser)
- modified_hp = hparams.parse(args.hparams)
-
- if not hasattr(args, "in_dir"):
- args.in_dir = os.path.join(args.datasets_root, "SV2TTS", "synthesizer")
- if not hasattr(args, "out_dir"):
- args.out_dir = os.path.join(args.datasets_root, "SV2TTS", "vocoder")
-
- if args.cpu:
- # Hide GPUs from Pytorch to force CPU processing
- os.environ["CUDA_VISIBLE_DEVICES"] = ""
-
- # Verify webrtcvad is available
- if not args.no_trim:
- try:
- import webrtcvad
- except:
- raise ModuleNotFoundError("Package 'webrtcvad' not found. This package enables "
- "noise removal and is recommended. Please install and try again. If installation fails, "
- "use --no_trim to disable this error message.")
- del args.no_trim
-
- run_synthesis(args.in_dir, args.out_dir, args.model_dir, modified_hp)
-
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/After Effects Cs5 Plugin Keylight 1.2 210.md b/spaces/lincquiQcaudo/Top-20-Diffusion/After Effects Cs5 Plugin Keylight 1.2 210.md
deleted file mode 100644
index ab6779fd8595fb6d5a92186463b84249eba86940..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/After Effects Cs5 Plugin Keylight 1.2 210.md
+++ /dev/null
@@ -1,24 +0,0 @@
-
-
How to Use Keylight 1.2 Plugin in After Effects CS5
-
Keylight is a powerful keying plugin that comes with After Effects CS5 and later versions. It allows you to easily remove green or blue backgrounds from your footage and create realistic composites. In this article, we will show you how to use Keylight 1.2 plugin in After Effects CS5 to key out a green screen and replace it with a different background.
First, you need to import your footage into After Effects and create a new composition. Then, go to the Effects & Presets panel and find the Keying subfolder. Drag and drop the Keylight 1.2 effect onto your footage layer. You can also apply it by going to Effect > Keying > Keylight 1.2.
-
Once you apply the effect, you will see a Screen Colour setting in the Effect Controls panel. This is where you select the colour of the background that you want to remove. Use the eyedropper tool to pick a representative colour from your green screen. You should see the background disappear and leave only your subject in front of a black background.
-
Step 2: Adjust the Keylight Settings
-
Now that you have applied the basic key, you need to fine-tune it to get a better result. There are many settings that you can adjust in Keylight, but we will focus on the most important ones.
-
-
View: This setting allows you to switch between different views of your keyed footage. The default view is Final Result, which shows you the final composite. You can also choose other views such as Screen Matte, Status, or Combined Matte to see how your key is working.
-
Screen Gain: This setting controls the brightness of your screen colour. Increasing it will make your screen colour brighter and easier to key out, but it may also introduce noise or artefacts. Decreasing it will make your screen colour darker and harder to key out, but it may also preserve more details or edges.
-
Screen Balance: This setting controls the balance between red, green, and blue channels of your screen colour. Adjusting it can help you remove any colour spill or contamination from your subject.
-
Clip Black and Clip White: These settings control the threshold of your matte, which is the black and white mask that defines what is transparent and what is opaque in your keyed footage. Increasing Clip Black will make more pixels transparent, while decreasing Clip White will make more pixels opaque. You can use these settings to refine the edges of your subject and remove any unwanted parts of the background.
-
Screen Shrink/Grow: This setting allows you to shrink or grow your matte by a certain amount of pixels. Shrinking it can help you get rid of any fringes or halos around your subject, while growing it can help you fill any gaps or holes in your subject.
-
Screen Softness: This setting allows you to soften or blur the edges of your matte by a certain amount of pixels. Softening it can help you blend your subject with the new background better, while blurring it can help you hide any jagged or pixelated edges.
-
-
You can experiment with these settings until you get a clean and realistic key. You can also use other tools such as masks, rotoscoping, or garbage mattes to further improve your key.
-
Step 3: Replace the Background
-
The final step is to replace the black background with a new one that matches your scene. You can import any image or video that you want to use as your new background into After Effects and place it below your keyed footage layer. You can also adjust the position, scale, rotation, opacity, or blending mode of your background layer to make it fit better with your subject.
-
If you want to add some depth or realism to your composite, you can also use effects such as levels, curves, colour balance, or hue/saturation to match the colours and tones of your subject and background. You can also use effects such as lens
- d5da3c52bf
-
-
\ No newline at end of file
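The Keylight walkthrough above is GUI-driven, but the underlying idea of a screen colour, a matte clipped between black and white thresholds, and a softened edge can be sketched in a few lines of NumPy. This is a conceptual illustration only, not Keylight's actual algorithm, and the file names are hypothetical placeholders for same-sized images.

```python
# Conceptual chroma-key sketch (not Keylight's algorithm); assumes NumPy and Pillow.
import numpy as np
from PIL import Image, ImageFilter

fg = np.asarray(Image.open("greenscreen.png").convert("RGB"), dtype=np.float32) / 255.0
bg = np.asarray(Image.open("background.png").convert("RGB"), dtype=np.float32) / 255.0

screen_colour = np.array([0.1, 0.8, 0.2])             # the "eyedropper" pick
distance = np.linalg.norm(fg - screen_colour, axis=-1)

clip_black, clip_white = 0.3, 0.7                      # matte thresholds
matte = np.clip((distance - clip_black) / (clip_white - clip_black), 0.0, 1.0)

# "Screen Softness": blur the matte slightly before compositing
matte = Image.fromarray((matte * 255).astype(np.uint8)).filter(ImageFilter.GaussianBlur(2))
matte = np.asarray(matte, dtype=np.float32)[..., None] / 255.0

composite = matte * fg + (1.0 - matte) * bg
Image.fromarray((composite * 255).astype(np.uint8)).save("composite.png")
```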
diff --git a/spaces/lingbionlp/PhenoTagger_v1.2_Demo/models_v1.2/bioformer-cased-v1.0/README.md b/spaces/lingbionlp/PhenoTagger_v1.2_Demo/models_v1.2/bioformer-cased-v1.0/README.md
deleted file mode 100644
index a5a8150e73e171ba0e36b235ea4f55e5b97b4756..0000000000000000000000000000000000000000
--- a/spaces/lingbionlp/PhenoTagger_v1.2_Demo/models_v1.2/bioformer-cased-v1.0/README.md
+++ /dev/null
@@ -1,32 +0,0 @@
----
-language:
-- en
-license: apache-2.0
----
-
-
-Bioformer is a lightweight BERT model for biomedical text mining. Bioformer uses a biomedical vocabulary and is pre-trained from scratch only on biomedical domain corpora. Our experiments show that Bioformer is 3x as fast as BERT-base, and achieves comparable or even better performance than BioBERT/PubMedBERT on downstream NLP tasks.
-
-Bioformer has 8 layers (transformer blocks) with a hidden embedding size of 512, and the number of self-attention heads is 8. Its total number of parameters is 42,820,610.
-
-## Vocabulary of Bioformer
-Bioformer uses a cased WordPiece vocabulary trained from a biomedical corpus, which included all PubMed abstracts (33 million, as of Feb 1, 2021) and 1 million PMC full-text articles. PMC has 3.6 million articles but we down-sampled them to 1 million such that the total size of PubMed abstracts and PMC full-text articles are approximately equal. To mitigate the out-of-vocabulary issue and include special symbols (e.g. male and female symbols) in biomedical literature, we trained Bioformer’s vocabulary from the Unicode text of the two resources. The vocabulary size of Bioformer is 32768 (2^15), which is similar to that of the original BERT.
-
-## Pre-training of Bioformer
-Bioformer was pre-trained from scratch on the same corpus as the vocabulary (33 million PubMed abstracts + 1 million PMC full-text articles). For the masked language modeling (MLM) objective, we used whole-word masking with a masking rate of 15%. There are debates on whether the next sentence prediction (NSP) objective could improve the performance on downstream tasks. We include it in our pre-training experiment in case the prediction of the next sentence is needed by end-users. Sentence segmentation of all training text was performed using [SciSpacy](https://allenai.github.io/scispacy/).
-
-Pre-training of Bioformer was performed on a single Cloud TPU device (TPUv2, 8 cores, 8GB memory per core). The maximum input sequence length was fixed to 512, and the batch size was set to 256. We pre-trained Bioformer for 2 million steps, which took about 8.3 days.
-
-
-## Awards
-
-Bioformer achieved top performance (highest micro-F1 score) in the BioCreative VII COVID-19 multi-label topic classification challenge (https://biocreative.bioinformatics.udel.edu/media/store/files/2021/TRACK5_pos_1_BC7_submission_221.pdf)
-
-## Acknowledgment
-
-Bioformer is partly supported by the Google TPU Research Cloud (TRC) program.
-
-## Questions
-If you have any questions, please submit an issue here: https://github.com/WGLab/bioformer/issues
-
-You can also send an email to Li Fang (fangli2718@gmail.com)
\ No newline at end of file
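The README above describes Bioformer's architecture and pre-training but not how to load it. A minimal sketch with the transformers library is given below; the Hub id is an assumption based on the directory name (`bioformer-cased-v1.0`) and may differ, so treat it as a placeholder.

```python
# Sketch only: load Bioformer with Hugging Face transformers.
# "bioformer/bioformer-cased-v1.0" is an assumed, placeholder model id.
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

model_id = "bioformer/bioformer-cased-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

# The model was pre-trained with whole-word masking, so fill-mask is a natural probe.
fill = pipeline("fill-mask", model=model, tokenizer=tokenizer)
for pred in fill("Aspirin is commonly used to treat [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```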
diff --git a/spaces/lithiumice/SadTalker/src/face3d/models/arcface_torch/losses.py b/spaces/lithiumice/SadTalker/src/face3d/models/arcface_torch/losses.py
deleted file mode 100644
index 87aeaa107af4d53f5a6132b3739d5cafdcded7fc..0000000000000000000000000000000000000000
--- a/spaces/lithiumice/SadTalker/src/face3d/models/arcface_torch/losses.py
+++ /dev/null
@@ -1,42 +0,0 @@
-import torch
-from torch import nn
-
-
-def get_loss(name):
- if name == "cosface":
- return CosFace()
- elif name == "arcface":
- return ArcFace()
- else:
- raise ValueError()
-
-
-class CosFace(nn.Module):
- def __init__(self, s=64.0, m=0.40):
- super(CosFace, self).__init__()
- self.s = s
- self.m = m
-
- def forward(self, cosine, label):
- index = torch.where(label != -1)[0]
- m_hot = torch.zeros(index.size()[0], cosine.size()[1], device=cosine.device)
- m_hot.scatter_(1, label[index, None], self.m)
- cosine[index] -= m_hot
- ret = cosine * self.s
- return ret
-
-
-class ArcFace(nn.Module):
- def __init__(self, s=64.0, m=0.5):
- super(ArcFace, self).__init__()
- self.s = s
- self.m = m
-
- def forward(self, cosine: torch.Tensor, label):
- index = torch.where(label != -1)[0]
- m_hot = torch.zeros(index.size()[0], cosine.size()[1], device=cosine.device)
- m_hot.scatter_(1, label[index, None], self.m)
- cosine.acos_()
- cosine[index] += m_hot
- cosine.cos_().mul_(self.s)
- return cosine
diff --git a/spaces/lixq/bingo61/src/lib/hooks/use-at-bottom.tsx b/spaces/lixq/bingo61/src/lib/hooks/use-at-bottom.tsx
deleted file mode 100644
index d37c8cf4162adcb0064e08ecec24eb731416b045..0000000000000000000000000000000000000000
--- a/spaces/lixq/bingo61/src/lib/hooks/use-at-bottom.tsx
+++ /dev/null
@@ -1,23 +0,0 @@
-import * as React from 'react'
-
-export function useAtBottom(offset = 0) {
- const [isAtBottom, setIsAtBottom] = React.useState(false)
-
- React.useEffect(() => {
- const handleScroll = () => {
- setIsAtBottom(
- window.innerHeight + window.scrollY >=
- document.body.offsetHeight - offset
- )
- }
-
- window.addEventListener('scroll', handleScroll, { passive: true })
- handleScroll()
-
- return () => {
- window.removeEventListener('scroll', handleScroll)
- }
- }, [offset])
-
- return isAtBottom
-}
diff --git a/spaces/lj1995/vocal2guitar/infer/trans_weights.py b/spaces/lj1995/vocal2guitar/infer/trans_weights.py
deleted file mode 100644
index da0759627d3fee175a2311a5ac50ccb7f8db8ded..0000000000000000000000000000000000000000
--- a/spaces/lj1995/vocal2guitar/infer/trans_weights.py
+++ /dev/null
@@ -1,16 +0,0 @@
-import torch, pdb
-
-# a=torch.load(r"E:\codes\py39\vits_vc_gpu_train\logs\ft-mi-suc\G_1000.pth")["model"]#sim_nsf#
-# a=torch.load(r"E:\codes\py39\vits_vc_gpu_train\logs\ft-mi-freeze-vocoder-flow-enc_q\G_1000.pth")["model"]#sim_nsf#
-# a=torch.load(r"E:\codes\py39\vits_vc_gpu_train\logs\ft-mi-freeze-vocoder\G_1000.pth")["model"]#sim_nsf#
-# a=torch.load(r"E:\codes\py39\vits_vc_gpu_train\logs\ft-mi-test\G_1000.pth")["model"]#sim_nsf#
-a = torch.load(
- r"E:\codes\py39\vits_vc_gpu_train\logs\ft-mi-no_opt-no_dropout\G_1000.pth"
-)[
- "model"
-] # sim_nsf#
-for key in a.keys():
- a[key] = a[key].half()
-# torch.save(a,"ft-mi-freeze-vocoder_true_1k.pt")#
-# torch.save(a,"ft-mi-sim1k.pt")#
-torch.save(a, "ft-mi-no_opt-no_dropout.pt") #
diff --git a/spaces/ljjggr/bingo/src/components/external-link.tsx b/spaces/ljjggr/bingo/src/components/external-link.tsx
deleted file mode 100644
index 011265f364d5a64a770f4c7e9c65c5ade21d623a..0000000000000000000000000000000000000000
--- a/spaces/ljjggr/bingo/src/components/external-link.tsx
+++ /dev/null
@@ -1,30 +0,0 @@
-export function ExternalLink({
-  href,
-  children
-}: {
-  href: string
-  children: React.ReactNode
-}) {
-  return (
-    <a href={href} target="_blank" rel="noreferrer">
-      <span>{children}</span>
-    </a>
-  )
-}
diff --git a/spaces/luwujie/QQsign/devices/device_8958.js b/spaces/luwujie/QQsign/devices/device_8958.js
deleted file mode 100644
index 455ddb0108b70276949e6539926481590a98e0d9..0000000000000000000000000000000000000000
--- a/spaces/luwujie/QQsign/devices/device_8958.js
+++ /dev/null
@@ -1,344 +0,0 @@
-"use strict";
-var __importDefault = (this && this.__importDefault) || function (mod) {
- return (mod && mod.__esModule) ? mod : { "default": mod };
-};
-Object.defineProperty(exports, "__esModule", { value: true });
-exports.getApkInfo = exports.Platform = exports.Device = exports.generateFullDevice = exports.generateShortDevice = void 0;
-const crypto_1 = require("crypto");
-const constants_1 = require("./constants");
-const axios_1 = __importDefault(require("axios"));
-const algo_1 = require("./algo");
-function generateImei() {
- let imei = `86${(0, constants_1.randomString)(12, '0123456789')}`;
- function calcSP(imei) {
- let sum = 0;
- for (let i = 0; i < imei.length; ++i) {
- if (i % 2) {
- let j = parseInt(imei[i]) * 2;
- sum += j % 10 + Math.floor(j / 10);
- }
- else {
- sum += parseInt(imei[i]);
- }
- }
- return (100 - sum) % 10;
- }
- return imei + calcSP(imei);
-}
-/** Generate short device info */
-function generateShortDevice() {
- const randstr = (length, num = false) => {
- const map = num ? '0123456789' : '0123456789abcdef';
- return (0, constants_1.randomString)(length, map);
- };
- return {
-        "--begin--": "This device info is randomly generated; the original configuration cannot be recovered if it is lost",
- product: `ILPP-${randstr(5).toUpperCase()}`,
- device: `${randstr(5).toUpperCase()}`,
- board: `${randstr(5).toUpperCase()}`,
- brand: `${randstr(4).toUpperCase()}`,
- model: `ICQQ ${randstr(4).toUpperCase()}`,
- wifi_ssid: `HUAWEI-${randstr(7)}`,
- bootloader: `U-boot`,
- android_id: `IL.${randstr(7, true)}.${randstr(4, true)}`,
- boot_id: `${randstr(8)}-${randstr(4)}-${randstr(4)}-${randstr(4)}-${randstr(12)}`,
- proc_version: `Linux version 5.10.101-android12-${randstr(8)}`,
- mac_address: `2D:${randstr(2).toUpperCase()}:${randstr(2).toUpperCase()}:${randstr(2).toUpperCase()}:${randstr(2).toUpperCase()}:${randstr(2).toUpperCase()}`,
- ip_address: `192.168.${randstr(2, true)}.${randstr(2, true)}`,
- imei: `${generateImei()}`,
- incremental: `${randstr(10, true).toUpperCase()}`,
-        "--end--": "The device may need to be re-verified after modification."
- };
-}
-exports.generateShortDevice = generateShortDevice;
-/** Generate full device info */
-function generateFullDevice(apk, d) {
- if (!d)
- d = generateShortDevice();
- return {
- display: d.android_id,
- product: d.product,
- device: d.device,
- board: d.board,
- brand: d.brand,
- model: d.model,
- bootloader: d.bootloader,
- fingerprint: `${d.brand}/${d.product}/${d.device}:10/${d.android_id}/${d.incremental}:user/release-keys`,
- boot_id: d.boot_id,
- proc_version: d.proc_version,
- baseband: "",
- sim: "T-Mobile",
- os_type: "android",
- mac_address: d.mac_address,
- ip_address: d.ip_address,
- wifi_bssid: d.mac_address,
- wifi_ssid: d.wifi_ssid,
- imei: d.imei,
- android_id: (0, constants_1.md5)(d.android_id).toString("hex"),
- apn: "wifi",
- version: {
- incremental: d.incremental,
- release: "10",
- codename: "REL",
- sdk: 29,
- },
- imsi: (0, crypto_1.randomBytes)(16),
- guid: (0, constants_1.md5)(Buffer.concat([Buffer.from(d.imei), Buffer.from(d.mac_address)])),
- };
-}
-exports.generateFullDevice = generateFullDevice;
-class Device {
- constructor(apk, d) {
- this.apk = apk;
- this.secret = 'ZdJqM15EeO2zWc08';
- this.publicKey = `-----BEGIN PUBLIC KEY-----
-MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDEIxgwoutfwoJxcGQeedgP7FG9
-qaIuS0qzfR8gWkrkTZKM2iWHn2ajQpBRZjMSoSf6+KJGvar2ORhBfpDXyVtZCKpq
-LQ+FLkpncClKVIrBwv6PHyUvuCb0rIarmgDnzkfQAqVufEtR64iazGDKatvJ9y6B
-9NMbHddGSAUmRTCrHQIDAQAB
------END PUBLIC KEY-----`;
- if (!d)
- d = generateShortDevice();
- Object.assign(this, generateFullDevice(apk, d));
- }
- async getQIMEI() {
- if (this.apk.app_key === "") {
- return;
- }
- const k = (0, constants_1.randomString)(16);
- const key = (0, algo_1.encryptPKCS1)(this.publicKey, k);
- const time = Date.now();
- const nonce = (0, constants_1.randomString)(16);
- const payload = this.genRandomPayloadByDevice();
- const params = (0, algo_1.aesEncrypt)(JSON.stringify(payload), k).toString('base64');
- try {
- const { data } = await axios_1.default.post("https://snowflake.qq.com/ola/android", {
- key,
- params,
- time, nonce,
- sign: (0, constants_1.md5)(key + params + time + nonce + this.secret).toString("hex"),
- extra: ''
- }, {
- headers: {
- 'User-Agent': `Dalvik/2.1.0 (Linux; U; Android ${this.version.release}; PCRT00 Build/N2G48H)`,
- 'Content-Type': "application/json"
- }
- });
- if (data?.code !== 0) {
- return;
- }
- const { q16, q36 } = JSON.parse((0, algo_1.aesDecrypt)(data.data, k));
- this.qImei16 = q16;
- this.qImei36 = q36;
- }
- catch {
- }
- }
- genRandomPayloadByDevice() {
- const fixedRand = (max = 1, min = 0) => {
- if (max < min)
- [max, min] = [min, max];
- const diff = max - min;
- return Math.floor(Math.random() * diff) + min;
- };
- const reserved = {
- "harmony": "0",
- "clone": Math.random() > 0.5 ? "1" : "0",
- "containe": "",
- "oz": "",
- "oo": "",
- "kelong": Math.random() > 0.5 ? "1" : "0",
- "uptimes": (0, constants_1.formatTime)(new Date()),
- "multiUser": Math.random() > 0.5 ? "1" : "0",
- "bod": this.board,
- "brd": this.brand,
- "dv": this.device,
- "firstLevel": "",
- "manufact": this.brand,
- "name": this.model,
- "host": "se.infra",
- "kernel": this.fingerprint
- };
- const timestamp = Date.now();
- this.mtime = this.mtime || Date.now();
- const mtime1 = new Date(this.mtime || Date.now());
- const dateFormat = (fmt, time = Date.now()) => (0, constants_1.formatTime)(time, fmt);
- const mtimeStr1 = dateFormat("YYYY-mm-ddHHMMSS", mtime1) + "." + this.imei.slice(2, 11);
- const mtime2 = new Date(this.mtime - parseInt(this.imei.slice(2, 4)));
- const mtimeStr2 = dateFormat("YYYY-mm-ddHHMMSS", mtime2) + "." + this.imei.slice(5, 14);
- let beaconIdArr = [
- `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`,
- mtimeStr1,
- '0000000000000000',
- (0, constants_1.md5)(this.android_id + this.imei).toString("hex").slice(0, 16),
- ...new Array(4).fill(false).map((_) => fixedRand(10000000, 1000000)),
- this.boot_id,
- '1',
- fixedRand(5, 0),
- fixedRand(5, 0),
- `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`,
- `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`,
- fixedRand(5, 0),
- fixedRand(100, 10),
- `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`,
- `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`,
- fixedRand(50000, 10000),
- fixedRand(100, 10),
- `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`,
- mtimeStr2,
- fixedRand(10000, 1000),
- fixedRand(5, 0),
- `${dateFormat("YYYY-mm-ddHHMMSS")}.${String(((10 + parseInt(this.imei.slice(5, 7))) % 100)).padStart(2, "0")}0000000`,
- `${dateFormat("YYYY-mm-ddHHMMSS")}.${String(((11 + parseInt(this.imei.slice(5, 7))) % 100)).padStart(2, "0")}0000000`,
- fixedRand(10000, 1000),
- fixedRand(100, 10),
- `${dateFormat("YYYY-mm-ddHHMMSS")}.${String(((11 + parseInt(this.imei.slice(5, 7))) % 100)).padStart(2, "0")}0000000`,
- `${dateFormat("YYYY-mm-ddHHMMSS")}.${String(((11 + parseInt(this.imei.slice(5, 7))) % 100)).padStart(2, "0")}0000000`,
- fixedRand(10000, 1000),
- fixedRand(5, 0),
- `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`,
- `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`,
- fixedRand(5, 0),
- fixedRand(100, 10),
- `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`,
- `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`,
- fixedRand(5, 0),
- fixedRand(5, 0),
- ].map((str, idx) => `k${idx + 1}:${str}`);
- return {
- "androidId": this.android_id,
- "platformId": 1,
- "appKey": this.apk.app_key,
- "appVersion": this.apk.version,
- "beaconIdSrc": beaconIdArr.join(';'),
- "brand": this.brand,
- "channelId": "2017",
- "cid": "",
- "imei": this.imei,
- "imsi": this.imsi.toString("hex"),
- "mac": this.mac_address,
- "model": this.model,
- "networkType": "unknown",
- "oaid": "",
- "osVersion": `Android ${this.version.release},level ${this.version.sdk}`,
- "qimei": "",
- "qimei36": "",
- "sdkVersion": "1.2.13.6",
- "targetSdkVersion": "26",
- "audit": "",
- "userId": "{}",
- "packageId": this.apk.id,
- "deviceType": this.display,
- "sdkName": "",
- "reserved": JSON.stringify(reserved),
- };
- }
-}
-exports.Device = Device;
-/** Supported login device platforms */
-var Platform;
-(function (Platform) {
- Platform[Platform["Android"] = 1] = "Android";
- Platform[Platform["aPad"] = 2] = "aPad";
- Platform[Platform["Watch"] = 3] = "Watch";
- Platform[Platform["iMac"] = 4] = "iMac";
- Platform[Platform["iPad"] = 5] = "iPad";
- Platform[Platform["Tim"] = 6] = "Tim";
-})(Platform = exports.Platform || (exports.Platform = {}));
-const mobile = {
- id: "com.tencent.mobileqq",
- app_key: '0S200MNJT807V3GE',
- name: "A8.9.58.11175",
- version: "8.9.58.11175",
- ver: "8.9.58",
- sign: Buffer.from('A6 B7 45 BF 24 A2 C2 77 52 77 16 F6 F3 6E B6 8D'.split(' ').map(s => parseInt(s, 16))),
- buildtime: 1684467300,
- appid: 16,
- subid: 537163194,
- bitmap: 150470524,
- main_sig_map: 16724722,
- sub_sig_map: 0x10400,
- sdkver: "6.0.0.2545",
- display: "Android_8.9.58",
- qua: 'V1_AND_SQ_8.9.58_4108_YYB_D',
- ssover: 20,
-};
-const tim = {
- id: "com.tencent.tim",
- app_key: '0S200MNJT807V3GE',
- name: "A3.5.1.3168",
- version: "3.5.1.3168",
- ver: "3.5.1",
- sign: Buffer.from('775e696d09856872fdd8ab4f3f06b1e0', 'hex'),
- buildtime: 1630062176,
- appid: 16,
- subid: 537150355,
- bitmap: 150470524,
- main_sig_map: 16724722,
- sub_sig_map: 0x10400,
- sdkver: "6.0.0.2484",
- display: "Tim",
- qua: "V1_AND_SQ_8.3.9_351_TIM_D",
- ssover: 18,
-};
-const watch = {
- id: "com.tencent.qqlite",
- app_key: '0S200MNJT807V3GE',
- name: "A2.0.8",
- version: "2.0.8",
- ver: "2.0.8",
- sign: Buffer.from('A6 B7 45 BF 24 A2 C2 77 52 77 16 F6 F3 6E B6 8D'.split(' ').map(s => parseInt(s, 16))),
- buildtime: 1559564731,
- appid: 16,
- subid: 537065138,
- bitmap: 16252796,
- main_sig_map: 16724722,
- sub_sig_map: 0x10400,
- sdkver: "6.0.0.2365",
- display: "Watch",
- qua: '',
- ssover: 5
-};
-const hd = {
- id: "com.tencent.minihd.qq",
- app_key: '0S200MNJT807V3GE',
- name: "A5.9.3.3468",
- version: "5.9.3.3468",
- ver: "5.9.3",
- sign: Buffer.from('AA 39 78 F4 1F D9 6F F9 91 4A 66 9E 18 64 74 C7'.split(' ').map(s => parseInt(s, 16))),
- buildtime: 1637427966,
- appid: 16,
- subid: 537128930,
- bitmap: 150470524,
- main_sig_map: 1970400,
- sub_sig_map: 66560,
- sdkver: "6.0.0.2433",
- display: "iMac",
- qua: '',
- ssover: 12
-};
-const apklist = {
- [Platform.Android]: mobile,
- [Platform.Tim]: tim,
- [Platform.aPad]: {
- ...mobile,
- subid: 537163242,
- display: 'aPad_8.9.58'
- },
- [Platform.Watch]: watch,
- [Platform.iMac]: { ...hd },
- [Platform.iPad]: {
- ...mobile,
- subid: 537155074,
- sign: hd.sign,
- name: '8.9.50.611',
- ver: '8.9.50',
- sdkver: '6.0.0.2535',
- qua: 'V1_AND_SQ_8.9.50_3898_YYB_D',
- display: 'iPad'
- },
-};
-function getApkInfo(p) {
- return apklist[p] || apklist[Platform.Android];
-}
-exports.getApkInfo = getApkInfo;
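
The sign field in the getQIMEI request above is an MD5 hex digest over the concatenation of the RSA-encrypted AES key, the AES-encrypted payload, the timestamp, the nonce and the app secret. A minimal standalone sketch of that signing step, using Node's built-in crypto module and hypothetical placeholder values (in the code above, key, params and secret come from encryptPKCS1, aesEncrypt and this.secret respectively):

const crypto = require("crypto");

// Hypothetical stand-ins for the values assembled in getQIMEI() above.
const key = "<rsa-encrypted-aes-key>";            // encryptPKCS1(publicKey, k)
const params = "<aes-encrypted-payload-base64>";  // aesEncrypt(JSON.stringify(payload), k)
const time = Date.now();
const nonce = "0123456789abcdef";                 // randomString(16)
const secret = "<app-secret>";                    // this.secret

// sign = md5(key + params + time + nonce + secret), hex-encoded,
// matching the sign field posted to snowflake.qq.com in the deleted code.
const sign = crypto
    .createHash("md5")
    .update(key + params + time + nonce + secret)
    .digest("hex");
console.log(sign);

Note that the request is only attempted when apk.app_key is non-empty, and any network or parse error is swallowed by the empty catch block, so a failed fetch simply leaves qImei16 and qImei36 unset.
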
diff --git a/spaces/ma-xu/LIVE/matrix.h b/spaces/ma-xu/LIVE/matrix.h
deleted file mode 100644
index b53f484e2abf613c6d0c1b36890a332d778f24b5..0000000000000000000000000000000000000000
--- a/spaces/ma-xu/LIVE/matrix.h
+++ /dev/null
@@ -1,544 +0,0 @@
-#pragma once
-
-#include "diffvg.h"
-#include "vector.h"
-#include <iostream>
-
-template <typename T>
-struct TMatrix3x3 {
- DEVICE
- TMatrix3x3() {
- for (int i = 0; i < 3; i++) {
- for (int j = 0; j < 3; j++) {
- data[i][j] = T(0);
- }
- }
- }
-
- template <typename T2>
- DEVICE
- TMatrix3x3(T2 *arr) {
- data[0][0] = arr[0];
- data[0][1] = arr[1];
- data[0][2] = arr[2];
- data[1][0] = arr[3];
- data[1][1] = arr[4];
- data[1][2] = arr[5];
- data[2][0] = arr[6];
- data[2][1] = arr[7];
- data[2][2] = arr[8];
- }
- DEVICE
- TMatrix3x3(T v00, T v01, T v02,
- T v10, T v11, T v12,
- T v20, T v21, T v22) {
- data[0][0] = v00;
- data[0][1] = v01;
- data[0][2] = v02;
- data[1][0] = v10;
- data[1][1] = v11;
- data[1][2] = v12;
- data[2][0] = v20;
- data[2][1] = v21;
- data[2][2] = v22;
- }
-
- DEVICE
- const T& operator()(int i, int j) const {
- return data[i][j];
- }
- DEVICE
- T& operator()(int i, int j) {
- return data[i][j];
- }
- DEVICE
- static TMatrix3x3 identity() {
- TMatrix3x3 m(1, 0, 0,
- 0, 1, 0,
- 0, 0, 1);
- return m;
- }
-
- T data[3][3];
-};
-
-using Matrix3x3 = TMatrix3x3<Real>;
-using Matrix3x3f = TMatrix3x3<float>;
-
-template <typename T>
-struct TMatrix4x4 {
- DEVICE TMatrix4x4() {
- for (int i = 0; i < 4; i++) {
- for (int j = 0; j < 4; j++) {
- data[i][j] = T(0);
- }
- }
- }
-
- template <typename T2>
- DEVICE TMatrix4x4(const T2 *arr) {
- for (int i = 0; i < 4; i++) {
- for (int j = 0; j < 4; j++) {
- data[i][j] = (T)arr[i * 4 + j];
- }
- }
- }
-
- template <typename T2>
- DEVICE TMatrix4x4(const TMatrix4x4<T2> &m) {
- for (int i = 0; i < 4; i++) {
- for (int j = 0; j < 4; j++) {
- data[i][j] = T(m.data[i][j]);
- }
- }
- }
-
- template <typename T2>
- DEVICE TMatrix4x4(T2 v00, T2 v01, T2 v02, T2 v03,
- T2 v10, T2 v11, T2 v12, T2 v13,
- T2 v20, T2 v21, T2 v22, T2 v23,
- T2 v30, T2 v31, T2 v32, T2 v33) {
- data[0][0] = (T)v00;
- data[0][1] = (T)v01;
- data[0][2] = (T)v02;
- data[0][3] = (T)v03;
- data[1][0] = (T)v10;
- data[1][1] = (T)v11;
- data[1][2] = (T)v12;
- data[1][3] = (T)v13;
- data[2][0] = (T)v20;
- data[2][1] = (T)v21;
- data[2][2] = (T)v22;
- data[2][3] = (T)v23;
- data[3][0] = (T)v30;
- data[3][1] = (T)v31;
- data[3][2] = (T)v32;
- data[3][3] = (T)v33;
- }
-
- DEVICE
- const T& operator()(int i, int j) const {
- return data[i][j];
- }
-
- DEVICE
- T& operator()(int i, int j) {
- return data[i][j];
- }
-
- DEVICE
- static TMatrix4x4 identity() {
- TMatrix4x4 m(1, 0, 0, 0,
- 0, 1, 0, 0,
- 0, 0, 1, 0,
- 0, 0, 0, 1);
- return m;
- }
-
- T data[4][4];
-};
-
-using Matrix4x4 = TMatrix4x4<Real>;
-using Matrix4x4f = TMatrix4x4<float>;
-
-template <typename T>
-DEVICE
-inline auto operator+(const TMatrix3x3<T> &m0, const TMatrix3x3<T> &m1) -> TMatrix3x3<T> {
- TMatrix3x3<T> m;
- for (int i = 0; i < 3; i++) {
- for (int j = 0; j < 3; j++) {
- m(i, j) = m0(i, j) + m1(i, j);
- }
- }
- return m;
-}
-
-template <typename T>
-DEVICE
-inline auto operator-(const TMatrix3x3<T> &m0, const TMatrix3x3<T> &m1) -> TMatrix3x3<T> {
- TMatrix3x3<T> m;
- for (int i = 0; i < 3; i++) {
- for (int j = 0; j < 3; j++) {
- m(i, j) = m0(i, j) - m1(i, j);
- }
- }
- return m;
-}
-
-template <typename T>
-DEVICE
-inline auto operator*(const TMatrix3x3<T> &m0, const TMatrix3x3<T> &m1) -> TMatrix3x3<T> {
- TMatrix3x3<T> ret;
- for (int i = 0; i < 3; i++) {
- for (int j = 0; j < 3; j++) {
- ret(i, j) = T(0);
- for (int k = 0; k < 3; k++) {
- ret(i, j) += m0(i, k) * m1(k, j);
- }
- }
- }
- return ret;
-}
-
-template <typename T>
-DEVICE
-inline auto operator*(const TVector3<T> &v, const TMatrix3x3<T> &m) -> TVector3<T> {
- TVector3<T> ret;
- for (int i = 0; i < 3; i++) {
- ret[i] = T(0);
- for (int j = 0; j < 3; j++) {
- ret[i] += v[j] * m(j, i);
- }
- }
- return ret;
-}
-
-template <typename T>
-DEVICE
-inline auto operator*(const TMatrix3x3<T> &m, const TVector3<T> &v) -> TVector3<T> {
- TVector3<T> ret;
- for (int i = 0; i < 3; i++) {
- ret[i] = 0.f;
- for (int j = 0; j < 3; j++) {
- ret[i] += m(i, j) * v[j];
- }
- }
- return ret;
-}
-
-template <typename T>
-DEVICE
-inline auto inverse(const TMatrix3x3<T> &m) -> TMatrix3x3<T> {
- // computes the inverse of a matrix m
- auto det = m(0, 0) * (m(1, 1) * m(2, 2) - m(2, 1) * m(1, 2)) -
- m(0, 1) * (m(1, 0) * m(2, 2) - m(1, 2) * m(2, 0)) +
- m(0, 2) * (m(1, 0) * m(2, 1) - m(1, 1) * m(2, 0));
-
- auto invdet = 1 / det;
-
- auto m_inv = TMatrix3x3<T>{};
- m_inv(0, 0) = (m(1, 1) * m(2, 2) - m(2, 1) * m(1, 2)) * invdet;
- m_inv(0, 1) = (m(0, 2) * m(2, 1) - m(0, 1) * m(2, 2)) * invdet;
- m_inv(0, 2) = (m(0, 1) * m(1, 2) - m(0, 2) * m(1, 1)) * invdet;
- m_inv(1, 0) = (m(1, 2) * m(2, 0) - m(1, 0) * m(2, 2)) * invdet;
- m_inv(1, 1) = (m(0, 0) * m(2, 2) - m(0, 2) * m(2, 0)) * invdet;
- m_inv(1, 2) = (m(1, 0) * m(0, 2) - m(0, 0) * m(1, 2)) * invdet;
- m_inv(2, 0) = (m(1, 0) * m(2, 1) - m(2, 0) * m(1, 1)) * invdet;
- m_inv(2, 1) = (m(2, 0) * m(0, 1) - m(0, 0) * m(2, 1)) * invdet;
- m_inv(2, 2) = (m(0, 0) * m(1, 1) - m(1, 0) * m(0, 1)) * invdet;
- return m_inv;
-}
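
For reference, inverse() above is the closed-form cofactor (adjugate) inverse of a 3x3 matrix, with the determinant expanded along the first row; it assumes the matrix is non-singular, since the code does not guard against a zero determinant. In the indexing used by the code:

\det m = m_{00}\,(m_{11}m_{22} - m_{21}m_{12}) - m_{01}\,(m_{10}m_{22} - m_{12}m_{20}) + m_{02}\,(m_{10}m_{21} - m_{11}m_{20}), \qquad m^{-1} = \frac{1}{\det m}\,\operatorname{adj}(m),

where adj(m) is the transposed matrix of cofactors, written out entry by entry in m_inv above.
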
-
-template <typename T>
-DEVICE
-inline auto operator+(const TMatrix4x4