diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Anno 2070 Deep Ocean [PCDVD Crack][Multi6] (2012) CODEX The Ultimate Review of the Award-Winning Simulation Game.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Anno 2070 Deep Ocean [PCDVD Crack][Multi6] (2012) CODEX The Ultimate Review of the Award-Winning Simulation Game.md
deleted file mode 100644
index de19ecc4e48d277c04c879d72cbc9fb75fc3475d..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Anno 2070 Deep Ocean [PCDVD Crack][Multi6] (2012) CODEX The Ultimate Review of the Award-Winning Simulation Game.md
+++ /dev/null
@@ -1,145 +0,0 @@
-
-
Anno 2070 Deep Ocean: A Review of the Expansion Pack
-Anno 2070, the latest entry in Ubisoft's long-running real-time strategy series, gets a major expansion pack in 2012, titled Deep Ocean. This add-on brings a new civilization level, new production chains and resources, new buildings and vehicles, new challenges and quests, and many other features and improvements to the game. In this article, we will review what Anno 2070 Deep Ocean has to offer, and how to install it on your PC.
- What is Anno 2070 Deep Ocean?
-Anno 2070 Deep Ocean is the add-on of Anno 2070, which was released in 2011. It is set in the year 2070, when global warming has melted the ice caps and raised the sea level, forcing humanity to adapt to the new conditions. The game features three factions: the Ecos, who are environmentally friendly and use renewable energy sources; the Tycoons, who are industrial and use fossil fuels; and the Techs, who are scientific and use advanced technology. The player can choose to ally with one or more factions, and build their own civilization on various islands and underwater plateaus.
-Anno 2070 Deep Ocean [PCDVD Crack][Multi6] (2012) CODEX
Download Zip ✸✸✸ https://byltly.com/2uKvyW
- The new civilization level: the Geniuses
-For the first time in the history of the Anno series, an add-on brings a new civilization level: the Tech faction is expanded by the Genius population class. These are highly intelligent and innovative people who require neuroimplants, immunity drugs, laboratory instruments, and bionic suits to satisfy their needs. To produce these goods, new fertilities have been added to the underwater islands, such as coral, sponges, lithium, platinum, and enzymes. The Geniuses also unlock access to the Tech monument: the Science Forum, which opens up all building restrictions on the island and gives special tasks from F.A.T.H.E.R. 2.0, the artificial intelligence that guides the Techs.
- The new production chains and resources
-Anno 2070 Deep Ocean adds several new production chains and resources to the game, especially for the underwater islands. Some of them are:
-
-- Bionic suits: these make the Geniuses euphoric and generate massive taxes. However, they require a four-step production chain that involves biopolymers and omega acids from both Ecos and Tycoons.
-- Geothermal energy: this is a powerful energy source that can be harvested from underwater volcanoes. It requires a geothermal power plant that can be upgraded with modules.
-- Hydrogen: this is a versatile resource that can be used for fuel cells or rockets. It can be produced from water by using an electrolysis station.
-- Laboratory instruments: these are needed by both Geniuses and Researchers (a sub-class of Ecos). They can be made from glass and platinum by using a laboratory.
-- Neuroimplants: these are essential for the Geniuses' health. They can be made from microchips and enzymes by using a neuro implant factory.
-
- The new buildings and vehicles
-Anno 2070 Deep Ocean also adds over 50 new buildings and vehicles to the game, some of them are:
-
-- Defense platforms: these are underwater structures that can defend the above-water level of an undersea island against attacks. They can be equipped with turrets or shields.
-- Underwater receiving dock: this expands the construction area and raises the storage capacity of an underwater island. It also allows trade with other players or NPCs.
-- Sisyphus transport submarine: this is a vehicle that can recover large amounts of goods from great depths. It can also transport goods between islands or trade routes.
-- Atlas carrier ship: this is a vehicle that can carry and refuel aircrafts. It can also deploy drones for reconnaissance or combat.
-- Tech ornamental buildings: these are decorative structures that can enhance the appearance of a Tech city. They include fountains, statues, holograms, etc.
-
- What are the benefits of playing Anno 2070 Deep Ocean?
-Anno 2070 Deep Ocean not only adds more content to the game but also enhances its gameplay experience in various ways. Some of them are:
- The new challenges and quests
-The expansion pack introduces a new campaign mode that consists of six missions that follow the story of F.A.T.H.E.R.'s evolution. It also adds several new scenarios that test the player's skills in different situations. Moreover, it adds more random events and disasters that affect both land and sea, such as tsunamis, oil spills, meteor showers, etc.
- The new features and improvements
-The expansion pack also brings many new features and improvements to the game's mechanics and interface. Some of them are:
-
-- Hostile takeover: this allows the player to take over another player's or NPC's island by buying shares or sabotaging their economy.
-- Energy transfer: this allows the player to transfer energy between islands by using power transmitter stations or submarines.
-- Deep sea warehouse: this allows the player to store goods in an underwater warehouse that can be accessed from any island.
-- Research system: this allows the player to unlock new upgrades and items by conducting research projects with Researchers or Geniuses.
-- Co-op mode: this allows up to four players to share an island and work together on common goals.
-- User interface: this includes various enhancements such as a mini-map for underwater islands, a filter for trade routes, a statistics overview for production chains, etc.
-
- The new graphics and sound effects
-Anno 2070 Deep Ocean also improves the game's graphics and sound effects by adding more details and variety to its environments and animations. Some of them are:
-
-- Underwater world: this includes more flora and fauna such as corals, fish, whales, etc., as well as more dynamic lighting and shadows.
-- Tech cities: this includes more futuristic buildings such as skyscrapers, domes, bridges, etc., as well as more neon lights and holograms.
-- Soundtrack: this includes more music tracks that match the mood of each faction and situation.
-- Voice acting: this includes more voice lines for each character that reflect their personality and emotions.
-
- How to install Anno 2070 Deep Ocean [PCDVD Crack][Multi6] (2012) CODEX?
-If you want to play Anno 2070 Deep Ocean on your PC, you need to have Anno 2070 installed first. Then you need to download Anno 2070 Deep Ocean [PCDVD Crack][Multi6] (2012) CODEX from a reliable source such as Steam or Ubisoft Store. Here are some steps to guide you through the installation process:
-Anno 2070 Deep Ocean expansion pack download
-How to install Anno 2070 Deep Ocean crack
-Anno 2070 Deep Ocean CODEX torrent
-Anno 2070 Deep Ocean gameplay and features
-Anno 2070 Deep Ocean patch and update
-Anno 2070 Deep Ocean multiplayer crack
-Anno 2070 Deep Ocean system requirements
-Anno 2070 Deep Ocean review and rating
-Anno 2070 Deep Ocean cheats and mods
-Anno 2070 Deep Ocean free download full version
-Anno 2070 Deep Ocean keygen and serial number
-Anno 2070 Deep Ocean DLC and bonus content
-Anno 2070 Deep Ocean trainer and unlocker
-Anno 2070 Deep Ocean best settings and tips
-Anno 2070 Deep Ocean error fix and troubleshooting
-Anno 2070 Deep Ocean steam and origin activation
-Anno 2070 Deep Ocean skidrow and reloaded crack
-Anno 2070 Deep Ocean comparison and benchmark
-Anno 2070 Deep Ocean soundtrack and OST
-Anno 2070 Deep Ocean wallpaper and screenshots
-Anno 2070 Deep Ocean guide and walkthrough
-Anno 2070 Deep Ocean achievements and trophies
-Anno 2070 Deep Ocean mods and customization
-Anno 2070 Deep Ocean release date and price
-Anno 2070 Deep Ocean trailer and gameplay video
-Anno 2070 Deep Ocean iso and rar file download
-Anno 2070 Deep Ocean direct download link
-Anno 2070 Deep Ocean mega and google drive download
-Anno 2070 Deep Ocean crack only download
-Anno 2070 Deep Ocean language pack and subtitles
-Anno 2070 Deep Ocean repack and compressed download
-Anno 2070 Deep Ocean online and LAN play
-Anno 2070 Deep Ocean co-op and versus mode
-Anno 2070 Deep Ocean new missions and scenarios
-Anno 2070 Deep Ocean factions and tech tree
-Anno 2070 Deep Ocean underwater city building
-Anno 2070 Deep Ocean energy crisis and disaster management
-Anno 2070 Deep Ocean simulation and strategy game
-Anno 2070 Deep Ocean futuristic and sci-fi setting
-Anno 2070 Deep Ocean sandbox and endless mode
-Anno 2070 Deep Ocean world events and challenges
-Anno 2070 Deep Ocean graphics and performance optimization
-Anno 2070 Deep Ocean VR and controller support
-Anno 2070 Deep Ocean fan art and community creations
-Anno 2070 Deep Ocean wiki and FAQ page
-Anno 2070 Deep Ocean forum and discussion board
-Anno 2070 Deep Ocean news and updates
-Anno 2070 Deep Ocean crack status and working proof
-Anno 2070 Deep Ocean alternatives and similar games
- The system requirements
-Before you download Anno 2070 Deep Ocean [PCDVD Crack][Multi6] (2012) CODEX , you need to make sure that your PC meets the minimum system requirements for running it smoothly. These are:
-
-
-Minimum System Requirements | Recommended System Requirements |
OS : Windows XP / Windows Vista / Windows®7 Core 2 Duo E4400 @ 2.0 Ghz or AMD Athlon64 X2 3800+ @ 2.0Ghz Memory : 2 GB RAM Graphics : 512 MB DirectX® 9.0c–compatible with Shader Model 3.0 or higher (see supported list)* Hard Drive : 5 GB HD space Sound : DirectX 9.0c–compliant | OS : Windows XP / Windows Vista / Windows®7 Processor : Intel® Core 2 Duo E6700 @ 2.6 GHz or AMD Athlon64 X2 6000+ @ 3.0Ghz or better Memory : 4 GB RAM Graphics : 512 MB DirectX® 9.0c–compatible with Shader Model 3.0 or higher (see supported list)* Hard Drive : 5 GB HD space Sound : DirectX® 9.0c–compliant |
- *Supported Video Cards at Time of Release: AMD Radeon™ HD2600XT or better/3000/4000/5000/6000 desktop series NVIDIA® GeForce® 8600GTS or better/9/GT200/GT400/GT500 desktop series Laptop versions of these cards may work but are NOT supported. These chipsets are the only ones that will run this game.
- The download and installation steps
-Once you have checked your system requirements, you can proceed to download Anno 2070 Deep Ocean [PCDVD Crack][Multi6] (2012) CODEX from your preferred source. Here are some steps to follow:
-
-- Download the file Anno_2070_Deep_Ocean_[PCDVD_Crack][Multi6]_(2012)_CODEX.rar from the link provided by the source.
-- Extract the file using a program such as WinRAR or 7-Zip.
-- Mount or burn the image Anno_2070_Deep_Ocean_[PCDVD_Crack][Multi6]_(2012)_CODEX.iso using a program such as Daemon Tools or PowerISO.
-- Run the setup.exe file and follow the instructions to install the game.
-- Copy the contents of the folder CODEX to the installation folder of Anno 2070.
-- Run the game from the desktop shortcut or the launcher.exe file in the installation folder.
-- Enjoy playing Anno 2070 Deep Ocean!
-
- The troubleshooting tips
-If you encounter any problems while installing or playing Anno 2070 Deep Ocean [PCDVD Crack][Multi6] (2012) CODEX, here are some tips to help you fix them:
-
-- Make sure that your antivirus software is not blocking or deleting any files from the game.
-- Make sure that your drivers are updated, especially for your graphics card and sound card.
-- Make sure that you have DirectX 9.0c installed on your PC.
-- Make sure that you have enough free space on your hard drive for the game and its updates.
-- Make sure that you have a stable internet connection for online features and multiplayer modes.
-- If you have any other issues, you can check the official website of Anno 2070 or contact Ubisoft support for more help.
-
- Conclusion
-Anno 2070 Deep Ocean is a great expansion pack for Anno 2070 that adds a lot of new content and features to the game. It allows you to explore and exploit the underwater world, build and manage a new civilization level, face new challenges and quests, and enjoy improved graphics and sound effects. If you are a fan of real-time strategy games and futuristic scenarios, you should definitely give Anno 2070 Deep Ocean a try. You can download it from Steam or Ubisoft Store, or use Anno 2070 Deep Ocean [PCDVD Crack][Multi6] (2012) CODEX to install it on your PC.
- FAQs
-Here are some frequently asked questions about Anno 2070 Deep Ocean:
-
-- Q: Do I need Anno 2070 to play Anno 2070 Deep Ocean?
-- A: Yes, you need to have Anno 2070 installed on your PC before you can play Anno 2070 Deep Ocean.
-- Q: Can I play Anno 2070 Deep Ocean without an internet connection?
-- A: Yes, you can play Anno 2070 Deep Ocean offline, but you will not be able to access some online features such as multiplayer modes, leaderboards, achievements, etc.
-- Q: How long does it take to finish Anno 2070 Deep Ocean?
-- A: It depends on your playstyle and difficulty level, but it can take anywhere from 10 to 20 hours to complete the campaign mode and all the scenarios.
-- Q: Can I play Anno 2070 Deep Ocean with other players?
-- A: Yes, you can play Anno 2070 Deep Ocean with up to three other players in co-op mode or up to eight other players in competitive mode.
-- Q: What are the differences between the Ecos, Tycoons, and Techs?
-- A: The Ecos are environmentally friendly and use renewable energy sources. They have low pollution and high satisfaction levels, but they also have low productivity and high maintenance costs. The Tycoons are industrial and use fossil fuels. They have high productivity and low maintenance costs, but they also have high pollution and low satisfaction levels. The Techs are scientific and use advanced technology. They have moderate pollution and satisfaction levels, but they also have moderate productivity and maintenance costs. They also have access to the Geniuses, who are highly intelligent and innovative people who require special goods and buildings.
-
- 0a6ba089eb
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Direct Tax Laws Tn Manoharan Pdf REPACK Download.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Direct Tax Laws Tn Manoharan Pdf REPACK Download.md
deleted file mode 100644
index f32f3cfa9aeae4a7710ca7d9c2eacd60e8b0d0fb..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Direct Tax Laws Tn Manoharan Pdf REPACK Download.md
+++ /dev/null
@@ -1,30 +0,0 @@
-
-How to Download Direct Tax Laws by TN Manoharan PDF for CA Final Exams
-Direct Tax Laws by TN Manoharan is one of the most popular and comprehensive books for CA Final students who are preparing for the Direct Tax and International Taxation paper. The book covers the latest syllabus and amendments as per the Finance Act 2022 and provides numerous practical problems, case studies, illustrations and MCQs for practice.
-If you are looking for a reliable source to download Direct Tax Laws by TN Manoharan PDF for free, you may be disappointed to know that there is no official or legal way to do so. The book is protected by copyright laws and any unauthorized distribution or reproduction of it is a violation of the intellectual property rights of the author and the publisher.
-direct tax laws tn manoharan pdf download
Download File ::: https://byltly.com/2uKAaP
-However, there are some alternative ways to access the book online without downloading it. Here are some of them:
-
-- You can buy the book from online platforms like Amazon.in[^1^], Flipkart.com or MakeMyDelivery.com and get it delivered to your doorstep. You can also avail discounts and offers on these sites.
-- You can subscribe to online libraries or e-book services like CAclubindia.com, BookGanga.com or Scribd.com and read the book on your device. You may have to pay a nominal fee or register for a free trial to access these services.
-- You can visit the official website of the publisher, Snowwhite Publications Pvt Ltd, and view the book online. You can also order a hard copy or an e-book from their site.
-
-We hope this article helps you find the best way to access Direct Tax Laws by TN Manoharan PDF for your CA Final exams. Remember, reading the book is not enough; you also need to practice and revise the concepts regularly. All the best!
-
-Why Direct Tax Laws by TN Manoharan is a Must-Read for CA Final Students
-Direct Tax Laws by TN Manoharan is a must-read for CA Final students because it covers the entire syllabus of Direct Tax and International Taxation in a lucid and comprehensive manner. The book is written by an eminent author and a former president of the Institute of Chartered Accountants of India (ICAI), who has vast experience and expertise in the field of taxation. The book is updated with the latest amendments and notifications as per the Finance Act 2022 and the Income Tax Act 1961.
-The book is divided into two volumes: Volume I deals with Direct Tax Laws and Volume II deals with International Taxation. The book follows a systematic and logical approach to explain the concepts and provisions of the tax laws. The book also provides numerous examples, illustrations, case laws, MCQs and practical problems to help the students understand and apply the tax laws in various situations. The book also contains previous year question papers and suggested answers for reference and revision.
-
-How to Study Direct Tax Laws by TN Manoharan Effectively for CA Final Exams
-Studying Direct Tax Laws by TN Manoharan effectively for CA Final exams requires a proper planning and strategy. Here are some tips to help you study the book efficiently:
-
-
-- Read the book thoroughly and understand the concepts and provisions of the tax laws. Do not skip any topic or chapter as they are interlinked and important for the exam.
-- Make notes of the important points, formulas, definitions, exceptions and amendments while reading the book. Use charts, diagrams, tables and mnemonics to memorize the information.
-- Solve the examples, illustrations, case laws, MCQs and practical problems given in the book after each topic or chapter. This will help you test your knowledge and application skills.
-- Revise the book regularly and update your notes with the latest amendments and notifications. Refer to the previous year question papers and suggested answers given in the book to get an idea of the exam pattern and expectations.
-- Practice mock tests and sample papers based on the book before the exam. This will help you improve your speed, accuracy and confidence.
-
-By following these tips, you can study Direct Tax Laws by TN Manoharan effectively for CA Final exams and score high marks in the paper.
81aa517590
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Eyeon Fusion 6.4 Crack Portable !!LINK!!.md b/spaces/1gistliPinn/ChatGPT4/Examples/Eyeon Fusion 6.4 Crack Portable !!LINK!!.md
deleted file mode 100644
index a5802c2964b00acebee9e3c5b52ec41a908496a7..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Eyeon Fusion 6.4 Crack Portable !!LINK!!.md
+++ /dev/null
@@ -1,73 +0,0 @@
-
-Eyeon Fusion 6.4 Crack Portable: A Review
-Eyeon Fusion 6.4 is a powerful and versatile software for creating stunning visual effects and motion graphics. It is used by professionals and enthusiasts alike for various projects such as films, commercials, games, and more. However, the software is not cheap and requires a license to use. If you want to try Eyeon Fusion 6.4 without paying for it, you might be tempted to look for a cracked version of the software that can run on any Windows device without activation. This is what Eyeon Fusion 6.4 crack portable claims to offer.
-In this article, we will review some of the features and benefits of Eyeon Fusion 6.4 crack portable and why you should or should not use it.
-eyeon fusion 6.4 crack portable
Download File ===== https://imgfil.com/2uy0t2
-What is Eyeon Fusion 6.4 crack portable?
-Eyeon Fusion 6.4 crack portable is a software that claims to be a cracked version of Eyeon Fusion 6.4 that can run on any Windows device without activation. It is supposed to have all the features and functions of the original software, such as:
-
-- A node-based interface that allows you to create complex effects and animations with ease.
-- A wide range of tools and plugins that let you manipulate images, videos, 3D models, particles, text, and more.
-- A fast and high-quality rendering engine that supports GPU acceleration and network rendering.
-- A flexible and customizable workflow that integrates with other software such as Adobe After Effects, Photoshop, Premiere Pro, Maya, Nuke, etc.
-- A comprehensive documentation and tutorial system that helps you learn and master the software.
-
-How to download and install Eyeon Fusion 6.4 crack portable?
-To download and install Eyeon Fusion 6.4 crack portable, you need to find a reliable source that offers the cracked version of the software. There are many websites that claim to provide this service, but most of them are fake or malicious. They might contain viruses, malware, spyware, or adware that can harm your device or steal your personal information. They might also require you to complete surveys, download additional software, or enter your credit card details before giving you access to the download link.
-Therefore, you need to be very careful and cautious when looking for Eyeon Fusion 6.4 crack portable online. You should always scan the files with a reputable antivirus program before opening them. You should also avoid clicking on suspicious links or pop-ups that might redirect you to malicious websites or download unwanted software.
-Here are some steps to download and install Eyeon Fusion 6.4 crack portable safely:
-
-- Go to a trusted website that offers Eyeon Fusion 6.4 crack portable for free.
-- Click on the download link and save the file to your device.
-- Extract the file using a program such as WinRAR or 7-Zip.
-- Run the executable file (EyeonFusion.exe) from the extracted folder.
-- Enjoy using Eyeon Fusion 6.4 crack portable without activation.
-
-What are the pros and cons of Eyeon Fusion 6.4 crack portable?
-Eyeon Fusion 6.4 crack portable has some pros and cons that you should consider before using it. Here are some of them:
-
-Pros | Cons |
-Free: You can use Eyeon Fusion 6.4 without paying for it. | Illegal: You are violating the terms and conditions of the original software by using a cracked version of it. |
-Portable: You can run Eyeon Fusion 6.4 from any removable device without installing it on your computer. | Unstable: You might encounter bugs, errors, crashes, or compatibility issues when using Eyeon Fusion 6.4 crack portable. |
-Feature-rich: You can access all the features and functions of Eyeon Fusion 6.4 as if you were using the original software. | Unsafe: You might expose your device or personal information to viruses, malware, spyware, or adware when downloading or using Eyeon Fusion 6.4 crack portable. |
-
-Conclusion
-Eyeon Fusion 6.4 crack portable is a software that claims to be a cracked version of Eyeon Fusion 6.4 that can run on any Windows device without activation. It is supposed to have all the features and benefits of the original software, such as a node-based interface, a wide range of tools and plugins, a fast and high-quality rendering engine, a flexible and customizable workflow, and a comprehensive documentation and tutorial system.
-
-However, Eyeon Fusion 6.4 crack portable also has some drawbacks that you should consider before using it, such as being illegal, unstable, and unsafe. You might violate the terms and conditions of the original software by using a cracked version of it. You might encounter bugs, errors, crashes, or compatibility issues when using Eyeon Fusion 6.4 crack portable. You might expose your device or personal information to viruses, malware, spyware, or adware when downloading or using Eyeon Fusion 6.4 crack portable.
-If you want to try Eyeon Fusion 6.4 without paying for it, you might be tempted to look for Eyeon Fusion 6.4 crack portable online. However, you need to be very careful and cautious when looking for it online as most websites that offer it are fake or malicious. You need to scan the files with a reputable antivirus program before opening them and avoid clicking on suspicious links or pop-ups that might redirect you to malicious websites or download unwanted software.
-If you want to use Eyeon Fusion 6.4 legally and safely, you should buy a license from the official website or any other authorized source and use it on your computer with activation.
-How to use Eyeon Fusion 6.4 crack portable?
-Eyeon Fusion 6.4 crack portable is a software that claims to be a cracked version of Eyeon Fusion 6.4 that can run on any Windows device without activation. It is supposed to have all the features and functions of the original software, such as a node-based interface, a wide range of tools and plugins, a fast and high-quality rendering engine, a flexible and customizable workflow, and a comprehensive documentation and tutorial system.
-To use Eyeon Fusion 6.4 crack portable, you need to download and install it on your device as explained in the previous section. Then, you can run the software from the extracted folder and start creating your visual effects and motion graphics projects.
-Using Eyeon Fusion 6.4 crack portable is very easy and intuitive. Here are some steps to get you started:
-
-- Run the executable file (EyeonFusion.exe) from the extracted folder.
-- Choose a project template or create a new project from scratch.
-- Add nodes to your flow by dragging them from the toolbar or using the right-click menu.
-- Connect nodes by dragging their output to another node's input.
-- Edit node properties by double-clicking on them or using the inspector panel.
-- Preview your results by clicking on the viewer button or pressing F4.
-- Render your project by clicking on the render button or pressing F5.
-- Save your project by clicking on the save button or pressing Ctrl+S.
-
-What are some tips and tricks for using Eyeon Fusion 6.4 crack portable?
-Eyeon Fusion 6.4 crack portable is a powerful and versatile software that can help you create stunning visual effects and motion graphics. However, it also has some tips and tricks that can help you improve your workflow and results. Here are some of them:
-
-- Use hotkeys: You can use various hotkeys to perform common tasks faster and easier. For example, you can use Ctrl+Z to undo, Ctrl+C to copy, Ctrl+V to paste, Ctrl+G to group, Ctrl+U to ungroup, etc.
-- Use expressions: You can use expressions to control node properties based on mathematical formulas or other node values. For example, you can use an expression to link the position of one node to another node's rotation.
-- Use macros: You can use macros to create custom nodes that combine multiple nodes into one. For example, you can create a macro that applies a blur, a color correction, and a glow effect to an image.
-- Use plugins: You can use plugins to extend the functionality of Eyeon Fusion 6.4 crack portable with additional tools and effects. For example, you can use plugins to add 3D models, particles, text, etc.
-- Use tutorials: You can use tutorials to learn new techniques and skills for using Eyeon Fusion 6.4 crack portable. You can find many tutorials online or in the documentation system of the software.
-
-Summary
-Eyeon Fusion 6.4 crack portable is a software that claims to be a cracked version of Eyeon Fusion 6.4 that can run on any Windows device without activation. It is supposed to have all the features and benefits of the original software, such as a node-based interface, a wide range of tools and plugins, a fast and high-quality rendering engine, a flexible and customizable workflow, and a comprehensive documentation and tutorial system.
-However, Eyeon Fusion 6.4 crack portable also has some drawbacks that you should consider before using it, such as being illegal, unstable, and unsafe. You might violate the terms and conditions of the original software by using a cracked version of it. You might encounter bugs, errors, crashes, or compatibility issues when using Eyeon Fusion 6.4 crack portable. You might expose your device or personal information to viruses, malware, spyware, or adware when downloading or using Eyeon Fusion 6.4 crack portable.
-If you want to try Eyeon Fusion 6.4 without paying for it, you might be tempted to look for Eyeon Fusion 6.4 crack portable online. However, you need to be very careful and cautious when looking for it online as most websites that offer it are fake or malicious. You need to scan the files with a reputable antivirus program before opening them and avoid clicking on suspicious links or pop-ups that might redirect you to malicious websites or download unwanted software.
-If you want to use Eyeon Fusion 6.4 legally and safely, you should buy a license from the official website or any other authorized source and use it on your computer with activation.
-Eyeon Fusion 6.4 crack portable is a software that claims to be a cracked version of Eyeon Fusion 6.4 that can run on any Windows device without activation. It is supposed to have all the features and benefits of the original software, such as a node-based interface, a wide range of tools and plugins, a fast and high-quality rendering engine, a flexible and customizable workflow, and a comprehensive documentation and tutorial system.
-However, Eyeon Fusion 6.4 crack portable also has some drawbacks that you should consider before using it, such as being illegal, unstable, and unsafe. You might violate the terms and conditions of the original software by using a cracked version of it. You might encounter bugs, errors, crashes, or compatibility issues when using Eyeon Fusion 6.4 crack portable. You might expose your device or personal information to viruses, malware, spyware, or adware when downloading or using Eyeon Fusion 6.4 crack portable.
-If you want to try Eyeon Fusion 6.4 without paying for it, you might be tempted to look for Eyeon Fusion 6.4 crack portable online. However, you need to be very careful and cautious when looking for it online as most websites that offer it are fake or malicious. You need to scan the files with a reputable antivirus program before opening them and avoid clicking on suspicious links or pop-ups that might redirect you to malicious websites or download unwanted software.
-If you want to use Eyeon Fusion 6.4 legally and safely, you should buy a license from the official website or any other authorized source and use it on your computer with activation.
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download GTA 5 Mobile Grand Theft Auto and Experience the Thrill of Action and Crime.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download GTA 5 Mobile Grand Theft Auto and Experience the Thrill of Action and Crime.md
deleted file mode 100644
index 741131249f5904bfc8d051b47f1d7b0e69a5fd08..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download GTA 5 Mobile Grand Theft Auto and Experience the Thrill of Action and Crime.md
+++ /dev/null
@@ -1,105 +0,0 @@
-
-GTA 5 3D Game Download for Android: How to Play the Best Open-World Game on Your Smartphone
- Introduction
- GTA 5 is one of the most popular and acclaimed games of all time. It is an action-adventure open-world game that lets you experience the life of a criminal in the fictional city of Los Santos. You can play as one of three protagonists, each with their own story, personality, and skills. You can also switch between them at any time, creating a dynamic and immersive gameplay.
-gta 5 3d game download for android
Download File ✒ ✒ ✒ https://urlin.us/2uSYQU
- GTA 5 is not only a game, but also a cultural phenomenon. It has sold over 150 million copies worldwide, making it one of the best-selling games ever. It has also received numerous awards and accolades, such as Game of the Year, Best Game Design, Best Soundtrack, and more. It has also inspired many other games, movies, TV shows, and memes.
- But what if you want to play GTA 5 on your Android device? Is it possible? And if so, how can you do it? In this article, we will answer these questions and show you how to download GTA 5 for Android and enjoy playing it on your smartphone. We will also give you some tips and tricks to make the most out of your gaming experience.
- How to Download GTA 5 for Android
- Unfortunately, GTA 5 is not officially available for Android devices. Rockstar Games, the developer of GTA 5, has not released a mobile version of the game yet. However, there are some ways to play GTA 5 on your Android device using some third-party apps and services. Here are two methods that you can try:
- Method 1: Using Steam Link
- Steam Link is an app that allows you to stream games from your PC to your Android device over a local network. You can use it to play GTA 5 on your Android device as long as you have a PC that can run the game and a stable Wi-Fi or Bluetooth connection. Here are the steps to follow:
- Step 1: Download and install Steam Link on your Android device
- You can download Steam Link from the Google Play Store for free. Once you have installed it, open it and tap on Settings. Then tap on Computer and scan for devices in the Bluetooth range or on the same Wi-Fi network as your PC.
-gta 5 free open-world games for android devices
-gta 5 action-adventure game for android mobile
-gta 5 best alternatives for android phones
-gta 5 3d graphics and realistic physics for android
-gta 5 latest version download for android apk
-gta 5 offline mode and online multiplayer for android
-gta 5 cheats and mods for android users
-gta 5 epic games store free download for android
-gta 5 how to install and play on android devices
-gta 5 compatible android models and requirements
-gta 5 new features and updates for android gamers
-gta 5 tips and tricks for android beginners
-gta 5 comparison with other gta games for android
-gta 5 fan-made and unofficial versions for android
-gta 5 reviews and ratings for android players
-gta 5 custom skins and vehicles for android
-gta 5 fun activities and missions for android
-gta 5 sandbox and exploration mode for android
-gta 5 crime and gangster theme for android
-gta 5 soundtrack and voice acting for android
-gta 5 controller support and touch screen controls for android
-gta 5 performance and optimization for android
-gta 5 bugs and glitches for android
-gta 5 secrets and easter eggs for android
-gta 5 best locations and landmarks for android
-gta 5 role-playing and simulation mode for android
-gta 5 character customization and outfits for android
-gta 5 weapons and combat system for android
-gta 5 cars and bikes collection for android
-gta 5 helicopters and planes flying for android
-gta 5 boats and water activities for android
-gta 5 police and wanted level system for android
-gta 5 heists and robberies mode for android
-gta 5 races and stunts mode for android
-gta 5 minigames and side quests for android
-gta 5 story mode and plot summary for android
-gta 5 online mode and multiplayer features for android
-gta 5 online mode how to join and create sessions for android
-gta 5 online mode how to make money and buy properties for android
-gta 5 online mode how to customize your character and vehicle for android
-gta 5 online mode how to play with friends and chat with other players for android
-gta 5 online mode how to join or create crews and gangs for android
-gta 5 online mode how to participate in events and challenges for android
-gta 5 online mode how to rank up and unlock items for android
-gta 5 online mode how to deal with hackers and cheaters for android
-gta 5 online mode best modes and activities to play for android
-gta 5 online mode best tips and strategies to win for android
-gta 5 online mode best weapons and vehicles to use for android
- Step 2: Connect your Android device and PC via Bluetooth or Wi-Fi
- Once you have found your PC on the list of devices, tap on it and enter the PIN code that appears on your PC screen. This will pair your Steam Link app with your PC and allow you to stream games from it.
- Step 3: Pair your Steam Link app with your PC and launch GTA 5
- On your PC, open Steam and make sure that GTA 5 is installed and updated. Then, on your Android device, tap on Start Playing on the Steam Link app. This will launch Steam on your PC and show you your library of games. Find GTA 5 and tap on it to start the game.
- Step 4: Enjoy playing GTA 5 on your Android device
- Once the game is running, you can use your Android device as a touch screen controller or connect a compatible controller via Bluetooth or USB. You can also adjust the streaming quality and settings on the Steam Link app to optimize your experience. You can now play GTA 5 on your Android device as if you were playing it on your PC.
- Method 2: Using Epic Games Store
- Epic Games Store is another platform that allows you to download and play games on your PC. It also offers free games every week, and one of them was GTA 5 in May 2020. If you have claimed GTA 5 from Epic Games Store, you can use it to play the game on your Android device using a similar method as Steam Link. Here are the steps to follow:
- Step 1: Download and install Epic Games Store on your PC
- You can download Epic Games Store from its official website for free. Once you have installed it, open it and create an account or sign in with your existing one.
- Step 2: Find GTA 5 on the store and download it for free
- If you have claimed GTA 5 from Epic Games Store when it was free, you can find it in your library of games. If not, you can buy it from the store for $29.99. Once you have the game, download it and install it on your PC.
- Step 3: Use Steam Link or any other remote play app to stream GTA 5 from your PC to your Android device
- Since Epic Games Store does not have its own streaming app, you can use Steam Link or any other app that allows you to stream games from your PC to your Android device. Some examples are Parsec, Moonlight, and Rainway. You can follow the same steps as Method 1 to connect your Android device and PC and launch GTA 5.
- Step 4: Enjoy playing GTA 5 on your Android device
- Once the game is running, you can use your Android device as a touch screen controller or connect a compatible controller via Bluetooth or USB. You can also adjust the streaming quality and settings on the app to optimize your experience. You can now play GTA 5 on your Android device as if you were playing it on your PC.
- Tips and Tricks for Playing GTA 5 on Android
- Playing GTA 5 on Android can be a lot of fun, but it can also be challenging and frustrating at times. Here are some tips and tricks to help you enjoy the game more:
- Adjust the graphics settings to optimize performance and battery life
- GTA 5 is a very demanding game that requires a lot of resources from your PC and Android device. To avoid lagging, crashing, overheating, or draining your battery too fast, you should adjust the graphics settings of the game on your PC and the streaming app on your Android device. You can lower the resolution, frame rate, texture quality, shadows, anti-aliasing, and other options to make the game run smoother and save power.
- Use a controller or a keyboard and mouse for better control and accuracy
- GTA 5 is a game that involves a lot of shooting, driving, flying, and other actions that require precise and responsive controls. Using a touch screen controller may not be the best option for this game, as it can be inaccurate, uncomfortable, or obstructive. You may want to use a controller or a keyboard and mouse instead for better control and accuracy. You can connect them to your Android device via Bluetooth or USB, or use them directly on your PC if you are close enough.
- Explore the vast open-world map and discover hidden secrets and easter eggs
- GTA 5 has a huge open-world map that is full of details, variety, and surprises. You can explore different areas, such as the city, the countryside, the mountains, the desert, the ocean, and more. You can also find hidden secrets and easter eggs that reference other games, movies, TV shows, celebrities, or real-life events. Some examples are UFOs , Bigfoot, aliens, zombies, ghosts, and more. You can also interact with various characters, animals, vehicles, and objects that make the game more realistic and fun.
- Try out different game modes and activities, such as races, heists, missions, and more
- GTA 5 is not just a single-player game. It also has a multiplayer mode called GTA Online, where you can play with or against other players from around the world. You can join or create different game modes and activities, such as races, heists, missions, deathmatches, survival, and more. You can also customize your character, vehicle, weapons, and properties. GTA Online is constantly updated with new content and features, so you will never run out of things to do.
- Conclusion
- GTA 5 is one of the best games ever made, and you can play it on your Android device using some third-party apps and services. You can use Steam Link or Epic Games Store to stream the game from your PC to your Android device over a local network. You can also adjust the graphics settings, use a controller or a keyboard and mouse, explore the open-world map, and try out different game modes and activities to enhance your gaming experience.
- If you are a fan of GTA 5 or want to try it out for the first time, you should definitely download it for Android and play it on your smartphone. It is a game that will keep you entertained for hours and hours. You will not regret it.
- Have you played GTA 5 on Android? What are your thoughts on it? Let us know in the comments below!
- FAQs
- Here are some frequently asked questions about GTA 5 3D game download for Android:
- Q: Is GTA 5 free for Android?
-A: No, GTA 5 is not free for Android. You need to buy the game from Steam or Epic Games Store for your PC first. Then you can use Steam Link or any other remote play app to stream the game from your PC to your Android device.
- Q: Is GTA 5 compatible with all Android devices?
-A: No, GTA 5 is not compatible with all Android devices. You need to have a device that meets the minimum requirements for streaming games from your PC. These include a fast processor, a good amount of RAM, a decent graphics card, and a stable Wi-Fi or Bluetooth connection.
- Q: Can I play GTA 5 offline on Android?
-A: No, you cannot play GTA 5 offline on Android. You need to have an internet connection to stream the game from your PC to your Android device. You also need to have an internet connection to play GTA Online.
- Q: Can I play GTA 5 with my friends on Android?
-A: Yes, you can play GTA 5 with your friends on Android. You can join them in GTA Online or invite them to your private session. You can also chat with them using voice or text messages.
- Q: How much storage space does GTA 5 take on Android?
-A: GTA 5 does not take any storage space on Android. The game is stored on your PC and streamed to your Android device. However, you may need some storage space for the streaming app that you use.
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download FIFA Mobile MOD APK (Unlocked All Money Menu) and Relive the Worlds Greatest Soccer Tournament with 32 Qualified Nations.md b/spaces/1phancelerku/anime-remove-background/Download FIFA Mobile MOD APK (Unlocked All Money Menu) and Relive the Worlds Greatest Soccer Tournament with 32 Qualified Nations.md
deleted file mode 100644
index 9dfdad910fa123582f62972c90149e255476865e..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download FIFA Mobile MOD APK (Unlocked All Money Menu) and Relive the Worlds Greatest Soccer Tournament with 32 Qualified Nations.md
+++ /dev/null
@@ -1,197 +0,0 @@
-
-How to Download FIFA Mobile Mod APK for Android and iOS
-If you are a fan of soccer games, you have probably heard of FIFA Mobile, the popular football simulation game developed by EA Sports. The game features real-world teams, players, stadiums, and tournaments, allowing you to create your own ultimate team and compete against others online. But what if you want to enjoy the game with more features, such as unlimited coins, unlocked players, menu mod, speed hack, and more? In that case, you might want to download FIFA Mobile mod APK, a modified version of the game that gives you access to these features and more.
-In this article, we will show you how to download FIFA Mobile mod APK for Android and iOS devices, as well as the benefits and risks of using it. We will also give you some tips and tricks for playing FIFA Mobile and improving your skills. But before we get into that, let's take a look at some of the features and gameplay of FIFA Mobile.
-download fifa mobile mod apk
DOWNLOAD ✵ https://jinyurl.com/2uNNr1
-FIFA Mobile Features and Gameplay
-FIFA Mobile is one of the most popular soccer games on mobile devices, with over 100 million downloads on Google Play Store alone. The game offers a variety of modes, events, challenges, and rewards for you to enjoy. Here are some of the main features and gameplay aspects of FIFA Mobile:
-FIFA World Cup 2022 Mode
-Relive the world's greatest soccer tournament with FIFA World Cup 2022 mode, the only licensed FIFA World Cup mobile game where you can replay the official tournament brackets with any of the 32 qualified nations. You can also choose from 15 non-qualified nations and rewrite history by taking them to glory. You can play with authentic World Cup kits, badges, balls, and stadiums, as well as enjoy localized commentary that brings the match atmosphere to life.
-Soccer Icons and Heroes
-Build your ultimate team with over 100 soccer icons and heroes from different leagues and eras. You can score big with world soccer icons like Paolo Maldini,
Diego Maradona, Zinedine Zidane, and Cristiano Ronaldo, or discover new soccer heroes like Erling Haaland, Kylian Mbappé, and Bruno Fernandes. You can also upgrade your players with skill boosts, chemistry, and training to make them even more powerful.
-Immersive Next-Level Soccer Simulation
-Experience realistic soccer simulation with stunning graphics, immersive audio commentary, and 60 frames per second gameplay. You can play in over 30 official leagues, 700 clubs, and 17,000 players from around the world. You can also customize your controls, camera angles, and difficulty levels to suit your preferences. Whether you prefer fast-paced arcade action or tactical simulation, FIFA Mobile has something for you.
-Manager Mode
-Be the soccer manager of your own dream team and adjust your tactics in real time. You can choose from different formations, styles, and instructions to outsmart your opponents. You can also scout and sign new players, train and develop your squad, and manage your club's finances. You can compete in various leagues and tournaments to earn rewards and climb the leaderboards.
-FIFA Mobile System Requirements and Compatibility
-Before you download FIFA Mobile mod APK for Android or iOS devices, you need to make sure that your device meets the minimum system requirements and compatibility for the game. Here are the details:
-How to download fifa mobile mod apk for free
-Download fifa mobile mod apk with unlimited money and coins
-FIFA Mobile v18.1.03 mod apk: unlock all players and features
-FIFA World Cup 2022 mode in fifa mobile mod apk
-Best soccer stars to build your ultimate team in fifa mobile mod apk
-FIFA Mobile mod apk vs FIFA 23: which one is better?
-FIFA Mobile mod apk tips and tricks: how to win every match
-FIFA Mobile mod apk review: is it worth downloading?
-FIFA Mobile mod apk download link and installation guide
-FIFA Mobile mod apk compatibility: which devices can run it?
-FIFA Mobile mod apk problems and solutions: how to fix common issues
-FIFA Mobile mod apk cheats and hacks: how to get unlimited resources
-FIFA Mobile mod apk updates and news: what's new in the latest version?
-FIFA Mobile mod apk gameplay and features: what can you do in the game?
-FIFA Mobile mod apk online and offline modes: how to play with or without internet
-FIFA Mobile mod apk graphics and sound quality: how realistic is the game?
-FIFA Mobile mod apk vs PES 23: which one has better soccer simulation?
-FIFA Mobile mod apk ratings and reviews: what do other users think of the game?
-FIFA Mobile mod apk alternatives: what are some other soccer games you can try?
-FIFA Mobile mod apk FAQs: answers to the most common questions about the game
-Download fifa mobile mod apk for android devices
-Download fifa mobile mod apk for ios devices
-Download fifa mobile mod apk for pc or laptop
-Download fifa mobile mod apk for windows or mac
-Download fifa mobile mod apk for firestick or smart tv
-Download fifa mobile mod apk with obb data file
-Download fifa mobile mod apk with no root or jailbreak required
-Download fifa mobile mod apk with anti-ban protection
-Download fifa mobile mod apk with all leagues and teams unlocked
-Download fifa mobile mod apk with all icons and heroes available
-Download fifa mobile mod apk with manager mode enabled
-Download fifa mobile mod apk with head-to-head mode enabled
-Download fifa mobile mod apk with vs attack mode enabled
-Download fifa mobile mod apk with world cup mode enabled
-Download fifa mobile mod apk with champions league mode enabled
-Download fifa mobile mod apk with realistic stadiums and commentary
-Download fifa mobile mod apk with high fps and smooth performance
-Download fifa mobile mod apk with easy controls and user interface
-Download fifa mobile mod apk with custom kits and logos
-Download fifa mobile mod apk with live events and rewards
-Minimum Requirements for Downloading FIFA Mobile
-To download FIFA Mobile on your Android or iOS device, you need to have at least 1 GB of free storage space and a stable internet connection. The game also requires the following operating system versions:
-
-- Android: 6.0 Marshmallow or higher
-- iOS: 12.0 or higher
-
-Minimum Requirements for Playing Head to Head Mode and 60 FPS Mode
-To play Head to Head mode and 60 FPS mode in FIFA Mobile, you need to have a device that supports these features. The game also requires the following specifications:
-
-- Android: 2 GB of RAM or higher, Snapdragon 660 processor or equivalent
-- iOS: iPhone 7 or higher
-
-List of Supported and Unsupported Devices
-Here is a list of some of the supported and unsupported devices for FIFA Mobile:
- | Supported Devices | Unsupported Devices | | --- | --- | | Samsung Galaxy S7 and above | Samsung Galaxy S6 and below | | Huawei P10 and above | Huawei P9 and below | | OnePlus 5T and above | OnePlus 5 and below | | Google Pixel 2 and above | Google Pixel and below | | iPhone 7 and above | iPhone 6s and below | | iPad Air 2 and above | iPad Air and below | If your device is not listed here, you can check the compatibility by visiting the official website of FIFA Mobile or by contacting the customer support team.
How to Download FIFA Mobile Mod APK for Android
-If you have an Android device and you want to download FIFA Mobile mod APK, you need to follow these steps:
-Step 1: Find a reliable source for the modded APK file and download it to your device
-There are many websites that offer FIFA Mobile mod APK files, but not all of them are safe and trustworthy. Some of them may contain malware, viruses, or outdated versions of the game. Therefore, you need to be careful and do some research before downloading any file from the internet. You can use Google or any other search engine to find some reputable sources for FIFA Mobile mod APK files. You can also check the reviews, ratings, and comments of other users to see if they had any problems with the file.
-Once you find a reliable source, you need to download the modded APK file to your device. You can use your browser or any other app that allows you to download files from the internet. Make sure that the file name ends with .apk and that the file size matches the one shown on the website. You can also scan the file with an antivirus app before installing it to make sure that it is safe.
-Step 2: Enable unknown sources in your device settings and install the APK file
-By default, Android devices do not allow you to install apps from unknown sources, which means sources other than Google Play Store. This is a security measure to prevent you from installing harmful or malicious apps on your device. However, if you want to install FIFA Mobile mod APK, you need to enable unknown sources in your device settings. Here is how you can do that:
-
-- Go to your device settings and look for security or privacy options.
-- Find the option that says unknown sources or install unknown apps and toggle it on.
-- You may see a warning message that says installing apps from unknown sources may harm your device. Tap on OK or Allow to proceed.
-
-Now that you have enabled unknown sources, you can install the APK file that you downloaded in step 1. Here is how you can do that:
-
-- Locate the APK file on your device using a file manager app or your browser's downloads folder.
-- Tap on the file and you will see a pop-up window that asks if you want to install the app.
-- Tap on Install and wait for the installation process to finish.
-- You may see another pop-up window that asks if you want to open the app or done. Tap on Open to launch the game.
-
-Step 3: Launch the game and enjoy the modded features
-Congratulations! You have successfully installed FIFA Mobile mod APK on your Android device. Now you can launch the game and enjoy the modded features, such as unlimited coins, unlocked players, menu mod, speed hack, and more. You can also access all the modes, events, challenges, and rewards that FIFA Mobile has to offer.
How to Download FIFA Mobile Mod APK for iOS
-If you have an iOS device and you want to download FIFA Mobile mod APK, you need to follow these steps:
-Step 1: Find a reliable source for the modded IPA file and download it to your computer
-An IPA file is the equivalent of an APK file for iOS devices. It is the file format that contains the app data and code for iOS apps. To download FIFA Mobile mod APK for iOS devices, you need to find a reliable source for the modded IPA file and download it to your computer. You can use the same methods as you did for finding the modded APK file for Android devices, such as using Google or any other search engine, checking the reviews, ratings, and comments of other users, and scanning the file with an antivirus app.
-Once you find a reliable source, you need to download the modded IPA file to your computer. You can use your browser or any other app that allows you to download files from the internet. Make sure that the file name ends with .ipa and that the file size matches the one shown on the website.
-Step 2: Install Cydia Impactor on your computer and connect your iOS device to it
-Cydia Impactor is a tool that allows you to install IPA files on your iOS device without jailbreaking it. You need to install Cydia Impactor on your computer and connect your iOS device to it in order to install FIFA Mobile mod APK on your iOS device. Here is how you can do that:
-
-- Go to the official website of Cydia Impactor and download the latest version of the tool for your operating system.
-- Extract the downloaded file and run Cydia Impactor.exe on your computer.
-- Connect your iOS device to your computer using a USB cable.
-- Make sure that your device is detected by Cydia Impactor and that its name appears in the drop-down menu.
-
-Step 3: Drag and drop the IPA file to Cydia Impactor and enter your Apple ID and password
-Now that you have Cydia Impactor and your iOS device ready, you can install FIFA Mobile mod APK on your iOS device. Here is how you can do that:
-
-- Locate the IPA file that you downloaded in step 1 on your computer.
-- Drag and drop the IPA file to Cydia Impactor's window.
-- You will see a pop-up window that asks for your Apple ID and password. Enter them and click OK.
-- Cydia Impactor will start installing FIFA Mobile mod APK on your iOS device. Wait for the process to finish.
-
-Step 4: Trust the developer profile on your iOS device and launch the game
-Congratulations! You have successfully installed FIFA Mobile mod APK on your iOS device. However, before you can launch the game, you need to trust the developer profile on your iOS device. Here is how you can do that:
-
-- Go to your device settings and look for general or profile options.
-- Find the profile that has the same name as your Apple ID and tap on it.
-- You will see a button that says trust or verify. Tap on it and confirm your action.
-- Now you can launch FIFA Mobile mod APK on your iOS device and enjoy the modded features.
-
Benefits and Risks of Using FIFA Mobile Mod APK
-As you can see, downloading FIFA Mobile mod APK for Android or iOS devices can give you many advantages and enhance your gaming experience. However, it also comes with some drawbacks and dangers that you should be aware of. Here are some of the benefits and risks of using FIFA Mobile mod APK:
-Benefits
-Some of the benefits of using FIFA Mobile mod APK are:
-
-- Unlimited coins: You can get unlimited coins in the game, which you can use to buy players, packs, skill boosts, and more. You can also upgrade your players and improve your team without spending any real money.
-- Unlocked players: You can unlock all the players in the game, including the soccer icons and heroes. You can also get any player you want from the market or the events. You can create your dream team with the best players in the world.
-- Menu mod: You can access a menu mod in the game, which allows you to enable or disable various features, such as auto win, no ads, no root, etc. You can also customize your settings and preferences according to your needs.
-- Speed hack: You can increase or decrease the speed of the game, which can give you an edge over your opponents. You can also save time and energy by completing events and challenges faster.
-
-Risks
-Some of the risks of using FIFA Mobile mod APK are:
-
-- Potential malware: The modded APK or IPA file may contain malware, viruses, or spyware that can harm your device or steal your personal information. You may also expose your device to hackers or cybercriminals who can access your data or accounts.
-- Account ban: EA Sports may detect that you are using a modded version of the game and ban your account permanently. You may lose all your progress, achievements, and rewards in the game. You may also face legal consequences for violating the terms of service of the game.
-- Game crashes: The modded version of the game may not be compatible with your device or the latest updates of the game. You may experience game crashes, glitches, errors, or bugs that can ruin your gameplay. You may also lose your data or files if the game crashes unexpectedly.
-
-Tips and Tricks for Playing FIFA Mobile
-If you want to play FIFA Mobile like a pro and improve your skills, you need to follow some tips and tricks that can help you win more matches and earn more rewards. Here are some of them:
-Attack Mode Tips
-Attack Mode is an asynchronous mode where you play against other players in turn-based matches. You only control your team's attacking moves, while your opponent controls their defending moves. Here are some tips for playing Attack Mode:
-
-- Choose your formation wisely: Different formations have different advantages and disadvantages in Attack Mode. For example, a 4-3-3 formation gives you more attacking options, but a 4-4-2 formation gives you more balance. You should choose a formation that suits your play style and your opponent's formation.
-- Use skill moves: Skill moves are special moves that you can perform by swiping on the screen. They can help you dribble past defenders, create space, or score goals. Some of the skill moves are roulette, rainbow flick, heel to heel flick, etc. You should learn how to use them effectively and when to use them.
-- Shoot smartly: Shooting is one of the most important skills in Attack Mode. You should know how to shoot accurately and powerfully from different angles and distances. You should also know when to shoot and when to pass. Some of the factors that affect your shooting are angle, distance, power, timing, position, etc.
-
Head to Head Tips
-Head to Head is a real-time mode where you play against other players in full matches. You control your team's attacking and defending moves, while your opponent does the same. Here are some tips for playing Head to Head:
-
-- Use the right players: Different players have different attributes, skills, and ratings in Head to Head. You should use the right players for the right positions and roles. For example, you should use fast and agile players for the wings, strong and tall players for the center backs, etc. You should also check your opponent's team and adjust your lineup accordingly.
-- Use the right tactics: Different tactics have different effects on your team's performance and behavior in Head to Head. You should use the right tactics for the right situations and scenarios. For example, you should use a defensive tactic when you are leading by a narrow margin, an attacking tactic when you are trailing by a large margin, etc. You should also change your tactics during the match if needed.
-- Use the right controls: Head to Head mode offers you two types of controls: classic and gesture. Classic controls allow you to use buttons and joysticks to control your players, while gesture controls allow you to use swipes and taps to control your players. You should use the type of controls that you are comfortable with and that suit your play style. You should also customize your controls and settings to optimize your gameplay.
-
-Manager Mode Tips
-Manager Mode is an idle mode where you play as the soccer manager of your own team. You can plan your strategy, adjust your tactics, scout and sign new players, train and develop your squad, and manage your club's finances. Here are some tips for playing Manager Mode:
-
-- Plan your strategy: Manager Mode allows you to choose from different strategies, such as possession, counter-attack, long ball, etc. You should plan your strategy based on your team's strengths and weaknesses, as well as your opponent's strategy. You should also consider the weather, the pitch condition, and the match importance when planning your strategy.
-- Adjust your tactics: Manager Mode allows you to adjust your tactics in real time during the match. You can change your formation, style, and instructions to adapt to the match situation and scenario. You can also make substitutions, switch players' positions, and give pep talks to motivate your players.
-- Scout and sign new players: Manager Mode allows you to scout and sign new players from different leagues and countries. You can use scouts to find potential targets, negotiate contracts with agents, and complete transfers with clubs. You should scout and sign new players that fit your strategy and budget, as well as improve your team's chemistry and rating.
-
-General Tips
-Here are some general tips that apply to all modes in FIFA Mobile:
-
-- Level up your players: You can level up your players by using training points or training XP. Leveling up your players can increase their attributes, skills, and ratings, as well as unlock new skill moves and traits. You should level up your players regularly to make them more competitive and effective.
-- Improve your chemistry: Chemistry is a measure of how well your players work together on the pitch. Chemistry can affect your team's performance and behavior in various ways, such as passing accuracy, shooting power, positioning, etc. You can improve your chemistry by using players from the same league, nation, or club, as well as using specific formations, styles, and instructions.
-- Complete events: Events are special modes that offer you unique challenges and rewards. Events can be based on real-world soccer tournaments, such as FIFA World Cup 2022 or UEFA Champions League 2023. Events can also be based on seasonal themes or celebrations, such as Halloween or Christmas. You should complete events regularly to earn coins, packs, players, skill boosts, etc.
-
-Conclusion
-FIFA Mobile is a fun and exciting soccer game that you can play on your Android or iOS device. The game offers you various modes, features, and gameplay aspects that can keep you entertained for hours. However, if you want to enjoy the game with more features and advantages, you can download FIFA Mobile mod APK for Android or iOS devices.
-In this article, we showed you how to download FIFA Mobile mod APK for Android or iOS devices using the unknown sources setting or Cydia Impactor. We also covered the benefits of the mod, such as unlimited coins, unlocked players, the menu mod, and the speed hack, along with its risks, such as malware, account bans, and game crashes. Finally, we shared some tips and tricks for playing FIFA Mobile and improving your skills, such as using skill moves, shooting smartly, adjusting your tactics, leveling up your players, improving your chemistry, and completing events.
-We hope that this article was helpful and informative for you. If you have any questions or feedback, please feel free to leave a comment below. We would love to hear from you. And if you liked this article, please share it with your friends and family who might be interested in FIFA Mobile mod APK.
-Thank you for reading and happy gaming!
-FAQs
-Here are some of the frequently asked questions about FIFA Mobile mod APK:
-Q: Is FIFA Mobile mod APK safe to use?
-A: FIFA Mobile mod APK is not an official version of the game and it may contain malware, viruses, or spyware that can harm your device or steal your personal information. You should only download FIFA Mobile mod APK from reliable sources and scan the file with an antivirus app before installing it. You should also backup your data and files before using FIFA Mobile mod APK.
-Q: Is FIFA Mobile mod APK legal to use?
-A: No. FIFA Mobile mod APK violates the terms of service of EA Sports. The most likely consequence is a permanent account ban, and in extreme cases EA could pursue legal action. You should only use FIFA Mobile mod APK at your own risk and responsibility.
-Q: How can I update FIFA Mobile mod APK?
-A: FIFA Mobile mod APK may not be compatible with the latest updates of the game. You may need to uninstall the modded version of the game and install the updated version from the official source or from a reliable source for the modded version. You should also check the compatibility and requirements of the updated version before installing it.
-Q: How can I uninstall FIFA Mobile mod APK?
-A: If you want to uninstall FIFA Mobile mod APK from your device, you can follow these steps:
-
-- Go to your device settings and look for apps or applications options.
-- Find FIFA Mobile mod APK and tap on it.
-- You will see a button that says uninstall or remove. Tap on it and confirm your action.
-- You may also need to delete the residual files and data of FIFA Mobile mod APK from your device using a file manager app or a cleaner app.
-
-Q: How can I contact EA Sports for support or feedback?
-A: If you want to contact EA Sports for support or feedback regarding FIFA Mobile or any other game, you can use these methods:
-
-- Email: help@eamobile.com
-- Phone: 1-866-543-5435
-- Website: https://help.ea.com/en/fifa/fifa-mobile/
-- Facebook: https://www.facebook.com/EASPORTSFIFAMOBILE/
-- Twitter: https://twitter.com/EAFIFAMOBILE
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Enjoy FIFA Mobile 2023 on Your Smartphone Download the Apk Without Any Hassle.md b/spaces/1phancelerku/anime-remove-background/Enjoy FIFA Mobile 2023 on Your Smartphone Download the Apk Without Any Hassle.md
deleted file mode 100644
index db63cc13ac4a1c527f9d42f9e9e6f7c174b26714..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Enjoy FIFA Mobile 2023 on Your Smartphone Download the Apk Without Any Hassle.md
+++ /dev/null
@@ -1,98 +0,0 @@
-
-FIFA 23 APK Download No Verification: How to Play the Latest FIFA Mobile Game on Your Android Device
- Introduction
- If you are a fan of soccer games, you must have heard of FIFA 23, the latest installment in the popular FIFA series by EA Sports. FIFA 23 is a game that offers realistic graphics, gameplay, and features that will make you feel like you are on the pitch. However, if you want to play FIFA 23 on your Android device, you might encounter some challenges. For one thing, the game is not yet officially released on the Google Play Store. For another, you might need to verify your device or account before you can play the game. This can be frustrating and time-consuming, especially if you just want to enjoy the game right away. Fortunately, there is a way to bypass these obstacles and play FIFA 23 on your Android device without any verification. All you need to do is download the FIFA 23 APK file and install it on your device. In this article, we will show you how to do that, as well as what features and benefits you can expect from playing FIFA 23 Mobile.
- What is FIFA 23 Mobile?
- FIFA 23 Mobile is a version of FIFA 23 that is designed for mobile devices. It is a game that allows you to experience the thrill and excitement of soccer on your smartphone or tablet. You can create your own custom lineup, choose from hundreds of players and teams, and compete in various modes and events. You can also play online with other players from around the world, or challenge your friends in head-to-head matches. FIFA 23 Mobile is a game that will keep you entertained and engaged for hours.
-fifa 23 apk download no verification
Download ————— https://jinyurl.com/2uNS1K
- Why do you need to download the APK file?
- An APK file is an application package file that contains all the data and files needed to run an app on an Android device. Normally, when you download an app from the Google Play Store, it automatically installs the APK file on your device. However, since FIFA 23 is not yet available on the Google Play Store, you need to download the APK file manually from another source. This way, you can install and play the game without waiting for the official release or verification.
- How to download and install the FIFA 23 APK file?
- To download and install the FIFA 23 APK file on your Android device, follow these steps:
-
-- Download the FIFA 23 APK file from one of these links:
-- Go to Settings > Security > Unknown Sources and enable installation from unknown sources.
-- Locate the downloaded APK file on your device and tap on it.
-- Follow the instructions on the screen to install the game.
-- Open the game and log in as a guest.
-- Enjoy playing FIFA 23 Mobile!
-
-Features of FIFA 23 Mobile
- FIFA 23 Mobile is a game that offers many features that will enhance your gaming experience. Some of these features are:
- New menus and UI
- FIFA 23 Mobile has a new and improved user interface that makes it easier and faster to navigate through the game. The menus are more intuitive and responsive, and the graphics are more crisp and clear. You can also customize your home screen with your favorite players and teams.
- Custom lineups
FIFA 23 Mobile allows you to create your own custom lineups with your favorite players and formations. You can also adjust the tactics and roles of each player according to your strategy. You can save up to five different lineups and switch between them anytime.
- Advanced passing
- FIFA 23 Mobile introduces a new and advanced passing system that gives you more control and accuracy over your passes. You can use gestures, buttons, or a combination of both to execute different types of passes, such as through balls, lobbed passes, or backheels. You can also use the new pass and move feature to make your players run after passing the ball.
- Updated player roster and event players
- FIFA 23 Mobile features an updated player roster with the latest transfers and ratings. You can choose from over 700 teams and 17,000 players from various leagues and countries. You can also unlock special event players that have boosted stats and skills. These players are available for a limited time during certain events, such as the Champions League, the World Cup, or Halloween.
- Updated audio commentary
- FIFA 23 Mobile has a new and improved audio commentary that adds more realism and immersion to the game. The commentary is more dynamic and responsive to the actions on the pitch, and it also includes more languages and accents. You can also customize the volume and language of the commentary in the settings.
- Live OVR mini-events
- FIFA 23 Mobile has a new feature called Live OVR mini-events that let you boost your team's overall rating (OVR) by completing certain tasks. These tasks include scoring goals, making assists, winning matches, or playing with specific players. The higher your OVR, the better your chances of winning matches and earning rewards.
- VS Attack and Head to Head modes
- FIFA 23 Mobile offers two modes for online multiplayer: VS Attack and Head to Head. VS Attack is a mode where you play against another player in a turn-based match. Each turn lasts for 90 seconds, and you have to score as many goals as possible while defending your own goal. The player with the most goals at the end of the match wins. Head to Head is a mode where you play against another player in a real-time match. You have full control over your players and you can use various tactics and strategies to outsmart your opponent. The match lasts for six minutes, and the player with the most goals at the end of the match wins.
- Benefits of playing FIFA 23 Mobile
- Playing FIFA 23 Mobile has many benefits that will make you enjoy the game even more. Some of these benefits are:
- Enjoy the realistic graphics and gameplay
- FIFA 23 Mobile has stunning graphics that will make you feel like you are watching a real soccer match. The game uses the Frostbite engine, which is also used for other EA games such as Battlefield and Need for Speed. The game also has realistic gameplay that simulates the physics, movements, and behaviors of real soccer players. You can see the expressions, emotions, and reactions of the players as they play on the pitch.
- Compete with other players online
- FIFA 23 Mobile lets you compete with other players online in various modes and events. You can test your skills and strategies against players from different countries and regions. You can also join leagues and tournaments to win trophies and prizes. You can also chat with other players and make friends through the game's social features.
- Earn rewards and bonuses
- FIFA 23 Mobile rewards you with coins, gems, packs, players, and other items for playing the game. You can earn these rewards by completing matches, events, achievements, or daily tasks. You can also get bonuses for logging in every day, watching ads, or inviting friends to play the game. You can use these rewards to upgrade your team, unlock new features, or buy more items in the game's store.
- Conclusion
- FIFA 23 Mobile is a game that will satisfy any soccer fan's cravings. It is a game that offers realistic graphics, gameplay, and features that will make you feel like you are on the pitch. It is also a game that lets you play online with other players from around the world, or challenge your friends in head-to-head matches. It is also a game that rewards you with coins, gems, packs, players, and other items for playing the game. FIFA 23 Mobile is a game that will keep you entertained and engaged for hours.
- FAQs
-
-- Q: How much space does FIFA 23 Mobile take on my device?
-- A: FIFA 23 Mobile requires about 2 GB of free space on your device. You might need more space if you download additional data or updates.
-- Q: How can I update FIFA 23 Mobile?
-- A: You can update FIFA 23 Mobile by downloading the latest APK file from the same links as above and installing it over the existing game. You can also check for updates in the game's settings.
-- Q: How can I get more coins and gems in FIFA 23 Mobile?
-- A: You can get more coins and gems in FIFA 23 Mobile by playing matches, events, achievements, or daily tasks. You can also watch ads, invite friends, or buy them with real money in the game's store.
-- Q: How can I contact the support team of FIFA 23 Mobile?
-- A: You can contact the support team of FIFA 23 Mobile by going to the game's settings and tapping on Help & Support. You can also visit the official website of EA Sports or the FIFA 23 Mobile Facebook page for more information and assistance.
-- Q: Is FIFA 23 Mobile compatible with my device?
-- A: FIFA 23 Mobile is compatible with most Android devices that have at least 2 GB of RAM and Android 6.0 or higher. However, some devices might experience performance issues or errors due to different specifications or settings.
-
-
-
\ No newline at end of file
diff --git a/spaces/2ndelement/voicevox/test/__init__.py b/spaces/2ndelement/voicevox/test/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/801artistry/RVC801/infer/lib/infer_pack/modules/F0Predictor/DioF0Predictor.py b/spaces/801artistry/RVC801/infer/lib/infer_pack/modules/F0Predictor/DioF0Predictor.py
deleted file mode 100644
index 55abcfdb87636a9ee85b8df5cdc1bec64098b5da..0000000000000000000000000000000000000000
--- a/spaces/801artistry/RVC801/infer/lib/infer_pack/modules/F0Predictor/DioF0Predictor.py
+++ /dev/null
@@ -1,91 +0,0 @@
-import numpy as np
-import pyworld
-
-from infer.lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor
-
-
-class DioF0Predictor(F0Predictor):
- def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100):
- self.hop_length = hop_length
- self.f0_min = f0_min
- self.f0_max = f0_max
- self.sampling_rate = sampling_rate
-
- def interpolate_f0(self, f0):
- """
-        Interpolate F0 over unvoiced (zero) frames; returns the interpolated
-        contour together with a voiced/unvoiced (vuv) mask.
- """
-
- data = np.reshape(f0, (f0.size, 1))
-
- vuv_vector = np.zeros((data.size, 1), dtype=np.float32)
- vuv_vector[data > 0.0] = 1.0
- vuv_vector[data <= 0.0] = 0.0
-
- ip_data = data
-
- frame_number = data.size
- last_value = 0.0
- for i in range(frame_number):
- if data[i] <= 0.0:
- j = i + 1
- for j in range(i + 1, frame_number):
- if data[j] > 0.0:
- break
- if j < frame_number - 1:
- if last_value > 0.0:
- step = (data[j] - data[i - 1]) / float(j - i)
- for k in range(i, j):
- ip_data[k] = data[i - 1] + step * (k - i + 1)
- else:
- for k in range(i, j):
- ip_data[k] = data[j]
- else:
- for k in range(i, frame_number):
- ip_data[k] = last_value
- else:
-                ip_data[i] = data[i]  # this may be an unnecessary copy
- last_value = data[i]
-
- return ip_data[:, 0], vuv_vector[:, 0]
-
- def resize_f0(self, x, target_len):
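-        # Linearly resample the F0 sequence to target_len frames; near-zero (unvoiced)
-        # values are set to NaN before interpolation and zero-filled afterwards.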
- source = np.array(x)
- source[source < 0.001] = np.nan
- target = np.interp(
- np.arange(0, len(source) * target_len, len(source)) / target_len,
- np.arange(0, len(source)),
- source,
- )
- res = np.nan_to_num(target)
- return res
-
- def compute_f0(self, wav, p_len=None):
- if p_len is None:
- p_len = wav.shape[0] // self.hop_length
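-        # DIO gives a fast raw F0 estimate; StoneMask below refines it against the waveform.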
- f0, t = pyworld.dio(
- wav.astype(np.double),
- fs=self.sampling_rate,
- f0_floor=self.f0_min,
- f0_ceil=self.f0_max,
- frame_period=1000 * self.hop_length / self.sampling_rate,
- )
- f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate)
- for index, pitch in enumerate(f0):
- f0[index] = round(pitch, 1)
- return self.interpolate_f0(self.resize_f0(f0, p_len))[0]
-
- def compute_f0_uv(self, wav, p_len=None):
- if p_len is None:
- p_len = wav.shape[0] // self.hop_length
- f0, t = pyworld.dio(
- wav.astype(np.double),
- fs=self.sampling_rate,
- f0_floor=self.f0_min,
- f0_ceil=self.f0_max,
- frame_period=1000 * self.hop_length / self.sampling_rate,
- )
- f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate)
- for index, pitch in enumerate(f0):
- f0[index] = round(pitch, 1)
- return self.interpolate_f0(self.resize_f0(f0, p_len))
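-
-
-if __name__ == "__main__":
-    # Minimal usage sketch, not part of the original module: "example.wav" is a
-    # hypothetical mono file, and soundfile is assumed to be installed.
-    import soundfile as sf
-
-    wav, sr = sf.read("example.wav")
-    predictor = DioF0Predictor(hop_length=512, sampling_rate=sr)
-    f0 = predictor.compute_f0(wav)
-    print(f0.shape, f0[:10])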
diff --git a/spaces/AIFILMS/Image-Animation-using-Thin-Plate-Spline-Motion-Model/README.md b/spaces/AIFILMS/Image-Animation-using-Thin-Plate-Spline-Motion-Model/README.md
deleted file mode 100644
index 2530d1d0b19ac755a71446269b5e5bcb32c5079d..0000000000000000000000000000000000000000
--- a/spaces/AIFILMS/Image-Animation-using-Thin-Plate-Spline-Motion-Model/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Image Animation Using Thin Plate Spline Motion Model
-emoji: 👁
-colorFrom: indigo
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.0.19
-app_file: app.py
-pinned: false
-duplicated_from: gronkomatic/Image-Animation-using-Thin-Plate-Spline-Motion-Model
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/training/data.py b/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/training/data.py
deleted file mode 100644
index 1d80d598be97d4e04f1b7f3e53a877cfe82ce667..0000000000000000000000000000000000000000
--- a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/training/data.py
+++ /dev/null
@@ -1,977 +0,0 @@
-import ast
-import json
-import logging
-import math
-import os
-import random
-# import h5py
-from dataclasses import dataclass
-from audioldm.clap.training.params import parse_args
-# import braceexpand
-import numpy as np
-import pandas as pd
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import torchvision.datasets as datasets
-import torchvision.transforms
-# import webdataset as wds
-from PIL import Image
-from torch.utils.data import Dataset, DataLoader, SubsetRandomSampler
-from torch.utils.data.distributed import DistributedSampler
-from functools import partial
-import soundfile as sf
-import io
-from pathlib import Path
-# import wget
-
-from audioldm.clap.open_clip.utils import (
- get_tar_path_from_dataset_name,
- dataset_split,
-)
-from audioldm.clap.open_clip.utils import load_p, load_class_label
-import copy
-
-try:
- import horovod.torch as hvd
-except ImportError:
- hvd = None
-
-try:
- import torchaudio
-except ImportError:
- torchaudio = None
-
-from audioldm.clap.open_clip import tokenize
-
-
-def tokenizer(text):
- return tokenize(text).squeeze(0)
-
-
-from transformers import RobertaTokenizer
-
-tokenize = RobertaTokenizer.from_pretrained("roberta-base")
-
-
-def tokenizer(text):
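-    # RoBERTa tokenization padded/truncated to 77 tokens; returns input_ids and attention_mask as 1-D tensors.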
- result = tokenize(
- text,
- padding="max_length",
- truncation=True,
- max_length=77,
- return_tensors="pt",
- )
- return {k: v.squeeze(0) for k, v in result.items()}
-
-
-# initialize the audioset map
-_AUDIOSET_MAP_PATH = os.path.join(Path(__file__).parent, "audioset_textmap.npy")
-_AUDIOSET_MAP = np.load(_AUDIOSET_MAP_PATH, allow_pickle=True)
-
-
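-# int16 <-> float32 helpers; round-tripping float audio through int16 reproduces 16-bit quantization of the waveform.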
-def int16_to_float32(x):
- return (x / 32767.0).astype(np.float32)
-
-
-def float32_to_int16(x):
- x = np.clip(x, a_min=-1.0, a_max=1.0)
- return (x * 32767.0).astype(np.int16)
-
-
-# For Toy Dataset
-# class ToyDataset(Dataset):
-# def __init__(self, index_path, ipc, config, eval_mode=False):
-# """Toy Dataset for testing the audioset input with text labels
-# Parameters
-# ----------
-# index_path: str
-# the link to the h5 file of each audio
-# idc: str
-# the link to the npy file, the number of samples in each class
-# config: dict
-# the audio cfg file
-# eval_model (bool): to indicate if the dataset is a testing dataset
-# """
-# self.audio_cfg = config["audio_cfg"]
-# self.text_cfg = config["text_cfg"]
-# self.fp = h5py.File(index_path, "r")
-# self.ipc = np.load(ipc, allow_pickle=True)
-# self.total_size = len(self.fp["audio_name"])
-# self.classes_num = self.audio_cfg["class_num"]
-# self.eval_mode = eval_mode
-
-# if not eval_mode:
-# self.generate_queue()
-# else:
-# self.queue = []
-# for i in range(self.total_size):
-# target = self.fp["target"][i]
-# if np.sum(target) > 0:
-# self.queue.append(i)
-# self.total_size = len(self.queue)
-# logging.info("total dataset size: %d" % (self.total_size))
-# logging.info("class num: %d" % (self.classes_num))
-
-# def time_shifting(self, x):
-# frame_num = len(x)
-# shift_len = random.randint(0, frame_num - 1)
-# new_sample = np.concatenate([x[shift_len:], x[:shift_len]], axis=0)
-# return new_sample
-
-# def generate_queue(self):
-# self.queue = []
-# while len(self.queue) < self.total_size:
-# class_set = [*range(self.classes_num)]
-# random.shuffle(class_set)
-# self.queue += [
-# self.ipc[d][random.randint(0, len(self.ipc[d]) - 1)] for d in class_set
-# ]
-# self.queue = self.queue[: self.total_size]
-
-# logging.info("queue regenerated:%s" % (self.queue[-5:]))
-
-# def crop_wav(self, x):
-# crop_size = self.audio_cfg["crop_size"]
-# crop_pos = random.randint(0, len(x) - crop_size - 1)
-# return x[crop_pos : crop_pos + crop_size]
-
-# def prompt_text(self, target):
-# events = _AUDIOSET_MAP[np.where(target > 0)]
-# event_text = "The sounds of " + ", ".join(events[:-1]) + " and " + events[-1]
-# text = tokenize(event_text)[0]
-# return text
-
-# def __getitem__(self, index):
-# """Load waveform, text, and target of an audio clip
-
-# Parameters
-# ----------
-# index: int
-# the index number
-# Return
-# ------
-# output: dict {
-# "hdf5_path": str,
-# "index_in_hdf5": int,
-# "audio_name": str,
-# "waveform": list (audio_length,),
-# "target": list (class_num, ),
-# "text": torch.tensor (context_length,)
-# }
-# the output dictionary
-# """
-# s_index = self.queue[index]
-
-# audio_name = self.fp["audio_name"][s_index].decode()
-# # Hardcode here CHANGE
-# hdf5_path = (
-# self.fp["hdf5_path"][s_index]
-# .decode()
-# .replace(
-# "../workspace",
-# "/home/la/kechen/Research/ke_zsasp/workspace",
-# )
-# )
-# r_idx = self.fp["index_in_hdf5"][s_index]
-# target = self.fp["target"][s_index].astype(np.float32)
-# text = self.prompt_text(target)
-# with h5py.File(hdf5_path, "r") as f:
-# waveform = int16_to_float32(f["waveform"][r_idx])[
-# : self.audio_cfg["clip_samples"]
-# ]
-# assert (
-# len(waveform) == self.audio_cfg["clip_samples"]
-# ), "The sample length is not match"
-# # Time shift
-# # if (self.config.enable_time_shift) and (not self.eval_mode):
-# # waveform = self.time_shifting(waveform)
-# # # Label Enhance
-# # if (self.config.crop_size is not None) and (not self.eval_mode):
-# # waveform = self.crop_wav(waveform)
-# # # the label enhance rate is fixed 0.5
-# # if (self.config.enable_label_enhance) and (not self.eval_mode) and random.random() < 0.5:
-# # kidx = np.where(target)[0]
-# # for k in kidx:
-# # for add_key in self.class_map[k][1]:
-# # target[add_key] = 1.0
-# # if len(self.class_map[k][2]) > 0:
-# # add_key = random.choice(self.class_map[k][2])
-# # target[add_key] = 1.0
-
-# # missing the text input
-# mel_spec = get_mel(torch.from_numpy(waveform), self.audio_cfg)[None, :, :]
-# mel_spec = (
-# torch.cat(
-# [mel_spec, mel_spec.clone(), mel_spec.clone(), mel_spec.clone()], dim=0
-# )
-# .cpu()
-# .numpy()
-# )
-# longer = random.choice([True, False])
-# if longer == False:
-# mel_spec[1:, :, :] = 0.0
-# data_dict = {
-# "hdf5_path": hdf5_path,
-# "index_in_hdf5": r_idx,
-# "audio_name": audio_name,
-# "waveform": waveform,
-# "class_label": target,
-# "text": text,
-# "longer": longer,
-# "mel_fusion": mel_spec,
-# }
-# return data_dict
-
-# def __len__(self):
-# return self.total_size
-
-
-class CsvDataset(Dataset):
- def __init__(self, input_filename, transforms, img_key, caption_key, sep="\t"):
- logging.debug(f"Loading csv data from {input_filename}.")
- df = pd.read_csv(input_filename, sep=sep)
-
- self.images = df[img_key].tolist()
- self.captions = df[caption_key].tolist()
- self.transforms = transforms
- logging.debug("Done loading data.")
-
- def __len__(self):
- return len(self.captions)
-
- def __getitem__(self, idx):
- images = self.transforms(Image.open(str(self.images[idx])))
- texts = tokenize([str(self.captions[idx])])[0]
- return images, texts
-
-
-@dataclass
-class DataInfo:
- dataloader: DataLoader
- sampler: DistributedSampler
-
-
-def preprocess_txt(text):
- return tokenize([str(text)])[0]
-
-
-def get_dataset_size(shards, sizefilepath_=None, is_local=True):
- if isinstance(shards, list):
- size_list = []
- for s in shards:
- size_list.append(
- get_dataset_size(s, sizefilepath_=sizefilepath_, is_local=is_local)[0]
- )
- else:
- if not is_local:
- for n in dataset_split.keys():
- if n in shards.split("/"):
- break
- for s in dataset_split[n]:
- if s in shards.split("/"):
- break
- sizefilepath_ = f"./json_files/{n}/{s}/sizes.json"
- shards_list = list(braceexpand.braceexpand(shards))
- dir_path = os.path.dirname(shards)
- if sizefilepath_ is not None:
- sizes = json.load(open(sizefilepath_, "r"))
- total_size = sum(
- [
- int(sizes[os.path.basename(shard.replace(".tar -", ".tar"))])
- for shard in shards_list
- ]
- )
- else:
- sizes_filename = os.path.join(dir_path, "sizes.json")
- len_filename = os.path.join(dir_path, "__len__")
- if os.path.exists(sizes_filename):
- sizes = json.load(open(sizes_filename, "r"))
- total_size = sum(
- [int(sizes[os.path.basename(shard)]) for shard in shards_list]
- )
- elif os.path.exists(len_filename):
- # FIXME this used to be eval(open(...)) but that seemed rather unsafe
- total_size = ast.literal_eval(open(len_filename, "r").read())
- else:
- raise Exception(
- "Cannot find sizes file for dataset. Please specify the path to the file."
- )
- # total_size = None # num samples undefined
-    # some common dataset sizes (at time of the authors' last download)
- # cc3m-train: 2905954
- # cc12m: 10968539
- # LAION-400m: 407332084
- num_shards = len(shards_list)
- if isinstance(shards, list):
- return sum(size_list), len(shards)
- else:
- return total_size, num_shards
-
-
-def get_imagenet(args, preprocess_fns, split):
- assert split in ["train", "val", "v2"]
- is_train = split == "train"
- preprocess_train, preprocess_val = preprocess_fns
-
- if split == "v2":
- from imagenetv2_pytorch import ImageNetV2Dataset
-
- dataset = ImageNetV2Dataset(location=args.imagenet_v2, transform=preprocess_val)
- else:
- if is_train:
- data_path = args.imagenet_train
- preprocess_fn = preprocess_train
- else:
- data_path = args.imagenet_val
- preprocess_fn = preprocess_val
- assert data_path
-
- dataset = datasets.ImageFolder(data_path, transform=preprocess_fn)
-
- if is_train:
- idxs = np.zeros(len(dataset.targets))
- target_array = np.array(dataset.targets)
- k = 50
- for c in range(1000):
- m = target_array == c
- n = len(idxs[m])
- arr = np.zeros(n)
- arr[:k] = 1
- np.random.shuffle(arr)
- idxs[m] = arr
-
- idxs = idxs.astype("int")
- sampler = SubsetRandomSampler(np.where(idxs)[0])
- else:
- sampler = None
-
- dataloader = torch.utils.data.DataLoader(
- dataset,
- batch_size=args.batch_size,
- num_workers=args.workers,
- sampler=sampler,
- )
-
- return DataInfo(dataloader, sampler)
-
-
-def count_samples(dataloader):
- os.environ["WDS_EPOCH"] = "0"
- n_elements, n_batches = 0, 0
- for images, texts in dataloader:
- n_batches += 1
- n_elements += len(images)
- assert len(images) == len(texts)
- return n_elements, n_batches
-
-
-def filter_no_caption(sample):
- return "txt" in sample
-
-
-def log_and_continue(exn):
-    """Call in an exception handler to ignore any exception, issue a warning, and continue."""
- logging.warning(f"Handling webdataset error ({repr(exn)}). Ignoring.")
- return True
-
-
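-# Shuffle buffer sizes for the webdataset pipeline (shard-level and sample-level), used in get_wds_dataset below.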
-_SHARD_SHUFFLE_SIZE = 2000
-_SHARD_SHUFFLE_INITIAL = 500
-_SAMPLE_SHUFFLE_SIZE = 5000
-_SAMPLE_SHUFFLE_INITIAL = 1000
-
-
-def sample_prop(sizefile, inputs, proportion, is_local=True):
- """
- Sample a proportion of the data.
- """
- file_path_dict = {
- os.path.split(inputs[i])[1]: os.path.split(inputs[i])[0]
- for i in range(len(inputs))
- }
- sampled_filepath_dict = {}
- sampled_size_dict = {}
- if not is_local:
- if os.path.exists("sizes.json"):
- os.remove("sizes.json")
- wget.download(sizefile, "sizes.json")
- sizefile = "sizes.json"
- with open(sizefile, "r", encoding="UTF-8") as f:
- load_dict = json.load(f)
- L = int(len(file_path_dict) * proportion)
-    subkeys = random.sample(list(file_path_dict.keys()), L)
- for k in subkeys:
- sampled_size_dict[k] = load_dict[k]
- sampled_filepath_dict[k] = file_path_dict[k]
- return (
- sum(sampled_size_dict.values()),
- L,
- [os.path.join(v, k) for k, v in sampled_filepath_dict.items()],
- sampled_size_dict,
- )
-
-
-def get_mel(audio_data, audio_cfg):
- # mel shape: (n_mels, T)
- mel = torchaudio.transforms.MelSpectrogram(
- sample_rate=audio_cfg["sample_rate"],
- n_fft=audio_cfg["window_size"],
- win_length=audio_cfg["window_size"],
- hop_length=audio_cfg["hop_size"],
- center=True,
- pad_mode="reflect",
- power=2.0,
- norm=None,
- onesided=True,
- n_mels=64,
- f_min=audio_cfg["fmin"],
- f_max=audio_cfg["fmax"],
- ).to(audio_data.device)
- mel = mel(audio_data)
- # Align to librosa:
- # librosa_melspec = librosa.feature.melspectrogram(
- # waveform,
- # sr=audio_cfg['sample_rate'],
- # n_fft=audio_cfg['window_size'],
- # hop_length=audio_cfg['hop_size'],
- # win_length=audio_cfg['window_size'],
- # center=True,
- # pad_mode="reflect",
- # power=2.0,
- # n_mels=64,
- # norm=None,
- # htk=True,
- # f_min=audio_cfg['fmin'],
- # f_max=audio_cfg['fmax']
- # )
- # we use log mel spectrogram as input
- mel = torchaudio.transforms.AmplitudeToDB(top_db=None)(mel)
- return mel.T # (T, n_mels)
-
-
-def get_audio_features(
- sample, audio_data, max_len, data_truncating, data_filling, audio_cfg
-):
- """
- Calculate and add audio features to sample.
- Sample: a dict containing all the data of current sample.
- audio_data: a tensor of shape (T) containing audio data.
- max_len: the maximum length of audio data.
- data_truncating: the method of truncating data.
- data_filling: the method of filling data.
- audio_cfg: a dict containing audio configuration. Comes from model_cfg['audio_cfg'].
- """
- with torch.no_grad():
- if len(audio_data) > max_len:
- if data_truncating == "rand_trunc":
- longer = torch.tensor([True])
- elif data_truncating == "fusion":
- # fusion
- mel = get_mel(audio_data, audio_cfg)
- # split to three parts
- chunk_frames = (
- max_len // audio_cfg["hop_size"] + 1
- ) # the +1 related to how the spectrogram is computed
- total_frames = mel.shape[0]
- if chunk_frames == total_frames:
- # there is a corner case where the audio length is
- # larger than max_len but smaller than max_len+hop_size.
- # In this case, we just use the whole audio.
- mel_fusion = torch.stack([mel, mel, mel, mel], dim=0)
- sample["mel_fusion"] = mel_fusion
- longer = torch.tensor([False])
- else:
- ranges = np.array_split(
- list(range(0, total_frames - chunk_frames + 1)), 3
- )
- # print('total_frames-chunk_frames:', total_frames-chunk_frames,
- # 'len(audio_data):', len(audio_data),
- # 'chunk_frames:', chunk_frames,
- # 'total_frames:', total_frames)
- if len(ranges[1]) == 0:
- # if the audio is too short, we just use the first chunk
- ranges[1] = [0]
- if len(ranges[2]) == 0:
- # if the audio is too short, we just use the first chunk
- ranges[2] = [0]
- # randomly choose index for each part
- idx_front = np.random.choice(ranges[0])
- idx_middle = np.random.choice(ranges[1])
- idx_back = np.random.choice(ranges[2])
- # select mel
- mel_chunk_front = mel[idx_front : idx_front + chunk_frames, :]
- mel_chunk_middle = mel[idx_middle : idx_middle + chunk_frames, :]
- mel_chunk_back = mel[idx_back : idx_back + chunk_frames, :]
-
- # shrink the mel
- mel_shrink = torchvision.transforms.Resize(size=[chunk_frames, 64])(
- mel[None]
- )[0]
- # logging.info(f"mel_shrink.shape: {mel_shrink.shape}")
-
- # stack
- mel_fusion = torch.stack(
- [mel_chunk_front, mel_chunk_middle, mel_chunk_back, mel_shrink],
- dim=0,
- )
- sample["mel_fusion"] = mel_fusion
- longer = torch.tensor([True])
- else:
- raise NotImplementedError(
- f"data_truncating {data_truncating} not implemented"
- )
- # random crop to max_len (for compatibility)
- overflow = len(audio_data) - max_len
- idx = np.random.randint(0, overflow + 1)
- audio_data = audio_data[idx : idx + max_len]
-
- else: # padding if too short
- if len(audio_data) < max_len: # do nothing if equal
- if data_filling == "repeatpad":
- n_repeat = int(max_len / len(audio_data))
- audio_data = audio_data.repeat(n_repeat)
- # audio_data = audio_data.unsqueeze(0).unsqueeze(0).unsqueeze(0)
- # audio_data = F.interpolate(audio_data,size=max_len,mode="bicubic")[0,0,0]
- audio_data = F.pad(
- audio_data,
- (0, max_len - len(audio_data)),
- mode="constant",
- value=0,
- )
- elif data_filling == "pad":
- audio_data = F.pad(
- audio_data,
- (0, max_len - len(audio_data)),
- mode="constant",
- value=0,
- )
- elif data_filling == "repeat":
- n_repeat = int(max_len / len(audio_data))
- audio_data = audio_data.repeat(n_repeat + 1)[:max_len]
- else:
- raise NotImplementedError(
- f"data_filling {data_filling} not implemented"
- )
- if data_truncating == "fusion":
- mel = get_mel(audio_data, audio_cfg)
- mel_fusion = torch.stack([mel, mel, mel, mel], dim=0)
- sample["mel_fusion"] = mel_fusion
- longer = torch.tensor([False])
-
- sample["longer"] = longer
- sample["waveform"] = audio_data
-
- return sample
-
-
-def preprocess(
- sample,
- audio_ext,
- text_ext,
- max_len,
- audio_cfg,
- class_index_dict=None,
- data_filling="pad",
- data_truncating="rand_trunc",
- text_augment_selection=None,
-):
- """
- Preprocess a single sample for wdsdataloader.
- """
- audio_data, orig_sr = sf.read(io.BytesIO(sample[audio_ext]))
- audio_data = int16_to_float32(float32_to_int16(audio_data))
- audio_data = torch.tensor(audio_data).float()
-
- # TODO: (yusong) to be include in the future
- # # if torchaudio not installed, use soundfile to load audio
- # if torchaudio is None:
- # audio_data, orig_sr = sf.read(io.BytesIO(sample[audio_ext]))
- # audio_data = torch.tensor(audio_data).float()
- # else:
- # # https://github.com/webdataset/webdataset/blob/main/webdataset/autodecode.py
- # with tempfile.TemporaryDirectory() as dirname:
- # os.makedirs(dirname, exist_ok=True)
- # fname = os.path.join(dirname, f"file.flac")
- # with open(fname, "wb") as stream:
- # stream.write(sample[audio_ext])
- # audio_data, orig_sr = torchaudio.load(fname)
- # audio_data = audio_data[0, :].float()
-
- sample = get_audio_features(
- sample, audio_data, max_len, data_truncating, data_filling, audio_cfg
- )
- del sample[audio_ext]
-
- try:
- json_dict_raw = json.loads(sample[text_ext].decode("utf-8"))
-    except Exception:
-        print("sample[__url__]:", sample["__url__"])
-        raise  # re-raise so a corrupt sample fails with the real error instead of an unbound json_dict_raw
-
- # For selecting augmented text from dataset
- if text_augment_selection is None or text_augment_selection == "none":
- texts = json_dict_raw["text"]
- elif text_augment_selection == "all":
- if "text_augment_all" in json_dict_raw.keys():
- texts = json_dict_raw["text_augment_all"]
- else:
- texts = json_dict_raw["text"]
- elif text_augment_selection == "augment_only":
- if "text_augment_all" in json_dict_raw.keys():
- if json_dict_raw["text_augment_t5"] is None:
- texts = json_dict_raw["text"]
- else:
- texts = json_dict_raw["text_augment_t5"]
- else:
- texts = json_dict_raw["text"]
- else:
- raise NotImplementedError(
- f"text_augment_selection {text_augment_selection} not implemented"
- )
- sample["full_text"] = texts
-
- if isinstance(texts, list) and isinstance(texts[0], str) and len(texts) > 1:
- texts = random.choice(texts)
- sample["raw_text"] = texts
- sample["text"] = tokenizer(texts) # text shape: [num_token]
- if class_index_dict is not None:
- # https://stackoverflow.com/questions/48004243/how-to-share-large-read-only-dictionary-list-across-processes-in-multiprocessing
- # https://stackoverflow.com/questions/45693949/storing-strings-in-a-multiprocessing-sharedctypes-array
- # key, val = class_index_dict
- # key = key[:].split('\n')
- # _dict = {k: v for k, v in zip(key, val)}
- sample["class_label"] = np.zeros(len(class_index_dict.keys()))
- for x in json_dict_raw["tag"]:
- sample["class_label"][class_index_dict[x]] = 1
- sample["class_label"] = torch.tensor(sample["class_label"]).float()
- del sample[text_ext]
- sample["audio_name"] = sample["__key__"].split("/")[-1] + "." + audio_ext
- sample["text_name"] = sample["__key__"].split("/")[-1] + "." + text_ext
- sample["audio_orig_sr"] = orig_sr
- return sample
-
-
-def collate_fn(batch):
- """
- Collate function for wdsdataloader.
- batch: a list of dict, each dict is a sample
- """
- # concatenate values in each dictionary. if it is a tensor, concatenate. if it is a list, extend.
- batch_dict = {}
- for k in batch[0].keys():
-        if isinstance(batch[0][k], dict):  # deal with BERT tokenizer output
- batch_dict[k] = {}
- for kk in batch[0][k].keys():
- tmp = []
- for i in range(len(batch)):
- tmp.append(batch[i][k][kk])
- batch_dict[k][kk] = torch.vstack(tmp)
- elif isinstance(batch[0][k], torch.Tensor):
- batch_dict[k] = torch.stack([sample[k] for sample in batch])
- elif isinstance(batch[0][k], np.ndarray):
- batch_dict[k] = torch.tensor(np.stack([sample[k] for sample in batch]))
- else:
- batch_dict[k] = [sample[k] for sample in batch]
- return batch_dict
-
-
-def get_wds_dataset(
- args,
- model_cfg,
- is_train,
- audio_ext="flac",
- text_ext="json",
- max_len=480000,
- proportion=1.0,
- sizefilepath_=None,
- is_local=None,
-):
- """
- Get a dataset for wdsdataloader.
- """
-    if is_local is None and args.remotedata is not None:
- is_local = not args.remotedata
-
- input_shards = args.train_data if is_train else args.val_data
- assert input_shards is not None
-
-    if sizefilepath_ is not None:
- sizefilepath = sizefilepath_
- else:
- sizefilepath = os.path.join(os.path.dirname(input_shards[0]), "sizes.json")
-
- if proportion != 1.0:
- num_samples, num_shards, input_shards, _ = sample_prop(
- sizefilepath, input_shards, proportion, is_local=is_local
- )
- else:
- num_samples, num_shards = get_dataset_size(
- input_shards, sizefilepath_=sizefilepath_, is_local=is_local
- )
-
- if not num_samples:
- if is_train:
- num_samples = args.train_num_samples
- if not num_samples:
- raise RuntimeError(
- "Currently, number of dataset samples must be specified for training dataset. "
- "Please specify via `--train-num-samples` if no dataset length info present."
- )
- else:
- num_samples = (
- args.val_num_samples or 0
- ) # eval will just exhaust the iterator if not specified
-
- pipeline = [wds.SimpleShardList(input_shards)]
- # at this point we have an iterator over all the shards
- # TODO: (yusong): add a if statement of distributed. If not, we don't need to split_by_node
- if is_train or args.parallel_eval:
- pipeline.extend(
- [
- wds.detshuffle(
- bufsize=_SHARD_SHUFFLE_SIZE,
- initial=_SHARD_SHUFFLE_INITIAL,
- seed=args.seed,
- ),
- wds.split_by_node,
- wds.split_by_worker,
- # at this point, we have an iterator over the shards assigned to each worker at each node
- wds.tarfile_to_samples(handler=log_and_continue),
- wds.shuffle(
- bufsize=_SAMPLE_SHUFFLE_SIZE,
- initial=_SAMPLE_SHUFFLE_INITIAL,
- rng=random.Random(args.seed),
- ),
- # wds.repeatedly, # FIXME determine if this is beneficial
- ]
- )
- else:
- pipeline.extend(
- [
- wds.split_by_worker,
- # at this point, we have an iterator over the shards assigned to each worker
- wds.tarfile_to_samples(handler=log_and_continue),
- ]
- )
- pipeline.append(
- wds.map(
- partial(
- preprocess,
- audio_ext=audio_ext,
- text_ext=text_ext,
- max_len=max_len,
- audio_cfg=model_cfg["audio_cfg"],
- class_index_dict=copy.deepcopy(args.class_index_dict),
- data_filling=args.data_filling,
- data_truncating=args.data_truncating,
- text_augment_selection=args.text_augment_selection,
- )
- ),
- )
-
- pipeline.append(
- wds.batched(
- args.batch_size,
- partial=not (is_train or args.parallel_eval),
- collation_fn=collate_fn,
- )
- )
-
- dataset = wds.DataPipeline(*pipeline)
- if is_train or args.parallel_eval:
-        # (yusong): Currently parallel evaluation is not precise, as we repeat the last few samples.
- # (yusong): See comments below.
- # roll over and repeat a few samples to get same number of full batches on each node
- global_batch_size = args.batch_size * args.world_size
- num_batches = math.ceil(num_samples / global_batch_size)
- num_workers = max(1, args.workers)
- num_worker_batches = math.ceil(
- num_batches / num_workers
- ) # per dataloader worker
- num_batches = num_worker_batches * num_workers
- num_samples = num_batches * global_batch_size
- dataset = dataset.with_epoch(
- num_worker_batches
- ) # each worker is iterating over this
- else:
- # last batches are partial, eval is done on single (master) node
- num_batches = math.ceil(num_samples / args.batch_size)
-
- kwargs = {}
- if args.horovod: # multi-node training on summit
- kwargs["multiprocessing_context"] = "forkserver"
-
- dataloader = wds.WebLoader(
- dataset, batch_size=None, shuffle=False, num_workers=args.workers, **kwargs
- )
-
- # FIXME not clear which approach is better, with_epoch before vs after dataloader?
- # hoping to resolve via https://github.com/webdataset/webdataset/issues/169
- # if is_train:
- # # roll over and repeat a few samples to get same number of full batches on each node
- # global_batch_size = args.batch_size * args.world_size
- # num_batches = math.ceil(num_samples / global_batch_size)
- # num_workers = max(1, args.workers)
- # num_batches = math.ceil(num_batches / num_workers) * num_workers
- # num_samples = num_batches * global_batch_size
- # dataloader = dataloader.with_epoch(num_batches)
- # else:
- # # last batches are partial, eval is done on single (master) node
- # num_batches = math.ceil(num_samples / args.batch_size)
-
- # add meta-data to dataloader instance for convenience
- dataloader.num_batches = num_batches
- dataloader.num_samples = num_samples
-
- return DataInfo(dataloader, None)
-
-
-def wds_batch_list2dict(
- batch,
- keys=[
- "__url__",
- "__key__",
- "waveform",
- "text",
- "raw_text",
- "audio_name",
- "text_name",
- "audio_orig_sr",
- ],
-):
- """
- Return a dictionary of the batch, with keys as the names of the fields.
- """
- assert len(keys) == len(
- batch
- ), "batch must have same number of keys as keys argument"
- return {keys[i]: batch[i] for i in range(len(batch))}
-
-
-def get_csv_dataset(args, preprocess_fn, is_train):
- input_filename = args.train_data if is_train else args.val_data
- assert input_filename
- dataset = CsvDataset(
- input_filename,
- preprocess_fn,
- img_key=args.csv_img_key,
- caption_key=args.csv_caption_key,
- sep=args.csv_separator,
- )
- num_samples = len(dataset)
- sampler = DistributedSampler(dataset) if args.distributed and is_train else None
- shuffle = is_train and sampler is None
-
- dataloader = DataLoader(
- dataset,
- batch_size=args.batch_size,
- shuffle=shuffle,
- num_workers=args.workers,
- pin_memory=True,
- sampler=sampler,
- drop_last=is_train,
- )
- dataloader.num_samples = num_samples
- dataloader.num_batches = len(dataloader)
-
- return DataInfo(dataloader, sampler)
-
-
-def get_toy_dataset(args, model_cfg, is_train):
- index_path = args.train_data if is_train else args.val_data
- ipc_path = args.train_ipc if is_train else args.val_ipc
- assert index_path and ipc_path
- eval_mode = not is_train
- dataset = ToyDataset(index_path, ipc_path, model_cfg, eval_mode=eval_mode)
-
- num_samples = len(dataset)
- sampler = (
- DistributedSampler(dataset, shuffle=False)
- if args.distributed and is_train
- else None
- )
-
- dataloader = DataLoader(
- dataset,
- batch_size=args.batch_size,
- shuffle=False,
- num_workers=args.workers,
- sampler=sampler,
- drop_last=is_train,
- )
- dataloader.num_samples = num_samples
- dataloader.num_batches = len(dataloader)
-
- return DataInfo(dataloader, sampler)
-
-
-def get_dataset_fn(data_path, dataset_type):
- if dataset_type == "webdataset":
- return get_wds_dataset
- elif dataset_type == "csv":
- return get_csv_dataset
- elif dataset_type == "auto":
- ext = data_path.split(".")[-1]
- if ext in ["csv", "tsv"]:
- return get_csv_dataset
- elif ext in ["tar"]:
- return get_wds_dataset
- else:
- raise ValueError(
-                f"Tried to figure out dataset type, but failed for extension {ext}."
- )
- elif dataset_type == "toy":
- return get_toy_dataset
- else:
- raise ValueError(f"Unsupported dataset type: {dataset_type}")
-
-
-def get_data(args, model_cfg):
- data = {}
-
- args.class_index_dict = load_class_label(args.class_label_path)
-
- if args.datasetinfos is None:
- args.datasetinfos = ["train", "unbalanced_train", "balanced_train"]
- if args.dataset_type == "webdataset":
- args.train_data = get_tar_path_from_dataset_name(
- args.datasetnames,
- args.datasetinfos,
- islocal=not args.remotedata,
- proportion=args.dataset_proportion,
- dataset_path=args.datasetpath,
- full_dataset=args.full_train_dataset,
- )
-
- if args.full_train_dataset is None:
- args.full_train_dataset = []
- if args.exclude_eval_dataset is None:
- args.exclude_eval_dataset = []
- excluded_eval_datasets = args.full_train_dataset + args.exclude_eval_dataset
-
- val_dataset_names = (
- [n for n in args.datasetnames if n not in excluded_eval_datasets]
- if excluded_eval_datasets
- else args.datasetnames
- )
- args.val_dataset_names = val_dataset_names
- args.val_data = get_tar_path_from_dataset_name(
- val_dataset_names,
- ["valid", "test", "eval"],
- islocal=not args.remotedata,
- proportion=1,
- dataset_path=args.datasetpath,
- full_dataset=None,
- )
-
- if args.train_data:
- data["train"] = get_dataset_fn(args.train_data, args.dataset_type)(
- args, model_cfg, is_train=True
- )
-
- if args.val_data:
- data["val"] = get_dataset_fn(args.val_data, args.dataset_type)(
- args, model_cfg, is_train=False
- )
-
- return data
diff --git a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/training/infer_demo.py b/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/training/infer_demo.py
deleted file mode 100644
index 7d1f4784898dbfeb69affefb6f624711adc8cb42..0000000000000000000000000000000000000000
--- a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/training/infer_demo.py
+++ /dev/null
@@ -1,105 +0,0 @@
-import sys
-
-import os
-import torch
-import librosa
-from open_clip import create_model
-from training.data import get_audio_features
-from training.data import int16_to_float32, float32_to_int16
-from transformers import RobertaTokenizer
-
-tokenize = RobertaTokenizer.from_pretrained("roberta-base")
-
-
-def tokenizer(text):
- result = tokenize(
- text,
- padding="max_length",
- truncation=True,
- max_length=77,
- return_tensors="pt",
- )
- return {k: v.squeeze(0) for k, v in result.items()}
-
-
-PRETRAINED_PATH = "/mnt/fast/nobackup/users/hl01486/projects/contrastive_pretraining/CLAP/assets/checkpoints/epoch_top_0_audioset_no_fusion.pt"
-WAVE_48k_PATH = "/mnt/fast/nobackup/users/hl01486/projects/contrastive_pretraining/CLAP/assets/audio/machine.wav"
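-# NOTE: both paths above are machine-specific; point them at your own CLAP checkpoint and a 48 kHz audio file before running.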
-
-
-def infer_text():
- device = "cuda:0" if torch.cuda.is_available() else "cpu"
- precision = "fp32"
- amodel = "HTSAT-tiny" # or 'PANN-14'
- tmodel = "roberta" # the best text encoder in our training
- enable_fusion = False # False if you do not want to use the fusion model
- fusion_type = "aff_2d"
- pretrained = PRETRAINED_PATH
-
- model, model_cfg = create_model(
- amodel,
- tmodel,
- pretrained,
- precision=precision,
- device=device,
- enable_fusion=enable_fusion,
- fusion_type=fusion_type,
- )
- # load the text, can be a list (i.e. batch size)
- text_data = ["I love the contrastive learning", "I love the pretrain model"]
- # tokenize for roberta, if you want to tokenize for another text encoder, please refer to data.py#L43-90
- text_data = tokenizer(text_data)
-
- text_embed = model.get_text_embedding(text_data)
- print(text_embed.size())
-
-
-def infer_audio():
-
- device = "cuda:0" if torch.cuda.is_available() else "cpu"
- precision = "fp32"
- amodel = "HTSAT-tiny" # or 'PANN-14'
- tmodel = "roberta" # the best text encoder in our training
- enable_fusion = False # False if you do not want to use the fusion model
- fusion_type = "aff_2d"
- pretrained = PRETRAINED_PATH
-
- model, model_cfg = create_model(
- amodel,
- tmodel,
- pretrained,
- precision=precision,
- device=device,
- enable_fusion=enable_fusion,
- fusion_type=fusion_type,
- )
-
- # load the waveform of the shape (T,), should resample to 48000
- audio_waveform, sr = librosa.load(WAVE_48k_PATH, sr=48000)
- # quantize
- audio_waveform = int16_to_float32(float32_to_int16(audio_waveform))
- audio_waveform = torch.from_numpy(audio_waveform).float()
- audio_dict = {}
-
- # the 'fusion' truncate mode can be changed to 'rand_trunc' if run in unfusion mode
- import ipdb
-
- ipdb.set_trace()
- audio_dict = get_audio_features(
- audio_dict,
- audio_waveform,
- 480000,
- data_truncating="fusion",
- data_filling="repeatpad",
- audio_cfg=model_cfg["audio_cfg"],
- )
- # can send a list to the model, to process many audio tracks in one time (i.e. batch size)
- audio_embed = model.get_audio_embedding([audio_dict])
- print(audio_embed.size())
- import ipdb
-
- ipdb.set_trace()
-
-
-if __name__ == "__main__":
- infer_text()
- infer_audio()
diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/data_gen/tts/wav_processors/common_processors.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/data_gen/tts/wav_processors/common_processors.py
deleted file mode 100644
index 5cf79cfd118bc8ab13355ff57435a244688e4b22..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/data_gen/tts/wav_processors/common_processors.py
+++ /dev/null
@@ -1,86 +0,0 @@
-import os
-import subprocess
-import librosa
-import numpy as np
-from text_to_speech.data_gen.tts.wav_processors.base_processor import BaseWavProcessor, register_wav_processors
-from text_to_speech.utils.audio import trim_long_silences
-from text_to_speech.utils.audio.io import save_wav
-from text_to_speech.utils.audio.rnnoise import rnnoise
-from text_to_speech.utils.commons.hparams import hparams
-
-
-@register_wav_processors(name='sox_to_wav')
-class ConvertToWavProcessor(BaseWavProcessor):
- @property
- def name(self):
- return 'ToWav'
-
- def process(self, input_fn, sr, tmp_dir, processed_dir, item_name, preprocess_args):
- if input_fn[-4:] == '.wav':
- return input_fn, sr
- else:
- output_fn = self.output_fn(input_fn)
- subprocess.check_call(f'sox -v 0.95 "{input_fn}" -t wav "{output_fn}"', shell=True)
- return output_fn, sr
-
-
-@register_wav_processors(name='sox_resample')
-class ResampleProcessor(BaseWavProcessor):
- @property
- def name(self):
- return 'Resample'
-
- def process(self, input_fn, sr, tmp_dir, processed_dir, item_name, preprocess_args):
- output_fn = self.output_fn(input_fn)
- sr_file = librosa.core.get_samplerate(input_fn)
- if sr != sr_file:
- subprocess.check_call(f'sox -v 0.95 "{input_fn}" -r{sr} "{output_fn}"', shell=True)
- y, _ = librosa.core.load(input_fn, sr=sr)
- y, _ = librosa.effects.trim(y)
- save_wav(y, output_fn, sr)
- return output_fn, sr
- else:
- return input_fn, sr
-
-
-@register_wav_processors(name='trim_sil')
-class TrimSILProcessor(BaseWavProcessor):
- @property
- def name(self):
- return 'TrimSIL'
-
- def process(self, input_fn, sr, tmp_dir, processed_dir, item_name, preprocess_args):
- output_fn = self.output_fn(input_fn)
- y, _ = librosa.core.load(input_fn, sr=sr)
- y, _ = librosa.effects.trim(y)
- save_wav(y, output_fn, sr)
-        return output_fn, sr
-
-
-@register_wav_processors(name='trim_all_sil')
-class TrimAllSILProcessor(BaseWavProcessor):
- @property
- def name(self):
-        return 'TrimAllSIL'
-
- def process(self, input_fn, sr, tmp_dir, processed_dir, item_name, preprocess_args):
- output_fn = self.output_fn(input_fn)
- y, audio_mask, _ = trim_long_silences(
- input_fn, vad_max_silence_length=preprocess_args.get('vad_max_silence_length', 12))
- save_wav(y, output_fn, sr)
- if preprocess_args['save_sil_mask']:
- os.makedirs(f'{processed_dir}/sil_mask', exist_ok=True)
- np.save(f'{processed_dir}/sil_mask/{item_name}.npy', audio_mask)
- return output_fn, sr
-
-
-@register_wav_processors(name='denoise')
-class DenoiseProcessor(BaseWavProcessor):
- @property
- def name(self):
- return 'Denoise'
-
- def process(self, input_fn, sr, tmp_dir, processed_dir, item_name, preprocess_args):
- output_fn = self.output_fn(input_fn)
- rnnoise(input_fn, output_fn, out_sample_rate=sr)
- return output_fn, sr
diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Lockchat.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Lockchat.py
deleted file mode 100644
index 1bce74035403bf8615e68ccfcc9deb7e0151817a..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Lockchat.py
+++ /dev/null
@@ -1,32 +0,0 @@
-import requests
-import os
-import json
-from ...typing import sha256, Dict, get_type_hints
-url = 'http://supertest.lockchat.app'
-model = ['gpt-4', 'gpt-3.5-turbo']
-supports_stream = True
-needs_auth = False
-
-def _create_completion(model: str, messages: list, stream: bool, temperature: float = 0.7, **kwargs):
-
- payload = {
-        "temperature": temperature,
- "messages": messages,
- "model": model,
- "stream": True,
- }
- headers = {
- "user-agent": "ChatX/39 CFNetwork/1408.0.4 Darwin/22.5.0",
- }
- response = requests.post("http://supertest.lockchat.app/v1/chat/completions",
- json=payload, headers=headers, stream=True)
- for token in response.iter_lines():
-        if b'The model: `gpt-4` does not exist' in token:
-            print('error, retrying...')
-            # the retry call returns a generator; yield from it and stop reading the failed stream
-            yield from _create_completion(model=model, messages=messages, stream=stream, temperature=temperature, **kwargs)
-            return
- if b"content" in token:
- token = json.loads(token.decode('utf-8').split('data: ')[1])['choices'][0]['delta'].get('content')
- if token: yield (token)
-
-params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
- '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
\ No newline at end of file
diff --git a/spaces/AdamOswald1/finetuned_diffusion/utils.py b/spaces/AdamOswald1/finetuned_diffusion/utils.py
deleted file mode 100644
index ff1c065d186347ca51b47d010a697dbe1814695c..0000000000000000000000000000000000000000
--- a/spaces/AdamOswald1/finetuned_diffusion/utils.py
+++ /dev/null
@@ -1,6 +0,0 @@
-def is_google_colab():
- try:
- import google.colab
- return True
-    except ImportError:
- return False
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/inputtext/Factory.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/inputtext/Factory.d.ts
deleted file mode 100644
index 24c4f07c0bcaff2b97d7a94963a0a8d9e5e5fedb..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/inputtext/Factory.d.ts
+++ /dev/null
@@ -1,5 +0,0 @@
-import InputText from './InputText.js';
-
-export default function (
- config?: InputText.IConfig
-): InputText;
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/pinch/Factory.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/pinch/Factory.d.ts
deleted file mode 100644
index 3af0755e0f00da1f815731e886a5a505db183a05..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/pinch/Factory.d.ts
+++ /dev/null
@@ -1,7 +0,0 @@
-// import * as Phaser from 'phaser';
-import Pinch from "./Pinch";
-
-export default function (
- gameObject: Phaser.GameObjects.GameObject | Phaser.Scene,
- config?: Pinch.IConfig
-): Pinch;
\ No newline at end of file
diff --git a/spaces/AkitoP/umamusume_bert_vits2/text/chinese_bert.py b/spaces/AkitoP/umamusume_bert_vits2/text/chinese_bert.py
deleted file mode 100644
index 581e683b7c9112296770b0094371a594a51b32e9..0000000000000000000000000000000000000000
--- a/spaces/AkitoP/umamusume_bert_vits2/text/chinese_bert.py
+++ /dev/null
@@ -1,108 +0,0 @@
-import torch
-import sys
-from transformers import AutoTokenizer, AutoModelForMaskedLM
-import os
-# use the local checkpoint if D:\pyprojs\Bert-VITS2\bert\chinese-roberta-wwm-ext-large\pytorch_model exists
-local_bert = False
-if os.path.exists("./bert/chinese-roberta-wwm-ext-large/pytorch_model.bin"):
- local_bert = True
-
-
-bert_path = "./bert/chinese-roberta-wwm-ext-large" if local_bert else "hfl/chinese-roberta-wwm-ext-large"
-tokenizer = AutoTokenizer.from_pretrained(bert_path)
-
-models = dict()
-
-
-def get_bert_feature(text, word2ph, device=None):
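-    # run the Chinese RoBERTa model and expand word-level hidden states to phone level according to word2ph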
- if (
- sys.platform == "darwin"
- and torch.backends.mps.is_available()
- and device == "cpu"
- ):
- device = "mps"
- if not device:
- device = "cuda"
- if device not in models.keys():
-        models[device] = AutoModelForMaskedLM.from_pretrained(bert_path).to(device)
- with torch.no_grad():
- inputs = tokenizer(text, return_tensors="pt")
- for i in inputs:
- inputs[i] = inputs[i].to(device)
- res = models[device](**inputs, output_hidden_states=True)
- res = torch.cat(res["hidden_states"][-3:-2], -1)[0].cpu()
-
- assert len(word2ph) == len(text) + 2
- word2phone = word2ph
- phone_level_feature = []
- for i in range(len(word2phone)):
- repeat_feature = res[i].repeat(word2phone[i], 1)
- phone_level_feature.append(repeat_feature)
-
- phone_level_feature = torch.cat(phone_level_feature, dim=0)
-
- return phone_level_feature.T
-
-
-if __name__ == "__main__":
- import torch
-
-    word_level_feature = torch.rand(38, 1024)  # 38 words, each with a 1024-dim feature
- word2phone = [
- 1,
- 2,
- 1,
- 2,
- 2,
- 1,
- 2,
- 2,
- 1,
- 2,
- 2,
- 1,
- 2,
- 2,
- 2,
- 2,
- 2,
- 1,
- 1,
- 2,
- 2,
- 1,
- 2,
- 2,
- 2,
- 2,
- 1,
- 2,
- 2,
- 2,
- 2,
- 2,
- 1,
- 2,
- 2,
- 2,
- 2,
- 1,
- ]
-
-    # compute the total number of frames
- total_frames = sum(word2phone)
- print(word_level_feature.shape)
- print(word2phone)
- phone_level_feature = []
- for i in range(len(word2phone)):
- print(word_level_feature[i].shape)
-
-        # repeat each word's feature word2phone[i] times
- repeat_feature = word_level_feature[i].repeat(word2phone[i], 1)
- phone_level_feature.append(repeat_feature)
-
- phone_level_feature = torch.cat(phone_level_feature, dim=0)
- print(phone_level_feature.shape) # torch.Size([36, 1024])
diff --git a/spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/train.py b/spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/train.py
deleted file mode 100644
index 55eca2d0ad9463415970e09bccab8b722e496704..0000000000000000000000000000000000000000
--- a/spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/train.py
+++ /dev/null
@@ -1,141 +0,0 @@
-import argparse
-import logging
-import os
-
-import torch
-import torch.distributed as dist
-import torch.nn.functional as F
-import torch.utils.data.distributed
-from torch.nn.utils import clip_grad_norm_
-
-import losses
-from backbones import get_model
-from dataset import MXFaceDataset, SyntheticDataset, DataLoaderX
-from partial_fc import PartialFC
-from utils.utils_amp import MaxClipGradScaler
-from utils.utils_callbacks import CallBackVerification, CallBackLogging, CallBackModelCheckpoint
-from utils.utils_config import get_config
-from utils.utils_logging import AverageMeter, init_logging
-
-
-def main(args):
- cfg = get_config(args.config)
- try:
- world_size = int(os.environ['WORLD_SIZE'])
- rank = int(os.environ['RANK'])
- dist.init_process_group('nccl')
- except KeyError:
- world_size = 1
- rank = 0
- dist.init_process_group(backend='nccl', init_method="tcp://127.0.0.1:12584", rank=rank, world_size=world_size)
-
- local_rank = args.local_rank
- torch.cuda.set_device(local_rank)
- os.makedirs(cfg.output, exist_ok=True)
- init_logging(rank, cfg.output)
-
- if cfg.rec == "synthetic":
- train_set = SyntheticDataset(local_rank=local_rank)
- else:
- train_set = MXFaceDataset(root_dir=cfg.rec, local_rank=local_rank)
-
- train_sampler = torch.utils.data.distributed.DistributedSampler(train_set, shuffle=True)
- train_loader = DataLoaderX(
- local_rank=local_rank, dataset=train_set, batch_size=cfg.batch_size,
- sampler=train_sampler, num_workers=2, pin_memory=True, drop_last=True)
- backbone = get_model(cfg.network, dropout=0.0, fp16=cfg.fp16, num_features=cfg.embedding_size).to(local_rank)
-
- if cfg.resume:
- try:
- backbone_pth = os.path.join(cfg.output, "backbone.pth")
- backbone.load_state_dict(torch.load(backbone_pth, map_location=torch.device(local_rank)))
- if rank == 0:
- logging.info("backbone resume successfully!")
- except (FileNotFoundError, KeyError, IndexError, RuntimeError):
- if rank == 0:
- logging.info("resume fail, backbone init successfully!")
-
- backbone = torch.nn.parallel.DistributedDataParallel(
- module=backbone, broadcast_buffers=False, device_ids=[local_rank])
- backbone.train()
- margin_softmax = losses.get_loss(cfg.loss)
- module_partial_fc = PartialFC(
- rank=rank, local_rank=local_rank, world_size=world_size, resume=cfg.resume,
- batch_size=cfg.batch_size, margin_softmax=margin_softmax, num_classes=cfg.num_classes,
- sample_rate=cfg.sample_rate, embedding_size=cfg.embedding_size, prefix=cfg.output)
-
- opt_backbone = torch.optim.SGD(
- params=[{'params': backbone.parameters()}],
- lr=cfg.lr / 512 * cfg.batch_size * world_size,
- momentum=0.9, weight_decay=cfg.weight_decay)
- opt_pfc = torch.optim.SGD(
- params=[{'params': module_partial_fc.parameters()}],
- lr=cfg.lr / 512 * cfg.batch_size * world_size,
- momentum=0.9, weight_decay=cfg.weight_decay)
-
- num_image = len(train_set)
- total_batch_size = cfg.batch_size * world_size
- cfg.warmup_step = num_image // total_batch_size * cfg.warmup_epoch
- cfg.total_step = num_image // total_batch_size * cfg.num_epoch
-
- def lr_step_func(current_step):
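-        # linear warm-up, then step decay by 10x after each decay epoch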
- cfg.decay_step = [x * num_image // total_batch_size for x in cfg.decay_epoch]
- if current_step < cfg.warmup_step:
- return current_step / cfg.warmup_step
- else:
- return 0.1 ** len([m for m in cfg.decay_step if m <= current_step])
-
- scheduler_backbone = torch.optim.lr_scheduler.LambdaLR(
- optimizer=opt_backbone, lr_lambda=lr_step_func)
- scheduler_pfc = torch.optim.lr_scheduler.LambdaLR(
- optimizer=opt_pfc, lr_lambda=lr_step_func)
-
- for key, value in cfg.items():
- num_space = 25 - len(key)
- logging.info(": " + key + " " * num_space + str(value))
-
- val_target = cfg.val_targets
- callback_verification = CallBackVerification(2000, rank, val_target, cfg.rec)
- callback_logging = CallBackLogging(50, rank, cfg.total_step, cfg.batch_size, world_size, None)
- callback_checkpoint = CallBackModelCheckpoint(rank, cfg.output)
-
- loss = AverageMeter()
- start_epoch = 0
- global_step = 0
- grad_amp = MaxClipGradScaler(cfg.batch_size, 128 * cfg.batch_size, growth_interval=100) if cfg.fp16 else None
- for epoch in range(start_epoch, cfg.num_epoch):
- train_sampler.set_epoch(epoch)
- for step, (img, label) in enumerate(train_loader):
- global_step += 1
- features = F.normalize(backbone(img))
- x_grad, loss_v = module_partial_fc.forward_backward(label, features, opt_pfc)
- if cfg.fp16:
- features.backward(grad_amp.scale(x_grad))
- grad_amp.unscale_(opt_backbone)
- clip_grad_norm_(backbone.parameters(), max_norm=5, norm_type=2)
- grad_amp.step(opt_backbone)
- grad_amp.update()
- else:
- features.backward(x_grad)
- clip_grad_norm_(backbone.parameters(), max_norm=5, norm_type=2)
- opt_backbone.step()
-
- opt_pfc.step()
- module_partial_fc.update()
- opt_backbone.zero_grad()
- opt_pfc.zero_grad()
- loss.update(loss_v, 1)
- callback_logging(global_step, loss, epoch, cfg.fp16, scheduler_backbone.get_last_lr()[0], grad_amp)
- callback_verification(global_step, backbone)
- scheduler_backbone.step()
- scheduler_pfc.step()
- callback_checkpoint(global_step, backbone, module_partial_fc)
- dist.destroy_process_group()
-
-
-if __name__ == "__main__":
- torch.backends.cudnn.benchmark = True
- parser = argparse.ArgumentParser(description='PyTorch ArcFace Training')
- parser.add_argument('config', type=str, help='py config file')
- parser.add_argument('--local_rank', type=int, default=0, help='local_rank')
- main(parser.parse_args())
diff --git a/spaces/Altinas/vits-uma-genshin-honkais/commons.py b/spaces/Altinas/vits-uma-genshin-honkais/commons.py
deleted file mode 100644
index 40fcc05364d4815971f5c6f9dbb8dcef8e3ec1e9..0000000000000000000000000000000000000000
--- a/spaces/Altinas/vits-uma-genshin-honkais/commons.py
+++ /dev/null
@@ -1,172 +0,0 @@
-import math
-import torch
-from torch.nn import functional as F
-import torch.jit
-
-
-def script_method(fn, _rcb=None):
- return fn
-
-
-def script(obj, optimize=True, _frames_up=0, _rcb=None):
- return obj
-
-
-torch.jit.script_method = script_method
-torch.jit.script = script
-
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-def get_padding(kernel_size, dilation=1):
- return int((kernel_size*dilation - dilation)/2)
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def intersperse(lst, item):
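-    # interleave `item` between and around the elements of lst: [item, lst[0], item, lst[1], ..., item]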
- result = [item] * (len(lst) * 2 + 1)
- result[1::2] = lst
- return result
-
-
-def kl_divergence(m_p, logs_p, m_q, logs_q):
- """KL(P||Q)"""
- kl = (logs_q - logs_p) - 0.5
- kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. * logs_q)
- return kl
-
-
-def rand_gumbel(shape):
- """Sample from the Gumbel distribution, protect from overflows."""
- uniform_samples = torch.rand(shape) * 0.99998 + 0.00001
- return -torch.log(-torch.log(uniform_samples))
-
-
-def rand_gumbel_like(x):
- g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device)
- return g
-
-
-def slice_segments(x, ids_str, segment_size=4):
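-    # slice a segment of length segment_size from each batch element, starting at ids_str[i]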
- ret = torch.zeros_like(x[:, :, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, :, idx_str:idx_end]
- return ret
-
-
-def rand_slice_segments(x, x_lengths=None, segment_size=4):
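-    # draw random start indices (bounded by x_lengths) and return the sliced segments with their indices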
- b, d, t = x.size()
- if x_lengths is None:
- x_lengths = t
- ids_str_max = x_lengths - segment_size + 1
- ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
- ret = slice_segments(x, ids_str, segment_size)
- return ret, ids_str
-
-
-def get_timing_signal_1d(
- length, channels, min_timescale=1.0, max_timescale=1.0e4):
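-    # sinusoidal timing signal (as in "Attention Is All You Need"), returned with shape (1, channels, length)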
- position = torch.arange(length, dtype=torch.float)
- num_timescales = channels // 2
- log_timescale_increment = (
- math.log(float(max_timescale) / float(min_timescale)) /
- (num_timescales - 1))
- inv_timescales = min_timescale * torch.exp(
- torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment)
- scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1)
- signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0)
- signal = F.pad(signal, [0, 0, 0, channels % 2])
- signal = signal.view(1, channels, length)
- return signal
-
-
-def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return x + signal.to(dtype=x.dtype, device=x.device)
-
-
-def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis)
-
-
-def subsequent_mask(length):
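-    # lower-triangular causal mask of shape (1, 1, length, length)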
- mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0)
- return mask
-
-
-@torch.jit.script
-def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
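-    # WaveNet-style gated activation: tanh on the first n_channels channels, sigmoid on the rest, multiplied together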
- n_channels_int = n_channels[0]
- in_act = input_a + input_b
- t_act = torch.tanh(in_act[:, :n_channels_int, :])
- s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
- acts = t_act * s_act
- return acts
-
-
-def shift_1d(x):
- x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1]
- return x
-
-
-def sequence_mask(length, max_length=None):
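-    # boolean mask of shape (len(length), max_length), True at positions within each sequence's length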
- if max_length is None:
- max_length = length.max()
- x = torch.arange(max_length, dtype=length.dtype, device=length.device)
- return x.unsqueeze(0) < length.unsqueeze(1)
-
-
-def generate_path(duration, mask):
- """
- duration: [b, 1, t_x]
- mask: [b, 1, t_y, t_x]
- """
- device = duration.device
-
- b, _, t_y, t_x = mask.shape
- cum_duration = torch.cumsum(duration, -1)
-
- cum_duration_flat = cum_duration.view(b * t_x)
- path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
- path = path.view(b, t_x, t_y)
- path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
- path = path.unsqueeze(1).transpose(2,3) * mask
- return path
-
-
-def clip_grad_value_(parameters, clip_value, norm_type=2):
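-    # clamp gradients to [-clip_value, clip_value] and return the total gradient norm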
- if isinstance(parameters, torch.Tensor):
- parameters = [parameters]
- parameters = list(filter(lambda p: p.grad is not None, parameters))
- norm_type = float(norm_type)
- if clip_value is not None:
- clip_value = float(clip_value)
-
- total_norm = 0
- for p in parameters:
- param_norm = p.grad.data.norm(norm_type)
- total_norm += param_norm.item() ** norm_type
- if clip_value is not None:
- p.grad.data.clamp_(min=-clip_value, max=clip_value)
- total_norm = total_norm ** (1. / norm_type)
- return total_norm
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/audio_diffusion/pipeline_audio_diffusion.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/audio_diffusion/pipeline_audio_diffusion.py
deleted file mode 100644
index 74737560cd8ee8167e2c7527ba4a8d08131e58bc..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/audio_diffusion/pipeline_audio_diffusion.py
+++ /dev/null
@@ -1,329 +0,0 @@
-# Copyright 2023 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-
-from math import acos, sin
-from typing import List, Tuple, Union
-
-import numpy as np
-import torch
-from PIL import Image
-
-from ...models import AutoencoderKL, UNet2DConditionModel
-from ...schedulers import DDIMScheduler, DDPMScheduler
-from ...utils import randn_tensor
-from ..pipeline_utils import AudioPipelineOutput, BaseOutput, DiffusionPipeline, ImagePipelineOutput
-from .mel import Mel
-
-
-class AudioDiffusionPipeline(DiffusionPipeline):
- """
- Pipeline for audio diffusion.
-
- This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods
- implemented for all pipelines (downloading, saving, running on a particular device, etc.).
-
- Parameters:
-        vqvae ([`AutoencoderKL`]):
- Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- unet ([`UNet2DConditionModel`]):
- A `UNet2DConditionModel` to denoise the encoded image latents.
- mel ([`Mel`]):
- Transform audio into a spectrogram.
- scheduler ([`DDIMScheduler`] or [`DDPMScheduler`]):
- A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
- [`DDIMScheduler`] or [`DDPMScheduler`].
- """
-
- _optional_components = ["vqvae"]
-
- def __init__(
- self,
- vqvae: AutoencoderKL,
- unet: UNet2DConditionModel,
- mel: Mel,
- scheduler: Union[DDIMScheduler, DDPMScheduler],
- ):
- super().__init__()
- self.register_modules(unet=unet, scheduler=scheduler, mel=mel, vqvae=vqvae)
-
- def get_default_steps(self) -> int:
- """Returns default number of steps recommended for inference.
-
- Returns:
- `int`:
- The number of steps.
- """
- return 50 if isinstance(self.scheduler, DDIMScheduler) else 1000
-
- @torch.no_grad()
- def __call__(
- self,
- batch_size: int = 1,
- audio_file: str = None,
- raw_audio: np.ndarray = None,
- slice: int = 0,
- start_step: int = 0,
- steps: int = None,
- generator: torch.Generator = None,
- mask_start_secs: float = 0,
- mask_end_secs: float = 0,
- step_generator: torch.Generator = None,
- eta: float = 0,
- noise: torch.Tensor = None,
- encoding: torch.Tensor = None,
- return_dict=True,
- ) -> Union[
- Union[AudioPipelineOutput, ImagePipelineOutput],
- Tuple[List[Image.Image], Tuple[int, List[np.ndarray]]],
- ]:
- """
- The call function to the pipeline for generation.
-
- Args:
- batch_size (`int`):
- Number of samples to generate.
- audio_file (`str`):
- An audio file that must be on disk due to [Librosa](https://librosa.org/) limitation.
- raw_audio (`np.ndarray`):
- The raw audio file as a NumPy array.
- slice (`int`):
- Slice number of audio to convert.
- start_step (int):
- Step to start diffusion from.
- steps (`int`):
- Number of denoising steps (defaults to `50` for DDIM and `1000` for DDPM).
- generator (`torch.Generator`):
- A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
- generation deterministic.
- mask_start_secs (`float`):
- Number of seconds of audio to mask (not generate) at start.
- mask_end_secs (`float`):
- Number of seconds of audio to mask (not generate) at end.
- step_generator (`torch.Generator`):
- A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) used to denoise.
-                Defaults to `generator` if not provided.
- eta (`float`):
- Corresponds to parameter eta (η) from the [DDIM](https://arxiv.org/abs/2010.02502) paper. Only applies
- to the [`~schedulers.DDIMScheduler`], and is ignored in other schedulers.
- noise (`torch.Tensor`):
- A noise tensor of shape `(batch_size, 1, height, width)` or `None`.
- encoding (`torch.Tensor`):
- A tensor for [`UNet2DConditionModel`] of shape `(batch_size, seq_length, cross_attention_dim)`.
- return_dict (`bool`):
- Whether or not to return a [`AudioPipelineOutput`], [`ImagePipelineOutput`] or a plain tuple.
-
- Examples:
-
- For audio diffusion:
-
- ```py
- import torch
- from IPython.display import Audio
- from diffusers import DiffusionPipeline
-
- device = "cuda" if torch.cuda.is_available() else "cpu"
- pipe = DiffusionPipeline.from_pretrained("teticio/audio-diffusion-256").to(device)
-
- output = pipe()
- display(output.images[0])
-    display(Audio(output.audios[0], rate=pipe.mel.get_sample_rate()))
- ```
-
- For latent audio diffusion:
-
- ```py
- import torch
- from IPython.display import Audio
- from diffusers import DiffusionPipeline
-
- device = "cuda" if torch.cuda.is_available() else "cpu"
- pipe = DiffusionPipeline.from_pretrained("teticio/latent-audio-diffusion-256").to(device)
-
- output = pipe()
- display(output.images[0])
- display(Audio(output.audios[0], rate=pipe.mel.get_sample_rate()))
- ```
-
- For other tasks like variation, inpainting, outpainting, etc:
-
- ```py
- output = pipe(
- raw_audio=output.audios[0, 0],
- start_step=int(pipe.get_default_steps() / 2),
- mask_start_secs=1,
- mask_end_secs=1,
- )
- display(output.images[0])
- display(Audio(output.audios[0], rate=pipe.mel.get_sample_rate()))
- ```
-
- Returns:
- `List[PIL Image]`:
- A list of Mel spectrograms (`float`, `List[np.ndarray]`) with the sample rate and raw audio.
- """
-
- steps = steps or self.get_default_steps()
- self.scheduler.set_timesteps(steps)
- step_generator = step_generator or generator
- # For backwards compatibility
- if type(self.unet.config.sample_size) == int:
- self.unet.config.sample_size = (self.unet.config.sample_size, self.unet.config.sample_size)
- if noise is None:
- noise = randn_tensor(
- (
- batch_size,
- self.unet.config.in_channels,
- self.unet.config.sample_size[0],
- self.unet.config.sample_size[1],
- ),
- generator=generator,
- device=self.device,
- )
- images = noise
- mask = None
-
- if audio_file is not None or raw_audio is not None:
- self.mel.load_audio(audio_file, raw_audio)
- input_image = self.mel.audio_slice_to_image(slice)
- input_image = np.frombuffer(input_image.tobytes(), dtype="uint8").reshape(
- (input_image.height, input_image.width)
- )
- input_image = (input_image / 255) * 2 - 1
- input_images = torch.tensor(input_image[np.newaxis, :, :], dtype=torch.float).to(self.device)
-
- if self.vqvae is not None:
- input_images = self.vqvae.encode(torch.unsqueeze(input_images, 0)).latent_dist.sample(
- generator=generator
- )[0]
- input_images = self.vqvae.config.scaling_factor * input_images
-
- if start_step > 0:
- images[0, 0] = self.scheduler.add_noise(input_images, noise, self.scheduler.timesteps[start_step - 1])
-
- pixels_per_second = (
- self.unet.config.sample_size[1] * self.mel.get_sample_rate() / self.mel.x_res / self.mel.hop_length
- )
- mask_start = int(mask_start_secs * pixels_per_second)
- mask_end = int(mask_end_secs * pixels_per_second)
- mask = self.scheduler.add_noise(input_images, noise, torch.tensor(self.scheduler.timesteps[start_step:]))
-
- for step, t in enumerate(self.progress_bar(self.scheduler.timesteps[start_step:])):
- if isinstance(self.unet, UNet2DConditionModel):
- model_output = self.unet(images, t, encoding)["sample"]
- else:
- model_output = self.unet(images, t)["sample"]
-
- if isinstance(self.scheduler, DDIMScheduler):
- images = self.scheduler.step(
- model_output=model_output,
- timestep=t,
- sample=images,
- eta=eta,
- generator=step_generator,
- )["prev_sample"]
- else:
- images = self.scheduler.step(
- model_output=model_output,
- timestep=t,
- sample=images,
- generator=step_generator,
- )["prev_sample"]
-
- if mask is not None:
- if mask_start > 0:
- images[:, :, :, :mask_start] = mask[:, step, :, :mask_start]
- if mask_end > 0:
- images[:, :, :, -mask_end:] = mask[:, step, :, -mask_end:]
-
- if self.vqvae is not None:
- # 0.18215 was scaling factor used in training to ensure unit variance
- images = 1 / self.vqvae.config.scaling_factor * images
- images = self.vqvae.decode(images)["sample"]
-
- images = (images / 2 + 0.5).clamp(0, 1)
- images = images.cpu().permute(0, 2, 3, 1).numpy()
- images = (images * 255).round().astype("uint8")
- images = list(
- (Image.fromarray(_[:, :, 0]) for _ in images)
- if images.shape[3] == 1
- else (Image.fromarray(_, mode="RGB").convert("L") for _ in images)
- )
-
- audios = [self.mel.image_to_audio(_) for _ in images]
- if not return_dict:
- return images, (self.mel.get_sample_rate(), audios)
-
- return BaseOutput(**AudioPipelineOutput(np.array(audios)[:, np.newaxis, :]), **ImagePipelineOutput(images))
-
- @torch.no_grad()
- def encode(self, images: List[Image.Image], steps: int = 50) -> np.ndarray:
- """
- Reverse the denoising step process to recover a noisy image from the generated image.
-
- Args:
- images (`List[PIL Image]`):
- List of images to encode.
- steps (`int`):
- Number of encoding steps to perform (defaults to `50`).
-
- Returns:
- `np.ndarray`:
- A noise tensor of shape `(batch_size, 1, height, width)`.
- """
-
- # Only works with DDIM as this method is deterministic
- assert isinstance(self.scheduler, DDIMScheduler)
- self.scheduler.set_timesteps(steps)
- sample = np.array(
- [np.frombuffer(image.tobytes(), dtype="uint8").reshape((1, image.height, image.width)) for image in images]
- )
- sample = (sample / 255) * 2 - 1
- sample = torch.Tensor(sample).to(self.device)
-
- for t in self.progress_bar(torch.flip(self.scheduler.timesteps, (0,))):
- prev_timestep = t - self.scheduler.config.num_train_timesteps // self.scheduler.num_inference_steps
- alpha_prod_t = self.scheduler.alphas_cumprod[t]
- alpha_prod_t_prev = (
- self.scheduler.alphas_cumprod[prev_timestep]
- if prev_timestep >= 0
- else self.scheduler.final_alpha_cumprod
- )
- beta_prod_t = 1 - alpha_prod_t
- model_output = self.unet(sample, t)["sample"]
- pred_sample_direction = (1 - alpha_prod_t_prev) ** (0.5) * model_output
- sample = (sample - pred_sample_direction) * alpha_prod_t_prev ** (-0.5)
- sample = sample * alpha_prod_t ** (0.5) + beta_prod_t ** (0.5) * model_output
-
- return sample
-
- @staticmethod
- def slerp(x0: torch.Tensor, x1: torch.Tensor, alpha: float) -> torch.Tensor:
- """Spherical Linear intERPolation.
-
- Args:
- x0 (`torch.Tensor`):
- The first tensor to interpolate between.
- x1 (`torch.Tensor`):
- Second tensor to interpolate between.
- alpha (`float`):
- Interpolation between 0 and 1
-
- Returns:
- `torch.Tensor`:
- The interpolated tensor.
- """
-
- theta = acos(torch.dot(torch.flatten(x0), torch.flatten(x1)) / torch.norm(x0) / torch.norm(x1))
- return sin((1 - alpha) * theta) * x0 / sin(theta) + sin(alpha * theta) * x1 / sin(theta)
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/consistency_models/test_consistency_models.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/consistency_models/test_consistency_models.py
deleted file mode 100644
index 8dce903185053c68012281530414ecdb398c1732..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/consistency_models/test_consistency_models.py
+++ /dev/null
@@ -1,288 +0,0 @@
-import gc
-import unittest
-
-import numpy as np
-import torch
-from torch.backends.cuda import sdp_kernel
-
-from diffusers import (
- CMStochasticIterativeScheduler,
- ConsistencyModelPipeline,
- UNet2DModel,
-)
-from diffusers.utils import randn_tensor, slow, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, require_torch_2, require_torch_gpu
-
-from ..pipeline_params import UNCONDITIONAL_IMAGE_GENERATION_BATCH_PARAMS, UNCONDITIONAL_IMAGE_GENERATION_PARAMS
-from ..test_pipelines_common import PipelineTesterMixin
-
-
-enable_full_determinism()
-
-    # load the waveform of shape (T,); the model expects audio resampled to 48000 Hz
-class ConsistencyModelPipelineFastTests(PipelineTesterMixin, unittest.TestCase):
- pipeline_class = ConsistencyModelPipeline
- params = UNCONDITIONAL_IMAGE_GENERATION_PARAMS
- batch_params = UNCONDITIONAL_IMAGE_GENERATION_BATCH_PARAMS
-
- # Override required_optional_params to remove num_images_per_prompt
-    # the 'fusion' truncate mode can be changed to 'rand_trunc' when running without fusion
- "latents",
- "output_type",
- "return_dict",
- "callback",
- "callback_steps",
- ]
- )
-
-    # a list can be sent to the model to process multiple audio tracks at once (i.e. a batch)
-    audio_embed = model.get_audio_embedding([audio_dict])
-    print(audio_embed.size())
- return unet
-
- @property
- def dummy_cond_unet(self):
- unet = UNet2DModel.from_pretrained(
- "diffusers/consistency-models-test",
- subfolder="test_unet_class_cond",
- )
- return unet
-
- def get_dummy_components(self, class_cond=False):
- if class_cond:
- unet = self.dummy_cond_unet
- else:
- unet = self.dummy_uncond_unet
-
- # Default to CM multistep sampler
- scheduler = CMStochasticIterativeScheduler(
- num_train_timesteps=40,
- sigma_min=0.002,
- sigma_max=80.0,
- )
-
- components = {
- "unet": unet,
- "scheduler": scheduler,
- }
-
- return components
-
- def get_dummy_inputs(self, device, seed=0):
- if str(device).startswith("mps"):
- generator = torch.manual_seed(seed)
- else:
- generator = torch.Generator(device=device).manual_seed(seed)
-
- inputs = {
- "batch_size": 1,
- "num_inference_steps": None,
- "timesteps": [22, 0],
- "generator": generator,
- "output_type": "np",
- }
-
- return inputs
-
- def test_consistency_model_pipeline_multistep(self):
- device = "cpu" # ensure determinism for the device-dependent torch.Generator
- components = self.get_dummy_components()
- pipe = ConsistencyModelPipeline(**components)
- pipe = pipe.to(device)
- pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_dummy_inputs(device)
- image = pipe(**inputs).images
- assert image.shape == (1, 32, 32, 3)
-
- image_slice = image[0, -3:, -3:, -1]
- expected_slice = np.array([0.3572, 0.6273, 0.4031, 0.3961, 0.4321, 0.5730, 0.5266, 0.4780, 0.5004])
-
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-3
-
- def test_consistency_model_pipeline_multistep_class_cond(self):
- device = "cpu" # ensure determinism for the device-dependent torch.Generator
- components = self.get_dummy_components(class_cond=True)
- pipe = ConsistencyModelPipeline(**components)
- pipe = pipe.to(device)
- pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_dummy_inputs(device)
- inputs["class_labels"] = 0
- image = pipe(**inputs).images
- assert image.shape == (1, 32, 32, 3)
-
- image_slice = image[0, -3:, -3:, -1]
- expected_slice = np.array([0.3572, 0.6273, 0.4031, 0.3961, 0.4321, 0.5730, 0.5266, 0.4780, 0.5004])
-
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-3
-
- def test_consistency_model_pipeline_onestep(self):
- device = "cpu" # ensure determinism for the device-dependent torch.Generator
- components = self.get_dummy_components()
- pipe = ConsistencyModelPipeline(**components)
- pipe = pipe.to(device)
- pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_dummy_inputs(device)
- inputs["num_inference_steps"] = 1
- inputs["timesteps"] = None
- image = pipe(**inputs).images
- assert image.shape == (1, 32, 32, 3)
-
- image_slice = image[0, -3:, -3:, -1]
- expected_slice = np.array([0.5004, 0.5004, 0.4994, 0.5008, 0.4976, 0.5018, 0.4990, 0.4982, 0.4987])
-
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-3
-
- def test_consistency_model_pipeline_onestep_class_cond(self):
- device = "cpu" # ensure determinism for the device-dependent torch.Generator
- components = self.get_dummy_components(class_cond=True)
- pipe = ConsistencyModelPipeline(**components)
- pipe = pipe.to(device)
- pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_dummy_inputs(device)
- inputs["num_inference_steps"] = 1
- inputs["timesteps"] = None
- inputs["class_labels"] = 0
- image = pipe(**inputs).images
- assert image.shape == (1, 32, 32, 3)
-
- image_slice = image[0, -3:, -3:, -1]
- expected_slice = np.array([0.5004, 0.5004, 0.4994, 0.5008, 0.4976, 0.5018, 0.4990, 0.4982, 0.4987])
-
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-3
-
-
-@slow
-@require_torch_gpu
-class ConsistencyModelPipelineSlowTests(unittest.TestCase):
- def tearDown(self):
- super().tearDown()
- gc.collect()
- torch.cuda.empty_cache()
-
- def get_inputs(self, seed=0, get_fixed_latents=False, device="cpu", dtype=torch.float32, shape=(1, 3, 64, 64)):
- generator = torch.manual_seed(seed)
-
- inputs = {
- "num_inference_steps": None,
- "timesteps": [22, 0],
- "class_labels": 0,
- "generator": generator,
- "output_type": "np",
- }
-
- if get_fixed_latents:
- latents = self.get_fixed_latents(seed=seed, device=device, dtype=dtype, shape=shape)
- inputs["latents"] = latents
-
- return inputs
-
- def get_fixed_latents(self, seed=0, device="cpu", dtype=torch.float32, shape=(1, 3, 64, 64)):
- if type(device) == str:
- device = torch.device(device)
- generator = torch.Generator(device=device).manual_seed(seed)
- latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
- return latents
-
- def test_consistency_model_cd_multistep(self):
- unet = UNet2DModel.from_pretrained("diffusers/consistency_models", subfolder="diffusers_cd_imagenet64_l2")
- scheduler = CMStochasticIterativeScheduler(
- num_train_timesteps=40,
- sigma_min=0.002,
- sigma_max=80.0,
- )
- pipe = ConsistencyModelPipeline(unet=unet, scheduler=scheduler)
- pipe.to(torch_device=torch_device)
- pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_inputs()
- image = pipe(**inputs).images
- assert image.shape == (1, 64, 64, 3)
-
- image_slice = image[0, -3:, -3:, -1]
-
- expected_slice = np.array([0.0888, 0.0881, 0.0666, 0.0479, 0.0292, 0.0195, 0.0201, 0.0163, 0.0254])
-
- assert np.abs(image_slice.flatten() - expected_slice).max() < 2e-2
-
- def test_consistency_model_cd_onestep(self):
- unet = UNet2DModel.from_pretrained("diffusers/consistency_models", subfolder="diffusers_cd_imagenet64_l2")
- scheduler = CMStochasticIterativeScheduler(
- num_train_timesteps=40,
- sigma_min=0.002,
- sigma_max=80.0,
- )
- pipe = ConsistencyModelPipeline(unet=unet, scheduler=scheduler)
- pipe.to(torch_device=torch_device)
- pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_inputs()
- inputs["num_inference_steps"] = 1
- inputs["timesteps"] = None
- image = pipe(**inputs).images
- assert image.shape == (1, 64, 64, 3)
-
- image_slice = image[0, -3:, -3:, -1]
-
- expected_slice = np.array([0.0340, 0.0152, 0.0063, 0.0267, 0.0221, 0.0107, 0.0416, 0.0186, 0.0217])
-
- assert np.abs(image_slice.flatten() - expected_slice).max() < 2e-2
-
- @require_torch_2
- def test_consistency_model_cd_multistep_flash_attn(self):
- unet = UNet2DModel.from_pretrained("diffusers/consistency_models", subfolder="diffusers_cd_imagenet64_l2")
- scheduler = CMStochasticIterativeScheduler(
- num_train_timesteps=40,
- sigma_min=0.002,
- sigma_max=80.0,
- )
- pipe = ConsistencyModelPipeline(unet=unet, scheduler=scheduler)
- pipe.to(torch_device=torch_device, torch_dtype=torch.float16)
- pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_inputs(get_fixed_latents=True, device=torch_device)
- # Ensure usage of flash attention in torch 2.0
- with sdp_kernel(enable_flash=True, enable_math=False, enable_mem_efficient=False):
- image = pipe(**inputs).images
- assert image.shape == (1, 64, 64, 3)
-
- image_slice = image[0, -3:, -3:, -1]
-
- expected_slice = np.array([0.1875, 0.1428, 0.1289, 0.2151, 0.2092, 0.1477, 0.1877, 0.1641, 0.1353])
-
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-3
-
- @require_torch_2
- def test_consistency_model_cd_onestep_flash_attn(self):
- unet = UNet2DModel.from_pretrained("diffusers/consistency_models", subfolder="diffusers_cd_imagenet64_l2")
- scheduler = CMStochasticIterativeScheduler(
- num_train_timesteps=40,
- sigma_min=0.002,
- sigma_max=80.0,
- )
- pipe = ConsistencyModelPipeline(unet=unet, scheduler=scheduler)
- pipe.to(torch_device=torch_device, torch_dtype=torch.float16)
- pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_inputs(get_fixed_latents=True, device=torch_device)
- inputs["num_inference_steps"] = 1
- inputs["timesteps"] = None
- # Ensure usage of flash attention in torch 2.0
- with sdp_kernel(enable_flash=True, enable_math=False, enable_mem_efficient=False):
- image = pipe(**inputs).images
- assert image.shape == (1, 64, 64, 3)
-
- image_slice = image[0, -3:, -3:, -1]
-
- expected_slice = np.array([0.1663, 0.1948, 0.2275, 0.1680, 0.1204, 0.1245, 0.1858, 0.1338, 0.2095])
-
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-3
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/apcnet/apcnet_r50-d8_512x1024_80k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/apcnet/apcnet_r50-d8_512x1024_80k_cityscapes.py
deleted file mode 100644
index 62a0627ae2e9bb17974068e56ee660093e944e0d..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/apcnet/apcnet_r50-d8_512x1024_80k_cityscapes.py
+++ /dev/null
@@ -1,4 +0,0 @@
-_base_ = [
- '../_base_/models/apcnet_r50-d8.py', '../_base_/datasets/cityscapes.py',
- '../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py'
-]
diff --git a/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/CLIP/data/yfcc100m.md b/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/CLIP/data/yfcc100m.md
deleted file mode 100644
index 575c54bc4bab3972878291c8d227a313c9fc766e..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/CLIP/data/yfcc100m.md
+++ /dev/null
@@ -1,14 +0,0 @@
-# The YFCC100M Subset
-
-In the paper, we performed a dataset ablation using a subset of the YFCC100M dataset and showed that the performance remained largely similar.
-
-The subset contains 14,829,396 images, about 15% of the full dataset, which have been filtered to only keep those with natural language titles and/or descriptions in English.
-
-We provide a list of (line number, photo identifier, photo hash) entries, one for each image in this subset; these correspond to the first three columns of the dataset's metadata TSV file.
-
-```
-wget https://openaipublic.azureedge.net/clip/data/yfcc100m_subset_data.tsv.bz2
-bunzip2 yfcc100m_subset_data.tsv.bz2
-```
-
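-To inspect the list after downloading, the TSV can be read directly. The following is a minimal sketch (assuming the extracted file keeps the name `yfcc100m_subset_data.tsv` and has no header row):
-
-```python
-import csv
-
-# each row lists: line number in the full YFCC100M metadata, photo identifier, photo hash
-with open("yfcc100m_subset_data.tsv", newline="") as f:
-    reader = csv.reader(f, delimiter="\t")
-    for row in reader:
-        line_number, photo_id, photo_hash = row[:3]
-        print(line_number, photo_id, photo_hash)
-        break  # show only the first record
-```
-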
-Use of the underlying media files is subject to the Creative Commons licenses chosen by their creators/uploaders. For more information about the YFCC100M dataset, visit [the official website](https://multimediacommons.wordpress.com/yfcc100m-core-dataset/).
\ No newline at end of file
diff --git a/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/optimization/losses.py b/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/optimization/losses.py
deleted file mode 100644
index bbe076f59af9259fab74ab7c2a02645b1dd3ab93..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/optimization/losses.py
+++ /dev/null
@@ -1,17 +0,0 @@
-from torch.nn import functional as F
-
-
-def d_clip_loss(x, y, use_cosine=False):
- x = F.normalize(x, dim=-1)
- y = F.normalize(y, dim=-1)
-
- if use_cosine:
- distance = 1 - (x @ y.t()).squeeze()
- else:
- distance = (x - y).norm(dim=-1).div(2).arcsin().pow(2).mul(2)
-
- return distance
-
-
-def range_loss(input):
- return (input - input.clamp(-1, 1)).pow(2).mean([1, 2, 3])
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/ldm/models/diffusion/ddim.py b/spaces/Anonymous-sub/Rerender/ControlNet/ldm/models/diffusion/ddim.py
deleted file mode 100644
index 27ead0ea914c64c747b64e690662899fb3801144..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/ldm/models/diffusion/ddim.py
+++ /dev/null
@@ -1,336 +0,0 @@
-"""SAMPLING ONLY."""
-
-import torch
-import numpy as np
-from tqdm import tqdm
-
-from ldm.modules.diffusionmodules.util import make_ddim_sampling_parameters, make_ddim_timesteps, noise_like, extract_into_tensor
-
-
-class DDIMSampler(object):
- def __init__(self, model, schedule="linear", **kwargs):
- super().__init__()
- self.model = model
- self.ddpm_num_timesteps = model.num_timesteps
- self.schedule = schedule
-
- def register_buffer(self, name, attr):
- if type(attr) == torch.Tensor:
- if attr.device != torch.device("cuda"):
- attr = attr.to(torch.device("cuda"))
- setattr(self, name, attr)
-
- def make_schedule(self, ddim_num_steps, ddim_discretize="uniform", ddim_eta=0., verbose=True):
- self.ddim_timesteps = make_ddim_timesteps(ddim_discr_method=ddim_discretize, num_ddim_timesteps=ddim_num_steps,
- num_ddpm_timesteps=self.ddpm_num_timesteps,verbose=verbose)
- alphas_cumprod = self.model.alphas_cumprod
- assert alphas_cumprod.shape[0] == self.ddpm_num_timesteps, 'alphas have to be defined for each timestep'
- to_torch = lambda x: x.clone().detach().to(torch.float32).to(self.model.device)
-
- self.register_buffer('betas', to_torch(self.model.betas))
- self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod))
- self.register_buffer('alphas_cumprod_prev', to_torch(self.model.alphas_cumprod_prev))
-
- # calculations for diffusion q(x_t | x_{t-1}) and others
- self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod.cpu())))
- self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod.cpu())))
- self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod.cpu())))
- self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu())))
- self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu() - 1)))
-
- # ddim sampling parameters
- ddim_sigmas, ddim_alphas, ddim_alphas_prev = make_ddim_sampling_parameters(alphacums=alphas_cumprod.cpu(),
- ddim_timesteps=self.ddim_timesteps,
- eta=ddim_eta,verbose=verbose)
- self.register_buffer('ddim_sigmas', ddim_sigmas)
- self.register_buffer('ddim_alphas', ddim_alphas)
- self.register_buffer('ddim_alphas_prev', ddim_alphas_prev)
- self.register_buffer('ddim_sqrt_one_minus_alphas', np.sqrt(1. - ddim_alphas))
- sigmas_for_original_sampling_steps = ddim_eta * torch.sqrt(
- (1 - self.alphas_cumprod_prev) / (1 - self.alphas_cumprod) * (
- 1 - self.alphas_cumprod / self.alphas_cumprod_prev))
- self.register_buffer('ddim_sigmas_for_original_num_steps', sigmas_for_original_sampling_steps)
-
- @torch.no_grad()
- def sample(self,
- S,
- batch_size,
- shape,
- conditioning=None,
- callback=None,
- normals_sequence=None,
- img_callback=None,
- quantize_x0=False,
- eta=0.,
- mask=None,
- x0=None,
- temperature=1.,
- noise_dropout=0.,
- score_corrector=None,
- corrector_kwargs=None,
- verbose=True,
- x_T=None,
- log_every_t=100,
- unconditional_guidance_scale=1.,
- unconditional_conditioning=None, # this has to come in the same format as the conditioning, # e.g. as encoded tokens, ...
- dynamic_threshold=None,
- ucg_schedule=None,
- **kwargs
- ):
- if conditioning is not None:
- if isinstance(conditioning, dict):
- ctmp = conditioning[list(conditioning.keys())[0]]
- while isinstance(ctmp, list): ctmp = ctmp[0]
- cbs = ctmp.shape[0]
- if cbs != batch_size:
- print(f"Warning: Got {cbs} conditionings but batch-size is {batch_size}")
-
- elif isinstance(conditioning, list):
- for ctmp in conditioning:
- if ctmp.shape[0] != batch_size:
-                        print(f"Warning: Got {ctmp.shape[0]} conditionings but batch-size is {batch_size}")
-
- else:
- if conditioning.shape[0] != batch_size:
- print(f"Warning: Got {conditioning.shape[0]} conditionings but batch-size is {batch_size}")
-
- self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=verbose)
- # sampling
- C, H, W = shape
- size = (batch_size, C, H, W)
- print(f'Data shape for DDIM sampling is {size}, eta {eta}')
-
- samples, intermediates = self.ddim_sampling(conditioning, size,
- callback=callback,
- img_callback=img_callback,
- quantize_denoised=quantize_x0,
- mask=mask, x0=x0,
- ddim_use_original_steps=False,
- noise_dropout=noise_dropout,
- temperature=temperature,
- score_corrector=score_corrector,
- corrector_kwargs=corrector_kwargs,
- x_T=x_T,
- log_every_t=log_every_t,
- unconditional_guidance_scale=unconditional_guidance_scale,
- unconditional_conditioning=unconditional_conditioning,
- dynamic_threshold=dynamic_threshold,
- ucg_schedule=ucg_schedule
- )
- return samples, intermediates
-
- @torch.no_grad()
- def ddim_sampling(self, cond, shape,
- x_T=None, ddim_use_original_steps=False,
- callback=None, timesteps=None, quantize_denoised=False,
- mask=None, x0=None, img_callback=None, log_every_t=100,
- temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None,
- unconditional_guidance_scale=1., unconditional_conditioning=None, dynamic_threshold=None,
- ucg_schedule=None):
- device = self.model.betas.device
- b = shape[0]
- if x_T is None:
- img = torch.randn(shape, device=device)
- else:
- img = x_T
-
- if timesteps is None:
- timesteps = self.ddpm_num_timesteps if ddim_use_original_steps else self.ddim_timesteps
- elif timesteps is not None and not ddim_use_original_steps:
- subset_end = int(min(timesteps / self.ddim_timesteps.shape[0], 1) * self.ddim_timesteps.shape[0]) - 1
- timesteps = self.ddim_timesteps[:subset_end]
-
- intermediates = {'x_inter': [img], 'pred_x0': [img]}
- time_range = reversed(range(0,timesteps)) if ddim_use_original_steps else np.flip(timesteps)
- total_steps = timesteps if ddim_use_original_steps else timesteps.shape[0]
- print(f"Running DDIM Sampling with {total_steps} timesteps")
-
- iterator = tqdm(time_range, desc='DDIM Sampler', total=total_steps)
-
- for i, step in enumerate(iterator):
- index = total_steps - i - 1
- ts = torch.full((b,), step, device=device, dtype=torch.long)
-
- if mask is not None:
- assert x0 is not None
- img_orig = self.model.q_sample(x0, ts) # TODO: deterministic forward pass?
- img = img_orig * mask + (1. - mask) * img
-
- if ucg_schedule is not None:
- assert len(ucg_schedule) == len(time_range)
- unconditional_guidance_scale = ucg_schedule[i]
-
- outs = self.p_sample_ddim(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps,
- quantize_denoised=quantize_denoised, temperature=temperature,
- noise_dropout=noise_dropout, score_corrector=score_corrector,
- corrector_kwargs=corrector_kwargs,
- unconditional_guidance_scale=unconditional_guidance_scale,
- unconditional_conditioning=unconditional_conditioning,
- dynamic_threshold=dynamic_threshold)
- img, pred_x0 = outs
- if callback: callback(i)
- if img_callback: img_callback(pred_x0, i)
-
- if index % log_every_t == 0 or index == total_steps - 1:
- intermediates['x_inter'].append(img)
- intermediates['pred_x0'].append(pred_x0)
-
- return img, intermediates
-
- @torch.no_grad()
- def p_sample_ddim(self, x, c, t, index, repeat_noise=False, use_original_steps=False, quantize_denoised=False,
- temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None,
- unconditional_guidance_scale=1., unconditional_conditioning=None,
- dynamic_threshold=None):
- b, *_, device = *x.shape, x.device
-
- if unconditional_conditioning is None or unconditional_guidance_scale == 1.:
- model_output = self.model.apply_model(x, t, c)
- else:
- x_in = torch.cat([x] * 2)
- t_in = torch.cat([t] * 2)
- if isinstance(c, dict):
- assert isinstance(unconditional_conditioning, dict)
- c_in = dict()
- for k in c:
- if isinstance(c[k], list):
- c_in[k] = [torch.cat([
- unconditional_conditioning[k][i],
- c[k][i]]) for i in range(len(c[k]))]
- else:
- c_in[k] = torch.cat([
- unconditional_conditioning[k],
- c[k]])
- elif isinstance(c, list):
- c_in = list()
- assert isinstance(unconditional_conditioning, list)
- for i in range(len(c)):
- c_in.append(torch.cat([unconditional_conditioning[i], c[i]]))
- else:
- c_in = torch.cat([unconditional_conditioning, c])
- model_uncond, model_t = self.model.apply_model(x_in, t_in, c_in).chunk(2)
- model_output = model_uncond + unconditional_guidance_scale * (model_t - model_uncond)
-
- if self.model.parameterization == "v":
- e_t = self.model.predict_eps_from_z_and_v(x, t, model_output)
- else:
- e_t = model_output
-
- if score_corrector is not None:
- assert self.model.parameterization == "eps", 'not implemented'
- e_t = score_corrector.modify_score(self.model, e_t, x, t, c, **corrector_kwargs)
-
- alphas = self.model.alphas_cumprod if use_original_steps else self.ddim_alphas
- alphas_prev = self.model.alphas_cumprod_prev if use_original_steps else self.ddim_alphas_prev
- sqrt_one_minus_alphas = self.model.sqrt_one_minus_alphas_cumprod if use_original_steps else self.ddim_sqrt_one_minus_alphas
- sigmas = self.model.ddim_sigmas_for_original_num_steps if use_original_steps else self.ddim_sigmas
- # select parameters corresponding to the currently considered timestep
- a_t = torch.full((b, 1, 1, 1), alphas[index], device=device)
- a_prev = torch.full((b, 1, 1, 1), alphas_prev[index], device=device)
- sigma_t = torch.full((b, 1, 1, 1), sigmas[index], device=device)
- sqrt_one_minus_at = torch.full((b, 1, 1, 1), sqrt_one_minus_alphas[index],device=device)
-
- # current prediction for x_0
- if self.model.parameterization != "v":
- pred_x0 = (x - sqrt_one_minus_at * e_t) / a_t.sqrt()
- else:
- pred_x0 = self.model.predict_start_from_z_and_v(x, t, model_output)
-
- if quantize_denoised:
- pred_x0, _, *_ = self.model.first_stage_model.quantize(pred_x0)
-
- if dynamic_threshold is not None:
- raise NotImplementedError()
-
- # direction pointing to x_t
- dir_xt = (1. - a_prev - sigma_t**2).sqrt() * e_t
- noise = sigma_t * noise_like(x.shape, device, repeat_noise) * temperature
- if noise_dropout > 0.:
- noise = torch.nn.functional.dropout(noise, p=noise_dropout)
- x_prev = a_prev.sqrt() * pred_x0 + dir_xt + noise
- return x_prev, pred_x0
-
- @torch.no_grad()
- def encode(self, x0, c, t_enc, use_original_steps=False, return_intermediates=None,
- unconditional_guidance_scale=1.0, unconditional_conditioning=None, callback=None):
- num_reference_steps = self.ddpm_num_timesteps if use_original_steps else self.ddim_timesteps.shape[0]
-
- assert t_enc <= num_reference_steps
- num_steps = t_enc
-
- if use_original_steps:
- alphas_next = self.alphas_cumprod[:num_steps]
- alphas = self.alphas_cumprod_prev[:num_steps]
- else:
- alphas_next = self.ddim_alphas[:num_steps]
- alphas = torch.tensor(self.ddim_alphas_prev[:num_steps])
-
- x_next = x0
- intermediates = []
- inter_steps = []
- for i in tqdm(range(num_steps), desc='Encoding Image'):
- t = torch.full((x0.shape[0],), i, device=self.model.device, dtype=torch.long)
- if unconditional_guidance_scale == 1.:
- noise_pred = self.model.apply_model(x_next, t, c)
- else:
- assert unconditional_conditioning is not None
- e_t_uncond, noise_pred = torch.chunk(
- self.model.apply_model(torch.cat((x_next, x_next)), torch.cat((t, t)),
- torch.cat((unconditional_conditioning, c))), 2)
- noise_pred = e_t_uncond + unconditional_guidance_scale * (noise_pred - e_t_uncond)
-
- xt_weighted = (alphas_next[i] / alphas[i]).sqrt() * x_next
- weighted_noise_pred = alphas_next[i].sqrt() * (
- (1 / alphas_next[i] - 1).sqrt() - (1 / alphas[i] - 1).sqrt()) * noise_pred
- x_next = xt_weighted + weighted_noise_pred
- if return_intermediates and i % (
- num_steps // return_intermediates) == 0 and i < num_steps - 1:
- intermediates.append(x_next)
- inter_steps.append(i)
- elif return_intermediates and i >= num_steps - 2:
- intermediates.append(x_next)
- inter_steps.append(i)
- if callback: callback(i)
-
- out = {'x_encoded': x_next, 'intermediate_steps': inter_steps}
- if return_intermediates:
- out.update({'intermediates': intermediates})
- return x_next, out
-
- @torch.no_grad()
- def stochastic_encode(self, x0, t, use_original_steps=False, noise=None):
- # fast, but does not allow for exact reconstruction
- # t serves as an index to gather the correct alphas
- if use_original_steps:
- sqrt_alphas_cumprod = self.sqrt_alphas_cumprod
- sqrt_one_minus_alphas_cumprod = self.sqrt_one_minus_alphas_cumprod
- else:
- sqrt_alphas_cumprod = torch.sqrt(self.ddim_alphas)
- sqrt_one_minus_alphas_cumprod = self.ddim_sqrt_one_minus_alphas
-
- if noise is None:
- noise = torch.randn_like(x0)
- return (extract_into_tensor(sqrt_alphas_cumprod, t, x0.shape) * x0 +
- extract_into_tensor(sqrt_one_minus_alphas_cumprod, t, x0.shape) * noise)
-
- @torch.no_grad()
- def decode(self, x_latent, cond, t_start, unconditional_guidance_scale=1.0, unconditional_conditioning=None,
- use_original_steps=False, callback=None):
-
- timesteps = np.arange(self.ddpm_num_timesteps) if use_original_steps else self.ddim_timesteps
- timesteps = timesteps[:t_start]
-
- time_range = np.flip(timesteps)
- total_steps = timesteps.shape[0]
- print(f"Running DDIM Sampling with {total_steps} timesteps")
-
- iterator = tqdm(time_range, desc='Decoding image', total=total_steps)
- x_dec = x_latent
- for i, step in enumerate(iterator):
- index = total_steps - i - 1
- ts = torch.full((x_latent.shape[0],), step, device=x_latent.device, dtype=torch.long)
- x_dec, _ = self.p_sample_ddim(x_dec, cond, ts, index=index, use_original_steps=use_original_steps,
- unconditional_guidance_scale=unconditional_guidance_scale,
- unconditional_conditioning=unconditional_conditioning)
- if callback: callback(i)
- return x_dec
\ No newline at end of file
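
The deleted sampler above is easiest to follow as two pieces: the classifier-free-guidance mix in `p_sample_ddim` and the DDIM update that turns a noise prediction into the previous latent. The sketch below restates just those two steps in isolation; it is a minimal illustration assuming an eps-parameterized model, with toy tensor shapes and alpha values that are not taken from the file.

```python
import torch

def cfg_combine(eps_uncond: torch.Tensor, eps_cond: torch.Tensor, scale: float) -> torch.Tensor:
    # Classifier-free guidance: push the conditional prediction away from the
    # unconditional one; scale == 1.0 reduces to the plain conditional output.
    return eps_uncond + scale * (eps_cond - eps_uncond)

def ddim_step(x_t, eps, a_t, a_prev, sigma_t):
    # One DDIM update: recover the x0 estimate from the noise prediction,
    # then step to the previous (less noisy) latent.
    # sigma_t == 0 gives the fully deterministic DDIM step.
    pred_x0 = (x_t - (1.0 - a_t).sqrt() * eps) / a_t.sqrt()
    dir_xt = (1.0 - a_prev - sigma_t ** 2).sqrt() * eps
    noise = sigma_t * torch.randn_like(x_t)
    return a_prev.sqrt() * pred_x0 + dir_xt + noise, pred_x0

# Toy usage with made-up cumulative-alpha values.
x_t = torch.randn(1, 4, 8, 8)
eps = cfg_combine(torch.randn_like(x_t), torch.randn_like(x_t), scale=7.5)
x_prev, pred_x0 = ddim_step(x_t, eps, a_t=torch.tensor(0.5),
                            a_prev=torch.tensor(0.7), sigma_t=torch.tensor(0.0))
```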
diff --git a/spaces/Apex-X/Tm/roop/processors/__init__.py b/spaces/Apex-X/Tm/roop/processors/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Arnx/MusicGenXvAKN/audiocraft/models/encodec.py b/spaces/Arnx/MusicGenXvAKN/audiocraft/models/encodec.py
deleted file mode 100644
index 69621a695887b0b41614c51cae020f6fd0af221d..0000000000000000000000000000000000000000
--- a/spaces/Arnx/MusicGenXvAKN/audiocraft/models/encodec.py
+++ /dev/null
@@ -1,302 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from abc import ABC, abstractmethod
-import typing as tp
-
-from einops import rearrange
-import torch
-from torch import nn
-
-from .. import quantization as qt
-
-
-class CompressionModel(ABC, nn.Module):
-
- @abstractmethod
- def forward(self, x: torch.Tensor) -> qt.QuantizedResult:
- ...
-
- @abstractmethod
- def encode(self, x: torch.Tensor) -> tp.Tuple[torch.Tensor, tp.Optional[torch.Tensor]]:
- """See `EncodecModel.encode`"""
- ...
-
- @abstractmethod
- def decode(self, codes: torch.Tensor, scale: tp.Optional[torch.Tensor] = None):
- """See `EncodecModel.decode`"""
- ...
-
- @property
- @abstractmethod
- def channels(self) -> int:
- ...
-
- @property
- @abstractmethod
- def frame_rate(self) -> int:
- ...
-
- @property
- @abstractmethod
- def sample_rate(self) -> int:
- ...
-
- @property
- @abstractmethod
- def cardinality(self) -> int:
- ...
-
- @property
- @abstractmethod
- def num_codebooks(self) -> int:
- ...
-
- @property
- @abstractmethod
- def total_codebooks(self) -> int:
- ...
-
- @abstractmethod
- def set_num_codebooks(self, n: int):
- """Set the active number of codebooks used by the quantizer.
- """
- ...
-
-
-class EncodecModel(CompressionModel):
- """Encodec model operating on the raw waveform.
-
- Args:
- encoder (nn.Module): Encoder network.
- decoder (nn.Module): Decoder network.
- quantizer (qt.BaseQuantizer): Quantizer network.
- frame_rate (int): Frame rate for the latent representation.
- sample_rate (int): Audio sample rate.
- channels (int): Number of audio channels.
- causal (bool): Whether to use a causal version of the model.
- renormalize (bool): Whether to renormalize the audio before running the model.
- """
- # we need assignment to override the property in the abstract class,
- # I couldn't find a better way...
- frame_rate: int = 0
- sample_rate: int = 0
- channels: int = 0
-
- def __init__(self,
- encoder: nn.Module,
- decoder: nn.Module,
- quantizer: qt.BaseQuantizer,
- frame_rate: int,
- sample_rate: int,
- channels: int,
- causal: bool = False,
- renormalize: bool = False):
- super().__init__()
- self.encoder = encoder
- self.decoder = decoder
- self.quantizer = quantizer
- self.frame_rate = frame_rate
- self.sample_rate = sample_rate
- self.channels = channels
- self.renormalize = renormalize
- self.causal = causal
- if self.causal:
- # we force disabling here to avoid handling linear overlap of segments
- # as supported in original EnCodec codebase.
- assert not self.renormalize, 'Causal model does not support renormalize'
-
- @property
- def total_codebooks(self):
- """Total number of quantizer codebooks available.
- """
- return self.quantizer.total_codebooks
-
- @property
- def num_codebooks(self):
- """Active number of codebooks used by the quantizer.
- """
- return self.quantizer.num_codebooks
-
- def set_num_codebooks(self, n: int):
- """Set the active number of codebooks used by the quantizer.
- """
- self.quantizer.set_num_codebooks(n)
-
- @property
- def cardinality(self):
- """Cardinality of each codebook.
- """
- return self.quantizer.bins
-
- def preprocess(self, x: torch.Tensor) -> tp.Tuple[torch.Tensor, tp.Optional[torch.Tensor]]:
- scale: tp.Optional[torch.Tensor]
- if self.renormalize:
- mono = x.mean(dim=1, keepdim=True)
- volume = mono.pow(2).mean(dim=2, keepdim=True).sqrt()
- scale = 1e-8 + volume
- x = x / scale
- scale = scale.view(-1, 1)
- else:
- scale = None
- return x, scale
-
- def postprocess(self,
- x: torch.Tensor,
- scale: tp.Optional[torch.Tensor] = None) -> torch.Tensor:
- if scale is not None:
- assert self.renormalize
- x = x * scale.view(-1, 1, 1)
- return x
-
- def forward(self, x: torch.Tensor) -> qt.QuantizedResult:
- assert x.dim() == 3
- length = x.shape[-1]
- x, scale = self.preprocess(x)
-
- emb = self.encoder(x)
- q_res = self.quantizer(emb, self.frame_rate)
- out = self.decoder(q_res.x)
-
- # remove extra padding added by the encoder and decoder
- assert out.shape[-1] >= length, (out.shape[-1], length)
- out = out[..., :length]
-
- q_res.x = self.postprocess(out, scale)
-
- return q_res
-
- def encode(self, x: torch.Tensor) -> tp.Tuple[torch.Tensor, tp.Optional[torch.Tensor]]:
- """Encode the given input tensor to quantized representation along with scale parameter.
-
- Args:
- x (torch.Tensor): Float tensor of shape [B, C, T]
-
- Returns:
- codes, scale (tp.Tuple[torch.Tensor, torch.Tensor]): Tuple composed of:
- codes, an int tensor of shape [B, K, T], with K the number of codebooks used and T the number of timesteps.
- scale, a float tensor containing the scale needed for audio renormalization.
- """
- assert x.dim() == 3
- x, scale = self.preprocess(x)
- emb = self.encoder(x)
- codes = self.quantizer.encode(emb)
- return codes, scale
-
- def decode(self, codes: torch.Tensor, scale: tp.Optional[torch.Tensor] = None):
- """Decode the given codes to a reconstructed representation, using the scale to perform
- audio denormalization if needed.
-
- Args:
- codes (torch.Tensor): Int tensor of shape [B, K, T]
- scale (tp.Optional[torch.Tensor]): Float tensor containing the scale value.
-
- Returns:
- out (torch.Tensor): Float tensor of shape [B, C, T], the reconstructed audio.
- """
- emb = self.quantizer.decode(codes)
- out = self.decoder(emb)
- out = self.postprocess(out, scale)
- # out contains extra padding added by the encoder and decoder
- return out
-
-
-class FlattenedCompressionModel(CompressionModel):
- """Wraps a CompressionModel and flatten its codebooks, e.g.
- instead of returning [B, K, T], return [B, S, T * (K // S)] with
- S the number of codebooks per step, and `K // S` the number of 'virtual steps'
- for each real time step.
-
- Args:
- model (CompressionModel): compression model to wrap.
- codebooks_per_step (int): number of codebooks to keep per step,
- this must divide the number of codebooks provided by the wrapped model.
- extend_cardinality (bool): if True then, for instance with codebooks_per_step = 1
- and codebooks of cardinality N, the first codebook will use the range
- [0, N - 1], the second [N, 2 N - 1], and so on.
- On decoding, this can lead to potentially invalid sequences.
- Any invalid entry will be silently remapped to the proper range
- with a modulo.
- """
- def __init__(self, model: CompressionModel, codebooks_per_step: int = 1,
- extend_cardinality: bool = True):
- super().__init__()
- self.model = model
- self.codebooks_per_step = codebooks_per_step
- self.extend_cardinality = extend_cardinality
-
- @property
- def total_codebooks(self):
- return self.model.total_codebooks
-
- @property
- def num_codebooks(self):
- """Active number of codebooks used by the quantizer.
-
- .. warning:: this reports the number of codebooks after the flattening
- of the codebooks!
- """
- assert self.model.num_codebooks % self.codebooks_per_step == 0
- return self.codebooks_per_step
-
- def set_num_codebooks(self, n: int):
- """Set the active number of codebooks used by the quantizer.
-
- .. warning:: this sets the number of codebooks **before** the flattening
- of the codebooks.
- """
- assert n % self.codebooks_per_step == 0
- self.model.set_num_codebooks(n)
-
- @property
- def num_virtual_steps(self) -> int:
- """Return the number of virtual steps, e.g. one real step
- will be split into that many steps.
- """
- return self.model.num_codebooks // self.codebooks_per_step
-
- @property
- def frame_rate(self) -> int:
- return self.model.frame_rate * self.num_virtual_steps
-
- @property
- def sample_rate(self) -> int:
- return self.model.sample_rate
-
- @property
- def channels(self) -> int:
- return self.model.channels
-
- @property
- def cardinality(self):
- """Cardinality of each codebook.
- """
- if self.extend_cardinality:
- return self.model.cardinality * self.num_virtual_steps
- else:
- return self.model.cardinality
-
- def forward(self, x: torch.Tensor) -> qt.QuantizedResult:
- raise NotImplementedError("Not supported, use encode and decode.")
-
- def encode(self, x: torch.Tensor) -> tp.Tuple[torch.Tensor, tp.Optional[torch.Tensor]]:
- indices, scales = self.model.encode(x)
- B, K, T = indices.shape
- indices = rearrange(indices, 'b (k v) t -> b k t v', k=self.codebooks_per_step)
- if self.extend_cardinality:
- for virtual_step in range(1, self.num_virtual_steps):
- indices[..., virtual_step] += self.model.cardinality * virtual_step
- indices = rearrange(indices, 'b k t v -> b k (t v)')
- return (indices, scales)
-
- def decode(self, codes: torch.Tensor, scale: tp.Optional[torch.Tensor] = None):
- B, K, T = codes.shape
- assert T % self.num_virtual_steps == 0
- codes = rearrange(codes, 'b k (t v) -> b (k v) t', v=self.num_virtual_steps)
- # We silently ignore potential errors from the LM when
- # using extend_cardinality.
- codes = codes % self.model.cardinality
- return self.model.decode(codes, scale)
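
The flattening performed by `FlattenedCompressionModel.encode` is easiest to see with concrete shapes. The snippet below reproduces only the reshaping and cardinality-offset logic on a dummy code tensor; the sizes are made up for illustration and the snippet is not part of the deleted module.

```python
import torch
from einops import rearrange

B, K, T = 1, 4, 3            # batch, codebooks of the wrapped model, real time steps
codebooks_per_step = 2       # S: codebooks kept per (virtual) step
cardinality = 1024           # bins per codebook in the wrapped model

codes = torch.randint(0, cardinality, (B, K, T))                  # [B, K, T]
codes = rearrange(codes, 'b (k v) t -> b k t v', k=codebooks_per_step)

# With extend_cardinality, virtual step v is shifted into the range [v*N, (v+1)*N - 1],
# so the flattened codebooks report a cardinality of N * num_virtual_steps.
num_virtual_steps = K // codebooks_per_step
for v in range(1, num_virtual_steps):
    codes[..., v] += cardinality * v

flat = rearrange(codes, 'b k t v -> b k (t v)')                   # [B, S, T * (K // S)]
print(flat.shape)                                                 # torch.Size([1, 2, 6])
```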
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/command/alias.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/command/alias.py
deleted file mode 100644
index 452a9244ea6766d8cf94425fb583583ef740baee..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/command/alias.py
+++ /dev/null
@@ -1,78 +0,0 @@
-from distutils.errors import DistutilsOptionError
-
-from setuptools.command.setopt import edit_config, option_base, config_file
-
-
-def shquote(arg):
- """Quote an argument for later parsing by shlex.split()"""
- for c in '"', "'", "\\", "#":
- if c in arg:
- return repr(arg)
- if arg.split() != [arg]:
- return repr(arg)
- return arg
-
-
-class alias(option_base):
- """Define a shortcut that invokes one or more commands"""
-
- description = "define a shortcut to invoke one or more commands"
- command_consumes_arguments = True
-
- user_options = [
- ('remove', 'r', 'remove (unset) the alias'),
- ] + option_base.user_options
-
- boolean_options = option_base.boolean_options + ['remove']
-
- def initialize_options(self):
- option_base.initialize_options(self)
- self.args = None
- self.remove = None
-
- def finalize_options(self):
- option_base.finalize_options(self)
- if self.remove and len(self.args) != 1:
- raise DistutilsOptionError(
- "Must specify exactly one argument (the alias name) when "
- "using --remove"
- )
-
- def run(self):
- aliases = self.distribution.get_option_dict('aliases')
-
- if not self.args:
- print("Command Aliases")
- print("---------------")
- for alias in aliases:
- print("setup.py alias", format_alias(alias, aliases))
- return
-
- elif len(self.args) == 1:
- alias, = self.args
- if self.remove:
- command = None
- elif alias in aliases:
- print("setup.py alias", format_alias(alias, aliases))
- return
- else:
- print("No alias definition found for %r" % alias)
- return
- else:
- alias = self.args[0]
- command = ' '.join(map(shquote, self.args[1:]))
-
- edit_config(self.filename, {'aliases': {alias: command}}, self.dry_run)
-
-
-def format_alias(name, aliases):
- source, command = aliases[name]
- if source == config_file('global'):
- source = '--global-config '
- elif source == config_file('user'):
- source = '--user-config '
- elif source == config_file('local'):
- source = ''
- else:
- source = '--filename=%r' % source
- return source + name + ' ' + command
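
As a reminder of how the deleted `alias` command is typically driven, the illustrative session below defines, inspects, and removes an alias. The alias name and command are made up, and which config file receives the definition depends on the `--global-config`/`--user-config` flags inherited from `option_base`.

```
# define an alias 'rebuild' that expands to two commands
python setup.py alias rebuild clean build

# list all aliases, or show a single definition
python setup.py alias
python setup.py alias rebuild

# use it, then remove it
python setup.py rebuild
python setup.py alias --remove rebuild

# the definition is stored under an [aliases] section, e.g. in setup.cfg:
# [aliases]
# rebuild = clean build
```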
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/grit/modeling/meta_arch/grit.py b/spaces/Awiny/Image2Paragraph/models/grit_src/grit/modeling/meta_arch/grit.py
deleted file mode 100644
index 101725fd455e723360eaafc26db37beb226a9233..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/grit/modeling/meta_arch/grit.py
+++ /dev/null
@@ -1,66 +0,0 @@
-from typing import Dict, List, Optional, Tuple
-import torch
-from detectron2.config import configurable
-from detectron2.structures import ImageList, Instances, Boxes
-from detectron2.modeling.meta_arch.build import META_ARCH_REGISTRY
-from detectron2.modeling.meta_arch.rcnn import GeneralizedRCNN
-
-
-@META_ARCH_REGISTRY.register()
-class GRiT(GeneralizedRCNN):
- @configurable
- def __init__(
- self,
- **kwargs):
- super().__init__(**kwargs)
- assert self.proposal_generator is not None
-
- @classmethod
- def from_config(cls, cfg):
- ret = super().from_config(cfg)
- return ret
-
- def inference(
- self,
- batched_inputs: Tuple[Dict[str, torch.Tensor]],
- detected_instances: Optional[List[Instances]] = None,
- do_postprocess: bool = True,
- ):
- assert not self.training
- assert detected_instances is None
-
- images = self.preprocess_image(batched_inputs)
- features = self.backbone(images.tensor)
- proposals, _ = self.proposal_generator(images, features, None)
- results, _ = self.roi_heads(features, proposals)
- if do_postprocess:
- assert not torch.jit.is_scripting(), \
- "Scripting is not supported for postprocess."
- return GRiT._postprocess(
- results, batched_inputs, images.image_sizes)
- else:
- return results
-
- def forward(self, batched_inputs: List[Dict[str, torch.Tensor]]):
- if not self.training:
- return self.inference(batched_inputs)
-
- images = self.preprocess_image(batched_inputs)
-
- gt_instances = [x["instances"].to(self.device) for x in batched_inputs]
-
- targets_task = batched_inputs[0]['task']
- for anno_per_image in batched_inputs:
- assert targets_task == anno_per_image['task']
-
- features = self.backbone(images.tensor)
- proposals, proposal_losses = self.proposal_generator(
- images, features, gt_instances)
- proposals, roihead_textdecoder_losses = self.roi_heads(
- features, proposals, gt_instances, targets_task=targets_task)
-
- losses = {}
- losses.update(roihead_textdecoder_losses)
- losses.update(proposal_losses)
-
- return losses
\ No newline at end of file
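
The deleted `GRiT` meta-architecture adds one requirement on top of detectron2's usual `GeneralizedRCNN` inputs: every element of `batched_inputs` in a batch must carry the same `task` string. A hypothetical minimal training input is sketched below; the `image`/`instances` keys follow the standard detectron2 convention, while the task string and tensor sizes are placeholders rather than values taken from the file.

```python
import torch
from detectron2.structures import Boxes, Instances

h, w = 480, 640
gt = Instances((h, w))
gt.gt_boxes = Boxes(torch.tensor([[10.0, 20.0, 200.0, 240.0]]))
gt.gt_classes = torch.tensor([0])

batched_inputs = [{
    "image": torch.zeros(3, h, w, dtype=torch.uint8),  # CHW image tensor
    "instances": gt,                                     # ground-truth boxes / classes
    "task": "ObjectDet",                                 # placeholder; must match across the batch
}]
```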
diff --git a/spaces/BalaBhaskarudu/Balu/README.md b/spaces/BalaBhaskarudu/Balu/README.md
deleted file mode 100644
index fdfd9f595330d2c7be5b58790e0f37b278038831..0000000000000000000000000000000000000000
--- a/spaces/BalaBhaskarudu/Balu/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Balu
-emoji: 📚
-colorFrom: gray
-colorTo: gray
-sdk: gradio
-sdk_version: 3.39.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Benson/text-generation/Examples/Cmo Descargar Bloodbox.md b/spaces/Benson/text-generation/Examples/Cmo Descargar Bloodbox.md
deleted file mode 100644
index 363e45eb01e2af770315e1d7c0db7d9266e2817a..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Cmo Descargar Bloodbox.md
+++ /dev/null
@@ -1,113 +0,0 @@
-
-Simulador de barco PC Descargar: Cómo experimentar la simulación realista de la navegación de varios buques
- ¿Alguna vez has soñado con navegar en un enorme buque portacontenedores, un rompehielos, una unidad de rescate, un buque cisterna o un crucero? ¿Quieres explorar las aguas más bellas y desafiantes del mundo, desde la Antártida hasta Bora Bora? ¿Quieres sentir la emoción de enfrentarte a la tormenta perfecta, salvar ballenas en peligro o manejar un puerto ocupado?
- Si respondiste sí a cualquiera de estas preguntas, entonces podrías estar interesado en jugar un juego de simulador de barcos en tu PC. Un juego de simulador de barcos es un tipo de juego de simulación que te permite controlar y operar diferentes tipos de embarcaciones en entornos y escenarios realistas. Puede aprender a navegar, maniobrar, atracar y manejar varias situaciones que los capitanes de barcos reales enfrentan todos los días.
-cómo descargar bloodbox
Download ->>->>->> https://bltlly.com/2v6LS4
- En este artículo, le mostraremos cómo elegir el mejor juego de simulador de barcos para su PC, cómo descargarlo e instalarlo, y cómo disfrutar de la simulación realista de navegar varios buques. También responderemos algunas preguntas frecuentes sobre juegos de simuladores de barcos. ¡Empecemos!
- ¿Qué es un juego de simulador de barcos?
- Una breve historia de juegos de simuladores de barcos
- Los juegos de simuladores de barcos no son un fenómeno nuevo. Han existido desde los primeros días de los juegos de ordenador, que se remontan a la década de 1980. Algunos de los primeros ejemplos de juegos de simuladores de barcos son Arpón, Naufragio, Puertos de llamada, y Servicio silencioso. Estos juegos se centraron en la guerra naval, el comercio o la simulación submarina.
- A medida que la tecnología avanzó, también lo hicieron los gráficos, la física y el realismo de los juegos de simuladores de barcos. En la década de 1990 y 2000, algunos de los juegos de simuladores de barcos más populares fueron Titanic: Adventure Out of Time, Virtual Sailor, Ship Simulator, y European Ship Simulator. Estos juegos introdujeron más variedad, interactividad y personalización al género.
-
- Los beneficios de jugar juegos de simulador de barcos
- Jugar juegos de simulador de barco puede ser divertido, relajante, educativo y gratificante. Estos son algunos de los beneficios de jugar juegos de simulador de barco:
-
-- Puedes experimentar la emoción y el desafío de navegar diferentes tipos de embarcaciones en diversas condiciones.
-- Puedes explorar la belleza y diversidad de los océanos, mares, ríos, lagos y puertos del mundo.
-- Puedes aprender sobre la historia, la cultura, la geografía y la ecología de diferentes regiones y países.
-- Puedes mejorar tu conciencia espacial, coordinación, resolución de problemas, toma de decisiones y habilidades de comunicación.
-- Puedes divertirte y relajarte creando tus propios escenarios, personalizando tus naves y compartiendo tus logros con otros jugadores.
-
- Cómo elegir el mejor juego de simulador de barco para su PC
- Las características a buscar en un juego de simulador de barco
- Hay muchos juegos de simulador de barcos disponibles en el mercado, pero no todos ellos valen su tiempo y dinero. Para ayudarle a elegir el mejor juego de simulador de barcos para su PC, aquí están algunas de las características para buscar:
-
-- Los gráficos y la calidad de sonido. Quieres un juego de simulador de barcos que tenga gráficos realistas y detallados, animaciones suaves y efectos de sonido inmersivos. También quieres un juego que admita pantallas de alta resolución, auriculares de realidad virtual y sistemas de sonido envolvente.
-- La física y el realismo. Usted quiere un juego de simulador de barco que tiene la física precisa y sensible, el comportamiento realista del agua, el clima dinámico, y los modelos realistas de daños y colisiones. También quieres un juego que simule los aspectos técnicos de la navegación, como la navegación, la comunicación, la gestión del motor y los procedimientos de seguridad.
-
-- La jugabilidad y la interactividad. Quieres un juego de simulador de barcos que tenga un juego atractivo y desafiante, con diferentes modos, niveles, objetivos y recompensas. También quieres un juego que tenga elementos interactivos, como NPCs, vida silvestre, tráfico, eventos y opciones multijugador.
-- El soporte y las actualizaciones. Quieres un juego de simulador de barcos que tenga buena atención al cliente, actualizaciones regulares, correcciones de errores y nuevo contenido. También quieres un juego que tenga una comunidad activa y amigable de jugadores, desarrolladores y modders.
-
- Los 3 mejores juegos de simuladores de barcos en Steam
- Para ahorrarte tiempo y esfuerzo, hemos seleccionado los 3 mejores juegos de simuladores de barcos en Steam según sus calificaciones, reseñas, características y popularidad. Aquí están:
- Extremos del simulador de buques
- Ship Simulator Extremes es uno de los juegos de simulación de barcos más populares y aclamados en Steam. Fue lanzado en 2010 por VSTEP y Paradox Interactive. Cuenta con más de 30 embarcaciones, desde lanchas rápidas hasta cruceros; más de 50 misiones, desde operaciones de rescate hasta campañas ambientales; más de 40 ubicaciones, desde Sydney a San Francisco; y efectos meteorológicos realistas, como lluvia, niebla, viento y olas.
- Ship Simulator Extremes también tiene un modo de campaña que te permite experimentar las historias de capitanes de la vida real; un modo de roaming gratuito que te permite explorar el mundo a tu propio ritmo; un modo multijugador que te permite jugar con o contra otros jugadores en línea; y un editor de misiones que te permite crear tus propios escenarios. También puedes descargar contenido adicional del Steam Workshop o del sitio web oficial.
-
- Ship Simulator Extremes está disponible en Steam por $19.99 USD. Requiere Windows XP o superior; procesador de 3 GHz o superior; 2 GB de RAM o superior; NVIDIA GeForce 8800 o superior; DirectX 9.0c o superior; 3 GB de espacio en disco o superior; conexión a Internet de banda ancha o superior.
- Simulador de nave realista
-
- Ship Simulator Realistic también tiene un modo de carrera que te permite comenzar como capitán novato y progresar a través de diferentes rangos y licencias; un modo sandbox que te permite personalizar tus embarcaciones y escenarios; un modo multijugador que le permite cooperar o competir con otros jugadores en línea; y un soporte de modificación que le permite agregar su propio contenido al juego.
- Ship Simulator Realistic está disponible en Steam por $24.99 USD. Requiere Windows 7 o superior; Intel i5-6400 o superior; 8 GB de RAM o superior; NVIDIA GTX 970 o superior; DirectX 11 o superior; 10 GB de espacio en disco o superior; conexión a Internet de banda ancha o superior.
- Naves 2022
- Ships 2022 es uno de los juegos de simulación de barcos más esperados en Steam. Se espera que sea lanzado a finales de 202 2 por Games Box S.A. Cuenta con más de 30 buques, desde barcos pesqueros hasta buques de guerra; más de 15 ubicaciones, desde el Mar Báltico hasta el Mar Caribe; más de 200 misiones, desde la pesca hasta la piratería; y gráficos realistas, sonido, clima y agua.
- Ships 2022 también tiene un modo de carrera que te permite construir tu propia flota y compañía; un modo sandbox que te permite navegar libremente y experimentar con diferentes embarcaciones y configuraciones; un modo multijugador que te permite unirte o alojar sesiones en línea con otros jugadores; y un soporte de taller que le permite acceder y compartir contenido generado por el usuario.
- Ships 2022 está disponible para pre-pedido en Steam por $29.99 USD. Requiere Windows 10 o superior; Intel Core i5-8400 o superior; 16 GB de RAM o superior; NVIDIA GeForce GTX 1060 o superior; DirectX 12 o superior; 20 GB de espacio en disco o superior; conexión a Internet de banda ancha o superior.
- Cómo descargar e instalar un juego de simulador de barcos en su PC
- Los requisitos para ejecutar un juego de simulador de buques en su PC
-
-
-- Un sistema operativo Windows (Windows XP, 7, 10, etc.)
-- Un procesador (Intel Core i5, i7, etc.)
-- Una memoria (RAM) (2 GB, 8 GB, 16 GB, etc.)
-- Una tarjeta gráfica (NVIDIA GeForce GTX 970, 1060, etc.)
-- Una versión DirectX (9.0c, 11, 12, etc.)
-- Un espacio en disco (3 GB, 10 GB, 20 GB, etc.)
-- Una conexión a Internet (banda ancha, inalámbrica, etc.)
-
- Si su PC no cumple con los requisitos, es posible que experimente un rendimiento deficiente, baja calidad gráfica, retraso, estrellarse u otros problemas durante el juego. Es posible que tenga que actualizar los componentes de su PC o bajar la configuración del juego para mejorar el juego.
- Los pasos para descargar e instalar un juego de simulador de barco en su PC
- Una vez que haya confirmado que su PC cumple con los requisitos, puede proceder a descargar e instalar un juego de simulador de buques en su PC. Estos son los pasos a seguir:
-
-- Ve al sitio web de Steam y crea una cuenta o inicia sesión en tu cuenta existente.
-- Descargue e instale el cliente de Steam en su PC.
-- Inicia el cliente de Steam e inicia sesión en tu cuenta.
-- Vaya a la pestaña Tienda y busque el juego de simulador de barcos que desea comprar.
-- Haga clic en el título del juego y luego haga clic en el botón Añadir al carrito.
-- Vaya a su carrito y haga clic en el botón Comprar para mí.
-- Elige tu método de pago y completa la transacción.
-- Vaya a la pestaña Biblioteca y encuentre el juego de simulador de barcos que compró.
-- Haga clic en el título del juego y luego haga clic en el botón Instalar.
-- Espera a que el juego se descargue e instale en tu PC.
-- Haga clic en el botón Jugar y disfrutar del juego!
-
- Cómo disfrutar de la simulación realista de la navegación de varios buques en un juego de simulador de barcos
- Los tipos de embarcaciones que puedes navegar en un juego de simulador de barcos
-
-
-- Buques de carga, como buques portacontenedores, graneleros, petroleros, etc.
-- Barcos de pasajeros, como cruceros, transbordadores, yates, etc.
-- Buques militares, como buques de guerra, submarinos, portaaviones, etc.
-- Buques de servicio, como remolcadores, unidades de rescate, buques de guardacostas , etc.
-- Barcos de pesca, como arrastreros, cangrejos, palangreros, etc.
-- Embarcaciones de recreo, como lanchas rápidas, veleros, motos acuáticas, etc.
-- Barcos históricos, como el Titanic, el HMS Victory, el USS Constitution, etc.
-
- Cada tipo de embarcación tiene sus propias características, ventajas, desventajas y desafíos. Es necesario aprender a operar de manera adecuada y eficiente, así como cómo hacer frente a las situaciones específicas y los riesgos que pueden encontrar.
- Los escenarios y misiones que puedes experimentar en un juego de simulador de barcos
- Otro aspecto de jugar un juego de simulador de barco es que puedes experimentar varios escenarios y misiones que ponen a prueba tus habilidades y conocimientos como capitán de barco. Dependiendo del juego que elijas, puedes experimentar:
-
-- Entrega de carga, donde tienes que transportar mercancías de un puerto a otro mientras gestionas tu combustible, carga, tripulación y tiempo.
-- Transporte de pasajeros, donde usted tiene que proporcionar un viaje seguro y cómodo para sus pasajeros al tratar con sus necesidades, solicitudes y quejas.
-- Operaciones militares, donde tienes que participar en misiones de combate, reconocimiento o apoyo, evitando el fuego enemigo, minas, torpedos y misiles.
-- Operaciones de rescate, donde tienes que salvar vidas y propiedades de desastres, accidentes o ataques mientras te enfrentas a condiciones climáticas adversas, incendios, inundaciones o piratas.
-- Campañas ambientales, donde hay que proteger la vida silvestre y el ecosistema de la contaminación, la caza furtiva o la pesca ilegal, mientras que la sensibilización y la recaudación de fondos para su causa.
-
-- Actividades recreativas, donde hay que disfrutar del paisaje y de los deportes acuáticos evitando colisiones, lesiones o multas.
-- Eventos históricos, donde tienes que revivir los famosos momentos e historias del pasado mientras te enfrentas a los mismos desafíos y peligros que las tripulaciones originales.
-
- Cada escenario y misión tiene sus propios objetivos, recompensas y consecuencias. Necesitas planificar tu estrategia cuidadosamente y ejecutarla hábilmente. También necesita adaptarse a las condiciones y circunstancias cambiantes que pueden afectar su rendimiento.
- Los consejos y trucos para mejorar sus habilidades de navegación en un juego de simulador de barco
- Jugar un juego de simulador de barco puede ser divertido y fácil si sabes lo que estás haciendo. Sin embargo, si eres nuevo en el género o quieres mejorar aún más tus habilidades de navegación, aquí hay algunos consejos y trucos que pueden ayudarte:
-
-- Leer el manual o ver el tutorial del juego antes de empezar a jugar. Esto te ayudará a entender los controles básicos, las características y la mecánica del juego.
-- Elige un recipiente y un escenario que coincida con tu nivel de habilidad e interés. Esto te ayudará a disfrutar más del juego y evitar la frustración o el aburrimiento.
-- Use el mapa, radar a Moto Racing, Ship Simulator Maritime Search and Rescue, Operaciones navales: Warship Gunner, y Sea of Thieves.
-
¿Cómo puedo jugar un juego de simulador de barco con un volante o un joystick?
- Algunos juegos de simulador de barcos admiten el uso de un volante o un joystick como dispositivo de entrada alternativo o adicional. Puedes comprobar la compatibilidad del juego con tu dispositivo en la página de Steam del juego o en el sitio web oficial. También es posible que necesites configurar la configuración del juego y tu dispositivo para permitir el uso del volante o del joystick.
- ¿Cómo puedo jugar un juego de simulador de barco con otros jugadores en línea?
-
- ¿Cómo puedo crear mi propio contenido para un juego de simulador de barcos?
- Algunos juegos de simulador de barcos tienen un soporte de modding que te permite crear tu propio contenido para el juego, como embarcaciones, ubicaciones, escenarios, misiones, etc. Puedes consultar la disponibilidad y características del soporte de modding en la página de Steam del juego o en el sitio web oficial. También es posible que necesite descargar e instalar una herramienta de modificación, seguir las instrucciones y directrices de los desarrolladores y módulos y compartir su contenido con otros jugadores.
64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Cmo Descargar Social Dummy En IOS.md b/spaces/Benson/text-generation/Examples/Cmo Descargar Social Dummy En IOS.md
deleted file mode 100644
index 67984237b61c4d515ec039943f342b247ff640bb..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Cmo Descargar Social Dummy En IOS.md
+++ /dev/null
@@ -1,56 +0,0 @@
-
-Cómo descargar Social Dummy en iOS
-Si usted está buscando una manera divertida y creativa para hacer capturas de pantalla de redes sociales falsas, es posible que desee probar Social Dummy. Social Dummy es una aplicación que te permite crear notas realistas en diferentes formatos, como Twitter, iMessage, Instagram, YouTube, Facebook, Tumblr, Snapchat, FaceTime, WhatsApp, Call, Spotify, Netflix, Safari e incluso la pantalla de bloqueo. En este artículo, te mostraremos cómo descargar social dummy en iOS y cómo usarlo para hacer tus propias capturas de pantalla de redes sociales falsas.
-cómo descargar social dummy en iOS
DOWNLOAD ===== https://bltlly.com/2v6JTE
- ¿Qué es Social Dummy?
-Social Dummy es una aplicación todo en uno que puede generar una captura de pantalla de redes sociales falsa de casi cualquier aplicación que desee. Puede crear falso Twitter, iMessage, Instagram, YouTube, Facebook, Tumblr, Snapchat, FaceTime, WhatsApp, Call, Spotify, Netflix, Safari, e incluso la pantalla de bloqueo. La aplicación le ofrece una forma única de estilizar sus notas en diferentes formatos con muchas opciones de personalización disponibles para usted. Puede editar el texto, imágenes, iconos, marcas de tiempo, nivel de batería, intensidad de la señal y otros detalles de sus notas. También puede elegir entre una lista de estilos que tienen diseños únicos para hacer que sus notas cobren vida.
- ¿Por qué descargar Social Dummy?
-Hay muchas razones por las que es posible que desee descargar social dummy en iOS. Aquí están algunos de ellos:
-
-- Puedes usarlo para divertirte y entretenerte. Puedes bromear con tus amigos con mensajes falsos o llamadas de celebridades o personajes ficticios. También puedes crear memes o chistes divertidos con publicaciones falsas en redes sociales.
-- Puedes usarlo para educación y aprendizaje. Puedes practicar tus habilidades de escritura creando tweets o textos falsos. También puedes aprender sobre diferentes plataformas de redes sociales y sus características explorando la aplicación.
-
-
- Cómo descargar Social Dummy en iOS
-Descargar dummy social en iOS es fácil y rápido. Solo tienes que seguir estos pasos:
- Paso 1: Abrir el App Store
-En tu iPhone o iPad, abre la aplicación App Store. Puedes encontrarla en tu pantalla de inicio o en tu biblioteca de aplicaciones.
-
- Paso 2: Búsqueda de Social Dummy
-Toca la pestaña Buscar en la esquina inferior derecha de la pantalla. Luego escribe "social dummy" en la barra de búsqueda y toca el botón Buscar en el teclado. Deberías ver la aplicación Social Dummy como el primer resultado. Tiene un icono azul con una cabeza blanca y un corazón rojo.
-
-
- Paso 3: Descargar Social Dummy
-Toca el botón Obtener o el icono de la nube junto a la aplicación Social Dummy. Es posible que necesites introducir tu contraseña de Apple ID o usar Face ID o Touch ID para confirmar tu descarga. La aplicación comenzará a descargar e instalar en su dispositivo. Puede ver el progreso en el icono de la aplicación.
-
- Paso 4: Abrir Social Dummy
-Una vez que la aplicación se descarga e instala, puede abrirla tocando el botón Abrir en la App Store o tocando el icono de la aplicación en la pantalla de inicio o en la biblioteca de aplicaciones. Verá una pantalla de bienvenida con el logotipo de la aplicación y luego una pantalla de bienvenida con información sobre la aplicación. Pulse Continuar.
-
- Cómo usar Social Dummy
-Usar social dummy en iOS es simple y divertido. Así es como puedes crear tus propias capturas de pantalla de redes sociales falsas con la aplicación:
- Elegir un estilo
-
-
- Personaliza tu contenido
-Después de elegir un estilo, verá una vista previa de su nota en la mitad superior de la pantalla y un menú de opciones en la mitad inferior. Puedes personalizar tu contenido tocando cualquiera de las opciones y editándolas como desees. Por ejemplo, puedes cambiar el nombre de usuario, la imagen del perfil, el texto del tweet, las imágenes, los iconos, los likes, los retweets, los comentarios y más. También puede pulsar sobre cualquier elemento de la vista previa para editarlo directamente. Para este ejemplo, crearemos un tweet falso de Elon Musk.
-
- Guardar o compartir su captura de pantalla
-Cuando esté satisfecho con su contenido, puede guardar o compartir su captura de pantalla tocando el botón Guardar o Compartir en la esquina superior derecha de la pantalla. Puede guardar su captura de pantalla como una imagen en la biblioteca de fotos de su dispositivo o compartirla con otros a través de correo electrónico, texto, redes sociales u otras aplicaciones. Para este ejemplo, guardaremos nuestra captura de pantalla en nuestra biblioteca de fotos.
-
- Conclusión
-Social Dummy es una aplicación increíble que te permite crear capturas de pantalla de redes sociales falsas de aspecto realista en diferentes formatos. Puede descargar social dummy en iOS desde la App Store y usarlo para hacer sus propias notas falsas en minutos. Puede personalizar su contenido con muchas opciones y elegir entre una variedad de estilos que tienen diseños únicos. También puede guardar o compartir sus capturas de pantalla con otros para fines de diversión, educación, trabajo o negocios. Descargar Social Dummy hoy y dar rienda suelta a su creatividad!
- Preguntas frecuentes
-
-- Q: ¿Es Social Dummy gratis?
-- A: Social Dummy es gratis de descargar y usar, pero algunos de los estilos requieren una compra en la aplicación para desbloquear. También puede eliminar anuncios y apoyar al desarrollador mediante la compra de la versión premium de la aplicación.
-
-- A: No, Social Dummy es solo para fines de entretenimiento. No debe usarlo para actividades ilegales o fraudulentas. El desarrollador no es responsable de ningún mal uso de la aplicación.
-- Q: ¿Cómo puedo contactar al desarrollador de Social Dummy?
-- A: Puede ponerse en contacto con el desarrollador de Social Dummy enviando un correo electrónico a support@socialdummy.app o visitando su sitio web en https://socialdummy.app/.
-- Q: ¿Cómo puedo actualizar Social Dummy?
-- A: Puede actualizar Social Dummy yendo a la App Store y tocando en Actualizaciones. A continuación, verá una lista de aplicaciones que tienen nuevas versiones disponibles. Toque en Social Dummy para actualizarlo a la última versión.
-- Q: ¿Cómo puedo eliminar Social Dummy?
-- A: Puede eliminar Social Dummy yendo a su pantalla de inicio o App Library y manteniendo pulsado el icono de la aplicación. A continuación, toque en el botón X que aparece en la esquina superior izquierda del icono. Confirme su eliminación pulsando en Eliminar.
-
64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Descargar Fuego Mx Gratis Para Ventanas 8.md b/spaces/Benson/text-generation/Examples/Descargar Fuego Mx Gratis Para Ventanas 8.md
deleted file mode 100644
index 68eeac2edce9bc7201fb89de052b4cb92f9c2535..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descargar Fuego Mx Gratis Para Ventanas 8.md
+++ /dev/null
@@ -1,83 +0,0 @@
-
-Descarga gratuita de Fire MAX para Windows 8: Cómo jugar el juego Ultimate Battle Royale en PC
-Si eres un fan de los juegos battle royale, es posible que hayas oído hablar de Free Fire, el juego móvil más descargado en 2020. Pero ¿sabías que hay una versión mejorada de este juego llamado Free Fire MAX? ¿Y sabías que puedes jugar en tu PC con Windows 8 con facilidad? En este artículo, te contaremos todo lo que necesitas saber sobre Free Fire MAX, cómo descargarlo e instalarlo en tu PC y cómo optimizar tu experiencia de juego. ¡Vamos a empezar!
-¿Qué es Free Fire MAX y por qué deberías jugarlo?
-Free Fire MAX es una aplicación móvil independiente, separada de la aplicación original Free Fire, que trae el popular shooter battle royale a la nueva década con una revisión gráfica total. Publicado por Garena International I para Android e iOS, Free Fire MAX ofrece imágenes increíblemente realistas con el mismo clásico, los aficionados al juego basado en habilidades saben y aman. Así que si tienes un dispositivo móvil de gran alcance con altas especificaciones, es el momento de dar el salto y experimentar Free Fire como nunca antes!
-descargar fuego máx gratis para ventanas 8
Download >> https://bltlly.com/2v6Kz8
-Características y beneficios de Free Fire MAX
-Free Fire MAX es un emocionante shooter de supervivencia en tercera persona para dispositivos móviles que arroja a 50 jugadores a una isla aislada para luchar por sus vidas. Primero, baja en paracaídas a la superficie de la isla y mantente con vida el mayor tiempo posible. Luego, eres tú contra ellos - recoge armas, armaduras, granadas y otros equipos para ayudarte a acumular las muertes y sobrevivir hasta que seas el último hombre en pie.
-Forma equipo con hasta 3 amigos para formar un equipo de 4 hombres. Enfréntate al mundo juntos y aumenta tus probabilidades de llegar al final del partido. Entonces, cuando solo queden un puñado de luchadores, te alegrarás de tener a tu equipo detrás de ti.
-
-Free Fire MAX también es compatible con la aplicación original Free Fire. Esto significa que los jugadores de ambas versiones pueden jugar entre sí sin ninguna ventaja adicional para cualquiera de las versiones. Las características mejoradas no afectan la capacidad de los jugadores para identificar o reaccionar a los elementos del juego.
-Compatibilidad y requisitos de Free Fire MAX
-Free Fire MAX está diseñado para dispositivos con especificaciones más altas que las requeridas por Free Fire. Por lo tanto, no todos los dispositivos pueden ejecutar este juego sin problemas. Estos son los requisitos mínimos y recomendados para los dispositivos Android:
-
-| Requisito | OS | GPU | CPU | Memoria | Almacenamiento |
-| --- | --- | --- | --- | --- | --- |
-| Mínimo | Android 4.1 o superior | Adreno 505 o superior | Octa-core 2.0 GHz o superior | 2 GB o superior | 1.5 GB o más |
-| Recomendado | Android 7.1 o superior | Adreno 616 o superior | Octa-core 2.2 GHz o superior | 4 GB o superior | 4 GB o más |
-
-Para dispositivos iOS, el requisito mínimo es iPhone 6S, mientras que el requisito recomendado es iPhone 8 o superior.
-Cómo descargar e instalar Free Fire MAX en Windows 8 PC
-Si desea jugar Free Fire MAX en su PC con Windows 8, tendrá que utilizar un emulador de Android que puede ejecutar el juego sin problemas y de manera eficiente. Un emulador es un software que le permite ejecutar aplicaciones y juegos Android en su PC mediante la simulación del entorno Android. Hay muchos emuladores disponibles para Windows, pero recomendaremos dos de los mejores para Free Fire MAX: BlueStacks y GameLoop.
-Método 1: Usando el emulador de BlueStacks
-BlueStacks es uno de los emuladores de Android más populares y confiables para PC, con más de 500 millones de usuarios en todo el mundo. Ofrece alto rendimiento, compatibilidad y personalización para Free Fire MAX, así como otras características como controles inteligentes, modo de disparo, modo FPS alto y más. Estos son los pasos para descargar e instalar Free Fire MAX en Windows 8 PC usando BlueStacks:
-
-Puedes descargar BlueStacks desde su sitio web oficial . El tamaño del archivo es de aproximadamente 1 GB, así que asegúrese de tener suficiente espacio y una conexión a Internet estable. Después de descargar el archivo, ejecútelo y siga las instrucciones para instalar BlueStacks en su PC.
-
-Paso 2: Completa el inicio de sesión de Google para acceder a la Play Store
-Después de instalar BlueStacks, iniciarlo y completar el proceso de inicio de sesión de Google para acceder a la Google Play Store. Puedes usar tu cuenta de Google existente o crear una nueva.
-Paso 3: Búsqueda de fuego libre MAX en la barra de búsqueda
-En la pantalla de inicio de BlueStacks, verá una barra de búsqueda en la esquina superior derecha. Escriba "Free Fire MAX" y pulse enter. Verás el icono del juego en los resultados de búsqueda.
-Paso 4: Haga clic para instalar Free Fire MAX desde los resultados de búsqueda
-Haz clic en el icono del juego y serás redirigido a la página de Google Play Store de Free Fire MAX. Haga clic en el botón "Instalar" y espere a que se complete el proceso de descarga e instalación. El tamaño de descarga de Free Fire MAX es de alrededor de 0,93 GB para Android . El tamaño de descarga será el mismo en el PC como el juego se descarga desde Google Play Store.
-Paso 5: Haga clic en el icono Free Fire MAX en la pantalla de inicio para comenzar a jugar
-Una vez realizada la instalación, verá el icono Free Fire MAX en la pantalla de inicio de BlueStacks. Haga clic en él y disfrutar de jugar Free Fire MAX en su PC con Windows 8.
-Método 2: Usando el emulador de GameLoop
-GameLoop es otro emulador de Android popular para PC, especialmente diseñado para juegos. Ofrece una jugabilidad suave, baja latencia y alta calidad gráfica para Free Fire MAX, así como otras características como controles de teclado y ratón, grabación de pantalla, transmisión en vivo y más. Estos son los pasos para descargar e instalar Free Fire MAX en Windows 8 PC usando GameLoop:
-Paso 1: Descarga e instala GameLoop en tu PC
-
-Paso 2: Abre GameLoop y busca Free Fire MAX
-Después de instalar GameLoop, iniciarlo y verá una lista de juegos en la pantalla de inicio. Haga clic en el icono "Buscar" en la esquina superior izquierda y escriba "Free Fire MAX". Verá el icono del juego en los resultados de búsqueda.
-Paso 3: Haga clic para descargar e instalar Free Fire MAX desde los resultados de búsqueda
-Haga clic en el icono del juego y verá un botón "Descargar" en la esquina inferior derecha. Haga clic en él y espere a que se complete el proceso de descarga e instalación. El tamaño de descarga de Free Fire MAX es de alrededor de 1,5 GB para GameLoop . El tamaño de descarga será diferente del PC ya que el juego se descarga desde el propio servidor de GameLoop.
-Paso 4: Disfruta jugando Free Fire MAX en GameLoop
-Una vez realizada la instalación, verá el icono Free Fire MAX en la pantalla de inicio de GameLoop. Haga clic en él y disfrutar de jugar Free Fire MAX en su PC con Windows 8.
-Cómo optimizar tu experiencia de juego Free Fire MAX en PC
-Jugar Free Fire MAX en PC tiene muchas ventajas sobre jugarlo en dispositivos móviles, como una pantalla más grande, mejores gráficos y controles más precisos. Sin embargo, para aprovechar al máximo tu experiencia de juego, necesitas optimizar algunos ajustes y características en tu emulador. Aquí hay algunos consejos para ayudarle a hacer eso:
-Ajustar la configuración de gráficos y la resolución
-Free Fire MAX tiene muchas opciones de gráficos que puedes personalizar según tu preferencia y la capacidad del PC. Puede acceder a estas opciones haciendo clic en el icono de engranaje en la esquina superior derecha de la pantalla del juego, luego ir a la pestaña "Gráficos". Aquí puede ajustar los siguientes ajustes:
-
-- Calidad gráfica: Esto determina la calidad general de los gráficos del juego, tales como texturas, sombras y efectos. Puede elegir entre Bajo, Estándar, Alto, Ultra o Personalizado. Cuanto mayor sea la calidad, más recursos consumirá.
-
-- FPS: Esto determina cuán suave y fluida es la animación del juego. Puedes elegir entre 30 FPS, 60 FPS o 90 FPS. Cuanto más alto sea el FPS, más recursos consumirá.
-- Anti-Aliasing: Esto determina cuán suaves y realistas son los bordes del juego. Puedes elegir entre Off, Low, Medium o High. Cuanto más alto sea el anti-aliasing, más recursos consumirá.
-- Brillo: Esto determina cuán brillantes u oscuros son los gráficos del juego. Puedes ajustarlo deslizando la barra hacia la izquierda o hacia la derecha.
-
-También puede activar o desactivar algunas características gráficas al activarlas o desactivarlas, como Auto Adjust Graphics, Shadows, Bloom Effect, Depth of Field, Ragdoll Effect y Grass Density.
-La mejor manera de encontrar la configuración gráfica óptima para su PC es experimentar con diferentes combinaciones y ver cuál le da el mejor equilibrio entre rendimiento y calidad. También puede usar el botón "Probar" en la esquina inferior derecha de la pestaña "Gráficos" para ver cómo sus ajustes afectan su uso de FPS y CPU.
-Habilitar el modo de alto FPS y el modo de disparo
-Otra forma de optimizar tu experiencia de juego Free Fire MAX en PC es habilitar algunas características en tu emulador que pueden mejorar tu juego. Por ejemplo, BlueStacks tiene un modo FPS alto y un modo de disparo que puede activar haciendo clic en sus iconos en el lado derecho de la ventana del emulador.
-El modo de alto FPS le permite jugar Free Fire MAX hasta 90 FPS , que puede hacer que su juego más suave y sensible. Para habilitar este modo, necesitas tener un PC que cumpla con los requisitos mínimos para 90 FPS , así como ajustar la configuración de FPS de tu juego en consecuencia.
-
-GameLoop también tiene características similares que puede habilitar haciendo clic en sus iconos en el lado derecho de la ventana del emulador. El modo de alto FPS le permite jugar Free Fire MAX hasta 120 FPS , que puede hacer que su juego aún más suave y más sensible. El modo de disparo le permite apuntar y disparar con el cursor del ratón, así como utilizar algunos atajos de teclado para acciones rápidas. También puede personalizar la configuración del modo de disparo haciendo clic en el icono de engranaje al lado.
-Usa controles de teclado y ratón o personaliza tu propio
-Una de las mayores ventajas de jugar Free Fire MAX en PC es que puedes usar tu teclado y ratón para controlar tu personaje y tus acciones, lo que puede darte más flexibilidad y precisión que usar una pantalla táctil. Tanto BlueStacks como GameLoop tienen controles predeterminados de teclado y ratón que puede usar de inmediato, o puede personalizar los suyos utilizando sus respectivas herramientas de asignación de teclas.
-Para acceder a la herramienta de asignación de teclas en BlueStacks, haga clic en el icono del teclado en el lado derecho de la ventana del emulador. Verás una lista de teclas predefinidas para diferentes acciones, como mover, apuntar, disparar, saltar, agacharse y más. También puedes arrastrar y soltar diferentes iconos clave en la pantalla del juego para asignarlos a botones o áreas específicas. Puede guardar su mapa de teclas personalizado haciendo clic en el botón "Guardar" en la esquina inferior derecha de la herramienta de asignación de teclas.
-Para acceder a la herramienta de asignación de teclas en GameLoop, haga clic en el icono del teclado en el lado derecho de la ventana del emulador. Verás una lista de teclas predefinidas para diferentes acciones, como mover, apuntar, disparar, saltar, agacharse y más. También puedes arrastrar y soltar diferentes iconos clave en la pantalla del juego para asignarlos a botones o áreas específicas. Puede guardar su mapa de teclas personalizado haciendo clic en el botón "Guardar" en la esquina inferior derecha de la herramienta de asignación de teclas.
-
-Conclusión y preguntas frecuentes
-Free Fire MAX es un increíble juego de battle royale que ofrece gráficos impresionantes, un juego emocionante y características emocionantes. Si quieres jugar en tu PC con Windows 8, puedes usar un emulador de Android como BlueStacks o GameLoop para descargarlo e instalarlo fácilmente. También puedes optimizar tu experiencia de juego ajustando algunos ajustes y características en tu emulador. Esperamos que este artículo le ha ayudado a aprender a jugar Free Fire MAX en PC. Ahora, vamos a responder a algunas preguntas frecuentes sobre Free Fire MAX:
-
-- Q: ¿Free Fire MAX es libre para jugar?
-- A: Sí, Free Fire MAX es gratis para jugar en dispositivos móviles y PC. Sin embargo, tiene algunas compras en el juego que puedes hacer con dinero real, como diamantes, pieles, personajes y más.
-- Q: ¿Puedo jugar Free Fire MAX con mis amigos que están usando la aplicación original Free Fire?
-- A: Sí, Free Fire MAX es compatible con la aplicación original Free Fire. Esto significa que puedes jugar con o contra jugadores de ambas versiones sin ninguna ventaja añadida para cualquiera de ellas.
-- Q: ¿Cómo puedo actualizar Free Fire MAX en PC?
-- A: Para actualizar Free Fire MAX en el PC, es necesario abrir el emulador e ir a la Google Play Store o el propio servidor de GameLoop. Luego, busque Free Fire MAX y haga clic en el botón "Actualizar" si hay una nueva versión disponible.
-- Q: ¿Cómo puedo transferir mi cuenta de Free Fire a Free Fire MAX?
-- A: Para transferir su cuenta de Free Fire a Free Fire MAX, debe vincular su cuenta a una de las plataformas compatibles, como Facebook, Google, VK o Huawei ID. Luego, debe iniciar sesión con la misma plataforma en Free Fire MAX y los datos de su cuenta se sincronizarán automáticamente.
-- Q: ¿Cómo puedo contactar al servicio al cliente para Free Fire MAX?
-
-
64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/BernardoOlisan/vqganclip/taming-transformers/README.md b/spaces/BernardoOlisan/vqganclip/taming-transformers/README.md
deleted file mode 100644
index d632e063b68d9b3dc07a3243fb0007edea4205b7..0000000000000000000000000000000000000000
--- a/spaces/BernardoOlisan/vqganclip/taming-transformers/README.md
+++ /dev/null
@@ -1,377 +0,0 @@
-# Taming Transformers for High-Resolution Image Synthesis
-##### CVPR 2021 (Oral)
-
-
-[**Taming Transformers for High-Resolution Image Synthesis**](https://compvis.github.io/taming-transformers/)
-[Patrick Esser](https://github.com/pesser)\*,
-[Robin Rombach](https://github.com/rromb)\*,
-[Björn Ommer](https://hci.iwr.uni-heidelberg.de/Staff/bommer)
-\* equal contribution
-
-**tl;dr** We combine the efficiency of convolutional approaches with the expressivity of transformers by introducing a convolutional VQGAN, which learns a codebook of context-rich visual parts, whose composition is modeled with an autoregressive transformer.
-
-
-[arXiv](https://arxiv.org/abs/2012.09841) | [BibTeX](#bibtex) | [Project Page](https://compvis.github.io/taming-transformers/)
-
-
-### News
-- Thanks to [rom1504](https://github.com/rom1504) it is now easy to [train a VQGAN on your own datasets](#training-on-custom-data).
-- Included a bugfix for the quantizer. For backward compatibility it is
- disabled by default (which corresponds to always training with `beta=1.0`).
- Use `legacy=False` in the quantizer config to enable it.
- Thanks [richcmwang](https://github.com/richcmwang) and [wcshin-git](https://github.com/wcshin-git)!
-- Our paper received an update: See https://arxiv.org/abs/2012.09841v3 and the corresponding changelog.
-- Added a pretrained, [1.4B transformer model](https://k00.fr/s511rwcv) trained for class-conditional ImageNet synthesis, which obtains state-of-the-art FID scores among autoregressive approaches and outperforms BigGAN.
-- Added pretrained, unconditional models on [FFHQ](https://k00.fr/yndvfu95) and [CelebA-HQ](https://k00.fr/2xkmielf).
-- Added accelerated sampling via caching of keys/values in the self-attention operation, used in `scripts/sample_fast.py`.
-- Added a checkpoint of a [VQGAN](https://heibox.uni-heidelberg.de/d/2e5662443a6b4307b470/) trained with f8 compression and Gumbel-Quantization.
- See also our updated [reconstruction notebook](https://colab.research.google.com/github/CompVis/taming-transformers/blob/master/scripts/reconstruction_usage.ipynb).
-- We added a [colab notebook](https://colab.research.google.com/github/CompVis/taming-transformers/blob/master/scripts/reconstruction_usage.ipynb) which compares two VQGANs and OpenAI's [DALL-E](https://github.com/openai/DALL-E). See also [this section](#more-resources).
-- We now include an overview of pretrained models in [Tab.1](#overview-of-pretrained-models). We added models for [COCO](#coco) and [ADE20k](#ade20k).
-- The streamlit demo now supports image completions.
-- We now include a couple of examples from the D-RIN dataset so you can run the
- [D-RIN demo](#d-rin) without preparing the dataset first.
-- You can now jump right into sampling with our [Colab quickstart notebook](https://colab.research.google.com/github/CompVis/taming-transformers/blob/master/scripts/taming-transformers.ipynb).
-
-## Requirements
-A suitable [conda](https://conda.io/) environment named `taming` can be created
-and activated with:
-
-```
-conda env create -f environment.yaml
-conda activate taming
-```
-## Overview of pretrained models
-The following table provides an overview of all models that are currently available.
-FID scores were evaluated using [torch-fidelity](https://github.com/toshas/torch-fidelity).
-For reference, we also include a link to the recently released autoencoder of the [DALL-E](https://github.com/openai/DALL-E) model.
-See the corresponding [colab
-notebook](https://colab.research.google.com/github/CompVis/taming-transformers/blob/master/scripts/reconstruction_usage.ipynb)
-for a comparison and discussion of reconstruction capabilities.
-
-| Dataset | FID vs train | FID vs val | Link | Samples (256x256) | Comments
-| ------------- | ------------- | ------------- |------------- | ------------- |------------- |
-| FFHQ (f=16) | 9.6 | -- | [ffhq_transformer](https://k00.fr/yndvfu95) | [ffhq_samples](https://k00.fr/j626x093) |
-| CelebA-HQ (f=16) | 10.2 | -- | [celebahq_transformer](https://k00.fr/2xkmielf) | [celebahq_samples](https://k00.fr/j626x093) |
-| ADE20K (f=16) | -- | 35.5 | [ade20k_transformer](https://k00.fr/ot46cksa) | [ade20k_samples.zip](https://heibox.uni-heidelberg.de/f/70bb78cbaf844501b8fb/) [2k] | evaluated on val split (2k images)
-| COCO-Stuff (f=16) | -- | 20.4 | [coco_transformer](https://k00.fr/2zz6i2ce) | [coco_samples.zip](https://heibox.uni-heidelberg.de/f/a395a9be612f4a7a8054/) [5k] | evaluated on val split (5k images)
-| ImageNet (cIN) (f=16) | 15.98/15.78/6.59/5.88/5.20 | -- | [cin_transformer](https://k00.fr/s511rwcv) | [cin_samples](https://k00.fr/j626x093) | different decoding hyperparameters |
-| | | | | | |
-| FacesHQ (f=16) | -- | -- | [faceshq_transformer](https://k00.fr/qqfl2do8) | | |
-| S-FLCKR (f=16) | -- | -- | [sflckr](https://heibox.uni-heidelberg.de/d/73487ab6e5314cb5adba/) | | |
-| D-RIN (f=16) | -- | -- | [drin_transformer](https://k00.fr/39jcugc5) | | |
-| | | | | | |
-| VQGAN ImageNet (f=16), 1024 | 10.54 | 7.94 | [vqgan_imagenet_f16_1024](https://heibox.uni-heidelberg.de/d/8088892a516d4e3baf92/) | [reconstructions](https://k00.fr/j626x093) | Reconstruction-FIDs. |
-| VQGAN ImageNet (f=16), 16384 | 7.41 | 4.98 | [vqgan_imagenet_f16_16384](https://heibox.uni-heidelberg.de/d/a7530b09fed84f80a887/) | [reconstructions](https://k00.fr/j626x093) | Reconstruction-FIDs. |
-| VQGAN OpenImages (f=8), 8192, GumbelQuantization | 3.24 | 1.49 | [vqgan_gumbel_f8](https://heibox.uni-heidelberg.de/d/2e5662443a6b4307b470/) | --- | Reconstruction-FIDs. |
-| | | | | | |
-| DALL-E dVAE (f=8), 8192, GumbelQuantization | 33.88 | 32.01 | https://github.com/openai/DALL-E | [reconstructions](https://k00.fr/j626x093) | Reconstruction-FIDs. |
-
-
-## Running pretrained models
-
-The commands below will start a streamlit demo which supports sampling at
-different resolutions and image completions. To run a non-interactive version
-of the sampling process, replace `streamlit run scripts/sample_conditional.py --`
-by `python scripts/make_samples.py --outdir ` and
-keep the remaining command line arguments.
-
-To sample from unconditional or class-conditional models,
-run `python scripts/sample_fast.py -r `.
-We describe below how to use this script to sample from the ImageNet, FFHQ, and CelebA-HQ models,
-respectively.
-
-### S-FLCKR
-
-
-You can also [run this model in a Colab
-notebook](https://colab.research.google.com/github/CompVis/taming-transformers/blob/master/scripts/taming-transformers.ipynb),
-which includes all necessary steps to start sampling.
-
-Download the
-[2020-11-09T13-31-51_sflckr](https://heibox.uni-heidelberg.de/d/73487ab6e5314cb5adba/)
-folder and place it into `logs`. Then, run
-```
-streamlit run scripts/sample_conditional.py -- -r logs/2020-11-09T13-31-51_sflckr/
-```
-
-### ImageNet
-
-
-Download the [2021-04-03T19-39-50_cin_transformer](https://k00.fr/s511rwcv)
-folder and place it into logs. Sampling from the class-conditional ImageNet
-model does not require any data preparation. To produce 50 samples for each of
-the 1000 classes of ImageNet, with k=600 for top-k sampling, p=0.92 for nucleus
-sampling and temperature t=1.0, run
-
-```
-python scripts/sample_fast.py -r logs/2021-04-03T19-39-50_cin_transformer/ -n 50 -k 600 -t 1.0 -p 0.92 --batch_size 25
-```
-
-To restrict the model to certain classes, provide them via the `--classes` argument, separated by
-commas. For example, to sample 50 *ostriches*, *border collies* and *whiskey jugs*, run
-
-```
-python scripts/sample_fast.py -r logs/2021-04-03T19-39-50_cin_transformer/ -n 50 -k 600 -t 1.0 -p 0.92 --batch_size 25 --classes 9,232,901
-```
-We recommend experimenting with the autoregressive decoding parameters (top-k, top-p and temperature) for best results.
-
-### FFHQ/CelebA-HQ
-
-Download the [2021-04-23T18-19-01_ffhq_transformer](https://k00.fr/yndvfu95) and
-[2021-04-23T18-11-19_celebahq_transformer](https://k00.fr/2xkmielf)
-folders and place them into logs.
-Again, sampling from these unconditional models does not require any data preparation.
-To produce 50000 samples, with k=250 for top-k sampling,
-p=1.0 for nucleus sampling and temperature t=1.0, run
-
-```
-python scripts/sample_fast.py -r logs/2021-04-23T18-19-01_ffhq_transformer/
-```
-for FFHQ and
-
-```
-python scripts/sample_fast.py -r logs/2021-04-23T18-11-19_celebahq_transformer/
-```
-to sample from the CelebA-HQ model.
-For both models it can be advantageous to vary the top-k/top-p parameters for sampling.
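-
-For example, a run with explicitly chosen decoding parameters (the values here are illustrative, not tuned recommendations) could look like
-
-```
-python scripts/sample_fast.py -r logs/2021-04-23T18-19-01_ffhq_transformer/ -k 300 -t 1.0 -p 0.92
-```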
-
-### FacesHQ
-
-
-Download [2020-11-13T21-41-45_faceshq_transformer](https://k00.fr/qqfl2do8) and
-place it into `logs`. Follow the data preparation steps for
-[CelebA-HQ](#celeba-hq) and [FFHQ](#ffhq). Run
-```
-streamlit run scripts/sample_conditional.py -- -r logs/2020-11-13T21-41-45_faceshq_transformer/
-```
-
-### D-RIN
-
-
-Download [2020-11-20T12-54-32_drin_transformer](https://k00.fr/39jcugc5) and
-place it into `logs`. To run the demo on a couple of example depth maps
-included in the repository, run
-
-```
-streamlit run scripts/sample_conditional.py -- -r logs/2020-11-20T12-54-32_drin_transformer/ --ignore_base_data data="{target: main.DataModuleFromConfig, params: {batch_size: 1, validation: {target: taming.data.imagenet.DRINExamples}}}"
-```
-
-To run the demo on the complete validation set, first follow the data preparation steps for
-[ImageNet](#imagenet) and then run
-```
-streamlit run scripts/sample_conditional.py -- -r logs/2020-11-20T12-54-32_drin_transformer/
-```
-
-### COCO
-Download [2021-01-20T16-04-20_coco_transformer](https://k00.fr/2zz6i2ce) and
-place it into `logs`. To run the demo on a couple of example segmentation maps
-included in the repository, run
-
-```
-streamlit run scripts/sample_conditional.py -- -r logs/2021-01-20T16-04-20_coco_transformer/ --ignore_base_data data="{target: main.DataModuleFromConfig, params: {batch_size: 1, validation: {target: taming.data.coco.Examples}}}"
-```
-
-### ADE20k
-Download [2020-11-20T21-45-44_ade20k_transformer](https://k00.fr/ot46cksa) and
-place it into `logs`. To run the demo on a couple of example segmentation maps
-included in the repository, run
-
-```
-streamlit run scripts/sample_conditional.py -- -r logs/2020-11-20T21-45-44_ade20k_transformer/ --ignore_base_data data="{target: main.DataModuleFromConfig, params: {batch_size: 1, validation: {target: taming.data.ade20k.Examples}}}"
-```
-
-## Training on custom data
-
-Training on your own dataset can be beneficial to get better tokens and hence better images for your domain.
-Those are the steps to follow to make this work:
-1. install the repo with `conda env create -f environment.yaml`, `conda activate taming` and `pip install -e .`
-2. put your .jpg files in a folder `your_folder`
-3. create two text files, `xx_train.txt` and `xx_test.txt`, that point to the files in your training and test set respectively (for example `find $(pwd)/your_folder -name "*.jpg" > train.txt`)
-4. adapt `configs/custom_vqgan.yaml` to point to these two files (a sketch of the relevant config section is shown after this list)
-5. run `python main.py --base configs/custom_vqgan.yaml -t True --gpus 0,1` to
-   train on two GPUs. Use `--gpus 0,` (with a trailing comma) to train on a single GPU.
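-
-A minimal sketch of what the data section of that config might look like (the key and class names here are assumptions based on the repository's bundled custom-data loaders; check `configs/custom_vqgan.yaml` for the exact layout):
-
-```
-data:
-  target: main.DataModuleFromConfig
-  params:
-    batch_size: 5
-    num_workers: 8
-    train:
-      target: taming.data.custom.CustomTrain
-      params:
-        training_images_list_file: xx_train.txt
-        size: 256
-    validation:
-      target: taming.data.custom.CustomTest
-      params:
-        test_images_list_file: xx_test.txt
-        size: 256
-```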
-
-## Data Preparation
-
-### ImageNet
-The code will try to download (through [Academic
-Torrents](http://academictorrents.com/)) and prepare ImageNet the first time it
-is used. However, since ImageNet is quite large, this requires a lot of disk
-space and time. If you already have ImageNet on your disk, you can speed things
-up by putting the data into
-`${XDG_CACHE}/autoencoders/data/ILSVRC2012_{split}/data/` (which defaults to
-`~/.cache/autoencoders/data/ILSVRC2012_{split}/data/`), where `{split}` is one
-of `train`/`validation`. It should have the following structure:
-
-```
-${XDG_CACHE}/autoencoders/data/ILSVRC2012_{split}/data/
-├── n01440764
-│ ├── n01440764_10026.JPEG
-│ ├── n01440764_10027.JPEG
-│ ├── ...
-├── n01443537
-│ ├── n01443537_10007.JPEG
-│ ├── n01443537_10014.JPEG
-│ ├── ...
-├── ...
-```
-
-If you haven't extracted the data, you can also place
-`ILSVRC2012_img_train.tar`/`ILSVRC2012_img_val.tar` (or symlinks to them) into
-`${XDG_CACHE}/autoencoders/data/ILSVRC2012_train/` /
-`${XDG_CACHE}/autoencoders/data/ILSVRC2012_validation/`, which will then be
-extracted into above structure without downloading it again. Note that this
-will only happen if neither a folder
-`${XDG_CACHE}/autoencoders/data/ILSVRC2012_{split}/data/` nor a file
-`${XDG_CACHE}/autoencoders/data/ILSVRC2012_{split}/.ready` exist. Remove them
-if you want to force running the dataset preparation again.
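-
-For example, if the archives already exist somewhere on disk (the paths below are placeholders), the symlinks could be created with
-
-```
-mkdir -p ~/.cache/autoencoders/data/ILSVRC2012_train ~/.cache/autoencoders/data/ILSVRC2012_validation
-ln -s /path/to/ILSVRC2012_img_train.tar ~/.cache/autoencoders/data/ILSVRC2012_train/
-ln -s /path/to/ILSVRC2012_img_val.tar ~/.cache/autoencoders/data/ILSVRC2012_validation/
-```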
-
-You will then need to prepare the depth data using
-[MiDaS](https://github.com/intel-isl/MiDaS). Create a symlink
-`data/imagenet_depth` pointing to a folder with two subfolders `train` and
-`val`, each mirroring the structure of the corresponding ImageNet folder
-described above and containing a `png` file for each of ImageNet's `JPEG`
-files. The `png` encodes `float32` depth values obtained from MiDaS as RGBA
-images. We provide the script `scripts/extract_depth.py` to generate this data.
-**Please note** that this script uses [MiDaS via PyTorch
-Hub](https://pytorch.org/hub/intelisl_midas_v2/). When we prepared the data,
-the hub provided the [MiDaS
-v2.0](https://github.com/intel-isl/MiDaS/releases/tag/v2) version, but now it
-provides a v2.1 version. We haven't tested our models with depth maps obtained
-via v2.1 and if you want to make sure that things work as expected, you must
-adjust the script to make sure it explicitly uses
-[v2.0](https://github.com/intel-isl/MiDaS/releases/tag/v2)!
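-
-For reference, the default Hub call from the page linked above looks like the snippet below; note that it pulls whatever MiDaS version the Hub currently serves (not necessarily v2.0), so for exact reproducibility the extraction script still needs to be pinned to the v2.0 release as described:
-
-```
-import torch
-
-# Default MiDaS entry point via PyTorch Hub; this serves the current release,
-# not necessarily v2.0 -- see the note above.
-midas = torch.hub.load("intel-isl/MiDaS", "MiDaS")
-midas.eval()
-```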
-
-### CelebA-HQ
-Create a symlink `data/celebahq` pointing to a folder containing the `.npy`
-files of CelebA-HQ (instructions to obtain them can be found in the [PGGAN
-repository](https://github.com/tkarras/progressive_growing_of_gans)).
-
-### FFHQ
-Create a symlink `data/ffhq` pointing to the `images1024x1024` folder obtained
-from the [FFHQ repository](https://github.com/NVlabs/ffhq-dataset).
-
-### S-FLCKR
-Unfortunately, we are not allowed to distribute the images we collected for the
-S-FLCKR dataset and can therefore only give a description how it was produced.
-There are many resources on [collecting images from the
-web](https://github.com/adrianmrit/flickrdatasets) to get started.
-We collected sufficiently large images from [flickr](https://www.flickr.com)
-(see `data/flickr_tags.txt` for a full list of tags used to find images)
-and various [subreddits](https://www.reddit.com/r/sfwpornnetwork/wiki/network)
-(see `data/subreddits.txt` for all subreddits that were used).
-Overall, we collected 107625 images, and split them randomly into 96861
-training images and 10764 validation images. We then obtained segmentation
-masks for each image using [DeepLab v2](https://arxiv.org/abs/1606.00915)
-trained on [COCO-Stuff](https://arxiv.org/abs/1612.03716). We used a [PyTorch
-reimplementation](https://github.com/kazuto1011/deeplab-pytorch) and include an
-example script for this process in `scripts/extract_segmentation.py`.
-
-### COCO
-Create a symlink `data/coco` containing the images from the 2017 split in
-`train2017` and `val2017`, and their annotations in `annotations`. Files can be
-obtained from the [COCO webpage](https://cocodataset.org/). In addition, we use
-the [Stuff+thing PNG-style annotations on COCO 2017
-trainval](http://calvin.inf.ed.ac.uk/wp-content/uploads/data/cocostuffdataset/stuffthingmaps_trainval2017.zip)
-annotations from [COCO-Stuff](https://github.com/nightrome/cocostuff), which
-should be placed under `data/cocostuffthings`.
-
-### ADE20k
-Create a symlink `data/ade20k_root` containing the contents of
-[ADEChallengeData2016.zip](http://data.csail.mit.edu/places/ADEchallenge/ADEChallengeData2016.zip)
-from the [MIT Scene Parsing Benchmark](http://sceneparsing.csail.mit.edu/).
-
-## Training models
-
-### FacesHQ
-
-Train a VQGAN with
-```
-python main.py --base configs/faceshq_vqgan.yaml -t True --gpus 0,
-```
-
-Then, adjust the checkpoint path of the config key
-`model.params.first_stage_config.params.ckpt_path` in
-`configs/faceshq_transformer.yaml` (or download
-[2020-11-09T13-33-36_faceshq_vqgan](https://k00.fr/uxy5usa9) and place into `logs`, which
-corresponds to the preconfigured checkpoint path), then run
-```
-python main.py --base configs/faceshq_transformer.yaml -t True --gpus 0,
-```
-
-### D-RIN
-
-Train a VQGAN on ImageNet with
-```
-python main.py --base configs/imagenet_vqgan.yaml -t True --gpus 0,
-```
-
-or download a pretrained one from [2020-09-23T17-56-33_imagenet_vqgan](https://k00.fr/u0j2dtac)
-and place under `logs`. If you trained your own, adjust the path in the config
-key `model.params.first_stage_config.params.ckpt_path` of
-`configs/drin_transformer.yaml`.
-
-Train a VQGAN on Depth Maps of ImageNet with
-```
-python main.py --base configs/imagenetdepth_vqgan.yaml -t True --gpus 0,
-```
-
-or download a pretrained one from [2020-11-03T15-34-24_imagenetdepth_vqgan](https://k00.fr/55rlxs6i)
-and place under `logs`. If you trained your own, adjust the path in the config
-key `model.params.cond_stage_config.params.ckpt_path` of
-`configs/drin_transformer.yaml`.
-
-To train the transformer, run
-```
-python main.py --base configs/drin_transformer.yaml -t True --gpus 0,
-```
-
-## More Resources
-### Comparing Different First Stage Models
-The reconstruction and compression capabilities of different first stage models can be analyzed in this [colab notebook](https://colab.research.google.com/github/CompVis/taming-transformers/blob/master/scripts/reconstruction_usage.ipynb).
-In particular, the notebook compares two VQGANs with a downsampling factor of f=16 and codebook sizes of 1024 and 16384, respectively,
-a VQGAN with f=8 and 8192 codebook entries, and the discrete autoencoder of OpenAI's [DALL-E](https://github.com/openai/DALL-E) (which also uses f=8 and 8192
-codebook entries).
-
-
-
-### Other
-- A [video summary](https://www.youtube.com/watch?v=o7dqGcLDf0A&feature=emb_imp_woyt) by [Two Minute Papers](https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg).
-- A [video summary](https://www.youtube.com/watch?v=-wDSDtIAyWQ) by [Gradient Dude](https://www.youtube.com/c/GradientDude/about).
-- A [weights and biases report summarizing the paper](https://wandb.ai/ayush-thakur/taming-transformer/reports/-Overview-Taming-Transformers-for-High-Resolution-Image-Synthesis---Vmlldzo0NjEyMTY)
-by [ayulockin](https://github.com/ayulockin).
-- A [video summary](https://www.youtube.com/watch?v=JfUTd8fjtX8&feature=emb_imp_woyt) by [What's AI](https://www.youtube.com/channel/UCUzGQrN-lyyc0BWTYoJM_Sg).
-- Take a look at [ak9250's notebook](https://github.com/ak9250/taming-transformers/blob/master/tamingtransformerscolab.ipynb) if you want to run the streamlit demos on Colab.
-
-### Text-to-Image Optimization via CLIP
-VQGAN has been successfully used as an image generator guided by the [CLIP](https://github.com/openai/CLIP) model, both for pure image generation
-from scratch and image-to-image translation. We recommend the following notebooks/videos/resources:
-
- - [Advadnouns](https://twitter.com/advadnoun/status/1389316507134357506) Patreon and corresponding LatentVision notebooks: https://www.patreon.com/patronizeme
- - The [notebook]( https://colab.research.google.com/drive/1L8oL-vLJXVcRzCFbPwOoMkPKJ8-aYdPN) of [Rivers Have Wings](https://twitter.com/RiversHaveWings).
- - A [video](https://www.youtube.com/watch?v=90QDe6DQXF4&t=12s) explanation by [Dot CSV](https://www.youtube.com/channel/UCy5znSnfMsDwaLlROnZ7Qbg) (in Spanish, but English subtitles are available)
-
-
-
-Text prompt: *'A bird drawn by a child'*
-
-## Shout-outs
-Thanks to everyone who makes their code and models available. In particular,
-
-- The architecture of our VQGAN is inspired by [Denoising Diffusion Probabilistic Models](https://github.com/hojonathanho/diffusion)
-- The very hackable transformer implementation [minGPT](https://github.com/karpathy/minGPT)
-- The good ol' [PatchGAN](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix) and [Learned Perceptual Similarity (LPIPS)](https://github.com/richzhang/PerceptualSimilarity)
-
-## BibTeX
-
-```
-@misc{esser2020taming,
- title={Taming Transformers for High-Resolution Image Synthesis},
- author={Patrick Esser and Robin Rombach and Björn Ommer},
- year={2020},
- eprint={2012.09841},
- archivePrefix={arXiv},
- primaryClass={cs.CV}
-}
-```
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/crt/auth.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/crt/auth.py
deleted file mode 100644
index 43b1819621aff5c5f0d15a21bf4548d15c6e3b6e..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/crt/auth.py
+++ /dev/null
@@ -1,629 +0,0 @@
-# Copyright 2022 Amazon.com, Inc. or its affiliates. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"). You
-# may not use this file except in compliance with the License. A copy of
-# the License is located at
-#
-# http://aws.amazon.com/apache2.0/
-#
-# or in the "license" file accompanying this file. This file is
-# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
-# ANY KIND, either express or implied. See the License for the specific
-# language governing permissions and limitations under the License.
-
-import datetime
-from io import BytesIO
-
-from botocore.auth import (
- SIGNED_HEADERS_BLACKLIST,
- STREAMING_UNSIGNED_PAYLOAD_TRAILER,
- UNSIGNED_PAYLOAD,
- BaseSigner,
- _get_body_as_dict,
- _host_from_url,
-)
-from botocore.compat import HTTPHeaders, awscrt, parse_qs, urlsplit, urlunsplit
-from botocore.exceptions import NoCredentialsError
-from botocore.utils import percent_encode_sequence
-
-
-class CrtSigV4Auth(BaseSigner):
- REQUIRES_REGION = True
- _PRESIGNED_HEADERS_BLOCKLIST = [
- 'Authorization',
- 'X-Amz-Date',
- 'X-Amz-Content-SHA256',
- 'X-Amz-Security-Token',
- ]
- _SIGNATURE_TYPE = awscrt.auth.AwsSignatureType.HTTP_REQUEST_HEADERS
- _USE_DOUBLE_URI_ENCODE = True
- _SHOULD_NORMALIZE_URI_PATH = True
-
- def __init__(self, credentials, service_name, region_name):
- self.credentials = credentials
- self._service_name = service_name
- self._region_name = region_name
- self._expiration_in_seconds = None
-
- def _is_streaming_checksum_payload(self, request):
- checksum_context = request.context.get('checksum', {})
- algorithm = checksum_context.get('request_algorithm')
- return isinstance(algorithm, dict) and algorithm.get('in') == 'trailer'
-
- def add_auth(self, request):
- if self.credentials is None:
- raise NoCredentialsError()
-
- # Use utcnow() because that's what gets mocked by tests, but set
- # timezone because CRT assumes naive datetime is local time.
- datetime_now = datetime.datetime.utcnow().replace(
- tzinfo=datetime.timezone.utc
- )
-
- # Use existing 'X-Amz-Content-SHA256' header if able
- existing_sha256 = self._get_existing_sha256(request)
-
- self._modify_request_before_signing(request)
-
- credentials_provider = awscrt.auth.AwsCredentialsProvider.new_static(
- access_key_id=self.credentials.access_key,
- secret_access_key=self.credentials.secret_key,
- session_token=self.credentials.token,
- )
-
- if self._is_streaming_checksum_payload(request):
- explicit_payload = STREAMING_UNSIGNED_PAYLOAD_TRAILER
- elif self._should_sha256_sign_payload(request):
- if existing_sha256:
- explicit_payload = existing_sha256
- else:
- explicit_payload = None # to be calculated during signing
- else:
- explicit_payload = UNSIGNED_PAYLOAD
-
- if self._should_add_content_sha256_header(explicit_payload):
- body_header = (
- awscrt.auth.AwsSignedBodyHeaderType.X_AMZ_CONTENT_SHA_256
- )
- else:
- body_header = awscrt.auth.AwsSignedBodyHeaderType.NONE
-
- signing_config = awscrt.auth.AwsSigningConfig(
- algorithm=awscrt.auth.AwsSigningAlgorithm.V4,
- signature_type=self._SIGNATURE_TYPE,
- credentials_provider=credentials_provider,
- region=self._region_name,
- service=self._service_name,
- date=datetime_now,
- should_sign_header=self._should_sign_header,
- use_double_uri_encode=self._USE_DOUBLE_URI_ENCODE,
- should_normalize_uri_path=self._SHOULD_NORMALIZE_URI_PATH,
- signed_body_value=explicit_payload,
- signed_body_header_type=body_header,
- expiration_in_seconds=self._expiration_in_seconds,
- )
- crt_request = self._crt_request_from_aws_request(request)
- future = awscrt.auth.aws_sign_request(crt_request, signing_config)
- future.result()
- self._apply_signing_changes(request, crt_request)
-
- def _crt_request_from_aws_request(self, aws_request):
- url_parts = urlsplit(aws_request.url)
- crt_path = url_parts.path if url_parts.path else '/'
- if aws_request.params:
- array = []
- for (param, value) in aws_request.params.items():
- value = str(value)
- array.append(f'{param}={value}')
- crt_path = crt_path + '?' + '&'.join(array)
- elif url_parts.query:
- crt_path = f'{crt_path}?{url_parts.query}'
-
- crt_headers = awscrt.http.HttpHeaders(aws_request.headers.items())
-
- # CRT requires body (if it exists) to be an I/O stream.
- crt_body_stream = None
- if aws_request.body:
- if hasattr(aws_request.body, 'seek'):
- crt_body_stream = aws_request.body
- else:
- crt_body_stream = BytesIO(aws_request.body)
-
- crt_request = awscrt.http.HttpRequest(
- method=aws_request.method,
- path=crt_path,
- headers=crt_headers,
- body_stream=crt_body_stream,
- )
- return crt_request
-
- def _apply_signing_changes(self, aws_request, signed_crt_request):
- # Apply changes from signed CRT request to the AWSRequest
- aws_request.headers = HTTPHeaders.from_pairs(
- list(signed_crt_request.headers)
- )
-
- def _should_sign_header(self, name, **kwargs):
- return name.lower() not in SIGNED_HEADERS_BLACKLIST
-
- def _modify_request_before_signing(self, request):
- # This could be a retry. Make sure the previous
- # authorization headers are removed first.
- for h in self._PRESIGNED_HEADERS_BLOCKLIST:
- if h in request.headers:
- del request.headers[h]
- # If necessary, add the host header
- if 'host' not in request.headers:
- request.headers['host'] = _host_from_url(request.url)
-
- def _get_existing_sha256(self, request):
- return request.headers.get('X-Amz-Content-SHA256')
-
- def _should_sha256_sign_payload(self, request):
- # Payloads will always be signed over insecure connections.
- if not request.url.startswith('https'):
- return True
-
- # Certain operations may have payload signing disabled by default.
- # Since we don't have access to the operation model, we pass in this
- # bit of metadata through the request context.
- return request.context.get('payload_signing_enabled', True)
-
- def _should_add_content_sha256_header(self, explicit_payload):
- # only add X-Amz-Content-SHA256 header if payload is explicitly set
- return explicit_payload is not None
-
-
-class CrtS3SigV4Auth(CrtSigV4Auth):
- # For S3, we do not normalize the path.
- _USE_DOUBLE_URI_ENCODE = False
- _SHOULD_NORMALIZE_URI_PATH = False
-
- def _get_existing_sha256(self, request):
- # always recalculate
- return None
-
- def _should_sha256_sign_payload(self, request):
- # S3 allows optional body signing, so to minimize the performance
- # impact, we opt to not SHA256 sign the body on streaming uploads,
- # provided that we're on https.
- client_config = request.context.get('client_config')
- s3_config = getattr(client_config, 's3', None)
-
- # The config could be None if it isn't set, or if the customer sets it
- # to None.
- if s3_config is None:
- s3_config = {}
-
- # The explicit configuration takes precedence over any implicit
- # configuration.
- sign_payload = s3_config.get('payload_signing_enabled', None)
- if sign_payload is not None:
- return sign_payload
-
- # We require that both a checksum be present and https be enabled
- # to implicitly disable body signing. The combination of TLS and
- # a checksum is sufficiently secure and durable for us to be
- # confident in the request without body signing.
- checksum_header = 'Content-MD5'
- checksum_context = request.context.get('checksum', {})
- algorithm = checksum_context.get('request_algorithm')
- if isinstance(algorithm, dict) and algorithm.get('in') == 'header':
- checksum_header = algorithm['name']
- if (
- not request.url.startswith('https')
- or checksum_header not in request.headers
- ):
- return True
-
- # If the input is streaming we disable body signing by default.
- if request.context.get('has_streaming_input', False):
- return False
-
- # If the S3-specific checks had no results, delegate to the generic
- # checks.
- return super()._should_sha256_sign_payload(request)
-
- def _should_add_content_sha256_header(self, explicit_payload):
- # Always add X-Amz-Content-SHA256 header
- return True
-
-
-class CrtSigV4AsymAuth(BaseSigner):
- REQUIRES_REGION = True
- _PRESIGNED_HEADERS_BLOCKLIST = [
- 'Authorization',
- 'X-Amz-Date',
- 'X-Amz-Content-SHA256',
- 'X-Amz-Security-Token',
- ]
- _SIGNATURE_TYPE = awscrt.auth.AwsSignatureType.HTTP_REQUEST_HEADERS
- _USE_DOUBLE_URI_ENCODE = True
- _SHOULD_NORMALIZE_URI_PATH = True
-
- def __init__(self, credentials, service_name, region_name):
- self.credentials = credentials
- self._service_name = service_name
- self._region_name = region_name
- self._expiration_in_seconds = None
-
- def add_auth(self, request):
- if self.credentials is None:
- raise NoCredentialsError()
-
- # Use utcnow() because that's what gets mocked by tests, but set
- # timezone because CRT assumes naive datetime is local time.
- datetime_now = datetime.datetime.utcnow().replace(
- tzinfo=datetime.timezone.utc
- )
-
- # Use existing 'X-Amz-Content-SHA256' header if able
- existing_sha256 = self._get_existing_sha256(request)
-
- self._modify_request_before_signing(request)
-
- credentials_provider = awscrt.auth.AwsCredentialsProvider.new_static(
- access_key_id=self.credentials.access_key,
- secret_access_key=self.credentials.secret_key,
- session_token=self.credentials.token,
- )
-
- if self._is_streaming_checksum_payload(request):
- explicit_payload = STREAMING_UNSIGNED_PAYLOAD_TRAILER
- elif self._should_sha256_sign_payload(request):
- if existing_sha256:
- explicit_payload = existing_sha256
- else:
- explicit_payload = None # to be calculated during signing
- else:
- explicit_payload = UNSIGNED_PAYLOAD
-
- if self._should_add_content_sha256_header(explicit_payload):
- body_header = (
- awscrt.auth.AwsSignedBodyHeaderType.X_AMZ_CONTENT_SHA_256
- )
- else:
- body_header = awscrt.auth.AwsSignedBodyHeaderType.NONE
-
- signing_config = awscrt.auth.AwsSigningConfig(
- algorithm=awscrt.auth.AwsSigningAlgorithm.V4_ASYMMETRIC,
- signature_type=self._SIGNATURE_TYPE,
- credentials_provider=credentials_provider,
- region=self._region_name,
- service=self._service_name,
- date=datetime_now,
- should_sign_header=self._should_sign_header,
- use_double_uri_encode=self._USE_DOUBLE_URI_ENCODE,
- should_normalize_uri_path=self._SHOULD_NORMALIZE_URI_PATH,
- signed_body_value=explicit_payload,
- signed_body_header_type=body_header,
- expiration_in_seconds=self._expiration_in_seconds,
- )
- crt_request = self._crt_request_from_aws_request(request)
- future = awscrt.auth.aws_sign_request(crt_request, signing_config)
- future.result()
- self._apply_signing_changes(request, crt_request)
-
- def _crt_request_from_aws_request(self, aws_request):
- url_parts = urlsplit(aws_request.url)
- crt_path = url_parts.path if url_parts.path else '/'
- if aws_request.params:
- array = []
- for (param, value) in aws_request.params.items():
- value = str(value)
- array.append(f'{param}={value}')
- crt_path = crt_path + '?' + '&'.join(array)
- elif url_parts.query:
- crt_path = f'{crt_path}?{url_parts.query}'
-
- crt_headers = awscrt.http.HttpHeaders(aws_request.headers.items())
-
- # CRT requires body (if it exists) to be an I/O stream.
- crt_body_stream = None
- if aws_request.body:
- if hasattr(aws_request.body, 'seek'):
- crt_body_stream = aws_request.body
- else:
- crt_body_stream = BytesIO(aws_request.body)
-
- crt_request = awscrt.http.HttpRequest(
- method=aws_request.method,
- path=crt_path,
- headers=crt_headers,
- body_stream=crt_body_stream,
- )
- return crt_request
-
- def _apply_signing_changes(self, aws_request, signed_crt_request):
- # Apply changes from signed CRT request to the AWSRequest
- aws_request.headers = HTTPHeaders.from_pairs(
- list(signed_crt_request.headers)
- )
-
- def _should_sign_header(self, name, **kwargs):
- return name.lower() not in SIGNED_HEADERS_BLACKLIST
-
- def _modify_request_before_signing(self, request):
- # This could be a retry. Make sure the previous
- # authorization headers are removed first.
- for h in self._PRESIGNED_HEADERS_BLOCKLIST:
- if h in request.headers:
- del request.headers[h]
- # If necessary, add the host header
- if 'host' not in request.headers:
- request.headers['host'] = _host_from_url(request.url)
-
- def _get_existing_sha256(self, request):
- return request.headers.get('X-Amz-Content-SHA256')
-
- def _is_streaming_checksum_payload(self, request):
- checksum_context = request.context.get('checksum', {})
- algorithm = checksum_context.get('request_algorithm')
- return isinstance(algorithm, dict) and algorithm.get('in') == 'trailer'
-
- def _should_sha256_sign_payload(self, request):
- # Payloads will always be signed over insecure connections.
- if not request.url.startswith('https'):
- return True
-
- # Certain operations may have payload signing disabled by default.
- # Since we don't have access to the operation model, we pass in this
- # bit of metadata through the request context.
- return request.context.get('payload_signing_enabled', True)
-
- def _should_add_content_sha256_header(self, explicit_payload):
- # only add X-Amz-Content-SHA256 header if payload is explicitly set
- return explicit_payload is not None
-
-
-class CrtS3SigV4AsymAuth(CrtSigV4AsymAuth):
- # For S3, we do not normalize the path.
- _USE_DOUBLE_URI_ENCODE = False
- _SHOULD_NORMALIZE_URI_PATH = False
-
- def _get_existing_sha256(self, request):
- # always recalculate
- return None
-
- def _should_sha256_sign_payload(self, request):
- # S3 allows optional body signing, so to minimize the performance
- # impact, we opt to not SHA256 sign the body on streaming uploads,
- # provided that we're on https.
- client_config = request.context.get('client_config')
- s3_config = getattr(client_config, 's3', None)
-
- # The config could be None if it isn't set, or if the customer sets it
- # to None.
- if s3_config is None:
- s3_config = {}
-
- # The explicit configuration takes precedence over any implicit
- # configuration.
- sign_payload = s3_config.get('payload_signing_enabled', None)
- if sign_payload is not None:
- return sign_payload
-
- # We require that both content-md5 be present and https be enabled
- # to implicitly disable body signing. The combination of TLS and
- # content-md5 is sufficiently secure and durable for us to be
- # confident in the request without body signing.
- if (
- not request.url.startswith('https')
- or 'Content-MD5' not in request.headers
- ):
- return True
-
- # If the input is streaming we disable body signing by default.
- if request.context.get('has_streaming_input', False):
- return False
-
- # If the S3-specific checks had no results, delegate to the generic
- # checks.
- return super()._should_sha256_sign_payload(request)
-
- def _should_add_content_sha256_header(self, explicit_payload):
- # Always add X-Amz-Content-SHA256 header
- return True
-
-
-class CrtSigV4AsymQueryAuth(CrtSigV4AsymAuth):
- DEFAULT_EXPIRES = 3600
- _SIGNATURE_TYPE = awscrt.auth.AwsSignatureType.HTTP_REQUEST_QUERY_PARAMS
-
- def __init__(
- self, credentials, service_name, region_name, expires=DEFAULT_EXPIRES
- ):
- super().__init__(credentials, service_name, region_name)
- self._expiration_in_seconds = expires
-
- def _modify_request_before_signing(self, request):
- super()._modify_request_before_signing(request)
-
- # We automatically set this header, so if it's the auto-set value we
- # want to get rid of it since it doesn't make sense for presigned urls.
- content_type = request.headers.get('content-type')
- if content_type == 'application/x-www-form-urlencoded; charset=utf-8':
- del request.headers['content-type']
-
- # Now parse the original query string to a dict, inject our new query
- # params, and serialize back to a query string.
- url_parts = urlsplit(request.url)
- # parse_qs makes each value a list, but in our case we know we won't
- # have repeated keys so we know we have single element lists which we
- # can convert back to scalar values.
- query_string_parts = parse_qs(url_parts.query, keep_blank_values=True)
- query_dict = {k: v[0] for k, v in query_string_parts.items()}
-
- # The spec is particular about this. It *has* to be:
-        # https://<endpoint>?<operation params>&<auth params>
- # You can't mix the two types of params together, i.e just keep doing
- # new_query_params.update(op_params)
- # new_query_params.update(auth_params)
- # percent_encode_sequence(new_query_params)
- if request.data:
- # We also need to move the body params into the query string. To
- # do this, we first have to convert it to a dict.
- query_dict.update(_get_body_as_dict(request))
- request.data = ''
- new_query_string = percent_encode_sequence(query_dict)
- # url_parts is a tuple (and therefore immutable) so we need to create
- # a new url_parts with the new query string.
- # -
- # scheme - 0
- # netloc - 1
- # path - 2
- # query - 3 <-- we're replacing this.
- # fragment - 4
- p = url_parts
- new_url_parts = (p[0], p[1], p[2], new_query_string, p[4])
- request.url = urlunsplit(new_url_parts)
-
- def _apply_signing_changes(self, aws_request, signed_crt_request):
- # Apply changes from signed CRT request to the AWSRequest
- super()._apply_signing_changes(aws_request, signed_crt_request)
-
- signed_query = urlsplit(signed_crt_request.path).query
- p = urlsplit(aws_request.url)
- # urlsplit() returns a tuple (and therefore immutable) so we
- # need to create new url with the new query string.
- # -
- # scheme - 0
- # netloc - 1
- # path - 2
- # query - 3 <-- we're replacing this.
- # fragment - 4
- aws_request.url = urlunsplit((p[0], p[1], p[2], signed_query, p[4]))
-
-
-class CrtS3SigV4AsymQueryAuth(CrtSigV4AsymQueryAuth):
- """S3 SigV4A auth using query parameters.
- This signer will sign a request using query parameters and signature
- version 4A, i.e a "presigned url" signer.
- """
-
- # For S3, we do not normalize the path.
- _USE_DOUBLE_URI_ENCODE = False
- _SHOULD_NORMALIZE_URI_PATH = False
-
- def _should_sha256_sign_payload(self, request):
- # From the doc link above:
- # "You don't include a payload hash in the Canonical Request, because
- # when you create a presigned URL, you don't know anything about the
- # payload. Instead, you use a constant string "UNSIGNED-PAYLOAD".
- return False
-
- def _should_add_content_sha256_header(self, explicit_payload):
- # Never add X-Amz-Content-SHA256 header
- return False
-
-
-class CrtSigV4QueryAuth(CrtSigV4Auth):
- DEFAULT_EXPIRES = 3600
- _SIGNATURE_TYPE = awscrt.auth.AwsSignatureType.HTTP_REQUEST_QUERY_PARAMS
-
- def __init__(
- self, credentials, service_name, region_name, expires=DEFAULT_EXPIRES
- ):
- super().__init__(credentials, service_name, region_name)
- self._expiration_in_seconds = expires
-
- def _modify_request_before_signing(self, request):
- super()._modify_request_before_signing(request)
-
- # We automatically set this header, so if it's the auto-set value we
- # want to get rid of it since it doesn't make sense for presigned urls.
- content_type = request.headers.get('content-type')
- if content_type == 'application/x-www-form-urlencoded; charset=utf-8':
- del request.headers['content-type']
-
- # Now parse the original query string to a dict, inject our new query
- # params, and serialize back to a query string.
- url_parts = urlsplit(request.url)
- # parse_qs makes each value a list, but in our case we know we won't
- # have repeated keys so we know we have single element lists which we
- # can convert back to scalar values.
- query_dict = {
- k: v[0]
- for k, v in parse_qs(
- url_parts.query, keep_blank_values=True
- ).items()
- }
- if request.params:
- query_dict.update(request.params)
- request.params = {}
- # The spec is particular about this. It *has* to be:
-        # https://<endpoint>?<operation params>&<auth params>
- # You can't mix the two types of params together, i.e just keep doing
- # new_query_params.update(op_params)
- # new_query_params.update(auth_params)
- # percent_encode_sequence(new_query_params)
- if request.data:
- # We also need to move the body params into the query string. To
- # do this, we first have to convert it to a dict.
- query_dict.update(_get_body_as_dict(request))
- request.data = ''
- new_query_string = percent_encode_sequence(query_dict)
- # url_parts is a tuple (and therefore immutable) so we need to create
- # a new url_parts with the new query string.
- # -
- # scheme - 0
- # netloc - 1
- # path - 2
- # query - 3 <-- we're replacing this.
- # fragment - 4
- p = url_parts
- new_url_parts = (p[0], p[1], p[2], new_query_string, p[4])
- request.url = urlunsplit(new_url_parts)
-
- def _apply_signing_changes(self, aws_request, signed_crt_request):
- # Apply changes from signed CRT request to the AWSRequest
- super()._apply_signing_changes(aws_request, signed_crt_request)
-
- signed_query = urlsplit(signed_crt_request.path).query
- p = urlsplit(aws_request.url)
- # urlsplit() returns a tuple (and therefore immutable) so we
- # need to create new url with the new query string.
- # -
- # scheme - 0
- # netloc - 1
- # path - 2
- # query - 3 <-- we're replacing this.
- # fragment - 4
- aws_request.url = urlunsplit((p[0], p[1], p[2], signed_query, p[4]))
-
-
-class CrtS3SigV4QueryAuth(CrtSigV4QueryAuth):
- """S3 SigV4 auth using query parameters.
- This signer will sign a request using query parameters and signature
- version 4, i.e a "presigned url" signer.
- Based off of:
- http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-query-string-auth.html
- """
-
- # For S3, we do not normalize the path.
- _USE_DOUBLE_URI_ENCODE = False
- _SHOULD_NORMALIZE_URI_PATH = False
-
- def _should_sha256_sign_payload(self, request):
- # From the doc link above:
- # "You don't include a payload hash in the Canonical Request, because
- # when you create a presigned URL, you don't know anything about the
- # payload. Instead, you use a constant string "UNSIGNED-PAYLOAD".
- return False
-
- def _should_add_content_sha256_header(self, explicit_payload):
- # Never add X-Amz-Content-SHA256 header
- return False
-
-
-# Defined at the bottom of module to ensure all Auth
-# classes are defined.
-CRT_AUTH_TYPE_MAPS = {
- 'v4': CrtSigV4Auth,
- 'v4-query': CrtSigV4QueryAuth,
- 'v4a': CrtSigV4AsymAuth,
- 's3v4': CrtS3SigV4Auth,
- 's3v4-query': CrtS3SigV4QueryAuth,
- 's3v4a': CrtS3SigV4AsymAuth,
- 's3v4a-query': CrtS3SigV4AsymQueryAuth,
-}
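-
-
-# Illustrative usage sketch (not part of the original module): pick a signer
-# class from CRT_AUTH_TYPE_MAPS and sign a request in place. This assumes the
-# public botocore.awsrequest.AWSRequest and botocore.credentials.Credentials
-# APIs and an installed awscrt; the credentials, bucket and region below are
-# placeholders.
-if __name__ == '__main__':
-    from botocore.awsrequest import AWSRequest
-    from botocore.credentials import Credentials
-
-    creds = Credentials('AKIDEXAMPLE', 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY')
-    signer = CRT_AUTH_TYPE_MAPS['s3v4'](
-        creds, service_name='s3', region_name='us-east-1'
-    )
-    request = AWSRequest(
-        method='GET',
-        url='https://examplebucket.s3.us-east-1.amazonaws.com/example.txt',
-    )
-    signer.add_auth(request)  # adds Authorization and X-Amz-* headers in place
-    print(dict(request.headers.items()))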
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/command/sdist.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/command/sdist.py
deleted file mode 100644
index d6e9489d1b1913f7090b225db69c42fc0454c17a..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/command/sdist.py
+++ /dev/null
@@ -1,531 +0,0 @@
-"""distutils.command.sdist
-
-Implements the Distutils 'sdist' command (create a source distribution)."""
-
-import os
-import sys
-from glob import glob
-from warnings import warn
-
-from distutils.core import Command
-from distutils import dir_util
-from distutils import file_util
-from distutils import archive_util
-from distutils.text_file import TextFile
-from distutils.filelist import FileList
-from distutils import log
-from distutils.util import convert_path
-from distutils.errors import DistutilsOptionError, DistutilsTemplateError
-
-
-def show_formats():
- """Print all possible values for the 'formats' option (used by
- the "--help-formats" command-line option).
- """
- from distutils.fancy_getopt import FancyGetopt
- from distutils.archive_util import ARCHIVE_FORMATS
-
- formats = []
- for format in ARCHIVE_FORMATS.keys():
- formats.append(("formats=" + format, None, ARCHIVE_FORMATS[format][2]))
- formats.sort()
- FancyGetopt(formats).print_help("List of available source distribution formats:")
-
-
-class sdist(Command):
-
- description = "create a source distribution (tarball, zip file, etc.)"
-
- def checking_metadata(self):
- """Callable used for the check sub-command.
-
- Placed here so user_options can view it"""
- return self.metadata_check
-
- user_options = [
- ('template=', 't', "name of manifest template file [default: MANIFEST.in]"),
- ('manifest=', 'm', "name of manifest file [default: MANIFEST]"),
- (
- 'use-defaults',
- None,
- "include the default file set in the manifest "
- "[default; disable with --no-defaults]",
- ),
- ('no-defaults', None, "don't include the default file set"),
- (
- 'prune',
- None,
- "specifically exclude files/directories that should not be "
- "distributed (build tree, RCS/CVS dirs, etc.) "
- "[default; disable with --no-prune]",
- ),
- ('no-prune', None, "don't automatically exclude anything"),
- (
- 'manifest-only',
- 'o',
- "just regenerate the manifest and then stop " "(implies --force-manifest)",
- ),
- (
- 'force-manifest',
- 'f',
- "forcibly regenerate the manifest and carry on as usual. "
- "Deprecated: now the manifest is always regenerated.",
- ),
- ('formats=', None, "formats for source distribution (comma-separated list)"),
- (
- 'keep-temp',
- 'k',
- "keep the distribution tree around after creating " + "archive file(s)",
- ),
- (
- 'dist-dir=',
- 'd',
- "directory to put the source distribution archive(s) in " "[default: dist]",
- ),
- (
- 'metadata-check',
- None,
- "Ensure that all required elements of meta-data "
- "are supplied. Warn if any missing. [default]",
- ),
- (
- 'owner=',
- 'u',
- "Owner name used when creating a tar file [default: current user]",
- ),
- (
- 'group=',
- 'g',
- "Group name used when creating a tar file [default: current group]",
- ),
- ]
-
- boolean_options = [
- 'use-defaults',
- 'prune',
- 'manifest-only',
- 'force-manifest',
- 'keep-temp',
- 'metadata-check',
- ]
-
- help_options = [
- ('help-formats', None, "list available distribution formats", show_formats),
- ]
-
- negative_opt = {'no-defaults': 'use-defaults', 'no-prune': 'prune'}
-
- sub_commands = [('check', checking_metadata)]
-
- READMES = ('README', 'README.txt', 'README.rst')
-
- def initialize_options(self):
- # 'template' and 'manifest' are, respectively, the names of
- # the manifest template and manifest file.
- self.template = None
- self.manifest = None
-
- # 'use_defaults': if true, we will include the default file set
- # in the manifest
- self.use_defaults = 1
- self.prune = 1
-
- self.manifest_only = 0
- self.force_manifest = 0
-
- self.formats = ['gztar']
- self.keep_temp = 0
- self.dist_dir = None
-
- self.archive_files = None
- self.metadata_check = 1
- self.owner = None
- self.group = None
-
- def finalize_options(self):
- if self.manifest is None:
- self.manifest = "MANIFEST"
- if self.template is None:
- self.template = "MANIFEST.in"
-
- self.ensure_string_list('formats')
-
- bad_format = archive_util.check_archive_formats(self.formats)
- if bad_format:
- raise DistutilsOptionError("unknown archive format '%s'" % bad_format)
-
- if self.dist_dir is None:
- self.dist_dir = "dist"
-
- def run(self):
- # 'filelist' contains the list of files that will make up the
- # manifest
- self.filelist = FileList()
-
- # Run sub commands
- for cmd_name in self.get_sub_commands():
- self.run_command(cmd_name)
-
- # Do whatever it takes to get the list of files to process
- # (process the manifest template, read an existing manifest,
- # whatever). File list is accumulated in 'self.filelist'.
- self.get_file_list()
-
- # If user just wanted us to regenerate the manifest, stop now.
- if self.manifest_only:
- return
-
- # Otherwise, go ahead and create the source distribution tarball,
- # or zipfile, or whatever.
- self.make_distribution()
-
- def check_metadata(self):
- """Deprecated API."""
- warn(
- "distutils.command.sdist.check_metadata is deprecated, \
- use the check command instead",
- PendingDeprecationWarning,
- )
- check = self.distribution.get_command_obj('check')
- check.ensure_finalized()
- check.run()
-
- def get_file_list(self):
- """Figure out the list of files to include in the source
- distribution, and put it in 'self.filelist'. This might involve
- reading the manifest template (and writing the manifest), or just
- reading the manifest, or just using the default file set -- it all
- depends on the user's options.
- """
- # new behavior when using a template:
- # the file list is recalculated every time because
- # even if MANIFEST.in or setup.py are not changed
- # the user might have added some files in the tree that
- # need to be included.
- #
- # This makes --force the default and only behavior with templates.
- template_exists = os.path.isfile(self.template)
- if not template_exists and self._manifest_is_not_generated():
- self.read_manifest()
- self.filelist.sort()
- self.filelist.remove_duplicates()
- return
-
- if not template_exists:
- self.warn(
- ("manifest template '%s' does not exist " + "(using default file list)")
- % self.template
- )
- self.filelist.findall()
-
- if self.use_defaults:
- self.add_defaults()
-
- if template_exists:
- self.read_template()
-
- if self.prune:
- self.prune_file_list()
-
- self.filelist.sort()
- self.filelist.remove_duplicates()
- self.write_manifest()
-
- def add_defaults(self):
- """Add all the default files to self.filelist:
- - README or README.txt
- - setup.py
- - test/test*.py
- - all pure Python modules mentioned in setup script
- - all files pointed by package_data (build_py)
- - all files defined in data_files.
- - all files defined as scripts.
- - all C sources listed as part of extensions or C libraries
- in the setup script (doesn't catch C headers!)
- Warns if (README or README.txt) or setup.py are missing; everything
- else is optional.
- """
- self._add_defaults_standards()
- self._add_defaults_optional()
- self._add_defaults_python()
- self._add_defaults_data_files()
- self._add_defaults_ext()
- self._add_defaults_c_libs()
- self._add_defaults_scripts()
-
- @staticmethod
- def _cs_path_exists(fspath):
- """
- Case-sensitive path existence check
-
- >>> sdist._cs_path_exists(__file__)
- True
- >>> sdist._cs_path_exists(__file__.upper())
- False
- """
- if not os.path.exists(fspath):
- return False
- # make absolute so we always have a directory
- abspath = os.path.abspath(fspath)
- directory, filename = os.path.split(abspath)
- return filename in os.listdir(directory)
-
- def _add_defaults_standards(self):
- standards = [self.READMES, self.distribution.script_name]
- for fn in standards:
- if isinstance(fn, tuple):
- alts = fn
- got_it = False
- for fn in alts:
- if self._cs_path_exists(fn):
- got_it = True
- self.filelist.append(fn)
- break
-
- if not got_it:
- self.warn(
- "standard file not found: should have one of " + ', '.join(alts)
- )
- else:
- if self._cs_path_exists(fn):
- self.filelist.append(fn)
- else:
- self.warn("standard file '%s' not found" % fn)
-
- def _add_defaults_optional(self):
- optional = ['test/test*.py', 'setup.cfg']
- for pattern in optional:
- files = filter(os.path.isfile, glob(pattern))
- self.filelist.extend(files)
-
- def _add_defaults_python(self):
- # build_py is used to get:
- # - python modules
- # - files defined in package_data
- build_py = self.get_finalized_command('build_py')
-
- # getting python files
- if self.distribution.has_pure_modules():
- self.filelist.extend(build_py.get_source_files())
-
- # getting package_data files
- # (computed in build_py.data_files by build_py.finalize_options)
- for pkg, src_dir, build_dir, filenames in build_py.data_files:
- for filename in filenames:
- self.filelist.append(os.path.join(src_dir, filename))
-
- def _add_defaults_data_files(self):
- # getting distribution.data_files
- if self.distribution.has_data_files():
- for item in self.distribution.data_files:
- if isinstance(item, str):
- # plain file
- item = convert_path(item)
- if os.path.isfile(item):
- self.filelist.append(item)
- else:
- # a (dirname, filenames) tuple
- dirname, filenames = item
- for f in filenames:
- f = convert_path(f)
- if os.path.isfile(f):
- self.filelist.append(f)
-
- def _add_defaults_ext(self):
- if self.distribution.has_ext_modules():
- build_ext = self.get_finalized_command('build_ext')
- self.filelist.extend(build_ext.get_source_files())
-
- def _add_defaults_c_libs(self):
- if self.distribution.has_c_libraries():
- build_clib = self.get_finalized_command('build_clib')
- self.filelist.extend(build_clib.get_source_files())
-
- def _add_defaults_scripts(self):
- if self.distribution.has_scripts():
- build_scripts = self.get_finalized_command('build_scripts')
- self.filelist.extend(build_scripts.get_source_files())
-
- def read_template(self):
- """Read and parse manifest template file named by self.template.
-
- (usually "MANIFEST.in") The parsing and processing is done by
- 'self.filelist', which updates itself accordingly.
- """
- log.info("reading manifest template '%s'", self.template)
- template = TextFile(
- self.template,
- strip_comments=1,
- skip_blanks=1,
- join_lines=1,
- lstrip_ws=1,
- rstrip_ws=1,
- collapse_join=1,
- )
-
- try:
- while True:
- line = template.readline()
- if line is None: # end of file
- break
-
- try:
- self.filelist.process_template_line(line)
- # the call above can raise a DistutilsTemplateError for
- # malformed lines, or a ValueError from the lower-level
- # convert_path function
- except (DistutilsTemplateError, ValueError) as msg:
- self.warn(
- "%s, line %d: %s"
- % (template.filename, template.current_line, msg)
- )
- finally:
- template.close()
-
- def prune_file_list(self):
- """Prune off branches that might slip into the file list as created
- by 'read_template()', but really don't belong there:
- * the build tree (typically "build")
- * the release tree itself (only an issue if we ran "sdist"
- previously with --keep-temp, or it aborted)
- * any RCS, CVS, .svn, .hg, .git, .bzr, _darcs directories
- """
- build = self.get_finalized_command('build')
- base_dir = self.distribution.get_fullname()
-
- self.filelist.exclude_pattern(None, prefix=build.build_base)
- self.filelist.exclude_pattern(None, prefix=base_dir)
-
- if sys.platform == 'win32':
- seps = r'/|\\'
- else:
- seps = '/'
-
- vcs_dirs = ['RCS', 'CVS', r'\.svn', r'\.hg', r'\.git', r'\.bzr', '_darcs']
- vcs_ptrn = r'(^|{})({})({}).*'.format(seps, '|'.join(vcs_dirs), seps)
- self.filelist.exclude_pattern(vcs_ptrn, is_regex=1)
-
- def write_manifest(self):
- """Write the file list in 'self.filelist' (presumably as filled in
- by 'add_defaults()' and 'read_template()') to the manifest file
- named by 'self.manifest'.
- """
- if self._manifest_is_not_generated():
- log.info(
- "not writing to manually maintained "
- "manifest file '%s'" % self.manifest
- )
- return
-
- content = self.filelist.files[:]
- content.insert(0, '# file GENERATED by distutils, do NOT edit')
- self.execute(
- file_util.write_file,
- (self.manifest, content),
- "writing manifest file '%s'" % self.manifest,
- )
-
- def _manifest_is_not_generated(self):
- # check for special comment used in 3.1.3 and higher
- if not os.path.isfile(self.manifest):
- return False
-
- fp = open(self.manifest)
- try:
- first_line = fp.readline()
- finally:
- fp.close()
- return first_line != '# file GENERATED by distutils, do NOT edit\n'
-
- def read_manifest(self):
- """Read the manifest file (named by 'self.manifest') and use it to
- fill in 'self.filelist', the list of files to include in the source
- distribution.
- """
- log.info("reading manifest file '%s'", self.manifest)
- with open(self.manifest) as manifest:
- for line in manifest:
- # ignore comments and blank lines
- line = line.strip()
- if line.startswith('#') or not line:
- continue
- self.filelist.append(line)
-
- def make_release_tree(self, base_dir, files):
- """Create the directory tree that will become the source
- distribution archive. All directories implied by the filenames in
- 'files' are created under 'base_dir', and then we hard link or copy
- (if hard linking is unavailable) those files into place.
- Essentially, this duplicates the developer's source tree, but in a
- directory named after the distribution, containing only the files
- to be distributed.
- """
- # Create all the directories under 'base_dir' necessary to
- # put 'files' there; the 'mkpath()' is just so we don't die
- # if the manifest happens to be empty.
- self.mkpath(base_dir)
- dir_util.create_tree(base_dir, files, dry_run=self.dry_run)
-
- # And walk over the list of files, either making a hard link (if
- # os.link exists) to each one that doesn't already exist in its
- # corresponding location under 'base_dir', or copying each file
- # that's out-of-date in 'base_dir'. (Usually, all files will be
- # out-of-date, because by default we blow away 'base_dir' when
- # we're done making the distribution archives.)
-
- if hasattr(os, 'link'): # can make hard links on this system
- link = 'hard'
- msg = "making hard links in %s..." % base_dir
- else: # nope, have to copy
- link = None
- msg = "copying files to %s..." % base_dir
-
- if not files:
- log.warn("no files to distribute -- empty manifest?")
- else:
- log.info(msg)
- for file in files:
- if not os.path.isfile(file):
- log.warn("'%s' not a regular file -- skipping", file)
- else:
- dest = os.path.join(base_dir, file)
- self.copy_file(file, dest, link=link)
-
- self.distribution.metadata.write_pkg_info(base_dir)
-
- def make_distribution(self):
- """Create the source distribution(s). First, we create the release
- tree with 'make_release_tree()'; then, we create all required
- archive files (according to 'self.formats') from the release tree.
- Finally, we clean up by blowing away the release tree (unless
- 'self.keep_temp' is true). The list of archive files created is
- stored so it can be retrieved later by 'get_archive_files()'.
- """
- # Don't warn about missing meta-data here -- should be (and is!)
- # done elsewhere.
- base_dir = self.distribution.get_fullname()
- base_name = os.path.join(self.dist_dir, base_dir)
-
- self.make_release_tree(base_dir, self.filelist.files)
- archive_files = [] # remember names of files we create
- # tar archive must be created last to avoid overwrite and remove
- if 'tar' in self.formats:
- self.formats.append(self.formats.pop(self.formats.index('tar')))
-
- for fmt in self.formats:
- file = self.make_archive(
- base_name, fmt, base_dir=base_dir, owner=self.owner, group=self.group
- )
- archive_files.append(file)
- self.distribution.dist_files.append(('sdist', '', file))
-
- self.archive_files = archive_files
-
- if not self.keep_temp:
- dir_util.remove_tree(base_dir, dry_run=self.dry_run)
-
- def get_archive_files(self):
- """Return the list of archive files created when the command
- was run, or None if the command hasn't run yet.
- """
- return self.archive_files
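-
-
-# Illustrative usage sketch (not part of the original module): the options in
-# `user_options` above map directly to command-line flags of the classic
-# distutils/setuptools entry point, i.e. `python setup.py sdist
-# --formats=gztar,zip --dist-dir=dist` from a project directory containing a
-# setup.py. The same run can be driven programmatically (the setup.py path
-# below is a placeholder):
-if __name__ == '__main__':
-    from distutils.core import run_setup
-
-    dist = run_setup(
-        'setup.py',
-        script_args=['sdist', '--formats=gztar,zip', '--dist-dir=dist'],
-    )
-    # Each entry is ('sdist', '', path_to_created_archive), as appended in
-    # make_distribution() above.
-    print(dist.dist_files)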
diff --git a/spaces/BigDL/bigdl_nano_demo/original_models.py b/spaces/BigDL/bigdl_nano_demo/original_models.py
deleted file mode 100644
index a62c47e88891585683f3a13ce64f14f6b47a321e..0000000000000000000000000000000000000000
--- a/spaces/BigDL/bigdl_nano_demo/original_models.py
+++ /dev/null
@@ -1,359 +0,0 @@
-# This file is copied from https://github.com/rnwzd/FSPBT-Image-Translation/blob/master/original_models.py
-
-# MIT License
-
-# Copyright (c) 2022 Lorenzo Breschi
-
-# Permission is hereby granted, free of charge, to any person obtaining a copy
-# of this software and associated documentation files (the "Software"), to deal
-# in the Software without restriction, including without limitation the rights
-# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-# copies of the Software, and to permit persons to whom the Software is
-# furnished to do so, subject to the following conditions:
-
-# The above copyright notice and this permission notice shall be included in all
-# copies or substantial portions of the Software.
-
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-# SOFTWARE.
-
-
-import torch
-import torch.nn as nn
-from torch.autograd import Variable
-from torch.nn import functional as F
-
-import torchvision
-from torchvision import models
-
-import pytorch_lightning as pl
-
-class LeakySoftplus(nn.Module):
- def __init__(self,negative_slope: float = 0.01 ):
- super().__init__()
- self.negative_slope=negative_slope
-
- def forward(self,input):
- return F.softplus(input)+F.logsigmoid(input)*self.negative_slope
-
-
-grelu = nn.LeakyReLU(0.2)
-#grelu = nn.Softplus()
-#grelu = LeakySoftplus(0.2)
-#####
-# Currently default generator we use
-# conv0 -> conv1 -> conv2 -> resnet_blocks -> upconv2 -> upconv1 -> conv_11 -> (conv_11_a)* -> conv_12 -> (Tanh)*
-# there are 2 conv layers inside conv_11_a
-# * means is optional, model uses skip-connections
-class Generator(pl.LightningModule):
- def __init__(self, norm_layer='batch_norm', use_bias=False, resnet_blocks=7, tanh=True,
- filters=[32, 64, 128, 128, 128, 64], input_channels=3, output_channels=3, append_smoothers=False):
- super().__init__()
- assert norm_layer in [None, 'batch_norm', 'instance_norm'], \
- "norm_layer should be None, 'batch_norm' or 'instance_norm', not {}".format(
- norm_layer)
- self.norm_layer = None
- if norm_layer == 'batch_norm':
- self.norm_layer = nn.BatchNorm2d
- elif norm_layer == 'instance_norm':
- self.norm_layer = nn.InstanceNorm2d
-
- # filters = [f//3 for f in filters]
- self.use_bias = use_bias
- self.resnet_blocks = resnet_blocks
- self.append_smoothers = append_smoothers
-
- stride1 = 2
- stride2 = 2
- self.conv0 = self.relu_layer(in_filters=input_channels, out_filters=filters[0],
- kernel_size=7, stride=1, padding=3,
- bias=self.use_bias,
- norm_layer=self.norm_layer,
- nonlinearity=grelu)
-
- self.conv1 = self.relu_layer(in_filters=filters[0],
- out_filters=filters[1],
- kernel_size=3, stride=stride1, padding=1,
- bias=self.use_bias,
- norm_layer=self.norm_layer,
- nonlinearity=grelu)
-
- self.conv2 = self.relu_layer(in_filters=filters[1],
- out_filters=filters[2],
- kernel_size=3, stride=stride2, padding=1,
- bias=self.use_bias,
- norm_layer=self.norm_layer,
- nonlinearity=grelu)
-
- self.resnets = nn.ModuleList()
- for i in range(self.resnet_blocks):
- self.resnets.append(
- self.resnet_block(in_filters=filters[2],
- out_filters=filters[2],
- kernel_size=3, stride=1, padding=1,
- bias=self.use_bias,
- norm_layer=self.norm_layer,
- nonlinearity=grelu))
-
- self.upconv2 = self.upconv_layer_upsample_and_conv(in_filters=filters[3] + filters[2],
- # in_filters=filters[3], # disable skip-connections
- out_filters=filters[4],
- scale_factor=stride2,
- kernel_size=3, stride=1, padding=1,
- bias=self.use_bias,
- norm_layer=self.norm_layer,
- nonlinearity=grelu)
-
- self.upconv1 = self.upconv_layer_upsample_and_conv(in_filters=filters[4] + filters[1],
- # in_filters=filters[4], # disable skip-connections
- out_filters=filters[4],
- scale_factor=stride1,
- kernel_size=3, stride=1, padding=1,
- bias=self.use_bias,
- norm_layer=self.norm_layer,
- nonlinearity=grelu)
-
- self.conv_11 = nn.Sequential(
- nn.Conv2d(in_channels=filters[0] + filters[4] + input_channels,
- # in_channels=filters[4], # disable skip-connections
- out_channels=filters[5],
- kernel_size=7, stride=1, padding=3, bias=self.use_bias, padding_mode='zeros'),
- grelu
- )
-
- if self.append_smoothers:
- self.conv_11_a = nn.Sequential(
- nn.Conv2d(filters[5], filters[5], kernel_size=3,
- bias=self.use_bias, padding=1, padding_mode='zeros'),
- grelu,
- # replace with variable
- nn.BatchNorm2d(num_features=filters[5]),
- nn.Conv2d(filters[5], filters[5], kernel_size=3,
- bias=self.use_bias, padding=1, padding_mode='zeros'),
- grelu
- )
-
- if tanh:
- self.conv_12 = nn.Sequential(nn.Conv2d(filters[5], output_channels,
- kernel_size=1, stride=1,
- padding=0, bias=True, padding_mode='zeros'),
- #torchvision.transforms.Grayscale(num_output_channels=3),
- nn.Sigmoid())
- else:
- self.conv_12 = nn.Conv2d(filters[5], output_channels, kernel_size=1, stride=1,
- padding=0, bias=True, padding_mode='zeros')
-
- def log_tensors(self, logger, tag, img_tensor):
- logger.experiment.add_images(tag, img_tensor)
-
- def forward(self, input, logger=None, **kwargs):
- # [1, 3, 534, 800]
- output_d0 = self.conv0(input)
- output_d1 = self.conv1(output_d0)
- # comment to disable skip-connections
- output_d2 = self.conv2(output_d1)
-
- output = output_d2
- for layer in self.resnets:
- output = layer(output) + output
-
- output_u2 = self.upconv2(torch.cat((output, output_d2), dim=1))
-
- output_u1 = self.upconv1(torch.cat((output_u2, output_d1), dim=1))
- output = torch.cat(
- (output_u1, output_d0, input), dim=1)
-
- output_11 = self.conv_11(output)
-
- if self.append_smoothers:
- output_11_a = self.conv_11_a(output_11)
- else:
- output_11_a = output_11
- output_12 = self.conv_12(output_11_a)
-
- output = output_12
-
- return output
-
- def relu_layer(self, in_filters, out_filters, kernel_size, stride, padding, bias,
- norm_layer, nonlinearity):
- out = nn.Sequential()
- out.add_module('conv', nn.Conv2d(in_channels=in_filters,
- out_channels=out_filters,
- kernel_size=kernel_size, stride=stride,
- padding=padding, bias=bias, padding_mode='zeros'))
-
- if norm_layer:
- out.add_module('normalization',
- norm_layer(num_features=out_filters))
- if nonlinearity:
- out.add_module('nonlinearity', nonlinearity)
- # out.add_module('dropout', nn.Dropout2d(0.25))
-
- return out
-
- def resnet_block(self, in_filters, out_filters, kernel_size, stride, padding, bias,
- norm_layer, nonlinearity):
- out = nn.Sequential()
- if nonlinearity:
- out.add_module('nonlinearity_0', nonlinearity)
- out.add_module('conv_0', nn.Conv2d(in_channels=in_filters,
- out_channels=out_filters,
- kernel_size=kernel_size, stride=stride,
- padding=padding, bias=bias, padding_mode='zeros'))
- if norm_layer:
- out.add_module('normalization',
- norm_layer(num_features=out_filters))
- if nonlinearity:
- out.add_module('nonlinearity_1', nonlinearity)
- out.add_module('conv_1', nn.Conv2d(in_channels=in_filters,
- out_channels=out_filters,
- kernel_size=kernel_size, stride=stride,
- padding=padding, bias=bias, padding_mode='zeros'))
- return out
-
- def upconv_layer_upsample_and_conv(self, in_filters, out_filters, scale_factor, kernel_size, stride, padding, bias,
- norm_layer, nonlinearity):
-
- parts = [nn.Upsample(scale_factor=scale_factor),
- nn.Conv2d(in_filters, out_filters, kernel_size,
- stride, padding=padding, bias=False, padding_mode='zeros')
- ]
-
- if norm_layer:
- parts.append(norm_layer(num_features=out_filters))
-
- if nonlinearity:
- parts.append(nonlinearity)
-
- return nn.Sequential(*parts)
-
-
-
-
-relu = grelu
-
-#####
-# Default discriminator
-#####
-
-relu = nn.LeakyReLU(0.2)
-
-class Discriminator(nn.Module):
- def __init__(self, num_filters=12, input_channels=3, n_layers=2,
- norm_layer='instance_norm', use_bias=True):
- super().__init__()
-
- self.num_filters = num_filters
-
- self.input_channels = input_channels
- self.use_bias = use_bias
-
- if norm_layer == 'batch_norm':
- self.norm_layer = nn.BatchNorm2d
- else:
- self.norm_layer = nn.InstanceNorm2d
- self.net = self.make_net(
- n_layers, self.input_channels, 1, 4, 2, self.use_bias)
-
- def make_net(self, n, flt_in, flt_out=1, k=4, stride=2, bias=True):
- padding = 1
- model = nn.Sequential()
-
- model.add_module('conv0', self.make_block(
- flt_in, self.num_filters, k, stride, padding, bias, None, relu))
-
- flt_mult, flt_mult_prev = 1, 1
- # n - 1 blocks
- for l in range(1, n):
- flt_mult_prev = flt_mult
- flt_mult = min(2**(l), 8)
- model.add_module('conv_%d' % (l), self.make_block(self.num_filters * flt_mult_prev, self.num_filters * flt_mult,
- k, stride, padding, bias, self.norm_layer, relu))
-
- flt_mult_prev = flt_mult
- flt_mult = min(2**n, 8)
- model.add_module('conv_%d' % (n), self.make_block(self.num_filters * flt_mult_prev, self.num_filters * flt_mult,
- k, 1, padding, bias, self.norm_layer, relu))
- model.add_module('conv_out', self.make_block(
- self.num_filters * flt_mult, 1, k, 1, padding, bias, None, None))
- return model
-
- def make_block(self, flt_in, flt_out, k, stride, padding, bias, norm, relu):
- m = nn.Sequential()
- m.add_module('conv', nn.Conv2d(flt_in, flt_out, k,
- stride=stride, padding=padding, bias=bias, padding_mode='zeros'))
- if norm is not None:
- m.add_module('norm', norm(flt_out))
- if relu is not None:
- m.add_module('relu', relu)
- return m
-
- def forward(self, x):
- output = self.net(x)
- # output = output.mean((2, 3), True)
- # output = output.squeeze(-1).squeeze(-1)
- # output = output.mean(dim=(-1,-2))
- return output
-
-
-#####
-# Perception VGG19 loss
-#####
-class PerceptualVGG19(nn.Module):
- def __init__(self, feature_layers=[0, 3, 5], use_normalization=False):
- super().__init__()
- # model = models.vgg19(pretrained=True)
- model = models.squeezenet1_1(pretrained=True)
- model.float()
- model.eval()
-
- self.model = model
- self.feature_layers = feature_layers
-
- self.mean = torch.FloatTensor([0.485, 0.456, 0.406])
- self.mean_tensor = None
-
- self.std = torch.FloatTensor([0.229, 0.224, 0.225])
- self.std_tensor = None
-
- self.use_normalization = use_normalization
-
- for param in self.parameters():
- param.requires_grad = False
-
- def normalize(self, x):
- if not self.use_normalization:
- return x
-
- if self.mean_tensor is None:
- self.mean_tensor = Variable(
- self.mean.view(1, 3, 1, 1).expand(x.shape),
- requires_grad=False)
- self.std_tensor = Variable(
- self.std.view(1, 3, 1, 1).expand(x.shape), requires_grad=False)
-
- x = (x + 1) / 2
- return (x - self.mean_tensor) / self.std_tensor
-
- def run(self, x):
- features = []
-
- h = x
-
- for f in range(max(self.feature_layers) + 1):
- h = self.model.features[f](h)
- if f in self.feature_layers:
- not_normed_features = h.clone().view(h.size(0), -1)
- features.append(not_normed_features)
-
- return torch.cat(features, dim=1)
-
- def forward(self, x):
- h = self.normalize(x)
- return self.run(h)
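A hedged usage sketch for the models above (not part of the original file): it assumes the module is importable as original_models and that torch and pytorch_lightning are installed. With two stride-2 downsamplings followed by two 2x upsamplings, the Generator returns an image of the input's spatial size as long as height and width are divisible by 4.

import torch

from original_models import Discriminator, Generator  # assumed import path

gen = Generator()        # defaults: batch norm, 7 resnet blocks, sigmoid output head
disc = Discriminator()

x = torch.randn(1, 3, 256, 256)
with torch.no_grad():
    y = gen(x)           # same spatial size as the input: [1, 3, 256, 256]
    scores = disc(y)     # patch-level real/fake score map
print(y.shape, scores.shape)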
diff --git a/spaces/Billyosoro/ESRGAN/scripts/extract_subimages.py b/spaces/Billyosoro/ESRGAN/scripts/extract_subimages.py
deleted file mode 100644
index 9b969ae0d4adff403f2ad362b9afaaaee58e2cef..0000000000000000000000000000000000000000
--- a/spaces/Billyosoro/ESRGAN/scripts/extract_subimages.py
+++ /dev/null
@@ -1,135 +0,0 @@
-import argparse
-import cv2
-import numpy as np
-import os
-import sys
-from basicsr.utils import scandir
-from multiprocessing import Pool
-from os import path as osp
-from tqdm import tqdm
-
-
-def main(args):
- """A multi-thread tool to crop large images to sub-images for faster IO.
-
- opt (dict): Configuration dict. It contains:
- n_thread (int): Thread number.
- compression_level (int): CV_IMWRITE_PNG_COMPRESSION from 0 to 9. A higher value means a smaller size
- and longer compression time. Use 0 for faster CPU decompression. Default: 3, same in cv2.
- input_folder (str): Path to the input folder.
- save_folder (str): Path to save folder.
- crop_size (int): Crop size.
- step (int): Step for overlapped sliding window.
- thresh_size (int): Threshold size. Patches whose size is lower than thresh_size will be dropped.
-
- Usage:
- For each folder, run this script.
- Typically, there are GT folder and LQ folder to be processed for DIV2K dataset.
- After process, each sub_folder should have the same number of subimages.
- Remember to modify opt configurations according to your settings.
- """
-
- opt = {}
- opt['n_thread'] = args.n_thread
- opt['compression_level'] = args.compression_level
- opt['input_folder'] = args.input
- opt['save_folder'] = args.output
- opt['crop_size'] = args.crop_size
- opt['step'] = args.step
- opt['thresh_size'] = args.thresh_size
- extract_subimages(opt)
-
-
-def extract_subimages(opt):
- """Crop images to subimages.
-
- Args:
- opt (dict): Configuration dict. It contains:
- input_folder (str): Path to the input folder.
- save_folder (str): Path to save folder.
- n_thread (int): Thread number.
- """
- input_folder = opt['input_folder']
- save_folder = opt['save_folder']
- if not osp.exists(save_folder):
- os.makedirs(save_folder)
- print(f'mkdir {save_folder} ...')
- else:
- print(f'Folder {save_folder} already exists. Exit.')
- sys.exit(1)
-
- # scan all images
- img_list = list(scandir(input_folder, full_path=True))
-
- pbar = tqdm(total=len(img_list), unit='image', desc='Extract')
- pool = Pool(opt['n_thread'])
- for path in img_list:
- pool.apply_async(worker, args=(path, opt), callback=lambda arg: pbar.update(1))
- pool.close()
- pool.join()
- pbar.close()
- print('All processes done.')
-
-
-def worker(path, opt):
- """Worker for each process.
-
- Args:
- path (str): Image path.
- opt (dict): Configuration dict. It contains:
- crop_size (int): Crop size.
- step (int): Step for overlapped sliding window.
- thresh_size (int): Threshold size. Patches whose size is lower than thresh_size will be dropped.
- save_folder (str): Path to save folder.
- compression_level (int): for cv2.IMWRITE_PNG_COMPRESSION.
-
- Returns:
- process_info (str): Process information displayed in progress bar.
- """
- crop_size = opt['crop_size']
- step = opt['step']
- thresh_size = opt['thresh_size']
- img_name, extension = osp.splitext(osp.basename(path))
-
- # remove the x2, x3, x4 and x8 in the filename for DIV2K
- img_name = img_name.replace('x2', '').replace('x3', '').replace('x4', '').replace('x8', '')
-
- img = cv2.imread(path, cv2.IMREAD_UNCHANGED)
-
- h, w = img.shape[0:2]
- h_space = np.arange(0, h - crop_size + 1, step)
- if h - (h_space[-1] + crop_size) > thresh_size:
- h_space = np.append(h_space, h - crop_size)
- w_space = np.arange(0, w - crop_size + 1, step)
- if w - (w_space[-1] + crop_size) > thresh_size:
- w_space = np.append(w_space, w - crop_size)
-
- index = 0
- for x in h_space:
- for y in w_space:
- index += 1
- cropped_img = img[x:x + crop_size, y:y + crop_size, ...]
- cropped_img = np.ascontiguousarray(cropped_img)
- cv2.imwrite(
- osp.join(opt['save_folder'], f'{img_name}_s{index:03d}{extension}'), cropped_img,
- [cv2.IMWRITE_PNG_COMPRESSION, opt['compression_level']])
- process_info = f'Processing {img_name} ...'
- return process_info
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--input', type=str, default='datasets/DF2K/DF2K_HR', help='Input folder')
- parser.add_argument('--output', type=str, default='datasets/DF2K/DF2K_HR_sub', help='Output folder')
- parser.add_argument('--crop_size', type=int, default=480, help='Crop size')
- parser.add_argument('--step', type=int, default=240, help='Step for overlapped sliding window')
- parser.add_argument(
- '--thresh_size',
- type=int,
- default=0,
- help='Threshold size. Patches whose size is lower than thresh_size will be dropped.')
- parser.add_argument('--n_thread', type=int, default=20, help='Thread number.')
- parser.add_argument('--compression_level', type=int, default=3, help='Compression level')
- args = parser.parse_args()
-
- main(args)
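The crop-placement logic in worker() is the part worth internalizing; below is a small hedged illustration with made-up numbers (not the script's defaults):

import numpy as np

# Start positions for a 500-pixel dimension, 480-pixel crops, step 240:
h, crop_size, step, thresh_size = 500, 480, 240, 0
h_space = np.arange(0, h - crop_size + 1, step)   # array([0])
# The leftover strip (500 - 480 = 20 px) exceeds thresh_size, so one extra
# crop is appended flush with the far edge.
if h - (h_space[-1] + crop_size) > thresh_size:
    h_space = np.append(h_space, h - crop_size)   # array([ 0, 20])
print(h_space)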
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/dev/packaging/build_all_wheels.sh b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/dev/packaging/build_all_wheels.sh
deleted file mode 100644
index 99d9492c60e194be01e027807208c10a9d2c96da..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/dev/packaging/build_all_wheels.sh
+++ /dev/null
@@ -1,56 +0,0 @@
-#!/bin/bash -e
-
-PYTORCH_VERSION=1.4
-
-build_for_one_cuda() {
- cu=$1
-
- case "$cu" in
- cu*)
- container_name=manylinux-cuda${cu/cu/}
- ;;
- cpu)
- container_name=manylinux-cuda101
- ;;
- *)
- echo "Unrecognized cu=$cu"
- exit 1
- ;;
- esac
-
- echo "Launching container $container_name ..."
-
- for py in 3.6 3.7 3.8; do
- docker run -itd \
- --name $container_name \
- --mount type=bind,source="$(pwd)",target=/detectron2 \
- pytorch/$container_name
-
- cat < 1:
- model = DistributedDataParallel(
- model, device_ids=[comm.get_local_rank()], broadcast_buffers=False
- )
- optimizer = build_optimizer(cfg, model)
- checkpointer = DetectionCheckpointer(model, optimizer=optimizer)
- checkpointer.load(cfg.MODEL.WEIGHTS)
-
- cfg.defrost()
- cfg.DATALOADER.NUM_WORKERS = 0
- data_loader = build_detection_train_loader(cfg)
- dummy_data = list(itertools.islice(data_loader, 100))
-
- def f():
- data = DatasetFromList(dummy_data, copy=False)
- while True:
- yield from data
-
- max_iter = 400
- trainer = SimpleTrainer(model, f(), optimizer)
- trainer.register_hooks(
- [hooks.IterationTimer(), hooks.PeriodicWriter([CommonMetricPrinter(max_iter)])]
- )
- trainer.train(1, max_iter)
-
-
-@torch.no_grad()
-def benchmark_eval(args):
- cfg = setup(args)
- model = build_model(cfg)
- model.eval()
- logger.info("Model:\n{}".format(model))
- DetectionCheckpointer(model).load(cfg.MODEL.WEIGHTS)
-
- cfg.defrost()
- cfg.DATALOADER.NUM_WORKERS = 0
- data_loader = build_detection_test_loader(cfg, cfg.DATASETS.TEST[0])
- dummy_data = list(itertools.islice(data_loader, 100))
-
- def f():
- while True:
- yield from DatasetFromList(dummy_data, copy=False)
-
- for _ in range(5): # warmup
- model(dummy_data[0])
-
- max_iter = 400
- timer = Timer()
- with tqdm.tqdm(total=max_iter) as pbar:
- for idx, d in enumerate(f()):
- if idx == max_iter:
- break
- model(d)
- pbar.update()
- logger.info("{} iters in {} seconds.".format(max_iter, timer.seconds()))
-
-
-if __name__ == "__main__":
- parser = default_argument_parser()
- parser.add_argument("--task", choices=["train", "eval", "data"], required=True)
- args = parser.parse_args()
- assert not args.eval_only
-
- if args.task == "data":
- f = benchmark_data
- elif args.task == "train":
- """
- Note: training speed may not be representative.
- The training cost of a R-CNN model varies with the content of the data
- and the quality of the model.
- """
- f = benchmark_train
- elif args.task == "eval":
- f = benchmark_eval
- # only benchmark single-GPU inference.
- assert args.num_gpus == 1 and args.num_machines == 1
- launch(f, args.num_gpus, args.num_machines, args.machine_rank, args.dist_url, args=(args,))
diff --git a/spaces/CVPR/LIVE/pybind11/tests/test_async.cpp b/spaces/CVPR/LIVE/pybind11/tests/test_async.cpp
deleted file mode 100644
index f0ad0d535048fbb825b444e743193c743551cdd4..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/pybind11/tests/test_async.cpp
+++ /dev/null
@@ -1,26 +0,0 @@
-/*
- tests/test_async.cpp -- __await__ support
-
- Copyright (c) 2019 Google Inc.
-
- All rights reserved. Use of this source code is governed by a
- BSD-style license that can be found in the LICENSE file.
-*/
-
-#include "pybind11_tests.h"
-
-TEST_SUBMODULE(async_module, m) {
- struct DoesNotSupportAsync {};
-    py::class_<DoesNotSupportAsync>(m, "DoesNotSupportAsync")
- .def(py::init<>());
- struct SupportsAsync {};
-    py::class_<SupportsAsync>(m, "SupportsAsync")
- .def(py::init<>())
- .def("__await__", [](const SupportsAsync& self) -> py::object {
-            static_cast<void>(self);
- py::object loop = py::module::import("asyncio.events").attr("get_event_loop")();
- py::object f = loop.attr("create_future")();
- f.attr("set_result")(5);
- return f.attr("__await__")();
- });
-}
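For context, this C++ test module is exercised from Python roughly as below (a sketch modeled on pybind11's own test_async.py; the pybind11_tests import path is how the pybind11 test suite exposes compiled test extensions):

import asyncio

from pybind11_tests import async_module as m  # compiled test extension

async def get_await_result(awaitable):
    # SupportsAsync defines __await__, so it can be awaited directly.
    return await awaitable

print(asyncio.run(get_await_result(m.SupportsAsync())))  # prints 5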
diff --git a/spaces/CVPR/LIVE/thrust/thrust/detail/copy.h b/spaces/CVPR/LIVE/thrust/thrust/detail/copy.h
deleted file mode 100644
index 5e9feb0f90c8773be2db8ddf74600e79fd988b5f..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/detail/copy.h
+++ /dev/null
@@ -1,91 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/detail/execution_policy.h>
-
-namespace thrust
-{
-
-template<typename DerivedPolicy, typename InputIterator, typename OutputIterator>
-__host__ __device__
-  OutputIterator copy(const thrust::detail::execution_policy_base<DerivedPolicy> &system,
-                      InputIterator first,
-                      InputIterator last,
-                      OutputIterator result);
-
-template<typename DerivedPolicy, typename InputIterator, typename Size, typename OutputIterator>
-__host__ __device__
-  OutputIterator copy_n(const thrust::detail::execution_policy_base<DerivedPolicy> &system,
-                        InputIterator first,
-                        Size n,
-                        OutputIterator result);
-
-template<typename InputIterator, typename OutputIterator>
-  OutputIterator copy(InputIterator first,
-                      InputIterator last,
-                      OutputIterator result);
-
-template<typename InputIterator, typename Size, typename OutputIterator>
-  OutputIterator copy_n(InputIterator first,
-                        Size n,
-                        OutputIterator result);
-
-
-namespace detail
-{
-
-
-template<typename FromSystem, typename ToSystem, typename InputIterator, typename OutputIterator>
-__host__ __device__
-  OutputIterator two_system_copy(const thrust::execution_policy<FromSystem> &from_system,
-                                 const thrust::execution_policy<ToSystem> &two_system,
-                                 InputIterator first,
-                                 InputIterator last,
-                                 OutputIterator result);
-
-
-template<typename FromSystem, typename ToSystem, typename InputIterator, typename Size, typename OutputIterator>
-__host__ __device__
-  OutputIterator two_system_copy_n(const thrust::execution_policy<FromSystem> &from_system,
-                                   const thrust::execution_policy<ToSystem> &two_system,
-                                   InputIterator first,
-                                   Size n,
-                                   OutputIterator result);
-
-
-} // end detail
-} // end thrust
-
-#include <thrust/detail/copy.inl>
-
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/reverse.h b/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/reverse.h
deleted file mode 100644
index 1f3e0325e257c301215e62c690837433ae24c30c..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/reverse.h
+++ /dev/null
@@ -1,23 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-
-// this system inherits reverse
-#include <thrust/system/cpp/detail/reverse.h>
-
diff --git a/spaces/CVPR/WALT/mmdet/models/dense_heads/rpn_head.py b/spaces/CVPR/WALT/mmdet/models/dense_heads/rpn_head.py
deleted file mode 100644
index a888cb8c188ca6fe63045b6230266553fbe8c996..0000000000000000000000000000000000000000
--- a/spaces/CVPR/WALT/mmdet/models/dense_heads/rpn_head.py
+++ /dev/null
@@ -1,236 +0,0 @@
-import copy
-import warnings
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from mmcv import ConfigDict
-from mmcv.cnn import normal_init
-from mmcv.ops import batched_nms
-
-from ..builder import HEADS
-from .anchor_head import AnchorHead
-from .rpn_test_mixin import RPNTestMixin
-
-
-@HEADS.register_module()
-class RPNHead(RPNTestMixin, AnchorHead):
- """RPN head.
-
- Args:
- in_channels (int): Number of channels in the input feature map.
- """ # noqa: W605
-
- def __init__(self, in_channels, **kwargs):
- super(RPNHead, self).__init__(1, in_channels, **kwargs)
-
- def _init_layers(self):
- """Initialize layers of the head."""
- self.rpn_conv = nn.Conv2d(
- self.in_channels, self.feat_channels, 3, padding=1)
- self.rpn_cls = nn.Conv2d(self.feat_channels,
- self.num_anchors * self.cls_out_channels, 1)
- self.rpn_reg = nn.Conv2d(self.feat_channels, self.num_anchors * 4, 1)
-
- def init_weights(self):
- """Initialize weights of the head."""
- normal_init(self.rpn_conv, std=0.01)
- normal_init(self.rpn_cls, std=0.01)
- normal_init(self.rpn_reg, std=0.01)
-
- def forward_single(self, x):
- """Forward feature map of a single scale level."""
- x = self.rpn_conv(x)
- x = F.relu(x, inplace=True)
- rpn_cls_score = self.rpn_cls(x)
- rpn_bbox_pred = self.rpn_reg(x)
- return rpn_cls_score, rpn_bbox_pred
-
- def loss(self,
- cls_scores,
- bbox_preds,
- gt_bboxes,
- img_metas,
- gt_bboxes_ignore=None):
- """Compute losses of the head.
-
- Args:
- cls_scores (list[Tensor]): Box scores for each scale level
- Has shape (N, num_anchors * num_classes, H, W)
- bbox_preds (list[Tensor]): Box energies / deltas for each scale
- level with shape (N, num_anchors * 4, H, W)
- gt_bboxes (list[Tensor]): Ground truth bboxes for each image with
- shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
- img_metas (list[dict]): Meta information of each image, e.g.,
- image size, scaling factor, etc.
- gt_bboxes_ignore (None | list[Tensor]): specify which bounding
- boxes can be ignored when computing the loss.
-
- Returns:
- dict[str, Tensor]: A dictionary of loss components.
- """
- losses = super(RPNHead, self).loss(
- cls_scores,
- bbox_preds,
- gt_bboxes,
- None,
- img_metas,
- gt_bboxes_ignore=gt_bboxes_ignore)
- return dict(
- loss_rpn_cls=losses['loss_cls'], loss_rpn_bbox=losses['loss_bbox'])
-
- def _get_bboxes(self,
- cls_scores,
- bbox_preds,
- mlvl_anchors,
- img_shapes,
- scale_factors,
- cfg,
- rescale=False):
- """Transform outputs for a single batch item into bbox predictions.
-
- Args:
- cls_scores (list[Tensor]): Box scores for each scale level
- Has shape (N, num_anchors * num_classes, H, W).
- bbox_preds (list[Tensor]): Box energies / deltas for each scale
- level with shape (N, num_anchors * 4, H, W).
- mlvl_anchors (list[Tensor]): Box reference for each scale level
- with shape (num_total_anchors, 4).
- img_shapes (list[tuple[int]]): Shape of the input image,
- (height, width, 3).
-            scale_factors (list[ndarray]): Scale factor of the image arranged as
- (w_scale, h_scale, w_scale, h_scale).
- cfg (mmcv.Config): Test / postprocessing configuration,
- if None, test_cfg would be used.
- rescale (bool): If True, return boxes in original image space.
-
- Returns:
- list[tuple[Tensor, Tensor]]: Each item in result_list is 2-tuple.
- The first item is an (n, 5) tensor, where the first 4 columns
- are bounding box positions (tl_x, tl_y, br_x, br_y) and the
- 5-th column is a score between 0 and 1. The second item is a
-                (n,) tensor where each item is the predicted class label of the
- corresponding box.
- """
- cfg = self.test_cfg if cfg is None else cfg
- cfg = copy.deepcopy(cfg)
- # bboxes from different level should be independent during NMS,
- # level_ids are used as labels for batched NMS to separate them
- level_ids = []
- mlvl_scores = []
- mlvl_bbox_preds = []
- mlvl_valid_anchors = []
- batch_size = cls_scores[0].shape[0]
- nms_pre_tensor = torch.tensor(
- cfg.nms_pre, device=cls_scores[0].device, dtype=torch.long)
- for idx in range(len(cls_scores)):
- rpn_cls_score = cls_scores[idx]
- rpn_bbox_pred = bbox_preds[idx]
- assert rpn_cls_score.size()[-2:] == rpn_bbox_pred.size()[-2:]
- rpn_cls_score = rpn_cls_score.permute(0, 2, 3, 1)
- if self.use_sigmoid_cls:
- rpn_cls_score = rpn_cls_score.reshape(batch_size, -1)
- scores = rpn_cls_score.sigmoid()
- else:
- rpn_cls_score = rpn_cls_score.reshape(batch_size, -1, 2)
- # We set FG labels to [0, num_class-1] and BG label to
- # num_class in RPN head since mmdet v2.5, which is unified to
- # be consistent with other head since mmdet v2.0. In mmdet v2.0
- # to v2.4 we keep BG label as 0 and FG label as 1 in rpn head.
- scores = rpn_cls_score.softmax(-1)[..., 0]
- rpn_bbox_pred = rpn_bbox_pred.permute(0, 2, 3, 1).reshape(
- batch_size, -1, 4)
- anchors = mlvl_anchors[idx]
- anchors = anchors.expand_as(rpn_bbox_pred)
- if nms_pre_tensor > 0:
- # sort is faster than topk
- # _, topk_inds = scores.topk(cfg.nms_pre)
- # keep topk op for dynamic k in onnx model
- if torch.onnx.is_in_onnx_export():
- # sort op will be converted to TopK in onnx
- # and k<=3480 in TensorRT
- scores_shape = torch._shape_as_tensor(scores)
- nms_pre = torch.where(scores_shape[1] < nms_pre_tensor,
- scores_shape[1], nms_pre_tensor)
- _, topk_inds = scores.topk(nms_pre)
- batch_inds = torch.arange(batch_size).view(
- -1, 1).expand_as(topk_inds)
- scores = scores[batch_inds, topk_inds]
- rpn_bbox_pred = rpn_bbox_pred[batch_inds, topk_inds, :]
- anchors = anchors[batch_inds, topk_inds, :]
-
- elif scores.shape[-1] > cfg.nms_pre:
- ranked_scores, rank_inds = scores.sort(descending=True)
- topk_inds = rank_inds[:, :cfg.nms_pre]
- scores = ranked_scores[:, :cfg.nms_pre]
- batch_inds = torch.arange(batch_size).view(
- -1, 1).expand_as(topk_inds)
- rpn_bbox_pred = rpn_bbox_pred[batch_inds, topk_inds, :]
- anchors = anchors[batch_inds, topk_inds, :]
-
- mlvl_scores.append(scores)
- mlvl_bbox_preds.append(rpn_bbox_pred)
- mlvl_valid_anchors.append(anchors)
- level_ids.append(
- scores.new_full((
- batch_size,
- scores.size(1),
- ),
- idx,
- dtype=torch.long))
-
- batch_mlvl_scores = torch.cat(mlvl_scores, dim=1)
- batch_mlvl_anchors = torch.cat(mlvl_valid_anchors, dim=1)
- batch_mlvl_rpn_bbox_pred = torch.cat(mlvl_bbox_preds, dim=1)
- batch_mlvl_proposals = self.bbox_coder.decode(
- batch_mlvl_anchors, batch_mlvl_rpn_bbox_pred, max_shape=img_shapes)
- batch_mlvl_ids = torch.cat(level_ids, dim=1)
-
- # deprecate arguments warning
- if 'nms' not in cfg or 'max_num' in cfg or 'nms_thr' in cfg:
- warnings.warn(
- 'In rpn_proposal or test_cfg, '
- 'nms_thr has been moved to a dict named nms as '
- 'iou_threshold, max_num has been renamed as max_per_img, '
- 'name of original arguments and the way to specify '
- 'iou_threshold of NMS will be deprecated.')
- if 'nms' not in cfg:
- cfg.nms = ConfigDict(dict(type='nms', iou_threshold=cfg.nms_thr))
- if 'max_num' in cfg:
- if 'max_per_img' in cfg:
- assert cfg.max_num == cfg.max_per_img, f'You ' \
- f'set max_num and ' \
- f'max_per_img at the same time, but get {cfg.max_num} ' \
-                    f'and {cfg.max_per_img} respectively. ' \
- 'Please delete max_num which will be deprecated.'
- else:
- cfg.max_per_img = cfg.max_num
- if 'nms_thr' in cfg:
- assert cfg.nms.iou_threshold == cfg.nms_thr, f'You set' \
- f' iou_threshold in nms and ' \
- f'nms_thr at the same time, but get' \
- f' {cfg.nms.iou_threshold} and {cfg.nms_thr}' \
- f' respectively. Please delete the nms_thr ' \
- f'which will be deprecated.'
-
- result_list = []
- for (mlvl_proposals, mlvl_scores,
- mlvl_ids) in zip(batch_mlvl_proposals, batch_mlvl_scores,
- batch_mlvl_ids):
- # Skip nonzero op while exporting to ONNX
- if cfg.min_bbox_size > 0 and (not torch.onnx.is_in_onnx_export()):
- w = mlvl_proposals[:, 2] - mlvl_proposals[:, 0]
- h = mlvl_proposals[:, 3] - mlvl_proposals[:, 1]
- valid_ind = torch.nonzero(
- (w >= cfg.min_bbox_size)
- & (h >= cfg.min_bbox_size),
- as_tuple=False).squeeze()
- if valid_ind.sum().item() != len(mlvl_proposals):
- mlvl_proposals = mlvl_proposals[valid_ind, :]
- mlvl_scores = mlvl_scores[valid_ind]
- mlvl_ids = mlvl_ids[valid_ind]
-
- dets, keep = batched_nms(mlvl_proposals, mlvl_scores, mlvl_ids,
- cfg.nms)
- result_list.append(dets[:cfg.max_per_img])
- return result_list
diff --git a/spaces/CVPR/regionclip-demo/detectron2/structures/masks.py b/spaces/CVPR/regionclip-demo/detectron2/structures/masks.py
deleted file mode 100644
index 3513a38dbdf55b10d5107209a81d16c36975423e..0000000000000000000000000000000000000000
--- a/spaces/CVPR/regionclip-demo/detectron2/structures/masks.py
+++ /dev/null
@@ -1,527 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import copy
-import itertools
-import numpy as np
-from typing import Any, Iterator, List, Union
-import pycocotools.mask as mask_util
-import torch
-from torch import device
-
-from detectron2.layers.roi_align import ROIAlign
-from detectron2.utils.memory import retry_if_cuda_oom
-
-from .boxes import Boxes
-
-
-def polygon_area(x, y):
- # Using the shoelace formula
- # https://stackoverflow.com/questions/24467972/calculate-area-of-polygon-given-x-y-coordinates
- return 0.5 * np.abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))
-
-
-def polygons_to_bitmask(polygons: List[np.ndarray], height: int, width: int) -> np.ndarray:
- """
- Args:
- polygons (list[ndarray]): each array has shape (Nx2,)
- height, width (int)
-
- Returns:
- ndarray: a bool mask of shape (height, width)
- """
- assert len(polygons) > 0, "COCOAPI does not support empty polygons"
- rles = mask_util.frPyObjects(polygons, height, width)
- rle = mask_util.merge(rles)
- return mask_util.decode(rle).astype(np.bool)
-
-
-def rasterize_polygons_within_box(
- polygons: List[np.ndarray], box: np.ndarray, mask_size: int
-) -> torch.Tensor:
- """
- Rasterize the polygons into a mask image and
- crop the mask content in the given box.
- The cropped mask is resized to (mask_size, mask_size).
-
- This function is used when generating training targets for mask head in Mask R-CNN.
- Given original ground-truth masks for an image, new ground-truth mask
- training targets in the size of `mask_size x mask_size`
- must be provided for each predicted box. This function will be called to
- produce such targets.
-
- Args:
- polygons (list[ndarray[float]]): a list of polygons, which represents an instance.
- box: 4-element numpy array
- mask_size (int):
-
- Returns:
- Tensor: BoolTensor of shape (mask_size, mask_size)
- """
- # 1. Shift the polygons w.r.t the boxes
- w, h = box[2] - box[0], box[3] - box[1]
-
- polygons = copy.deepcopy(polygons)
- for p in polygons:
- p[0::2] = p[0::2] - box[0]
- p[1::2] = p[1::2] - box[1]
-
- # 2. Rescale the polygons to the new box size
- # max() to avoid division by small number
- ratio_h = mask_size / max(h, 0.1)
- ratio_w = mask_size / max(w, 0.1)
-
- if ratio_h == ratio_w:
- for p in polygons:
- p *= ratio_h
- else:
- for p in polygons:
- p[0::2] *= ratio_w
- p[1::2] *= ratio_h
-
- # 3. Rasterize the polygons with coco api
- mask = polygons_to_bitmask(polygons, mask_size, mask_size)
- mask = torch.from_numpy(mask)
- return mask
-
-
-class BitMasks:
- """
- This class stores the segmentation masks for all objects in one image, in
- the form of bitmaps.
-
- Attributes:
- tensor: bool Tensor of N,H,W, representing N instances in the image.
- """
-
- def __init__(self, tensor: Union[torch.Tensor, np.ndarray]):
- """
- Args:
- tensor: bool Tensor of N,H,W, representing N instances in the image.
- """
- device = tensor.device if isinstance(tensor, torch.Tensor) else torch.device("cpu")
- tensor = torch.as_tensor(tensor, dtype=torch.bool, device=device)
- assert tensor.dim() == 3, tensor.size()
- self.image_size = tensor.shape[1:]
- self.tensor = tensor
-
- @torch.jit.unused
- def to(self, *args: Any, **kwargs: Any) -> "BitMasks":
- return BitMasks(self.tensor.to(*args, **kwargs))
-
- @property
- def device(self) -> torch.device:
- return self.tensor.device
-
- @torch.jit.unused
- def __getitem__(self, item: Union[int, slice, torch.BoolTensor]) -> "BitMasks":
- """
- Returns:
- BitMasks: Create a new :class:`BitMasks` by indexing.
-
- The following usage are allowed:
-
- 1. `new_masks = masks[3]`: return a `BitMasks` which contains only one mask.
- 2. `new_masks = masks[2:10]`: return a slice of masks.
- 3. `new_masks = masks[vector]`, where vector is a torch.BoolTensor
- with `length = len(masks)`. Nonzero elements in the vector will be selected.
-
- Note that the returned object might share storage with this object,
- subject to Pytorch's indexing semantics.
- """
- if isinstance(item, int):
- return BitMasks(self.tensor[item].view(1, -1))
- m = self.tensor[item]
- assert m.dim() == 3, "Indexing on BitMasks with {} returns a tensor with shape {}!".format(
- item, m.shape
- )
- return BitMasks(m)
-
- @torch.jit.unused
- def __iter__(self) -> torch.Tensor:
- yield from self.tensor
-
- @torch.jit.unused
- def __repr__(self) -> str:
- s = self.__class__.__name__ + "("
- s += "num_instances={})".format(len(self.tensor))
- return s
-
- def __len__(self) -> int:
- return self.tensor.shape[0]
-
- def nonempty(self) -> torch.Tensor:
- """
- Find masks that are non-empty.
-
- Returns:
- Tensor: a BoolTensor which represents
- whether each mask is empty (False) or non-empty (True).
- """
- return self.tensor.flatten(1).any(dim=1)
-
- @staticmethod
- def from_polygon_masks(
- polygon_masks: Union["PolygonMasks", List[List[np.ndarray]]], height: int, width: int
- ) -> "BitMasks":
- """
- Args:
- polygon_masks (list[list[ndarray]] or PolygonMasks)
- height, width (int)
- """
- if isinstance(polygon_masks, PolygonMasks):
- polygon_masks = polygon_masks.polygons
- masks = [polygons_to_bitmask(p, height, width) for p in polygon_masks]
- return BitMasks(torch.stack([torch.from_numpy(x) for x in masks]))
-
- @staticmethod
- def from_roi_masks(roi_masks: "ROIMasks", height: int, width: int) -> "BitMasks":
- """
- Args:
- roi_masks:
- height, width (int):
- """
- return roi_masks.to_bitmasks(height, width)
-
- def crop_and_resize(self, boxes: torch.Tensor, mask_size: int) -> torch.Tensor:
- """
- Crop each bitmask by the given box, and resize results to (mask_size, mask_size).
- This can be used to prepare training targets for Mask R-CNN.
- It has less reconstruction error compared to rasterization with polygons.
- However we observe no difference in accuracy,
- but BitMasks requires more memory to store all the masks.
-
- Args:
- boxes (Tensor): Nx4 tensor storing the boxes for each mask
- mask_size (int): the size of the rasterized mask.
-
- Returns:
- Tensor:
- A bool tensor of shape (N, mask_size, mask_size), where
- N is the number of predicted boxes for this image.
- """
- assert len(boxes) == len(self), "{} != {}".format(len(boxes), len(self))
- device = self.tensor.device
-
- batch_inds = torch.arange(len(boxes), device=device).to(dtype=boxes.dtype)[:, None]
- rois = torch.cat([batch_inds, boxes], dim=1) # Nx5
-
- bit_masks = self.tensor.to(dtype=torch.float32)
- rois = rois.to(device=device)
- output = (
- ROIAlign((mask_size, mask_size), 1.0, 0, aligned=True)
- .forward(bit_masks[:, None, :, :], rois)
- .squeeze(1)
- )
- output = output >= 0.5
- return output
-
- def get_bounding_boxes(self) -> Boxes:
- """
- Returns:
- Boxes: tight bounding boxes around bitmasks.
-            If a mask is empty, its bounding box will be all zero.
- """
- boxes = torch.zeros(self.tensor.shape[0], 4, dtype=torch.float32)
- x_any = torch.any(self.tensor, dim=1)
- y_any = torch.any(self.tensor, dim=2)
- for idx in range(self.tensor.shape[0]):
- x = torch.where(x_any[idx, :])[0]
- y = torch.where(y_any[idx, :])[0]
- if len(x) > 0 and len(y) > 0:
- boxes[idx, :] = torch.as_tensor(
- [x[0], y[0], x[-1] + 1, y[-1] + 1], dtype=torch.float32
- )
- return Boxes(boxes)
-
- @staticmethod
- def cat(bitmasks_list: List["BitMasks"]) -> "BitMasks":
- """
- Concatenates a list of BitMasks into a single BitMasks
-
- Arguments:
- bitmasks_list (list[BitMasks])
-
- Returns:
- BitMasks: the concatenated BitMasks
- """
- assert isinstance(bitmasks_list, (list, tuple))
- assert len(bitmasks_list) > 0
- assert all(isinstance(bitmask, BitMasks) for bitmask in bitmasks_list)
-
- cat_bitmasks = type(bitmasks_list[0])(torch.cat([bm.tensor for bm in bitmasks_list], dim=0))
- return cat_bitmasks
-
-
-class PolygonMasks:
- """
- This class stores the segmentation masks for all objects in one image, in the form of polygons.
-
- Attributes:
- polygons: list[list[ndarray]]. Each ndarray is a float64 vector representing a polygon.
- """
-
- def __init__(self, polygons: List[List[Union[torch.Tensor, np.ndarray]]]):
- """
- Arguments:
- polygons (list[list[np.ndarray]]): The first
- level of the list correspond to individual instances,
- the second level to all the polygons that compose the
- instance, and the third level to the polygon coordinates.
- The third level array should have the format of
- [x0, y0, x1, y1, ..., xn, yn] (n >= 3).
- """
- if not isinstance(polygons, list):
- raise ValueError(
- "Cannot create PolygonMasks: Expect a list of list of polygons per image. "
- "Got '{}' instead.".format(type(polygons))
- )
-
- def _make_array(t: Union[torch.Tensor, np.ndarray]) -> np.ndarray:
- # Use float64 for higher precision, because why not?
- # Always put polygons on CPU (self.to is a no-op) since they
- # are supposed to be small tensors.
- # May need to change this assumption if GPU placement becomes useful
- if isinstance(t, torch.Tensor):
- t = t.cpu().numpy()
- return np.asarray(t).astype("float64")
-
- def process_polygons(
- polygons_per_instance: List[Union[torch.Tensor, np.ndarray]]
- ) -> List[np.ndarray]:
- if not isinstance(polygons_per_instance, list):
- raise ValueError(
- "Cannot create polygons: Expect a list of polygons per instance. "
- "Got '{}' instead.".format(type(polygons_per_instance))
- )
- # transform each polygon to a numpy array
- polygons_per_instance = [_make_array(p) for p in polygons_per_instance]
- for polygon in polygons_per_instance:
- if len(polygon) % 2 != 0 or len(polygon) < 6:
- raise ValueError(f"Cannot create a polygon from {len(polygon)} coordinates.")
- return polygons_per_instance
-
- self.polygons: List[List[np.ndarray]] = [
- process_polygons(polygons_per_instance) for polygons_per_instance in polygons
- ]
-
- def to(self, *args: Any, **kwargs: Any) -> "PolygonMasks":
- return self
-
- @property
- def device(self) -> torch.device:
- return torch.device("cpu")
-
- def get_bounding_boxes(self) -> Boxes:
- """
- Returns:
- Boxes: tight bounding boxes around polygon masks.
- """
- boxes = torch.zeros(len(self.polygons), 4, dtype=torch.float32)
- for idx, polygons_per_instance in enumerate(self.polygons):
- minxy = torch.as_tensor([float("inf"), float("inf")], dtype=torch.float32)
- maxxy = torch.zeros(2, dtype=torch.float32)
- for polygon in polygons_per_instance:
- coords = torch.from_numpy(polygon).view(-1, 2).to(dtype=torch.float32)
- minxy = torch.min(minxy, torch.min(coords, dim=0).values)
- maxxy = torch.max(maxxy, torch.max(coords, dim=0).values)
- boxes[idx, :2] = minxy
- boxes[idx, 2:] = maxxy
- return Boxes(boxes)
-
- def nonempty(self) -> torch.Tensor:
- """
- Find masks that are non-empty.
-
- Returns:
- Tensor:
- a BoolTensor which represents whether each mask is empty (False) or not (True).
- """
- keep = [1 if len(polygon) > 0 else 0 for polygon in self.polygons]
- return torch.from_numpy(np.asarray(keep, dtype=np.bool))
-
- def __getitem__(self, item: Union[int, slice, List[int], torch.BoolTensor]) -> "PolygonMasks":
- """
- Support indexing over the instances and return a `PolygonMasks` object.
- `item` can be:
-
- 1. An integer. It will return an object with only one instance.
- 2. A slice. It will return an object with the selected instances.
- 3. A list[int]. It will return an object with the selected instances,
-           corresponding to the indices in the list.
- 4. A vector mask of type BoolTensor, whose length is num_instances.
- It will return an object with the instances whose mask is nonzero.
- """
- if isinstance(item, int):
- selected_polygons = [self.polygons[item]]
- elif isinstance(item, slice):
- selected_polygons = self.polygons[item]
- elif isinstance(item, list):
- selected_polygons = [self.polygons[i] for i in item]
- elif isinstance(item, torch.Tensor):
- # Polygons is a list, so we have to move the indices back to CPU.
- if item.dtype == torch.bool:
- assert item.dim() == 1, item.shape
- item = item.nonzero().squeeze(1).cpu().numpy().tolist()
- elif item.dtype in [torch.int32, torch.int64]:
- item = item.cpu().numpy().tolist()
- else:
- raise ValueError("Unsupported tensor dtype={} for indexing!".format(item.dtype))
- selected_polygons = [self.polygons[i] for i in item]
- return PolygonMasks(selected_polygons)
-
- def __iter__(self) -> Iterator[List[np.ndarray]]:
- """
- Yields:
- list[ndarray]: the polygons for one instance.
- Each Tensor is a float64 vector representing a polygon.
- """
- return iter(self.polygons)
-
- def __repr__(self) -> str:
- s = self.__class__.__name__ + "("
- s += "num_instances={})".format(len(self.polygons))
- return s
-
- def __len__(self) -> int:
- return len(self.polygons)
-
- def crop_and_resize(self, boxes: torch.Tensor, mask_size: int) -> torch.Tensor:
- """
- Crop each mask by the given box, and resize results to (mask_size, mask_size).
- This can be used to prepare training targets for Mask R-CNN.
-
- Args:
- boxes (Tensor): Nx4 tensor storing the boxes for each mask
- mask_size (int): the size of the rasterized mask.
-
- Returns:
- Tensor: A bool tensor of shape (N, mask_size, mask_size), where
- N is the number of predicted boxes for this image.
- """
- assert len(boxes) == len(self), "{} != {}".format(len(boxes), len(self))
-
- device = boxes.device
- # Put boxes on the CPU, as the polygon representation is not efficient GPU-wise
- # (several small tensors for representing a single instance mask)
- boxes = boxes.to(torch.device("cpu"))
-
- results = [
- rasterize_polygons_within_box(poly, box.numpy(), mask_size)
- for poly, box in zip(self.polygons, boxes)
- ]
- """
- poly: list[list[float]], the polygons for one instance
- box: a tensor of shape (4,)
- """
- if len(results) == 0:
- return torch.empty(0, mask_size, mask_size, dtype=torch.bool, device=device)
- return torch.stack(results, dim=0).to(device=device)
-
- def area(self):
- """
- Computes area of the mask.
- Only works with Polygons, using the shoelace formula:
- https://stackoverflow.com/questions/24467972/calculate-area-of-polygon-given-x-y-coordinates
-
- Returns:
- Tensor: a vector, area for each instance
- """
-
- area = []
- for polygons_per_instance in self.polygons:
- area_per_instance = 0
- for p in polygons_per_instance:
- area_per_instance += polygon_area(p[0::2], p[1::2])
- area.append(area_per_instance)
-
- return torch.tensor(area)
-
- @staticmethod
- def cat(polymasks_list: List["PolygonMasks"]) -> "PolygonMasks":
- """
- Concatenates a list of PolygonMasks into a single PolygonMasks
-
- Arguments:
- polymasks_list (list[PolygonMasks])
-
- Returns:
- PolygonMasks: the concatenated PolygonMasks
- """
- assert isinstance(polymasks_list, (list, tuple))
- assert len(polymasks_list) > 0
- assert all(isinstance(polymask, PolygonMasks) for polymask in polymasks_list)
-
- cat_polymasks = type(polymasks_list[0])(
- list(itertools.chain.from_iterable(pm.polygons for pm in polymasks_list))
- )
- return cat_polymasks
-
-
-class ROIMasks:
- """
- Represent masks by N smaller masks defined in some ROIs. Once ROI boxes are given,
- full-image bitmask can be obtained by "pasting" the mask on the region defined
- by the corresponding ROI box.
- """
-
- def __init__(self, tensor: torch.Tensor):
- """
- Args:
- tensor: (N, M, M) mask tensor that defines the mask within each ROI.
- """
- if tensor.dim() != 3:
- raise ValueError("ROIMasks must take a masks of 3 dimension.")
- self.tensor = tensor
-
- def to(self, device: torch.device) -> "ROIMasks":
- return ROIMasks(self.tensor.to(device))
-
- @property
- def device(self) -> device:
- return self.tensor.device
-
- def __len__(self):
- return self.tensor.shape[0]
-
- def __getitem__(self, item) -> "ROIMasks":
- """
- Returns:
- ROIMasks: Create a new :class:`ROIMasks` by indexing.
-
- The following usage are allowed:
-
- 1. `new_masks = masks[2:10]`: return a slice of masks.
- 2. `new_masks = masks[vector]`, where vector is a torch.BoolTensor
- with `length = len(masks)`. Nonzero elements in the vector will be selected.
-
- Note that the returned object might share storage with this object,
- subject to Pytorch's indexing semantics.
- """
- t = self.tensor[item]
- if t.dim() != 3:
- raise ValueError(
- f"Indexing on ROIMasks with {item} returns a tensor with shape {t.shape}!"
- )
- return ROIMasks(t)
-
- @torch.jit.unused
- def __repr__(self) -> str:
- s = self.__class__.__name__ + "("
- s += "num_instances={})".format(len(self.tensor))
- return s
-
- @torch.jit.unused
- def to_bitmasks(self, boxes: torch.Tensor, height, width, threshold=0.5):
- """
- Args:
-
- """
- from detectron2.layers import paste_masks_in_image
-
- paste = retry_if_cuda_oom(paste_masks_in_image)
- bitmasks = paste(
- self.tensor,
- boxes,
- (height, width),
- threshold=threshold,
- )
- return BitMasks(bitmasks)
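A quick hedged sanity check of the shoelace helper defined at the top of this file (re-declared here so the snippet is self-contained):

import numpy as np

def polygon_area(x, y):
    # Shoelace formula, identical to the helper above.
    return 0.5 * np.abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

# Unit square and a 2x3 rectangle, vertices given counter-clockwise.
print(polygon_area(np.array([0, 1, 1, 0]), np.array([0, 0, 1, 1])))  # 1.0
print(polygon_area(np.array([0, 2, 2, 0]), np.array([0, 0, 3, 3])))  # 6.0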
diff --git a/spaces/Candeloro/DeepDanbooru_string/app.py b/spaces/Candeloro/DeepDanbooru_string/app.py
deleted file mode 100644
index 49019837c9207cc68cb37be0342f3bc44fd0decb..0000000000000000000000000000000000000000
--- a/spaces/Candeloro/DeepDanbooru_string/app.py
+++ /dev/null
@@ -1,185 +0,0 @@
-#!/usr/bin/env python
-
-from __future__ import annotations
-
-import argparse
-import functools
-import os
-import html
-import pathlib
-import tarfile
-
-import deepdanbooru as dd
-import gradio as gr
-import huggingface_hub
-import numpy as np
-import PIL.Image
-import tensorflow as tf
-import piexif
-import piexif.helper
-
-TITLE = 'DeepDanbooru String'
-
-TOKEN = os.environ['TOKEN']
-MODEL_REPO = 'CikeyQI/DeepDanbooru_string'
-MODEL_FILENAME = 'model-resnet_custom_v3.h5'
-LABEL_FILENAME = 'tags.txt'
-
-
-def parse_args() -> argparse.Namespace:
- parser = argparse.ArgumentParser()
- parser.add_argument('--score-slider-step', type=float, default=0.05)
- parser.add_argument('--score-threshold', type=float, default=0.5)
- parser.add_argument('--theme', type=str, default='dark-grass')
- parser.add_argument('--live', action='store_true')
- parser.add_argument('--share', action='store_true')
- parser.add_argument('--port', type=int)
- parser.add_argument('--disable-queue',
- dest='enable_queue',
- action='store_false')
- parser.add_argument('--allow-flagging', type=str, default='never')
- return parser.parse_args()
-
-
-def load_sample_image_paths() -> list[pathlib.Path]:
- image_dir = pathlib.Path('images')
- if not image_dir.exists():
- dataset_repo = 'hysts/sample-images-TADNE'
- path = huggingface_hub.hf_hub_download(dataset_repo,
- 'images.tar.gz',
- repo_type='dataset',
- use_auth_token=TOKEN)
- with tarfile.open(path) as f:
- f.extractall()
- return sorted(image_dir.glob('*'))
-
-
-def load_model() -> tf.keras.Model:
- path = huggingface_hub.hf_hub_download(MODEL_REPO,
- MODEL_FILENAME,
- use_auth_token=TOKEN)
- model = tf.keras.models.load_model(path)
- return model
-
-
-def load_labels() -> list[str]:
- path = huggingface_hub.hf_hub_download(MODEL_REPO,
- LABEL_FILENAME,
- use_auth_token=TOKEN)
- with open(path) as f:
- labels = [line.strip() for line in f.readlines()]
- return labels
-
-def plaintext_to_html(text):
-    text = "<p>" + "<br>\n".join([f"{html.escape(x)}" for x in text.split('\n')]) + "</p>"
- return text
-
-def predict(image: PIL.Image.Image, score_threshold: float,
- model: tf.keras.Model, labels: list[str]) -> dict[str, float]:
- rawimage = image
- _, height, width, _ = model.input_shape
- image = np.asarray(image)
- image = tf.image.resize(image,
- size=(height, width),
- method=tf.image.ResizeMethod.AREA,
- preserve_aspect_ratio=True)
- image = image.numpy()
- image = dd.image.transform_and_pad_image(image, width, height)
- image = image / 255.
- probs = model.predict(image[None, ...])[0]
- probs = probs.astype(float)
- res = dict()
- for prob, label in zip(probs.tolist(), labels):
- if prob < score_threshold:
- continue
- res[label] = prob
- b = dict(sorted(res.items(),key=lambda item:item[1], reverse=True))
- a = ', '.join(list(b.keys())).replace('_',' ').replace('(','\(').replace(')','\)')
- c = ', '.join(list(b.keys()))
-
- items = rawimage.info
- geninfo = ''
-
- if "exif" in rawimage.info:
- exif = piexif.load(rawimage.info["exif"])
- exif_comment = (exif or {}).get("Exif", {}).get(piexif.ExifIFD.UserComment, b'')
- try:
- exif_comment = piexif.helper.UserComment.load(exif_comment)
- except ValueError:
- exif_comment = exif_comment.decode('utf8', errors="ignore")
-
- items['exif comment'] = exif_comment
- geninfo = exif_comment
-
- for field in ['jfif', 'jfif_version', 'jfif_unit', 'jfif_density', 'dpi', 'exif',
- 'loop', 'background', 'timestamp', 'duration']:
- items.pop(field, None)
-
- geninfo = items.get('parameters', geninfo)
-
-    info = f"""
-<p><h4>PNG Info</h4></p>
-"""
-    for key, text in items.items():
-        info += f"""
-<div>
-<p><b>{plaintext_to_html(str(key))}</b></p>
-<p>{plaintext_to_html(str(text))}</p>
-</div>
-""".strip()+"\n"
-
-    if len(info) == 0:
-        message = "Nothing found in the image."
-        info = f"<div><p>{message}</p></div>"
-
- return (a,c,res,info)
-
-
-def main():
- args = parse_args()
- model = load_model()
- labels = load_labels()
-
- func = functools.partial(predict, model=model, labels=labels)
- func = functools.update_wrapper(func, predict)
-
- gr.Interface(
- func,
- [
- gr.inputs.Image(type='pil', label='Input'),
- gr.inputs.Slider(0,
- 1,
- step=args.score_slider_step,
- default=args.score_threshold,
- label='Score Threshold'),
- ],
- [
- gr.outputs.Textbox(label='Output (string)'),
- gr.outputs.Textbox(label='Output (raw string)'),
- gr.outputs.Label(label='Output (label)'),
- gr.outputs.HTML()
- ],
- examples=[
- ['miku.jpg',0.5],
- ['miku2.jpg',0.5]
- ],
- title=TITLE,
- description='''
-Demo for [KichangKim/DeepDanbooru](https://github.com/KichangKim/DeepDanbooru) with "ready to copy" prompt and a prompt analyzer.
-
-Modified from [hysts/DeepDanbooru](https://huggingface.co/spaces/hysts/DeepDanbooru)
-
-PNG Info code forked from [AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui)
- ''',
- theme=args.theme,
- allow_flagging=args.allow_flagging,
- live=args.live,
- ).launch(
- enable_queue=args.enable_queue,
- server_port=args.port,
- share=args.share,
- )
-
-
-if __name__ == '__main__':
- main()
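The prompt post-processing inside predict() is easy to miss; here is a standalone hedged illustration with made-up tags (no model required):

# Tags above the threshold are sorted by score, underscores become spaces,
# and parentheses are escaped so the string can be pasted into SD-style prompts.
res = {'long_hair': 0.98, 'smile_(happy)': 0.77}
b = dict(sorted(res.items(), key=lambda item: item[1], reverse=True))
a = ', '.join(list(b.keys())).replace('_', ' ').replace('(', '\\(').replace(')', '\\)')
print(a)   # long hair, smile \(happy\)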
diff --git a/spaces/CarlosMF/AI-ORUS-License-v1.0.0/app.py b/spaces/CarlosMF/AI-ORUS-License-v1.0.0/app.py
deleted file mode 100644
index 3d224ea22ad70009876a508cf22f87d19c675ebb..0000000000000000000000000000000000000000
--- a/spaces/CarlosMF/AI-ORUS-License-v1.0.0/app.py
+++ /dev/null
@@ -1,36 +0,0 @@
-import streamlit as st
-import base64
-
-
-PDF_WIDTH = 700
-PDF_HEIGHT = 1000
-PDF_PATH = "/AI%20ORUS%20License%20(1).pdf"
-
-def display_pdf(file):
- # Opening file from file path
- with open(file, "rb") as f:
- base64_pdf = base64.b64encode(f.read()).decode('utf-8')
-
- # Embedding PDF in HTML
-    pdf_display = F'<iframe src="data:application/pdf;base64,{base64_pdf}" width="{PDF_WIDTH}" height="{PDF_HEIGHT}" type="application/pdf"></iframe>'
-
- # Displaying File
- st.markdown(pdf_display, unsafe_allow_html=True)
-
-st.title("AI Open Responsible Use License - Version 1.0.0")
-
-st.markdown("## Purpose")
-
-st.markdown("This license covers any AI-specific material, such as neural networks or any other type of model, software, data, or other material that is designated as being available under this license. These materials, together, are referred to in this license as licensed materials. This license gives you as much permission as possible to use, share and improve the licensed materials. This license places no limitations on the use of output from any model, or any changes you make to any licensed materials. It should be interpreted to provide you with the maximum possible freedom.")
-
-st.markdown("## License")
-
-st.markdown("Each contributor licenses you to do everything with the licensed materials that would otherwise infringe that contributor’s rights, including without limitation copyright, patent, trade secret rights, and the right to use the data to train models, but only in compliance with applicable law. However, this license grants you no trademark or publicity rights.")
-
-st.markdown("## Indemnity")
-
-st.markdown("If you use the licensed materials in a manner not compliant with applicable law, or that otherwise causes any contributor legal liability relating to your use of the licensed materials, or that damages the reputation of any contributor, you must, as a condition of this license as a promise to each contributor, defend and indemnify every contributor against any resulting losses, damages or costs.")
-
-st.markdown("## No Liability")
-
-st.markdown("As far as the law allows, each licensed material is provided as is, without any warranty or condition, and no contributor will be liable to anyone for any damages related to this any licensed material or this license, under any kind of legal claim.")
\ No newline at end of file
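The reconstructed `pdf_display` line above follows the usual Streamlit pattern of embedding a base64-encoded PDF in an `<iframe>`. A self-contained sketch of that pattern is below; the file name and the width/height values are illustrative assumptions, not values recovered from the original Space.

```python
# Minimal sketch of the base64 <iframe> embed used by display_pdf above.
# The file name and dimensions are placeholders.
import base64

import streamlit as st

pdf_path = "AI ORUS License (1).pdf"  # hypothetical local file
with open(pdf_path, "rb") as f:
    encoded = base64.b64encode(f.read()).decode("utf-8")

st.markdown(
    f'<iframe src="data:application/pdf;base64,{encoded}" '
    f'width="700" height="1000" type="application/pdf"></iframe>',
    unsafe_allow_html=True,
)
```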
diff --git a/spaces/CikeyQI/meme-api/meme_generator/memes/follow/__init__.py b/spaces/CikeyQI/meme-api/meme_generator/memes/follow/__init__.py
deleted file mode 100644
index e71a0b030d5c3eb89a46e19a7a7b2c784d084dc0..0000000000000000000000000000000000000000
--- a/spaces/CikeyQI/meme-api/meme_generator/memes/follow/__init__.py
+++ /dev/null
@@ -1,44 +0,0 @@
-from typing import List
-
-from pil_utils import BuildImage, Text2Image
-
-from meme_generator import MemeArgsModel, add_meme
-from meme_generator.exception import TextOverLength
-
-
-def follow(images: List[BuildImage], texts: List[str], args: MemeArgsModel):
- img = images[0].circle().resize((200, 200))
-
- if texts:
- name = texts[0]
- else:
- if args.user_infos:
- user_info = args.user_infos[0]
- name = user_info.name
- if not name:
- name = "女同" if user_info.gender == "female" else "男同"
- else:
- name = "男同"
-
- name_img = Text2Image.from_text(name, 60).to_image()
- follow_img = Text2Image.from_text("关注了你", 60, fill="grey").to_image()
- text_width = max(name_img.width, follow_img.width)
- if text_width >= 1000:
- raise TextOverLength(name)
-
- frame = BuildImage.new("RGBA", (300 + text_width + 50, 300), (255, 255, 255, 0))
- frame.paste(img, (50, 50), alpha=True)
- frame.paste(name_img, (300, 135 - name_img.height), alpha=True)
- frame.paste(follow_img, (300, 145), alpha=True)
- return frame.save_jpg()
-
-
-add_meme(
- "follow",
- follow,
- min_images=1,
- max_images=1,
- min_texts=0,
- max_texts=1,
- keywords=["关注"],
-)
diff --git a/spaces/DHEIVER/Anomalias_no_Trato_Gastrointestinal/README.md b/spaces/DHEIVER/Anomalias_no_Trato_Gastrointestinal/README.md
deleted file mode 100644
index 62cd43172ab0a76ee1792e3c34df2bc93773a774..0000000000000000000000000000000000000000
--- a/spaces/DHEIVER/Anomalias_no_Trato_Gastrointestinal/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Anomalias No Trato Gastrointestinal
-emoji: 📈
-colorFrom: gray
-colorTo: red
-sdk: gradio
-sdk_version: 3.34.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/streams/tls.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/streams/tls.py
deleted file mode 100644
index 9f9e9fd89c891dd6285789811f7ce29a7b86c00f..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/streams/tls.py
+++ /dev/null
@@ -1,320 +0,0 @@
-from __future__ import annotations
-
-import logging
-import re
-import ssl
-from dataclasses import dataclass
-from functools import wraps
-from typing import Any, Callable, Mapping, Tuple, TypeVar
-
-from .. import (
- BrokenResourceError,
- EndOfStream,
- aclose_forcefully,
- get_cancelled_exc_class,
-)
-from .._core._typedattr import TypedAttributeSet, typed_attribute
-from ..abc import AnyByteStream, ByteStream, Listener, TaskGroup
-
-T_Retval = TypeVar("T_Retval")
-_PCTRTT = Tuple[Tuple[str, str], ...]
-_PCTRTTT = Tuple[_PCTRTT, ...]
-
-
-class TLSAttribute(TypedAttributeSet):
- """Contains Transport Layer Security related attributes."""
-
- #: the selected ALPN protocol
- alpn_protocol: str | None = typed_attribute()
- #: the channel binding for type ``tls-unique``
- channel_binding_tls_unique: bytes = typed_attribute()
- #: the selected cipher
- cipher: tuple[str, str, int] = typed_attribute()
- #: the peer certificate in dictionary form (see :meth:`ssl.SSLSocket.getpeercert`
- #: for more information)
- peer_certificate: dict[str, str | _PCTRTTT | _PCTRTT] | None = typed_attribute()
- #: the peer certificate in binary form
- peer_certificate_binary: bytes | None = typed_attribute()
- #: ``True`` if this is the server side of the connection
- server_side: bool = typed_attribute()
- #: ciphers shared by the client during the TLS handshake (``None`` if this is the
- #: client side)
- shared_ciphers: list[tuple[str, str, int]] | None = typed_attribute()
- #: the :class:`~ssl.SSLObject` used for encryption
- ssl_object: ssl.SSLObject = typed_attribute()
- #: ``True`` if this stream does (and expects) a closing TLS handshake when the
- #: stream is being closed
- standard_compatible: bool = typed_attribute()
- #: the TLS protocol version (e.g. ``TLSv1.2``)
- tls_version: str = typed_attribute()
-
-
-@dataclass(eq=False)
-class TLSStream(ByteStream):
- """
- A stream wrapper that encrypts all sent data and decrypts received data.
-
- This class has no public initializer; use :meth:`wrap` instead.
- All extra attributes from :class:`~TLSAttribute` are supported.
-
- :var AnyByteStream transport_stream: the wrapped stream
-
- """
-
- transport_stream: AnyByteStream
- standard_compatible: bool
- _ssl_object: ssl.SSLObject
- _read_bio: ssl.MemoryBIO
- _write_bio: ssl.MemoryBIO
-
- @classmethod
- async def wrap(
- cls,
- transport_stream: AnyByteStream,
- *,
- server_side: bool | None = None,
- hostname: str | None = None,
- ssl_context: ssl.SSLContext | None = None,
- standard_compatible: bool = True,
- ) -> TLSStream:
- """
- Wrap an existing stream with Transport Layer Security.
-
- This performs a TLS handshake with the peer.
-
- :param transport_stream: a bytes-transporting stream to wrap
- :param server_side: ``True`` if this is the server side of the connection,
-            ``False`` if this is the client side (if omitted, will be set to ``False``
-            if ``hostname`` has been provided, ``True`` otherwise). Used only to create
- a default context when an explicit context has not been provided.
- :param hostname: host name of the peer (if host name checking is desired)
- :param ssl_context: the SSLContext object to use (if not provided, a secure
- default will be created)
- :param standard_compatible: if ``False``, skip the closing handshake when closing the
- connection, and don't raise an exception if the peer does the same
- :raises ~ssl.SSLError: if the TLS handshake fails
-
- """
- if server_side is None:
- server_side = not hostname
-
- if not ssl_context:
- purpose = (
- ssl.Purpose.CLIENT_AUTH if server_side else ssl.Purpose.SERVER_AUTH
- )
- ssl_context = ssl.create_default_context(purpose)
-
- # Re-enable detection of unexpected EOFs if it was disabled by Python
- if hasattr(ssl, "OP_IGNORE_UNEXPECTED_EOF"):
- ssl_context.options &= ~ssl.OP_IGNORE_UNEXPECTED_EOF
-
- bio_in = ssl.MemoryBIO()
- bio_out = ssl.MemoryBIO()
- ssl_object = ssl_context.wrap_bio(
- bio_in, bio_out, server_side=server_side, server_hostname=hostname
- )
- wrapper = cls(
- transport_stream=transport_stream,
- standard_compatible=standard_compatible,
- _ssl_object=ssl_object,
- _read_bio=bio_in,
- _write_bio=bio_out,
- )
- await wrapper._call_sslobject_method(ssl_object.do_handshake)
- return wrapper
-
- async def _call_sslobject_method(
- self, func: Callable[..., T_Retval], *args: object
- ) -> T_Retval:
- while True:
- try:
- result = func(*args)
- except ssl.SSLWantReadError:
- try:
- # Flush any pending writes first
- if self._write_bio.pending:
- await self.transport_stream.send(self._write_bio.read())
-
- data = await self.transport_stream.receive()
- except EndOfStream:
- self._read_bio.write_eof()
- except OSError as exc:
- self._read_bio.write_eof()
- self._write_bio.write_eof()
- raise BrokenResourceError from exc
- else:
- self._read_bio.write(data)
- except ssl.SSLWantWriteError:
- await self.transport_stream.send(self._write_bio.read())
- except ssl.SSLSyscallError as exc:
- self._read_bio.write_eof()
- self._write_bio.write_eof()
- raise BrokenResourceError from exc
- except ssl.SSLError as exc:
- self._read_bio.write_eof()
- self._write_bio.write_eof()
- if (
- isinstance(exc, ssl.SSLEOFError)
- or "UNEXPECTED_EOF_WHILE_READING" in exc.strerror
- ):
- if self.standard_compatible:
- raise BrokenResourceError from exc
- else:
- raise EndOfStream from None
-
- raise
- else:
- # Flush any pending writes first
- if self._write_bio.pending:
- await self.transport_stream.send(self._write_bio.read())
-
- return result
-
- async def unwrap(self) -> tuple[AnyByteStream, bytes]:
- """
- Does the TLS closing handshake.
-
- :return: a tuple of (wrapped byte stream, bytes left in the read buffer)
-
- """
- await self._call_sslobject_method(self._ssl_object.unwrap)
- self._read_bio.write_eof()
- self._write_bio.write_eof()
- return self.transport_stream, self._read_bio.read()
-
- async def aclose(self) -> None:
- if self.standard_compatible:
- try:
- await self.unwrap()
- except BaseException:
- await aclose_forcefully(self.transport_stream)
- raise
-
- await self.transport_stream.aclose()
-
- async def receive(self, max_bytes: int = 65536) -> bytes:
- data = await self._call_sslobject_method(self._ssl_object.read, max_bytes)
- if not data:
- raise EndOfStream
-
- return data
-
- async def send(self, item: bytes) -> None:
- await self._call_sslobject_method(self._ssl_object.write, item)
-
- async def send_eof(self) -> None:
- tls_version = self.extra(TLSAttribute.tls_version)
- match = re.match(r"TLSv(\d+)(?:\.(\d+))?", tls_version)
- if match:
- major, minor = int(match.group(1)), int(match.group(2) or 0)
- if (major, minor) < (1, 3):
- raise NotImplementedError(
- f"send_eof() requires at least TLSv1.3; current "
- f"session uses {tls_version}"
- )
-
- raise NotImplementedError(
- "send_eof() has not yet been implemented for TLS streams"
- )
-
- @property
- def extra_attributes(self) -> Mapping[Any, Callable[[], Any]]:
- return {
- **self.transport_stream.extra_attributes,
- TLSAttribute.alpn_protocol: self._ssl_object.selected_alpn_protocol,
- TLSAttribute.channel_binding_tls_unique: self._ssl_object.get_channel_binding,
- TLSAttribute.cipher: self._ssl_object.cipher,
- TLSAttribute.peer_certificate: lambda: self._ssl_object.getpeercert(False),
- TLSAttribute.peer_certificate_binary: lambda: self._ssl_object.getpeercert(
- True
- ),
- TLSAttribute.server_side: lambda: self._ssl_object.server_side,
- TLSAttribute.shared_ciphers: lambda: self._ssl_object.shared_ciphers()
- if self._ssl_object.server_side
- else None,
- TLSAttribute.standard_compatible: lambda: self.standard_compatible,
- TLSAttribute.ssl_object: lambda: self._ssl_object,
- TLSAttribute.tls_version: self._ssl_object.version,
- }
-
-
-@dataclass(eq=False)
-class TLSListener(Listener[TLSStream]):
- """
- A convenience listener that wraps another listener and auto-negotiates a TLS session on every
- accepted connection.
-
- If the TLS handshake times out or raises an exception, :meth:`handle_handshake_error` is
- called to do whatever post-mortem processing is deemed necessary.
-
- Supports only the :attr:`~TLSAttribute.standard_compatible` extra attribute.
-
- :param Listener listener: the listener to wrap
- :param ssl_context: the SSL context object
- :param standard_compatible: a flag passed through to :meth:`TLSStream.wrap`
- :param handshake_timeout: time limit for the TLS handshake
- (passed to :func:`~anyio.fail_after`)
- """
-
- listener: Listener[Any]
- ssl_context: ssl.SSLContext
- standard_compatible: bool = True
- handshake_timeout: float = 30
-
- @staticmethod
- async def handle_handshake_error(exc: BaseException, stream: AnyByteStream) -> None:
- """
- Handle an exception raised during the TLS handshake.
-
- This method does 3 things:
-
- #. Forcefully closes the original stream
- #. Logs the exception (unless it was a cancellation exception) using the
- ``anyio.streams.tls`` logger
- #. Reraises the exception if it was a base exception or a cancellation exception
-
- :param exc: the exception
- :param stream: the original stream
-
- """
- await aclose_forcefully(stream)
-
- # Log all except cancellation exceptions
- if not isinstance(exc, get_cancelled_exc_class()):
- logging.getLogger(__name__).exception("Error during TLS handshake")
-
- # Only reraise base exceptions and cancellation exceptions
- if not isinstance(exc, Exception) or isinstance(exc, get_cancelled_exc_class()):
- raise
-
- async def serve(
- self,
- handler: Callable[[TLSStream], Any],
- task_group: TaskGroup | None = None,
- ) -> None:
- @wraps(handler)
- async def handler_wrapper(stream: AnyByteStream) -> None:
- from .. import fail_after
-
- try:
- with fail_after(self.handshake_timeout):
- wrapped_stream = await TLSStream.wrap(
- stream,
- ssl_context=self.ssl_context,
- standard_compatible=self.standard_compatible,
- )
- except BaseException as exc:
- await self.handle_handshake_error(exc, stream)
- else:
- await handler(wrapped_stream)
-
- await self.listener.serve(handler_wrapper, task_group)
-
- async def aclose(self) -> None:
- await self.listener.aclose()
-
- @property
- def extra_attributes(self) -> Mapping[Any, Callable[[], Any]]:
- return {
- TLSAttribute.standard_compatible: lambda: self.standard_compatible,
- }
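As a usage note for the `TLSStream` class above, here is a minimal client-side sketch of how `wrap` is typically combined with anyio's TCP helpers; the host name and request bytes are illustrative.

```python
# Client-side sketch: connect over TCP, wrap the stream with TLS, exchange a
# few bytes, then run the closing handshake via aclose(). Host and payload are
# placeholders.
import anyio
from anyio.streams.tls import TLSStream


async def main() -> None:
    tcp_stream = await anyio.connect_tcp("example.com", 443)
    tls_stream = await TLSStream.wrap(tcp_stream, hostname="example.com")
    await tls_stream.send(b"HEAD / HTTP/1.1\r\nHost: example.com\r\n\r\n")
    print(await tls_stream.receive())
    await tls_stream.aclose()


anyio.run(main)
```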
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/encoders.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/encoders.py
deleted file mode 100644
index b542749f250a313f01fe3a0fcffd1897c9fec90c..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/encoders.py
+++ /dev/null
@@ -1,249 +0,0 @@
-import dataclasses
-import datetime
-from collections import defaultdict, deque
-from decimal import Decimal
-from enum import Enum
-from ipaddress import (
- IPv4Address,
- IPv4Interface,
- IPv4Network,
- IPv6Address,
- IPv6Interface,
- IPv6Network,
-)
-from pathlib import Path, PurePath
-from re import Pattern
-from types import GeneratorType
-from typing import Any, Callable, Dict, List, Optional, Tuple, Type, Union
-from uuid import UUID
-
-from fastapi.types import IncEx
-from pydantic import BaseModel
-from pydantic.color import Color
-from pydantic.networks import NameEmail
-from pydantic.types import SecretBytes, SecretStr
-
-from ._compat import PYDANTIC_V2, MultiHostUrl, Url, _model_dump
-
-
-# Taken from Pydantic v1 as is
-def isoformat(o: Union[datetime.date, datetime.time]) -> str:
- return o.isoformat()
-
-
-# Taken from Pydantic v1 as is
-# TODO: pv2 should this return strings instead?
-def decimal_encoder(dec_value: Decimal) -> Union[int, float]:
- """
-    Encodes a Decimal as int if there's no exponent, otherwise float
-
- This is useful when we use ConstrainedDecimal to represent Numeric(x,0)
-    where an integer (but not int typed) is used. Encoding this as a float
- results in failed round-tripping between encode and parse.
- Our Id type is a prime example of this.
-
- >>> decimal_encoder(Decimal("1.0"))
- 1.0
-
- >>> decimal_encoder(Decimal("1"))
- 1
- """
- if dec_value.as_tuple().exponent >= 0: # type: ignore[operator]
- return int(dec_value)
- else:
- return float(dec_value)
-
-
-ENCODERS_BY_TYPE: Dict[Type[Any], Callable[[Any], Any]] = {
- bytes: lambda o: o.decode(),
- Color: str,
- datetime.date: isoformat,
- datetime.datetime: isoformat,
- datetime.time: isoformat,
- datetime.timedelta: lambda td: td.total_seconds(),
- Decimal: decimal_encoder,
- Enum: lambda o: o.value,
- frozenset: list,
- deque: list,
- GeneratorType: list,
- IPv4Address: str,
- IPv4Interface: str,
- IPv4Network: str,
- IPv6Address: str,
- IPv6Interface: str,
- IPv6Network: str,
- NameEmail: str,
- Path: str,
- Pattern: lambda o: o.pattern,
- SecretBytes: str,
- SecretStr: str,
- set: list,
- UUID: str,
- Url: str,
- MultiHostUrl: str,
-}
-
-
-def generate_encoders_by_class_tuples(
- type_encoder_map: Dict[Any, Callable[[Any], Any]]
-) -> Dict[Callable[[Any], Any], Tuple[Any, ...]]:
- encoders_by_class_tuples: Dict[Callable[[Any], Any], Tuple[Any, ...]] = defaultdict(
- tuple
- )
- for type_, encoder in type_encoder_map.items():
- encoders_by_class_tuples[encoder] += (type_,)
- return encoders_by_class_tuples
-
-
-encoders_by_class_tuples = generate_encoders_by_class_tuples(ENCODERS_BY_TYPE)
-
-
-def jsonable_encoder(
- obj: Any,
- include: Optional[IncEx] = None,
- exclude: Optional[IncEx] = None,
- by_alias: bool = True,
- exclude_unset: bool = False,
- exclude_defaults: bool = False,
- exclude_none: bool = False,
- custom_encoder: Optional[Dict[Any, Callable[[Any], Any]]] = None,
- sqlalchemy_safe: bool = True,
-) -> Any:
- custom_encoder = custom_encoder or {}
- if custom_encoder:
- if type(obj) in custom_encoder:
- return custom_encoder[type(obj)](obj)
- else:
- for encoder_type, encoder_instance in custom_encoder.items():
- if isinstance(obj, encoder_type):
- return encoder_instance(obj)
- if include is not None and not isinstance(include, (set, dict)):
- include = set(include)
- if exclude is not None and not isinstance(exclude, (set, dict)):
- exclude = set(exclude)
- if isinstance(obj, BaseModel):
- # TODO: remove when deprecating Pydantic v1
- encoders: Dict[Any, Any] = {}
- if not PYDANTIC_V2:
- encoders = getattr(obj.__config__, "json_encoders", {}) # type: ignore[attr-defined]
- if custom_encoder:
- encoders.update(custom_encoder)
- obj_dict = _model_dump(
- obj,
- mode="json",
- include=include,
- exclude=exclude,
- by_alias=by_alias,
- exclude_unset=exclude_unset,
- exclude_none=exclude_none,
- exclude_defaults=exclude_defaults,
- )
- if "__root__" in obj_dict:
- obj_dict = obj_dict["__root__"]
- return jsonable_encoder(
- obj_dict,
- exclude_none=exclude_none,
- exclude_defaults=exclude_defaults,
- # TODO: remove when deprecating Pydantic v1
- custom_encoder=encoders,
- sqlalchemy_safe=sqlalchemy_safe,
- )
- if dataclasses.is_dataclass(obj):
- obj_dict = dataclasses.asdict(obj)
- return jsonable_encoder(
- obj_dict,
- include=include,
- exclude=exclude,
- by_alias=by_alias,
- exclude_unset=exclude_unset,
- exclude_defaults=exclude_defaults,
- exclude_none=exclude_none,
- custom_encoder=custom_encoder,
- sqlalchemy_safe=sqlalchemy_safe,
- )
- if isinstance(obj, Enum):
- return obj.value
- if isinstance(obj, PurePath):
- return str(obj)
- if isinstance(obj, (str, int, float, type(None))):
- return obj
- if isinstance(obj, dict):
- encoded_dict = {}
- allowed_keys = set(obj.keys())
- if include is not None:
- allowed_keys &= set(include)
- if exclude is not None:
- allowed_keys -= set(exclude)
- for key, value in obj.items():
- if (
- (
- not sqlalchemy_safe
- or (not isinstance(key, str))
- or (not key.startswith("_sa"))
- )
- and (value is not None or not exclude_none)
- and key in allowed_keys
- ):
- encoded_key = jsonable_encoder(
- key,
- by_alias=by_alias,
- exclude_unset=exclude_unset,
- exclude_none=exclude_none,
- custom_encoder=custom_encoder,
- sqlalchemy_safe=sqlalchemy_safe,
- )
- encoded_value = jsonable_encoder(
- value,
- by_alias=by_alias,
- exclude_unset=exclude_unset,
- exclude_none=exclude_none,
- custom_encoder=custom_encoder,
- sqlalchemy_safe=sqlalchemy_safe,
- )
- encoded_dict[encoded_key] = encoded_value
- return encoded_dict
- if isinstance(obj, (list, set, frozenset, GeneratorType, tuple, deque)):
- encoded_list = []
- for item in obj:
- encoded_list.append(
- jsonable_encoder(
- item,
- include=include,
- exclude=exclude,
- by_alias=by_alias,
- exclude_unset=exclude_unset,
- exclude_defaults=exclude_defaults,
- exclude_none=exclude_none,
- custom_encoder=custom_encoder,
- sqlalchemy_safe=sqlalchemy_safe,
- )
- )
- return encoded_list
-
- if type(obj) in ENCODERS_BY_TYPE:
- return ENCODERS_BY_TYPE[type(obj)](obj)
- for encoder, classes_tuple in encoders_by_class_tuples.items():
- if isinstance(obj, classes_tuple):
- return encoder(obj)
-
- try:
- data = dict(obj)
- except Exception as e:
- errors: List[Exception] = []
- errors.append(e)
- try:
- data = vars(obj)
- except Exception as e:
- errors.append(e)
- raise ValueError(errors) from e
- return jsonable_encoder(
- data,
- include=include,
- exclude=exclude,
- by_alias=by_alias,
- exclude_unset=exclude_unset,
- exclude_defaults=exclude_defaults,
- exclude_none=exclude_none,
- custom_encoder=custom_encoder,
- sqlalchemy_safe=sqlalchemy_safe,
- )
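To make the role of the encoder table above concrete, here is a short example of `jsonable_encoder` converting a Pydantic model with non-JSON-native fields into plain Python types; the model and values are made up.

```python
# UUID and datetime are not JSON-native; jsonable_encoder maps them to strings
# so the result can be passed straight to json.dumps(). Model and values are
# illustrative.
import datetime
import json
import uuid

from fastapi.encoders import jsonable_encoder
from pydantic import BaseModel


class Order(BaseModel):
    id: uuid.UUID
    created_at: datetime.datetime


order = Order(id=uuid.uuid4(), created_at=datetime.datetime(2023, 5, 1, 12, 0))
payload = jsonable_encoder(order)
print(json.dumps(payload))  # e.g. {"id": "...", "created_at": "2023-05-01T12:00:00"}
```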
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/C_F_F_.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/C_F_F_.py
deleted file mode 100644
index c231599e37b3a5864a774387d717baf297957876..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/C_F_F_.py
+++ /dev/null
@@ -1,46 +0,0 @@
-from io import BytesIO
-from fontTools import cffLib
-from . import DefaultTable
-
-
-class table_C_F_F_(DefaultTable.DefaultTable):
- def __init__(self, tag=None):
- DefaultTable.DefaultTable.__init__(self, tag)
- self.cff = cffLib.CFFFontSet()
- self._gaveGlyphOrder = False
-
- def decompile(self, data, otFont):
- self.cff.decompile(BytesIO(data), otFont, isCFF2=False)
- assert len(self.cff) == 1, "can't deal with multi-font CFF tables."
-
- def compile(self, otFont):
- f = BytesIO()
- self.cff.compile(f, otFont, isCFF2=False)
- return f.getvalue()
-
- def haveGlyphNames(self):
- if hasattr(self.cff[self.cff.fontNames[0]], "ROS"):
- return False # CID-keyed font
- else:
- return True
-
- def getGlyphOrder(self):
- if self._gaveGlyphOrder:
- from fontTools import ttLib
-
- raise ttLib.TTLibError("illegal use of getGlyphOrder()")
- self._gaveGlyphOrder = True
- return self.cff[self.cff.fontNames[0]].getGlyphOrder()
-
- def setGlyphOrder(self, glyphOrder):
- pass
- # XXX
- # self.cff[self.cff.fontNames[0]].setGlyphOrder(glyphOrder)
-
- def toXML(self, writer, otFont):
- self.cff.toXML(writer)
-
- def fromXML(self, name, attrs, content, otFont):
- if not hasattr(self, "cff"):
- self.cff = cffLib.CFFFontSet()
- self.cff.fromXML(name, attrs, content, otFont)
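For context on how the table class above is reached in practice, a small sketch using fontTools' high-level API; the font path is a placeholder.

```python
# fontTools instantiates table_C_F_F_ for the "CFF " tag when an OpenType/CFF
# font is loaded. The font path below is a placeholder.
from fontTools.ttLib import TTFont

font = TTFont("SomeFont.otf")
if "CFF " in font:
    cff_table = font["CFF "]           # a table_C_F_F_ instance
    print(cff_table.haveGlyphNames())  # False for CID-keyed fonts
```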
diff --git a/spaces/DShrimp/PoseMaker/README.md b/spaces/DShrimp/PoseMaker/README.md
deleted file mode 100644
index bc9b47b85a5178627db226125ccc2cba9d6ad569..0000000000000000000000000000000000000000
--- a/spaces/DShrimp/PoseMaker/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: PoseMaker
-emoji: ⚡
-colorFrom: indigo
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.18.0
-app_file: start.py
-pinned: false
-license: creativeml-openrail-m
-duplicated_from: jonigata/PoseMaker
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/DaleChen/AutoGPT/autogpt/commands/execute_code.py b/spaces/DaleChen/AutoGPT/autogpt/commands/execute_code.py
deleted file mode 100644
index 11266f852727f2f8aedbc995b1e504a17acbfb77..0000000000000000000000000000000000000000
--- a/spaces/DaleChen/AutoGPT/autogpt/commands/execute_code.py
+++ /dev/null
@@ -1,158 +0,0 @@
-"""Execute code in a Docker container"""
-import os
-import subprocess
-
-import docker
-from docker.errors import ImageNotFound
-
-from autogpt.workspace import WORKSPACE_PATH, path_in_workspace
-
-
-def execute_python_file(file: str) -> str:
- """Execute a Python file in a Docker container and return the output
-
- Args:
- file (str): The name of the file to execute
-
- Returns:
- str: The output of the file
- """
-
- print(f"Executing file '{file}' in workspace '{WORKSPACE_PATH}'")
-
- if not file.endswith(".py"):
- return "Error: Invalid file type. Only .py files are allowed."
-
- file_path = path_in_workspace(file)
-
- if not os.path.isfile(file_path):
- return f"Error: File '{file}' does not exist."
-
- if we_are_running_in_a_docker_container():
- result = subprocess.run(
- f"python {file_path}", capture_output=True, encoding="utf8", shell=True
- )
- if result.returncode == 0:
- return result.stdout
- else:
- return f"Error: {result.stderr}"
-
- try:
- client = docker.from_env()
-
- # You can replace this with the desired Python image/version
- # You can find available Python images on Docker Hub:
- # https://hub.docker.com/_/python
- image_name = "python:3-alpine"
- try:
- client.images.get(image_name)
- print(f"Image '{image_name}' found locally")
- except ImageNotFound:
- print(f"Image '{image_name}' not found locally, pulling from Docker Hub")
- # Use the low-level API to stream the pull response
- low_level_client = docker.APIClient()
- for line in low_level_client.pull(image_name, stream=True, decode=True):
- # Print the status and progress, if available
- status = line.get("status")
- progress = line.get("progress")
- if status and progress:
- print(f"{status}: {progress}")
- elif status:
- print(status)
-
- container = client.containers.run(
- image_name,
- f"python {file}",
- volumes={
- os.path.abspath(WORKSPACE_PATH): {
- "bind": "/workspace",
- "mode": "ro",
- }
- },
- working_dir="/workspace",
- stderr=True,
- stdout=True,
- detach=True,
- )
-
- container.wait()
- logs = container.logs().decode("utf-8")
- container.remove()
-
- # print(f"Execution complete. Output: {output}")
- # print(f"Logs: {logs}")
-
- return logs
-
- except docker.errors.DockerException as e:
- print(
- "Could not run the script in a container. If you haven't already, please install Docker https://docs.docker.com/get-docker/"
- )
- return f"Error: {str(e)}"
-
- except Exception as e:
- return f"Error: {str(e)}"
-
-
-def execute_shell(command_line: str) -> str:
- """Execute a shell command and return the output
-
- Args:
- command_line (str): The command line to execute
-
- Returns:
- str: The output of the command
- """
- current_dir = os.getcwd()
- # Change dir into workspace if necessary
- if str(WORKSPACE_PATH) not in current_dir:
- os.chdir(WORKSPACE_PATH)
-
- print(f"Executing command '{command_line}' in working directory '{os.getcwd()}'")
-
- result = subprocess.run(command_line, capture_output=True, shell=True)
- output = f"STDOUT:\n{result.stdout}\nSTDERR:\n{result.stderr}"
-
- # Change back to whatever the prior working dir was
-
- os.chdir(current_dir)
-
- return output
-
-
-def execute_shell_popen(command_line) -> str:
- """Execute a shell command with Popen and returns an english description
- of the event and the process id
-
- Args:
- command_line (str): The command line to execute
-
- Returns:
- str: Description of the fact that the process started and its id
- """
- current_dir = os.getcwd()
- # Change dir into workspace if necessary
- if str(WORKSPACE_PATH) not in current_dir:
- os.chdir(WORKSPACE_PATH)
-
- print(f"Executing command '{command_line}' in working directory '{os.getcwd()}'")
-
- do_not_show_output = subprocess.DEVNULL
- process = subprocess.Popen(
- command_line, shell=True, stdout=do_not_show_output, stderr=do_not_show_output
- )
-
- # Change back to whatever the prior working dir was
-
- os.chdir(current_dir)
-
- return f"Subprocess started with PID:'{str(process.pid)}'"
-
-
-def we_are_running_in_a_docker_container() -> bool:
- """Check if we are running in a Docker container
-
- Returns:
- bool: True if we are running in a Docker container, False otherwise
- """
- return os.path.exists("/.dockerenv")
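A condensed sketch of the docker-py pattern that `execute_python_file` relies on, assuming a local Docker daemon and the docker SDK are available; the workspace path and script name are placeholders.

```python
# Mount the workspace read-only, run the script in a throwaway python:3-alpine
# container, wait for it, then collect and print the logs. Paths are placeholders.
import docker

client = docker.from_env()
container = client.containers.run(
    "python:3-alpine",
    "python hello.py",
    volumes={"/tmp/workspace": {"bind": "/workspace", "mode": "ro"}},
    working_dir="/workspace",
    detach=True,
)
container.wait()
print(container.logs().decode("utf-8"))
container.remove()
```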
diff --git a/spaces/Danuuo/GPTDocs/README.md b/spaces/Danuuo/GPTDocs/README.md
deleted file mode 100644
index 097db6536b2408d9435f95114cdd38192415219a..0000000000000000000000000000000000000000
--- a/spaces/Danuuo/GPTDocs/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: PdfChatter
-emoji: 🏢
-colorFrom: indigo
-colorTo: green
-sdk: gradio
-sdk_version: 3.20.1
-app_file: app.py
-pinned: false
-license: afl-3.0
-duplicated_from: bhaskartripathi/pdfChatter
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Dinoking/Garbage-Classifier-V2/README.md b/spaces/Dinoking/Garbage-Classifier-V2/README.md
deleted file mode 100644
index cf5aa75f972078a9d4093adf7be49aaf95af4cf5..0000000000000000000000000000000000000000
--- a/spaces/Dinoking/Garbage-Classifier-V2/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Garbage Classifier V2
-emoji: ♻️
-colorFrom: red
-colorTo: red
-sdk: gradio
-sdk_version: 3.1.3
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Dref360/spectral-metric/app.py b/spaces/Dref360/spectral-metric/app.py
deleted file mode 100644
index 0841925e2aeef317706daf2622207ae84177f27f..0000000000000000000000000000000000000000
--- a/spaces/Dref360/spectral-metric/app.py
+++ /dev/null
@@ -1,177 +0,0 @@
-import streamlit as st
-from datasets import load_dataset
-from sentence_transformers import SentenceTransformer
-import torch
-from spectral_metric.estimator import CumulativeGradientEstimator
-import numpy as np
-import seaborn as sns
-import matplotlib.pyplot as plt
-from spectral_metric.visualize import make_graph
-from scipy.stats import entropy
-import pandas as pd
-
-from utils import show_most_confused
-
-
-AVAILABLE_DATASETS = [
- ("clinc_oos", "small"),
- ("clinc_oos", "imbalanced"),
- ("banking77",),
- ("tweet_eval", "emoji"),
- ("tweet_eval", "stance_climate")
-]
-
-label_column_mapping = {
- "clinc_oos": "intent",
- "banking77": "label",
- "tweet_eval": "label",
-}
-
-st.title("Perform a data-driven analysis using `spectral-metric`")
-st.markdown(
- """Today, I would like to analyze this dataset and perform a
- data-driven analysis by `sentence-transformers` to extract features
- and `spectral_metric` to perform a spectral analysis of the dataset.
-
-For support, please submit an issue on [our repo](https://github.com/Dref360/spectral-metric) or [contact me directly](https://github.com/Dref360)
-"""
-)
-
-st.markdown(
- """
-Let's load your dataset; we will run our analysis on the train set.
-"""
-)
-
-dataset_name = st.selectbox("Select your dataset", AVAILABLE_DATASETS)
-if st.button("Start the analysis"):
-
- label_column = label_column_mapping[dataset_name[0]]
-
- # We perform the analysis on the train set.
- ds = load_dataset(*dataset_name)["train"]
- class_names = ds.features[label_column].names
- ds
-
- # I use all-MiniLM-L12-v2 as it is a good compromise between speed and performance.
- embedder = SentenceTransformer("all-MiniLM-L12-v2")
- # We will get **normalized** features for the dataset using our embedder.
- with st.spinner(text="Computing embeddings..."):
- features = embedder.encode(
- ds["text"],
- device=0 if torch.cuda.is_available() else "cpu",
- normalize_embeddings=True,
- )
-
- st.markdown(
- """
- ### Running the spectral analysis
-
- Now that we have our embeddings extracted by our sentence embedder, we can make an in-depth analysis of these features.
-
- To do so, we will use CSG (Branchaud-Charron et al, 2019), a technique that combines Probability Product Kernels (Jebara et al, 2004) and spectral clustering to analyze a dataset without training a model.
-
- In this notebook, we won't use the actual CSG metrics, but we will use the $W$ matrix.
- This matrix is computed as:
- * Run a Probabilistic K-NN on the dataset (optionally done via Monte-Carlo)
- * Compute the average prediction per class (results in the $S$ matrix)
-    * Symmetrize this matrix using the Bray-Curtis distance, a metric designed to compare samplings from a distribution.
-
- These steps are all done by `spectral_metric.estimator.CumulativeGradientEstimator`.
- """
- )
- X, y = features, np.array(ds[label_column]) # Your dataset with shape [N, ?], [N]
- estimator = CumulativeGradientEstimator(M_sample=250, k_nearest=9, distance="cosine")
- estimator.fit(data=X, target=y)
-
- fig, ax = plt.subplots(figsize=(10, 5))
- sns.heatmap(estimator.W, ax=ax, cmap="rocket_r")
- ax.set_title(f"Similarity between classes in {dataset_name[0]}")
- st.pyplot(fig)
-
- st.markdown(
- """
- This figure will be hard to read on most datasets, so we need to go deeper.
- Let's do the following analysis:
-    1. Find the class with the highest entropy, i.e. the class that is most confused with the others.
- 2. Find the 5 pairs of classes that are the most confused.
- 3. Find the items in these pairs that contribute to the confusion.
- """
- )
-
-
- entropy_per_class = entropy(estimator.W / estimator.W.sum(-1)[:, None], axis=-1)
- st.markdown(
- f"Most confused class (highest entropy): {class_names[np.argmax(entropy_per_class)]}",
- )
- st.markdown(
- f"Least confused class (lowest entropy): {class_names[np.argmin(entropy_per_class)]}",
- )
-
- pairs = list(zip(*np.unravel_index(np.argsort(estimator.W, axis=None), estimator.W.shape)))[::-1]
- pairs = [(i,j) for i,j in pairs if i != j]
-
- lst = []
- for idx, (i,j) in enumerate(pairs[::2][:10]):
- lst.append({"Intent A" : class_names[i], "Intent B": class_names[j], "Similarity": estimator.W[i,j]})
-
- st.title("Most similar pairs")
- st.dataframe(pd.DataFrame(lst).sort_values("Similarity", ascending=False))
-
-
- st.markdown("""
- ## Analysis
-    By looking at the top-10 most similar pairs, we get some good insights into the dataset.
-    While this does not guarantee that the classifier trained downstream will have issues with these pairs,
-    we know that these intents are similar.
-    As a consequence, the classifier might not be able to separate them easily.
-
-
- Let's now look at which utterance is contributing the most to the confusion.
- """)
-
- first_pair = pairs[0]
- second_pair = pairs[2]
- st.dataframe(pd.DataFrame({**show_most_confused(ds,first_pair[0], first_pair[1], estimator, class_names),
- **show_most_confused(ds, first_pair[1], first_pair[0], estimator, class_names)}),
- width=1000)
-
- st.markdown("### We can do the same for the second pair")
-
- st.dataframe(pd.DataFrame({**show_most_confused(ds, second_pair[0], second_pair[1], estimator, class_names),
- **show_most_confused(ds, second_pair[1], second_pair[0], estimator, class_names)}),
- width=1000)
-
- st.markdown(f"""
- From the top-5 most confused examples per pair, we can see that the sentences are quite similar.
-    While a human could easily separate the two intents, we see that the sentences are made of the same words, which might confuse the classifier.
-
- Some sentences could be seen as mislabelled.
-    Of course, these features come from a model that was not trained to separate these classes;
-    they come from a general-purpose language model.
- The goal of this analysis is to give insights to the data scientist before they train an expensive model.
- If we were to train a model on this dataset, the model could probably handle the confusion between `{class_names[first_pair[0]]}`
- and `{class_names[first_pair[1]]}`,
- but maybe not easily.
-
-
- ## Conclusion
-
-    In this tutorial, we covered how to conduct a data-driven analysis on a text classification dataset.
-    By using sentence embeddings and the `spectral_metric` library, we found the intents most likely to be confused and the utterances that caused this confusion.
-
- Following our analysis, we could take the following actions:
- 1. Upweight the classes that are confused during training for the model to better learn to separate them.
- 2. Merge similar classes together.
- 3. Analyse sentences that are confusing to find mislabelled sentences.
-
- If you have any questions, suggestions or ideas for this library please reach out:
-
- 1. frederic.branchaud.charron@gmail.com
- 2. [@Dref360 on Github](https://github.com/Dref360)
-
-
- If you have a dataset that you think would be a good fit for this analysis let me know too!
- """)
-
\ No newline at end of file
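As a compact, standalone companion to the app above, here is a sketch of `CumulativeGradientEstimator` on synthetic features, using only the attributes the app itself relies on (`fit` and `W`); the data is random, so the printed result carries no meaning.

```python
# Fit the estimator on fake normalized embeddings and reuse the app's
# entropy-per-class trick to flag the most confused class. Data is synthetic.
import numpy as np
from scipy.stats import entropy
from spectral_metric.estimator import CumulativeGradientEstimator

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 32))    # stand-in for sentence embeddings
y = rng.integers(0, 5, size=300)  # 5 synthetic classes

estimator = CumulativeGradientEstimator(M_sample=100, k_nearest=9, distance="cosine")
estimator.fit(data=X, target=y)

per_class_entropy = entropy(estimator.W / estimator.W.sum(-1)[:, None], axis=-1)
print("most confused class:", int(np.argmax(per_class_entropy)))
```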
diff --git a/spaces/EPFL-VILAB/MultiMAE/mask2former/modeling/matcher.py b/spaces/EPFL-VILAB/MultiMAE/mask2former/modeling/matcher.py
deleted file mode 100644
index 7c6af7f874e9736c598726d1945a2622c0b93bc5..0000000000000000000000000000000000000000
--- a/spaces/EPFL-VILAB/MultiMAE/mask2former/modeling/matcher.py
+++ /dev/null
@@ -1,189 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# Modified by Bowen Cheng from https://github.com/facebookresearch/detr/blob/master/models/matcher.py
-"""
-Modules to compute the matching cost and solve the corresponding LSAP.
-"""
-import torch
-import torch.nn.functional as F
-from scipy.optimize import linear_sum_assignment
-from torch import nn
-from torch.cuda.amp import autocast
-
-from detectron2.projects.point_rend.point_features import point_sample
-
-
-def batch_dice_loss(inputs: torch.Tensor, targets: torch.Tensor):
- """
- Compute the DICE loss, similar to generalized IOU for masks
- Args:
- inputs: A float tensor of arbitrary shape.
- The predictions for each example.
- targets: A float tensor with the same shape as inputs. Stores the binary
- classification label for each element in inputs
- (0 for the negative class and 1 for the positive class).
- """
- inputs = inputs.sigmoid()
- inputs = inputs.flatten(1)
- numerator = 2 * torch.einsum("nc,mc->nm", inputs, targets)
- denominator = inputs.sum(-1)[:, None] + targets.sum(-1)[None, :]
- loss = 1 - (numerator + 1) / (denominator + 1)
- return loss
-
-
-batch_dice_loss_jit = torch.jit.script(
- batch_dice_loss
-) # type: torch.jit.ScriptModule
-
-
-def batch_sigmoid_ce_loss(inputs: torch.Tensor, targets: torch.Tensor):
- """
- Args:
- inputs: A float tensor of arbitrary shape.
- The predictions for each example.
- targets: A float tensor with the same shape as inputs. Stores the binary
- classification label for each element in inputs
- (0 for the negative class and 1 for the positive class).
- Returns:
- Loss tensor
- """
- hw = inputs.shape[1]
-
- pos = F.binary_cross_entropy_with_logits(
- inputs, torch.ones_like(inputs), reduction="none"
- )
- neg = F.binary_cross_entropy_with_logits(
- inputs, torch.zeros_like(inputs), reduction="none"
- )
-
- loss = torch.einsum("nc,mc->nm", pos, targets) + torch.einsum(
- "nc,mc->nm", neg, (1 - targets)
- )
-
- return loss / hw
-
-
-batch_sigmoid_ce_loss_jit = torch.jit.script(
- batch_sigmoid_ce_loss
-) # type: torch.jit.ScriptModule
-
-
-class HungarianMatcher(nn.Module):
- """This class computes an assignment between the targets and the predictions of the network
-
- For efficiency reasons, the targets don't include the no_object. Because of this, in general,
- there are more predictions than targets. In this case, we do a 1-to-1 matching of the best predictions,
- while the others are un-matched (and thus treated as non-objects).
- """
-
- def __init__(self, cost_class: float = 1, cost_mask: float = 1, cost_dice: float = 1, num_points: int = 0):
- """Creates the matcher
-
- Params:
- cost_class: This is the relative weight of the classification error in the matching cost
- cost_mask: This is the relative weight of the focal loss of the binary mask in the matching cost
- cost_dice: This is the relative weight of the dice loss of the binary mask in the matching cost
- """
- super().__init__()
- self.cost_class = cost_class
- self.cost_mask = cost_mask
- self.cost_dice = cost_dice
-
-        assert cost_class != 0 or cost_mask != 0 or cost_dice != 0, "all costs can't be 0"
-
- self.num_points = num_points
-
- @torch.no_grad()
- def memory_efficient_forward(self, outputs, targets):
- """More memory-friendly matching"""
- bs, num_queries = outputs["pred_logits"].shape[:2]
-
- indices = []
-
- # Iterate through batch size
- for b in range(bs):
-
- out_prob = outputs["pred_logits"][b].softmax(-1) # [num_queries, num_classes]
- tgt_ids = targets[b]["labels"]
-
- # Compute the classification cost. Contrary to the loss, we don't use the NLL,
-            # but approximate it by 1 - proba[target class].
-            # The 1 is a constant that doesn't change the matching, so it can be omitted.
- cost_class = -out_prob[:, tgt_ids]
-
- out_mask = outputs["pred_masks"][b] # [num_queries, H_pred, W_pred]
- # gt masks are already padded when preparing target
- tgt_mask = targets[b]["masks"].to(out_mask)
-
- out_mask = out_mask[:, None]
- tgt_mask = tgt_mask[:, None]
- # all masks share the same set of points for efficient matching!
- point_coords = torch.rand(1, self.num_points, 2, device=out_mask.device)
- # get gt labels
- tgt_mask = point_sample(
- tgt_mask,
- point_coords.repeat(tgt_mask.shape[0], 1, 1),
- align_corners=False,
- ).squeeze(1)
-
- out_mask = point_sample(
- out_mask,
- point_coords.repeat(out_mask.shape[0], 1, 1),
- align_corners=False,
- ).squeeze(1)
-
- with autocast(enabled=False):
- out_mask = out_mask.float()
- tgt_mask = tgt_mask.float()
- # Compute the focal loss between masks
- cost_mask = batch_sigmoid_ce_loss_jit(out_mask, tgt_mask)
-
-                # Compute the dice loss between masks
- cost_dice = batch_dice_loss_jit(out_mask, tgt_mask)
-
- # Final cost matrix
- C = (
- self.cost_mask * cost_mask
- + self.cost_class * cost_class
- + self.cost_dice * cost_dice
- )
- C = C.reshape(num_queries, -1).cpu()
-
- indices.append(linear_sum_assignment(C))
-
- return [
- (torch.as_tensor(i, dtype=torch.int64), torch.as_tensor(j, dtype=torch.int64))
- for i, j in indices
- ]
-
- @torch.no_grad()
- def forward(self, outputs, targets):
- """Performs the matching
-
- Params:
- outputs: This is a dict that contains at least these entries:
- "pred_logits": Tensor of dim [batch_size, num_queries, num_classes] with the classification logits
- "pred_masks": Tensor of dim [batch_size, num_queries, H_pred, W_pred] with the predicted masks
-
- targets: This is a list of targets (len(targets) = batch_size), where each target is a dict containing:
- "labels": Tensor of dim [num_target_boxes] (where num_target_boxes is the number of ground-truth
- objects in the target) containing the class labels
- "masks": Tensor of dim [num_target_boxes, H_gt, W_gt] containing the target masks
-
- Returns:
- A list of size batch_size, containing tuples of (index_i, index_j) where:
- - index_i is the indices of the selected predictions (in order)
- - index_j is the indices of the corresponding selected targets (in order)
- For each batch element, it holds:
- len(index_i) = len(index_j) = min(num_queries, num_target_boxes)
- """
- return self.memory_efficient_forward(outputs, targets)
-
- def __repr__(self, _repr_indent=4):
- head = "Matcher " + self.__class__.__name__
- body = [
- "cost_class: {}".format(self.cost_class),
- "cost_mask: {}".format(self.cost_mask),
- "cost_dice: {}".format(self.cost_dice),
- ]
- lines = [head] + [" " * _repr_indent + line for line in body]
- return "\n".join(lines)
diff --git a/spaces/Eunice0120/text_generator/app.py b/spaces/Eunice0120/text_generator/app.py
deleted file mode 100644
index bb8b2f048281e672d3d6502d68658c32ddaaf85c..0000000000000000000000000000000000000000
--- a/spaces/Eunice0120/text_generator/app.py
+++ /dev/null
@@ -1,14 +0,0 @@
-import gradio as gr
-from gradio.mix import Parallel
-
-title="My First Text Generator"
-description="Input text."
-example=[
- ["Once upon a time"]
-]
-
-model1=gr.Interface.load("huggingface/EleutherAI/gpt-j-6B")
-model2=gr.Interface.load("huggingface/gpt2")
-model3=gr.Interface.load("huggingface/EleutherAI/gpt-neo-125M")
-
-Parallel(model1, model2, model3, title=title, description=description).launch()
diff --git a/spaces/FKBaffour/Streamlit_App_for_Sales_Forecasting/README.md b/spaces/FKBaffour/Streamlit_App_for_Sales_Forecasting/README.md
deleted file mode 100644
index 29930eaee22bb7fbfc99d89f8d6257d47ee9bbf0..0000000000000000000000000000000000000000
--- a/spaces/FKBaffour/Streamlit_App_for_Sales_Forecasting/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Streamlit App For Sales Forecasting
-emoji: 👁
-colorFrom: purple
-colorTo: purple
-sdk: streamlit
-sdk_version: 1.15.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Fernando22/freegpt-webui/g4f/Provider/Providers/ChatgptLogin.py b/spaces/Fernando22/freegpt-webui/g4f/Provider/Providers/ChatgptLogin.py
deleted file mode 100644
index 9551d15dd5121c4b42f80d0ba547a10f0868563b..0000000000000000000000000000000000000000
--- a/spaces/Fernando22/freegpt-webui/g4f/Provider/Providers/ChatgptLogin.py
+++ /dev/null
@@ -1,96 +0,0 @@
-import os
-from ...typing import sha256, Dict, get_type_hints
-import requests
-import re
-import base64
-
-url = 'https://chatgptlogin.ac'
-model = ['gpt-3.5-turbo']
-supports_stream = False
-needs_auth = False
-
-
-def _create_completion(model: str, messages: list, stream: bool, **kwargs):
- def get_nonce():
- res = requests.get('https://chatgptlogin.ac/use-chatgpt-free/', headers={
- "Referer": "https://chatgptlogin.ac/use-chatgpt-free/",
- "User-Agent": 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36'
- })
-
- src = re.search(r'class="mwai-chat mwai-chatgpt">.*Send
-
-
-
-
-
-
-