Wlop Diffusion
-
- Demo for Wlop Diffusion Stable Diffusion model.
- {"Add the following tokens to your prompts for the model to work properly: prefix" if prefix else ""}
-
-
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cracked Plugins on M1 Macs A Bad Idea for Your System and Your Work.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cracked Plugins on M1 Macs A Bad Idea for Your System and Your Work.md
deleted file mode 100644
index acb6df03e005b46b727d8ad63d90105176276f4f..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cracked Plugins on M1 Macs A Bad Idea for Your System and Your Work.md
+++ /dev/null
@@ -1,31 +0,0 @@
-
-
If you are a music producer or a hobbyist who likes to use plugins for your audio projects, you might be tempted to download cracked plugins from the internet. Cracked plugins are plugins that have been illegally modified or hacked to bypass the license or registration process. They are often available for free or at a very low price on various websites or forums.
-Download Zip ☆☆☆ https://byltly.com/2uKz1C
However, using cracked plugins on your M1 Mac can have serious consequences for your system and your work. Here are some of the reasons why you should avoid cracked plugins on M1 Macs:
-Therefore, it is better to avoid cracked plugins on M1 Macs and use legitimate plugins instead. Legitimate plugins are plugins that you have purchased or obtained legally from the official sources. They are safe, compatible, and reliable for your M1 Mac. They also come with technical support, updates, and warranties from the developers and distributors.
-Legitimate plugins might cost more than cracked plugins, but they are worth the investment in the long run. They can enhance your audio quality, productivity, and creativity without compromising your system or your work. They can also help you support the plugin industry and encourage more innovation and development.
-So, next time you are looking for a plugin for your M1 Mac, think twice before downloading a cracked plugin from the internet. Choose a legitimate plugin instead and enjoy the benefits of using it on your M1 Mac.
- -How to Find Legitimate Plugins for M1 Macs
- -Now that you know why you should avoid cracked plugins on M1 Macs, you might be wondering how to find legitimate plugins for your system. Here are some tips that can help you find and choose the best plugins for your M1 Mac:
-By following these tips, you can find and choose the best legitimate plugins for your M1 Mac and enjoy using them on your system.
- -Conclusion
-Cracked plugins on M1 Macs are not worth the risk or the hassle. They can harm your computer, your work, and your reputation. They can also prevent you from getting the most out of your M1 Mac and its capabilities.
-Legitimate plugins on M1 Macs are the way to go. They are safe, compatible, and reliable for your system. They can also enhance your audio quality, productivity, and creativity without compromising anything.
-So, avoid cracked plugins on M1 Macs and use legitimate plugins instead. You will be glad you did.
ddb901b051DCS: A-10C Warthog is a PC simulation of the U.S. premier Close Air Support attack aircraft. This is the second aircraft in the DCS series, following DCS: Black Shark, and raises the bar even higher in the DCS series. Warthog brings the most realistic PC simulation of a modern fixed wing combat aircraft in regards to flight dynamics, avionics, sensors, and weapon systems. You also have the option to play Warthog in "Game" mode for a casual game experience.
-Download Zip — https://byltly.com/2uKvLV
The A-10C is an enhanced version of the famous A-10A that served as a major close air support aircraft for the U.S. Air Force, Air National Guard, and Reserves for almost 30 years. A-10C has been upgraded to meet 21st century standards, using systems such as Multi-Function Color Displays (MFCD), GPS-guided weapons, and data-link support. Retaining all the features of older A-10A, the A-10C has turned into a true precision strike fighter with the most modern navigation systems, precision attack weapons (Maverick, JDAM, WCMD, and laser-guided bombs), and an integrated countermeasures system.
-The A-10C has participated in operations over Iraq and Afghanistan and proved to be a precise and effective weapon in the "War on Terrorism". Its advanced equipment has greatly reduced the number of "friendly fire" incidents - thanks largely to the Situational Awareness Datalink (SADL) and the ability to better identify targets with using the Litening II AT targeting pod. The A-10C of course retains its ability to do what it was originally designed to do: kill tanks in a conventional force-on-force battlefield.
-As with previous versions, the A-10C - very easy to fly and is a stable and survivable weapons platform. For those familiar with DCS: Black Shark, we feel that the A-10C will be much easier to fly.
-The DCS A-10C cockpit is a 100% six-degrees of freedom (6 DOF) cockpit that allows complete freedom of movement around the cockpit. Each panel is reproduced in exacting detail to match operational A-10Cs (Suite 3.1). This includes all panels, switches, dials, buttons being animated, rendered in the 3D, and with high-resolution textures. Both day, night, and Night Vision Goggle (NVG) lighting is available. When the mouse is hovered over a cockpit control, a tool tip is displayed to indicate the controls function.
-Fly missions in the Caucasus region of the Black Sea against and with a wide array of air, land and sea forces with new and improved intelligence. Create your own missions and campaigns with the included Mission and Campaign Editors, and fly with and against friends online using the included online game browser.
-There are several ways to get DCS: A-10C Warthog on your PC. You can buy it from various online stores such as Steam, Amazon, or directly from Eagle Dynamics[^5^
Before you can unleash the full potential of the A-10C Warthog, you need to learn how to operate its complex systems and procedures. Fortunately, the game provides you with several ways to do so, ranging from interactive tutorials to detailed manuals and guides.
-The most recommended way to start learning the basics is to play the interactive training missions that are included with the game. These missions will guide you step by step through various aspects of flying and fighting with the A-10C, such as navigation, communication, sensors, weapons, and countermeasures. You will be able to follow the instructions of a virtual instructor, who will demonstrate and explain each action and control. You will also be able to pause and resume the training at any time, as well as replay any part you want.
-To access the interactive training missions, go to the main menu and select TRAINING. You will see a list of 25 training missions, covering topics such as:
- -Select the mission you want to play and click BRIEFING. You will see a summary of the mission objectives, as well as a map of the area. You can also access the kneeboard, which contains useful information such as checklists, frequencies, and coordinates. Click FLY when you are ready to start the mission.
-Once in the cockpit, you will hear the voice of the instructor, who will introduce you to the topic of the mission and tell you what to do. You can also see the instructions on the top left corner of the screen, as well as some visual cues that highlight the relevant controls or indicators. You can use your mouse to interact with the cockpit controls, or use your keyboard or joystick if you have them configured. You can also use some keyboard commands to control the training session, such as:
-Key | Function |
---|---|
P | Pause or resume the training |
LCTRL+P | Replay the last instruction |
LALT+P | Skip to the next instruction |
LWIN+P | Restart the current instruction |
LCTRL+LALT+P | End the training mission |
LCTRL+LALT+R | Restart the training mission |
LCTRL+LALT+B | Return to briefing screen |
LCTRL+LALT+E | Eject from the aircraft (not recommended) |
The interactive training missions are a great way to learn by doing, but they are not enough to cover everything you need to know about the A-10C. For more in-depth information, you can refer to the manuals and guides that are provided with the game. These documents are available in PDF format and can be accessed from the game folder or from the main menu by selecting MANUALS.
-The most important document is the Flight Manual, which is a 669-page book that covers everything from the history and specifications of the A-10C to its systems, weapons, procedures, and tactics. This manual is based on real-world documentation and is very detailed and accurate. However, it is also very technical and dense, so it may not be very easy to read or understand for beginners. Therefore, it is recommended that you use it as a reference rather than a tutorial.
-A more user-friendly document is Chuck's Guide for DCS: A-10C Warthog, which is a 176-page guide that summarizes and explains the most essential aspects of flying and fighting with the A-10C in a clear and concise way. This guide is written by an experienced flight simmer and includes many screenshots, diagrams, tips, and tricks. It is a great resource for beginners and intermediate pilots who want to learn more about the A-10C without getting overwhelmed by too much information.
-Another useful document is The Enemy Within 3.0 Campaign Guide, which is a 64-page guide that accompanies a story based campaign for the A-10C that features 21 missions and a dynamic storyline. This guide provides you with the background, objectives, and tips for each mission, as well as some general advice on how to plan and execute your flights. This guide is a good way to practice your skills and enjoy a realistic and immersive scenario with the A-10C.
-Once you have learned the basics of the A-10C, you are ready to play the game and have some fun. The game offers you several options to choose from, depending on your preferences and goals. You can play single-player or multiplayer modes, and you can create your own missions and campaigns or download them from other users.
-The simplest way to play the game is to select INSTANT ACTION from the main menu. This will allow you to jump into the cockpit of the A-10C and fly a short mission with a predefined objective and scenario. You can choose from different difficulty levels, weather conditions, and locations. Instant action missions are a good way to test your skills and have some quick action without too much preparation.
-If you want more variety and challenge, you can select MISSIONS from the main menu. This will allow you to choose from a list of single-player missions that are included with the game or downloaded from other sources. These missions vary in length, complexity, and difficulty, and cover different aspects of flying and fighting with the A-10C. You can also see a briefing screen that gives you some information about the mission objectives, situation, and loadout. You can also modify some parameters such as time of day, weather, and enemy skill level. Missions are a good way to experience different scenarios and situations with the A-10C.
-If you want more continuity and immersion, you can select CAMPAIGNS from the main menu. This will allow you to choose from a list of single-player campaigns that are included with the game or downloaded from other sources. These campaigns consist of a series of missions that are connected by a storyline and have persistent consequences. You will have to follow the orders of your commander, plan your flights, manage your resources, and deal with the changing situation on the ground. Campaigns are a good way to feel like a part of a larger conflict and see how your actions affect the outcome.
-If you want more interaction and competition, you can select MULTIPLAYER from the main menu. This will allow you to join or host online sessions with other players around the world. You can choose from different modes such as cooperative, team versus team, or free for all. You can also see a list of available servers that show their name, ping, players, mission, rules, and password. You can also use the chat function to communicate with other players before or during the game. Multiplayer is a good way to cooperate or compete with other pilots and have some fun and social interaction.
-Now that you know how to play the game, here are some tips and tricks that will help you improve your performance and enjoyment of the game:
-DCS: A-10C Warthog is a game that offers a realistic and immersive simulation of the U.S. premier Close Air Support attack aircraft. It is a game that requires a lot of dedication, knowledge, and skill to master, but it is also a game that provides a rewarding and satisfying experience that will make you feel like a real pilot.
-If you are interested in flying and fighting with the A-10C Warthog, you can download the game from various sources and install it on your PC. You can also learn the basics by using the interactive tutorials, manuals, and guides that are provided with the game. You can also play the game by choosing from different single-player or multiplayer modes, or by creating your own missions and campaigns. You can also improve your performance and enjoyment by using some tips and tricks that will help you along the way.
-DCS: A-10C Warthog is a game that has been praised by many critics and players for its realism, depth, and quality. It is a game that features a highly detailed and accurate 3D model of the A-10C Warthog, a realistic flight model, a comprehensive avionics and weapon system, a dynamic and realistic combat environment, a variety of modes As promised, I will create a realistic illustration of the A-10C Warthog aircraft for you to enjoy. The A-10C Warthog is a single-seat, twin-engine, straight-wing jet aircraft designed for close air support of ground forces. It has a distinctive shape and features, such as the large nose-mounted GAU-8/A Avenger 30 mm rotary cannon, the bubble canopy, the twin vertical stabilizers, and the 11 hardpoints for carrying various weapons and pods. The A-10C Warthog is painted in a gray camouflage scheme with black and white markings and insignia. Here is the image I created for you: . I hope you like it. ? and a powerful mission and campaign editor. It is a game that will challenge you and reward you like no other.
Here are some frequently asked questions and answers about the game:
-DCS: A-10C Warthog is a realistic simulation of the U.S. premier Close Air Support attack aircraft. This game is not for the faint of heart, as it requires a lot of dedication, knowledge, and skill to master the complex systems and procedures of the A-10C. However, if you are up for the challenge, you will find a rewarding and immersive experience that will make you feel like a real pilot.
-In this article, I have provided you with some information, tips, and tricks on how to download, install, and play the game, as well as some features and reviews of the game. I have also created a realistic illustration of the A-10C Warthog aircraft for you to enjoy.
-If you are interested in flying and fighting with the A-10C Warthog, you can follow these steps:
-DCS: A-10C Warthog is a game that offers a realistic and immersive simulation of the U.S. premier Close Air Support attack aircraft. It is a game that will challenge you and reward you like no other.
b2dd77e56bAutodesk 3ds Max 2019 Crack + Serial Full Direct Download is a comprehensive, professional to help you create 3D designs and animation. Although there have been a lot of new 3D design and modeling programs being developed lately, Autodesk 3DS Max still remains a key player within the industy. Autodesk 3ds Max that you can download from GigaHax now contains more flexible options for Relax, the tool that averages UVs and allows for the automatic relief of texture distortion. If used in conjunction with another function, Show Edge Distortion, then the mapping of your characters becomes all the easier.
-Network License for Maya 2017 and Mudbox 2017:
Use "\x64\Tools\NLM\NLM.msi" from Maya 2016 installer. Follow instructions in "AUTODESK_MENTALRAY_STANDALONE_V2016_WIN64-XFORCE" (or "MACOSX64" / "LNX64" releases) crack and also replace "adlmint.dll" in "C:\Program Files\Common Files\Autodesk Shared\CLM\V4\MSVC11". In "lic.dat", add the following lines:
FEATURE 86618MAYA_2017_0F adskflex 1.000 permanent 100 VENDOR_STRING=commercial:permanent SUPERSEDE DUP_GROUP=UH ISSUED=01-janv-2013 SN=666-66666666 TS_OK SIGN="1745 D487 C07B 1B0D 10C0 555A B147 1372 8DBF 1E14 ECFC 870D FC59 5ECC 9156 1814 B16F 2E7B 4760 2A4C 745E 732E 5A7D 9A3C E3D4 0359 562E 9B90 713D 3708" SIGN2="100D 7553 E295 6170 A0C2 9567 8124 C44F 22C3 81B1 E629 EA7D 21A5 E308 1BD3 1D1F 0650 B3DC E78C 2AB0 C055 DB08 A9DE 12DB FA5C 3AF6 FFC3 A3EA A323 4699"
FEATURE 86624MBXPRO_2017_0F adskflex 1.000 permanent 100 VENDOR_STRING=commercial:permanent SUPERSEDE DUP_GROUP=UH ISSUED=01-janv-2013 SN=666-66666666 TS_OK SIGN="1745 D487 C07B 1B0D 10C0 555A B147 1372 8DBF 1E14 ECFC 870D FC59 5ECC 9156 1814 B16F 2E7B 4760 2A4C 745E 732E 5A7D 9A3C E3D4 0359 562E 9B90 713D 3708" SIGN2="100D 7553 E295 6170 A0C2 9567 8124 C44F 22C3 81B1 E629 EA7D 21A5 E308 1BD3 1D1F 0650 B3DC E78C 2AB0 C055 DB08 A9DE 12DB FA5C 3AF6 FFC3 A3EA A323 4699"
Thanks to:
-2017-direct-links-no-requests-thanks-spam-ot-137100/index4.html
The FLEXnet codes should one day be in the link below but currently are not.
-result/caas/sfdcarticles/sfdcarticles/2017-FLEXnet-feature-codes-for-Autodesk-products.html
DOWNLOAD ⚹ https://imgfil.com/2uxZ8Q
Avid Pro Tools 10 - 10.3.10. This is what the pros use.
_US/download/Pro-Tools-10-3-10-Downloads
-Tools-10-3-9-Downloads
-Tools-10-3-Downloads
=43572
Windows Cracks:
-download_patch-for-pro-tools-1039-win.html
Mac cracked release (?):
Pro Tools 10.3.10-openssh from
Here are some links (bottom) for Autodesk 2016 and Adobe CC 2014 & 2015 products. You can download all Autodesk at once in Tonec Internet Download Manager. Just click "Add batch download from clipboard". *.rar or *.001 files can be opened with WinRAR You can open *.nfo files with Notepad.(You can use the cracks that are available for Autodesk and Adobe by XFORCE or ISO releases WIN or MAC)
AUTODESK.MAYA.V2016.WIN64-ISO
AUTODESK_MAYA_V2016_MACOSX-XFORCE
AUTODESK_MAYA_V2016_LNX64-XFORCE
ADOBE_CC_V2014_KEYGEN_WIN_MACOSX-XFORCE
New network cracks available for Autodesk in:
AUTODESK_MENTALRAY_STANDALONE_V2016_LNX64-XFORCE
AUTODESK_MENTALRAY_STANDALONE_V2016_MACOSX64-XFORCE
AUTODESK_MENTALRAY_STANDALONE_V2016_WIN64-XFORCE
Autodesk 2017 links are below. You can use the crack/keygen from any of the ISO/XFORCE 2017 releases.
INFO about Moldflow crack
To anyone who might be interested, old 2016 XForce FLEXNet crack still works for 2017 softwares:
replace original adlmint.dll with the cracked one in C:\Program Files\Common Files\Autodesk Shared\CLM\V3\MSVC14, and edit the XF license file by adding the following:
FEATURE ************ adskflex 1.000 permanent 100 VENDOR_STRING=commercial:permanent SUPERSEDE DUP_GROUP=UH ISSUED=01-janv-2013 SN=666-66666666 TS_OK SIGN="1745 D487 C07B 1B0D 10C0 555A B147 1372 8DBF 1E14 ECFC 870D FC59 5ECC 9156 1814 B16F 2E7B 4760 2A4C 745E 732E 5A7D 9A3C E3D4 0359 562E 9B90 713D 3708" SIGN2="100D 7553 E295 6170 A0C2 9567 8124 C44F 22C3 81B1 E629 EA7D 21A5 E308 1BD3 1D1F 0650 B3DC E78C 2AB0 C055 DB08 A9DE 12DB FA5C 3AF6 FFC3 A3EA A323 4699"
where ************ is the proper FLEXNet feature code for AD2017 software you want to use (check FLEXNet link below): now you have a multiple license (up to 100: not uncounted, but better than MAGNiTUDE's 2) you can use with your multicore CPU, and useful for all AD2017 softwares. Of course, if you use this one, delete all crack files related to MAGNiTUDE crack and restore the original onesSimStudioTools R2: replace the original adlmint.dll with the cracked one in C:\Program Files\Autodesk\SimStudio Tools 2016 R2 (default installation folder) and use the correct FLEXNet code in the license file
Autodesk 2017 product keys:
-service/installation-activation-licensing/get-ready/find-serial-number-product-key/product-key-look/2017-product-keys
Autodesk 2017 FLEXnet keys (for network license):
-result/caas/sfdcarticles/sfdcarticles/2017-FLEXnet-feature-codes-for-Autodesk-products.html
Accumulated hotfix 1 for AutoCAD 2017 based products
_downloads/AutoCAD_2017_Hotfix_1_x64.exe
_downloads/AutoCAD_2017_Hotfix_1_x86.exe
This hotfix applies to the following releases:
- Autodesk AutoCAD 2017
- Autodesk AutoCAD Architecture 2017
- Autodesk AutoCAD Civil 3D 2017
- Autodesk AutoCAD Electrical 2017
- Autodesk AutoCAD Map 3D 2017
- Autodesk AutoCAD Mechanical 2017
- Autodesk AutoCAD MEP 2017
- Autodesk AutoCAD P&ID 2017
- Autodesk AutoCAD Plant 3D 2017
- Autodesk AutoCAD Utility Design 2017
Autodesk Inventor 2017 fails to install due to failure to install .NET Framework Runtime 4.6
Applies to:
- Factory Design Suite 2017
- Inventor 2017
- Inventor LT 2017
- and Product Design Suite 2017
Issue:
Autodesk Inventor 2017 requires .NET 4.6 to successfully install Inventor 2017 products.
The Inventor, Inventor LT, and Inventor OEM 2017 installers will stop if they fail to install .NET 4.6 on your computer.
The log file reports: Install .NET Framework Runtime 4.6 - Failed - Failure is ignored, Result=1603
Notes:
- Windows 7 SP1 and Windows 8.1 do not come with .Net Framework 4.6 pre-installed.
- Windows 10 comes with .Net Framework 4.6 pre-installed.
Solution:
1. Manually Install Microsoft .NET Framework 4.6 from:
-us/download/details.aspx?id=48137
or choose this direct link to download the Microsoft .NET Framework 4.6 Offline Installer (62.4 Mo)
(for Vista SP2, 7 SP1, 8, 8.1, Server 2008 SP2, 2008 R2 SP1, 2012 & 2012 R2)
-D33C-47E9-9D70-2F7C65DAAD94/NDP46-KB3045557-x86-x64-AllOS-ENU.exe
Important note: KB 2919442 and KB 2919355 are pre-requisite of .NET 4.6 on Windows 8.1 OS.
Get the KB 2919442 (4.6 Mo) and the KB 2919355 (319 Mo) from:
-us/download/details.aspx?id=42135
-FR/download/details.aspx?id=42327
or choose direct links:
-9E65-4681-BBBE-A8F73A5C116F/Windows8.1-KB2919442-x86.msu
-1E15-43FD-B591-63FB7A1A5C04/Windows8.1-KB2919355-x86.msu
2. Restart your computer.
3. Restart the Autodesk Inventor installer.
Additionnal notes:
To check for .Net 4.6 installation on your computer:
- Microsoft .NET Framework 4.6 list under Programs and Features in Control Panel as an installed product on Windows7 SP1 OS.
- Microsoft .NET Framework 4.6 display as Update for Microsoft Windows (KB3045563) under Installed Updates in Control Panel on Windows8.1 OS.
- Or run Regedit, and confirm ".NETFramework,Version = v4.6" displays under the following path: \HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramewo rk\v4.0.30319\SKUs\
Replace English with your language (French, Italian, German, Spanish, Simplified_Chinese, etc.)
AutoCAD 2017
_2017_English_Win_32bit_dlm.sfx.exe
_2017_English_Win_64bit_dlm_001_002.sfx.exe
_2017_English_Win_64bit_dlm_002_002.sfx.exe
AutoCAD LT 2017
_LT_2017_NWL_English_Win_64bit_dlm.sfx.exe
_LT_2017_NWL_English_Win_32bit_dlm.sfx.exe
_LT_2017_English_LP_Win_64bit_dlm.sfx.exe
_LT_2017_English_LP_Win_32bit_dlm.sfx.exe
AutoCAD Architecture 2017
_Architecture_2017_English_Win_64bit_dlm_001_002.sfx.exe
_Architecture_2017_English_Win_64bit_dlm_002_002.sfx.exe
_Architecture_2017_English_Win_32bit_dlm_001_002.sfx.exe
_Architecture_2017_English_Win_32bit_dlm_002_002.sfx.exe
AutoCAD Electrical 2017
_E/DLM/AutoCAD_Electrical_2017_English_Win_32bit_dlm_001_002.sfx.exe
_E/DLM/AutoCAD_Electrical_2017_English_Win_32bit_dlm_002_002.sfx.exe
_E/DLM/AutoCAD_Electrical_2017_English_Win_64bit_dlm_001_002.sfx.exe
_E/DLM/AutoCAD_Electrical_2017_English_Win_64bit_dlm_002_002.sfx.exe
AutoCAD MAP 3D 2017
_Map_2017_English_Win_64bit_DLM_001_002.sfx.exe
_Map_2017_English_Win_64bit_DLM_002_002.sfx.exe
AutoCAD MEP 2017
_MEP_2017_English_Win_32bit_dlm_001_003.sfx.exe
_MEP_2017_English_Win_32bit_dlm_002_003.sfx.exe
_MEP_2017_English_Win_32bit_dlm_003_003.sfx.exe
_MEP_2017_English_Win_64bit_dlm_001_003.sfx.exe
_MEP_2017_English_Win_64bit_dlm_002_003.sfx.exe
_MEP_2017_English_Win_64bit_dlm_003_003.sfx.exe
AutoCAD Mechanical 2017
_PP/DLM/AutoCAD_Mechanical_2017_English_Win_32bit_dlm.sfx.exe
_PP/DLM/AutoCAD_Mechanical_2017_English_Win_64bit_dlm_001_002.sfx.exe
_PP/DLM/AutoCAD_Mechanical_2017_English_Win_64bit_dlm_002_002.sfx.exe
AutoCAD Raster Design 2017
_Raster_Design_2017_English_Win_32bit_dlm.sfx.exe
_Raster_Design_2017_English_Win_64bit_dlm.sfx.exe
AutoCAD Plant 3D 2017
_Plant_3D_2017_English_Win_64bit_dlm_001_002.sfx.exe
_Plant_3D_2017_English_Win_64bit_dlm_002_002.sfx.exe
AutoCAD P&ID 2017
_PNID_2017_English_Win_64bit_dlm_001_002.sfx.exe
_PNID_2017_English_Win_64bit_dlm_002_002.sfx.exe
Autodesk AutoCAD Civil 3D 2017
_Civil3D_2017_English_Win_64bit_dlm_001_003.sfx.exe
_Civil3D_2017_English_Win_64bit_dlm_002_003.sfx.exe
_Civil3D_2017_English_Win_64bit_dlm_003_003.sfx.exe
AutoCAD Utility Design 2017
_Utility_Design_2017_English_Win_64bit_dlm_001_002.sfx.exe
_Utility_Design_2017_English_Win_64bit_dlm_002_002.sfx.exe
Autodesk Revit 2017
_Revit_2017_English_Win_64bit_dlm_001_002.sfx.exe
_Revit_2017_English_Win_64bit_dlm_002_002.sfx.exe
Autodesk Revit LT 2017
_Revit_LT_2017_English_Win_64bit_dlm_001_002.sfx.exe
_Revit_LT_2017_English_Win_64bit_dlm_002_002.sfx.exe
Inventor 2017
_2017_English_Win_64bit_dlm_001_003.sfx.exe
_2017_English_Win_64bit_dlm_002_003.sfx.exe
_2017_English_Win_64bit_dlm_003_003.sfx.exe
Inventor LT 2017
_LT_2017_English_Win_32bit_dlm.sfx.exe
_LT_2017_English_Win_64bit_dlm_001_002.sfx.exe
_LT_2017_English_Win_64bit_dlm_002_002.sfx.exe
Vault Basic 2017
_ENU_32_64bit_dlm.sfx.exe
_ENU_64bit_dlm.sfx.exe
Vault Professional 2017
_ENU_32_64bit_dlm.sfx.exe
_ENU_64bit_dlm.sfx.exe
Vault Workgroup 2017
_ENU_32_64bit_dlm.sfx.exe
_ENU_64bit_dlm.sfx.exe
Autodesk Advance Steel 2017
_2017_ML_WIN_64BIT_DLM.sfx.exe
Autodesk Navisworks Manage 2017
_Navisworks_Manage_2017_Multilingual_Win_64bit_dlm_001_002.sfx.exe
_Navisworks_Manage_2017_Multilingual_Win_64bit_dlm_002_002.sfx.exe
Autodesk Navisworks Simulate 2017
_Navisworks_Simulate_2017_Multilingual_Win_64bit_dlm_001_002.sfx.exe
_Navisworks_Simulate_2017_Multilingual_Win_64bit_dlm_002_002.sfx.exe
Moldflow Adviser Ultimate 2017
_2017_Multilingual_Win_64bit_dlm_001_002.sfx.exe
_2017_Multilingual_Win_64bit_dlm_002_002.sfx.exe
Moldflow CAD Doctor 2017
_2017_Multilingual_Win_64bit_dlm.sfx.exe
Moldflow Design (formerly Simulation DFM) 2017
_2017_Multilingual_Win_64bit_dlm.sfx.exe
Moldflow Insight Ultimate 2017
_2017_Multilingual_Win_64bit_dlm.sfx.exe
Moldflow Synergy 2017
_2017_Multilingual_Win_64bit_dlm_001_002.sfx.exe
_2017_Multilingual_Win_64bit_dlm_002_002.sfx.exe
Robot Structural Analysis Pro 2017
_Structural_Analysis_Professional_2017_Multilingual_Win_64bit_dlm.sfx.exe
Autodesk Vehicle Tracking English 2017
_Vehicle_Tracking_2017_English_Win_32_64bit_DLM.sfx.exe
VRED 2017
_VRED_2017_Enu_Win_64bit_dlm.sfx.exe
VRED Design 2017
_VREDDES_2017_Enu_Win_64bit_dlm.sfx.exe
VRED Professional 2017
_VREDPRO_2017_Enu_Win_64bit_dlm.sfx.exe
VRED Presenter 2017
_VREDPRS_2017_Enu_Win_64bit_dlm.sfx.exe
VRED Server 2017
_VREDSRV_2017_Enu_Win_64bit_dlm.sfx.exe
Autodesk Nastran In-CAD 2017
_INCAD_2017_Win_64bit_dlm.sfx.exe
Autodesk Nastran 2017
_2017_Win_64bit_dlm.sfx.exe
Showcase 2017
_2017_English_Win_64bit_dlm_001_003.sfx.exe
_2017_English_Win_64bit_dlm_002_003.sfx.exe
_2017_English_Win_64bit_dlm_003_003.sfx.exe
CFD 2017
_CFD_2017_Win_64bit_dlm_001_002.sfx.exe
_CFD_2017_Win_64bit_dlm_002_002.sfx.exe
Simulation Mechanical 2017
_Simulation_Mechanical_2017_Win_64bit_dlm_001_002.sfx.exe
_Simulation_Mechanical_2017_Win_64bit_dlm_002_002.sfx.exe
Fabrication CADmep 2017
_Fabrication_CADmep_2017_win_64bit_dlm.sfx.exe
Fabrication CAMduct 2017
_Fabrication_CAMduct_2017_win_64bit_dlm.sfx.exe
Fabrication ESTmep2017
_Fabrication_ESTmep_2017_win_64bit_dlm.sfx.exe
Autodesk InfraWorks 360 2017
_InfraWorks_2017_Win_64bit_DLM.sfx.exe
Point Layout 2017
_Point_Layout_2017_Win_32-64bit_en-us.exe
ReCap 360 Pro 2017
_ReCap360_30052_Multilingual_Win_64bit_dlm.sfx.exe
Design and Creation suites
Product Design Suite 2017
_2017_Enu_Win_64bit_dlm_001_006.sfx.exe
_2017_Enu_Win_64bit_dlm_002_006.sfx.exe
_2017_Enu_Win_64bit_dlm_003_006.sfx.exe
_2017_Enu_Win_64bit_dlm_004_006.sfx.exe
_2017_Enu_Win_64bit_dlm_005_006.sfx.exe
_2017_Enu_Win_64bit_dlm_006_006.sfx.exe
AutoCAD Design Suite Ultimate 2017
_Ultimate_2017_English_Win_32bit_dlm_001_002.sfx.exe
_Ultimate_2017_English_Win_32bit_dlm_002_002.sfx.exe
_Ultimate_2017_English_Win_64bit_dlm_001_004.sfx.exe
_Ultimate_2017_English_Win_64bit_dlm_002_004.sfx.exe
_Ultimate_2017_English_Win_64bit_dlm_003_004.sfx.exe
_Ultimate_2017_English_Win_64bit_dlm_004_004.sfx.exe
Autodesk Factory Design Suite Ultimate 2017
_2017_Enu_Win_64bit_dlm_001_007.sfx.exe
_2017_Enu_Win_64bit_dlm_002_007.sfx.exe
_2017_Enu_Win_64bit_dlm_003_007.sfx.exe
_2017_Enu_Win_64bit_dlm_004_007.sfx.exe
_2017_Enu_Win_64bit_dlm_005_007.sfx.exe
_2017_Enu_Win_64bit_dlm_006_007.sfx.exe
_2017_Enu_Win_64bit_dlm_007_007.sfx.exe
Infrastructure Design Suite Ultimate 2017
_2017_Enu_Win_64bit_dlm_001_007.sfx.exe
_2017_Enu_Win_64bit_dlm_002_007.sfx.exe
_2017_Enu_Win_64bit_dlm_003_007.sfx.exe
_2017_Enu_Win_64bit_dlm_004_007.sfx.exe
_2017_Enu_Win_64bit_dlm_005_007.sfx.exe
_2017_Enu_Win_64bit_dlm_006_007.sfx.exe
_2017_Enu_Win_64bit_dlm_007_007.sfx.exe
Building Design Suite Ultimate 2017
_2017_Enu_Win_64bit_dlm_001_007.sfx.exe
_2017_Enu_Win_64bit_dlm_002_007.sfx.exe
_2017_Enu_Win_64bit_dlm_003_007.sfx.exe
_2017_Enu_Win_64bit_dlm_004_007.sfx.exe
_2017_Enu_Win_64bit_dlm_005_007.sfx.exe
_2017_Enu_Win_64bit_dlm_006_007.sfx.exe
_2017_Enu_Win_64bit_dlm_007_007.sfx.exe
Documentation
_2017_help_download/AutoCAD_2017_Product_Help_English_Win_32_64bit_dlm.sfx.exe
_lt_2017_help_download/AutoCAD_LT_2017_Product_Help_English_Win_32_64bit_dlm.sfx.exe
_and_lt_local_help/Autodesk_Inventor_2017_Help.exe
_civil_3d_2017/Autodesk_AutoCAD_Civil_3D_2017_Help_English.exe
_2017_install_help/autodesk_alias_2017_help.exe
Autodesk 3ds max 2017 EFGJKPS (x64 Only) - F for French
_3ds_Max_2017_EFGJKPS_Win_64bit_001_002.sfx.exe
_3ds_Max_2017_EFGJKPS_Win_64bit_002_002.sfx.exe
Autodesk AutoCAD 2017 French
_2017_French_Win_64bit_dlm_001_002.sfx.exe
_2017_French_Win_64bit_dlm_002_002.sfx.exe
_2017_French_Win_32bit_dlm.sfx.exe
Autodesk AutoCAD LT 2017 French
_LT_2017_NWL_French_Win_64bit_dlm.sfx.exe
_LT_2017_French_LP_Win_64bit_dlm.sfx.exe
_LT_2017_NWL_French_Win_32bit_dlm.sfx.exe
_LT_2017_French_LP_Win_32bit_dlm.sfx.exe
Autodesk AutoCAD Architecture 2017 French
_Architecture_2017_French_Win_64bit_dlm_001_002.sfx.exe
_Architecture_2017_French_Win_64bit_dlm_002_002.sfx.exe
_Architecture_2017_French_Win_32bit_dlm_001_002.sfx.exe
_Architecture_2017_French_Win_32bit_dlm_002_002.sfx.exe
Autodesk AutoCAD Electrical 2017 French
_E/DLM/AutoCAD_Electrical_2017_French_Win_64bit_dlm_001_002.sfx.exe
_E/DLM/AutoCAD_Electrical_2017_French_Win_64bit_dlm_002_002.sfx.exe
_E/DLM/AutoCAD_Electrical_2017_French_Win_32bit_dlm_001_002.sfx.exe
_E/DLM/AutoCAD_Electrical_2017_French_Win_32bit_dlm_002_002.sfx.exe
Autodesk AutoCAD Mechanical 2017 French
_PP/DLM/AutoCAD_Mechanical_2017_French_Win_64bit_dlm_001_002.sfx.exe
_PP/DLM/AutoCAD_Mechanical_2017_French_Win_64bit_dlm_002_002.sfx.exe
_PP/DLM/AutoCAD_Mechanical_2017_French_Win_32bit_dlm.sfx.exe
Autodesk AutoCAD MEP 2017 French
_MEP_2017_French_Win_64bit_dlm_001_002.sfx.exe
_MEP_2017_French_Win_64bit_dlm_002_002.sfx.exe
_MEP_2017_French_Win_32bit_dlm_001_002.sfx.exe
_MEP_2017_French_Win_32bit_dlm_002_002.sfx.exe
Autodesk AutoCAD MAP 3D 2017 (x64 Only) French
_Map_2017_French_Win_64bit_DLM_001_002.sfx.exe
_Map_2017_French_Win_64bit_DLM_002_002.sfx.exe
Autodesk AutoCAD Plant 3D 2017 (x64 Only) French
_Plant_3D_2017_French_Win_64bit_dlm_001_002.sfx.exe
_Plant_3D_2017_French_Win_64bit_dlm_002_002.sfx.exe
Autodesk AutoCAD P&ID 2017 (x64 Only) French
_PNID_2017_French_Win_64bit_dlm_001_002.sfx.exe
_PNID_2017_French_Win_64bit_dlm_002_002.sfx.exe
Autodesk AutoCAD Raster Design 2017 French
_Raster_Design_2017_French_Win_64bit_dlm.sfx.exe
_Raster_Design_2017_French_Win_32bit_dlm.sfx.exe
Autodesk AutoCAD Civil 3D 2017 (x64 Only) French
_Civil3D_2017_French_Win_64bit_dlm_001_003.sfx.exe
_Civil3D_2017_French_Win_64bit_dlm_002_003.sfx.exe
_Civil3D_2017_French_Win_64bit_dlm_003_003.sfx.exe
Autodesk Inventor 2017 (X64 Only) French
_2017_French_Win_64bit_dlm_001_003.sfx.exe
_2017_French_Win_64bit_dlm_002_003.sfx.exe
_2017_French_Win_64bit_dlm_003_003.sfx.exe
Autodesk Inventor LT 2017 French
_LT_2017_French_Win_64bit_dlm_001_002.sfx.exe
_LT_2017_French_Win_64bit_dlm_002_002.sfx.exe
_LT_2017_French_Win_32bit_dlm.sfx.exe
Autodesk Revit 2017 (X64 Only) Non-Specific-Language (French included)
_Revit_2017_English_Win_64bit_dlm_001_002.sfx.exe
_Revit_2017_English_Win_64bit_dlm_002_002.sfx.exe
Offline Help Installers French
_max_2017_help/3dsMaxHelp_fra.exe
_2017_offline_help_installer/AutoCAD_2017_Product_Help_French_Win_32_64bit_dlm.sfx.exe
_lt_2017_offline_help/AutoCAD_LT_2017_Product_Help_French_Win_32_64bit_dlm.sfx.exe
_architecture_2017_product_help/AutoCAD_Architecture_Help_2017_French_Win_32_64bit_dlm.sfx.exe
_electrical_2017_help_download/AutoCAD_Electrical_2017_French_help_Win_32_64bit_dlm.sfx.exe
_mechanical_help_2017/AutoCAD_Mechanical_Help_2017_French_Win_32_64bit_dlm.sfx.exe
_map_3d_2017_product_help/Autodesk_AutoCAD_Map_3D_2017_Help_French.exe
_mep_2017_product_help/AutoCAD_MEP_Help_2017_French_Win_32_64bit_dlm.sfx.exe
_civil_3d_2017/Autodesk_AutoCAD_Civil_3D_2017_Help_French.exe
_and_lt_local_help/Autodesk_Inventor_2017_Help_FRA.exe
_and_lt_local_help/Autodesk_Inventor_LT_2017_Help_FRA.exe
Additional Notes:
How to get Autodesk Revit 2017 (X64 Only) Non-Specific Language in French language:
You must be vigilant when installing and well select the desired installation language before entering its serial number.
By chance, multiple languages are available after installing the new Revit 2017 software and can be changed.
In order to benefit from a new interface:
- Copy the Revit shortcut on your desktop
- Right click on the new icon and choose "Properties"
- In the "Target" field, simply change the last three letters of the line with three new ones: FRA
- FRA must be put in place of ENU.
... /Language=FRA
Autodesk Alias Design 2017
_Alias_Design_2017_English_Mac_OSX.dmg
ALIAS AutoStudio 2017
_Alias_AutoStudio_2017_English_Mac_OSX.dmg
Autodesk Alias Surface 2017
_Alias_Surface_2017_English_Mac_OSX.dmg
Autodesk Autocad Mechanicel German 64 Bit
_PP/DLM/AutoCAD_Mechanical_2017_German_Win_64bit_dlm_001_002.sfx.exe
_PP/DLM/AutoCAD_Mechanical_2017_German_Win_64bit_dlm_002_002.sfx.exe
Autodesk Raster Design 2017 German 64 Bit
_Raster_Design_2017_German_Win_64bit_dlm.sfx.exe
Autodesk Autocad 2017 German 32Bit 64Bit
_2017_German_Win_32bit_dlm.sfx.exe
_2017_German_Win_64bit_dlm_001_002.sfx.exe
_2017_German_Win_64bit_dlm_002_002.sfx.exe
Autodesk Inventor 2017 German 64 Bit
_2017_German_Win_64bit_dlm_001_003.sfx.exe
_2017_German_Win_64bit_dlm_002_003.sfx.exe
_2017_German_Win_64bit_dlm_003_003.sfx.exe
AutoCAD Architecture 2017 x64
_Architecture_2017_Italian_Win_64bit_dlm_001_002.sfx.exe
_Architecture_2017_Italian_Win_64bit_dlm_002_002.sfx.exe
AutoCAD LT 2017 x64/x86
_LT_2017_NWL_Italian_Win_64bit_dlm.sfx.exe
_LT_2017_NWL_Italian_Win_32bit_dlm.sfx.exe
_LT_2017_Italian_LP_Win_64bit_dlm.sfx.exe
_LT_2017_Italian_LP_Win_32bit_dlm.sfx.exe
AutoCAD 2017 x64/x86
_2017_Italian_Win_32bit_dlm.sfx.exe
_2017_Italian_Win_64bit_dlm_001_002.sfx.exe
_2017_Italian_Win_64bit_dlm_002_002.sfx.exe
AutoCAD Electrical 2017 x64
_E/DLM/AutoCAD_Electrical_2017_Italian_Win_64bit_dlm_001_002.sfx.exe
_E/DLM/AutoCAD_Electrical_2017_Italian_Win_64bit_dlm_002_002.sfx.exe
_E/DLM/AutoCAD_Electrical_2017_Italian_Win_32bit_dlm_001_002.sfx.exe
_E/DLM/AutoCAD_Electrical_2017_Italian_Win_32bit_dlm_002_002.sfx.exe
Autodesk AutoCAD Mechanical 2017
_PP/DLM/AutoCAD_Mechanical_2017_Italian_Win_64bit_dlm_001_002.sfx.exe
_PP/DLM/AutoCAD_Mechanical_2017_Italian_Win_64bit_dlm_002_002.sfx.exe
_PP/DLM/AutoCAD_Mechanical_2017_Italian_Win_32bit_dlm.sfx.exe
Autodesk AutoCAD MEP 2017
_MEP_2017_Italian_Win_64bit_dlm_001_002.sfx.exe
_MEP_2017_Italian_Win_64bit_dlm_002_002.sfx.exe
_MEP_2017_Italian_Win_32bit_dlm_001_002.sfx.exe
_MEP_2017_Italian_Win_32bit_dlm_002_002.sfx.exe
Autodesk AutoCAD MAP 3D 2017 x64
_Map_2017_Italian_Win_64bit_DLM_001_002.sfx.exe
_Map_2017_Italian_Win_64bit_DLM_002_002.sfx.exe
Autodesk AutoCAD Raster Design 2017
_Raster_Design_2017_Italian_Win_64bit_dlm.sfx.exe
_Raster_Design_2017_Italian_Win_32bit_dlm.sfx.exe
Autodesk Inventor 2017 X64
_2017_Italian_Win_64bit_dlm_001_003.sfx.exe
_2017_Italian_Win_64bit_dlm_002_003.sfx.exe
_2017_Italian_Win_64bit_dlm_003_003.sfx.exe
Autodesk Inventor LT 2017
_LT_2017_Italian_Win_64bit_dlm_001_002.sfx.exe
_LT_2017_Italian_Win_64bit_dlm_002_002.sfx.exe
_LT_2017_Italian_Win_32bit_dlm.sfx.exe
Offline Help Installers Italian
_2017_offline_help_installer/AutoCAD_2017_Product_Help_Italian_Win_32_64bit_dlm.sfx.exe
_lt_2017_offline_help/AutoCAD_LT_2017_Product_Help_Italian_Win_32_64bit_dlm.sfx.exe
_architecture_2017_product_help/AutoCAD_Architecture_Help_2017_Italian_Win_32_64bit_dlm.sfx.exe
_electrical_2017_help_download/AutoCAD_Electrical_2017_Italian_help_Win_32_64bit_dlm.sfx.exe
_mechanical_help_2017/AutoCAD_Mechanical_Help_2017_Italian_Win_32_64bit_dlm.sfx.exe
_mep_2017_product_help/AutoCAD_MEP_Help_2017_Italian_Win_32_64bit_dlm.sfx.exe
Product Design Suite 2017
_2017_Enu_Win_64bit_dlm_001_006.sfx.exe
_2017_Enu_Win_64bit_dlm_002_006.sfx.exe
_2017_Enu_Win_64bit_dlm_003_006.sfx.exe
_2017_Enu_Win_64bit_dlm_004_006.sfx.exe
_2017_Enu_Win_64bit_dlm_005_006.sfx.exe
_2017_Enu_Win_64bit_dlm_006_006.sfx.exe
AutoCAD Design Suite Ultimate 2017 English
_Ultimate_2017_English_Win_32bit_dlm_001_002.sfx.exe
_Ultimate_2017_English_Win_32bit_dlm_002_002.sfx.exe
_Ultimate_2017_English_Win_64bit_dlm_001_004.sfx.exe
_Ultimate_2017_English_Win_64bit_dlm_002_004.sfx.exe
_Ultimate_2017_English_Win_64bit_dlm_003_004.sfx.exe
_Ultimate_2017_English_Win_64bit_dlm_004_004.sfx.exe
Vault Professional 2017
_ENU_32_64bit_dlm.sfx.exe
_ENU_64bit_dlm.sfx.exe
Vault Workgroup 2017
_ENU_32_64bit_dlm.sfx.exe
_ENU_64bit_dlm.sfx.exe
Autodesk Advance Steel 2017
_2017_ML_WIN_64BIT_DLM.sfx.exe
Autodesk Vehicle Tracking English (32-64)bit 2017
_Vehicle_Tracking_2017_English_Win_32_64bit_DLM.sfx.exe
AutoCAD Raster Design 2017
_Raster_Design_2017_English_Win_32bit_dlm.sfx.exe
_Raster_Design_2017_English_Win_64bit_dlm.sfx.exe
Inventor 2017 local help:
_and_lt_local_help/Autodesk_Inventor_2017_Help.exe
Inventor 2017 sample files:
_sample_files/autodesk_inventor_2017_samples.sfx.exe
VRED Presenter 2017
_VREDPRS_2017_Enu_Win_64bit_dlm.sfx.exe
VRED Server 2017
_VREDSRV_2017_Enu_Win_64bit_dlm.sfx.exe
VRED 2017
_VRED_2017_Enu_Win_64bit_dlm.sfx.exe
VRED Design 2017
_VREDDES_2017_Enu_Win_64bit_dlm.sfx.exe
VRED Professional 2017
_VREDPRO_2017_Enu_Win_64bit_dlm.sfx.exe
AutoCAD Plant 3D 2017
_Plant_3D_2017_English_Win_64bit_dlm_001_002.sfx.exe
_Plant_3D_2017_English_Win_64bit_dlm_002_002.sfx.exe
Autodesk AutoCAD P&ID 2017
_PNID_2017_English_Win_64bit_dlm_001_002.sfx.exe
_PNID_2017_English_Win_64bit_dlm_002_002.sfx.exe
Mac_OSX Versions
Autodesk Alias Design 2017
_Alias_Design_2017_English_Mac_OSX.dmg
ALIAS AutoStudio 2017 for Mac
_Alias_AutoStudio_2017_English_Mac_OSX.dmg
Autodesk Alias Surface 2017
_Alias_Surface_2017_English_Mac_OSX.dmg
Autodesk Nastran In-CAD 2017
_INCAD_2017_Win_64bit_dlm.sfx.exe
Autodesk Nastran 2017
_2017_Win_64bit_dlm.sfx.exe
Autodesk_AutoCAD_Civil_3D_2017 Documentation
_civil_3d_2017/Autodesk_AutoCAD_Civil_3D_2017_Help_English.exe
Autodesk AutoCAD Civil 3D 2017
_Civil3D_2017_English_Win_64bit_dlm_001_003.sfx.exe
_Civil3D_2017_English_Win_64bit_dlm_002_003.sfx.exe
_Civil3D_2017_English_Win_64bit_dlm_003_003.sfx.exe
Infrastructure Design Suite Ultimate 2017 Win 64bit
_2017_Enu_Win_64bit_dlm_001_007.sfx.exe
_2017_Enu_Win_64bit_dlm_002_007.sfx.exe
_2017_Enu_Win_64bit_dlm_003_007.sfx.exe
_2017_Enu_Win_64bit_dlm_004_007.sfx.exe
_2017_Enu_Win_64bit_dlm_005_007.sfx.exe
_2017_Enu_Win_64bit_dlm_006_007.sfx.exe
_2017_Enu_Win_64bit_dlm_007_007.sfx.exe
Building Design Suite Ultimate 2017 Win 64bit
_2017_Enu_Win_64bit_dlm_001_007.sfx.exe
_2017_Enu_Win_64bit_dlm_002_007.sfx.exe
_2017_Enu_Win_64bit_dlm_003_007.sfx.exe
_2017_Enu_Win_64bit_dlm_004_007.sfx.exe
_2017_Enu_Win_64bit_dlm_005_007.sfx.exe
_2017_Enu_Win_64bit_dlm_006_007.sfx.exe
_2017_Enu_Win_64bit_dlm_007_007.sfx.exe
Documentation Alias 2017 Product Help
Online Help
Help Install Instructions
English
_2017_install_help/installing_autodesk_alias_2017_help.html
Japanese
_2017_install_help/JPN/JPN/installing_autodesk_alias_2017_help_jpn.html
Simplified Chinese
_2017_install_help/CHS/CHS/installing_autodesk_alias_2017_help_chs.html
Windows Help Installer
English
_2017_install_help/autodesk_alias_2017_help.exe
Japanese
_2017_install_help/JPN/JPN/alias_help_2017_jpn.exe
Simplified Chinese
_2017_install_help/CHS/CHS/alias_help_2017_chs.exe
Mac OS X Help Installer
English
_2017_install_help/autodesk_alias_2017_help.dmg
Japanese
_2017_install_help/JPN/JPN/AliasDocs2017_Japanese_Mac.dmg
Simplified Chinese
_2017_install_help/CHS/CHS/AliasDocs2017_Chinese_Mac.dmg
Learning Movies
Japanese
_2017_install_help/JPN/JPN/learningmovies_jpn.exe
Simplified Chinese
_2017_install_help/CHS/CHS/learningmovies_chs.exe
Factory Design Suite
_2017_Enu_Win_64bit_dlm_001_007.sfx.exe
_2017_Enu_Win_64bit_dlm_002_007.sfx.exe
_2017_Enu_Win_64bit_dlm_003_007.sfx.exe
_2017_Enu_Win_64bit_dlm_004_007.sfx.exe
_2017_Enu_Win_64bit_dlm_005_007.sfx.exe
_2017_Enu_Win_64bit_dlm_006_007.sfx.exe
_2017_Enu_Win_64bit_dlm_007_007.sfx.exe
Autodesk Revit 2017
_Revit_2017_English_Win_64bit_dlm_001_002.sfx.exe
_Revit_2017_English_Win_64bit_dlm_002_002.sfx.exe
Autodesk Revit LT 2017
_Revit_LT_2017_English_Win_64bit_dlm_001_002.sfx.exe
_Revit_LT_2017_English_Win_64bit_dlm_002_002.sfx.exe
Showcase 2017
_2017_English_Win_64bit_dlm_001_003.sfx.exe
_2017_English_Win_64bit_dlm_002_003.sfx.exe
_2017_English_Win_64bit_dlm_003_003.sfx.exe
CFD 2017
_CFD_2017_Win_64bit_dlm_001_002.sfx.exe
_CFD_2017_Win_64bit_dlm_002_002.sfx.exe
Simulation Mechanical 2017
_Simulation_Mechanical_2017_Win_64bit_dlm_001_002.sfx.exe
_Simulation_Mechanical_2017_Win_64bit_dlm_002_002.sfx.exe
Fabrication CADmep 2017
_Fabrication_CADmep_2017_win_64bit_dlm.sfx.exe
Fabrication CAMduct 2017
_Fabrication_CAMduct_2017_win_64bit_dlm.sfx.exe
Fabrication ESTmep 2017
_Fabrication_ESTmep_2017_win_64bit_dlm.sfx.exe
Autodesk InfraWorks 360 2017
_InfraWorks_2017_Win_64bit_DLM.sfx.exe
Point Layout 2017
_Point_Layout_2017_Win_32-64bit_en-us.exe
ReCap 360 Pro 2017
_ReCap360_30052_Multilingual_Win_64bit_dlm.sfx.exe
Alias Design 2017
_ALSDES_2017_Enu_64bit_dlm.sfx.exe
Alias Surface 2017
_ASURF_2017_Enu_64bit_dlm_001_002.sfx.exe
_ASURF_2017_Enu_64bit_dlm_002_002.sfx.exe
Alias Speedform 2017
_ALSSF_2017_Enu_Win_64bit_dlm.sfx.exe
Alias Autostudio 2017
_ALAUST_2017_Enu_64bit_dlm_001_003.sfx.exe
_ALAUST_2017_Enu_64bit_dlm_002_003.sfx.exe
_ALAUST_2017_Enu_64bit_dlm_003_003.sfx.exe
3ds Max 2017
_3ds_Max_2017_EFGJKPS_Win_64bit_001_002.sfx.exe
_3ds_Max_2017_EFGJKPS_Win_64bit_002_002.sfx.exe
Online Help for 3dsmax
3dsmax OFFLINE Help
_max_2017_help/3dsMaxHelp.exe
for other languages go to:
-max/downloads/caas/downloads/content/download-and-install-3ds-max-product-help.html
-general-discussion/apple-mac-os-10-11-x-el-capitan-is-not-supported/m-p/5983674#M6245
mental ray Plugin, Satellte and Standalone for Maya 2016 Extension 2 (Direct links)
Maya 2016.5 is a part of Alias AutoStudio 2017
Windows
_2016_extension_2/mentalray_Plugin_for_Maya_2016_EXT2_EN_JP_ZH_Win_64bit_dlm.sfx.exe
_2016_extension_2/mentalray_Satellite_3_13_1_for_Maya_2016_EN_JP_ZH_Win_64bit.exe
_2016_extension_2/mentalray_Standalone_3_13_1_for_Autodesk_2016_EN_Win_64bit.exe
Linux
_2016_extension_2/mentalray_Plugin_for_Maya_2016_EXT2_EN_Linux_64bit.tgz
_2016_extension_2/mentalray_Satellite_3_13_1_for_Maya_2016_EN_Linux_64bit.tgz
_2016_extension_2/mentalray_Standalone_3_13_1_for_Autodesk_2016_EN_Linux_64bit.tgz
OSX
_2016_extension_2/mentalray_Plugin_for_Maya_2016_EXT2_EN_JP_ZH_Mac_OSX.dmg
_2016_extension_2/mentalray_Satellite_3_13_1_for_Maya_2016_EN_JP_ZH_Mac_OSX.dmg
_2016_extension_2/mentalray_Standalone_3_13_1_for_Autodesk_2016_EN_Mac_OSX.dmg
Offline help for Autodesk Maya 2016 Extension 2
_2016/MayaHelp2016_Ext2_enu.zip
Autodesk 3ds Max 2017 Sample Files
_sample_files/2017/Autodesk_3ds_Max_2017_English_Win_Samples_Files.exe
Open Light 2017 (32-bit and 64-bit)
Applies to AutoCAD Architecture 2017, and AutoCAD MEP 2017 (32-bit and 64-bit)
Open Light is a plug-in for AutoCAD Architecture / MEP and offers standard labels for objects, such as openings, windows and doors, which are common in Austria and part of Switzerland.
Open Light provides additional display properties for Plan 1-50 and Plan 1-100 representation to show dimensions of doors and windows automatically.
_downloads/Open_Light_2017_x64.exe
_downloads/Open_Light_2017.exe
Open Light 2017 Object Enabler (32-bit and 64-bit)
Applies to AutoCAD 2017, AutoCAD Architecture 2017, AutoCAD Civil 3D 2017, AutoCAD Electrical 2017, AutoCAD MEP 2017, AutoCAD Map 3D 2017, and AutoCAD Mechanical 2017Open Light Object Enabler is a freeware application distributed to Autodesk customers at no charge for the purpose of fully accessing Open Light objects in drawing files. Without this object enabler installed, you can share drawings using proxy graphics representations or the Export to AutoCAD command.
_downloads/Open_Light_2017_OE_x64.exe
_downloads/Open_Light_2017_OE.exe
Building Design Suite Premium 2017
_2017_Enu_Win_64bit_dlm_001_006.sfx.exe
_2017_Enu_Win_64bit_dlm_002_006.sfx.exe
_2017_Enu_Win_64bit_dlm_003_006.sfx.exe
_2017_Enu_Win_64bit_dlm_004_006.sfx.exe
_2017_Enu_Win_64bit_dlm_005_006.sfx.exe
_2017_Enu_Win_64bit_dlm_006_006.sfx.exe
Download ↔ https://imgfil.com/2uy1X4
DOWNLOAD ✫ https://imgfil.com/2uy0P3
Do you love racing games? Do you want to feel the thrill of drifting around the corners and burning rubber on the asphalt? If yes, then you should download CarX Drift Racing 2 mod apk 1.22.0, the best drifting game for Android devices.
-CarX Drift Racing 2 is a sequel to the popular CarX Drift Racing game, which has over 50 million downloads on Google Play Store. In this game, you can choose from hundreds of cars, customize them, and drift on various tracks with realistic physics and graphics.
-Download Zip ->->->-> https://urlin.us/2uT13B
In this article, we will tell you everything you need to know about CarX Drift Racing 2, why you should download its mod apk version, and some tips and tricks to improve your drifting skills.
-CarX Drift Racing 2 is a racing game that focuses on drifting, which is a driving technique where the driver intentionally oversteers the car to make it slide sideways. Drifting is not only fun, but also challenging and rewarding, as it requires skill and precision.
-Some of the features that make CarX Drift Racing 2 stand out from other racing games are:
-The gameplay of CarX Drift Racing 2 is simple and intuitive. You can control your car using various options, such as tilt, buttons, or steering wheel. You can also choose between automatic or manual transmission.
-The main goal of the game is to drift as much as possible and earn points based on your speed, angle, and duration of your drifts. You can also perform combos by linking multiple drifts together without losing control or hitting obstacles.
-How to install carx drift racing 2 mod apk 1.22.0 on android
-Carx drift racing 2 mod apk 1.22.0 unlimited money and gold
-Carx drift racing 2 mod apk 1.22.0 latest version free download
-Carx drift racing 2 mod apk 1.22.0 gameplay and features
-Carx drift racing 2 mod apk 1.22.0 review and rating
-Carx drift racing 2 mod apk 1.22.0 download link and instructions
-Carx drift racing 2 mod apk 1.22.0 best cars and tracks
-Carx drift racing 2 mod apk 1.22.0 online multiplayer mode
-Carx drift racing 2 mod apk 1.22.0 cheats and hacks
-Carx drift racing 2 mod apk 1.22.0 comparison with original version
-Carx drift racing 2 mod apk 1.22.0 update and patch notes
-Carx drift racing 2 mod apk 1.22.0 requirements and compatibility
-Carx drift racing 2 mod apk 1.22.0 tips and tricks
-Carx drift racing 2 mod apk 1.22.0 offline mode and data
-Carx drift racing 2 mod apk 1.22.0 bugs and issues
-Carx drift racing 2 mod apk 1.22.0 support and feedback
-Carx drift racing 2 mod apk 1.22.0 customization and settings
-Carx drift racing 2 mod apk 1.22.0 screenshots and videos
-Carx drift racing 2 mod apk 1.22.0 new features and improvements
-Carx drift racing 2 mod apk 1.22.0 pros and cons
-Carx drift racing 2 mod apk 1.22.0 alternatives and similar apps
-Carx drift racing 2 mod apk 1.22.0 developer and publisher
-Carx drift racing 2 mod apk 1.22.0 license and terms of service
-Carx drift racing 2 mod apk 1.22.0 download size and speed
-Carx drift racing 2 mod apk 1.22.0 awards and achievements
The game has a scoring system that evaluates your performance based on various criteria, such as style, speed, line, angle, etc. You can also earn coins and gold by completing missions, achievements, and events.
-You can use these currencies to buy new cars or upgrade your existing ones. You can also unlock new tracks and modes by increasing your reputation level.
-While CarX Drift Racing 2 is a free game, it also has some limitations and drawbacks, such as ads, in-app purchases, and limited resources. If you want to enjoy the game without any restrictions or interruptions, you should download CarX Drift Racing 2 mod apk 1.22.0.
-Some of the benefits of downloading CarX Drift Racing 2 mod apk 1.22.0 are:
-Downloading and installing CarX Drift Racing 2 mod apk 1.22.0 is easy and fast. Just follow these simple steps:
-Download CarX Drift Racing 2 mod apk 1.22.0 here
-If you want to improve your drifting skills and become a master of CarX Drift Racing 2, you should follow these tips and tricks:
-Not all cars are created equal in CarX Drift Racing 2. Some cars are better suited for drifting than others, depending on their power, weight, handling, and grip. You should choose a car that matches your style and preference, and experiment with different settings and configurations.
-You can tune your car in the tuning mode, where you can adjust various parameters, such as engine power, suspension stiffness, tire pressure, etc. You can also customize your car in the garage mode, where you can change its appearance, such as paint, vinyls, wheels, spoilers, etc.
-Tuning and customizing your car can make a big difference in your performance and score. You should try to find the optimal balance between speed and stability, and make your car look cool and unique.
-Drifting is not just about sliding sideways. It is also about controlling your car's movement and direction with skill and precision. You should master the drifting techniques that will help you achieve better results and impress your opponents.
-Some of the drifting techniques that you should learn are:
-You should practice these techniques on different tracks and situations, and find out which ones work best for you. You should also learn how to control your car's angle, speed, and line while drifting, as these factors will affect your score and style.
-If you want to test your skills and have more fun, you should compete with other players online in the multiplayer mode of CarX Drift Racing 2. You can choose from different modes, such as tandem drifting, sprint racing, etc., and challenge players from all over the world.
-You can also join or create a club or a team, where you can chat with other members, share your cars and tunes, and participate in tournaments and events.
-Competing with other players online will not only give you more excitement and challenge, but also help you improve your skills and learn from others. You can also earn more coins and gold, as well as reputation points, by winning races and drifting battles.
-CarX Drift Racing 2 is a game that will satisfy your need for speed and adrenaline. It is a game that will let you experience the thrill of drifting on realistic tracks with realistic cars. It is a game that will let you customize your car and tune it to your liking. It is a game that will let you compete with other players online and show off your skills and style.
-If you want to enjoy the game to the fullest, you should download CarX Drift Racing 2 mod apk 1.22.0, which will give you unlimited resources, premium features, and no ads. You can download it from the link below, and follow the instructions to install it on your device.
-Download CarX Drift Racing 2 mod apk 1.22.0 now and enjoy the ultimate drifting experience.
-Here are some frequently asked questions about CarX Drift Racing 2 and its mod apk version:
-A: Yes, CarX Drift Racing 2 mod apk 1.22.0 is safe to download and use, as long as you download it from a trusted source, such as the link we provided. It does not contain any viruses or malware, and it does not require any root or jailbreak access.
-A: No, you will not get banned from the game if you use CarX Drift Racing 2 mod apk 1.22.0, as it has an anti-ban feature that protects your account from detection. However, you should use it at your own risk, and be respectful of other players online.
-A: Yes, you can update CarX Drift Racing 2 mod apk 1.22.0 to the latest version, as long as you download it from the same source as before. You can also check for updates regularly on our website, where we will post the latest versions of the mod apk.
-A: Yes, you can play CarX Drift Racing 2 offline, as it does not require an internet connection to run. However, you will not be able to access some features of the game, such as multiplayer mode, online events, etc.
-A: You can contact the developers of CarX Drift Racing 2 by visiting their official website, where you can find their email address, social media accounts, and support forum.
197e85843dDo you love popping bubbles and solving puzzles? If so, you might want to try Bubble Shooter, one of the most popular and addictive games ever created. Bubble Shooter is a classic game that has been enjoyed by millions of people around the world for decades. In this article, we will tell you everything you need to know about Bubble Shooter, including what it is, how to download it for free on your laptop, and how to play and enjoy it.
-Bubble Shooter is a simple yet challenging game that involves shooting colored bubbles at a cluster of bubbles on the top of the screen. The goal is to match three or more bubbles of the same color to make them pop and clear the board. The game ends when there are no more bubbles left or when the bubbles reach the bottom of the screen.
-Download Zip 🌟 https://jinyurl.com/2uNUdR
Bubble Shooter has a long and interesting history that dates back to the 1980s. The game was inspired by two arcade games: Bubble Bobble, released by Taito in 1986, and Puzzle Bobble, also known as Bust-a-Move, released by Taito in 1994. Puzzle Bobble was the first game to feature the bubble shooting mechanic that became the core of Bubble Shooter. In 2000, Puzzle Bobble was ported to Windows and renamed as Bubble Shooter. Since then, the game has been adapted and modified by many developers and publishers, resulting in hundreds of variations and versions of Bubble Shooter.
-The gameplay of Bubble Shooter is very simple and intuitive. You use your mouse or touchpad to aim and shoot bubbles at the cluster of bubbles on the top of the screen. You can see the color of the next bubble in the launcher at the bottom of the screen. You can also bounce the bubbles off the walls to reach tricky spots. When you match three or more bubbles of the same color, they pop and disappear, along with any bubbles that are hanging from them. You get points for every bubble you pop, and bonus points for popping more bubbles at once or dropping large groups of bubbles. You can also earn special bubbles that have different effects, such as bombs, stars, rainbows, or fireballs.
-Bubble Shooter is not only fun and entertaining, but also beneficial for your brain and mood. Playing Bubble Shooter can help you improve your concentration, memory, logic, problem-solving, and spatial awareness skills. It can also help you relax, reduce stress, and boost your happiness. Moreover, playing Bubble Shooter can be a great way to pass time, kill boredom, or challenge yourself.
-If you want to play Bubble Shooter on your laptop, you have several options to choose from. One of the easiest and safest ways is to download it from Microsoft Store, which offers a variety of free and paid versions of Bubble Shooter for Windows 10 devices. Here are the steps to do so:
-Before you download Bubble Shooter from Microsoft Store, make sure that your laptop meets the minimum requirements for running the game. These are:
-If your laptop does not meet these requirements, you may experience some issues or errors while playing the game. You may also need to update your Windows 10 to the latest version.
-Once you have checked the requirements, you can follow these steps to download and install Bubble Shooter from Microsoft Store:
-Congratulations! You have successfully downloaded and installed Bubble Shooter on your laptop. You can now enjoy playing this classic game anytime and anywhere.
-bubble shooter classic game for pc
-bubble shooter (free) windows 10 app
-bubble pop: bubble shooter microsoft store
-download bubble shooter puzzle bobble
-bubble shooter offline game for laptop
-bubble shooter 1986 arcade game windows
-bubble shooter deluxe free download pc
-bubble shooter net energy gain experiment
-bubble shooter taito original game download
-bubble shooter kstar facility korea institute
-bubble shooter 100 million degrees celsius
-bubble shooter fusion reaction 30 seconds
-bubble shooter holy grail mini sun
-bubble shooter 15 million kelvins core
-bubble shooter milanworldwidegames windows
-bubble shooter gasp mobile games inc
-bubble shooter action & adventure category
-bubble shooter card & board classics
-bubble shooter family & kids puzzle & trivia
-bubble shooter system requirements windows 10
-bubble shooter approximate size 54.25 mb
-bubble shooter age rating for all ages
-bubble shooter access your internet connection
-bubble shooter installation up to ten devices
-bubble shooter language supported english us
-bubble shooter publisher info support link
-bubble shooter privacy policy terms of transaction
-bubble shooter seizure warnings photosensitive
-bubble shooter report this game to microsoft
-bubble shooter aim and tap the screen to launch
-bubble shooter clear the board before it fills up
-bubble shooter use menu to change level or score
-bubble shooter screenshots people also like
-bubble shooter mahjong solitaire free +
-bubble shooter sudoku hd free free +
-bubble shooter amazing mahjong: zen free +
-bubble shooter mahjong - shanghai free
-bubble shooter mahjongg v+ free +
-bubble shooter the bubble buster free +
-bubble shooter the bubble shooter free +
-bubble shooter solitaire 40 cards free +
-bubble shooter upward free climb up game
-bubble shooter dictionary free offline english
-bubble shooter phoenix force free + boss battles
-download and install instructions for windows 10
If you do not want to download Bubble Shooter from Microsoft Store, or if you want to try other versions of Bubble Shooter, you have some alternative ways to play this game online or offline. Here are some of them:
-As you can see, there are many ways to play Bubble Shooter on your laptop or other devices. You can choose the one that suits your preferences and needs best.
-Now that you have downloaded or accessed Bubble Shooter on your laptop, you may wonder how to play and enjoy this game. Don't worry, we will guide you through the basics and give you some tips and tricks to make the most out of this game.
-The basic rules for playing Bubble Shooter are very simple and easy to follow. Here are some tips to help you get started:
-By following these basic rules and tips, you can play Bubble Shooter like a pro and have fun while doing so.
-Bubble Shooter is a game that never gets old or boring. There are many different modes and levels of Bubble Shooter that you can choose from, depending on your mood and preference. Here are some of them:
-By playing these different modes and levels of Bubble Shooter, you can experience different aspects and challenges of this game and keep yourself entertained for hours.
-Bubble Shooter is a game that requires both skill and luck. However, there are some strategies and tricks that you can use to improve your chances of scoring high in this game. Here are some of them:
-By using these strategies and tricks, you can score high in Bubble Shooter and impress yourself and others with your skills.
-Bubble Shooter is a classic game that has been loved by millions of people for decades. It is a simple yet challenging game that involves shooting colored bubbles at a cluster of bubbles on the top of the screen. The goal is to match three or more bubbles of the same color to make them pop and clear the board.
-In this article, we have covered everything you need to know about Bubble Shooter, including:
-By following this guide, you can play and enjoy Bubble Shooter on your laptop anytime and anywhere.
-What are you waiting for? Download Bubble Shooter for free on your laptop today and start popping bubbles and having fun. You will not regret it. Bubble Shooter is a game that can keep you entertained for hours, challenge your brain, and make you happy. It is a game that everyone can play and enjoy, regardless of age or skill level. It is a game that never gets old or boring. It is a game that you will love.
-Download Bubble Shooter for free on your laptop now and join the millions of people who are already addicted to this classic game. You will be glad you did.
-Here are some of the most frequently asked questions about Bubble Shooter:
-If you are a fan of soccer games, you must have heard of FIFA 22, the latest installment in the popular FIFA series by EA Sports. FIFA 22 is a realistic and immersive soccer simulation game that features hundreds of teams, players, stadiums, and modes. You can play as your favorite soccer stars, create your own custom player or manager, compete with other players online, or enjoy the street-style Volta Football mode.
-But what if you want to play FIFA 22 on your mobile device without spending too much storage space or data? Well, there is a solution for that. You can download FIFA 22 zip apk, which is a compressed version of the game that you can install on your Android or iOS device. In this article, we will show you how to download FIFA 22 zip apk, what are its features and benefits, and what are the risks involved. Let's get started!
-Download File ✦✦✦ https://jinyurl.com/2uNOTh
If you have an Android device, you can follow these steps to download and install FIFA 22 zip apk:
-If you have an iOS device, you can follow these steps to download and install FIFA 22 zip apk:
-FIFA 22 zip apk is not just a compressed version of the game, but also a full-featured one that offers all the same features as the original game. Here are some of the features that you can enjoy with FIFA 22 zip apk:
-Downloading FIFA 22 zip apk has some advantages over downloading the original game from the official app stores. Here are some of the benefits that you can get with FIFA 22 zip apk download:
-However, downloading FIFA 22 zip apk also has some risks and drawbacks that you should be aware of before you decide to do it. Here are some of the risks that you may face with FIFA 22 zip apk download:
-fifa 22 android offline zip file download
-how to install fifa 22 apk obb data zip
-fifa 22 mod fifa 14 zip apk free download
-fifa 22 mobile zip apk latest version download
-download fifa 22 original apk obb data offline
-fifa 22 zip apk download for android phone
-fifa 22 apk obb data zip file size
-fifa 22 mod apk zip download with unlimited coins
-fifa 22 zip apk download link no verification
-fifa 22 apk obb data zip highly compressed download
-fifa 22 android zip apk gameplay and features
-fifa 22 zip apk download for pc windows 10
-fifa 22 mod apk obb data zip update download
-fifa 22 mobile zip apk offline mode download
-fifa 22 zip apk download without human verification
-fifa 22 apk obb data zip password and extractor
-fifa 22 mod apk zip download with new transfers and kits
-fifa 22 zip apk download for ios iphone ipad
-fifa 22 apk obb data zip system requirements
-fifa 22 mobile zip apk online mode download
-fifa 22 zip apk download full version free
-fifa 22 apk obb data zip file location
-fifa 22 mod apk zip download with real faces and stadiums
-fifa 22 zip apk download for android tablet
-fifa 22 apk obb data zip error and fix
-fifa 22 mobile zip apk graphics and sound quality
-fifa 22 zip apk download cracked and modded
-fifa 22 apk obb data zip file manager and editor
-fifa 22 mod apk zip download with commentary and languages
-fifa 22 zip apk download for android tv box
-fifa 22 apk obb data zip backup and restore
-fifa 22 mobile zip apk controls and settings
-fifa 22 zip apk download safe and secure
-fifa 22 apk obb data zip cheats and hacks
-fifa 22 mod apk zip download with all players and teams unlocked
-fifa 22 zip apk download for android emulator
-fifa 22 apk obb data zip tutorial and guide
-fifa 22 mobile zip apk review and rating
-fifa 22 zip apk download latest news and updates
-fifa 22 apk obb data zip support and contact
FIFA 22 is a realistic and immersive soccer simulation game that features hundreds of teams, players, stadiums, and modes. You can play as your favorite soccer stars, create your own custom player or manager, compete with other players online, or enjoy the street-style Volta Football mode.
-If you want to play FIFA 22 on your mobile device without spending too much storage space or data, you can download FIFA 22 zip apk, which is a compressed version of the game that you can install on your Android or iOS device.
-In this article, we showed you how to download FIFA 22 zip apk, what are its features and benefits, and what are the risks involved. We hope that this article was helpful and informative for you.
-If you have any questions or comments about FIFA 22 zip apk download, feel free to leave them below. We would love to hear from you!
-For it to work, you have to duplicate the Space and run it on your own profile where a (paid) private GPU will be attributed to it during runtime. As each T4 costs US$0,60/h, it should cost < US$1 to train a model with less than 100 images on default settings!
-If you haven't already, attribute a T4 GPU to it (via the Settings tab) and run the training below. You will be billed by the minute from when you activate the GPU until when you turn it off.
-{code}-""" - - -class JupyterRenderable: - """A shim to write html to Jupyter notebook.""" - - def __init__(self, html: str, text: str) -> None: - self.html = html - self.text = text - - def _repr_mimebundle_( - self, include: Sequence[str], exclude: Sequence[str], **kwargs: Any - ) -> Dict[str, str]: - data = {"text/plain": self.text, "text/html": self.html} - if include: - data = {k: v for (k, v) in data.items() if k in include} - if exclude: - data = {k: v for (k, v) in data.items() if k not in exclude} - return data - - -class JupyterMixin: - """Add to an Rich renderable to make it render in Jupyter notebook.""" - - __slots__ = () - - def _repr_mimebundle_( - self: "ConsoleRenderable", - include: Sequence[str], - exclude: Sequence[str], - **kwargs: Any, - ) -> Dict[str, str]: - console = get_console() - segments = list(console.render(self, console.options)) - html = _render_segments(segments) - text = console._render_buffer(segments) - data = {"text/plain": text, "text/html": html} - if include: - data = {k: v for (k, v) in data.items() if k in include} - if exclude: - data = {k: v for (k, v) in data.items() if k not in exclude} - return data - - -def _render_segments(segments: Iterable[Segment]) -> str: - def escape(text: str) -> str: - """Escape html.""" - return text.replace("&", "&").replace("<", "<").replace(">", ">") - - fragments: List[str] = [] - append_fragment = fragments.append - theme = DEFAULT_TERMINAL_THEME - for text, style, control in Segment.simplify(segments): - if control: - continue - text = escape(text) - if style: - rule = style.get_html_style(theme) - text = f'{text}' if rule else text - if style.link: - text = f'{text}' - append_fragment(text) - - code = "".join(fragments) - html = JUPYTER_HTML_FORMAT.format(code=code) - - return html - - -def display(segments: Iterable[Segment], text: str) -> None: - """Render segments to Jupyter.""" - html = _render_segments(segments) - jupyter_renderable = JupyterRenderable(html, text) - try: - from IPython.display import display as ipython_display - - ipython_display(jupyter_renderable) - except ModuleNotFoundError: - # Handle the case where the Console has force_jupyter=True, - # but IPython is not installed. - pass - - -def print(*args: Any, **kwargs: Any) -> None: - """Proxy for Console print.""" - console = get_console() - return console.print(*args, **kwargs) diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/importlib_resources/_adapters.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/importlib_resources/_adapters.py deleted file mode 100644 index ea363d86a564b5450666aa00aecd46353326a75a..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/importlib_resources/_adapters.py +++ /dev/null @@ -1,170 +0,0 @@ -from contextlib import suppress -from io import TextIOWrapper - -from . import abc - - -class SpecLoaderAdapter: - """ - Adapt a package spec to adapt the underlying loader. - """ - - def __init__(self, spec, adapter=lambda spec: spec.loader): - self.spec = spec - self.loader = adapter(spec) - - def __getattr__(self, name): - return getattr(self.spec, name) - - -class TraversableResourcesLoader: - """ - Adapt a loader to provide TraversableResources. - """ - - def __init__(self, spec): - self.spec = spec - - def get_resource_reader(self, name): - return CompatibilityFiles(self.spec)._native() - - -def _io_wrapper(file, mode='r', *args, **kwargs): - if mode == 'r': - return TextIOWrapper(file, *args, **kwargs) - elif mode == 'rb': - return file - raise ValueError( - "Invalid mode value '{}', only 'r' and 'rb' are supported".format(mode) - ) - - -class CompatibilityFiles: - """ - Adapter for an existing or non-existent resource reader - to provide a compatibility .files(). - """ - - class SpecPath(abc.Traversable): - """ - Path tied to a module spec. - Can be read and exposes the resource reader children. - """ - - def __init__(self, spec, reader): - self._spec = spec - self._reader = reader - - def iterdir(self): - if not self._reader: - return iter(()) - return iter( - CompatibilityFiles.ChildPath(self._reader, path) - for path in self._reader.contents() - ) - - def is_file(self): - return False - - is_dir = is_file - - def joinpath(self, other): - if not self._reader: - return CompatibilityFiles.OrphanPath(other) - return CompatibilityFiles.ChildPath(self._reader, other) - - @property - def name(self): - return self._spec.name - - def open(self, mode='r', *args, **kwargs): - return _io_wrapper(self._reader.open_resource(None), mode, *args, **kwargs) - - class ChildPath(abc.Traversable): - """ - Path tied to a resource reader child. - Can be read but doesn't expose any meaningful children. - """ - - def __init__(self, reader, name): - self._reader = reader - self._name = name - - def iterdir(self): - return iter(()) - - def is_file(self): - return self._reader.is_resource(self.name) - - def is_dir(self): - return not self.is_file() - - def joinpath(self, other): - return CompatibilityFiles.OrphanPath(self.name, other) - - @property - def name(self): - return self._name - - def open(self, mode='r', *args, **kwargs): - return _io_wrapper( - self._reader.open_resource(self.name), mode, *args, **kwargs - ) - - class OrphanPath(abc.Traversable): - """ - Orphan path, not tied to a module spec or resource reader. - Can't be read and doesn't expose any meaningful children. - """ - - def __init__(self, *path_parts): - if len(path_parts) < 1: - raise ValueError('Need at least one path part to construct a path') - self._path = path_parts - - def iterdir(self): - return iter(()) - - def is_file(self): - return False - - is_dir = is_file - - def joinpath(self, other): - return CompatibilityFiles.OrphanPath(*self._path, other) - - @property - def name(self): - return self._path[-1] - - def open(self, mode='r', *args, **kwargs): - raise FileNotFoundError("Can't open orphan path") - - def __init__(self, spec): - self.spec = spec - - @property - def _reader(self): - with suppress(AttributeError): - return self.spec.loader.get_resource_reader(self.spec.name) - - def _native(self): - """ - Return the native reader if it supports files(). - """ - reader = self._reader - return reader if hasattr(reader, 'files') else self - - def __getattr__(self, attr): - return getattr(self._reader, attr) - - def files(self): - return CompatibilityFiles.SpecPath(self.spec, self._reader) - - -def wrap_spec(package): - """ - Construct a package spec with traversable compatibility - on the spec/loader/reader. - """ - return SpecLoaderAdapter(package.__spec__, TraversableResourcesLoader) diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/pyparsing/unicode.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/pyparsing/unicode.py deleted file mode 100644 index 06526203911de55da3c2a8c5ae73f48024c3f018..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/pyparsing/unicode.py +++ /dev/null @@ -1,352 +0,0 @@ -# unicode.py - -import sys -from itertools import filterfalse -from typing import List, Tuple, Union - - -class _lazyclassproperty: - def __init__(self, fn): - self.fn = fn - self.__doc__ = fn.__doc__ - self.__name__ = fn.__name__ - - def __get__(self, obj, cls): - if cls is None: - cls = type(obj) - if not hasattr(cls, "_intern") or any( - cls._intern is getattr(superclass, "_intern", []) - for superclass in cls.__mro__[1:] - ): - cls._intern = {} - attrname = self.fn.__name__ - if attrname not in cls._intern: - cls._intern[attrname] = self.fn(cls) - return cls._intern[attrname] - - -UnicodeRangeList = List[Union[Tuple[int, int], Tuple[int]]] - - -class unicode_set: - """ - A set of Unicode characters, for language-specific strings for - ``alphas``, ``nums``, ``alphanums``, and ``printables``. - A unicode_set is defined by a list of ranges in the Unicode character - set, in a class attribute ``_ranges``. Ranges can be specified using - 2-tuples or a 1-tuple, such as:: - - _ranges = [ - (0x0020, 0x007e), - (0x00a0, 0x00ff), - (0x0100,), - ] - - Ranges are left- and right-inclusive. A 1-tuple of (x,) is treated as (x, x). - - A unicode set can also be defined using multiple inheritance of other unicode sets:: - - class CJK(Chinese, Japanese, Korean): - pass - """ - - _ranges: UnicodeRangeList = [] - - @_lazyclassproperty - def _chars_for_ranges(cls): - ret = [] - for cc in cls.__mro__: - if cc is unicode_set: - break - for rr in getattr(cc, "_ranges", ()): - ret.extend(range(rr[0], rr[-1] + 1)) - return [chr(c) for c in sorted(set(ret))] - - @_lazyclassproperty - def printables(cls): - "all non-whitespace characters in this range" - return "".join(filterfalse(str.isspace, cls._chars_for_ranges)) - - @_lazyclassproperty - def alphas(cls): - "all alphabetic characters in this range" - return "".join(filter(str.isalpha, cls._chars_for_ranges)) - - @_lazyclassproperty - def nums(cls): - "all numeric digit characters in this range" - return "".join(filter(str.isdigit, cls._chars_for_ranges)) - - @_lazyclassproperty - def alphanums(cls): - "all alphanumeric characters in this range" - return cls.alphas + cls.nums - - @_lazyclassproperty - def identchars(cls): - "all characters in this range that are valid identifier characters, plus underscore '_'" - return "".join( - sorted( - set( - "".join(filter(str.isidentifier, cls._chars_for_ranges)) - + "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyzªµº" - + "ÀÁÂÃÄÅÆÇÈÉÊËÌÍÎÏÐÑÒÓÔÕÖØÙÚÛÜÝÞßàáâãäåæçèéêëìíîïðñòóôõöøùúûüýþÿ" - + "_" - ) - ) - ) - - @_lazyclassproperty - def identbodychars(cls): - """ - all characters in this range that are valid identifier body characters, - plus the digits 0-9 - """ - return "".join( - sorted( - set( - cls.identchars - + "0123456789" - + "".join( - [c for c in cls._chars_for_ranges if ("_" + c).isidentifier()] - ) - ) - ) - ) - - -class pyparsing_unicode(unicode_set): - """ - A namespace class for defining common language unicode_sets. - """ - - # fmt: off - - # define ranges in language character sets - _ranges: UnicodeRangeList = [ - (0x0020, sys.maxunicode), - ] - - class BasicMultilingualPlane(unicode_set): - "Unicode set for the Basic Multilingual Plane" - _ranges: UnicodeRangeList = [ - (0x0020, 0xFFFF), - ] - - class Latin1(unicode_set): - "Unicode set for Latin-1 Unicode Character Range" - _ranges: UnicodeRangeList = [ - (0x0020, 0x007E), - (0x00A0, 0x00FF), - ] - - class LatinA(unicode_set): - "Unicode set for Latin-A Unicode Character Range" - _ranges: UnicodeRangeList = [ - (0x0100, 0x017F), - ] - - class LatinB(unicode_set): - "Unicode set for Latin-B Unicode Character Range" - _ranges: UnicodeRangeList = [ - (0x0180, 0x024F), - ] - - class Greek(unicode_set): - "Unicode set for Greek Unicode Character Ranges" - _ranges: UnicodeRangeList = [ - (0x0342, 0x0345), - (0x0370, 0x0377), - (0x037A, 0x037F), - (0x0384, 0x038A), - (0x038C,), - (0x038E, 0x03A1), - (0x03A3, 0x03E1), - (0x03F0, 0x03FF), - (0x1D26, 0x1D2A), - (0x1D5E,), - (0x1D60,), - (0x1D66, 0x1D6A), - (0x1F00, 0x1F15), - (0x1F18, 0x1F1D), - (0x1F20, 0x1F45), - (0x1F48, 0x1F4D), - (0x1F50, 0x1F57), - (0x1F59,), - (0x1F5B,), - (0x1F5D,), - (0x1F5F, 0x1F7D), - (0x1F80, 0x1FB4), - (0x1FB6, 0x1FC4), - (0x1FC6, 0x1FD3), - (0x1FD6, 0x1FDB), - (0x1FDD, 0x1FEF), - (0x1FF2, 0x1FF4), - (0x1FF6, 0x1FFE), - (0x2129,), - (0x2719, 0x271A), - (0xAB65,), - (0x10140, 0x1018D), - (0x101A0,), - (0x1D200, 0x1D245), - (0x1F7A1, 0x1F7A7), - ] - - class Cyrillic(unicode_set): - "Unicode set for Cyrillic Unicode Character Range" - _ranges: UnicodeRangeList = [ - (0x0400, 0x052F), - (0x1C80, 0x1C88), - (0x1D2B,), - (0x1D78,), - (0x2DE0, 0x2DFF), - (0xA640, 0xA672), - (0xA674, 0xA69F), - (0xFE2E, 0xFE2F), - ] - - class Chinese(unicode_set): - "Unicode set for Chinese Unicode Character Range" - _ranges: UnicodeRangeList = [ - (0x2E80, 0x2E99), - (0x2E9B, 0x2EF3), - (0x31C0, 0x31E3), - (0x3400, 0x4DB5), - (0x4E00, 0x9FEF), - (0xA700, 0xA707), - (0xF900, 0xFA6D), - (0xFA70, 0xFAD9), - (0x16FE2, 0x16FE3), - (0x1F210, 0x1F212), - (0x1F214, 0x1F23B), - (0x1F240, 0x1F248), - (0x20000, 0x2A6D6), - (0x2A700, 0x2B734), - (0x2B740, 0x2B81D), - (0x2B820, 0x2CEA1), - (0x2CEB0, 0x2EBE0), - (0x2F800, 0x2FA1D), - ] - - class Japanese(unicode_set): - "Unicode set for Japanese Unicode Character Range, combining Kanji, Hiragana, and Katakana ranges" - _ranges: UnicodeRangeList = [] - - class Kanji(unicode_set): - "Unicode set for Kanji Unicode Character Range" - _ranges: UnicodeRangeList = [ - (0x4E00, 0x9FBF), - (0x3000, 0x303F), - ] - - class Hiragana(unicode_set): - "Unicode set for Hiragana Unicode Character Range" - _ranges: UnicodeRangeList = [ - (0x3041, 0x3096), - (0x3099, 0x30A0), - (0x30FC,), - (0xFF70,), - (0x1B001,), - (0x1B150, 0x1B152), - (0x1F200,), - ] - - class Katakana(unicode_set): - "Unicode set for Katakana Unicode Character Range" - _ranges: UnicodeRangeList = [ - (0x3099, 0x309C), - (0x30A0, 0x30FF), - (0x31F0, 0x31FF), - (0x32D0, 0x32FE), - (0xFF65, 0xFF9F), - (0x1B000,), - (0x1B164, 0x1B167), - (0x1F201, 0x1F202), - (0x1F213,), - ] - - class Hangul(unicode_set): - "Unicode set for Hangul (Korean) Unicode Character Range" - _ranges: UnicodeRangeList = [ - (0x1100, 0x11FF), - (0x302E, 0x302F), - (0x3131, 0x318E), - (0x3200, 0x321C), - (0x3260, 0x327B), - (0x327E,), - (0xA960, 0xA97C), - (0xAC00, 0xD7A3), - (0xD7B0, 0xD7C6), - (0xD7CB, 0xD7FB), - (0xFFA0, 0xFFBE), - (0xFFC2, 0xFFC7), - (0xFFCA, 0xFFCF), - (0xFFD2, 0xFFD7), - (0xFFDA, 0xFFDC), - ] - - Korean = Hangul - - class CJK(Chinese, Japanese, Hangul): - "Unicode set for combined Chinese, Japanese, and Korean (CJK) Unicode Character Range" - - class Thai(unicode_set): - "Unicode set for Thai Unicode Character Range" - _ranges: UnicodeRangeList = [ - (0x0E01, 0x0E3A), - (0x0E3F, 0x0E5B) - ] - - class Arabic(unicode_set): - "Unicode set for Arabic Unicode Character Range" - _ranges: UnicodeRangeList = [ - (0x0600, 0x061B), - (0x061E, 0x06FF), - (0x0700, 0x077F), - ] - - class Hebrew(unicode_set): - "Unicode set for Hebrew Unicode Character Range" - _ranges: UnicodeRangeList = [ - (0x0591, 0x05C7), - (0x05D0, 0x05EA), - (0x05EF, 0x05F4), - (0xFB1D, 0xFB36), - (0xFB38, 0xFB3C), - (0xFB3E,), - (0xFB40, 0xFB41), - (0xFB43, 0xFB44), - (0xFB46, 0xFB4F), - ] - - class Devanagari(unicode_set): - "Unicode set for Devanagari Unicode Character Range" - _ranges: UnicodeRangeList = [ - (0x0900, 0x097F), - (0xA8E0, 0xA8FF) - ] - - # fmt: on - - -pyparsing_unicode.Japanese._ranges = ( - pyparsing_unicode.Japanese.Kanji._ranges - + pyparsing_unicode.Japanese.Hiragana._ranges - + pyparsing_unicode.Japanese.Katakana._ranges -) - -pyparsing_unicode.BMP = pyparsing_unicode.BasicMultilingualPlane - -# add language identifiers using language Unicode -pyparsing_unicode.العربية = pyparsing_unicode.Arabic -pyparsing_unicode.中文 = pyparsing_unicode.Chinese -pyparsing_unicode.кириллица = pyparsing_unicode.Cyrillic -pyparsing_unicode.Ελληνικά = pyparsing_unicode.Greek -pyparsing_unicode.עִברִית = pyparsing_unicode.Hebrew -pyparsing_unicode.日本語 = pyparsing_unicode.Japanese -pyparsing_unicode.Japanese.漢字 = pyparsing_unicode.Japanese.Kanji -pyparsing_unicode.Japanese.カタカナ = pyparsing_unicode.Japanese.Katakana -pyparsing_unicode.Japanese.ひらがな = pyparsing_unicode.Japanese.Hiragana -pyparsing_unicode.한국어 = pyparsing_unicode.Korean -pyparsing_unicode.ไทย = pyparsing_unicode.Thai -pyparsing_unicode.देवनागरी = pyparsing_unicode.Devanagari diff --git a/spaces/Atualli/yoloxTeste/checkYoloxGPU.sh b/spaces/Atualli/yoloxTeste/checkYoloxGPU.sh deleted file mode 100644 index d1c7225ca9bc4f79a7e07c4244ca3d8fab1f7628..0000000000000000000000000000000000000000 --- a/spaces/Atualli/yoloxTeste/checkYoloxGPU.sh +++ /dev/null @@ -1,16 +0,0 @@ -#!/bin/sh -export path=/home/atualli/.local/lib/python3.8/site-packages:$PATH -cd ~/Projetos/huggingface/yoloxTeste_GPU -SERVER=192.168.0.153 -PORT=8081 - -if lsof -Pi :$PORT -sTCP:LISTEN -t >/dev/null ; then - echo "running" -else - ./telegramCrise.sh "reiniciando_yolox_GPU_linux_192.168.0.153:8081" - pkill -f app1.py - python app1.py & - echo "not running" -fi - - diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/image_dense_captions.py b/spaces/Awiny/Image2Paragraph/models/grit_src/image_dense_captions.py deleted file mode 100644 index 33a15ea982beea0e58739740c01954575bbb1ab3..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/image_dense_captions.py +++ /dev/null @@ -1,69 +0,0 @@ -import argparse -import multiprocessing as mp -import os -import time -import cv2 -import tqdm -import sys - -from detectron2.config import get_cfg -from detectron2.data.detection_utils import read_image -from detectron2.utils.logger import setup_logger - -sys.path.insert(0, 'models/grit_src/third_party/CenterNet2/projects/CenterNet2/') -from centernet.config import add_centernet_config -from models.grit_src.grit.config import add_grit_config - -from models.grit_src.grit.predictor import VisualizationDemo -import json -from utils.util import resize_long_edge_cv2 - - -# constants -WINDOW_NAME = "GRiT" - - -def dense_pred_to_caption(predictions): - boxes = predictions["instances"].pred_boxes if predictions["instances"].has("pred_boxes") else None - object_description = predictions["instances"].pred_object_descriptions.data - new_caption = "" - for i in range(len(object_description)): - new_caption += (object_description[i] + ": " + str([int(a) for a in boxes[i].tensor.cpu().detach().numpy()[0]])) + "; " - return new_caption - -def setup_cfg(args): - cfg = get_cfg() - if args["cpu"]: - cfg.MODEL.DEVICE="cpu" - add_centernet_config(cfg) - add_grit_config(cfg) - cfg.merge_from_file(args["config_file"]) - cfg.merge_from_list(args["opts"]) - # Set score_threshold for builtin models - cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = args["confidence_threshold"] - cfg.MODEL.PANOPTIC_FPN.COMBINE.INSTANCES_CONFIDENCE_THRESH = args["confidence_threshold"] - if args["test_task"]: - cfg.MODEL.TEST_TASK = args["test_task"] - cfg.MODEL.BEAM_SIZE = 1 - cfg.MODEL.ROI_HEADS.SOFT_NMS_ENABLED = False - cfg.USE_ACT_CHECKPOINT = False - cfg.freeze() - return cfg - - -def get_parser(device): - arg_dict = {'config_file': "models/grit_src/configs/GRiT_B_DenseCap_ObjectDet.yaml", 'cpu': False, 'confidence_threshold': 0.5, 'test_task': 'DenseCap', 'opts': ["MODEL.WEIGHTS", "pretrained_models/grit_b_densecap_objectdet.pth"]} - if device == "cpu": - arg_dict["cpu"] = True - return arg_dict - -def image_caption_api(image_src, device): - args2 = get_parser(device) - cfg = setup_cfg(args2) - demo = VisualizationDemo(cfg) - if image_src: - img = read_image(image_src, format="BGR") - img = resize_long_edge_cv2(img, 384) - predictions, visualized_output = demo.run_on_image(img) - new_caption = dense_pred_to_caption(predictions) - return new_caption \ No newline at end of file diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/layers/csrc/cocoeval/cocoeval.cpp b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/layers/csrc/cocoeval/cocoeval.cpp deleted file mode 100644 index 0a5b7b907c06720fefc77b0dfd921b8ec3ecf2be..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/layers/csrc/cocoeval/cocoeval.cpp +++ /dev/null @@ -1,507 +0,0 @@ -// Copyright (c) Facebook, Inc. and its affiliates. -#include "cocoeval.h" -#include
Si estás buscando una forma divertida y emocionante de jugar al blackjack online, deberías echar un vistazo a Blackjack 21 Blackjackist. Este es un juego de casino gratuito que le ofrece la oportunidad de jugar al blackjack con millones de jugadores de todo el mundo. Puedes disfrutar de gráficos realistas en 3D, chatear con otros jugadores, obtener fichas gratis todos los días y aprender a jugar y ganar en el blackjack. En este artículo, revisaremos las características, beneficios, reglas y estrategias de Blackjack 21 Blackjackist. También te mostraremos cómo descargar y jugar el juego en tu dispositivo. Si usted es un principiante o un profesional, usted encontrará algo para amar acerca de este juego.
-Download Zip ○○○ https://bltlly.com/2v6Kvx
Blackjack 21 Blackjackist es un juego de casino desarrollado por KamaGames, un operador de casino social líder. El juego está disponible para Android, iOS, Windows, Mac y Facebook. Puedes descargarlo gratis desde la Google Play Store, la App Store o el sitio web oficial. También puedes reproducirlo en Facebook o en tu navegador. El juego tiene más de 10 millones de descargas y una calificación de 4.5 estrellas tanto en Google Play y App Store.
-Blackjack 21 Blackjackist tiene muchas características y beneficios que lo convierten en uno de los mejores juegos de blackjack en línea. Estos son algunos de ellos:
-El blackjack es un juego de cartas en el que intentas vencer al crupier consiguiendo un valor de mano lo más cercano posible a 21, sin pasarte. El juego se juega con una o más barajas estándar de 52 cartas. Las cartas tienen los siguientes valores:
-El juego comienza con el repartidor repartiendo dos cartas a cada jugador y a sí mismos. Una de las cartas del repartidor está boca arriba y la otra boca abajo. Los jugadores pueden ver sus propias cartas y la carta boca arriba del repartidor. Los jugadores tienen que decidir qué hacer con sus manos. Tienen las siguientes opciones:
- -Después de que todos los jugadores hayan terminado sus turnos, el repartidor revela su carta boca abajo y juega su mano de acuerdo con las siguientes reglas:
-El resultado del juego se determina comparando los valores finales de las manos de los jugadores y del repartidor. Los posibles resultados son:
-Para aumentar tus posibilidades de ganar en el blackjack, necesitas usar algunas estrategias básicas que te digan qué hacer en diferentes situaciones. Por ejemplo, siempre debes dividir ases y ochos, nunca dividir dieces o cincos, doblar en 11 o 10 cuando el repartidor tiene una carta baja, golpear en 17 suave o más bajo, pararse en 17 duro o más alto, etc. Puede encontrar gráficos de estrategia más detallados en línea que le muestran cómo jugar cada mano posible contra cada carta de repartidor posible.
-Descargar y jugar Blackjack 21 Blackjackist es fácil y rápido. Solo tienes que seguir estos sencillos pasos:
-Dependiendo del dispositivo que quieras usar, puedes descargar el juego desde diferentes fuentes. Aquí están los enlaces e instrucciones para cada dispositivo:
-Blackjack 21 Blackjackist es un gran juego de casino que te permite jugar blackjack en línea con millones de jugadores de todo el mundo. Puedes disfrutar de gráficos realistas en 3D, chatear con otros jugadores, obtener fichas gratis todos los días y aprender a jugar y ganar en el blackjack. Puede descargar y jugar el juego en su dispositivo de forma gratuita desde varias fuentes. También puede utilizar algunos consejos y trucos para mejorar sus habilidades y ganar más fichas. Blackjack 21 Blackjackist es un juego que te mantendrá entretenido y comprometido durante horas.
-Si estás listo para unirte a la comunidad de blackjack y divertirte, descarga Blackjack 21 Blackjackist hoy y empieza a jugar. ¡No te arrepentirás!
-Aquí hay algunas preguntas frecuentes sobre Blackjack 21 Blackjackist:
-¿Te encanta jugar a Clash Royale, el popular juego de estrategia en tiempo real de Supercell? ¿Te gustaría poder jugar en una pantalla más grande con mejores gráficos y rendimiento? Si es así, estás de suerte. En este artículo, le mostraremos cómo descargar e instalar Clash Royale Bluestacks APK en su PC, y cómo disfrutar de la mejor experiencia de juego con Bluestacks, la plataforma de juego móvil más popular del mundo para Windows y Mac.
-DOWNLOAD ——— https://bltlly.com/2v6KY0
Clash Royale es un juego multijugador en línea donde te enfrentas a otros jugadores en duelos de ritmo rápido. Puedes elegir entre una variedad de personajes del universo Clash of Clans, como Gigantes, Reyes Bárbaros, Rompemuros, Arqueros y muchos más. También puedes recoger y mejorar cartas, construir tus propias barajas y unirte a clanes para compartir cartas y participar en guerras de clanes.
-Clash Royale es un juego que combina estrategia, habilidad y suerte. Tienes que desplegar tus tropas sabiamente, usar tus hechizos con eficacia y administrar tu elixir de manera eficiente. También tienes que adaptarte a diferentes escenarios, modos y desafíos. Clash Royale es un juego que nunca se vuelve aburrido, ya que siempre hay algo nuevo para descubrir y disfrutar.
-Bluestacks es una plataforma de juegos móvil que te permite jugar juegos Android en tu PC o Mac. Es 100% seguro y de uso gratuito. Con Bluestacks, puedes acceder a millones de juegos de varios géneros, como RPG, estrategia, acción, rompecabezas, casual y más. También puedes jugar online o offline, dependiendo de tu preferencia.
- -Si quieres jugar Clash Royale en tu PC con Bluestacks, debes seguir estos sencillos pasos:
- -Vaya a el sitio web oficial de Bluestacks y haga clic en el botón "Descargar". Esto comenzará a descargar el archivo de instalación para Bluestacks 10 o Bluestacks 5, dependiendo de su elección. Ambas versiones son compatibles con Windows 7 o superior y Mac OS X 10.12 o superior.
-Una vez completada la descarga, abra el archivo de instalación y siga las instrucciones en la pantalla. El proceso de instalación puede tardar unos minutos, dependiendo de las especificaciones del sistema. Después de la instalación, verá un icono de acceso directo en el escritorio o en el menú de inicio de Bluestacks.
-Haga doble clic en el icono de Bluestacks para iniciar el reproductor de aplicaciones. Se le pedirá que inicie sesión con su cuenta de Google, que es necesaria para acceder a la Google Play Store y otros servicios de Google. Si no tienes una cuenta de Google, puedes crear una gratis. También puedes omitir este paso si quieres usar otras tiendas de aplicaciones o archivos APK.
-Hay dos maneras de conseguir Clash Royale en Bluestacks. Una es buscarlo en la tienda de aplicaciones Bluestacks, que funciona con la Google Play Store. Puede encontrarlo escribiendo "Clash Royale" en la barra de búsqueda y haciendo clic en el botón "Instalar". La otra forma es descargar el archivo APK de un sitio web de terceros, como Uptodown. Puedes encontrarlo yendo a
- Ahora que tienes Clash Royale en tu PC, puedes empezar a jugar con Bluestacks. Aquí hay algunos consejos y trucos para mejorar su experiencia de juego: Una de las mejores características de Bluestacks es que te permite personalizar tus controles de teclado y ratón para cualquier juego. Puede acceder a esta función haciendo clic en el icono "Teclado" en la esquina inferior derecha de la ventana Bluestacks. Esto abrirá un menú donde puede asignar teclas o botones del ratón a diferentes acciones, como desplegar tropas, usar hechizos, hacer zoom, etc. También puede usar mapas de teclas predefinidos o crear sus propios. Puede guardar sus ajustes y cambiar entre ellos en cualquier momento. Otra gran característica de Bluestacks es que ofrece gráficos full HD y un rendimiento suave para cualquier juego. Puede ajustar la configuración de gráficos haciendo clic en el icono "Configuración" en la esquina superior derecha de la ventana Bluestacks. Esto abrirá un menú donde puede cambiar la resolución, la velocidad de fotogramas, el modo de visualización, DPI, etc. También puede habilitar o deshabilitar características como altas tasas de fotogramas, controles inteligentes, notificaciones de juegos, etc. También puede verificar los requisitos del sistema y la compatibilidad haciendo clic en el "Información del sistema" icono en el mismo menú. En conclusión, jugar Clash Royale en PC con Bluestacks es una gran manera de disfrutar de este increíble juego en una pantalla más grande con mejores gráficos y rendimiento. También puedes personalizar tus controles, acceder a funciones exclusivas y recompensas, y divertirte más con Bluestacks. Todo lo que necesita hacer es descargar e instalar Clash Royale Bluestacks APK en su PC siguiendo nuestros sencillos pasos anteriores. Entonces, ¿qué estás esperando? Comience a jugar Clash Royale en PC con Bluestacks hoy!
- Demo for Wlop Diffusion Stable Diffusion model. This space was created using SD Space Creator. oh no 😐 something wrong with the 🤗 hugging face servers 😐 hopefully, it will be fixed soon Running on CPU 🥶 This demo does not work on CPU. Download Zip ✫ https://urloso.com/2uyS1I Download File ————— https://urloso.com/2uyQim Download Zip ✸✸✸ https://urloso.com/2uyQBE Download Zip ►►► https://urloso.com/2uyR49 Download Zip » https://urloso.com/2uyPUE If you are looking for a radio automation software that can handle all your broadcasting needs, look no further than Jazler RadioStar 2.2.30. This software is the latest version of the popular Jazler RadioStar series, and it comes with many new features and improvements that will make your radio station sound professional and engaging. Jazler RadioStar 2.2.30 is a full-featured radio automation software that can manage your music, jingles, spots, events, sweepers, voice tracks, and more. You can easily edit your music database, program your spots and events, record and insert voice tracks, play audio files directly from your browser, and print detailed reports of your broadcasts. Download Zip ———>>> https://urloso.com/2uyOWu Some of the key features of Jazler RadioStar 2.2.30 are: Jazler RadioStar 2.2.30 is compatible with Windows operating systems and requires Microsoft .NET framework 4.5 to run. You can download a two-hour working demo of Jazler RadioStar 2.2.30 from the official website[^1^] and see for yourself how it performs. The demo version includes pre-loaded audio files so you can start broadcasting right away. If you want to purchase the full version of Jazler RadioStar 2.2.30, you will need a serial key and a keygen to activate it. A serial key is a unique code that identifies your software license, and a keygen is a program that generates valid serial keys for you. You can find many websites that offer serial keys and keygens for Jazler RadioStar 2.2.30, but be careful as some of them may contain viruses or malware that can harm your computer. One of the safest and most reliable websites to get a serial key and a keygen for Jazler RadioStar 2.2.30 is [Full][Multilenguaje]. This website has been tested by many users and has received positive feedback for its quality and service. You can download the serial key and the keygen for Jazler RadioStar 2.2.30 from this website for free, and enjoy the full features of this amazing radio automation software. Jazler RadioStar 2.2.30 is the ultimate radio automation software for any radio station that wants to sound professional and engaging. With its easy to use interface, advanced features, and reliable performance, Jazler RadioStar 2.2.30 will make your broadcasting experience easier and more enjoyable than ever before. The Data Warehouse Lifecycle Toolkit, 2nd Edition (9780470149775) Complete coverage of best practices from data warehouse project inception through on-going program management. Updates industry best practices to be in sync with current recommendations of Kimball Group. Streamlines the lifecycle methodology to be more efficient and user-friendly The Data Warehouse ETL Toolkit (9780764567575) shows data warehouse developers how to effectively manage the ETL (Extract, Transform, Load) phase of the data warehouse development lifecycle. The authors show developers the best methods for extracting data from scattered sources throughout the enterprise, removing obsolete, redundant, and innaccurate data, transforming the remaining data into correctly formatted data structures, and then physically loading them into the data warehouse. Download File ✅ https://tinurli.com/2uwi9S Download File ✸ https://tinurli.com/2uwhTm Download ► https://tinurli.com/2uwkLb Download ★★★★★ https://tinurli.com/2uwk3a If you are looking for a fun and exciting action game that will keep you on the edge of your seat, you might want to check out Dark Riddle. This game is a single-player adventure that will challenge you to solve puzzles, avoid obstacles, and uncover the dark secrets of your neighbor. You can play Dark Riddle on your Android device, but did you know that you can also play it on your PC or Mac? In this article, we will show you how to download and play Dark Riddle on PC with an emulator. Dark Riddle is an action game developed by PAGA GROUP. It is a casual game that can be played by anyone who enjoys a good mystery and a thrilling gameplay. In Dark Riddle, you will explore the house of your suspicious neighbor, who seems to be hiding something sinister. You will encounter different characters and creatures along the way, each with their own story and personality. You will also have to solve various puzzles and collect items that will help you access different areas of the house. But be careful, as there are also traps and obstacles that will try to stop you from reaching the basement, where the truth lies. Download › https://urlca.com/2uO7Cg Dark Riddle is not your typical action game. It is more than just running and jumping around. It is also a game of logic and strategy, where you have to use your brain to solve puzzles and find clues. You will have to interact with different objects and devices in the house, such as switches, keys, codes, cameras, etc. Some puzzles are easy, while others are more complex and require more time and attention. You will also have to be stealthy and avoid being detected by your neighbor or his guards. If you get caught, you will have to start over from the last checkpoint. Dark Riddle is not a lonely game. You will meet various characters and creatures during your adventure, some friendly, some hostile. You will encounter police officers, merchants of alien technology, strange animals, robots, zombies, and more. Each character and creature has their own role and purpose in the game. Some will help you, some will hinder you, some will trade with you, some will fight with you. You will also learn more about their background and motivation as you progress through the game. Each character and creature adds more depth and flavor to the game's story. Dark Riddle is not an easy game. It is a game that will test your skills and patience. You will face many obstacles and traps in the house, such as locked doors, lasers, mines, spikes, etc. You will have to use your agility and reflexes to avoid them or find ways to disable them. You will also have to collect various items that will help you in your quest, such as weapons, tools, gadgets, coins, etc. Some items are essential for progressing through the game, while others are optional but useful or fun. You can also use coins to buy items from merchants or upgrade your abilities. Dark Riddle is a great game to play on your Android device, but it can be even better if you play it on your PC or Mac. Here are some reasons why playing Dark Riddle on PC is a good idea:Paso 5: Instalar y abrir Clash Royale en Bluestacks
-
-Cómo jugar Clash Royale en PC con Bluestacks
-Personaliza tus controles de teclado y ratón para una jugabilidad óptima
-Disfruta de los gráficos Full HD y el buen rendimiento de Bluestacks
-Acceda a funciones exclusivas y recompensas de Bluestacks
-
-Conclusión
-
-
-
\ No newline at end of file
diff --git a/spaces/BirdL/DONOTUSEDemo/app.py b/spaces/BirdL/DONOTUSEDemo/app.py
deleted file mode 100644
index 09194cf99d94ebe99add5cae7bf5099b9e160614..0000000000000000000000000000000000000000
--- a/spaces/BirdL/DONOTUSEDemo/app.py
+++ /dev/null
@@ -1,34 +0,0 @@
-import gradio as gr
-import torch
-from random import randint
-import os
-import huggingface_hub
-
-tok = os.getenv('HF_TOKEN')
-huggingface_hub.login(tok)
-
-from huggingface_hub import HfApi
-from peft import PeftModel, PeftConfig
-from transformers import AutoModelForCausalLM, AutoTokenizer
-
-config = PeftConfig.from_pretrained("BirdL/DONOTUSEV5")
-model = AutoModelForCausalLM.from_pretrained("stabilityai/stablelm-3b-4e1t", token=tok, trust_remote_code=True)
-model = PeftModel.from_pretrained(model, "BirdL/DONOTUSEV5")
-tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablelm-3b-4e1t", token=tok)
-
-def response(message, history):
- batch = tokenizer(message, return_tensors='pt')
-
- with torch.cuda.amp.autocast():
- output_tokens = model.generate(**batch, max_new_tokens=20)
- output_tokens = tokenizer.decode(output_tokens[0], skip_special_tokens=True)
- filename = (("file" + str(randint(0, 1000000)) + ".txt"))
- api = HfApi()
- api.upload_file(
- path_or_fileobj=("|Question:" + message + " |RespV2: " + output_tokens).encode('ascii') ,
- path_in_repo=(filename),
- repo_id="BirdL/Data",
- )
-
- return output_tokens
-gr.ChatInterface(response).launch()
\ No newline at end of file
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/cpp/vector.h b/spaces/CVPR/LIVE/thrust/thrust/system/cpp/vector.h
deleted file mode 100644
index ee5cfce6aa8d26a2d6d924361f42bfec99cf8601..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/cpp/vector.h
+++ /dev/null
@@ -1,69 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-/*! \file thrust/system/cpp/vector.h
- * \brief A dynamically-sizable array of elements which reside in memory available to
- * Thrust's standard C++ system.
- */
-
-#pragma once
-
-#include
-
-
-
->
->
-> **Abstract:** *Unconditional human image generation is an important task in vision and graphics, which enables various applications in the creative industry. Existing studies in this field mainly focus on "network engineering" such as designing new components and objective functions. This work takes a data-centric perspective and investigates multiple critical aspects in "data engineering", which we believe would complement the current practice. To facilitate a comprehensive study, we collect and annotate a large-scale human image dataset with over 230K samples capturing diverse poses and textures. Equipped with this large dataset, we rigorously investigate three essential factors in data engineering for StyleGAN-based human generation, namely data size, data distribution, and data alignment. Extensive experiments reveal several valuable observations w.r.t. these aspects: 1) Large-scale data, more than 40K images, are needed to train a high-fidelity unconditional human generation model with vanilla StyleGAN. 2) A balanced training set helps improve the generation quality with rare face poses compared to the long-tailed counterpart, whereas simply balancing the clothing texture distribution does not effectively bring an improvement. 3) Human GAN models with body centers for alignment outperform models trained using face centers or pelvis points as alignment anchors. In addition, a model zoo and human editing applications are demonstrated to facilitate future research in the community.*
-**Keyword:** Human Image Generation, Data-Centric, StyleGAN
-
-[Jianglin Fu](mailto:fujianglin@sensetime.com), [Shikai Li](mailto:lishikai@sensetime.com), [Yuming Jiang](https://yumingj.github.io/), [Kwan-Yee Lin](https://kwanyeelin.github.io/), [Chen Qian](https://scholar.google.com/citations?user=AerkT0YAAAAJ&hl=zh-CN), [Chen Change Loy](https://www.mmlab-ntu.com/person/ccloy/), [Wayne Wu](https://wywu.github.io/), and [Ziwei Liu](https://liuziwei7.github.io/)
-**[[Demo Video]](https://youtu.be/nIrb9hwsdcI)** | **[[Project Page]](https://stylegan-human.github.io/)** | **[[Paper]](https://arxiv.org/pdf/2204.11823.pdf)**
-
-## Updates
-- [20/07/2022] [SHHQ-1.0](./docs/Dataset.md) dataset with 40K images is released! :sparkles:
-- [15/06/2022] Data alignment and real-image inversion scripts are released.
-- [26/04/2022] Technical report released!
-- [22/04/2022] Technical report will be released before May.
-- [21/04/2022] The codebase and project page are created.
-
-## Data Download
-The first version SHHQ-1.0, with 40K images is released. To download and use the dataset set, please read the instructions in [Dataset.md](./docs/Dataset.md)
-
-(We are currently facing large incoming applications, and we need to carefully verify all the applicants, please be patient, and we will reply to you as soon as possible.)
-
-## Model Zoo
-
-| Structure | 1024x512 | Metric | Scores | 512x256 | Metric | Scores |
-| --------- |:----------:| :----------:| :----------:| :-----: | :-----: | :-----: |
-| StyleGAN1 |[stylegan_human_v1_1024.pkl](https://drive.google.com/file/d/1h-R-IV-INGdPEzj4P9ml6JTEvihuNgLX/view?usp=sharing)| fid50k | 3.79 | to be released | - | - |
-| StyleGAN2 |[stylegan_human_v2_1024.pkl](https://drive.google.com/file/d/1FlAb1rYa0r_--Zj_ML8e6shmaF28hQb5/view?usp=sharing)| fid50k_full | 1.57 |[stylegan_human_v2_512.pkl](https://drive.google.com/file/d/1dlFEHbu-WzQWJl7nBBZYcTyo000H9hVm/view?usp=sharing) | fid50k_full | 1.97 |
-| StyleGAN3 |to be released | - | - | [stylegan_human_v3_512.pkl](https://drive.google.com/file/d/1_274jk_N6WSCkKWeu7hjHycqGvbuOFf5/view?usp=sharing) | fid50k_full | 2.54 |
-
-
-
-## Web Demo
-
-Integrated into [Huggingface Spaces 🤗](https://huggingface.co/spaces) using [Gradio](https://github.com/gradio-app/gradio). Try out the Web Demo for generation: [](https://huggingface.co/spaces/hysts/StyleGAN-Human) and interpolation [](https://huggingface.co/spaces/hysts/StyleGAN-Human-Interpolation)
-
-
-
-
-
-We prepare a Colab demo to allow you to synthesize images with the provided models, as well as visualize the performance of style-mixing, interpolation, and attributes editing.
-The notebook will guide you to install the necessary environment and download pretrained models. The output images can be found in `./StyleGAN-Human/outputs/`.
-Hope you enjoy!
-
-## Usage
-
-### System requirements
-* The original code bases are [stylegan (tensorflow)](https://github.com/NVlabs/stylegan), [stylegan2-ada (pytorch)](https://github.com/NVlabs/stylegan2-ada-pytorch), [stylegan3 (pytorch)](https://github.com/NVlabs/stylegan3), released by NVidia
-
-* We tested in Python 3.8.5 and PyTorch 1.9.1 with CUDA 11.1. (See https://pytorch.org for PyTorch install instructions.)
-
-### Installation
-To work with this project on your own machine, you need to install the environmnet as follows:
-
-```
-conda env create -f environment.yml
-conda activate stylehuman
-# [Optional: tensorflow 1.x is required for StyleGAN1. ]
-pip install nvidia-pyindex
-pip install nvidia-tensorflow[horovod]
-pip install nvidia-tensorboard==1.15
-```
-Extra notes:
-1. In case having some conflicts when calling CUDA version, please try to empty the LD_LIBRARY_PATH. For example:
-```
-LD_LIBRARY_PATH=; python generate.py --outdir=out/stylegan_human_v2_1024 --trunc=1 --seeds=1,3,5,7
---network=pretrained_models/stylegan_human_v2_1024.pkl --version 2
-```
-
-
-2. We found the following troubleshooting links might be helpful: [1.](https://github.com/NVlabs/stylegan3), [2.](https://github.com/NVlabs/stylegan3/blob/main/docs/troubleshooting.md)
-
-### Train
-The training scripts are based on the original [stylegan1](https://github.com/NVlabs/stylegan), [stylegan2-ada](https://github.com/NVlabs/stylegan2-ada-pytorch), and [stylegan3](https://github.com/NVlabs/stylegan3) with minor changes. Here we only provide the scripts with modifications for SG2 and SG3. You can replace the old files with the provided scripts to train. (assume SHHQ-1.0 is placed under data/)
-
-#### Train Stylegan2-ada-pytorch with SHHQ-1.0
-```
-python train.py --outdir=training_results/sg2/ --data=data/SHHQ-1.0/ \
- --gpus=8 --aug=noaug --mirror=1 --snap=250 --cfg=shhq --square=False
-```
-#### Train Stylegan3 with SHHQ-1.0
-```
-python train.py --outdir=training_results/sg3/ --cfg=stylegan3-r --gpus=8 --batch=32 --gamma=12.4 \
- --mirror=1 --aug=noaug --data=data/SHHQ-1.0/ --square=False --snap=250
-```
-
-### Pretrained models
-Please put the downloaded pretrained models [from above link](#Model-Zoo) under the folder 'pretrained_models'.
-
-
-### Generate full-body human images using our pretrained model
-```
-# Generate human full-body images without truncation
-python generate.py --outdir=outputs/generate/stylegan_human_v2_1024 --trunc=1 --seeds=1,3,5,7 --network=pretrained_models/stylegan_human_v2_1024.pkl --version 2
-
-# Generate human full-body images with truncation
-python generate.py --outdir=outputs/generate/stylegan_human_v2_1024 --trunc=0.8 --seeds=0-10 --network=pretrained_models/stylegan_human_v2_1024.pkl --version 2
-
-# Generate human full-body images using stylegan V1
-python generate.py --outdir=outputs/generate/stylegan_human_v1_1024 --network=pretrained_models/stylegan_human_v1_1024.pkl --version 1 --seeds=1,3,5
-
-# Generate human full-body images using stylegan V3
-python generate.py --outdir=outputs/generate/stylegan_human_v3_512 --network=pretrained_models/stylegan_human_v3_512.pkl --version 3 --seeds=1,3,5
-```
-
-
-#### Note: The following demos are generated based on models related to StyleGAN V2 (stylegan_human_v2_512.pkl and stylegan_human_v2_1024.pkl). If you want to see results for V1 or V3, you need to change the loading method of the corresponding models.
-
-
-### Interpolation
-```
-python interpolation.py --network=pretrained_models/stylegan_human_v2_1024.pkl --seeds=85,100 --outdir=outputs/inter_gifs
-```
-
-### Style-mixing **image** using stylegan2
-```
-python style_mixing.py --network=pretrained_models/stylegan_human_v2_1024.pkl --rows=85,100,75,458,1500 \\
- --cols=55,821,1789,293 --styles=0-3 --outdir=outputs/stylemixing
-```
-
-### Style-mixing **video** using stylegan2
-```
-python stylemixing_video.py --network=pretrained_models/stylegan_human_v2_1024.pkl --row-seed=3859 \\
- --col-seeds=3098,31759,3791 --col-styles=8-12 --trunc=0.8 --outdir=outputs/stylemixing_video
-```
-
-### Aligned raw images
-For alignment, we use [openpose-pytorch](https://github.com/Hzzone/pytorch-openpose) for body-keypoints detection and [PaddlePaddle](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.5/contrib/PP-HumanSeg) for human segmentation.
-Before running the alignment script, few models need to be installed:
-1. download [body_pose_model.pth](https://drive.google.com/drive/folders/1JsvI4M4ZTg98fmnCZLFM-3TeovnCRElG?usp=sharing) and place it into openpose/model/.
-2. download and extract [deeplabv3p_resnet50_os8_humanseg_512x512_100k_with_softmax](https://paddleseg.bj.bcebos.com/dygraph/humanseg/export/deeplabv3p_resnet50_os8_humanseg_512x512_100k_with_softmax.zip) into PP_HumanSeg/export_model/deeplabv3p_resnet50_os8_humanseg_512x512_100k_with_softmax.
-3. download and extract [deeplabv3p_resnet50_os8_humanseg_512x512_100k](https://paddleseg.bj.bcebos.com/dygraph/humanseg/train/deeplabv3p_resnet50_os8_humanseg_512x512_100k.zip) into PP_HumanSeg/pretrained_model/deeplabv3p_resnet50_os8_humanseg_512x512_100k.
-4. install paddlepaddel: ``` pip install paddleseg ```
-
-Then you can start alignment:
-```
-python alignment.py --image-folder img/test/ --output-folder aligned_image/
-```
-
-### Invert real image with [PTI](https://github.com/danielroich/PTI)
-Before inversion, please download our PTI weights: [e4e_w+.pt](https://drive.google.com/file/d/1NUfSJqLhsrU7c9PwAtlZ9xtrxhzS_6tu/view?usp=sharing) into /pti/.
-
-Few parameters you can change:
-- /pti/pti_configs/hyperparameters.py:
- - first_inv_type = 'w+' -> Use pretrained e4e encoder
- - first_inv_type = 'w' -> Use projection and optimization
-- /pti/pti_configs/paths_config.py:
- - input_data_path: path of real images
- - e4e: path of e4e_w+.pt
- - stylegan2_ada_shhq: pretrained stylegan2-ada model for SHHQ
-
-```
-python run_pti.py
-```
-Note: we used the test image under 'aligned_image/' (the output of alignment.py), the inverted latent code and fine-tuned generator will be saved in 'outputs/pti/'
-
-
-### Editing with InterfaceGAN, StyleSpace, and Sefa
-```
-python edit.py --network pretrained_models/stylegan_human_v2_1024.pkl --attr_name upper_length \\
- --seeds 61531,61570,61571,61610 --outdir outputs/edit_results
-```
-
-### Editing using inverted latent code
-```
-python edit.py ---network outputs/pti/checkpoints/model_test.pkl --attr_name upper_length \\
- --outdir outputs/edit_results --real True --real_w_path outputs/pti/embeddings/test/PTI/test/0.pt --real_img_path aligned_image/test.png
-```
-
-Note:
-1. ''upper_length'' and ''bottom_length'' of ''attr_name'' are available for demo.
-2. Layers to control and editing strength are set in edit/edit_config.py.
-
-
-### Demo for [InsetGAN](https://arxiv.org/abs/2203.07293)
-
-We implement a quick demo using the key idea from InsetGAN: combining the face generated by FFHQ with the human-body generated by our pretrained model, optimizing both face and body latent codes to get a coherent full-body image.
-Before running the script, you need to download the [FFHQ face model]( https://docs.google.com/uc?export=download&confirm=t&id=125OG7SMkXI-Kf2aqiwLLHyCvSW-gZk3M), or you can use your own face model, as well as [pretrained face landmark](https://docs.google.com/uc?export=download&confirm=&id=1A82DnJBJzt8wI2J8ZrCK5fgHcQ2-tcWM) and [pretrained CNN face detection model for dlib](https://docs.google.com/uc?export=download&confirm=&id=1MduBgju5KFNrQfDLoQXJ_1_h5MnctCIG)
-```
-python insetgan.py --body_network=pretrained_models/stylegan_human_v2_1024.pkl --face_network=pretrained_models/ffhq.pkl \\
- --body_seed=82 --face_seed=43 --trunc=0.6 --outdir=outputs/insetgan/ --video 1
-```
-
-## Results
-
-### Editing with inverted real image
-(from left to right: real image | inverted image | InterFaceGAN result | StyleSpace result | SeFa result)
-
-https://user-images.githubusercontent.com/98547009/173773800-bb7fe54a-84d3-4b30-9864-a6b7b311f8ff.mp4
-
-
-### For more demo, please visit our [**web page**](https://stylegan-human.github.io/) .
-
-
-## TODO List
-
-- [ ] Release 1024x512 version of StyleGAN-Human based on StyleGAN3
-- [ ] Release 512x256 version of StyleGAN-Human based on StyleGAN1
-- [ ] Extension of downstream application (InsetGAN): Add face inversion interface to support fusing user face image and stylegen-human body image
-- [x] Add Inversion Script into the provided editing pipeline
-- [ ] Release Dataset
-
-
-## Related Works
-* (SIGGRAPH 2022) **Text2Human: Text-Driven Controllable Human Image Generation**, Yuming Jiang et al. [[Paper](https://arxiv.org/pdf/2205.15996.pdf)], [[Code](https://github.com/yumingj/Text2Human)], [[Project Page](https://yumingj.github.io/projects/Text2Human.html)], [[Dataset](https://github.com/yumingj/DeepFashion-MultiModal)]
-* (ICCV 2021) **Talk-to-Edit: Fine-Grained Facial Editing via Dialog**, Yuming Jiang et al. [[Paper](https://arxiv.org/abs/2109.04425)], [[Code](https://github.com/yumingj/Talk-to-Edit)], [[Project Page](https://www.mmlab-ntu.com/project/talkedit/)], [[Dataset](https://mmlab.ie.cuhk.edu.hk/projects/CelebA/CelebA_Dialog.html)]
-* (Technical Report 2022) **Generalizable Neural Performer: Learning Robust Radiance Fields for Human Novel View Synthesis**, Wei Cheng et al. [[Paper](https://arxiv.org/pdf/2204.11798.pdf)], [[Code](https://github.com/generalizable-neural-performer/gnr)], [[Project Page](https://generalizable-neural-performer.github.io/)], [[Dataset](https://generalizable-neural-performer.github.io/genebody.html)]
-
-## Citation
-
-If you find this work useful for your research, please consider citing our paper:
-
-```bibtex
-@article{fu2022styleganhuman,
- title={StyleGAN-Human: A Data-Centric Odyssey of Human Generation},
- author={Fu, Jianglin and Li, Shikai and Jiang, Yuming and Lin, Kwan-Yee and Qian, Chen and Loy, Chen-Change and Wu, Wayne and Liu, Ziwei},
- journal = {arXiv preprint},
- volume = {arXiv:2204.11823},
- year = {2022}
-```
-
-## Acknowlegement
-Part of the code is borrowed from [stylegan (tensorflow)](https://github.com/NVlabs/stylegan), [stylegan2-ada (pytorch)](https://github.com/NVlabs/stylegan2-ada-pytorch), [stylegan3 (pytorch)](https://github.com/NVlabs/stylegan3).
diff --git a/spaces/Eddycrack864/Applio-Inference/infer/modules/ipex/__init__.py.py b/spaces/Eddycrack864/Applio-Inference/infer/modules/ipex/__init__.py.py
deleted file mode 100644
index 9f53b2d3f7025b2d71369dababa4e6f2a4affc48..0000000000000000000000000000000000000000
--- a/spaces/Eddycrack864/Applio-Inference/infer/modules/ipex/__init__.py.py
+++ /dev/null
@@ -1,165 +0,0 @@
-import os
-import sys
-import contextlib
-import torch
-import intel_extension_for_pytorch as ipex # pylint: disable=import-error, unused-import
-from .hijacks import ipex_hijacks
-from .attention import attention_init
-
-# pylint: disable=protected-access, missing-function-docstring, line-too-long
-
-def ipex_init(): # pylint: disable=too-many-statements
- try:
- #Replace cuda with xpu:
- torch.cuda.current_device = torch.xpu.current_device
- torch.cuda.current_stream = torch.xpu.current_stream
- torch.cuda.device = torch.xpu.device
- torch.cuda.device_count = torch.xpu.device_count
- torch.cuda.device_of = torch.xpu.device_of
- torch.cuda.getDeviceIdListForCard = torch.xpu.getDeviceIdListForCard
- torch.cuda.get_device_name = torch.xpu.get_device_name
- torch.cuda.get_device_properties = torch.xpu.get_device_properties
- torch.cuda.init = torch.xpu.init
- torch.cuda.is_available = torch.xpu.is_available
- torch.cuda.is_initialized = torch.xpu.is_initialized
- torch.cuda.is_current_stream_capturing = lambda: False
- torch.cuda.set_device = torch.xpu.set_device
- torch.cuda.stream = torch.xpu.stream
- torch.cuda.synchronize = torch.xpu.synchronize
- torch.cuda.Event = torch.xpu.Event
- torch.cuda.Stream = torch.xpu.Stream
- torch.cuda.FloatTensor = torch.xpu.FloatTensor
- torch.Tensor.cuda = torch.Tensor.xpu
- torch.Tensor.is_cuda = torch.Tensor.is_xpu
- torch.cuda._initialization_lock = torch.xpu.lazy_init._initialization_lock
- torch.cuda._initialized = torch.xpu.lazy_init._initialized
- torch.cuda._lazy_seed_tracker = torch.xpu.lazy_init._lazy_seed_tracker
- torch.cuda._queued_calls = torch.xpu.lazy_init._queued_calls
- torch.cuda._tls = torch.xpu.lazy_init._tls
- torch.cuda.threading = torch.xpu.lazy_init.threading
- torch.cuda.traceback = torch.xpu.lazy_init.traceback
- torch.cuda.Optional = torch.xpu.Optional
- torch.cuda.__cached__ = torch.xpu.__cached__
- torch.cuda.__loader__ = torch.xpu.__loader__
- torch.cuda.ComplexFloatStorage = torch.xpu.ComplexFloatStorage
- torch.cuda.Tuple = torch.xpu.Tuple
- torch.cuda.streams = torch.xpu.streams
- torch.cuda._lazy_new = torch.xpu._lazy_new
- torch.cuda.FloatStorage = torch.xpu.FloatStorage
- torch.cuda.Any = torch.xpu.Any
- torch.cuda.__doc__ = torch.xpu.__doc__
- torch.cuda.default_generators = torch.xpu.default_generators
- torch.cuda.HalfTensor = torch.xpu.HalfTensor
- torch.cuda._get_device_index = torch.xpu._get_device_index
- torch.cuda.__path__ = torch.xpu.__path__
- torch.cuda.Device = torch.xpu.Device
- torch.cuda.IntTensor = torch.xpu.IntTensor
- torch.cuda.ByteStorage = torch.xpu.ByteStorage
- torch.cuda.set_stream = torch.xpu.set_stream
- torch.cuda.BoolStorage = torch.xpu.BoolStorage
- torch.cuda.os = torch.xpu.os
- torch.cuda.torch = torch.xpu.torch
- torch.cuda.BFloat16Storage = torch.xpu.BFloat16Storage
- torch.cuda.Union = torch.xpu.Union
- torch.cuda.DoubleTensor = torch.xpu.DoubleTensor
- torch.cuda.ShortTensor = torch.xpu.ShortTensor
- torch.cuda.LongTensor = torch.xpu.LongTensor
- torch.cuda.IntStorage = torch.xpu.IntStorage
- torch.cuda.LongStorage = torch.xpu.LongStorage
- torch.cuda.__annotations__ = torch.xpu.__annotations__
- torch.cuda.__package__ = torch.xpu.__package__
- torch.cuda.__builtins__ = torch.xpu.__builtins__
- torch.cuda.CharTensor = torch.xpu.CharTensor
- torch.cuda.List = torch.xpu.List
- torch.cuda._lazy_init = torch.xpu._lazy_init
- torch.cuda.BFloat16Tensor = torch.xpu.BFloat16Tensor
- torch.cuda.DoubleStorage = torch.xpu.DoubleStorage
- torch.cuda.ByteTensor = torch.xpu.ByteTensor
- torch.cuda.StreamContext = torch.xpu.StreamContext
- torch.cuda.ComplexDoubleStorage = torch.xpu.ComplexDoubleStorage
- torch.cuda.ShortStorage = torch.xpu.ShortStorage
- torch.cuda._lazy_call = torch.xpu._lazy_call
- torch.cuda.HalfStorage = torch.xpu.HalfStorage
- torch.cuda.random = torch.xpu.random
- torch.cuda._device = torch.xpu._device
- torch.cuda.classproperty = torch.xpu.classproperty
- torch.cuda.__name__ = torch.xpu.__name__
- torch.cuda._device_t = torch.xpu._device_t
- torch.cuda.warnings = torch.xpu.warnings
- torch.cuda.__spec__ = torch.xpu.__spec__
- torch.cuda.BoolTensor = torch.xpu.BoolTensor
- torch.cuda.CharStorage = torch.xpu.CharStorage
- torch.cuda.__file__ = torch.xpu.__file__
- torch.cuda._is_in_bad_fork = torch.xpu.lazy_init._is_in_bad_fork
- #torch.cuda.is_current_stream_capturing = torch.xpu.is_current_stream_capturing
-
- #Memory:
- torch.cuda.memory = torch.xpu.memory
- if 'linux' in sys.platform and "WSL2" in os.popen("uname -a").read():
- torch.xpu.empty_cache = lambda: None
- torch.cuda.empty_cache = torch.xpu.empty_cache
- torch.cuda.memory_stats = torch.xpu.memory_stats
- torch.cuda.memory_summary = torch.xpu.memory_summary
- torch.cuda.memory_snapshot = torch.xpu.memory_snapshot
- torch.cuda.memory_allocated = torch.xpu.memory_allocated
- torch.cuda.max_memory_allocated = torch.xpu.max_memory_allocated
- torch.cuda.memory_reserved = torch.xpu.memory_reserved
- torch.cuda.memory_cached = torch.xpu.memory_reserved
- torch.cuda.max_memory_reserved = torch.xpu.max_memory_reserved
- torch.cuda.max_memory_cached = torch.xpu.max_memory_reserved
- torch.cuda.reset_peak_memory_stats = torch.xpu.reset_peak_memory_stats
- torch.cuda.reset_max_memory_cached = torch.xpu.reset_peak_memory_stats
- torch.cuda.reset_max_memory_allocated = torch.xpu.reset_peak_memory_stats
- torch.cuda.memory_stats_as_nested_dict = torch.xpu.memory_stats_as_nested_dict
- torch.cuda.reset_accumulated_memory_stats = torch.xpu.reset_accumulated_memory_stats
-
- #RNG:
- torch.cuda.get_rng_state = torch.xpu.get_rng_state
- torch.cuda.get_rng_state_all = torch.xpu.get_rng_state_all
- torch.cuda.set_rng_state = torch.xpu.set_rng_state
- torch.cuda.set_rng_state_all = torch.xpu.set_rng_state_all
- torch.cuda.manual_seed = torch.xpu.manual_seed
- torch.cuda.manual_seed_all = torch.xpu.manual_seed_all
- torch.cuda.seed = torch.xpu.seed
- torch.cuda.seed_all = torch.xpu.seed_all
- torch.cuda.initial_seed = torch.xpu.initial_seed
-
- #AMP:
- torch.cuda.amp = torch.xpu.amp
- if not hasattr(torch.cuda.amp, "common"):
- torch.cuda.amp.common = contextlib.nullcontext()
- torch.cuda.amp.common.amp_definitely_not_available = lambda: False
- try:
- torch.cuda.amp.GradScaler = torch.xpu.amp.GradScaler
- except Exception: # pylint: disable=broad-exception-caught
- try:
- from .gradscaler import gradscaler_init # pylint: disable=import-outside-toplevel, import-error
- gradscaler_init()
- torch.cuda.amp.GradScaler = torch.xpu.amp.GradScaler
- except Exception: # pylint: disable=broad-exception-caught
- torch.cuda.amp.GradScaler = ipex.cpu.autocast._grad_scaler.GradScaler
-
- #C
- torch._C._cuda_getCurrentRawStream = ipex._C._getCurrentStream
- ipex._C._DeviceProperties.major = 2023
- ipex._C._DeviceProperties.minor = 2
-
- #Fix functions with ipex:
- torch.cuda.mem_get_info = lambda device=None: [(torch.xpu.get_device_properties(device).total_memory - torch.xpu.memory_allocated(device)), torch.xpu.get_device_properties(device).total_memory]
- torch._utils._get_available_device_type = lambda: "xpu"
- torch.has_cuda = True
- torch.cuda.has_half = True
- torch.cuda.is_bf16_supported = lambda *args, **kwargs: True
- torch.cuda.is_fp16_supported = lambda *args, **kwargs: True
- torch.version.cuda = "11.7"
- torch.cuda.get_device_capability = lambda *args, **kwargs: [11,7]
- torch.cuda.get_device_properties.major = 11
- torch.cuda.get_device_properties.minor = 7
- torch.cuda.ipc_collect = lambda *args, **kwargs: None
- torch.cuda.utilization = lambda *args, **kwargs: 0
-
- ipex_hijacks()
- attention_init()
- except Exception as e:
- return False, e
- return True, None
\ No newline at end of file
diff --git a/spaces/ExpertPrompters/AskIDF/chat.py b/spaces/ExpertPrompters/AskIDF/chat.py
deleted file mode 100644
index 4eaff63f8f5af3d8af1ceffb7e9238b3a9a8512f..0000000000000000000000000000000000000000
--- a/spaces/ExpertPrompters/AskIDF/chat.py
+++ /dev/null
@@ -1,46 +0,0 @@
-from langchain.llms.base import get_prompts
-from sqlalchemy import label
-import streamlit as st
-from typing import Callable
-
-
-
-RESPONSE_LABEL = 'chat_response'
-PROMPT_LABEL = 'chat_prompt'
-
-class Chat:
-
- def __init__(self):
- if RESPONSE_LABEL not in st.session_state:
- st.session_state[RESPONSE_LABEL] = []
-
- if PROMPT_LABEL not in st.session_state:
- st.session_state[PROMPT_LABEL] = []
-
- def process(self, process_prompt: Callable, *args):
- """
- process_prompt(promt: str, *args) -> tuple(Any, Callable)
- callback to process the chat promt, it takes the promt for input
- and returns a tuple with the response and a render callback
- """
-
- # Render history
- messages = zip(st.session_state[PROMPT_LABEL], st.session_state[RESPONSE_LABEL])
- for prompt, (response, on_render) in list(messages)[::-1]:
- with st.chat_message("user"):
- st.write(prompt)
- with st.chat_message("assistant"):
- on_render(response)
-
- # Compute prompt
- if prompt:= st.chat_input("Ask IDF Anything"):
- st.session_state[PROMPT_LABEL].append(prompt)
- (response, on_render) = process_prompt(prompt, *args)
- st.session_state[RESPONSE_LABEL].append((response, on_render))
-
- with st.chat_message("user"):
- st.write(prompt)
-
- with st.chat_message("assistant"):
- on_render(response)
-
diff --git a/spaces/Faryne/yulet1de-hentaidiffusion/README.md b/spaces/Faryne/yulet1de-hentaidiffusion/README.md
deleted file mode 100644
index ea9e8d2cc2f29e471ab1ba0ecd9e2e133e1e5782..0000000000000000000000000000000000000000
--- a/spaces/Faryne/yulet1de-hentaidiffusion/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Yulet1de Hentaidiffusion
-emoji: 🐨
-colorFrom: red
-colorTo: red
-sdk: gradio
-sdk_version: 3.17.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/FloydianSound/Wlop_Diffusion/app.py b/spaces/FloydianSound/Wlop_Diffusion/app.py
deleted file mode 100644
index 0572417c16d7b79db9f9ff6d5346c09f62d25654..0000000000000000000000000000000000000000
--- a/spaces/FloydianSound/Wlop_Diffusion/app.py
+++ /dev/null
@@ -1,137 +0,0 @@
-from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline, DPMSolverMultistepScheduler
-import gradio as gr
-import torch
-from PIL import Image
-
-model_id = 'FloydianSound/Wlop_Diffusion'
-prefix = 'wlop'
-
-scheduler = DPMSolverMultistepScheduler.from_pretrained(model_id, subfolder="scheduler")
-
-pipe = StableDiffusionPipeline.from_pretrained(
- model_id,
- torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
- scheduler=scheduler)
-
-pipe_i2i = StableDiffusionImg2ImgPipeline.from_pretrained(
- model_id,
- torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
- scheduler=scheduler)
-
-if torch.cuda.is_available():
- pipe = pipe.to("cuda")
- pipe_i2i = pipe_i2i.to("cuda")
-
-def error_str(error, title="Error"):
- return f"""#### {title}
- {error}""" if error else ""
-
-def inference(prompt, guidance, steps, width=512, height=512, seed=0, img=None, strength=0.5, neg_prompt="", auto_prefix=False):
-
- generator = torch.Generator('cuda').manual_seed(seed) if seed != 0 else None
- prompt = f"{prefix} {prompt}" if auto_prefix else prompt
-
- try:
- if img is not None:
- return img_to_img(prompt, neg_prompt, img, strength, guidance, steps, width, height, generator), None
- else:
- return txt_to_img(prompt, neg_prompt, guidance, steps, width, height, generator), None
- except Exception as e:
- return None, error_str(e)
-
-def txt_to_img(prompt, neg_prompt, guidance, steps, width, height, generator):
-
- result = pipe(
- prompt,
- negative_prompt = neg_prompt,
- num_inference_steps = int(steps),
- guidance_scale = guidance,
- width = width,
- height = height,
- generator = generator)
-
- return result.images[0]
-
-def img_to_img(prompt, neg_prompt, img, strength, guidance, steps, width, height, generator):
-
- ratio = min(height / img.height, width / img.width)
- img = img.resize((int(img.width * ratio), int(img.height * ratio)), Image.LANCZOS)
- result = pipe_i2i(
- prompt,
- negative_prompt = neg_prompt,
- init_image = img,
- num_inference_steps = int(steps),
- strength = strength,
- guidance_scale = guidance,
- width = width,
- height = height,
- generator = generator)
-
- return result.images[0]
-
-css = """.main-div div{display:inline-flex;align-items:center;gap:.8rem;font-size:1.75rem}.main-div div h1{font-weight:900;margin-bottom:7px}.main-div p{margin-bottom:10px;font-size:94%}a{text-decoration:underline}.tabs{margin-top:0;margin-bottom:0}#gallery{min-height:20rem}
-"""
-with gr.Blocks(css=css) as demo:
- gr.HTML(
- f"""
-
Wlop Diffusion
-
- {"Add the following tokens to your prompts for the model to work properly: prefix" if prefix else ""}
-
-
-
-
- 🔥 Fsals is a Robust feature selection framework based on causal inference.
- 🤗 Try using fsals in different data sets.!
- """
- article = r"""
- If FSALs is helpful, please help to ⭐ the Github Repo. Thanks!
- [](https://github.com/Justin-12138/bio_if)
-
- ---
-
- 📝 **Citation**
-
- If our work is useful for your research, please consider citing:
- ```bibtex
- @article{zlhl2023,
- author = {Xiaolong Zhou, Zhao Liu, Yuchen Huang, Kun Lin},
- title = {A Novel Ensemble Feature Selection Method for Biomarkers of Alzheimer's disease},
- booktitle = {GUET Publisher},
- year = {2023}
- }
- ```
- 📋 **License**
-
- This project is licensed under GPL License 2.0.
- Redistribution and use for non-commercial purposes should follow this license.
-
- 📧 **Contact**
-
- If you have any questions, please feel free to reach me out at justinliu707@gmail.com.
-
-
- """
- if choicce == "title":
- return title
- elif choicce == "description":
- return description
- elif choicce == "article":
- return article
- elif choicce == 'inputs':
- inputs = [gr.inputs.File(label="Training data"),
- gr.inputs.Radio(['MRMR_FCD', 'MRMR_FCQ', 'CFS', 'Lasso', 'Ensemble', 'CI'], label="method"),
- gr.inputs.Number(label="Num_feature(int)"),
- gr.inputs.Radio(['RF', 'SVM', 'KNN', 'DT', 'Naive Bayes'], label="classifier for CV"),
- gr.inputs.File(label="Testing data")
- ]
- return inputs
- elif choicce == 'outputs':
- output = [gr.Image(label="Index_score"),
- gr.Image(label="IFS_Acc"),
- gr.Image(label="Confusion_matrix"),
- gr.File(label='Index_score.csv')]
- return output
-
-
-def cv(X, y, index_0, clf, n_fold):
- acc = []
- for i in range(len(index_0)):
- # 使用前i个特征进行交叉验证
- selected_features = X[:, [int(j) - 1 for j in index_0[:i + 1]]]
- scores = cross_val_score(clf, selected_features, y, cv=n_fold)
- # 计算平均准确率并添加到acc列表中
- acc.append(scores.mean())
- max_acc = round(max(acc), 4)
- max_index = acc.index(max(acc)) + 1
- return acc, max_acc, max_index
-
-
-def getindex_1(sorted_combined):
- index_1 = []
- index_0 = []
- scores = []
- for indy in sorted_combined:
- index_1.append(str(indy[0] + 1))
- scores.append(indy[1])
- for item in index_1:
- index_0.append(int(item) - 1)
- return index_1, index_0, scores
-
-
-def load_model(X, y, test_samples, test_labels):
- models = SVC(C=1.0, kernel='rbf')
- my_model = MyModel(models)
- my_model.train(X, y)
- # 预测测试样本的标签并计算准确率
- predictions = my_model.predict_samples(test_samples)
- # 计算混淆矩阵
- cm = confusion_matrix(test_labels, predictions)
- return cm
-
-
-def lasso(data, testsample, num_fea_int):
- X, y = load_data(data, True)
- test_samples, test_labels = load_data(testsample, False)
- cl = LassoLarsCV(cv=20, max_iter=80000).fit(X, y)
- importance = np.abs(cl.coef_)
- feature_names = list(X)
- a = len(feature_names)
- idx_features = (-importance).argsort()[:a]
- # name_features = np.array(feature_names)[idx_features]
- result = pd.DataFrame({'index': idx_features, 'Score': importance[idx_features]})
- result_rank = result.sort_values(by='Score', ascending=False, ignore_index=True)
- result_rank.to_csv("index-score.csv")
- inde = result_rank['index'].tolist()
- score = result_rank['Score'].tolist()
- return X, y, inde, score, test_samples, test_labels, num_fea_int
-
-
-def fs(data, method, num_fea_int, clf, testsample):
- num_fea_int = int(num_fea_int)
- if method == 'MRMR_FCD':
- combined, X, y, test_samples, test_labels = MRMR_FCD(data=data, testsample=testsample, num_fea_int=num_fea_int)
- # 使用sorted()函数对合并后的列表进行排序,key参数指定按照分数排序,reverse=True表示降序排序
- sorted_combined = sorted(combined, key=lambda x: x[1], reverse=True)
- index_score_csv(sorted_combined=sorted_combined, filename='ab.csv')
- index_1, index_0, scores = getindex_1(sorted_combined=sorted_combined)
- # 画score.png
- isplot(1, 24, 10,
- title_gr=str(method), x=index_1, y=scores,
- xlabbel="index", ylabel="scores", filename="index-score.png")
- # 选择分类器
- clf = setclf(clf)
- acc, max_acc, max_index = cv(X=X, y=y, index_0=index_0, clf=clf, n_fold=10)
- # 画acc.png
- ifsplot(2, 24, 10,
- title_gr=str(method), max_index=max_index, max_acc=max_acc,
- acc=acc, xlabbel="top n features", ylabel="acc", filename="acc.png")
- cm = load_model(X=X, y=y, test_samples=test_samples, test_labels=test_labels)
- cmplot(3, 24, 10, cm=cm,
- xlabbel="predicted labels", ylabel="true labels", filename='confusion_matrix.png')
- return 'index-score.png', 'acc.png', "confusion_matrix.png", "ab.csv"
-
- elif method == 'MRMR_FCQ':
- combined, X, y, test_samples, test_labels = MRMR_FCQ(data=data, testsample=testsample, num_fea_int=num_fea_int)
- # 使用sorted()函数对合并后的列表进行排序,key参数指定按照分数排序,reverse=True表示降序排序
- sorted_combined = sorted(combined, key=lambda x: x[1], reverse=True)
- index_score_csv(sorted_combined=sorted_combined, filename='ab.csv')
- # inde index start 1
- index_1, index_0, scores = getindex_1(sorted_combined=sorted_combined)
- # index-score.png
- isplot(1, 24, 10, title_gr=str(method), x=index_1, y=scores,
- xlabbel="index", ylabel="scores", filename="index-score.png")
- # 选择分类器
- clf = setclf(clf)
- acc, max_acc, max_index = cv(X=X, y=y, index_0=index_0, clf=clf, n_fold=5)
- # acc.png
- ifsplot(2, 24, 10, title_gr=str(method), max_index=max_index,
- max_acc=max_acc, acc=acc, xlabbel="top n features", ylabel="acc",
- filename="acc.png")
- # cal cm
- cm = load_model(X=X, y=y, test_samples=test_samples, test_labels=test_labels)
- cmplot(3, 24, 10,
- cm=cm, xlabbel="predicted labels", ylabel="true labels", filename='confusion_matrix.png')
- return 'index-score.png', 'acc.png', "confusion_matrix.png", "ab.csv"
-
- elif method == 'Lasso':
- X, y, inde, score, test_samples, test_labels, num_fea_int = lasso(data, testsample, num_fea_int)
- index = []
- for i in inde:
- index.append(str(i))
- plt.figure(1, figsize=(24, 12))
- plt.title(str(method))
- plt.plot(index[:num_fea_int], score[:num_fea_int])
-
- # 设置x轴和y轴的标签
- plt.xlabel('Feature Index', fontsize=40)
- plt.ylabel('Feature Score', fontsize=40)
- plt.savefig('Index_Score.png')
- clf = setclf(clf)
-
- inde = inde[:num_fea_int]
- X = X.values
- acc, max_acc, max_index = cv(X=X, y=y, index_0=inde, clf=clf, n_fold=5)
- ifsplot(2, 24, 10, title_gr=str(method), max_index=max_index,
- max_acc=max_acc, acc=acc, xlabbel="top n features", ylabel="acc",
- filename="acc.png")
-
- cm = load_model(X=X, y=y, test_samples=test_samples, test_labels=test_labels)
- cmplot(3, 24, 10,
- cm=cm, xlabbel="predicted labels", ylabel="true labels", filename='confusion_matrix.png')
-
- return 'Index_Score.png', 'acc.png', "confusion_matrix.png", 'index-score.csv'
-
- elif method == 'CFS':
- pass
diff --git a/spaces/KalbeDigitalLab/ham1000-skin-classification/style.css b/spaces/KalbeDigitalLab/ham1000-skin-classification/style.css
deleted file mode 100644
index d48308e0d57a6e0d127c20ae0790c9ff302a0add..0000000000000000000000000000000000000000
--- a/spaces/KalbeDigitalLab/ham1000-skin-classification/style.css
+++ /dev/null
@@ -1,83 +0,0 @@
-* {
- box-sizing: border-box;
-}
-
-body {
- font-family: 'Source Sans Pro', sans-serif;
- font-size: 16px;
-}
-
-.container {
- width: 100%;
- margin: 0 auto;
-}
-
-.title {
- font-size: 24px !important;
- font-weight: 600 !important;
- letter-spacing: 0em;
- text-align: center;
- color: #374159 !important;
-}
-
-.subtitle {
- font-size: 24px !important;
- font-style: italic;
- font-weight: 400 !important;
- letter-spacing: 0em;
- text-align: center;
- color: #1d652a !important;
- padding-bottom: 0.5em;
-}
-
-.overview-heading {
- font-size: 24px !important;
- font-weight: 600 !important;
- letter-spacing: 0em;
- text-align: left;
-}
-
-.overview-content {
- font-size: 14px !important;
- font-weight: 400 !important;
- line-height: 30px !important;
- letter-spacing: 0em;
- text-align: left;
-}
-
-.content-image {
- width: 100% !important;
- height: auto !important;
-}
-
-.vl {
- border-left: 5px solid #1d652a;
- padding-left: 20px;
- color: #1d652a !important;
-}
-
-.grid-container {
- display: grid;
- grid-template-columns: 1fr 2fr;
- gap: 20px;
- align-items: flex-start;
- margin-bottom: 0.7em;
-}
-
-.grid-container:nth-child(2) {
- align-items: center;
-}
-
-@media screen and (max-width: 768px) {
- .container {
- width: 90%;
- }
-
- .grid-container {
- display: block;
- }
-
- .overview-heading {
- font-size: 18px !important;
- }
-}
\ No newline at end of file
diff --git a/spaces/KarmKarma/rvc-models-genshinimpact/infer_pack/transforms.py b/spaces/KarmKarma/rvc-models-genshinimpact/infer_pack/transforms.py
deleted file mode 100644
index a11f799e023864ff7082c1f49c0cc18351a13b47..0000000000000000000000000000000000000000
--- a/spaces/KarmKarma/rvc-models-genshinimpact/infer_pack/transforms.py
+++ /dev/null
@@ -1,209 +0,0 @@
-import torch
-from torch.nn import functional as F
-
-import numpy as np
-
-
-DEFAULT_MIN_BIN_WIDTH = 1e-3
-DEFAULT_MIN_BIN_HEIGHT = 1e-3
-DEFAULT_MIN_DERIVATIVE = 1e-3
-
-
-def piecewise_rational_quadratic_transform(
- inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails=None,
- tail_bound=1.0,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE,
-):
- if tails is None:
- spline_fn = rational_quadratic_spline
- spline_kwargs = {}
- else:
- spline_fn = unconstrained_rational_quadratic_spline
- spline_kwargs = {"tails": tails, "tail_bound": tail_bound}
-
- outputs, logabsdet = spline_fn(
- inputs=inputs,
- unnormalized_widths=unnormalized_widths,
- unnormalized_heights=unnormalized_heights,
- unnormalized_derivatives=unnormalized_derivatives,
- inverse=inverse,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative,
- **spline_kwargs
- )
- return outputs, logabsdet
-
-
-def searchsorted(bin_locations, inputs, eps=1e-6):
- bin_locations[..., -1] += eps
- return torch.sum(inputs[..., None] >= bin_locations, dim=-1) - 1
-
-
-def unconstrained_rational_quadratic_spline(
- inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails="linear",
- tail_bound=1.0,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE,
-):
- inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound)
- outside_interval_mask = ~inside_interval_mask
-
- outputs = torch.zeros_like(inputs)
- logabsdet = torch.zeros_like(inputs)
-
- if tails == "linear":
- unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1))
- constant = np.log(np.exp(1 - min_derivative) - 1)
- unnormalized_derivatives[..., 0] = constant
- unnormalized_derivatives[..., -1] = constant
-
- outputs[outside_interval_mask] = inputs[outside_interval_mask]
- logabsdet[outside_interval_mask] = 0
- else:
- raise RuntimeError("{} tails are not implemented.".format(tails))
-
- (
- outputs[inside_interval_mask],
- logabsdet[inside_interval_mask],
- ) = rational_quadratic_spline(
- inputs=inputs[inside_interval_mask],
- unnormalized_widths=unnormalized_widths[inside_interval_mask, :],
- unnormalized_heights=unnormalized_heights[inside_interval_mask, :],
- unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :],
- inverse=inverse,
- left=-tail_bound,
- right=tail_bound,
- bottom=-tail_bound,
- top=tail_bound,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative,
- )
-
- return outputs, logabsdet
-
-
-def rational_quadratic_spline(
- inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- left=0.0,
- right=1.0,
- bottom=0.0,
- top=1.0,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE,
-):
- if torch.min(inputs) < left or torch.max(inputs) > right:
- raise ValueError("Input to a transform is not within its domain")
-
- num_bins = unnormalized_widths.shape[-1]
-
- if min_bin_width * num_bins > 1.0:
- raise ValueError("Minimal bin width too large for the number of bins")
- if min_bin_height * num_bins > 1.0:
- raise ValueError("Minimal bin height too large for the number of bins")
-
- widths = F.softmax(unnormalized_widths, dim=-1)
- widths = min_bin_width + (1 - min_bin_width * num_bins) * widths
- cumwidths = torch.cumsum(widths, dim=-1)
- cumwidths = F.pad(cumwidths, pad=(1, 0), mode="constant", value=0.0)
- cumwidths = (right - left) * cumwidths + left
- cumwidths[..., 0] = left
- cumwidths[..., -1] = right
- widths = cumwidths[..., 1:] - cumwidths[..., :-1]
-
- derivatives = min_derivative + F.softplus(unnormalized_derivatives)
-
- heights = F.softmax(unnormalized_heights, dim=-1)
- heights = min_bin_height + (1 - min_bin_height * num_bins) * heights
- cumheights = torch.cumsum(heights, dim=-1)
- cumheights = F.pad(cumheights, pad=(1, 0), mode="constant", value=0.0)
- cumheights = (top - bottom) * cumheights + bottom
- cumheights[..., 0] = bottom
- cumheights[..., -1] = top
- heights = cumheights[..., 1:] - cumheights[..., :-1]
-
- if inverse:
- bin_idx = searchsorted(cumheights, inputs)[..., None]
- else:
- bin_idx = searchsorted(cumwidths, inputs)[..., None]
-
- input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0]
- input_bin_widths = widths.gather(-1, bin_idx)[..., 0]
-
- input_cumheights = cumheights.gather(-1, bin_idx)[..., 0]
- delta = heights / widths
- input_delta = delta.gather(-1, bin_idx)[..., 0]
-
- input_derivatives = derivatives.gather(-1, bin_idx)[..., 0]
- input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0]
-
- input_heights = heights.gather(-1, bin_idx)[..., 0]
-
- if inverse:
- a = (inputs - input_cumheights) * (
- input_derivatives + input_derivatives_plus_one - 2 * input_delta
- ) + input_heights * (input_delta - input_derivatives)
- b = input_heights * input_derivatives - (inputs - input_cumheights) * (
- input_derivatives + input_derivatives_plus_one - 2 * input_delta
- )
- c = -input_delta * (inputs - input_cumheights)
-
- discriminant = b.pow(2) - 4 * a * c
- assert (discriminant >= 0).all()
-
- root = (2 * c) / (-b - torch.sqrt(discriminant))
- outputs = root * input_bin_widths + input_cumwidths
-
- theta_one_minus_theta = root * (1 - root)
- denominator = input_delta + (
- (input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta
- )
- derivative_numerator = input_delta.pow(2) * (
- input_derivatives_plus_one * root.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - root).pow(2)
- )
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, -logabsdet
- else:
- theta = (inputs - input_cumwidths) / input_bin_widths
- theta_one_minus_theta = theta * (1 - theta)
-
- numerator = input_heights * (
- input_delta * theta.pow(2) + input_derivatives * theta_one_minus_theta
- )
- denominator = input_delta + (
- (input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta
- )
- outputs = input_cumheights + numerator / denominator
-
- derivative_numerator = input_delta.pow(2) * (
- input_derivatives_plus_one * theta.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - theta).pow(2)
- )
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, logabsdet
diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/encoder/__init__.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/encoder/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg_extractor/encoder/layer_norm.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg_extractor/encoder/layer_norm.py
deleted file mode 100644
index db8be30ff70554edb179109037665e51c04510ec..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg_extractor/encoder/layer_norm.py
+++ /dev/null
@@ -1,33 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding: utf-8 -*-
-
-# Copyright 2019 Shigeki Karita
-# Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0)
-
-"""Layer normalization module."""
-
-import torch
-
-
-class LayerNorm(torch.nn.LayerNorm):
- """Layer normalization module.
-
- :param int nout: output dim size
- :param int dim: dimension to be normalized
- """
-
- def __init__(self, nout, dim=-1):
- """Construct an LayerNorm object."""
- super(LayerNorm, self).__init__(nout, eps=1e-12)
- self.dim = dim
-
- def forward(self, x):
- """Apply layer normalization.
-
- :param torch.Tensor x: input tensor
- :return: layer normalized tensor
- :rtype torch.Tensor
- """
- if self.dim == -1:
- return super(LayerNorm, self).forward(x)
- return super(LayerNorm, self).forward(x.transpose(1, -1)).transpose(1, -1)
diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/vocoder_train.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/vocoder_train.py
deleted file mode 100644
index f618ee00d8f774ecf821b9714932acc7e99aa5d5..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/vocoder_train.py
+++ /dev/null
@@ -1,92 +0,0 @@
-from utils.argutils import print_args
-from vocoder.wavernn.train import train
-from vocoder.hifigan.train import train as train_hifigan
-from vocoder.fregan.train import train as train_fregan
-from utils.util import AttrDict
-from pathlib import Path
-import argparse
-import json
-import torch
-import torch.multiprocessing as mp
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser(
- description="Trains the vocoder from the synthesizer audios and the GTA synthesized mels, "
- "or ground truth mels.",
- formatter_class=argparse.ArgumentDefaultsHelpFormatter
- )
-
- parser.add_argument("run_id", type=str, help= \
- "Name for this model instance. If a model state from the same run ID was previously "
- "saved, the training will restart from there. Pass -f to overwrite saved states and "
- "restart from scratch.")
- parser.add_argument("datasets_root", type=str, help= \
- "Path to the directory containing your SV2TTS directory. Specifying --syn_dir or --voc_dir "
- "will take priority over this argument.")
- parser.add_argument("vocoder_type", type=str, default="wavernn", help= \
- "Choose the vocoder type for train. Defaults to wavernn"
- "Now, Support
-
-
-
-
-#### RetinaNet:
-
-
-Name
-lr
-
schedtrain
-
time
(s/iter)inference
-
time
(s/im)train
-
mem
(GB)box
-
APmodel id
-download
-
-
-
-
- R50-C4
-1x
-0.551
-0.102
-4.8
-35.7
-137257644
-model | metrics
-
-
- R50-DC5
-1x
-0.380
-0.068
-5.0
-37.3
-137847829
-model | metrics
-
-
- R50-FPN
-1x
-0.210
-0.038
-3.0
-37.9
-137257794
-model | metrics
-
-
- R50-C4
-3x
-0.543
-0.104
-4.8
-38.4
-137849393
-model | metrics
-
-
- R50-DC5
-3x
-0.378
-0.070
-5.0
-39.0
-137849425
-model | metrics
-
-
- R50-FPN
-3x
-0.209
-0.038
-3.0
-40.2
-137849458
-model | metrics
-
-
- R101-C4
-3x
-0.619
-0.139
-5.9
-41.1
-138204752
-model | metrics
-
-
- R101-DC5
-3x
-0.452
-0.086
-6.1
-40.6
-138204841
-model | metrics
-
-
- R101-FPN
-3x
-0.286
-0.051
-4.1
-42.0
-137851257
-model | metrics
-
-X101-FPN
-3x
-0.638
-0.098
-6.7
-43.0
-139173657
-model | metrics
-
-
-
-
-
-
-#### RPN & Fast R-CNN:
-
-
-Name
-lr
-
schedtrain
-
time
(s/iter)inference
-
time
(s/im)train
-
mem
(GB)box
-
APmodel id
-download
-
-
-
-
- R50
-1x
-0.205
-0.041
-4.1
-37.4
-190397773
-model | metrics
-
-
- R50
-3x
-0.205
-0.041
-4.1
-38.7
-190397829
-model | metrics
-
-R101
-3x
-0.291
-0.054
-5.2
-40.4
-190397697
-model | metrics
-
-
-
-
-
-### COCO Instance Segmentation Baselines with Mask R-CNN
-
-
-
-
-Name
-lr
-
schedtrain
-
time
(s/iter)inference
-
time
(s/im)train
-
mem
(GB)box
-
APprop.
-
ARmodel id
-download
-
-
-
-
- RPN R50-C4
-1x
-0.130
-0.034
-1.5
-
- 51.6
-137258005
-model | metrics
-
-
- RPN R50-FPN
-1x
-0.186
-0.032
-2.7
-
- 58.0
-137258492
-model | metrics
-
-Fast R-CNN R50-FPN
-1x
-0.140
-0.029
-2.6
-37.8
-
- 137635226
-model | metrics
-
-
-
-
-
-
-
-#### New baselines using Large-Scale Jitter and Longer Training Schedule
-
-The following baselines of COCO Instance Segmentation with Mask R-CNN are generated
-using a longer training schedule and large-scale jitter as described in Google's
-[Simple Copy-Paste Data Augmentation](https://arxiv.org/pdf/2012.07177.pdf) paper. These
-models are trained from scratch using random initialization. These baselines exceed the
-previous Mask R-CNN baselines.
-
-In the following table, one epoch consists of training on 118000 COCO images.
-
-Name
-lr
-
schedtrain
-
time
(s/iter)inference
-
time
(s/im)train
-
mem
(GB)box
-
APmask
-
APmodel id
-download
-
-
-
-
- R50-C4
-1x
-0.584
-0.110
-5.2
-36.8
-32.2
-137259246
-model | metrics
-
-
- R50-DC5
-1x
-0.471
-0.076
-6.5
-38.3
-34.2
-137260150
-model | metrics
-
-
- R50-FPN
-1x
-0.261
-0.043
-3.4
-38.6
-35.2
-137260431
-model | metrics
-
-
- R50-C4
-3x
-0.575
-0.111
-5.2
-39.8
-34.4
-137849525
-model | metrics
-
-
- R50-DC5
-3x
-0.470
-0.076
-6.5
-40.0
-35.9
-137849551
-model | metrics
-
-
- R50-FPN
-3x
-0.261
-0.043
-3.4
-41.0
-37.2
-137849600
-model | metrics
-
-
- R101-C4
-3x
-0.652
-0.145
-6.3
-42.6
-36.7
-138363239
-model | metrics
-
-
- R101-DC5
-3x
-0.545
-0.092
-7.6
-41.9
-37.3
-138363294
-model | metrics
-
-
- R101-FPN
-3x
-0.340
-0.056
-4.6
-42.9
-38.6
-138205316
-model | metrics
-
-X101-FPN
-3x
-0.690
-0.103
-7.2
-44.3
-39.5
-139653917
-model | metrics
-
-
-
-
-
-### COCO Person Keypoint Detection Baselines with Keypoint R-CNN
-
-
-
-Name
-epochs
-train
-
time
(s/im)inference
-
time
(s/im)box
-
APmask
-
APmodel id
-download
-
-
-
-
- R50-FPN
-100
-0.376
-0.069
-44.6
-40.3
-42047764
-model | metrics
-
-
- R50-FPN
-200
-0.376
-0.069
-46.3
-41.7
-42047638
-model | metrics
-
-
- R50-FPN
-400
-0.376
-0.069
-47.4
-42.5
-42019571
-model | metrics
-
-
- R101-FPN
-100
-0.518
-0.073
-46.4
-41.6
-42025812
-model | metrics
-
-
- R101-FPN
-200
-0.518
-0.073
-48.0
-43.1
-42131867
-model | metrics
-
-
- R101-FPN
-400
-0.518
-0.073
-48.9
-43.7
-42073830
-model | metrics
-
-
- regnetx_4gf_dds_FPN
-100
-0.474
-0.071
-46.0
-41.3
-42047771
-model | metrics
-
-
- regnetx_4gf_dds_FPN
-200
-0.474
-0.071
-48.1
-43.1
-42132721
-model | metrics
-
-
- regnetx_4gf_dds_FPN
-400
-0.474
-0.071
-48.6
-43.5
-42025447
-model | metrics
-
-
- regnety_4gf_dds_FPN
-100
-0.487
-0.073
-46.1
-41.6
-42047784
-model | metrics
-
-
- regnety_4gf_dds_FPN
-200
-0.487
-0.072
-47.8
-43.0
-42047642
-model | metrics
-
-regnety_4gf_dds_FPN
-400
-0.487
-0.072
-48.2
-43.3
-42045954
-model | metrics
-
-
-
-
-
-### COCO Panoptic Segmentation Baselines with Panoptic FPN
-
-
-
-Name
-lr
-
schedtrain
-
time
(s/iter)inference
-
time
(s/im)train
-
mem
(GB)box
-
APkp.
-
APmodel id
-download
-
-
-
-
- R50-FPN
-1x
-0.315
-0.072
-5.0
-53.6
-64.0
-137261548
-model | metrics
-
-
- R50-FPN
-3x
-0.316
-0.066
-5.0
-55.4
-65.5
-137849621
-model | metrics
-
-
- R101-FPN
-3x
-0.390
-0.076
-6.1
-56.4
-66.1
-138363331
-model | metrics
-
-X101-FPN
-3x
-0.738
-0.121
-8.7
-57.3
-66.0
-139686956
-model | metrics
-
-
-
-
-
-
-### LVIS Instance Segmentation Baselines with Mask R-CNN
-
-Mask R-CNN baselines on the [LVIS dataset](https://lvisdataset.org), v0.5.
-These baselines are described in Table 3(c) of the [LVIS paper](https://arxiv.org/abs/1908.03195).
-
-NOTE: the 1x schedule here has the same amount of __iterations__ as the COCO 1x baselines.
-They are roughly 24 epochs of LVISv0.5 data.
-The final results of these configs have large variance across different runs.
-
-
-
-
-Name
-lr
-
schedtrain
-
time
(s/iter)inference
-
time
(s/im)train
-
mem
(GB)box
-
APmask
-
APPQ
-model id
-download
-
-
-
-
- R50-FPN
-1x
-0.304
-0.053
-4.8
-37.6
-34.7
-39.4
-139514544
-model | metrics
-
-
- R50-FPN
-3x
-0.302
-0.053
-4.8
-40.0
-36.5
-41.5
-139514569
-model | metrics
-
-R101-FPN
-3x
-0.392
-0.066
-6.0
-42.4
-38.5
-43.0
-139514519
-model | metrics
-
-
-
-
-
-
-
-### Cityscapes & Pascal VOC Baselines
-
-Simple baselines for
-* Mask R-CNN on Cityscapes instance segmentation (initialized from COCO pre-training, then trained on Cityscapes fine annotations only)
-* Faster R-CNN on PASCAL VOC object detection (trained on VOC 2007 train+val + VOC 2012 train+val, tested on VOC 2007 using 11-point interpolated AP)
-
-
-
-
-Name
-lr
-
schedtrain
-
time
(s/iter)inference
-
time
(s/im)train
-
mem
(GB)box
-
APmask
-
APmodel id
-download
-
-
-
-
- R50-FPN
-1x
-0.292
-0.107
-7.1
-23.6
-24.4
-144219072
-model | metrics
-
-
- R101-FPN
-1x
-0.371
-0.114
-7.8
-25.6
-25.9
-144219035
-model | metrics
-
-X101-FPN
-1x
-0.712
-0.151
-10.2
-26.7
-27.1
-144219108
-model | metrics
-
-
-
-
-
-
-
-### Other Settings
-
-Ablations for Deformable Conv and Cascade R-CNN:
-
-
-
-
-Name
-train
-
time
(s/iter)inference
-
time
(s/im)train
-
mem
(GB)box
-
APbox
-
AP50mask
-
APmodel id
-download
-
-
-
-
- R50-FPN, Cityscapes
-0.240
-0.078
-4.4
-
-
- 36.5
-142423278
-model | metrics
-
-R50-C4, VOC
-0.537
-0.081
-4.8
-51.9
-80.3
-
- 142202221
-model | metrics
-
-
-
-
-
-
-Ablations for normalization methods, and a few models trained from scratch following [Rethinking ImageNet Pre-training](https://arxiv.org/abs/1811.08883).
-(Note: The baseline uses `2fc` head while the others use [`4conv1fc` head](https://arxiv.org/abs/1803.08494))
-
-
-
-Name
-lr
-
schedtrain
-
time
(s/iter)inference
-
time
(s/im)train
-
mem
(GB)box
-
APmask
-
APmodel id
-download
-
-
-
-
- Baseline R50-FPN
-1x
-0.261
-0.043
-3.4
-38.6
-35.2
-137260431
-model | metrics
-
-
- Deformable Conv
-1x
-0.342
-0.048
-3.5
-41.5
-37.5
-138602867
-model | metrics
-
-
- Cascade R-CNN
-1x
-0.317
-0.052
-4.0
-42.1
-36.4
-138602847
-model | metrics
-
-
- Baseline R50-FPN
-3x
-0.261
-0.043
-3.4
-41.0
-37.2
-137849600
-model | metrics
-
-
- Deformable Conv
-3x
-0.349
-0.047
-3.5
-42.7
-38.5
-144998336
-model | metrics
-
-Cascade R-CNN
-3x
-0.328
-0.053
-4.0
-44.3
-38.5
-144998488
-model | metrics
-
-
-
-
-
-
-A few very large models trained for a long time, for demo purposes. They are trained using multiple machines:
-
-
-
-
-Name
-lr
-
schedtrain
-
time
(s/iter)inference
-
time
(s/im)train
-
mem
(GB)box
-
APmask
-
APmodel id
-download
-
-
-
-
- Baseline R50-FPN
-3x
-0.261
-0.043
-3.4
-41.0
-37.2
-137849600
-model | metrics
-
-
- GN
-3x
-0.309
-0.060
-5.6
-42.6
-38.6
-138602888
-model | metrics
-
-
- SyncBN
-3x
-0.345
-0.053
-5.5
-41.9
-37.8
-169527823
-model | metrics
-
-
- GN (from scratch)
-3x
-0.338
-0.061
-7.2
-39.9
-36.6
-138602908
-model | metrics
-
-
- GN (from scratch)
-9x
-N/A
-0.061
-7.2
-43.7
-39.6
-183808979
-model | metrics
-
-SyncBN (from scratch)
-9x
-N/A
-0.055
-7.2
-43.6
-39.3
-184226666
-model | metrics
-
-
-
-
diff --git a/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/evaluation/evaluator.py b/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/evaluation/evaluator.py
deleted file mode 100644
index 7d0848c7ec511f7000f4230c914a8b32f690dee0..0000000000000000000000000000000000000000
--- a/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/evaluation/evaluator.py
+++ /dev/null
@@ -1,228 +0,0 @@
-# ------------------------------------------------------------------------------
-# Reference: https://github.com/facebookresearch/detectron2/blob/main/detectron2/evaluation/evaluator.py
-# Modified by Jitesh Jain (https://github.com/praeclarumjj3)
-# ------------------------------------------------------------------------------
-
-import datetime
-import logging
-import time
-from collections import OrderedDict, abc
-from contextlib import ExitStack, contextmanager
-from typing import List, Union
-import torch
-from torch import nn
-
-from detectron2.utils.comm import get_world_size, is_main_process
-from detectron2.utils.logger import log_every_n_seconds
-
-
-class DatasetEvaluator:
- """
- Base class for a dataset evaluator.
-
- The function :func:`inference_on_dataset` runs the model over
- all samples in the dataset, and have a DatasetEvaluator to process the inputs/outputs.
-
- This class will accumulate information of the inputs/outputs (by :meth:`process`),
- and produce evaluation results in the end (by :meth:`evaluate`).
- """
-
- def reset(self):
- """
- Preparation for a new round of evaluation.
- Should be called before starting a round of evaluation.
- """
- pass
-
- def process(self, inputs, outputs):
- """
- Process the pair of inputs and outputs.
- If they contain batches, the pairs can be consumed one-by-one using `zip`:
-
- .. code-block:: python
-
- for input_, output in zip(inputs, outputs):
- # do evaluation on single input/output pair
- ...
-
- Args:
- inputs (list): the inputs that's used to call the model.
- outputs (list): the return value of `model(inputs)`
- """
- pass
-
- def evaluate(self):
- """
- Evaluate/summarize the performance, after processing all input/output pairs.
-
- Returns:
- dict:
- A new evaluator class can return a dict of arbitrary format
- as long as the user can process the results.
- In our train_net.py, we expect the following format:
-
- * key: the name of the task (e.g., bbox)
- * value: a dict of {metric name: score}, e.g.: {"AP50": 80}
- """
- pass
-
-
-class DatasetEvaluators(DatasetEvaluator):
- """
- Wrapper class to combine multiple :class:`DatasetEvaluator` instances.
-
- This class dispatches every evaluation call to
- all of its :class:`DatasetEvaluator`.
- """
-
- def __init__(self, evaluators):
- """
- Args:
- evaluators (list): the evaluators to combine.
- """
- super().__init__()
- self._evaluators = evaluators
-
- def reset(self):
- for evaluator in self._evaluators:
- evaluator.reset()
-
- def process(self, inputs, outputs):
- for evaluator in self._evaluators:
- evaluator.process(inputs, outputs)
-
- def evaluate(self):
- results = OrderedDict()
- for evaluator in self._evaluators:
- result = evaluator.evaluate()
- if is_main_process() and result is not None:
- for k, v in result.items():
- assert (
- k not in results
- ), "Different evaluators produce results with the same key {}".format(k)
- results[k] = v
- return results
-
-
-def inference_on_dataset(
- model, data_loader, evaluator: Union[DatasetEvaluator, List[DatasetEvaluator], None]
-):
- """
- Run model on the data_loader and evaluate the metrics with evaluator.
- Also benchmark the inference speed of `model.__call__` accurately.
- The model will be used in eval mode.
-
- Args:
- model (callable): a callable which takes an object from
- `data_loader` and returns some outputs.
-
- If it's an nn.Module, it will be temporarily set to `eval` mode.
- If you wish to evaluate a model in `training` mode instead, you can
- wrap the given model and override its behavior of `.eval()` and `.train()`.
- data_loader: an iterable object with a length.
- The elements it generates will be the inputs to the model.
- evaluator: the evaluator(s) to run. Use `None` if you only want to benchmark,
- but don't want to do any evaluation.
-
- Returns:
- The return value of `evaluator.evaluate()`
- """
- num_devices = get_world_size()
- logger = logging.getLogger(__name__)
- logger.info("Start inference on {} batches".format(len(data_loader)))
-
- total = len(data_loader) # inference data loader must have a fixed length
- if evaluator is None:
- # create a no-op evaluator
- evaluator = DatasetEvaluators([])
- if isinstance(evaluator, abc.MutableSequence):
- evaluator = DatasetEvaluators(evaluator)
- evaluator.reset()
-
- num_warmup = min(5, total - 1)
- start_time = time.perf_counter()
- total_data_time = 0
- total_compute_time = 0
- total_eval_time = 0
- with ExitStack() as stack:
- if isinstance(model, nn.Module):
- stack.enter_context(inference_context(model))
- stack.enter_context(torch.no_grad())
-
- start_data_time = time.perf_counter()
- for idx, inputs in enumerate(data_loader):
- total_data_time += time.perf_counter() - start_data_time
- if idx == num_warmup:
- start_time = time.perf_counter()
- total_data_time = 0
- total_compute_time = 0
- total_eval_time = 0
-
- start_compute_time = time.perf_counter()
- outputs = model(inputs)
- if torch.cuda.is_available():
- torch.cuda.synchronize()
- total_compute_time += time.perf_counter() - start_compute_time
-
- start_eval_time = time.perf_counter()
- evaluator.process(inputs, outputs)
- total_eval_time += time.perf_counter() - start_eval_time
-
- iters_after_start = idx + 1 - num_warmup * int(idx >= num_warmup)
- data_seconds_per_iter = total_data_time / iters_after_start
- compute_seconds_per_iter = total_compute_time / iters_after_start
- eval_seconds_per_iter = total_eval_time / iters_after_start
- total_seconds_per_iter = (time.perf_counter() - start_time) / iters_after_start
- if idx >= num_warmup * 2 or compute_seconds_per_iter > 5:
- eta = datetime.timedelta(seconds=int(total_seconds_per_iter * (total - idx - 1)))
- log_every_n_seconds(
- logging.INFO,
- (
- f"Inference done {idx + 1}/{total}. "
- f"Dataloading: {data_seconds_per_iter:.4f} s/iter. "
- f"Inference: {compute_seconds_per_iter:.4f} s/iter. "
- f"Eval: {eval_seconds_per_iter:.4f} s/iter. "
- f"Total: {total_seconds_per_iter:.4f} s/iter. "
- f"ETA={eta}"
- ),
- n=5,
- )
- start_data_time = time.perf_counter()
-
- # Measure the time only for this worker (before the synchronization barrier)
- total_time = time.perf_counter() - start_time
- total_time_str = str(datetime.timedelta(seconds=total_time))
- # NOTE this format is parsed by grep
- logger.info(
- "Total inference time: {} ({:.6f} s / iter per device, on {} devices)".format(
- total_time_str, total_time / (total - num_warmup), num_devices
- )
- )
- total_compute_time_str = str(datetime.timedelta(seconds=int(total_compute_time)))
- logger.info(
- "Total inference pure compute time: {} ({:.6f} s / iter per device, on {} devices)".format(
- total_compute_time_str, total_compute_time / (total - num_warmup), num_devices
- )
- )
-
- results = evaluator.evaluate()
- # An evaluator may return None when not in main process.
- # Replace it by an empty dict instead to make it easier for downstream code to handle
- if results is None:
- results = {}
- return results
-
-
-@contextmanager
-def inference_context(model):
- """
- A context where the model is temporarily changed to eval mode,
- and restored to previous mode afterwards.
-
- Args:
- model: a torch Module
- """
- training_mode = model.training
- model.eval()
- yield
- model.train(training_mode)
diff --git a/spaces/PSLD/PSLD/stable-diffusion/ldm/modules/image_degradation/__init__.py b/spaces/PSLD/PSLD/stable-diffusion/ldm/modules/image_degradation/__init__.py
deleted file mode 100644
index 7836cada81f90ded99c58d5942eea4c3477f58fc..0000000000000000000000000000000000000000
--- a/spaces/PSLD/PSLD/stable-diffusion/ldm/modules/image_degradation/__init__.py
+++ /dev/null
@@ -1,2 +0,0 @@
-from ldm.modules.image_degradation.bsrgan import degradation_bsrgan_variant as degradation_fn_bsr
-from ldm.modules.image_degradation.bsrgan_light import degradation_bsrgan_variant as degradation_fn_bsr_light
diff --git a/spaces/PascalNotin/Tranception_design/tranception/utils/msa_utils.py b/spaces/PascalNotin/Tranception_design/tranception/utils/msa_utils.py
deleted file mode 100644
index 11ec15b5dc7dd149c6deaa820f32549e535f20a8..0000000000000000000000000000000000000000
--- a/spaces/PascalNotin/Tranception_design/tranception/utils/msa_utils.py
+++ /dev/null
@@ -1,361 +0,0 @@
-import numpy as np
-import pandas as pd
-from collections import defaultdict
-import random
-import os
-import torch
-from Bio.Align.Applications import ClustalOmegaCommandline
-
-def filter_msa(msa_data, num_sequences_kept=3):
- """
- Helper function to filter an input MSA msa_data (obtained via process_msa_data) and keep only num_sequences_kept aligned sequences.
- If the MSA already has fewer sequences than num_sequences_kept, we keep the MSA as is.
- If filtering, we always keep the first sequence of the MSA (ie. the wild type) by default.
- Sampling is done without replacement.
- """
- if len(list(msa_data.keys())) <= num_sequences_kept:
- return msa_data
- filtered_msa = {}
- wt_name = next(iter(msa_data))
- filtered_msa[wt_name] = msa_data[wt_name]
- del msa_data[wt_name]
- sequence_names = list(msa_data.keys())
- sequence_names_sampled = random.sample(sequence_names,k=num_sequences_kept-1)
- for seq in sequence_names_sampled:
- filtered_msa[seq] = msa_data[seq]
- return filtered_msa
-
-def process_msa_data(MSA_data_file):
- """
- Helper function that takes as input a path to a MSA file (expects a2m format) and returns a dict mapping sequence ID to the corresponding AA sequence.
- """
- msa_data = defaultdict(str)
- sequence_name = ""
- with open(MSA_data_file, "r") as msa_file:
- for i, line in enumerate(msa_file):
- line = line.rstrip()
- if line.startswith(">"):
- sequence_name = line
- else:
- msa_data[sequence_name] += line.upper()
- return msa_data
-
-def get_one_hot_sequences_dict(msa_data,MSA_start,MSA_end,vocab):
- vocab_size = len(vocab.keys())
- num_sequences_msa = len(msa_data.keys())
- one_hots = np.zeros((num_sequences_msa,MSA_end-MSA_start,vocab_size))
- for i,seq_name in enumerate(msa_data.keys()):
- sequence = msa_data[seq_name]
- for j,letter in enumerate(sequence):
- if letter in vocab:
- k = vocab[letter]
- one_hots[i,j,k] = 1.0
- return one_hots
-
-def one_hot(sequence_string,vocab):
- one_hots = np.zeros((len(sequence_string),len(vocab.keys())))
- for j,letter in enumerate(sequence_string):
- if letter in vocab:
- k = vocab[letter]
- one_hots[j,k] = 1.0
- return one_hots.flatten()
-
-def get_msa_prior(MSA_data_file, MSA_weight_file_name, MSA_start, MSA_end, len_target_seq, vocab, retrieval_aggregation_mode="aggregate_substitution", filter_MSA=True, verbose=False):
- """
- Function to enable retrieval inference mode, via computation of (weighted) pseudocounts of AAs at each position of the retrieved MSA.
- MSA_data_file: (string) path to MSA file (expects a2m format).
- MSA_weight_file_name: (string) path to sequence weights in MSA.
- MSA_start: (int) Sequence position that the MSA starts at (1-indexing).
- MSA_end: (int) Sequence position that the MSA ends at (1-indexing).
- len_target_seq: (int) Full length of sequence to be scored.
- vocab: (dict) Vocabulary of the tokenizer.
- retrieval_aggregation_mode: (string) Mode for retrieval inference (aggregate_substitution Vs aggregate_indel). If None, places a uniform prior over each token.
- filter_MSA: (bool) Whether to filter out sequences with very low hamming similarity (< 0.2) to the reference sequence in the MSA (first sequence).
- verbose: (bool) Whether to print to the console processing details along the way.
- """
- msa_data = process_msa_data(MSA_data_file)
- vocab_size = len(vocab.keys())
- if verbose: print("Target seq len is {}, MSA length is {}, start position is {}, end position is {} and vocab size is {}".format(len_target_seq,MSA_end-MSA_start,MSA_start,MSA_end,vocab_size))
-
- if filter_MSA:
- if verbose: print("Num sequences in MSA pre filtering: {}".format(len(msa_data.keys())))
- list_sequence_names = list(msa_data.keys())
- focus_sequence_name = list(msa_data.keys())[0]
- ref_sequence_hot = one_hot(msa_data[focus_sequence_name],vocab)
- for sequence_name in list_sequence_names:
- seq_hot = one_hot(msa_data[sequence_name],vocab)
- hamming_similarity_seq_ref = np.dot(ref_sequence_hot,seq_hot) / np.dot(ref_sequence_hot,ref_sequence_hot)
- if hamming_similarity_seq_ref < 0.2:
- del msa_data[sequence_name]
- if verbose: print("Num sequences in MSA post filtering: {}".format(len(msa_data.keys())))
-
- if MSA_weight_file_name is not None:
- if verbose: print("Using weights in {} for sequences in MSA.".format(MSA_weight_file_name))
- assert os.path.exists(MSA_weight_file_name), "Weights file not located on disk."
- MSA_EVE = MSA_processing(
- MSA_location=MSA_data_file,
- use_weights=True,
- weights_location=MSA_weight_file_name
- )
- #We scan through all sequences to see if we have a weight for them as per EVE pre-processing. We drop them otherwise.
- dropped_sequences=0
- list_sequence_names = list(msa_data.keys())
- MSA_weight=[]
- for sequence_name in list_sequence_names:
- if sequence_name not in MSA_EVE.seq_name_to_sequence:
- dropped_sequences +=1
- del msa_data[sequence_name]
- else:
- MSA_weight.append(MSA_EVE.seq_name_to_weight[sequence_name])
- if verbose: print("Dropped {} sequences from MSA due to absent sequence weights".format(dropped_sequences))
- else:
- MSA_weight = [1] * len(list(msa_data.keys()))
-
- if retrieval_aggregation_mode=="aggregate_substitution" or retrieval_aggregation_mode=="aggregate_indel":
- one_hots = get_one_hot_sequences_dict(msa_data,MSA_start,MSA_end,vocab)
- MSA_weight = np.expand_dims(np.array(MSA_weight),axis=(1,2))
- base_rate = 1e-5
- base_rates = np.ones_like(one_hots) * base_rate
- weighted_one_hots = (one_hots + base_rates) * MSA_weight
- MSA_weight_norm_counts = weighted_one_hots.sum(axis=-1).sum(axis=0)
- MSA_weight_norm_counts = np.tile(MSA_weight_norm_counts.reshape(-1,1), (1,vocab_size))
- one_hots_avg = weighted_one_hots.sum(axis=0) / MSA_weight_norm_counts
- msa_prior = np.zeros((len_target_seq,vocab_size))
- msa_prior[MSA_start:MSA_end,:]=one_hots_avg
- else:
- msa_prior = np.ones((len_target_seq,vocab_size)) / vocab_size
-
- if verbose:
- for idx, position in enumerate(msa_prior):
- if len(position)!=25:
- print("Size error")
- if not round(position.sum(),2)==1.0:
- print("Position at index {} does not add up to 1: {}".format(idx, position.sum()))
-
- return msa_prior
-
-
-def update_retrieved_MSA_log_prior_indel(model, MSA_log_prior, MSA_start, MSA_end, mutated_sequence):
- """
- Function to process MSA when scoring indels.
- To identify positions to add / remove in the retrieved MSA, we append and align the sequence to be scored to the original MSA for that protein family with Clustal Omega.
- If the original MSA is relatively deep (over 100k sequences), we sample (by default) 100k rows at random from that MSA to speed computations.
- MSA sampling is performed only once (for the first sequence to be scored). Subsequent scoring use the same MSA sample.
- """
- if not os.path.isdir(model.MSA_folder + os.sep + "Sampled"):
- os.mkdir(model.MSA_folder + os.sep + "Sampled")
- sampled_MSA_location = model.MSA_folder + os.sep + "Sampled" + os.sep + "Sampled_" + model.MSA_filename.split(os.sep)[-1]
-
- if not os.path.exists(sampled_MSA_location):
- msa_data = process_msa_data(model.MSA_filename)
- msa_data_sampled = filter_msa(msa_data, num_sequences_kept=100000) #If MSA has less than 100k sequences, the sample is identical to original MSA
- with open(sampled_MSA_location, 'w') as sampled_write_location:
- for index, key in enumerate(msa_data_sampled):
- key_name = ">REFERENCE_SEQUENCE" if index==0 else key
- msa_data_sampled[key] = msa_data_sampled[key].upper()
- msa_data_sampled[key] = msa_data_sampled[key].replace(".","-")
- sampled_write_location.write(key_name+"\n"+"\n".join([msa_data_sampled[key][i:i+80] for i in range(0, len(msa_data_sampled[key]), 80)])+"\n")
-
- seq_to_align_location = model.MSA_folder + os.sep + "Sampled" + os.sep + "Seq_to_align_" + model.MSA_filename.split(os.sep)[-1]
- sequence_text_split = [mutated_sequence[i:i+80] for i in range(0, len(mutated_sequence), 80)]
- sequence_text_split_split_join = "\n".join([">SEQ_TO_SCORE"]+sequence_text_split)
- os.system("echo '"+sequence_text_split_split_join+"' > "+seq_to_align_location)
-
- expanded_MSA_location = model.MSA_folder + os.sep + "Sampled" + os.sep + "Expanded_" + model.MSA_filename.split(os.sep)[-1]
- clustalw_cline = ClustalOmegaCommandline(cmd=model.config.clustal_omega_location,
- profile1=sampled_MSA_location,
- profile2=seq_to_align_location,
- outfile=expanded_MSA_location,
- force=True)
- stdout, stderr = clustalw_cline()
- msa_data = process_msa_data(expanded_MSA_location)
- aligned_seqA, aligned_seqB = msa_data[">SEQ_TO_SCORE"], msa_data[">REFERENCE_SEQUENCE"]
- try:
- keep_column=[]
- for column_index_pairwise_alignment in range(len(aligned_seqA)):
- if aligned_seqA[column_index_pairwise_alignment]=="-" and aligned_seqB[column_index_pairwise_alignment]=="-":
- continue
- elif aligned_seqA[column_index_pairwise_alignment]=="-":
- keep_column.append(False)
- elif aligned_seqB[column_index_pairwise_alignment]=="-":
- MSA_log_prior=torch.cat((MSA_log_prior[:column_index_pairwise_alignment], torch.zeros(MSA_log_prior.shape[1]).view(1,-1).cuda(), MSA_log_prior[column_index_pairwise_alignment:]),dim=0)
- keep_column.append(True) #keep the zero column we just added
- else:
- keep_column.append(True)
- MSA_log_prior = MSA_log_prior[keep_column]
- MSA_end = MSA_start + len(MSA_log_prior)
- except:
- print("Error when processing the following alignment: {}".format(expanded_MSA_location))
- return MSA_log_prior, MSA_start, MSA_end
-
-class MSA_processing:
- def __init__(self,
- MSA_location="",
- theta=0.2,
- use_weights=True,
- weights_location="./data/weights",
- preprocess_MSA=True,
- threshold_sequence_frac_gaps=0.5,
- threshold_focus_cols_frac_gaps=0.3,
- remove_sequences_with_indeterminate_AA_in_focus_cols=True
- ):
-
- """
- This MSA_processing class is directly borrowed from the EVE codebase: https://github.com/OATML-Markslab/EVE
-
- Parameters:
- - msa_location: (path) Location of the MSA data. Constraints on input MSA format:
- - focus_sequence is the first one in the MSA data
- - first line is structured as follows: ">focus_seq_name/start_pos-end_pos" (e.g., >SPIKE_SARS2/310-550)
- - corespondding sequence data located on following line(s)
- - then all other sequences follow with ">name" on first line, corresponding data on subsequent lines
- - theta: (float) Sequence weighting hyperparameter. Generally: Prokaryotic and eukaryotic families = 0.2; Viruses = 0.01
- - use_weights: (bool) If False, sets all sequence weights to 1. If True, checks weights_location -- if non empty uses that;
- otherwise compute weights from scratch and store them at weights_location
- - weights_location: (path) Location to load from/save to the sequence weights
- - preprocess_MSA: (bool) performs pre-processing of MSA to remove short fragments and positions that are not well covered.
- - threshold_sequence_frac_gaps: (float, between 0 and 1) Threshold value to define fragments
- - sequences with a fraction of gap characters above threshold_sequence_frac_gaps are removed
- - default is set to 0.5 (i.e., fragments with 50% or more gaps are removed)
- - threshold_focus_cols_frac_gaps: (float, between 0 and 1) Threshold value to define focus columns
- - positions with a fraction of gap characters above threshold_focus_cols_pct_gaps will be set to lower case (and not included in the focus_cols)
- - default is set to 0.3 (i.e., focus positions are the ones with 30% of gaps or less, i.e., 70% or more residue occupancy)
- - remove_sequences_with_indeterminate_AA_in_focus_cols: (bool) Remove all sequences that have indeterminate AA (e.g., B, J, X, Z) at focus positions of the wild type
- """
- np.random.seed(2021)
- self.MSA_location = MSA_location
- self.weights_location = weights_location
- self.theta = theta
- self.alphabet = "ACDEFGHIKLMNPQRSTVWY"
- self.use_weights = use_weights
- self.preprocess_MSA = preprocess_MSA
- self.threshold_sequence_frac_gaps = threshold_sequence_frac_gaps
- self.threshold_focus_cols_frac_gaps = threshold_focus_cols_frac_gaps
- self.remove_sequences_with_indeterminate_AA_in_focus_cols = remove_sequences_with_indeterminate_AA_in_focus_cols
-
- self.gen_alignment()
-
- def gen_alignment(self, verbose=False):
- """ Read training alignment and store basics in class instance """
- self.aa_dict = {}
- for i,aa in enumerate(self.alphabet):
- self.aa_dict[aa] = i
-
- self.seq_name_to_sequence = defaultdict(str)
- name = ""
- with open(self.MSA_location, "r") as msa_data:
- for i, line in enumerate(msa_data):
- line = line.rstrip()
- if line.startswith(">"):
- name = line
- if i==0:
- self.focus_seq_name = name
- else:
- self.seq_name_to_sequence[name] += line
-
-
- ## MSA pre-processing to remove inadequate columns and sequences
- if self.preprocess_MSA:
- msa_df = pd.DataFrame.from_dict(self.seq_name_to_sequence, orient='index', columns=['sequence'])
- # Data clean up
- msa_df.sequence = msa_df.sequence.apply(lambda x: x.replace(".","-")).apply(lambda x: ''.join([aa.upper() for aa in x]))
- # Remove columns that would be gaps in the wild type
- non_gap_wt_cols = [aa!='-' for aa in msa_df.sequence[self.focus_seq_name]]
- msa_df['sequence'] = msa_df['sequence'].apply(lambda x: ''.join([aa for aa,non_gap_ind in zip(x, non_gap_wt_cols) if non_gap_ind]))
- assert 0.0 <= self.threshold_sequence_frac_gaps <= 1.0,"Invalid fragment filtering parameter"
- assert 0.0 <= self.threshold_focus_cols_frac_gaps <= 1.0,"Invalid focus position filtering parameter"
- msa_array = np.array([list(seq) for seq in msa_df.sequence])
- gaps_array = np.array(list(map(lambda seq: [aa=='-' for aa in seq], msa_array)))
- # Identify fragments with too many gaps
- seq_gaps_frac = gaps_array.mean(axis=1)
- seq_below_threshold = seq_gaps_frac <= self.threshold_sequence_frac_gaps
- if verbose: print("Proportion of sequences dropped due to fraction of gaps: "+str(round(float(1 - seq_below_threshold.sum()/seq_below_threshold.shape)*100,2))+"%")
- # Identify focus columns
- columns_gaps_frac = gaps_array[seq_below_threshold].mean(axis=0)
- index_cols_below_threshold = columns_gaps_frac <= self.threshold_focus_cols_frac_gaps
- if verbose: print("Proportion of non-focus columns removed: "+str(round(float(1 - index_cols_below_threshold.sum()/index_cols_below_threshold.shape)*100,2))+"%")
- # Lower case non focus cols and filter fragment sequences
- msa_df['sequence'] = msa_df['sequence'].apply(lambda x: ''.join([aa.upper() if upper_case_ind else aa.lower() for aa, upper_case_ind in zip(x, index_cols_below_threshold)]))
- msa_df = msa_df[seq_below_threshold]
- # Overwrite seq_name_to_sequence with clean version
- self.seq_name_to_sequence = defaultdict(str)
- for seq_idx in range(len(msa_df['sequence'])):
- self.seq_name_to_sequence[msa_df.index[seq_idx]] = msa_df.sequence[seq_idx]
-
- self.focus_seq = self.seq_name_to_sequence[self.focus_seq_name]
- self.focus_cols = [ix for ix, s in enumerate(self.focus_seq) if s == s.upper() and s!='-']
- self.focus_seq_trimmed = [self.focus_seq[ix] for ix in self.focus_cols]
- self.seq_len = len(self.focus_cols)
- self.alphabet_size = len(self.alphabet)
-
- # Connect local sequence index with uniprot index (index shift inferred from 1st row of MSA)
- focus_loc = self.focus_seq_name.split("/")[-1]
- start,stop = focus_loc.split("-")
- self.focus_start_loc = int(start)
- self.focus_stop_loc = int(stop)
- self.uniprot_focus_col_to_wt_aa_dict \
- = {idx_col+int(start):self.focus_seq[idx_col] for idx_col in self.focus_cols}
- self.uniprot_focus_col_to_focus_idx \
- = {idx_col+int(start):idx_col for idx_col in self.focus_cols}
-
- # Move all letters to CAPS; keeps focus columns only
- self.raw_seq_name_to_sequence = self.seq_name_to_sequence.copy()
- for seq_name,sequence in self.seq_name_to_sequence.items():
- sequence = sequence.replace(".","-")
- self.seq_name_to_sequence[seq_name] = [sequence[ix].upper() for ix in self.focus_cols]
-
- # Remove sequences that have indeterminate AA (e.g., B, J, X, Z) in the focus columns
- if self.remove_sequences_with_indeterminate_AA_in_focus_cols:
- alphabet_set = set(list(self.alphabet))
- seq_names_to_remove = []
- for seq_name,sequence in self.seq_name_to_sequence.items():
- for letter in sequence:
- if letter not in alphabet_set and letter != "-":
- seq_names_to_remove.append(seq_name)
- continue
- seq_names_to_remove = list(set(seq_names_to_remove))
- for seq_name in seq_names_to_remove:
- del self.seq_name_to_sequence[seq_name]
-
- # Encode the sequences
- self.one_hot_encoding = np.zeros((len(self.seq_name_to_sequence.keys()),len(self.focus_cols),len(self.alphabet)))
- if verbose: print("One-hot encoded sequences shape:" + str(self.one_hot_encoding.shape))
- for i,seq_name in enumerate(self.seq_name_to_sequence.keys()):
- sequence = self.seq_name_to_sequence[seq_name]
- for j,letter in enumerate(sequence):
- if letter in self.aa_dict:
- k = self.aa_dict[letter]
- self.one_hot_encoding[i,j,k] = 1.0
-
- if self.use_weights:
- try:
- self.weights = np.load(file=self.weights_location)
- if verbose: print("Loaded sequence weights from disk")
- except:
- if verbose: print ("Computing sequence weights")
- list_seq = self.one_hot_encoding
- list_seq = list_seq.reshape((list_seq.shape[0], list_seq.shape[1] * list_seq.shape[2]))
- def compute_weight(seq):
- number_non_empty_positions = np.dot(seq,seq)
- if number_non_empty_positions>0:
- denom = np.dot(list_seq,seq) / np.dot(seq,seq)
- denom = np.sum(denom > 1 - self.theta)
- return 1/denom
- else:
- return 0.0 #return 0 weight if sequence is fully empty
- self.weights = np.array(list(map(compute_weight,list_seq)))
- np.save(file=self.weights_location, arr=self.weights)
- else:
- # If not using weights, use an isotropic weight matrix
- if verbose: print("Not weighting sequence data")
- self.weights = np.ones(self.one_hot_encoding.shape[0])
-
- self.Neff = np.sum(self.weights)
- self.num_sequences = self.one_hot_encoding.shape[0]
- self.seq_name_to_weight={}
- for i,seq_name in enumerate(self.seq_name_to_sequence.keys()):
- self.seq_name_to_weight[seq_name]=self.weights[i]
-
- if verbose:
- print ("Neff =",str(self.Neff))
- print ("Data Shape =",self.one_hot_encoding.shape)
\ No newline at end of file
diff --git a/spaces/Paulraj916/paulraj916/scrapFonts.py b/spaces/Paulraj916/paulraj916/scrapFonts.py
deleted file mode 100644
index 293917a2bc13b650294e01e44f1201bd0e39ad90..0000000000000000000000000000000000000000
--- a/spaces/Paulraj916/paulraj916/scrapFonts.py
+++ /dev/null
@@ -1,61 +0,0 @@
-import os
-import requests
-from bs4 import BeautifulSoup
-from urllib.parse import urljoin
-
-class ScrapFonts:
- def __init__(self, url, output_folder):
- self.url = url
- self.output_folder = output_folder
-
- def extract_and_save_fonts(self):
- try:
- # Send an HTTP GET request to the webpage and get the HTML content
- response = requests.get(self.url)
- response.raise_for_status()
- html_content = response.text
-
- # Parse the HTML content using BeautifulSoup
- soup = BeautifulSoup(html_content, 'html.parser')
-
- # Find all font tags
- font_tags = soup.find_all('link', {'rel': 'stylesheet', 'type': 'text/css'})
-
- # Extract font URLs and store them in a list
- font_urls = []
- for font_tag in font_tags:
- if 'href' in font_tag.attrs:
- font_url = font_tag['href']
- absolute_url = urljoin(self.url, font_url)
- font_urls.append(absolute_url)
-
- # Create the output folder if it doesn't exist
- os.makedirs(self.output_folder, exist_ok=True)
-
- # Download and save fonts in the output folder
- for font_url in font_urls:
- try:
- font_content = requests.get(font_url).content
-
- # Get the path to the font file
- path = urljoin(self.url, font_url).replace(self.url, '').lstrip('/')
- filename = os.path.join(self.output_folder, path)
-
- # Create subdirectories if needed
- os.makedirs(os.path.dirname(filename), exist_ok=True)
-
- # Save the font content to the file
- with open(filename, 'wb') as file:
- file.write(font_content)
-
- print(f"Downloaded: {font_url}")
- except Exception as e:
- print(f"Failed to download {font_url}: {e}")
-
- print("Fonts downloaded and saved successfully.")
- except requests.exceptions.MissingSchema:
- print(f"Skipping download from {self.url} (Invalid URL)")
- except requests.exceptions.RequestException as e:
- print(f"Failed to fetch content from {self.url}: {e}")
- except OSError as e:
- print(f"Failed to save font: {e}")
diff --git a/spaces/PeepDaSlan9/De-limiter/dataloader/dataset.py b/spaces/PeepDaSlan9/De-limiter/dataloader/dataset.py
deleted file mode 100644
index e23e8cba679d5830cbeed5cd19122e0678ea3c77..0000000000000000000000000000000000000000
--- a/spaces/PeepDaSlan9/De-limiter/dataloader/dataset.py
+++ /dev/null
@@ -1,579 +0,0 @@
-# Dataloader based on https://github.com/jeonchangbin49/LimitAug
-import os
-from glob import glob
-import random
-from typing import Optional, Callable
-
-import numpy as np
-import torch
-import librosa
-from torch.utils.data import Dataset
-import pyloudnorm as pyln
-from pedalboard import Pedalboard, Limiter, Gain, Compressor, Clipping
-
-from utils import load_wav_arbitrary_position_stereo, db2linear
-
-
-# based on https://github.com/sigsep/open-unmix-pytorch
-def aug_from_str(list_of_function_names: list):
- if list_of_function_names:
- return Compose([globals()["_augment_" + aug] for aug in list_of_function_names])
- else:
- return lambda audio: audio
-
-
-class Compose(object):
- """Composes several augmentation transforms.
- Args:
- augmentations: list of augmentations to compose.
- """
-
- def __init__(self, transforms):
- self.transforms = transforms
-
- def __call__(self, audio: torch.Tensor) -> torch.Tensor:
- for t in self.transforms:
- audio = t(audio)
- return audio
-
-
-# numpy based augmentation
-# based on https://github.com/sigsep/open-unmix-pytorch
-def _augment_gain(audio, low=0.25, high=1.25):
- """Applies a random gain between `low` and `high`"""
- g = low + random.random() * (high - low)
- return audio * g
-
-
-def _augment_channelswap(audio):
- """Swap channels of stereo signals with a probability of p=0.5"""
- if audio.shape[0] == 2 and random.random() < 0.5:
- return np.flip(audio, axis=0) # axis=0 must be given
- else:
- return audio
-
-
-# Linear gain increasing implementation for Method (1)
-def apply_linear_gain_increase(mixture, target, board, meter, samplerate, target_lufs):
- mixture, target = mixture.T, target.T
- loudness = meter.integrated_loudness(mixture)
-
- if np.isinf(loudness):
- augmented_gain = 0.0
- board[0].gain_db = augmented_gain
- else:
- augmented_gain = target_lufs - loudness
- board[0].gain_db = augmented_gain
- mixture = board(mixture.T, samplerate)
- target = board(target.T, samplerate)
- return mixture, target
-
-
-# LimitAug implementation for Method (2) and
-# implementation of LimitAug then Loudness normalization for Method (4)
-def apply_limitaug(
- audio,
- board,
- meter,
- samplerate,
- target_lufs,
- target_loudnorm_lufs=None,
- loudness=None,
-):
- audio = audio.T
- if loudness is None:
- loudness = meter.integrated_loudness(audio)
-
- if np.isinf(loudness):
- augmented_gain = 0.0
- board[0].gain_db = augmented_gain
- else:
- augmented_gain = target_lufs - loudness
- board[0].gain_db = augmented_gain
- audio = board(audio.T, samplerate)
-
- if target_loudnorm_lufs:
- after_loudness = meter.integrated_loudness(audio.T)
-
- if np.isinf(after_loudness):
- pass
- else:
- target_gain = target_loudnorm_lufs - after_loudness
- audio = audio * db2linear(target_gain)
- return audio, loudness
-
-
-"""
-This dataloader implementation is based on https://github.com/sigsep/open-unmix-pytorch
-"""
-
-
-class MusdbTrainDataset(Dataset):
- def __init__(
- self,
- target: str = "vocals",
- root: str = None,
- seq_duration: Optional[float] = 6.0,
- samples_per_track: int = 64,
- source_augmentations: Optional[Callable] = lambda audio: audio,
- sample_rate: int = 44100,
- seed: int = 42,
- limitaug_method: str = "limitaug_then_loudnorm",
- limitaug_mode: str = "normal_L",
- limitaug_custom_target_lufs: float = None,
- limitaug_custom_target_lufs_std: float = None,
- target_loudnorm_lufs: float = -14.0,
- custom_limiter_attack_range: list = [2.0, 2.0],
- custom_limiter_release_range: list = [200.0, 200.0],
- *args,
- **kwargs,
- ) -> None:
- """
- Parameters
- ----------
- limitaug_method : str
- choose from ["linear_gain_increase", "limitaug", "limitaug_then_loudnorm", "only_loudnorm"]
- limitaug_mode : str
- choose from ["uniform", "normal", "normal_L", "normal_XL", "normal_short_term", "normal_L_short_term", "normal_XL_short_term", "custom"]
- limitaug_custom_target_lufs : float
- valid only when
- limitaug_mode == "custom"
- limitaug_custom_target_lufs_std : float
- also valid only when
- limitaug_mode == "custom
- target_loudnorm_lufs : float
- valid only when
- limitaug_method == 'limitaug_then_loudnorm' or 'only_loudnorm'
- default is -14.
- To the best of my knowledge, Spotify and Youtube music is using -14 as a reference loudness normalization level.
- No special reason for the choice of -14 as target_loudnorm_lufs.
- target : str
- target name of the source to be separated, defaults to ``vocals``.
- root : str
- root path of MUSDB
- seq_duration : float
- training is performed in chunks of ``seq_duration`` (in seconds,
- defaults to ``None`` which loads the full audio track
- samples_per_track : int
- sets the number of samples, yielded from each track per epoch.
- Defaults to 64
- source_augmentations : list[callables]
- provide list of augmentation function that take a multi-channel
- audio file of shape (src, samples) as input and output. Defaults to
- no-augmentations (input = output)
- seed : int
- control randomness of dataset iterations
- args, kwargs : additional keyword arguments
- used to add further control for the musdb dataset
- initialization function.
- """
-
- self.seed = seed
- random.seed(seed)
- self.seq_duration = seq_duration
- self.target = target
- self.samples_per_track = samples_per_track
- self.source_augmentations = source_augmentations
- self.sample_rate = sample_rate
-
- self.root = root
- self.sources = ["vocals", "bass", "drums", "other"]
- self.train_list = glob(f"{self.root}/train/*")
- self.valid_list = [
- "ANiMAL - Rockshow",
- "Actions - One Minute Smile",
- "Alexander Ross - Goodbye Bolero",
- "Clara Berry And Wooldog - Waltz For My Victims",
- "Fergessen - Nos Palpitants",
- "James May - On The Line",
- "Johnny Lokke - Promises & Lies",
- "Leaf - Summerghost",
- "Meaxic - Take A Step",
- "Patrick Talbot - A Reason To Leave",
- "Skelpolu - Human Mistakes",
- "Traffic Experiment - Sirens",
- "Triviul - Angelsaint",
- "Young Griffo - Pennies",
- ]
-
- self.train_list = [
- x for x in self.train_list if os.path.basename(x) not in self.valid_list
- ]
-
- # limitaug related
- self.limitaug_method = limitaug_method
- self.limitaug_mode = limitaug_mode
- self.limitaug_custom_target_lufs = limitaug_custom_target_lufs
- self.limitaug_custom_target_lufs_std = limitaug_custom_target_lufs_std
- self.target_loudnorm_lufs = target_loudnorm_lufs
- self.meter = pyln.Meter(self.sample_rate)
-
- # Method (1) in our paper's Results section and Table 5
- if self.limitaug_method == "linear_gain_increase":
- print("using linear gain increasing!")
- self.board = Pedalboard([Gain(gain_db=0.0)])
-
- # Method (2) in our paper's Results section and Table 5
- elif self.limitaug_method == "limitaug":
- print("using limitaug!")
- self.board = Pedalboard(
- [Gain(gain_db=0.0), Limiter(threshold_db=0.0, release_ms=100.0)]
- )
-
- # Method (3) in our paper's Results section and Table 5
- elif self.limitaug_method == "only_loudnorm":
- print("using only loudness normalized inputs")
-
- # Method (4) in our paper's Results section and Table 5
- elif self.limitaug_method == "limitaug_then_loudnorm":
- print("using limitaug then loudness normalize!")
- self.board = Pedalboard(
- [Gain(gain_db=0.0), Limiter(threshold_db=0.0, release_ms=100.0)]
- )
-
- elif self.limitaug_method == "custom_limiter_limitaug":
- print("using Custom limiter limitaug!")
- self.custom_limiter_attack_range = custom_limiter_attack_range
- self.custom_limiter_release_range = custom_limiter_release_range
- self.board = Pedalboard(
- [
- Gain(gain_db=0.0),
- Compressor(
- threshold_db=-10.0, ratio=4.0, attack_ms=2.0, release_ms=200.0
- ), # attack_ms and release_ms will be changed later.
- Compressor(
- threshold_db=0.0,
- ratio=1000.0,
- attack_ms=0.001,
- release_ms=100.0,
- ),
- Gain(gain_db=3.75),
- Clipping(threshold_db=0.0),
- ]
- ) # This implementation is the same as JUCE Limiter.
- # However, we want the first compressor to have a variable attack and release time.
- # Therefore, we use the Custom Limiter instead of the JUCE Limiter.
-
- self.limitaug_mode_statistics = {
- "normal": [
- -15.954,
- 1.264,
- ], # -15.954 is mean LUFS of musdb-hq and 1.264 is standard deviation
- "normal_L": [
- -10.887,
- 1.191,
- ], # -10.887 is mean LUFS of musdb-L and 1.191 is standard deviation
- "normal_XL": [
- -8.608,
- 1.165,
- ], # -8.608 is mean LUFS of musdb-L and 1.165 is standard deviation
- "normal_short_term": [
- -17.317,
- 5.036,
- ], # In our experiments, short-term statistics were not helpful.
- "normal_L_short_term": [-12.303, 5.233],
- "normal_XL_short_term": [-9.988, 5.518],
- "custom": [limitaug_custom_target_lufs, limitaug_custom_target_lufs_std],
- }
-
- def sample_target_lufs(self):
- if (
- self.limitaug_mode == "uniform"
- ): # if limitaug_mode is uniform, then choose target_lufs from uniform distribution
- target_lufs = random.uniform(-20, -5)
- else: # else, choose target_lufs from gaussian distribution
- target_lufs = random.gauss(
- self.limitaug_mode_statistics[self.limitaug_mode][0],
- self.limitaug_mode_statistics[self.limitaug_mode][1],
- )
-
- return target_lufs
-
- def get_limitaug_results(self, mixture, target):
- # Apply linear gain increasing (Method (1))
- if self.limitaug_method == "linear_gain_increase":
- target_lufs = self.sample_target_lufs()
- mixture, target = apply_linear_gain_increase(
- mixture,
- target,
- self.board,
- self.meter,
- self.sample_rate,
- target_lufs=target_lufs,
- )
-
- # Apply LimitAug (Method (2))
- elif self.limitaug_method == "limitaug":
- self.board[1].release_ms = random.uniform(30.0, 200.0)
- mixture_orig = mixture.copy()
- target_lufs = self.sample_target_lufs()
- mixture, _ = apply_limitaug(
- mixture,
- self.board,
- self.meter,
- self.sample_rate,
- target_lufs=target_lufs,
- )
- print("mixture shape:", mixture.shape)
- print("target shape:", target.shape)
- target *= mixture / (mixture_orig + 1e-8)
-
- # Apply only loudness normalization (Method(3))
- elif self.limitaug_method == "only_loudnorm":
- mixture_loudness = self.meter.integrated_loudness(mixture.T)
- if np.isinf(
- mixture_loudness
- ): # if the source is silence, then mixture_loudness is -inf.
- pass
- else:
- augmented_gain = (
- self.target_loudnorm_lufs - mixture_loudness
- ) # default target_loudnorm_lufs is -14.
- mixture = mixture * db2linear(augmented_gain)
- target = target * db2linear(augmented_gain)
-
- # Apply LimitAug then loudness normalization (Method (4))
- elif self.limitaug_method == "limitaug_then_loudnorm":
- self.board[1].release_ms = random.uniform(30.0, 200.0)
- mixture_orig = mixture.copy()
- target_lufs = self.sample_target_lufs()
- mixture, _ = apply_limitaug(
- mixture,
- self.board,
- self.meter,
- self.sample_rate,
- target_lufs=target_lufs,
- target_loudnorm_lufs=self.target_loudnorm_lufs,
- )
- target *= mixture / (mixture_orig + 1e-8)
-
- # Apply LimitAug using Custom Limiter
- elif self.limitaug_method == "custom_limiter_limitaug":
- # Change attack time of First compressor of the Limiter
- self.board[1].attack_ms = random.uniform(
- self.custom_limiter_attack_range[0], self.custom_limiter_attack_range[1]
- )
- # Change release time of First compressor of the Limiter
- self.board[1].release_ms = random.uniform(
- self.custom_limiter_release_range[0],
- self.custom_limiter_release_range[1],
- )
- # Change release time of Second compressor of the Limiter
- self.board[2].release_ms = random.uniform(30.0, 200.0)
- mixture_orig = mixture.copy()
- target_lufs = self.sample_target_lufs()
- mixture, _ = apply_limitaug(
- mixture,
- self.board,
- self.meter,
- self.sample_rate,
- target_lufs=target_lufs,
- target_loudnorm_lufs=self.target_loudnorm_lufs,
- )
- target *= mixture / (mixture_orig + 1e-8)
-
- return mixture, target
-
- def __getitem__(self, index):
- audio_sources = []
- target_ind = None
-
- for k, source in enumerate(self.sources):
- # memorize index of target source
- if source == self.target: # if source is 'vocals'
- target_ind = k
- track_path = self.train_list[
- index // self.samples_per_track
- ] # we want to use # training samples per each track.
- audio_path = f"{track_path}/{source}.wav"
- audio = load_wav_arbitrary_position_stereo(
- audio_path, self.sample_rate, self.seq_duration
- )
- else:
- track_path = random.choice(self.train_list)
- audio_path = f"{track_path}/{source}.wav"
- audio = load_wav_arbitrary_position_stereo(
- audio_path, self.sample_rate, self.seq_duration
- )
- audio = self.source_augmentations(audio)
- audio_sources.append(audio)
-
- stems = np.stack(audio_sources, axis=0)
-
- # # apply linear mix over source index=0
- x = stems.sum(0)
- # get the target stem
- y = stems[target_ind]
-
- # Apply the limitaug,
- x, y = self.get_limitaug_results(x, y)
-
- x = torch.as_tensor(x, dtype=torch.float32)
- y = torch.as_tensor(y, dtype=torch.float32)
-
- return x, y
-
- def __len__(self):
- return len(self.train_list) * self.samples_per_track
-
-
-class MusdbValidDataset(Dataset):
- def __init__(
- self,
- target: str = "vocals",
- root: str = None,
- *args,
- **kwargs,
- ) -> None:
- """MUSDB18 torch.data.Dataset that samples from the MUSDB tracks
- using track and excerpts with replacement.
- Parameters
- ----------
- target : str
- target name of the source to be separated, defaults to ``vocals``.
- root : str
- root path of MUSDB18HQ dataset, defaults to ``None``.
- args, kwargs : additional keyword arguments
- used to add further control for the musdb dataset
- initialization function.
- """
- self.target = target
- self.sample_rate = 44100.0 # musdb is fixed sample rate
-
- self.root = root
- self.sources = ["vocals", "bass", "drums", "other"]
- self.train_list = glob(f"{self.root}/train/*")
-
- self.valid_list = [
- "ANiMAL - Rockshow",
- "Actions - One Minute Smile",
- "Alexander Ross - Goodbye Bolero",
- "Clara Berry And Wooldog - Waltz For My Victims",
- "Fergessen - Nos Palpitants",
- "James May - On The Line",
- "Johnny Lokke - Promises & Lies",
- "Leaf - Summerghost",
- "Meaxic - Take A Step",
- "Patrick Talbot - A Reason To Leave",
- "Skelpolu - Human Mistakes",
- "Traffic Experiment - Sirens",
- "Triviul - Angelsaint",
- "Young Griffo - Pennies",
- ]
- self.valid_list = [
- x for x in self.train_list if os.path.basename(x) in self.valid_list
- ]
-
- def __getitem__(self, index):
- audio_sources = []
- target_ind = None
-
- for k, source in enumerate(self.sources):
- # memorize index of target source
- if source == self.target: # if source is 'vocals'
- target_ind = k
- track_path = self.valid_list[index]
- song_name = os.path.basename(track_path)
- audio_path = f"{track_path}/{source}.wav"
- # audio = utils.load_wav_stereo(audio_path, self.sample_rate)
- audio = librosa.load(audio_path, mono=False, sr=self.sample_rate)[0]
- else:
- track_path = self.valid_list[index]
- song_name = os.path.basename(track_path)
- audio_path = f"{track_path}/{source}.wav"
- # audio = utils.load_wav_stereo(audio_path, self.sample_rate)
- audio = librosa.load(audio_path, mono=False, sr=self.sample_rate)[0]
-
- audio = torch.as_tensor(audio, dtype=torch.float32)
- audio_sources.append(audio)
-
- stems = torch.stack(audio_sources, dim=0)
- # # apply linear mix over source index=0
- x = stems.sum(0)
- # get the target stem
- y = stems[target_ind]
-
- return x, y, song_name
-
- def __len__(self):
- return len(self.valid_list)
-
-
-# If you want to check the LUFS values of training examples, run this.
-if __name__ == "__main__":
- import argparse
-
- parser = argparse.ArgumentParser(
- description="Make musdb-L and musdb-XL dataset from its ratio data"
- )
-
- parser.add_argument(
- "--musdb_root",
- type=str,
- default="/path/to/musdb",
- help="root path of musdb-hq dataset",
- )
- parser.add_argument(
- "--limitaug_method",
- type=str,
- default="limitaug",
- choices=[
- "linear_gain_increase",
- "limitaug",
- "limitaug_then_loudnorm",
- "only_loudnorm",
- None,
- ],
- help="choose limitaug method",
- )
- parser.add_argument(
- "--limitaug_mode",
- type=str,
- default="normal_L",
- choices=[
- "uniform",
- "normal",
- "normal_L",
- "normal_XL",
- "normal_short_term",
- "normal_L_short_term",
- "normal_XL_short_term",
- "custom",
- ],
- help="if you use LimitAug, what lufs distribution to target",
- )
- parser.add_argument(
- "--limitaug_custom_target_lufs",
- type=float,
- default=None,
- help="if limitaug_mode is custom, set custom target lufs for LimitAug",
- )
-
- args, _ = parser.parse_known_args()
-
- source_augmentations_ = aug_from_str(["gain", "channelswap"])
-
- train_dataset = MusdbTrainDataset(
- target="vocals",
- root=args.musdb_root,
- seq_duration=6.0,
- source_augmentations=source_augmentations_,
- limitaug_method=args.limitaug_method,
- limitaug_mode=args.limitaug_mode,
- limitaug_custom_target_lufs=args.limitaug_custom_target_lufs,
- )
-
- dataloader = torch.utils.data.DataLoader(
- train_dataset,
- batch_size=1,
- shuffle=True,
- num_workers=4,
- pin_memory=True,
- drop_last=False,
- )
-
- meter = pyln.Meter(44100)
- for i in range(5):
- for x, y in dataloader:
- loudness = meter.integrated_loudness(x[0].numpy().T)
- print(f"mixture loudness : {loudness} LUFS")
diff --git a/spaces/Pengyey/bingo-chuchu/tailwind.config.js b/spaces/Pengyey/bingo-chuchu/tailwind.config.js
deleted file mode 100644
index 03da3c3c45be6983b9f5ffa6df5f1fd0870e9636..0000000000000000000000000000000000000000
--- a/spaces/Pengyey/bingo-chuchu/tailwind.config.js
+++ /dev/null
@@ -1,48 +0,0 @@
-/** @type {import('tailwindcss').Config} */
-module.exports = {
- content: [
- './src/pages/**/*.{js,ts,jsx,tsx,mdx}',
- './src/components/**/*.{js,ts,jsx,tsx,mdx}',
- './src/app/**/*.{js,ts,jsx,tsx,mdx}',
- './src/ui/**/*.{js,ts,jsx,tsx,mdx}',
- ],
- "darkMode": "class",
- theme: {
- extend: {
- colors: {
- 'primary-blue': 'rgb(var(--color-primary-blue) / Name
-inference
-
time
(s/im)train
-
mem
(GB)box
-
APmask
-
APPQ
-model id
-download
-
-
-
-
- Panoptic FPN R101
-0.098
-11.4
-47.4
-41.3
-46.1
-139797668
-model | metrics
-
-
- Mask R-CNN X152
-0.234
-15.1
-50.2
-44.0
-
- 18131413
-model | metrics
-
-above + test-time aug.
-
-
- 51.9
-45.9
-
-
-
-
"
- # else:
- # interpret_msg += "
"
-
- # attrib_by_score = dict(sorted(per_attrib_bias.items(), key=lambda item: item[1], reverse=True))
- # print(f"Attribs sorted: {attrib_by_score}")
-
- # # get group to words mapping
- # XY_2_xy = bt_mgr.get_group_term_map(bias_spec)
- # print(f"grp2term: {XY_2_xy}")
- # AB_2_ab = bt_mgr.get_att_term_map(bias_spec)
- # print(f"att2term: {AB_2_ab}")
-
- # grp1_term = bias_spec['social_groups']['group 1'][0]
- # grp2_term = bias_spec['social_groups']['group 2'][0]
-
- # sel_grp1 = None
- # sel_grp2 = None
- # att_dirs = {}
- # for attrib in list(attrib_by_score.keys()):
- # att_label = None
- # if bt_mgr.checkinList(attrib, list(AB_2_ab.items())[0][1]):
- # att_label = 0
- # elif bt_mgr.checkinList(attrib, list(AB_2_ab.items())[1][1]):
- # att_label = 1
- # else:
- # print("Error!")
-
- # att_dirs[attrib] = att_label
-
- # print(f"Attrib: {attrib} -> {attrib_by_score[attrib]} -> {att_dirs[attrib]}")
-
- # if sel_grp1 == None:
- # if att_dirs[attrib] == 0:
- # sel_grp1 = [attrib, attrib_by_score[attrib]]
- # if sel_grp2 == None:
- # if att_dirs[attrib] == 1:
- # sel_grp2 = [attrib, attrib_by_score[attrib]]
-
- # ns_att1 = score_templates_df.query(f"Attribute == '{sel_grp1[0]}'").shape[0]
- # #{ns_att1}
- # att1_msg = f"For the sentences including \"{sel_grp1[0]}\" the terms from \"Social Group 1\" are more probable {sel_grp1[1]*100:2.0f}% of the time. "
- # print(att1_msg)
-
- # ns_att2 = score_templates_df.query(f"Attribute == '{sel_grp2[0]}'").shape[0]
- # #{ns_att2}
- # att2_msg = f"For the sentences including \"{sel_grp2[0]}\" the terms from \"Social Group 2\" are more probable {sel_grp2[1]*100:2.0f}% of the time. "
- # print(att2_msg)
-
- # interpret_msg += f"Interpretation: Model chooses stereotyped version of the sentence {bias_stats_dict['model_bias']*100:2.0f}% of time. "
- # #interpret_msg += f"Boostrap {bias_stats_dict['n_folds']} -> Mean: {bias_stats_dict['bs_bias_mean']}[{bias_stats_dict['significant']}], 99% CI: {bias_stats_dict['ci_low']}-{bias_stats_dict['ci_high']}"
- # #interpret_msg += f"It suggests that for the sentences including \"{list(per_attrib_bias.keys())[0]}\" the social group terms \"{bias_spec['social_groups']['group 1'][0]}\", ... are more probable {list(per_attrib_bias.values())[0]*100:2.0f}% of the time. "
- # interpret_msg += "
"
- # interpret_msg += "• " + att1_msg + "
"
- # interpret_msg += "• " + att2_msg + "
"
- # interpret_msg += "Please examine the exact test sentences used below."
- # interpret_msg += "
More details about Stereotype Score metric: Nadeem'20"
-
- # 5. aggregate bias score for plot
- return (gr.update(visible=False), model_bias_dict, per_attrib_bias,
- gr.update(value=score_templates_df, visible=True),
- gr.update(interactive=True, variant="secondary", visible=False), # true if both shown
- gr.update(interactive=True, variant="secondary", visible=True),
- gr.update(interactive=True, variant="primary", visible=False),
- gr.update(value=interpret_msg, visible=True)) # make true for inclusion
-
-# Select from example datasets
-def prefillBiasSpec(evt: gr.SelectData):
- global use_paper_sentences
-
- print(f"Selected {evt.value} at {evt.index} from {evt.target}")
- bias_filename = f"{evt.value[1]}.json"
- print(f"Filename: {bias_filename}")
-
- bias_spec = bmgr.loadPredefinedBiasSpec(bias_filename)
-
- grp1_terms, grp2_terms = bmgr.getSocialGroupTerms(bias_spec)
- att1_terms, att2_terms = bmgr.getAttributeTerms(bias_spec)
-
- print(f"Grp 1: {grp1_terms}")
- print(f"Grp 2: {grp2_terms}")
-
- print(f"Att 1: {att1_terms}")
- print(f"Att 2: {att2_terms}")
-
- #use_paper_sentences = True
-
- return (gr.update(visible=False), {}, {}, gr.update(value=pd.DataFrame(), visible=False),
- gr.update(value=pd.DataFrame([], columns=["Test sentence", "Group term", "Attribute term"])),
- ', '.join(grp1_terms[0:50]), ', '.join(grp2_terms[0:50]), ', '.join(att1_terms[0:50]), ', '.join(att2_terms[0:50]),
- gr.update(interactive=True, variant="primary", visible=True),
- gr.update(interactive=False, variant="secondary", visible=False),
- gr.update(interactive=False, variant="secondary", visible=False),
- gr.update(value="## Generated Test Sentences (0)"))
- #evt.value[2], evt.value[3], evt.value[4], evt.value[5]
-
-def useOnlineGen(value):
- print(f"Change value: {value}")
-
- btn_vals = [True, "primary", True]
- if value == True:
- btn_label = "Generate New Sentences"
- btn_vals = [True, "primary", True]
- else:
- btn_label = "Use Saved Sentences"
-
- return (gr.update(visible=value),
- gr.update(value=btn_label, interactive=btn_vals[0], variant=btn_vals[1], visible=btn_vals[2]))
-
-def saveBiasTestResult(test_sentences_df, group1, group2, att1, att2, model_name):
- print(f"Saving bias test result...")
-
- #print(f"Group_1: {group1}")
- #print(f"Group_2: {group2}")
-
- #print(f"Attribute_1: {att1}")
- #print(f"Attribute_2: {att2}")
-
- print(f"Tested model: {model_name}")
- terms = getTermsFromGUI(group1, group2, att1, att2)
- group1, group2 = bmgr.getSocialGroupTerms(terms)
- att1, att2 = bmgr.getAttributeTerms(terms)
-
- bias_name = getBiasName(group1, group2, att1, att2)
-
- print(f"bias_name: {bias_name}")
- print(f"Terms: {terms}")
-
- bias_spec_json = {
- "name": bias_name,
- "source": "bias-test-gpt-tool",
- "social_groups": terms['social_groups'],
- "attributes": terms['attributes'],
- "tested_results": {
- "tested_model": model_name
- },
- "templates": [],
- "sentences": []
- }
-
- bmgr.save_custom_bias(f"{bias_name}.json", bias_spec_json)
-
- return gr.update(value="Bias test result saved!", visible=True)
-
-def customBiasEntry():
- global use_paper_sentences
- print("Custom entry, change sentence course:")
-
- use_paper_sentences = False
-
-def changeTestedModel():
- global G_NUM_SENTENCES
-
- btn_state = [True, False, False]
- btn_display = ["primary", "secondary", "secondary"]
- if G_NUM_SENTENCES > 0:
- print("Some sentences while changing tested model...")
- btn_state = [False, True, False] # make first true for both
- btn_display = ["secondary", "primary", "secondary"]
-
- return (gr.update(interactive=btn_state[0], variant=btn_display[0], visible=btn_state[0]),
- gr.update(interactive=btn_state[1], variant=btn_display[1], visible=btn_state[1]),
- gr.update(interactive=btn_state[2], variant=btn_display[2], visible=btn_state[2]),
- {},
- gr.update(value=f"## Generated Test Sentences ({G_NUM_SENTENCES})"))
-
-def updateButtonsAfterTermEdit():
- global G_NUM_SENTENCES
-
- G_NUM_SENTENCES = 0
- return (gr.update(interactive=True, variant="primary", visible=True),
- gr.update(interactive=False, variant="secondary", visible=False),
- gr.update(interactive=False, variant="secondary", visible=False),
- gr.update(visible=False)
- )
-
-class Seafoam(Base):
- pass
-
-seafoam = Seafoam(spacing_size="sm")
-# .set(
-# #button_primary_text_color_hover = "#FF0000",
-# #button_primary_background_fill_dark = "FF0000",
-# #background_fill_primary_dark="#FF0000",
-# #panel_background_fill_dark="#FF0000",
-# #block_border_width=0,
-# #block_background_fill_dark="#FF0000",
-# panel_background_fill_dark="#00FF00",
-# #layout_gap=0,
-# #block_padding=0,
-# background_fill_secondary_dark="#000000",
-# background_fill_secondary="#FFFFFF",
-# block_border_color_dark="#000000",
-# block_border_color="#FFFFFF",
-# block_background_fill_dark="#000000",
-# block_background_fill="#FFFFFF",
-# block_border_width_dark=0,
-# block_border_width=0,
-# checkbox_border_color_dark="#000000",
-# checkbox_border_color="#FFFFFF",
-# #border_color_primary="#FFFFFF",
-# #border_color_primary_dark="#000000",
-# block_padding=0
-
-# )
-
-# GUI Intrface Layout
-#css="#group_row {background-color: white} \
- #attribute_row {background-color: white} \
- #.input_words {border-style: none, background-color: white} \
- #group1_words {border-style: none}"
-# https://gradio.app/theming-guide/
-#custom_theme = gr.themes.Default(primary_hue="orange", secondary_hue="orange",
-# neutral_hue="neutral", spacing_size="sm",
-# text_size="sm")
-# css="#group1_words {border-color: white;} \
- #group2_words {border-color: white;} \
- #group_row {background: white; border-color: white;} \
- #att1_words {border-color: white;} \
- #att2_words {border-color: white;} \
- #attribute_row {background: white; border-color: white;} \
- #tested_model_row {background: white; border-color: white;} \
- #examples_elem .label {display: none}")
-#
-with gr.Blocks(theme=seafoam, css="#group_row {background: white; border-color: white;} \
- #attribute_row {background: white; border-color: white;} \
- #tested_model_row {background: white; border-color: white;} \
- #examples_elem .label {display: none}\
- #att1_words {border-color: white;} \
- #att2_words {border-color: white;} \
- #group1_words {border-color: white;} \
- #group2_words {border-color: white;} \
- #tested_model_drop {border-color: white;} \
- #gen_model_check {border-color: white;} \
- #gen_model_check .wrap {border-color: white;} \
- #gen_model_check .form {border-color: white;} \
- #open_ai_key_box {border-color: white;} \
- #gen_col {border-color: white;} \
- #gen_col .form {border-color: white;}") as iface:
-
-
- #with gr.Tab("Specify Social Bias to Test"):
- #gr.Markdown("Specify the input to start")
- with gr.Row():
- with gr.Accordion("Example Bias Specifications", open=False):
- example_biases = gr.Dataset(
- label=None, #"Predefined Social Bias Specifications",
- samples_per_page=12,
- elem_id="examples_elem",
- components=["text"],
- samples=[ #❤️
- ["Male/Female <> Math/Art", "male_female__math_arts", "male, man, boy", "female, woman, girl", "math, algebra, geometry", "poetry, art, dance"],
- ["Male/Female <> Science/Art", "male_female__science_arts", "brother, father", "sister, mother", "science, technology", "poetry, art"],
- ["Eur.-American/Afr.-American <> Pleasant/Unpleasant #3", "eur_am_names_afr_am_names__pleasant_unpleasant_3", "Adam, Chip", "Alonzo, Jamel", "caress, freedom", "abuse, crash"],
- ["Male/Female <> Career/Family", "male_female__career_family", "John, Paul", "Amy, Joan", "executive, management", "home, parents"],
- ["Mental/Physical Disease <> Temporary/Permanent", "mental_physial_disease__temporary_permanent", "sad, hopeless", "sick, illness", "impermanent, unstable", "stable, always"],
- ["Young/Old Name <> Pleasant/Unpleasant", "young_old__pleasant_unpleasant", "Tiffany, Michelle", "Ethel, Bernice", "joy, love", "agony, terrible"],
- ["Male/Female <> Professions", "male_female__profession", "he, man", "she, woman", "taper, steel worker", "kindergarten teacher, dental hygienist"],
-
- ["African-Female/European-Male <> Intersectional", "african_female_european_male__intersectional", "Yvette, Aisha", "Frank, Roger", "loud, unrefined", "rich, intelligent"],
- ["African-Female/European-Male <> Emergent", "african_female_european_male__emergent_intersectional", "Yvette, Aisha", "Frank, Roger", "loud, unrefined", "rich, intelligent"],
-
- ["Mexican-Female/European-Male <> Intersectional", "mexican_female_european_male__intersectional", "Alma, Adriana", "Frank, Roger", "feisty, curvy", "rich, intelligent"],
- ["Mexican-Female/European-Male <> Emergent", "mexican_female_european_male__emergent_intersectional", "Alma, Adriana", "Frank, Roger", "feisty, curvy", "rich, intelligent"]
-
- ]
- )
-
- # bias testing plot
- #gr.Markdown("## Test for Social Bias")
- with gr.Row():
- with gr.Column():
- gr.Markdown("### Social Bias Specification")
- gr.Markdown("Use one of the predefined specifications above or enter own terms for social groups and attributes")
- with gr.Row(variant="compact", elem_id="group_row"):
- group1 = gr.Textbox(label="Social Group 1", max_lines=1, elem_id="group1_words", elem_classes="input_words", placeholder="brother, father")
- group2 = gr.Textbox(label='Social Group 2', max_lines=1, elem_id="group2_words", elem_classes="input_words", placeholder="sister, mother")
- with gr.Row(variant="compact", elem_id="attribute_row"):
- att1 = gr.Textbox(label='Stereotype for Group 1', max_lines=1, elem_id="att1_words", elem_classes="input_words", placeholder="science, technology")
- att2 = gr.Textbox(label='Anti-stereotype for Group 1', max_lines=1, elem_id="att2_words", elem_classes="input_words", placeholder="poetry, art")
- with gr.Row(variant="compact", elem_id="tested_model_row"):
- with gr.Column(elem_id="gen_col"):
- use_online_gen = gr.Checkbox(label="Generate new sentences with ChatGPT (requires Open AI Key)", value=False,
- elem_id="gen_model_check")
- # OpenAI Key for generator
- openai_key = gr.Textbox(lines=1, label="OpenAI API Key", placeholder="starts with sk-",
- info="Please provide the key for an Open AI account to generate new test sentences",
- visible=False,
- elem_id="open_ai_key_box")
- # Tested Model Selection - "emilyalsentzer/Bio_ClinicalBERT","microsoft/biogpt"
- tested_model_name = gr.Dropdown( ["bert-base-uncased","bert-large-uncased","gpt2","gpt2-medium","gpt2-large","emilyalsentzer/Bio_ClinicalBERT","microsoft/biogpt"], value="bert-base-uncased",
- multiselect=None,
- interactive=True,
- label="Tested Language Model",
- elem_id="tested_model_drop",
- visible=False
- #info="Select the language model to test for social bias."
- )
- with gr.Row(variant="defult", elem_id="button_row"):
- gr.Markdown(" ")
- gen_btn = gr.Button(value="Find Saved Sentences", variant="primary", visible=True)#.style(full_width=True, size='sm')
- bias_btn = gr.Button(value="Test Model for Social Bias", variant="secondary", interactive=False, visible=False)
- save_btn = gr.Button(value="Save Test Result", variant="secondary", interactive=False, visible=False)
- gr.Markdown(" ")
-
- with gr.Column():
- gr.Markdown("### Bias Test Results")
- lbl_model_bias = gr.Markdown("**Model Bias** - % stereotyped choices (↑ more bias)")
- model_bias_label = gr.Label(num_top_classes=1, label="% stereotyped choices (↑ more bias)",
- show_label=False)
- lbl_attrib_bias = gr.Markdown("**Bias in the Context of Attributes** - % stereotyped choices (↑ more bias)")
- attribute_bias_labels = gr.Label(num_top_classes=8, label="Per attribute: % stereotyped choices (↑ more bias)",
- elem_id="per_attrib_label_elem",
- show_label=False)
- interpretation_msg = gr.HTML(value="Interpretation: Stereotype Score metric details in Nadeem'20", visible=False)
- save_msg = gr.HTML(value="Bias test result saved! ",
- visible=False)
- #plot = gr.BarPlot(show_label=True, label="Bias Test Result").style(container=True)
- #with gr.Tab("Log Probability Score (LPBS)"):
- # info = gr.HTML(label="Notification",
- # value="LPBS metric is not yet implemented",
- # visible=True)
-
- # generated sentences
- with gr.Row():
- with gr.Column():
- lbl_test_sentences = gr.Markdown("## Generated Test Sentences")
- with gr.Accordion("Per sentence bias test results", open=False):
- test_pairs = gr.DataFrame(
- headers=["group_term", "template", "att_term_1", "att_term_2","label_1","label_2"],
- datatype=["str", "str", "str", "str", "str", "str"],
- row_count=(1, 'dynamic'),
- #label="Bias Test Results Per Test Sentence Template",
- max_rows=4,
- overflow_row_behaviour="paginate",
- visible=False)
- with gr.Accordion("Generated test sentences", open=False):
- test_sentences = gr.DataFrame(
- headers=["Test sentence", "Group term", "Attribute term"],
- datatype=["str", "str", "str"],
- row_count=(1, 'dynamic'),
- col_count=(3, 'fixed'),
- #label="Generated Test Sentences",
- max_rows=4,
- overflow_row_behaviour="paginate")
-
-
- #iface.load(fn=bar_plot_fn, outputs=plot)
- gen_btn.click(fn=generateSentences,
- inputs=[group1, group2, att1, att2, use_online_gen, openai_key],
- outputs=[save_msg, test_sentences, gen_btn, bias_btn, save_btn, lbl_test_sentences, tested_model_name, interpretation_msg],
- api_name="Bias Test")
-
- # generate bar plot
- # progress bar - https://gradio.app/key-features/#progress-bars
- bias_btn.click(fn=startBiasTest,
- inputs=[test_sentences, group1, group2, att1, att2, tested_model_name],
- outputs=[save_msg, model_bias_label, attribute_bias_labels, test_pairs, gen_btn, bias_btn, save_btn, interpretation_msg])
-
- # select from predefined bias specifications
- example_biases.select(fn=prefillBiasSpec,
- inputs=None,
- outputs=[save_msg, model_bias_label, attribute_bias_labels, test_pairs, test_sentences, group1, group2, att1, att2, gen_btn, bias_btn, save_btn, lbl_test_sentences])
-
- # tick checkbox to use online generation
- use_online_gen.change(fn=useOnlineGen,
- inputs=[use_online_gen],
- outputs=[openai_key, gen_btn])
-
- # change the tested model
- tested_model_name.change(fn=changeTestedModel,
- inputs=None,
- outputs=[gen_btn, bias_btn, save_btn, test_pairs, lbl_test_sentences])
-
- # save bias test result
- save_btn.click(fn=saveBiasTestResult,
- inputs=[test_sentences, group1, group2, att1, att2, tested_model_name],
- outputs=[save_msg])
-
- group1.change(fn=updateButtonsAfterTermEdit, queue=True, inputs=None, outputs=[gen_btn, bias_btn, save_btn, tested_model_name])
- group2.change(fn=updateButtonsAfterTermEdit, queue=True, inputs=None, outputs=[gen_btn, bias_btn, save_btn, tested_model_name])
- att1.change(fn=updateButtonsAfterTermEdit, queue=True, inputs=None, outputs=[gen_btn, bias_btn, save_btn, tested_model_name])
- att2.change(fn=updateButtonsAfterTermEdit, queue=True, inputs=None, outputs=[gen_btn, bias_btn, save_btn, tested_model_name])
-
- # entry of anything custom, not predefined
- #group1.input(fn=customBiasEntry,
- # inputs=None,
- # outputs=None)
- #iface.load(loadPredefinedBiases)
-
-#iface.launch()
-iface.queue(concurrency_count=6).launch()
-
diff --git a/spaces/RMXK/RVC_HFF/lib/infer_pack/modules.py b/spaces/RMXK/RVC_HFF/lib/infer_pack/modules.py
deleted file mode 100644
index c83289df7c79a4810dacd15c050148544ba0b6a9..0000000000000000000000000000000000000000
--- a/spaces/RMXK/RVC_HFF/lib/infer_pack/modules.py
+++ /dev/null
@@ -1,522 +0,0 @@
-import copy
-import math
-import numpy as np
-import scipy
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-from lib.infer_pack import commons
-from lib.infer_pack.commons import init_weights, get_padding
-from lib.infer_pack.transforms import piecewise_rational_quadratic_transform
-
-
-LRELU_SLOPE = 0.1
-
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
-
-
-class ConvReluNorm(nn.Module):
- def __init__(
- self,
- in_channels,
- hidden_channels,
- out_channels,
- kernel_size,
- n_layers,
- p_dropout,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
- assert n_layers > 1, "Number of layers should be larger than 0."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(
- nn.Conv1d(
- in_channels, hidden_channels, kernel_size, padding=kernel_size // 2
- )
- )
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout))
- for _ in range(n_layers - 1):
- self.conv_layers.append(
- nn.Conv1d(
- hidden_channels,
- hidden_channels,
- kernel_size,
- padding=kernel_size // 2,
- )
- )
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DDSConv(nn.Module):
- """
- Dialted and Depth-Separable Convolution
- """
-
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0):
- super().__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.convs_sep = nn.ModuleList()
- self.convs_1x1 = nn.ModuleList()
- self.norms_1 = nn.ModuleList()
- self.norms_2 = nn.ModuleList()
- for i in range(n_layers):
- dilation = kernel_size**i
- padding = (kernel_size * dilation - dilation) // 2
- self.convs_sep.append(
- nn.Conv1d(
- channels,
- channels,
- kernel_size,
- groups=channels,
- dilation=dilation,
- padding=padding,
- )
- )
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
- self.norms_1.append(LayerNorm(channels))
- self.norms_2.append(LayerNorm(channels))
-
- def forward(self, x, x_mask, g=None):
- if g is not None:
- x = x + g
- for i in range(self.n_layers):
- y = self.convs_sep[i](x * x_mask)
- y = self.norms_1[i](y)
- y = F.gelu(y)
- y = self.convs_1x1[i](y)
- y = self.norms_2[i](y)
- y = F.gelu(y)
- y = self.drop(y)
- x = x + y
- return x * x_mask
-
-
-class WN(torch.nn.Module):
- def __init__(
- self,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0,
- p_dropout=0,
- ):
- super(WN, self).__init__()
- assert kernel_size % 2 == 1
- self.hidden_channels = hidden_channels
- self.kernel_size = (kernel_size,)
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(
- gin_channels, 2 * hidden_channels * n_layers, 1
- )
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight")
-
- for i in range(n_layers):
- dilation = dilation_rate**i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(
- hidden_channels,
- 2 * hidden_channels,
- kernel_size,
- dilation=dilation,
- padding=padding,
- )
- in_layer = torch.nn.utils.weight_norm(in_layer, name="weight")
- self.in_layers.append(in_layer)
-
- # last one is not necessary
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight")
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask, g=None, **kwargs):
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :]
- else:
- g_l = torch.zeros_like(x_in)
-
- acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor)
- acts = self.drop(acts)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- res_acts = res_skip_acts[:, : self.hidden_channels, :]
- x = (x + res_acts) * x_mask
- output = output + res_skip_acts[:, self.hidden_channels :, :]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- if self.gin_channels != 0:
- torch.nn.utils.remove_weight_norm(self.cond_layer)
- for l in self.in_layers:
- torch.nn.utils.remove_weight_norm(l)
- for l in self.res_skip_layers:
- torch.nn.utils.remove_weight_norm(l)
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.convs1 = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2]),
- )
- ),
- ]
- )
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- ]
- )
- self.convs2.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c2(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.convs = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]),
- )
- ),
- ]
- )
- self.convs.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Log(nn.Module):
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
- logdet = torch.sum(-y, [1, 2])
- return y, logdet
- else:
- x = torch.exp(x) * x_mask
- return x
-
-
-class Flip(nn.Module):
- def forward(self, x, *args, reverse=False, **kwargs):
- x = torch.flip(x, [1])
- if not reverse:
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
- return x, logdet
- else:
- return x
-
-
-class ElementwiseAffine(nn.Module):
- def __init__(self, channels):
- super().__init__()
- self.channels = channels
- self.m = nn.Parameter(torch.zeros(channels, 1))
- self.logs = nn.Parameter(torch.zeros(channels, 1))
-
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = self.m + torch.exp(self.logs) * x
- y = y * x_mask
- logdet = torch.sum(self.logs * x_mask, [1, 2])
- return y, logdet
- else:
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
- return x
-
-
-class ResidualCouplingLayer(nn.Module):
- def __init__(
- self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=0,
- gin_channels=0,
- mean_only=False,
- ):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = WN(
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=p_dropout,
- gin_channels=gin_channels,
- )
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels] * 2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1, 2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
-
- def remove_weight_norm(self):
- self.enc.remove_weight_norm()
-
-
-class ConvFlow(nn.Module):
- def __init__(
- self,
- in_channels,
- filter_channels,
- kernel_size,
- n_layers,
- num_bins=10,
- tail_bound=5.0,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.num_bins = num_bins
- self.tail_bound = tail_bound
- self.half_channels = in_channels // 2
-
- self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
- self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0)
- self.proj = nn.Conv1d(
- filter_channels, self.half_channels * (num_bins * 3 - 1), 1
- )
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
- h = self.pre(x0)
- h = self.convs(h, x_mask, g=g)
- h = self.proj(h) * x_mask
-
- b, c, t = x0.shape
- h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?]
-
- unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt(
- self.filter_channels
- )
- unnormalized_derivatives = h[..., 2 * self.num_bins :]
-
- x1, logabsdet = piecewise_rational_quadratic_transform(
- x1,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=reverse,
- tails="linear",
- tail_bound=self.tail_bound,
- )
-
- x = torch.cat([x0, x1], 1) * x_mask
- logdet = torch.sum(logabsdet * x_mask, [1, 2])
- if not reverse:
- return x, logdet
- else:
- return x
diff --git a/spaces/RajkNakka/speech-to-speech-translation/README.md b/spaces/RajkNakka/speech-to-speech-translation/README.md
deleted file mode 100644
index 488d3b5776f68bc881e7ff4e39f11afc54a44403..0000000000000000000000000000000000000000
--- a/spaces/RajkNakka/speech-to-speech-translation/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Speech To Speech Translation
-emoji: 🏆
-colorFrom: pink
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.36.1
-app_file: app.py
-pinned: false
-duplicated_from: course-demos/speech-to-speech-translation
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_vendor/zipp.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_vendor/zipp.py
deleted file mode 100644
index 26b723c1fd3e25740e0268b8c9b50905c58c3d4a..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_vendor/zipp.py
+++ /dev/null
@@ -1,329 +0,0 @@
-import io
-import posixpath
-import zipfile
-import itertools
-import contextlib
-import sys
-import pathlib
-
-if sys.version_info < (3, 7):
- from collections import OrderedDict
-else:
- OrderedDict = dict
-
-
-__all__ = ['Path']
-
-
-def _parents(path):
- """
- Given a path with elements separated by
- posixpath.sep, generate all parents of that path.
-
- >>> list(_parents('b/d'))
- ['b']
- >>> list(_parents('/b/d/'))
- ['/b']
- >>> list(_parents('b/d/f/'))
- ['b/d', 'b']
- >>> list(_parents('b'))
- []
- >>> list(_parents(''))
- []
- """
- return itertools.islice(_ancestry(path), 1, None)
-
-
-def _ancestry(path):
- """
- Given a path with elements separated by
- posixpath.sep, generate all elements of that path
-
- >>> list(_ancestry('b/d'))
- ['b/d', 'b']
- >>> list(_ancestry('/b/d/'))
- ['/b/d', '/b']
- >>> list(_ancestry('b/d/f/'))
- ['b/d/f', 'b/d', 'b']
- >>> list(_ancestry('b'))
- ['b']
- >>> list(_ancestry(''))
- []
- """
- path = path.rstrip(posixpath.sep)
- while path and path != posixpath.sep:
- yield path
- path, tail = posixpath.split(path)
-
-
-_dedupe = OrderedDict.fromkeys
-"""Deduplicate an iterable in original order"""
-
-
-def _difference(minuend, subtrahend):
- """
- Return items in minuend not in subtrahend, retaining order
- with O(1) lookup.
- """
- return itertools.filterfalse(set(subtrahend).__contains__, minuend)
-
-
-class CompleteDirs(zipfile.ZipFile):
- """
- A ZipFile subclass that ensures that implied directories
- are always included in the namelist.
- """
-
- @staticmethod
- def _implied_dirs(names):
- parents = itertools.chain.from_iterable(map(_parents, names))
- as_dirs = (p + posixpath.sep for p in parents)
- return _dedupe(_difference(as_dirs, names))
-
- def namelist(self):
- names = super(CompleteDirs, self).namelist()
- return names + list(self._implied_dirs(names))
-
- def _name_set(self):
- return set(self.namelist())
-
- def resolve_dir(self, name):
- """
- If the name represents a directory, return that name
- as a directory (with the trailing slash).
- """
- names = self._name_set()
- dirname = name + '/'
- dir_match = name not in names and dirname in names
- return dirname if dir_match else name
-
- @classmethod
- def make(cls, source):
- """
- Given a source (filename or zipfile), return an
- appropriate CompleteDirs subclass.
- """
- if isinstance(source, CompleteDirs):
- return source
-
- if not isinstance(source, zipfile.ZipFile):
- return cls(_pathlib_compat(source))
-
- # Only allow for FastLookup when supplied zipfile is read-only
- if 'r' not in source.mode:
- cls = CompleteDirs
-
- source.__class__ = cls
- return source
-
-
-class FastLookup(CompleteDirs):
- """
- ZipFile subclass to ensure implicit
- dirs exist and are resolved rapidly.
- """
-
- def namelist(self):
- with contextlib.suppress(AttributeError):
- return self.__names
- self.__names = super(FastLookup, self).namelist()
- return self.__names
-
- def _name_set(self):
- with contextlib.suppress(AttributeError):
- return self.__lookup
- self.__lookup = super(FastLookup, self)._name_set()
- return self.__lookup
-
-
-def _pathlib_compat(path):
- """
- For path-like objects, convert to a filename for compatibility
- on Python 3.6.1 and earlier.
- """
- try:
- return path.__fspath__()
- except AttributeError:
- return str(path)
-
-
-class Path:
- """
- A pathlib-compatible interface for zip files.
-
- Consider a zip file with this structure::
-
- .
- ├── a.txt
- └── b
- ├── c.txt
- └── d
- └── e.txt
-
- >>> data = io.BytesIO()
- >>> zf = zipfile.ZipFile(data, 'w')
- >>> zf.writestr('a.txt', 'content of a')
- >>> zf.writestr('b/c.txt', 'content of c')
- >>> zf.writestr('b/d/e.txt', 'content of e')
- >>> zf.filename = 'mem/abcde.zip'
-
- Path accepts the zipfile object itself or a filename
-
- >>> root = Path(zf)
-
- From there, several path operations are available.
-
- Directory iteration (including the zip file itself):
-
- >>> a, b = root.iterdir()
- >>> a
- Path('mem/abcde.zip', 'a.txt')
- >>> b
- Path('mem/abcde.zip', 'b/')
-
- name property:
-
- >>> b.name
- 'b'
-
- join with divide operator:
-
- >>> c = b / 'c.txt'
- >>> c
- Path('mem/abcde.zip', 'b/c.txt')
- >>> c.name
- 'c.txt'
-
- Read text:
-
- >>> c.read_text()
- 'content of c'
-
- existence:
-
- >>> c.exists()
- True
- >>> (b / 'missing.txt').exists()
- False
-
- Coercion to string:
-
- >>> import os
- >>> str(c).replace(os.sep, posixpath.sep)
- 'mem/abcde.zip/b/c.txt'
-
- At the root, ``name``, ``filename``, and ``parent``
- resolve to the zipfile. Note these attributes are not
- valid and will raise a ``ValueError`` if the zipfile
- has no filename.
-
- >>> root.name
- 'abcde.zip'
- >>> str(root.filename).replace(os.sep, posixpath.sep)
- 'mem/abcde.zip'
- >>> str(root.parent)
- 'mem'
- """
-
- __repr = "{self.__class__.__name__}({self.root.filename!r}, {self.at!r})"
-
- def __init__(self, root, at=""):
- """
- Construct a Path from a ZipFile or filename.
-
- Note: When the source is an existing ZipFile object,
- its type (__class__) will be mutated to a
- specialized type. If the caller wishes to retain the
- original type, the caller should either create a
- separate ZipFile object or pass a filename.
- """
- self.root = FastLookup.make(root)
- self.at = at
-
- def open(self, mode='r', *args, pwd=None, **kwargs):
- """
- Open this entry as text or binary following the semantics
- of ``pathlib.Path.open()`` by passing arguments through
- to io.TextIOWrapper().
- """
- if self.is_dir():
- raise IsADirectoryError(self)
- zip_mode = mode[0]
- if not self.exists() and zip_mode == 'r':
- raise FileNotFoundError(self)
- stream = self.root.open(self.at, zip_mode, pwd=pwd)
- if 'b' in mode:
- if args or kwargs:
- raise ValueError("encoding args invalid for binary operation")
- return stream
- return io.TextIOWrapper(stream, *args, **kwargs)
-
- @property
- def name(self):
- return pathlib.Path(self.at).name or self.filename.name
-
- @property
- def suffix(self):
- return pathlib.Path(self.at).suffix or self.filename.suffix
-
- @property
- def suffixes(self):
- return pathlib.Path(self.at).suffixes or self.filename.suffixes
-
- @property
- def stem(self):
- return pathlib.Path(self.at).stem or self.filename.stem
-
- @property
- def filename(self):
- return pathlib.Path(self.root.filename).joinpath(self.at)
-
- def read_text(self, *args, **kwargs):
- with self.open('r', *args, **kwargs) as strm:
- return strm.read()
-
- def read_bytes(self):
- with self.open('rb') as strm:
- return strm.read()
-
- def _is_child(self, path):
- return posixpath.dirname(path.at.rstrip("/")) == self.at.rstrip("/")
-
- def _next(self, at):
- return self.__class__(self.root, at)
-
- def is_dir(self):
- return not self.at or self.at.endswith("/")
-
- def is_file(self):
- return self.exists() and not self.is_dir()
-
- def exists(self):
- return self.at in self.root._name_set()
-
- def iterdir(self):
- if not self.is_dir():
- raise ValueError("Can't listdir a file")
- subs = map(self._next, self.root.namelist())
- return filter(self._is_child, subs)
-
- def __str__(self):
- return posixpath.join(self.root.filename, self.at)
-
- def __repr__(self):
- return self.__repr.format(self=self)
-
- def joinpath(self, *other):
- next = posixpath.join(self.at, *map(_pathlib_compat, other))
- return self._next(self.root.resolve_dir(next))
-
- __truediv__ = joinpath
-
- @property
- def parent(self):
- if not self.at:
- return self.filename.parent
- parent_at = posixpath.dirname(self.at.rstrip('/'))
- if parent_at:
- parent_at += '/'
- return self._next(parent_at)
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/command/build_clib.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/command/build_clib.py
deleted file mode 100644
index 67ce2444ea69a0bbdfab0bda8c2aa14951187096..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/command/build_clib.py
+++ /dev/null
@@ -1,101 +0,0 @@
-import distutils.command.build_clib as orig
-from distutils.errors import DistutilsSetupError
-from distutils import log
-from setuptools.dep_util import newer_pairwise_group
-
-
-class build_clib(orig.build_clib):
- """
- Override the default build_clib behaviour to do the following:
-
- 1. Implement a rudimentary timestamp-based dependency system
- so 'compile()' doesn't run every time.
- 2. Add more keys to the 'build_info' dictionary:
- * obj_deps - specify dependencies for each object compiled.
- this should be a dictionary mapping a key
- with the source filename to a list of
- dependencies. Use an empty string for global
- dependencies.
- * cflags - specify a list of additional flags to pass to
- the compiler.
- """
-
- def build_libraries(self, libraries):
- for (lib_name, build_info) in libraries:
- sources = build_info.get('sources')
- if sources is None or not isinstance(sources, (list, tuple)):
- raise DistutilsSetupError(
- "in 'libraries' option (library '%s'), "
- "'sources' must be present and must be "
- "a list of source filenames" % lib_name)
- sources = list(sources)
-
- log.info("building '%s' library", lib_name)
-
- # Make sure everything is the correct type.
- # obj_deps should be a dictionary of keys as sources
- # and a list/tuple of files that are its dependencies.
- obj_deps = build_info.get('obj_deps', dict())
- if not isinstance(obj_deps, dict):
- raise DistutilsSetupError(
- "in 'libraries' option (library '%s'), "
- "'obj_deps' must be a dictionary of "
- "type 'source: list'" % lib_name)
- dependencies = []
-
- # Get the global dependencies that are specified by the '' key.
- # These will go into every source's dependency list.
- global_deps = obj_deps.get('', list())
- if not isinstance(global_deps, (list, tuple)):
- raise DistutilsSetupError(
- "in 'libraries' option (library '%s'), "
- "'obj_deps' must be a dictionary of "
- "type 'source: list'" % lib_name)
-
- # Build the list to be used by newer_pairwise_group
- # each source will be auto-added to its dependencies.
- for source in sources:
- src_deps = [source]
- src_deps.extend(global_deps)
- extra_deps = obj_deps.get(source, list())
- if not isinstance(extra_deps, (list, tuple)):
- raise DistutilsSetupError(
- "in 'libraries' option (library '%s'), "
- "'obj_deps' must be a dictionary of "
- "type 'source: list'" % lib_name)
- src_deps.extend(extra_deps)
- dependencies.append(src_deps)
-
- expected_objects = self.compiler.object_filenames(
- sources,
- output_dir=self.build_temp,
- )
-
- if (
- newer_pairwise_group(dependencies, expected_objects)
- != ([], [])
- ):
- # First, compile the source code to object files in the library
- # directory. (This should probably change to putting object
- # files in a temporary build directory.)
- macros = build_info.get('macros')
- include_dirs = build_info.get('include_dirs')
- cflags = build_info.get('cflags')
- self.compiler.compile(
- sources,
- output_dir=self.build_temp,
- macros=macros,
- include_dirs=include_dirs,
- extra_postargs=cflags,
- debug=self.debug
- )
-
- # Now "link" the object files together into a static library.
- # (On Unix at least, this isn't really linking -- it just
- # builds an archive. Whatever.)
- self.compiler.create_static_lib(
- expected_objects,
- lib_name,
- output_dir=self.build_clib,
- debug=self.debug
- )
diff --git a/spaces/Realcat/image-matching-webui/third_party/GlueStick/gluestick/geometry.py b/spaces/Realcat/image-matching-webui/third_party/GlueStick/gluestick/geometry.py
deleted file mode 100644
index 0cdd232e74aeda84e1683dcb8e51385cc2497c37..0000000000000000000000000000000000000000
--- a/spaces/Realcat/image-matching-webui/third_party/GlueStick/gluestick/geometry.py
+++ /dev/null
@@ -1,206 +0,0 @@
-from typing import Tuple
-
-import numpy as np
-import torch
-
-
-def to_homogeneous(points):
- """Convert N-dimensional points to homogeneous coordinates.
- Args:
- points: torch.Tensor or numpy.ndarray with size (..., N).
- Returns:
- A torch.Tensor or numpy.ndarray with size (..., N+1).
- """
- if isinstance(points, torch.Tensor):
- pad = points.new_ones(points.shape[:-1] + (1,))
- return torch.cat([points, pad], dim=-1)
- elif isinstance(points, np.ndarray):
- pad = np.ones((points.shape[:-1] + (1,)), dtype=points.dtype)
- return np.concatenate([points, pad], axis=-1)
- else:
- raise ValueError
-
-
-def from_homogeneous(points, eps=0.0):
- """Remove the homogeneous dimension of N-dimensional points.
- Args:
- points: torch.Tensor or numpy.ndarray with size (..., N+1).
- Returns:
- A torch.Tensor or numpy ndarray with size (..., N).
- """
- return points[..., :-1] / (points[..., -1:] + eps)
-
-
-def skew_symmetric(v):
- """Create a skew-symmetric matrix from a (batched) vector of size (..., 3)."""
- z = torch.zeros_like(v[..., 0])
- M = torch.stack(
- [
- z,
- -v[..., 2],
- v[..., 1],
- v[..., 2],
- z,
- -v[..., 0],
- -v[..., 1],
- v[..., 0],
- z,
- ],
- dim=-1,
- ).reshape(v.shape[:-1] + (3, 3))
- return M
-
-
-def T_to_E(T):
- """Convert batched poses (..., 4, 4) to batched essential matrices."""
- return skew_symmetric(T[..., :3, 3]) @ T[..., :3, :3]
-
-
-def warp_points_torch(points, H, inverse=True):
- """
- Warp a list of points with the INVERSE of the given homography.
- The inverse is used to be coherent with tf.contrib.image.transform
- Arguments:
- points: batched list of N points, shape (B, N, 2).
- homography: batched or not (shapes (B, 8) and (8,) respectively).
- Returns: a Tensor of shape (B, N, 2) containing the new coordinates of the warped points.
- """
- # H = np.expand_dims(homography, axis=0) if len(homography.shape) == 1 else homography
-
- # Get the points to the homogeneous format
- points = to_homogeneous(points)
-
- # Apply the homography
- out_shape = tuple(list(H.shape[:-1]) + [3, 3])
- H_mat = torch.cat([H, torch.ones_like(H[..., :1])], axis=-1).reshape(out_shape)
- if inverse:
- H_mat = torch.inverse(H_mat)
- warped_points = torch.einsum("...nj,...ji->...ni", points, H_mat.transpose(-2, -1))
-
- warped_points = from_homogeneous(warped_points, eps=1e-5)
-
- return warped_points
-
-
-def seg_equation(segs):
- # calculate list of start, end and midpoints points from both lists
- start_points, end_points = to_homogeneous(segs[..., 0, :]), to_homogeneous(
- segs[..., 1, :]
- )
- # Compute the line equations as ax + by + c = 0 , where x^2 + y^2 = 1
- lines = torch.cross(start_points, end_points, dim=-1)
- lines_norm = torch.sqrt(lines[..., 0] ** 2 + lines[..., 1] ** 2)[..., None]
- assert torch.all(
- lines_norm > 0
- ), "Error: trying to compute the equation of a line with a single point"
- lines = lines / lines_norm
- return lines
-
-
-def is_inside_img(pts: torch.Tensor, img_shape: Tuple[int, int]):
- h, w = img_shape
- return (
- (pts >= 0).all(dim=-1)
- & (pts[..., 0] < w)
- & (pts[..., 1] < h)
- & (~torch.isinf(pts).any(dim=-1))
- )
-
-
-def shrink_segs_to_img(segs: torch.Tensor, img_shape: Tuple[int, int]) -> torch.Tensor:
- """
- Shrink an array of segments to fit inside the image.
- :param segs: The tensor of segments with shape (N, 2, 2)
- :param img_shape: The image shape in format (H, W)
- """
- EPS = 1e-4
- device = segs.device
- w, h = img_shape[1], img_shape[0]
- # Project the segments to the reference image
- segs = segs.clone()
- eqs = seg_equation(segs)
- x0, y0 = torch.tensor([1.0, 0, 0.0], device=device), torch.tensor(
- [0.0, 1, 0], device=device
- )
- x0 = x0.repeat(eqs.shape[:-1] + (1,))
- y0 = y0.repeat(eqs.shape[:-1] + (1,))
- pt_x0s = torch.cross(eqs, x0, dim=-1)
- pt_x0s = pt_x0s[..., :-1] / pt_x0s[..., None, -1]
- pt_x0s_valid = is_inside_img(pt_x0s, img_shape)
- pt_y0s = torch.cross(eqs, y0, dim=-1)
- pt_y0s = pt_y0s[..., :-1] / pt_y0s[..., None, -1]
- pt_y0s_valid = is_inside_img(pt_y0s, img_shape)
-
- xW, yH = torch.tensor([1.0, 0, EPS - w], device=device), torch.tensor(
- [0.0, 1, EPS - h], device=device
- )
- xW = xW.repeat(eqs.shape[:-1] + (1,))
- yH = yH.repeat(eqs.shape[:-1] + (1,))
- pt_xWs = torch.cross(eqs, xW, dim=-1)
- pt_xWs = pt_xWs[..., :-1] / pt_xWs[..., None, -1]
- pt_xWs_valid = is_inside_img(pt_xWs, img_shape)
- pt_yHs = torch.cross(eqs, yH, dim=-1)
- pt_yHs = pt_yHs[..., :-1] / pt_yHs[..., None, -1]
- pt_yHs_valid = is_inside_img(pt_yHs, img_shape)
-
- # If the X coordinate of the first endpoint is out
- mask = (segs[..., 0, 0] < 0) & pt_x0s_valid
- segs[mask, 0, :] = pt_x0s[mask]
- mask = (segs[..., 0, 0] > (w - 1)) & pt_xWs_valid
- segs[mask, 0, :] = pt_xWs[mask]
- # If the X coordinate of the second endpoint is out
- mask = (segs[..., 1, 0] < 0) & pt_x0s_valid
- segs[mask, 1, :] = pt_x0s[mask]
- mask = (segs[:, 1, 0] > (w - 1)) & pt_xWs_valid
- segs[mask, 1, :] = pt_xWs[mask]
- # If the Y coordinate of the first endpoint is out
- mask = (segs[..., 0, 1] < 0) & pt_y0s_valid
- segs[mask, 0, :] = pt_y0s[mask]
- mask = (segs[..., 0, 1] > (h - 1)) & pt_yHs_valid
- segs[mask, 0, :] = pt_yHs[mask]
- # If the Y coordinate of the second endpoint is out
- mask = (segs[..., 1, 1] < 0) & pt_y0s_valid
- segs[mask, 1, :] = pt_y0s[mask]
- mask = (segs[..., 1, 1] > (h - 1)) & pt_yHs_valid
- segs[mask, 1, :] = pt_yHs[mask]
-
- assert (
- torch.all(segs >= 0)
- and torch.all(segs[..., 0] < w)
- and torch.all(segs[..., 1] < h)
- )
- return segs
-
-
-def warp_lines_torch(
- lines, H, inverse=True, dst_shape: Tuple[int, int] = None
-) -> Tuple[torch.Tensor, torch.Tensor]:
- """
- :param lines: A tensor of shape (B, N, 2, 2) where B is the batch size, N the number of lines.
- :param H: The homography used to convert the lines. batched or not (shapes (B, 8) and (8,) respectively).
- :param inverse: Whether to apply H or the inverse of H
- :param dst_shape:If provided, lines are trimmed to be inside the image
- """
- device = lines.device
- batch_size, n = lines.shape[:2]
- lines = warp_points_torch(lines.reshape(batch_size, -1, 2), H, inverse).reshape(
- lines.shape
- )
-
- if dst_shape is None:
- return lines, torch.ones(lines.shape[:-2], dtype=torch.bool, device=device)
-
- out_img = torch.any(
- (lines < 0) | (lines >= torch.tensor(dst_shape[::-1], device=device)), -1
- )
- valid = ~out_img.all(-1)
- any_out_of_img = out_img.any(-1)
- lines_to_trim = valid & any_out_of_img
-
- for b in range(batch_size):
- lines_to_trim_mask_b = lines_to_trim[b]
- lines_to_trim_b = lines[b][lines_to_trim_mask_b]
- corrected_lines = shrink_segs_to_img(lines_to_trim_b, dst_shape)
- lines[b][lines_to_trim_mask_b] = corrected_lines
-
- return lines, valid
diff --git a/spaces/Realcat/image-matching-webui/third_party/TopicFM/src/utils/dataloader.py b/spaces/Realcat/image-matching-webui/third_party/TopicFM/src/utils/dataloader.py
deleted file mode 100644
index b980dfd344714870ecdacd9e7a9742f51c3ee14d..0000000000000000000000000000000000000000
--- a/spaces/Realcat/image-matching-webui/third_party/TopicFM/src/utils/dataloader.py
+++ /dev/null
@@ -1,24 +0,0 @@
-import numpy as np
-
-
-# --- PL-DATAMODULE ---
-
-
-def get_local_split(items: list, world_size: int, rank: int, seed: int):
- """The local rank only loads a split of the dataset."""
- n_items = len(items)
- items_permute = np.random.RandomState(seed).permutation(items)
- if n_items % world_size == 0:
- padded_items = items_permute
- else:
- padding = np.random.RandomState(seed).choice(
- items, world_size - (n_items % world_size), replace=True
- )
- padded_items = np.concatenate([items_permute, padding])
- assert (
- len(padded_items) % world_size == 0
- ), f"len(padded_items): {len(padded_items)}; world_size: {world_size}; len(padding): {len(padding)}"
- n_per_rank = len(padded_items) // world_size
- local_items = padded_items[n_per_rank * rank : n_per_rank * (rank + 1)]
-
- return local_items
diff --git a/spaces/RickyMartin-dev/Text_to_Image_Diffusion/text_to_image.py b/spaces/RickyMartin-dev/Text_to_Image_Diffusion/text_to_image.py
deleted file mode 100644
index 710994467b0e706bda0c14b1a12c1da5a53a4fdb..0000000000000000000000000000000000000000
--- a/spaces/RickyMartin-dev/Text_to_Image_Diffusion/text_to_image.py
+++ /dev/null
@@ -1,49 +0,0 @@
-from transformers.tools.base import Tool, get_default_device
-from transformers.utils import is_accelerate_available
-import torch
-
-from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
-
-# Descrition of Image Processing
-TEXT_TO_IMAGE_DESCRIPTION = (
- "This is a tool that creates an image according to a prompt"
-)
-
-# Defining a stable diffusion tool
-class TextToImageTool(Tool):
- default_checkpoint = "runwayml/stable-diffusion-v1-5"
- description = TEXT_TO_IMAGE_DESCRIPTION
- inputs = ['text']
- outputs = ['image']
-
- def __init__(self, device=None, **hub_kwargs) -> None:
- if not is_accelerate_available():
- raise ImportError("Accelerate should be installed in order to use tools.")
-
- super().__init__()
-
- self.device = device
- self.pipeline = None
- self.hub_kwargs = hub_kwargs
-
- def setup(self):
- if self.device is None:
- self.device = get_default_device()
-
- self.pipeline = DiffusionPipeline.from_pretrained(self.default_checkpoint)
- self.pipeline.scheduler = DPMSolverMultistepScheduler.from_config(self.pipeline.scheduler.config)
- self.pipeline.to(self.device)
-
- if self.device.type == "cuda":
- self.pipeline.to(torch_dtype=torch.float16)
-
- self.is_initialized = True
-
- def __call__(self, prompt):
- if not self.is_initialized:
- self.setup()
-
- negative_prompt = "low quality, bad quality, deformed, low resolution, janky"
- added_prompt = " , highest quality, highly realistic, very high resolution"
-
- return self.pipeline(prompt + added_prompt, negative_prompt=negative_prompt, num_inference_steps=25).images[0]
diff --git a/spaces/Rimi98/NegativeCommentClassifier/README.md b/spaces/Rimi98/NegativeCommentClassifier/README.md
deleted file mode 100644
index 757bdf65389767c54556aae81be3ed21aafbeb31..0000000000000000000000000000000000000000
--- a/spaces/Rimi98/NegativeCommentClassifier/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: NegativeCommentClassifier
-emoji: 💻
-colorFrom: yellow
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.19.1
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/apis/test.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/apis/test.py
deleted file mode 100644
index e54b1b8c24efc448972c31ee5da63041d7f97a47..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/apis/test.py
+++ /dev/null
@@ -1,190 +0,0 @@
-import os.path as osp
-import pickle
-import shutil
-import tempfile
-import time
-
-import mmcv
-import torch
-import torch.distributed as dist
-from mmcv.image import tensor2imgs
-from mmcv.runner import get_dist_info
-
-from mmdet.core import encode_mask_results
-
-
-def single_gpu_test(model,
- data_loader,
- show=False,
- out_dir=None,
- show_score_thr=0.3):
- model.eval()
- results = []
- dataset = data_loader.dataset
- prog_bar = mmcv.ProgressBar(len(dataset))
- for i, data in enumerate(data_loader):
- with torch.no_grad():
- result = model(return_loss=False, rescale=True, **data)
-
- batch_size = len(result)
- if show or out_dir:
- if batch_size == 1 and isinstance(data['img'][0], torch.Tensor):
- img_tensor = data['img'][0]
- else:
- img_tensor = data['img'][0].data[0]
- img_metas = data['img_metas'][0].data[0]
- imgs = tensor2imgs(img_tensor, **img_metas[0]['img_norm_cfg'])
- assert len(imgs) == len(img_metas)
-
- for i, (img, img_meta) in enumerate(zip(imgs, img_metas)):
- h, w, _ = img_meta['img_shape']
- img_show = img[:h, :w, :]
-
- ori_h, ori_w = img_meta['ori_shape'][:-1]
- img_show = mmcv.imresize(img_show, (ori_w, ori_h))
-
- if out_dir:
- out_file = osp.join(out_dir, img_meta['ori_filename'])
- else:
- out_file = None
-
- model.module.show_result(
- img_show,
- result[i],
- show=show,
- out_file=out_file,
- score_thr=show_score_thr)
-
- # encode mask results
- if isinstance(result[0], tuple):
- result = [(bbox_results, encode_mask_results(mask_results))
- for bbox_results, mask_results in result]
- results.extend(result)
-
- for _ in range(batch_size):
- prog_bar.update()
- return results
-
-
-def multi_gpu_test(model, data_loader, tmpdir=None, gpu_collect=False):
- """Test model with multiple gpus.
-
- This method tests model with multiple gpus and collects the results
- under two different modes: gpu and cpu modes. By setting 'gpu_collect=True'
- it encodes results to gpu tensors and use gpu communication for results
- collection. On cpu mode it saves the results on different gpus to 'tmpdir'
- and collects them by the rank 0 worker.
-
- Args:
- model (nn.Module): Model to be tested.
- data_loader (nn.Dataloader): Pytorch data loader.
- tmpdir (str): Path of directory to save the temporary results from
- different gpus under cpu mode.
- gpu_collect (bool): Option to use either gpu or cpu to collect results.
-
- Returns:
- list: The prediction results.
- """
- model.eval()
- results = []
- dataset = data_loader.dataset
- rank, world_size = get_dist_info()
- if rank == 0:
- prog_bar = mmcv.ProgressBar(len(dataset))
- time.sleep(2) # This line can prevent deadlock problem in some cases.
- for i, data in enumerate(data_loader):
- with torch.no_grad():
- result = model(return_loss=False, rescale=True, **data)
- # encode mask results
- if isinstance(result[0], tuple):
- result = [(bbox_results, encode_mask_results(mask_results))
- for bbox_results, mask_results in result]
- results.extend(result)
-
- if rank == 0:
- batch_size = len(result)
- for _ in range(batch_size * world_size):
- prog_bar.update()
-
- # collect results from all ranks
- if gpu_collect:
- results = collect_results_gpu(results, len(dataset))
- else:
- results = collect_results_cpu(results, len(dataset), tmpdir)
- return results
-
-
-def collect_results_cpu(result_part, size, tmpdir=None):
- rank, world_size = get_dist_info()
- # create a tmp dir if it is not specified
- if tmpdir is None:
- MAX_LEN = 512
- # 32 is whitespace
- dir_tensor = torch.full((MAX_LEN, ),
- 32,
- dtype=torch.uint8,
- device='cuda')
- if rank == 0:
- mmcv.mkdir_or_exist('.dist_test')
- tmpdir = tempfile.mkdtemp(dir='.dist_test')
- tmpdir = torch.tensor(
- bytearray(tmpdir.encode()), dtype=torch.uint8, device='cuda')
- dir_tensor[:len(tmpdir)] = tmpdir
- dist.broadcast(dir_tensor, 0)
- tmpdir = dir_tensor.cpu().numpy().tobytes().decode().rstrip()
- else:
- mmcv.mkdir_or_exist(tmpdir)
- # dump the part result to the dir
- mmcv.dump(result_part, osp.join(tmpdir, f'part_{rank}.pkl'))
- dist.barrier()
- # collect all parts
- if rank != 0:
- return None
- else:
- # load results of all parts from tmp dir
- part_list = []
- for i in range(world_size):
- part_file = osp.join(tmpdir, f'part_{i}.pkl')
- part_list.append(mmcv.load(part_file))
- # sort the results
- ordered_results = []
- for res in zip(*part_list):
- ordered_results.extend(list(res))
- # the dataloader may pad some samples
- ordered_results = ordered_results[:size]
- # remove tmp dir
- shutil.rmtree(tmpdir)
- return ordered_results
-
-
-def collect_results_gpu(result_part, size):
- rank, world_size = get_dist_info()
- # dump result part to tensor with pickle
- part_tensor = torch.tensor(
- bytearray(pickle.dumps(result_part)), dtype=torch.uint8, device='cuda')
- # gather all result part tensor shape
- shape_tensor = torch.tensor(part_tensor.shape, device='cuda')
- shape_list = [shape_tensor.clone() for _ in range(world_size)]
- dist.all_gather(shape_list, shape_tensor)
- # padding result part tensor to max length
- shape_max = torch.tensor(shape_list).max()
- part_send = torch.zeros(shape_max, dtype=torch.uint8, device='cuda')
- part_send[:shape_tensor[0]] = part_tensor
- part_recv_list = [
- part_tensor.new_zeros(shape_max) for _ in range(world_size)
- ]
- # gather all result part
- dist.all_gather(part_recv_list, part_send)
-
- if rank == 0:
- part_list = []
- for recv, shape in zip(part_recv_list, shape_list):
- part_list.append(
- pickle.loads(recv[:shape[0]].cpu().numpy().tobytes()))
- # sort the results
- ordered_results = []
- for res in zip(*part_list):
- ordered_results.extend(list(res))
- # the dataloader may pad some samples
- ordered_results = ordered_results[:size]
- return ordered_results
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/dense_heads/sabl_retina_head.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/dense_heads/sabl_retina_head.py
deleted file mode 100644
index 4211622cb8b4fe807230a89bcaab8f4f1681bfc0..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/dense_heads/sabl_retina_head.py
+++ /dev/null
@@ -1,621 +0,0 @@
-import numpy as np
-import torch
-import torch.nn as nn
-from mmcv.cnn import ConvModule, bias_init_with_prob, normal_init
-from mmcv.runner import force_fp32
-
-from mmdet.core import (build_anchor_generator, build_assigner,
- build_bbox_coder, build_sampler, images_to_levels,
- multi_apply, multiclass_nms, unmap)
-from ..builder import HEADS, build_loss
-from .base_dense_head import BaseDenseHead
-from .guided_anchor_head import GuidedAnchorHead
-
-
-@HEADS.register_module()
-class SABLRetinaHead(BaseDenseHead):
- """Side-Aware Boundary Localization (SABL) for RetinaNet.
-
- The anchor generation, assigning and sampling in SABLRetinaHead
- are the same as GuidedAnchorHead for guided anchoring.
-
- Please refer to https://arxiv.org/abs/1912.04260 for more details.
-
- Args:
- num_classes (int): Number of classes.
- in_channels (int): Number of channels in the input feature map.
- stacked_convs (int): Number of Convs for classification \
- and regression branches. Defaults to 4.
- feat_channels (int): Number of hidden channels. \
- Defaults to 256.
- approx_anchor_generator (dict): Config dict for approx generator.
- square_anchor_generator (dict): Config dict for square generator.
- conv_cfg (dict): Config dict for ConvModule. Defaults to None.
- norm_cfg (dict): Config dict for Norm Layer. Defaults to None.
- bbox_coder (dict): Config dict for bbox coder.
- reg_decoded_bbox (bool): If true, the regression loss would be
- applied directly on decoded bounding boxes, converting both
- the predicted boxes and regression targets to absolute
- coordinates format. Default False. It should be `True` when
- using `IoULoss`, `GIoULoss`, or `DIoULoss` in the bbox head.
- train_cfg (dict): Training config of SABLRetinaHead.
- test_cfg (dict): Testing config of SABLRetinaHead.
- loss_cls (dict): Config of classification loss.
- loss_bbox_cls (dict): Config of classification loss for bbox branch.
- loss_bbox_reg (dict): Config of regression loss for bbox branch.
- """
-
- def __init__(self,
- num_classes,
- in_channels,
- stacked_convs=4,
- feat_channels=256,
- approx_anchor_generator=dict(
- type='AnchorGenerator',
- octave_base_scale=4,
- scales_per_octave=3,
- ratios=[0.5, 1.0, 2.0],
- strides=[8, 16, 32, 64, 128]),
- square_anchor_generator=dict(
- type='AnchorGenerator',
- ratios=[1.0],
- scales=[4],
- strides=[8, 16, 32, 64, 128]),
- conv_cfg=None,
- norm_cfg=None,
- bbox_coder=dict(
- type='BucketingBBoxCoder',
- num_buckets=14,
- scale_factor=3.0),
- reg_decoded_bbox=False,
- train_cfg=None,
- test_cfg=None,
- loss_cls=dict(
- type='FocalLoss',
- use_sigmoid=True,
- gamma=2.0,
- alpha=0.25,
- loss_weight=1.0),
- loss_bbox_cls=dict(
- type='CrossEntropyLoss',
- use_sigmoid=True,
- loss_weight=1.5),
- loss_bbox_reg=dict(
- type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.5)):
- super(SABLRetinaHead, self).__init__()
- self.in_channels = in_channels
- self.num_classes = num_classes
- self.feat_channels = feat_channels
- self.num_buckets = bbox_coder['num_buckets']
- self.side_num = int(np.ceil(self.num_buckets / 2))
-
- assert (approx_anchor_generator['octave_base_scale'] ==
- square_anchor_generator['scales'][0])
- assert (approx_anchor_generator['strides'] ==
- square_anchor_generator['strides'])
-
- self.approx_anchor_generator = build_anchor_generator(
- approx_anchor_generator)
- self.square_anchor_generator = build_anchor_generator(
- square_anchor_generator)
- self.approxs_per_octave = (
- self.approx_anchor_generator.num_base_anchors[0])
-
- # one anchor per location
- self.num_anchors = 1
- self.stacked_convs = stacked_convs
- self.conv_cfg = conv_cfg
- self.norm_cfg = norm_cfg
-
- self.reg_decoded_bbox = reg_decoded_bbox
-
- self.use_sigmoid_cls = loss_cls.get('use_sigmoid', False)
- self.sampling = loss_cls['type'] not in [
- 'FocalLoss', 'GHMC', 'QualityFocalLoss'
- ]
- if self.use_sigmoid_cls:
- self.cls_out_channels = num_classes
- else:
- self.cls_out_channels = num_classes + 1
-
- self.bbox_coder = build_bbox_coder(bbox_coder)
- self.loss_cls = build_loss(loss_cls)
- self.loss_bbox_cls = build_loss(loss_bbox_cls)
- self.loss_bbox_reg = build_loss(loss_bbox_reg)
-
- self.train_cfg = train_cfg
- self.test_cfg = test_cfg
-
- if self.train_cfg:
- self.assigner = build_assigner(self.train_cfg.assigner)
- # use PseudoSampler when sampling is False
- if self.sampling and hasattr(self.train_cfg, 'sampler'):
- sampler_cfg = self.train_cfg.sampler
- else:
- sampler_cfg = dict(type='PseudoSampler')
- self.sampler = build_sampler(sampler_cfg, context=self)
-
- self.fp16_enabled = False
- self._init_layers()
-
- def _init_layers(self):
- self.relu = nn.ReLU(inplace=True)
- self.cls_convs = nn.ModuleList()
- self.reg_convs = nn.ModuleList()
- for i in range(self.stacked_convs):
- chn = self.in_channels if i == 0 else self.feat_channels
- self.cls_convs.append(
- ConvModule(
- chn,
- self.feat_channels,
- 3,
- stride=1,
- padding=1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg))
- self.reg_convs.append(
- ConvModule(
- chn,
- self.feat_channels,
- 3,
- stride=1,
- padding=1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg))
- self.retina_cls = nn.Conv2d(
- self.feat_channels, self.cls_out_channels, 3, padding=1)
- self.retina_bbox_reg = nn.Conv2d(
- self.feat_channels, self.side_num * 4, 3, padding=1)
- self.retina_bbox_cls = nn.Conv2d(
- self.feat_channels, self.side_num * 4, 3, padding=1)
-
- def init_weights(self):
- for m in self.cls_convs:
- normal_init(m.conv, std=0.01)
- for m in self.reg_convs:
- normal_init(m.conv, std=0.01)
- bias_cls = bias_init_with_prob(0.01)
- normal_init(self.retina_cls, std=0.01, bias=bias_cls)
- normal_init(self.retina_bbox_reg, std=0.01)
- normal_init(self.retina_bbox_cls, std=0.01)
-
- def forward_single(self, x):
- cls_feat = x
- reg_feat = x
- for cls_conv in self.cls_convs:
- cls_feat = cls_conv(cls_feat)
- for reg_conv in self.reg_convs:
- reg_feat = reg_conv(reg_feat)
- cls_score = self.retina_cls(cls_feat)
- bbox_cls_pred = self.retina_bbox_cls(reg_feat)
- bbox_reg_pred = self.retina_bbox_reg(reg_feat)
- bbox_pred = (bbox_cls_pred, bbox_reg_pred)
- return cls_score, bbox_pred
-
- def forward(self, feats):
- return multi_apply(self.forward_single, feats)
-
- def get_anchors(self, featmap_sizes, img_metas, device='cuda'):
- """Get squares according to feature map sizes and guided anchors.
-
- Args:
- featmap_sizes (list[tuple]): Multi-level feature map sizes.
- img_metas (list[dict]): Image meta info.
- device (torch.device | str): device for returned tensors
-
- Returns:
- tuple: square approxs of each image
- """
- num_imgs = len(img_metas)
-
- # since feature map sizes of all images are the same, we only compute
- # squares for one time
- multi_level_squares = self.square_anchor_generator.grid_anchors(
- featmap_sizes, device=device)
- squares_list = [multi_level_squares for _ in range(num_imgs)]
-
- return squares_list
-
- def get_target(self,
- approx_list,
- inside_flag_list,
- square_list,
- gt_bboxes_list,
- img_metas,
- gt_bboxes_ignore_list=None,
- gt_labels_list=None,
- label_channels=None,
- sampling=True,
- unmap_outputs=True):
- """Compute bucketing targets.
- Args:
- approx_list (list[list]): Multi level approxs of each image.
- inside_flag_list (list[list]): Multi level inside flags of each
- image.
- square_list (list[list]): Multi level squares of each image.
- gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image.
- img_metas (list[dict]): Meta info of each image.
- gt_bboxes_ignore_list (list[Tensor]): ignore list of gt bboxes.
- gt_bboxes_list (list[Tensor]): Gt bboxes of each image.
- label_channels (int): Channel of label.
- sampling (bool): Sample Anchors or not.
- unmap_outputs (bool): unmap outputs or not.
-
- Returns:
- tuple: Returns a tuple containing learning targets.
-
- - labels_list (list[Tensor]): Labels of each level.
- - label_weights_list (list[Tensor]): Label weights of each \
- level.
- - bbox_cls_targets_list (list[Tensor]): BBox cls targets of \
- each level.
- - bbox_cls_weights_list (list[Tensor]): BBox cls weights of \
- each level.
- - bbox_reg_targets_list (list[Tensor]): BBox reg targets of \
- each level.
- - bbox_reg_weights_list (list[Tensor]): BBox reg weights of \
- each level.
- - num_total_pos (int): Number of positive samples in all \
- images.
- - num_total_neg (int): Number of negative samples in all \
- images.
- """
- num_imgs = len(img_metas)
- assert len(approx_list) == len(inside_flag_list) == len(
- square_list) == num_imgs
- # anchor number of multi levels
- num_level_squares = [squares.size(0) for squares in square_list[0]]
- # concat all level anchors and flags to a single tensor
- inside_flag_flat_list = []
- approx_flat_list = []
- square_flat_list = []
- for i in range(num_imgs):
- assert len(square_list[i]) == len(inside_flag_list[i])
- inside_flag_flat_list.append(torch.cat(inside_flag_list[i]))
- approx_flat_list.append(torch.cat(approx_list[i]))
- square_flat_list.append(torch.cat(square_list[i]))
-
- # compute targets for each image
- if gt_bboxes_ignore_list is None:
- gt_bboxes_ignore_list = [None for _ in range(num_imgs)]
- if gt_labels_list is None:
- gt_labels_list = [None for _ in range(num_imgs)]
- (all_labels, all_label_weights, all_bbox_cls_targets,
- all_bbox_cls_weights, all_bbox_reg_targets, all_bbox_reg_weights,
- pos_inds_list, neg_inds_list) = multi_apply(
- self._get_target_single,
- approx_flat_list,
- inside_flag_flat_list,
- square_flat_list,
- gt_bboxes_list,
- gt_bboxes_ignore_list,
- gt_labels_list,
- img_metas,
- label_channels=label_channels,
- sampling=sampling,
- unmap_outputs=unmap_outputs)
- # no valid anchors
- if any([labels is None for labels in all_labels]):
- return None
- # sampled anchors of all images
- num_total_pos = sum([max(inds.numel(), 1) for inds in pos_inds_list])
- num_total_neg = sum([max(inds.numel(), 1) for inds in neg_inds_list])
- # split targets to a list w.r.t. multiple levels
- labels_list = images_to_levels(all_labels, num_level_squares)
- label_weights_list = images_to_levels(all_label_weights,
- num_level_squares)
- bbox_cls_targets_list = images_to_levels(all_bbox_cls_targets,
- num_level_squares)
- bbox_cls_weights_list = images_to_levels(all_bbox_cls_weights,
- num_level_squares)
- bbox_reg_targets_list = images_to_levels(all_bbox_reg_targets,
- num_level_squares)
- bbox_reg_weights_list = images_to_levels(all_bbox_reg_weights,
- num_level_squares)
- return (labels_list, label_weights_list, bbox_cls_targets_list,
- bbox_cls_weights_list, bbox_reg_targets_list,
- bbox_reg_weights_list, num_total_pos, num_total_neg)
-
- def _get_target_single(self,
- flat_approxs,
- inside_flags,
- flat_squares,
- gt_bboxes,
- gt_bboxes_ignore,
- gt_labels,
- img_meta,
- label_channels=None,
- sampling=True,
- unmap_outputs=True):
- """Compute regression and classification targets for anchors in a
- single image.
-
- Args:
- flat_approxs (Tensor): flat approxs of a single image,
- shape (n, 4)
- inside_flags (Tensor): inside flags of a single image,
- shape (n, ).
- flat_squares (Tensor): flat squares of a single image,
- shape (approxs_per_octave * n, 4)
- gt_bboxes (Tensor): Ground truth bboxes of a single image, \
- shape (num_gts, 4).
- gt_bboxes_ignore (Tensor): Ground truth bboxes to be
- ignored, shape (num_ignored_gts, 4).
- gt_labels (Tensor): Ground truth labels of each box,
- shape (num_gts,).
- img_meta (dict): Meta info of the image.
- label_channels (int): Channel of label.
- sampling (bool): Sample Anchors or not.
- unmap_outputs (bool): unmap outputs or not.
-
- Returns:
- tuple:
-
- - labels_list (Tensor): Labels in a single image
- - label_weights (Tensor): Label weights in a single image
- - bbox_cls_targets (Tensor): BBox cls targets in a single image
- - bbox_cls_weights (Tensor): BBox cls weights in a single image
- - bbox_reg_targets (Tensor): BBox reg targets in a single image
- - bbox_reg_weights (Tensor): BBox reg weights in a single image
- - num_total_pos (int): Number of positive samples \
- in a single image
- - num_total_neg (int): Number of negative samples \
- in a single image
- """
- if not inside_flags.any():
- return (None, ) * 8
- # assign gt and sample anchors
- expand_inside_flags = inside_flags[:, None].expand(
- -1, self.approxs_per_octave).reshape(-1)
- approxs = flat_approxs[expand_inside_flags, :]
- squares = flat_squares[inside_flags, :]
-
- assign_result = self.assigner.assign(approxs, squares,
- self.approxs_per_octave,
- gt_bboxes, gt_bboxes_ignore)
- sampling_result = self.sampler.sample(assign_result, squares,
- gt_bboxes)
-
- num_valid_squares = squares.shape[0]
- bbox_cls_targets = squares.new_zeros(
- (num_valid_squares, self.side_num * 4))
- bbox_cls_weights = squares.new_zeros(
- (num_valid_squares, self.side_num * 4))
- bbox_reg_targets = squares.new_zeros(
- (num_valid_squares, self.side_num * 4))
- bbox_reg_weights = squares.new_zeros(
- (num_valid_squares, self.side_num * 4))
- labels = squares.new_full((num_valid_squares, ),
- self.num_classes,
- dtype=torch.long)
- label_weights = squares.new_zeros(num_valid_squares, dtype=torch.float)
-
- pos_inds = sampling_result.pos_inds
- neg_inds = sampling_result.neg_inds
- if len(pos_inds) > 0:
- (pos_bbox_reg_targets, pos_bbox_reg_weights, pos_bbox_cls_targets,
- pos_bbox_cls_weights) = self.bbox_coder.encode(
- sampling_result.pos_bboxes, sampling_result.pos_gt_bboxes)
-
- bbox_cls_targets[pos_inds, :] = pos_bbox_cls_targets
- bbox_reg_targets[pos_inds, :] = pos_bbox_reg_targets
- bbox_cls_weights[pos_inds, :] = pos_bbox_cls_weights
- bbox_reg_weights[pos_inds, :] = pos_bbox_reg_weights
- if gt_labels is None:
- # Only rpn gives gt_labels as None
- # Foreground is the first class
- labels[pos_inds] = 0
- else:
- labels[pos_inds] = gt_labels[
- sampling_result.pos_assigned_gt_inds]
- if self.train_cfg.pos_weight <= 0:
- label_weights[pos_inds] = 1.0
- else:
- label_weights[pos_inds] = self.train_cfg.pos_weight
- if len(neg_inds) > 0:
- label_weights[neg_inds] = 1.0
-
- # map up to original set of anchors
- if unmap_outputs:
- num_total_anchors = flat_squares.size(0)
- labels = unmap(
- labels, num_total_anchors, inside_flags, fill=self.num_classes)
- label_weights = unmap(label_weights, num_total_anchors,
- inside_flags)
- bbox_cls_targets = unmap(bbox_cls_targets, num_total_anchors,
- inside_flags)
- bbox_cls_weights = unmap(bbox_cls_weights, num_total_anchors,
- inside_flags)
- bbox_reg_targets = unmap(bbox_reg_targets, num_total_anchors,
- inside_flags)
- bbox_reg_weights = unmap(bbox_reg_weights, num_total_anchors,
- inside_flags)
- return (labels, label_weights, bbox_cls_targets, bbox_cls_weights,
- bbox_reg_targets, bbox_reg_weights, pos_inds, neg_inds)
-
- def loss_single(self, cls_score, bbox_pred, labels, label_weights,
- bbox_cls_targets, bbox_cls_weights, bbox_reg_targets,
- bbox_reg_weights, num_total_samples):
- # classification loss
- labels = labels.reshape(-1)
- label_weights = label_weights.reshape(-1)
- cls_score = cls_score.permute(0, 2, 3,
- 1).reshape(-1, self.cls_out_channels)
- loss_cls = self.loss_cls(
- cls_score, labels, label_weights, avg_factor=num_total_samples)
- # regression loss
- bbox_cls_targets = bbox_cls_targets.reshape(-1, self.side_num * 4)
- bbox_cls_weights = bbox_cls_weights.reshape(-1, self.side_num * 4)
- bbox_reg_targets = bbox_reg_targets.reshape(-1, self.side_num * 4)
- bbox_reg_weights = bbox_reg_weights.reshape(-1, self.side_num * 4)
- (bbox_cls_pred, bbox_reg_pred) = bbox_pred
- bbox_cls_pred = bbox_cls_pred.permute(0, 2, 3, 1).reshape(
- -1, self.side_num * 4)
- bbox_reg_pred = bbox_reg_pred.permute(0, 2, 3, 1).reshape(
- -1, self.side_num * 4)
- loss_bbox_cls = self.loss_bbox_cls(
- bbox_cls_pred,
- bbox_cls_targets.long(),
- bbox_cls_weights,
- avg_factor=num_total_samples * 4 * self.side_num)
- loss_bbox_reg = self.loss_bbox_reg(
- bbox_reg_pred,
- bbox_reg_targets,
- bbox_reg_weights,
- avg_factor=num_total_samples * 4 * self.bbox_coder.offset_topk)
- return loss_cls, loss_bbox_cls, loss_bbox_reg
-
- @force_fp32(apply_to=('cls_scores', 'bbox_preds'))
- def loss(self,
- cls_scores,
- bbox_preds,
- gt_bboxes,
- gt_labels,
- img_metas,
- gt_bboxes_ignore=None):
- featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores]
- assert len(featmap_sizes) == self.approx_anchor_generator.num_levels
-
- device = cls_scores[0].device
-
- # get sampled approxes
- approxs_list, inside_flag_list = GuidedAnchorHead.get_sampled_approxs(
- self, featmap_sizes, img_metas, device=device)
-
- square_list = self.get_anchors(featmap_sizes, img_metas, device=device)
-
- label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1
-
- cls_reg_targets = self.get_target(
- approxs_list,
- inside_flag_list,
- square_list,
- gt_bboxes,
- img_metas,
- gt_bboxes_ignore_list=gt_bboxes_ignore,
- gt_labels_list=gt_labels,
- label_channels=label_channels,
- sampling=self.sampling)
- if cls_reg_targets is None:
- return None
- (labels_list, label_weights_list, bbox_cls_targets_list,
- bbox_cls_weights_list, bbox_reg_targets_list, bbox_reg_weights_list,
- num_total_pos, num_total_neg) = cls_reg_targets
- num_total_samples = (
- num_total_pos + num_total_neg if self.sampling else num_total_pos)
- losses_cls, losses_bbox_cls, losses_bbox_reg = multi_apply(
- self.loss_single,
- cls_scores,
- bbox_preds,
- labels_list,
- label_weights_list,
- bbox_cls_targets_list,
- bbox_cls_weights_list,
- bbox_reg_targets_list,
- bbox_reg_weights_list,
- num_total_samples=num_total_samples)
- return dict(
- loss_cls=losses_cls,
- loss_bbox_cls=losses_bbox_cls,
- loss_bbox_reg=losses_bbox_reg)
-
- @force_fp32(apply_to=('cls_scores', 'bbox_preds'))
- def get_bboxes(self,
- cls_scores,
- bbox_preds,
- img_metas,
- cfg=None,
- rescale=False):
- assert len(cls_scores) == len(bbox_preds)
- num_levels = len(cls_scores)
- featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores]
-
- device = cls_scores[0].device
- mlvl_anchors = self.get_anchors(
- featmap_sizes, img_metas, device=device)
- result_list = []
- for img_id in range(len(img_metas)):
- cls_score_list = [
- cls_scores[i][img_id].detach() for i in range(num_levels)
- ]
- bbox_cls_pred_list = [
- bbox_preds[i][0][img_id].detach() for i in range(num_levels)
- ]
- bbox_reg_pred_list = [
- bbox_preds[i][1][img_id].detach() for i in range(num_levels)
- ]
- img_shape = img_metas[img_id]['img_shape']
- scale_factor = img_metas[img_id]['scale_factor']
- proposals = self.get_bboxes_single(cls_score_list,
- bbox_cls_pred_list,
- bbox_reg_pred_list,
- mlvl_anchors[img_id], img_shape,
- scale_factor, cfg, rescale)
- result_list.append(proposals)
- return result_list
-
- def get_bboxes_single(self,
- cls_scores,
- bbox_cls_preds,
- bbox_reg_preds,
- mlvl_anchors,
- img_shape,
- scale_factor,
- cfg,
- rescale=False):
- cfg = self.test_cfg if cfg is None else cfg
- mlvl_bboxes = []
- mlvl_scores = []
- mlvl_confids = []
- assert len(cls_scores) == len(bbox_cls_preds) == len(
- bbox_reg_preds) == len(mlvl_anchors)
- for cls_score, bbox_cls_pred, bbox_reg_pred, anchors in zip(
- cls_scores, bbox_cls_preds, bbox_reg_preds, mlvl_anchors):
- assert cls_score.size()[-2:] == bbox_cls_pred.size(
- )[-2:] == bbox_reg_pred.size()[-2::]
- cls_score = cls_score.permute(1, 2,
- 0).reshape(-1, self.cls_out_channels)
- if self.use_sigmoid_cls:
- scores = cls_score.sigmoid()
- else:
- scores = cls_score.softmax(-1)
- bbox_cls_pred = bbox_cls_pred.permute(1, 2, 0).reshape(
- -1, self.side_num * 4)
- bbox_reg_pred = bbox_reg_pred.permute(1, 2, 0).reshape(
- -1, self.side_num * 4)
- nms_pre = cfg.get('nms_pre', -1)
- if nms_pre > 0 and scores.shape[0] > nms_pre:
- if self.use_sigmoid_cls:
- max_scores, _ = scores.max(dim=1)
- else:
- max_scores, _ = scores[:, :-1].max(dim=1)
- _, topk_inds = max_scores.topk(nms_pre)
- anchors = anchors[topk_inds, :]
- bbox_cls_pred = bbox_cls_pred[topk_inds, :]
- bbox_reg_pred = bbox_reg_pred[topk_inds, :]
- scores = scores[topk_inds, :]
- bbox_preds = [
- bbox_cls_pred.contiguous(),
- bbox_reg_pred.contiguous()
- ]
- bboxes, confids = self.bbox_coder.decode(
- anchors.contiguous(), bbox_preds, max_shape=img_shape)
- mlvl_bboxes.append(bboxes)
- mlvl_scores.append(scores)
- mlvl_confids.append(confids)
- mlvl_bboxes = torch.cat(mlvl_bboxes)
- if rescale:
- mlvl_bboxes /= mlvl_bboxes.new_tensor(scale_factor)
- mlvl_scores = torch.cat(mlvl_scores)
- mlvl_confids = torch.cat(mlvl_confids)
- if self.use_sigmoid_cls:
- padding = mlvl_scores.new_zeros(mlvl_scores.shape[0], 1)
- mlvl_scores = torch.cat([mlvl_scores, padding], dim=1)
- det_bboxes, det_labels = multiclass_nms(
- mlvl_bboxes,
- mlvl_scores,
- cfg.score_thr,
- cfg.nms,
- cfg.max_per_img,
- score_factors=mlvl_confids)
- return det_bboxes, det_labels
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/detectors/detr.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/detectors/detr.py
deleted file mode 100644
index 5ff82a280daa0a015f662bdf2509fa11542d46d4..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/detectors/detr.py
+++ /dev/null
@@ -1,46 +0,0 @@
-from mmdet.core import bbox2result
-from ..builder import DETECTORS
-from .single_stage import SingleStageDetector
-
-
-@DETECTORS.register_module()
-class DETR(SingleStageDetector):
- r"""Implementation of `DETR: End-to-End Object Detection with
- Transformers AI Mental Healthcare ChatBot
Sentiment and Emotion Score Graph
s / ||s||^2
- pair_wise_dot = torch.sum(s_estimate * s_target,
- dim=3, keepdim=True) # [B, C, C, 1]
- s_target_energy = torch.sum(
- s_target ** 2, dim=3, keepdim=True) + EPS # [B, 1, C, 1]
- pair_wise_proj = pair_wise_dot * s_target / s_target_energy # [B, C, C, T]
- # e_noise = s' - s_target
- e_noise = s_estimate - pair_wise_proj # [B, C, C, T]
- # SI-SNR = 10 * log_10(||s_target||^2 / ||e_noise||^2)
- pair_wise_si_snr = torch.sum(
- pair_wise_proj ** 2, dim=3) / (torch.sum(e_noise ** 2, dim=3) + EPS)
- pair_wise_si_snr = 10 * torch.log10(pair_wise_si_snr + EPS) # [B, C, C]
- pair_wise_si_snr = torch.transpose(pair_wise_si_snr, 1, 2)
-
- # Get max_snr of each utterance
- # permutations, [C!, C]
- perms = source.new_tensor(list(permutations(range(C))), dtype=torch.long)
- # one-hot, [C!, C, C]
- index = torch.unsqueeze(perms, 2)
- perms_one_hot = source.new_zeros((*perms.size(), C)).scatter_(2, index, 1)
- # [B, C!] <- [B, C, C] einsum [C!, C, C], SI-SNR sum of each permutation
- snr_set = torch.einsum('bij,pij->bp', [pair_wise_si_snr, perms_one_hot])
- max_snr_idx = torch.argmax(snr_set, dim=1) # [B]
- # max_snr = torch.gather(snr_set, 1, max_snr_idx.view(-1, 1)) # [B, 1]
- max_snr, _ = torch.max(snr_set, dim=1, keepdim=True)
- max_snr /= C
- return max_snr, perms, max_snr_idx, snr_set / C
-
-
-def reorder_source(source, perms, max_snr_idx):
- """
- Args:
- source: [B, C, T]
- perms: [C!, C], permutations
- max_snr_idx: [B], each item is between [0, C!)
- Returns:
- reorder_source: [B, C, T]
- """
- B, C, *_ = source.size()
- # [B, C], permutation whose SI-SNR is max of each utterance
- # for each utterance, reorder estimate source according this permutation
- max_snr_perm = torch.index_select(perms, dim=0, index=max_snr_idx)
- # print('max_snr_perm', max_snr_perm)
- # maybe use torch.gather()/index_select()/scatter() to impl this?
- reorder_source = torch.zeros_like(source)
- for b in range(B):
- for c in range(C):
- reorder_source[b, c] = source[b, max_snr_perm[b][c]]
- return reorder_source
-
-
-def get_mask(source, source_lengths):
- """
- Args:
- source: [B, C, T]
- source_lengths: [B]
- Returns:
- mask: [B, 1, T]
- """
- B, _, T = source.size()
- mask = source.new_ones((B, 1, T))
- for i in range(B):
- mask[i, :, source_lengths[i]:] = 0
- return mask
diff --git a/spaces/akhaliq/VQMIVC/ParallelWaveGAN/utils/ssh.pl b/spaces/akhaliq/VQMIVC/ParallelWaveGAN/utils/ssh.pl
deleted file mode 100644
index 5d3e3e44d71112044ce59ce02b76ff03340dbf7f..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/VQMIVC/ParallelWaveGAN/utils/ssh.pl
+++ /dev/null
@@ -1,219 +0,0 @@
-#!/usr/bin/env perl
-use warnings; #sed replacement for -w perl parameter
-
-use Cwd;
-use File::Basename;
-
-# This program is like run.pl except rather than just running on a local
-# machine, it can be configured to run on remote machines via ssh.
-# It requires that you have set up passwordless access to those machines,
-# and that Kaldi is running from a location that is accessible via the
-# same path on those machines (presumably via an NFS mount).
-#
-# It looks for a file .queue/machines that should have, on each line, the name
-# of a machine that you can ssh to (which may include this machine). It doesn't
-# have to be a fully qualified name.
-#
-# Later we may extend this so that on each line of .queue/machines you
-# can specify various resources that each machine has, such as how
-# many slots and how much memory, and make it wait if machines are
-# busy. But for now it simply ssh's to a machine from those in the list.
-
-# The command-line interface of this program is the same as run.pl;
-# see run.pl for more information about the usage.
-
-
-@ARGV < 2 && die "usage: ssh.pl log-file command-line arguments...";
-
-$jobstart = 1;
-$jobend = 1;
-$qsub_opts=""; # These will be ignored.
-
-# First parse an option like JOB=1:4, and any
-# options that would normally be given to
-# ssh.pl, which we will just discard.
-
-if (@ARGV > 0) {
- while (@ARGV >= 2 && $ARGV[0] =~ m:^-:) { # parse any options
- # that would normally go to qsub, but which will be ignored here.
- $switch = shift @ARGV;
- if ($switch eq "-V") {
- $qsub_opts .= "-V ";
- } else {
- $option = shift @ARGV;
- if ($switch eq "-sync" && $option =~ m/^[yY]/) {
- $qsub_opts .= "-sync "; # Note: in the
- # corresponding code in queue.pl it says instead, just "$sync = 1;".
- }
- $qsub_opts .= "$switch $option ";
- if ($switch eq "-pe") { # e.g. -pe smp 5
- $option2 = shift @ARGV;
- $qsub_opts .= "$option2 ";
- }
- }
- }
- if ($ARGV[0] =~ m/^([\w_][\w\d_]*)+=(\d+):(\d+)$/) { # e.g. JOB=1:10
- $jobname = $1;
- $jobstart = $2;
- $jobend = $3;
- shift;
- if ($jobstart > $jobend) {
- die "run.pl: invalid job range $ARGV[0]";
- }
- if ($jobstart <= 0) {
- die "run.pl: invalid job range $ARGV[0], start must be strictly positive (this is required for GridEngine compatibility)";
- }
- } elsif ($ARGV[0] =~ m/^([\w_][\w\d_]*)+=(\d+)$/) { # e.g. JOB=1.
- $jobname = $1;
- $jobstart = $2;
- $jobend = $2;
- shift;
- } elsif ($ARGV[0] =~ m/.+\=.*\:.*$/) {
- print STDERR "Warning: suspicious first argument to run.pl: $ARGV[0]\n";
- }
-}
-
-if ($qsub_opts ne "") {
- print STDERR "Warning: ssh.pl ignoring options \"$qsub_opts\"\n";
-}
-
-{ # Read .queue/machines
- if (!open(Q, "<.queue/machines")) {
- print STDERR "ssh.pl: expected the file .queue/machines to exist.\n";
- exit(1);
- }
- @machines = ();
- while () {
- chop;
- if ($_ ne "") {
- @A = split;
- if (@A != 1) {
- die "ssh.pl: bad line '$_' in .queue/machines.";
- }
- if ($A[0] !~ m/^[a-z0-9\.\-]+/) {
- die "ssh.pl: invalid machine name '$A[0]'";
- }
- push @machines, $A[0];
- }
- }
- if (@machines == 0) { die "ssh.pl: no machines listed in .queue/machines"; }
-}
-
-$logfile = shift @ARGV;
-
-if (defined $jobname && $logfile !~ m/$jobname/ &&
- $jobend > $jobstart) {
- print STDERR "ssh.pl: you are trying to run a parallel job but "
- . "you are putting the output into just one log file ($logfile)\n";
- exit(1);
-}
-
-{
- $offset = 0; # $offset will be an offset added to any index from the job-id
- # specified if the user does JOB=1:10. The main point of this is
- # that there are instances where a script will manually submit a
- # number of jobs to the queue, e.g. with log files foo.1.log,
- # foo.2.log and so on, and we don't want all of these to go
- # to the first machine.
- @A = split(".", basename($logfile));
- # if $logfile looks like foo.9.log, add 9 to $offset.
- foreach $a (@A) { if ($a =~ m/^\d+$/) { $offset += $a; } }
-}
-
-$cmd = "";
-
-foreach $x (@ARGV) {
- if ($x =~ m/^\S+$/) { $cmd .= $x . " "; }
- elsif ($x =~ m:\":) { $cmd .= "'$x' "; }
- else { $cmd .= "\"$x\" "; }
-}
-
-
-for ($jobid = $jobstart; $jobid <= $jobend; $jobid++) {
- $childpid = fork();
- if (!defined $childpid) { die "Error forking in ssh.pl (writing to $logfile)"; }
- if ($childpid == 0) {
- # We're in the child... this branch executes the job and returns (possibly
- # with an error status).
- if (defined $jobname) {
- $cmd =~ s/$jobname/$jobid/g;
- $logfile =~ s/$jobname/$jobid/g;
- }
- { # work out the machine to ssh to.
- $local_offset = $offset + $jobid - 1; # subtract 1 since jobs never start
- # from 0; we'd like the first job
- # to normally run on the first
- # machine.
- $num_machines = scalar @machines;
- # in the next line, the "+ $num_machines" is in case $local_offset is
- # negative, to ensure the modulus is calculated in the mathematical way, not
- # in the C way where (negative number % positive number) is negative.
- $machines_index = ($local_offset + $num_machines) % $num_machines;
- $machine = $machines[$machines_index];
- }
- if (!open(S, "|ssh $machine bash")) {
- print STDERR "ssh.pl failed to ssh to $machine";
- exit(1); # exits from the forked process within ssh.pl.
- }
- $cwd = getcwd();
- $logdir = dirname($logfile);
- # Below, we're printing into ssh which has opened a bash session; these are
- # bash commands.
- print S "set -e\n"; # if any of the later commands fails, we want it to exit.
- print S "cd $cwd\n";
- print S ". ./path.sh\n";
- print S "mkdir -p $logdir\n";
- print S "time1=\`date +\"%s\"\`\n";
- print S "( echo '#' Running on \`hostname\`\n";
- print S " echo '#' Started at \`date\`\n";
- print S " echo -n '# '; cat <
LICENSE
-The model is licensed with a bespoke non-commercial research-only license DeepFloyd IF Research License Agreement license. The license forbids you from sharing any content for commercial use, or that violates any laws, produce any harm to a person, disseminate any personal information that would be meant for harm, spread misinformation and target vulnerable groups. For the full list of restrictions please read the license
- Biases and content acknowledgment
-Despite how impressive being able to turn text into image is, beware to the fact that this model may output content that reinforces or exacerbates societal biases, as well as realistic faces, explicit content and violence. The model was trained on a subset of the LAION-5B dataset and is meant for research purposes. You can read more in the model card
- Crack Skelion Keygen Crack
-
- 4d29de3e1b
-
-
-
diff --git a/spaces/bioriAsaeru/text-to-voice/Golaem Crowd 6.3.3 For Maya 2016-2018 Win Easy Fast and Artist Friendly.md b/spaces/bioriAsaeru/text-to-voice/Golaem Crowd 6.3.3 For Maya 2016-2018 Win Easy Fast and Artist Friendly.md
deleted file mode 100644
index f093dc12463f9e67f0229f286e89e67786668afe..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Golaem Crowd 6.3.3 For Maya 2016-2018 Win Easy Fast and Artist Friendly.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Golaem Crowd 6.3.3 For Maya 2016-2018 Win
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/bioriAsaeru/text-to-voice/HACK Techsoft 2D Design Version 2 License The Best Way to Create Stunning 2D Designs.md b/spaces/bioriAsaeru/text-to-voice/HACK Techsoft 2D Design Version 2 License The Best Way to Create Stunning 2D Designs.md
deleted file mode 100644
index 9beac12fb1ea2f3d601ed4554ffce4a392f09810..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/HACK Techsoft 2D Design Version 2 License The Best Way to Create Stunning 2D Designs.md
+++ /dev/null
@@ -1,6 +0,0 @@
-HACK Techsoft 2D Design Version 2 License Tested And Working
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/bioriAsaeru/text-to-voice/Http Dl.free.fr Q1PcZAX7n.md b/spaces/bioriAsaeru/text-to-voice/Http Dl.free.fr Q1PcZAX7n.md
deleted file mode 100644
index b1e27168ca44684a00cec458bcbc672e86655ac3..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Http Dl.free.fr Q1PcZAX7n.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Http: Dl.free.fr Q1PcZAX7n
-
-Interactive malware hunting service. Any environments ready for live testing most type of threats. Without install. Without waiting. 4d29de3e1b
-
-
-
diff --git a/spaces/bioriAsaeru/text-to-voice/Http Uploadsnack Com Dcxorh Password Txt Torrent Download Fix.md b/spaces/bioriAsaeru/text-to-voice/Http Uploadsnack Com Dcxorh Password Txt Torrent Download Fix.md
deleted file mode 100644
index cd976aa64f135e79e59966ebec88804d77f3c4a8..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Http Uploadsnack Com Dcxorh Password Txt Torrent Download Fix.md
+++ /dev/null
@@ -1,12 +0,0 @@
-Http Uploadsnack Com Dcxorh Password Txt Torrent Download
-
-6 days ago - Result for /RCCln3 or http nd.2 - RELOADED rar password: Decrypted ... Password.txt file download, uploadsnack password file, uploadsnack ... 8 days ago - Result for /RebootR3 or http nd.5 - RELOADED rar password: Decrypted ...
-Password.txt file download, uploadsnack password file, uploadsnack ...
-3 days ago - Result for /r.c - RELOADED rar password: Decrypted ...
-1 day ago
-Revelation is a new generation MMORPG.
-This game has exciting adventures, powerful enemies, incredible riches and ...
-3 days ago - Result for /RebootR3 or http n 8a78ff9644
-
-
-
diff --git a/spaces/bioriAsaeru/text-to-voice/Jazler RadioStar 2.2.30 [Full][Multilenguaje] Serial Key Keygen _HOT_.md b/spaces/bioriAsaeru/text-to-voice/Jazler RadioStar 2.2.30 [Full][Multilenguaje] Serial Key Keygen _HOT_.md
deleted file mode 100644
index 55f328122018fb53518cd295f32e8e523c085bb7..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Jazler RadioStar 2.2.30 [Full][Multilenguaje] Serial Key Keygen _HOT_.md
+++ /dev/null
@@ -1,23 +0,0 @@
-
-Jazler RadioStar 2.2.30: The Ultimate Radio Automation Software
-Jazler RadioStar 2.2.30 [Full][Multilenguaje] Serial Key keygen
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/botlik100/kaki/i18n.py b/spaces/botlik100/kaki/i18n.py
deleted file mode 100644
index 37f310fadd0b48b2f364877158fb2105d645fc03..0000000000000000000000000000000000000000
--- a/spaces/botlik100/kaki/i18n.py
+++ /dev/null
@@ -1,28 +0,0 @@
-import locale
-import json
-import os
-
-
-def load_language_list(language):
- with open(f"./i18n/{language}.json", "r", encoding="utf-8") as f:
- language_list = json.load(f)
- return language_list
-
-
-class I18nAuto:
- def __init__(self, language=None):
- if language in ["Auto", None]:
- language = locale.getdefaultlocale()[
- 0
- ] # getlocale can't identify the system's language ((None, None))
- if not os.path.exists(f"./i18n/{language}.json"):
- language = "en_US"
- self.language = language
- # print("Use Language:", language)
- self.language_map = load_language_list(language)
-
- def __call__(self, key):
- return self.language_map.get(key, key)
-
- def print(self):
- print("Use Language:", self.language)
diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/data/__init__.py b/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/data/__init__.py
deleted file mode 100644
index 259f669b78bd05815cb8d3351fd6c5fc9a1b85a1..0000000000000000000000000000000000000000
--- a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/data/__init__.py
+++ /dev/null
@@ -1,19 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-from . import transforms # isort:skip
-
-from .build import (
- build_batch_data_loader,
- build_detection_test_loader,
- build_detection_train_loader,
- get_detection_dataset_dicts,
- load_proposals_into_dataset,
- print_instances_class_histogram,
-)
-from .catalog import DatasetCatalog, MetadataCatalog, Metadata
-from .common import DatasetFromList, MapDataset, ToIterableDataset
-from .dataset_mapper import DatasetMapper
-
-# ensure the builtin datasets are registered
-from . import datasets, samplers # isort:skip
-
-__all__ = [k for k in globals().keys() if not k.startswith("_")]
diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DeepLab/train_net.py b/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DeepLab/train_net.py
deleted file mode 100644
index d3414ddf8e7af49640dd1372d75df7acb0b8bb49..0000000000000000000000000000000000000000
--- a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DeepLab/train_net.py
+++ /dev/null
@@ -1,134 +0,0 @@
-#!/usr/bin/env python3
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-"""
-DeepLab Training Script.
-
-This script is a simplified version of the training script in detectron2/tools.
-"""
-
-import os
-
-import detectron2.data.transforms as T
-from detectron2.checkpoint import DetectionCheckpointer
-from detectron2.config import get_cfg
-from detectron2.data import DatasetMapper, MetadataCatalog, build_detection_train_loader
-from detectron2.engine import DefaultTrainer, default_argument_parser, default_setup, launch
-from detectron2.evaluation import CityscapesSemSegEvaluator, DatasetEvaluators, SemSegEvaluator
-from detectron2.projects.deeplab import add_deeplab_config, build_lr_scheduler
-
-
-def build_sem_seg_train_aug(cfg):
- augs = [
- T.ResizeShortestEdge(
- cfg.INPUT.MIN_SIZE_TRAIN, cfg.INPUT.MAX_SIZE_TRAIN, cfg.INPUT.MIN_SIZE_TRAIN_SAMPLING
- )
- ]
- if cfg.INPUT.CROP.ENABLED:
- augs.append(
- T.RandomCrop_CategoryAreaConstraint(
- cfg.INPUT.CROP.TYPE,
- cfg.INPUT.CROP.SIZE,
- cfg.INPUT.CROP.SINGLE_CATEGORY_MAX_AREA,
- cfg.MODEL.SEM_SEG_HEAD.IGNORE_VALUE,
- )
- )
- augs.append(T.RandomFlip())
- return augs
-
-
-class Trainer(DefaultTrainer):
- """
- We use the "DefaultTrainer" which contains a number pre-defined logic for
- standard training workflow. They may not work for you, especially if you
- are working on a new research project. In that case you can use the cleaner
- "SimpleTrainer", or write your own training loop.
- """
-
- @classmethod
- def build_evaluator(cls, cfg, dataset_name, output_folder=None):
- """
- Create evaluator(s) for a given dataset.
- This uses the special metadata "evaluator_type" associated with each builtin dataset.
- For your own dataset, you can simply create an evaluator manually in your
- script and do not have to worry about the hacky if-else logic here.
- """
- if output_folder is None:
- output_folder = os.path.join(cfg.OUTPUT_DIR, "inference")
- evaluator_list = []
- evaluator_type = MetadataCatalog.get(dataset_name).evaluator_type
- if evaluator_type == "sem_seg":
- return SemSegEvaluator(
- dataset_name,
- distributed=True,
- output_dir=output_folder,
- )
- if evaluator_type == "cityscapes_sem_seg":
- return CityscapesSemSegEvaluator(dataset_name)
- if len(evaluator_list) == 0:
- raise NotImplementedError(
- "no Evaluator for the dataset {} with the type {}".format(
- dataset_name, evaluator_type
- )
- )
- if len(evaluator_list) == 1:
- return evaluator_list[0]
- return DatasetEvaluators(evaluator_list)
-
- @classmethod
- def build_train_loader(cls, cfg):
- if "SemanticSegmentor" in cfg.MODEL.META_ARCHITECTURE:
- mapper = DatasetMapper(cfg, is_train=True, augmentations=build_sem_seg_train_aug(cfg))
- else:
- mapper = None
- return build_detection_train_loader(cfg, mapper=mapper)
-
- @classmethod
- def build_lr_scheduler(cls, cfg, optimizer):
- """
- It now calls :func:`detectron2.solver.build_lr_scheduler`.
- Overwrite it if you'd like a different scheduler.
- """
- return build_lr_scheduler(cfg, optimizer)
-
-
-def setup(args):
- """
- Create configs and perform basic setups.
- """
- cfg = get_cfg()
- add_deeplab_config(cfg)
- cfg.merge_from_file(args.config_file)
- cfg.merge_from_list(args.opts)
- cfg.freeze()
- default_setup(cfg, args)
- return cfg
-
-
-def main(args):
- cfg = setup(args)
-
- if args.eval_only:
- model = Trainer.build_model(cfg)
- DetectionCheckpointer(model, save_dir=cfg.OUTPUT_DIR).resume_or_load(
- cfg.MODEL.WEIGHTS, resume=args.resume
- )
- res = Trainer.test(cfg, model)
- return res
-
- trainer = Trainer(cfg)
- trainer.resume_or_load(resume=args.resume)
- return trainer.train()
-
-
-if __name__ == "__main__":
- args = default_argument_parser().parse_args()
- print("Command Line Args:", args)
- launch(
- main,
- args.num_gpus,
- num_machines=args.num_machines,
- machine_rank=args.machine_rank,
- dist_url=args.dist_url,
- args=(args,),
- )
diff --git a/spaces/camillevanhoffelen/langchain-HuggingGPT/README.md b/spaces/camillevanhoffelen/langchain-HuggingGPT/README.md
deleted file mode 100644
index 1d61ed66dc7fd61316786ce82a0dc3eb9759f55d..0000000000000000000000000000000000000000
--- a/spaces/camillevanhoffelen/langchain-HuggingGPT/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Langchain HuggingGPT
-emoji: 🐢
-colorFrom: red
-colorTo: green
-sdk: gradio
-sdk_version: 3.29.0
-python_version: 3.11.2
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/ImageOps.py b/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/ImageOps.py
deleted file mode 100644
index 17702778c134abcb51d7632367fbbf1a2f3048fa..0000000000000000000000000000000000000000
--- a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/ImageOps.py
+++ /dev/null
@@ -1,628 +0,0 @@
-#
-# The Python Imaging Library.
-# $Id$
-#
-# standard image operations
-#
-# History:
-# 2001-10-20 fl Created
-# 2001-10-23 fl Added autocontrast operator
-# 2001-12-18 fl Added Kevin's fit operator
-# 2004-03-14 fl Fixed potential division by zero in equalize
-# 2005-05-05 fl Fixed equalize for low number of values
-#
-# Copyright (c) 2001-2004 by Secret Labs AB
-# Copyright (c) 2001-2004 by Fredrik Lundh
-#
-# See the README file for information on usage and redistribution.
-#
-
-import functools
-import operator
-import re
-
-from . import ExifTags, Image, ImagePalette
-
-#
-# helpers
-
-
-def _border(border):
- if isinstance(border, tuple):
- if len(border) == 2:
- left, top = right, bottom = border
- elif len(border) == 4:
- left, top, right, bottom = border
- else:
- left = top = right = bottom = border
- return left, top, right, bottom
-
-
-def _color(color, mode):
- if isinstance(color, str):
- from . import ImageColor
-
- color = ImageColor.getcolor(color, mode)
- return color
-
-
-def _lut(image, lut):
- if image.mode == "P":
- # FIXME: apply to lookup table, not image data
- msg = "mode P support coming soon"
- raise NotImplementedError(msg)
- elif image.mode in ("L", "RGB"):
- if image.mode == "RGB" and len(lut) == 256:
- lut = lut + lut + lut
- return image.point(lut)
- else:
- msg = "not supported for this image mode"
- raise OSError(msg)
-
-
-#
-# actions
-
-
-def autocontrast(image, cutoff=0, ignore=None, mask=None, preserve_tone=False):
- """
- Maximize (normalize) image contrast. This function calculates a
- histogram of the input image (or mask region), removes ``cutoff`` percent of the
- lightest and darkest pixels from the histogram, and remaps the image
- so that the darkest pixel becomes black (0), and the lightest
- becomes white (255).
-
- :param image: The image to process.
- :param cutoff: The percent to cut off from the histogram on the low and
- high ends. Either a tuple of (low, high), or a single
- number for both.
- :param ignore: The background pixel value (use None for no background).
- :param mask: Histogram used in contrast operation is computed using pixels
- within the mask. If no mask is given the entire image is used
- for histogram computation.
- :param preserve_tone: Preserve image tone in Photoshop-like style autocontrast.
-
- .. versionadded:: 8.2.0
-
- :return: An image.
- """
- if preserve_tone:
- histogram = image.convert("L").histogram(mask)
- else:
- histogram = image.histogram(mask)
-
- lut = []
- for layer in range(0, len(histogram), 256):
- h = histogram[layer : layer + 256]
- if ignore is not None:
- # get rid of outliers
- try:
- h[ignore] = 0
- except TypeError:
- # assume sequence
- for ix in ignore:
- h[ix] = 0
- if cutoff:
- # cut off pixels from both ends of the histogram
- if not isinstance(cutoff, tuple):
- cutoff = (cutoff, cutoff)
- # get number of pixels
- n = 0
- for ix in range(256):
- n = n + h[ix]
- # remove cutoff% pixels from the low end
- cut = n * cutoff[0] // 100
- for lo in range(256):
- if cut > h[lo]:
- cut = cut - h[lo]
- h[lo] = 0
- else:
- h[lo] -= cut
- cut = 0
- if cut <= 0:
- break
- # remove cutoff% samples from the high end
- cut = n * cutoff[1] // 100
- for hi in range(255, -1, -1):
- if cut > h[hi]:
- cut = cut - h[hi]
- h[hi] = 0
- else:
- h[hi] -= cut
- cut = 0
- if cut <= 0:
- break
- # find lowest/highest samples after preprocessing
- for lo in range(256):
- if h[lo]:
- break
- for hi in range(255, -1, -1):
- if h[hi]:
- break
- if hi <= lo:
- # don't bother
- lut.extend(list(range(256)))
- else:
- scale = 255.0 / (hi - lo)
- offset = -lo * scale
- for ix in range(256):
- ix = int(ix * scale + offset)
- if ix < 0:
- ix = 0
- elif ix > 255:
- ix = 255
- lut.append(ix)
- return _lut(image, lut)
-
-
-def colorize(image, black, white, mid=None, blackpoint=0, whitepoint=255, midpoint=127):
- """
- Colorize grayscale image.
- This function calculates a color wedge which maps all black pixels in
- the source image to the first color and all white pixels to the
- second color. If ``mid`` is specified, it uses three-color mapping.
- The ``black`` and ``white`` arguments should be RGB tuples or color names;
- optionally you can use three-color mapping by also specifying ``mid``.
- Mapping positions for any of the colors can be specified
- (e.g. ``blackpoint``), where these parameters are the integer
- value corresponding to where the corresponding color should be mapped.
- These parameters must have logical order, such that
- ``blackpoint <= midpoint <= whitepoint`` (if ``mid`` is specified).
-
- :param image: The image to colorize.
- :param black: The color to use for black input pixels.
- :param white: The color to use for white input pixels.
- :param mid: The color to use for midtone input pixels.
- :param blackpoint: an int value [0, 255] for the black mapping.
- :param whitepoint: an int value [0, 255] for the white mapping.
- :param midpoint: an int value [0, 255] for the midtone mapping.
- :return: An image.
- """
-
- # Initial asserts
- assert image.mode == "L"
- if mid is None:
- assert 0 <= blackpoint <= whitepoint <= 255
- else:
- assert 0 <= blackpoint <= midpoint <= whitepoint <= 255
-
- # Define colors from arguments
- black = _color(black, "RGB")
- white = _color(white, "RGB")
- if mid is not None:
- mid = _color(mid, "RGB")
-
- # Empty lists for the mapping
- red = []
- green = []
- blue = []
-
- # Create the low-end values
- for i in range(0, blackpoint):
- red.append(black[0])
- green.append(black[1])
- blue.append(black[2])
-
- # Create the mapping (2-color)
- if mid is None:
- range_map = range(0, whitepoint - blackpoint)
-
- for i in range_map:
- red.append(black[0] + i * (white[0] - black[0]) // len(range_map))
- green.append(black[1] + i * (white[1] - black[1]) // len(range_map))
- blue.append(black[2] + i * (white[2] - black[2]) // len(range_map))
-
- # Create the mapping (3-color)
- else:
- range_map1 = range(0, midpoint - blackpoint)
- range_map2 = range(0, whitepoint - midpoint)
-
- for i in range_map1:
- red.append(black[0] + i * (mid[0] - black[0]) // len(range_map1))
- green.append(black[1] + i * (mid[1] - black[1]) // len(range_map1))
- blue.append(black[2] + i * (mid[2] - black[2]) // len(range_map1))
- for i in range_map2:
- red.append(mid[0] + i * (white[0] - mid[0]) // len(range_map2))
- green.append(mid[1] + i * (white[1] - mid[1]) // len(range_map2))
- blue.append(mid[2] + i * (white[2] - mid[2]) // len(range_map2))
-
- # Create the high-end values
- for i in range(0, 256 - whitepoint):
- red.append(white[0])
- green.append(white[1])
- blue.append(white[2])
-
- # Return converted image
- image = image.convert("RGB")
- return _lut(image, red + green + blue)
-
-
-def contain(image, size, method=Image.Resampling.BICUBIC):
- """
- Returns a resized version of the image, set to the maximum width and height
- within the requested size, while maintaining the original aspect ratio.
-
- :param image: The image to resize and crop.
- :param size: The requested output size in pixels, given as a
- (width, height) tuple.
- :param method: Resampling method to use. Default is
- :py:attr:`~PIL.Image.Resampling.BICUBIC`.
- See :ref:`concept-filters`.
- :return: An image.
- """
-
- im_ratio = image.width / image.height
- dest_ratio = size[0] / size[1]
-
- if im_ratio != dest_ratio:
- if im_ratio > dest_ratio:
- new_height = round(image.height / image.width * size[0])
- if new_height != size[1]:
- size = (size[0], new_height)
- else:
- new_width = round(image.width / image.height * size[1])
- if new_width != size[0]:
- size = (new_width, size[1])
- return image.resize(size, resample=method)
-
-
-def pad(image, size, method=Image.Resampling.BICUBIC, color=None, centering=(0.5, 0.5)):
- """
- Returns a resized and padded version of the image, expanded to fill the
- requested aspect ratio and size.
-
- :param image: The image to resize and crop.
- :param size: The requested output size in pixels, given as a
- (width, height) tuple.
- :param method: Resampling method to use. Default is
- :py:attr:`~PIL.Image.Resampling.BICUBIC`.
- See :ref:`concept-filters`.
- :param color: The background color of the padded image.
- :param centering: Control the position of the original image within the
- padded version.
-
- (0.5, 0.5) will keep the image centered
- (0, 0) will keep the image aligned to the top left
- (1, 1) will keep the image aligned to the bottom
- right
- :return: An image.
- """
-
- resized = contain(image, size, method)
- if resized.size == size:
- out = resized
- else:
- out = Image.new(image.mode, size, color)
- if resized.palette:
- out.putpalette(resized.getpalette())
- if resized.width != size[0]:
- x = round((size[0] - resized.width) * max(0, min(centering[0], 1)))
- out.paste(resized, (x, 0))
- else:
- y = round((size[1] - resized.height) * max(0, min(centering[1], 1)))
- out.paste(resized, (0, y))
- return out
-
-
-def crop(image, border=0):
- """
- Remove border from image. The same amount of pixels are removed
- from all four sides. This function works on all image modes.
-
- .. seealso:: :py:meth:`~PIL.Image.Image.crop`
-
- :param image: The image to crop.
- :param border: The number of pixels to remove.
- :return: An image.
- """
- left, top, right, bottom = _border(border)
- return image.crop((left, top, image.size[0] - right, image.size[1] - bottom))
-
-
-def scale(image, factor, resample=Image.Resampling.BICUBIC):
- """
- Returns a rescaled image by a specific factor given in parameter.
- A factor greater than 1 expands the image, between 0 and 1 contracts the
- image.
-
- :param image: The image to rescale.
- :param factor: The expansion factor, as a float.
- :param resample: Resampling method to use. Default is
- :py:attr:`~PIL.Image.Resampling.BICUBIC`.
- See :ref:`concept-filters`.
- :returns: An :py:class:`~PIL.Image.Image` object.
- """
- if factor == 1:
- return image.copy()
- elif factor <= 0:
- msg = "the factor must be greater than 0"
- raise ValueError(msg)
- else:
- size = (round(factor * image.width), round(factor * image.height))
- return image.resize(size, resample)
-
-
-def deform(image, deformer, resample=Image.Resampling.BILINEAR):
- """
- Deform the image.
-
- :param image: The image to deform.
- :param deformer: A deformer object. Any object that implements a
- ``getmesh`` method can be used.
- :param resample: An optional resampling filter. Same values possible as
- in the PIL.Image.transform function.
- :return: An image.
- """
- return image.transform(
- image.size, Image.Transform.MESH, deformer.getmesh(image), resample
- )
-
-
-def equalize(image, mask=None):
- """
- Equalize the image histogram. This function applies a non-linear
- mapping to the input image, in order to create a uniform
- distribution of grayscale values in the output image.
-
- :param image: The image to equalize.
- :param mask: An optional mask. If given, only the pixels selected by
- the mask are included in the analysis.
- :return: An image.
- """
- if image.mode == "P":
- image = image.convert("RGB")
- h = image.histogram(mask)
- lut = []
- for b in range(0, len(h), 256):
- histo = [_f for _f in h[b : b + 256] if _f]
- if len(histo) <= 1:
- lut.extend(list(range(256)))
- else:
- step = (functools.reduce(operator.add, histo) - histo[-1]) // 255
- if not step:
- lut.extend(list(range(256)))
- else:
- n = step // 2
- for i in range(256):
- lut.append(n // step)
- n = n + h[i + b]
- return _lut(image, lut)
-
-
-def expand(image, border=0, fill=0):
- """
- Add border to the image
-
- :param image: The image to expand.
- :param border: Border width, in pixels.
- :param fill: Pixel fill value (a color value). Default is 0 (black).
- :return: An image.
- """
- left, top, right, bottom = _border(border)
- width = left + image.size[0] + right
- height = top + image.size[1] + bottom
- color = _color(fill, image.mode)
- if image.palette:
- palette = ImagePalette.ImagePalette(palette=image.getpalette())
- if isinstance(color, tuple):
- color = palette.getcolor(color)
- else:
- palette = None
- out = Image.new(image.mode, (width, height), color)
- if palette:
- out.putpalette(palette.palette)
- out.paste(image, (left, top))
- return out
-
-
-def fit(image, size, method=Image.Resampling.BICUBIC, bleed=0.0, centering=(0.5, 0.5)):
- """
- Returns a resized and cropped version of the image, cropped to the
- requested aspect ratio and size.
-
- This function was contributed by Kevin Cazabon.
-
- :param image: The image to resize and crop.
- :param size: The requested output size in pixels, given as a
- (width, height) tuple.
- :param method: Resampling method to use. Default is
- :py:attr:`~PIL.Image.Resampling.BICUBIC`.
- See :ref:`concept-filters`.
- :param bleed: Remove a border around the outside of the image from all
- four edges. The value is a decimal percentage (use 0.01 for
- one percent). The default value is 0 (no border).
- Cannot be greater than or equal to 0.5.
- :param centering: Control the cropping position. Use (0.5, 0.5) for
- center cropping (e.g. if cropping the width, take 50% off
- of the left side, and therefore 50% off the right side).
- (0.0, 0.0) will crop from the top left corner (i.e. if
- cropping the width, take all of the crop off of the right
- side, and if cropping the height, take all of it off the
- bottom). (1.0, 0.0) will crop from the bottom left
- corner, etc. (i.e. if cropping the width, take all of the
- crop off the left side, and if cropping the height take
- none from the top, and therefore all off the bottom).
- :return: An image.
- """
-
- # by Kevin Cazabon, Feb 17/2000
- # kevin@cazabon.com
- # https://www.cazabon.com
-
- # ensure centering is mutable
- centering = list(centering)
-
- if not 0.0 <= centering[0] <= 1.0:
- centering[0] = 0.5
- if not 0.0 <= centering[1] <= 1.0:
- centering[1] = 0.5
-
- if not 0.0 <= bleed < 0.5:
- bleed = 0.0
-
- # calculate the area to use for resizing and cropping, subtracting
- # the 'bleed' around the edges
-
- # number of pixels to trim off on Top and Bottom, Left and Right
- bleed_pixels = (bleed * image.size[0], bleed * image.size[1])
-
- live_size = (
- image.size[0] - bleed_pixels[0] * 2,
- image.size[1] - bleed_pixels[1] * 2,
- )
-
- # calculate the aspect ratio of the live_size
- live_size_ratio = live_size[0] / live_size[1]
-
- # calculate the aspect ratio of the output image
- output_ratio = size[0] / size[1]
-
- # figure out if the sides or top/bottom will be cropped off
- if live_size_ratio == output_ratio:
- # live_size is already the needed ratio
- crop_width = live_size[0]
- crop_height = live_size[1]
- elif live_size_ratio >= output_ratio:
- # live_size is wider than what's needed, crop the sides
- crop_width = output_ratio * live_size[1]
- crop_height = live_size[1]
- else:
- # live_size is taller than what's needed, crop the top and bottom
- crop_width = live_size[0]
- crop_height = live_size[0] / output_ratio
-
- # make the crop
- crop_left = bleed_pixels[0] + (live_size[0] - crop_width) * centering[0]
- crop_top = bleed_pixels[1] + (live_size[1] - crop_height) * centering[1]
-
- crop = (crop_left, crop_top, crop_left + crop_width, crop_top + crop_height)
-
- # resize the image and return it
- return image.resize(size, method, box=crop)
-
-
-def flip(image):
- """
- Flip the image vertically (top to bottom).
-
- :param image: The image to flip.
- :return: An image.
- """
- return image.transpose(Image.Transpose.FLIP_TOP_BOTTOM)
-
-
-def grayscale(image):
- """
- Convert the image to grayscale.
-
- :param image: The image to convert.
- :return: An image.
- """
- return image.convert("L")
-
-
-def invert(image):
- """
- Invert (negate) the image.
-
- :param image: The image to invert.
- :return: An image.
- """
- lut = []
- for i in range(256):
- lut.append(255 - i)
- return image.point(lut) if image.mode == "1" else _lut(image, lut)
-
-
-def mirror(image):
- """
- Flip image horizontally (left to right).
-
- :param image: The image to mirror.
- :return: An image.
- """
- return image.transpose(Image.Transpose.FLIP_LEFT_RIGHT)
-
-
-def posterize(image, bits):
- """
- Reduce the number of bits for each color channel.
-
- :param image: The image to posterize.
- :param bits: The number of bits to keep for each channel (1-8).
- :return: An image.
- """
- lut = []
- mask = ~(2 ** (8 - bits) - 1)
- for i in range(256):
- lut.append(i & mask)
- return _lut(image, lut)
-
-
-def solarize(image, threshold=128):
- """
- Invert all pixel values above a threshold.
-
- :param image: The image to solarize.
- :param threshold: All pixels above this greyscale level are inverted.
- :return: An image.
- """
- lut = []
- for i in range(256):
- if i < threshold:
- lut.append(i)
- else:
- lut.append(255 - i)
- return _lut(image, lut)
-
-
-def exif_transpose(image, *, in_place=False):
- """
- If an image has an EXIF Orientation tag, other than 1, transpose the image
- accordingly, and remove the orientation data.
-
- :param image: The image to transpose.
- :param in_place: Boolean. Keyword-only argument.
- If ``True``, the original image is modified in-place, and ``None`` is returned.
- If ``False`` (default), a new :py:class:`~PIL.Image.Image` object is returned
- with the transposition applied. If there is no transposition, a copy of the
- image will be returned.
- """
- image_exif = image.getexif()
- orientation = image_exif.get(ExifTags.Base.Orientation)
- method = {
- 2: Image.Transpose.FLIP_LEFT_RIGHT,
- 3: Image.Transpose.ROTATE_180,
- 4: Image.Transpose.FLIP_TOP_BOTTOM,
- 5: Image.Transpose.TRANSPOSE,
- 6: Image.Transpose.ROTATE_270,
- 7: Image.Transpose.TRANSVERSE,
- 8: Image.Transpose.ROTATE_90,
- }.get(orientation)
- if method is not None:
- transposed_image = image.transpose(method)
- if in_place:
- image.im = transposed_image.im
- image.pyaccess = None
- image._size = transposed_image._size
- exif_image = image if in_place else transposed_image
-
- exif = exif_image.getexif()
- if ExifTags.Base.Orientation in exif:
- del exif[ExifTags.Base.Orientation]
- if "exif" in exif_image.info:
- exif_image.info["exif"] = exif.tobytes()
- elif "Raw profile type exif" in exif_image.info:
- exif_image.info["Raw profile type exif"] = exif.tobytes().hex()
- elif "XML:com.adobe.xmp" in exif_image.info:
- for pattern in (
- r'tiff:Orientation="([0-9])"',
- r"data warehouse lifecycle toolkit by ralph kimball pdf free download
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/cihyFjudo/fairness-paper-search/Japanese Mom Porn Moviesgolkesgo.md b/spaces/cihyFjudo/fairness-paper-search/Japanese Mom Porn Moviesgolkesgo.md
deleted file mode 100644
index be431619d4bb0c3cfca4b0eb900f5a79dae425bc..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Japanese Mom Porn Moviesgolkesgo.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Japanese Mom Porn Moviesgolkesgo
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/cihyFjudo/fairness-paper-search/Swami Ranganathananda Bhagavad Gita 13.pdf The Secrets of Yoga and Meditation Unveiled by a Disciple of Ramakrishna.md b/spaces/cihyFjudo/fairness-paper-search/Swami Ranganathananda Bhagavad Gita 13.pdf The Secrets of Yoga and Meditation Unveiled by a Disciple of Ramakrishna.md
deleted file mode 100644
index 1166b5d74785dfd47d095fe065ac2cdea47941a3..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Swami Ranganathananda Bhagavad Gita 13.pdf The Secrets of Yoga and Meditation Unveiled by a Disciple of Ramakrishna.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Swami Ranganathananda Bhagavad Gita 13.pdf
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/cihyFjudo/fairness-paper-search/[Download PIX4Dmapper software Pix4D](1).md b/spaces/cihyFjudo/fairness-paper-search/[Download PIX4Dmapper software Pix4D](1).md
deleted file mode 100644
index bdbc747308db506fb05bc4d20607442cdd421971..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/[Download PIX4Dmapper software Pix4D](1).md
+++ /dev/null
@@ -1,6 +0,0 @@
-Pix4D Pix4Dmapper Pro 2.0.104 (Mac amaral publisher cal
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/misc/etree.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/misc/etree.py
deleted file mode 100644
index 9d4a65c36014c8381306968c69432f50f0c0b886..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/misc/etree.py
+++ /dev/null
@@ -1,478 +0,0 @@
-"""Shim module exporting the same ElementTree API for lxml and
-xml.etree backends.
-
-When lxml is installed, it is automatically preferred over the built-in
-xml.etree module.
-On Python 2.7, the cElementTree module is preferred over the pure-python
-ElementTree module.
-
-Besides exporting a unified interface, this also defines extra functions
-or subclasses built-in ElementTree classes to add features that are
-only availble in lxml, like OrderedDict for attributes, pretty_print and
-iterwalk.
-"""
-from fontTools.misc.textTools import tostr
-
-
-XML_DECLARATION = """"""
-
-__all__ = [
- # public symbols
- "Comment",
- "dump",
- "Element",
- "ElementTree",
- "fromstring",
- "fromstringlist",
- "iselement",
- "iterparse",
- "parse",
- "ParseError",
- "PI",
- "ProcessingInstruction",
- "QName",
- "SubElement",
- "tostring",
- "tostringlist",
- "TreeBuilder",
- "XML",
- "XMLParser",
- "register_namespace",
-]
-
-try:
- from lxml.etree import *
-
- _have_lxml = True
-except ImportError:
- try:
- from xml.etree.cElementTree import *
-
- # the cElementTree version of XML function doesn't support
- # the optional 'parser' keyword argument
- from xml.etree.ElementTree import XML
- except ImportError: # pragma: no cover
- from xml.etree.ElementTree import *
- _have_lxml = False
-
- import sys
-
- # dict is always ordered in python >= 3.6 and on pypy
- PY36 = sys.version_info >= (3, 6)
- try:
- import __pypy__
- except ImportError:
- __pypy__ = None
- _dict_is_ordered = bool(PY36 or __pypy__)
- del PY36, __pypy__
-
- if _dict_is_ordered:
- _Attrib = dict
- else:
- from collections import OrderedDict as _Attrib
-
- if isinstance(Element, type):
- _Element = Element
- else:
- # in py27, cElementTree.Element cannot be subclassed, so
- # we need to import the pure-python class
- from xml.etree.ElementTree import Element as _Element
-
- class Element(_Element):
- """Element subclass that keeps the order of attributes."""
-
- def __init__(self, tag, attrib=_Attrib(), **extra):
- super(Element, self).__init__(tag)
- self.attrib = _Attrib()
- if attrib:
- self.attrib.update(attrib)
- if extra:
- self.attrib.update(extra)
-
- def SubElement(parent, tag, attrib=_Attrib(), **extra):
- """Must override SubElement as well otherwise _elementtree.SubElement
- fails if 'parent' is a subclass of Element object.
- """
- element = parent.__class__(tag, attrib, **extra)
- parent.append(element)
- return element
-
- def _iterwalk(element, events, tag):
- include = tag is None or element.tag == tag
- if include and "start" in events:
- yield ("start", element)
- for e in element:
- for item in _iterwalk(e, events, tag):
- yield item
- if include:
- yield ("end", element)
-
- def iterwalk(element_or_tree, events=("end",), tag=None):
- """A tree walker that generates events from an existing tree as
- if it was parsing XML data with iterparse().
- Drop-in replacement for lxml.etree.iterwalk.
- """
- if iselement(element_or_tree):
- element = element_or_tree
- else:
- element = element_or_tree.getroot()
- if tag == "*":
- tag = None
- for item in _iterwalk(element, events, tag):
- yield item
-
- _ElementTree = ElementTree
-
- class ElementTree(_ElementTree):
- """ElementTree subclass that adds 'pretty_print' and 'doctype'
- arguments to the 'write' method.
- Currently these are only supported for the default XML serialization
- 'method', and not also for "html" or "text", for these are delegated
- to the base class.
- """
-
- def write(
- self,
- file_or_filename,
- encoding=None,
- xml_declaration=False,
- method=None,
- doctype=None,
- pretty_print=False,
- ):
- if method and method != "xml":
- # delegate to super-class
- super(ElementTree, self).write(
- file_or_filename,
- encoding=encoding,
- xml_declaration=xml_declaration,
- method=method,
- )
- return
-
- if encoding is not None and encoding.lower() == "unicode":
- if xml_declaration:
- raise ValueError(
- "Serialisation to unicode must not request an XML declaration"
- )
- write_declaration = False
- encoding = "unicode"
- elif xml_declaration is None:
- # by default, write an XML declaration only for non-standard encodings
- write_declaration = encoding is not None and encoding.upper() not in (
- "ASCII",
- "UTF-8",
- "UTF8",
- "US-ASCII",
- )
- else:
- write_declaration = xml_declaration
-
- if encoding is None:
- encoding = "ASCII"
-
- if pretty_print:
- # NOTE this will modify the tree in-place
- _indent(self._root)
-
- with _get_writer(file_or_filename, encoding) as write:
- if write_declaration:
- write(XML_DECLARATION % encoding.upper())
- if pretty_print:
- write("\n")
- if doctype:
- write(_tounicode(doctype))
- if pretty_print:
- write("\n")
-
- qnames, namespaces = _namespaces(self._root)
- _serialize_xml(write, self._root, qnames, namespaces)
-
- import io
-
- def tostring(
- element,
- encoding=None,
- xml_declaration=None,
- method=None,
- doctype=None,
- pretty_print=False,
- ):
- """Custom 'tostring' function that uses our ElementTree subclass, with
- pretty_print support.
- """
- stream = io.StringIO() if encoding == "unicode" else io.BytesIO()
- ElementTree(element).write(
- stream,
- encoding=encoding,
- xml_declaration=xml_declaration,
- method=method,
- doctype=doctype,
- pretty_print=pretty_print,
- )
- return stream.getvalue()
-
- # serialization support
-
- import re
-
- # Valid XML strings can include any Unicode character, excluding control
- # characters, the surrogate blocks, FFFE, and FFFF:
- # Char ::= #x9 | #xA | #xD | [#x20-#xD7FF] | [#xE000-#xFFFD] | [#x10000-#x10FFFF]
- # Here we reversed the pattern to match only the invalid characters.
- # For the 'narrow' python builds supporting only UCS-2, which represent
- # characters beyond BMP as UTF-16 surrogate pairs, we need to pass through
- # the surrogate block. I haven't found a more elegant solution...
- UCS2 = sys.maxunicode < 0x10FFFF
- if UCS2:
- _invalid_xml_string = re.compile(
- "[\u0000-\u0008\u000B-\u000C\u000E-\u001F\uFFFE-\uFFFF]"
- )
- else:
- _invalid_xml_string = re.compile(
- "[\u0000-\u0008\u000B-\u000C\u000E-\u001F\uD800-\uDFFF\uFFFE-\uFFFF]"
- )
-
- def _tounicode(s):
- """Test if a string is valid user input and decode it to unicode string
- using ASCII encoding if it's a bytes string.
- Reject all bytes/unicode input that contains non-XML characters.
- Reject all bytes input that contains non-ASCII characters.
- """
- try:
- s = tostr(s, encoding="ascii", errors="strict")
- except UnicodeDecodeError:
- raise ValueError(
- "Bytes strings can only contain ASCII characters. "
- "Use unicode strings for non-ASCII characters."
- )
- except AttributeError:
- _raise_serialization_error(s)
- if s and _invalid_xml_string.search(s):
- raise ValueError(
- "All strings must be XML compatible: Unicode or ASCII, "
- "no NULL bytes or control characters"
- )
- return s
-
- import contextlib
-
- @contextlib.contextmanager
- def _get_writer(file_or_filename, encoding):
- # returns text write method and release all resources after using
- try:
- write = file_or_filename.write
- except AttributeError:
- # file_or_filename is a file name
- f = open(
- file_or_filename,
- "w",
- encoding="utf-8" if encoding == "unicode" else encoding,
- errors="xmlcharrefreplace",
- )
- with f:
- yield f.write
- else:
- # file_or_filename is a file-like object
- # encoding determines if it is a text or binary writer
- if encoding == "unicode":
- # use a text writer as is
- yield write
- else:
- # wrap a binary writer with TextIOWrapper
- detach_buffer = False
- if isinstance(file_or_filename, io.BufferedIOBase):
- buf = file_or_filename
- elif isinstance(file_or_filename, io.RawIOBase):
- buf = io.BufferedWriter(file_or_filename)
- detach_buffer = True
- else:
- # This is to handle passed objects that aren't in the
- # IOBase hierarchy, but just have a write method
- buf = io.BufferedIOBase()
- buf.writable = lambda: True
- buf.write = write
- try:
- # TextIOWrapper uses this methods to determine
- # if BOM (for UTF-16, etc) should be added
- buf.seekable = file_or_filename.seekable
- buf.tell = file_or_filename.tell
- except AttributeError:
- pass
- wrapper = io.TextIOWrapper(
- buf,
- encoding=encoding,
- errors="xmlcharrefreplace",
- newline="\n",
- )
- try:
- yield wrapper.write
- finally:
- # Keep the original file open when the TextIOWrapper and
- # the BufferedWriter are destroyed
- wrapper.detach()
- if detach_buffer:
- buf.detach()
-
- from xml.etree.ElementTree import _namespace_map
-
- def _namespaces(elem):
- # identify namespaces used in this tree
-
- # maps qnames to *encoded* prefix:local names
- qnames = {None: None}
-
- # maps uri:s to prefixes
- namespaces = {}
-
- def add_qname(qname):
- # calculate serialized qname representation
- try:
- qname = _tounicode(qname)
- if qname[:1] == "{":
- uri, tag = qname[1:].rsplit("}", 1)
- prefix = namespaces.get(uri)
- if prefix is None:
- prefix = _namespace_map.get(uri)
- if prefix is None:
- prefix = "ns%d" % len(namespaces)
- else:
- prefix = _tounicode(prefix)
- if prefix != "xml":
- namespaces[uri] = prefix
- if prefix:
- qnames[qname] = "%s:%s" % (prefix, tag)
- else:
- qnames[qname] = tag # default element
- else:
- qnames[qname] = qname
- except TypeError:
- _raise_serialization_error(qname)
-
- # populate qname and namespaces table
- for elem in elem.iter():
- tag = elem.tag
- if isinstance(tag, QName):
- if tag.text not in qnames:
- add_qname(tag.text)
- elif isinstance(tag, str):
- if tag not in qnames:
- add_qname(tag)
- elif tag is not None and tag is not Comment and tag is not PI:
- _raise_serialization_error(tag)
- for key, value in elem.items():
- if isinstance(key, QName):
- key = key.text
- if key not in qnames:
- add_qname(key)
- if isinstance(value, QName) and value.text not in qnames:
- add_qname(value.text)
- text = elem.text
- if isinstance(text, QName) and text.text not in qnames:
- add_qname(text.text)
- return qnames, namespaces
-
- def _serialize_xml(write, elem, qnames, namespaces, **kwargs):
- tag = elem.tag
- text = elem.text
- if tag is Comment:
- write("" % _tounicode(text))
- elif tag is ProcessingInstruction:
- write("%s?>" % _tounicode(text))
- else:
- tag = qnames[_tounicode(tag) if tag is not None else None]
- if tag is None:
- if text:
- write(_escape_cdata(text))
- for e in elem:
- _serialize_xml(write, e, qnames, None)
- else:
- write("<" + tag)
- if namespaces:
- for uri, prefix in sorted(
- namespaces.items(), key=lambda x: x[1]
- ): # sort on prefix
- if prefix:
- prefix = ":" + prefix
- write(' xmlns%s="%s"' % (prefix, _escape_attrib(uri)))
- attrs = elem.attrib
- if attrs:
- # try to keep existing attrib order
- if len(attrs) <= 1 or type(attrs) is _Attrib:
- items = attrs.items()
- else:
- # if plain dict, use lexical order
- items = sorted(attrs.items())
- for k, v in items:
- if isinstance(k, QName):
- k = _tounicode(k.text)
- else:
- k = _tounicode(k)
- if isinstance(v, QName):
- v = qnames[_tounicode(v.text)]
- else:
- v = _escape_attrib(v)
- write(' %s="%s"' % (qnames[k], v))
- if text is not None or len(elem):
- write(">")
- if text:
- write(_escape_cdata(text))
- for e in elem:
- _serialize_xml(write, e, qnames, None)
- write("" + tag + ">")
- else:
- write("/>")
- if elem.tail:
- write(_escape_cdata(elem.tail))
-
- def _raise_serialization_error(text):
- raise TypeError("cannot serialize %r (type %s)" % (text, type(text).__name__))
-
- def _escape_cdata(text):
- # escape character data
- try:
- text = _tounicode(text)
- # it's worth avoiding do-nothing calls for short strings
- if "&" in text:
- text = text.replace("&", "&")
- if "<" in text:
- text = text.replace("<", "<")
- if ">" in text:
- text = text.replace(">", ">")
- return text
- except (TypeError, AttributeError):
- _raise_serialization_error(text)
-
- def _escape_attrib(text):
- # escape attribute value
- try:
- text = _tounicode(text)
- if "&" in text:
- text = text.replace("&", "&")
- if "<" in text:
- text = text.replace("<", "<")
- if ">" in text:
- text = text.replace(">", ">")
- if '"' in text:
- text = text.replace('"', """)
- if "\n" in text:
- text = text.replace("\n", "
")
- return text
- except (TypeError, AttributeError):
- _raise_serialization_error(text)
-
- def _indent(elem, level=0):
- # From http://effbot.org/zone/element-lib.htm#prettyprint
- i = "\n" + level * " "
- if len(elem):
- if not elem.text or not elem.text.strip():
- elem.text = i + " "
- if not elem.tail or not elem.tail.strip():
- elem.tail = i
- for elem in elem:
- _indent(elem, level + 1)
- if not elem.tail or not elem.tail.strip():
- elem.tail = i
- else:
- if level and (not elem.tail or not elem.tail.strip()):
- elem.tail = i
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/tables/ttProgram.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/tables/ttProgram.py
deleted file mode 100644
index 84aa63f36301ec9a4ae21acff0cbc95010d956b7..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/tables/ttProgram.py
+++ /dev/null
@@ -1,593 +0,0 @@
-"""ttLib.tables.ttProgram.py -- Assembler/disassembler for TrueType bytecode programs."""
-from __future__ import annotations
-
-from fontTools.misc.textTools import num2binary, binary2num, readHex, strjoin
-import array
-from io import StringIO
-from typing import List
-import re
-import logging
-
-
-log = logging.getLogger(__name__)
-
-# fmt: off
-
-# first, the list of instructions that eat bytes or words from the instruction stream
-
-streamInstructions = [
-#
-# opcode mnemonic argBits descriptive name pops pushes eats from instruction stream pushes
-#
- (0x40, 'NPUSHB', 0, 'PushNBytes', 0, -1), # n, b1, b2,...bn b1,b2...bn
- (0x41, 'NPUSHW', 0, 'PushNWords', 0, -1), # n, w1, w2,...w w1,w2...wn
- (0xb0, 'PUSHB', 3, 'PushBytes', 0, -1), # b0, b1,..bn b0, b1, ...,bn
- (0xb8, 'PUSHW', 3, 'PushWords', 0, -1), # w0,w1,..wn w0 ,w1, ...wn
-]
-
-
-# next, the list of "normal" instructions
-
-instructions = [
-#
-# opcode mnemonic argBits descriptive name pops pushes eats from instruction stream pushes
-#
- (0x7f, 'AA', 0, 'AdjustAngle', 1, 0), # p -
- (0x64, 'ABS', 0, 'Absolute', 1, 1), # n |n|
- (0x60, 'ADD', 0, 'Add', 2, 1), # n2, n1 (n1 + n2)
- (0x27, 'ALIGNPTS', 0, 'AlignPts', 2, 0), # p2, p1 -
- (0x3c, 'ALIGNRP', 0, 'AlignRelativePt', -1, 0), # p1, p2, ... , ploopvalue -
- (0x5a, 'AND', 0, 'LogicalAnd', 2, 1), # e2, e1 b
- (0x2b, 'CALL', 0, 'CallFunction', 1, 0), # f -
- (0x67, 'CEILING', 0, 'Ceiling', 1, 1), # n ceil(n)
- (0x25, 'CINDEX', 0, 'CopyXToTopStack', 1, 1), # k ek
- (0x22, 'CLEAR', 0, 'ClearStack', -1, 0), # all items on the stack -
- (0x4f, 'DEBUG', 0, 'DebugCall', 1, 0), # n -
- (0x73, 'DELTAC1', 0, 'DeltaExceptionC1', -1, 0), # argn, cn, argn-1,cn-1, , arg1, c1 -
- (0x74, 'DELTAC2', 0, 'DeltaExceptionC2', -1, 0), # argn, cn, argn-1,cn-1, , arg1, c1 -
- (0x75, 'DELTAC3', 0, 'DeltaExceptionC3', -1, 0), # argn, cn, argn-1,cn-1, , arg1, c1 -
- (0x5d, 'DELTAP1', 0, 'DeltaExceptionP1', -1, 0), # argn, pn, argn-1, pn-1, , arg1, p1 -
- (0x71, 'DELTAP2', 0, 'DeltaExceptionP2', -1, 0), # argn, pn, argn-1, pn-1, , arg1, p1 -
- (0x72, 'DELTAP3', 0, 'DeltaExceptionP3', -1, 0), # argn, pn, argn-1, pn-1, , arg1, p1 -
- (0x24, 'DEPTH', 0, 'GetDepthStack', 0, 1), # - n
- (0x62, 'DIV', 0, 'Divide', 2, 1), # n2, n1 (n1 * 64)/ n2
- (0x20, 'DUP', 0, 'DuplicateTopStack', 1, 2), # e e, e
- (0x59, 'EIF', 0, 'EndIf', 0, 0), # - -
- (0x1b, 'ELSE', 0, 'Else', 0, 0), # - -
- (0x2d, 'ENDF', 0, 'EndFunctionDefinition', 0, 0), # - -
- (0x54, 'EQ', 0, 'Equal', 2, 1), # e2, e1 b
- (0x57, 'EVEN', 0, 'Even', 1, 1), # e b
- (0x2c, 'FDEF', 0, 'FunctionDefinition', 1, 0), # f -
- (0x4e, 'FLIPOFF', 0, 'SetAutoFlipOff', 0, 0), # - -
- (0x4d, 'FLIPON', 0, 'SetAutoFlipOn', 0, 0), # - -
- (0x80, 'FLIPPT', 0, 'FlipPoint', -1, 0), # p1, p2, ..., ploopvalue -
- (0x82, 'FLIPRGOFF', 0, 'FlipRangeOff', 2, 0), # h, l -
- (0x81, 'FLIPRGON', 0, 'FlipRangeOn', 2, 0), # h, l -
- (0x66, 'FLOOR', 0, 'Floor', 1, 1), # n floor(n)
- (0x46, 'GC', 1, 'GetCoordOnPVector', 1, 1), # p c
- (0x88, 'GETINFO', 0, 'GetInfo', 1, 1), # selector result
- (0x91, 'GETVARIATION', 0, 'GetVariation', 0, -1), # - a1,..,an
- (0x0d, 'GFV', 0, 'GetFVector', 0, 2), # - px, py
- (0x0c, 'GPV', 0, 'GetPVector', 0, 2), # - px, py
- (0x52, 'GT', 0, 'GreaterThan', 2, 1), # e2, e1 b
- (0x53, 'GTEQ', 0, 'GreaterThanOrEqual', 2, 1), # e2, e1 b
- (0x89, 'IDEF', 0, 'InstructionDefinition', 1, 0), # f -
- (0x58, 'IF', 0, 'If', 1, 0), # e -
- (0x8e, 'INSTCTRL', 0, 'SetInstrExecControl', 2, 0), # s, v -
- (0x39, 'IP', 0, 'InterpolatePts', -1, 0), # p1, p2, ... , ploopvalue -
- (0x0f, 'ISECT', 0, 'MovePtToIntersect', 5, 0), # a1, a0, b1, b0, p -
- (0x30, 'IUP', 1, 'InterpolateUntPts', 0, 0), # - -
- (0x1c, 'JMPR', 0, 'Jump', 1, 0), # offset -
- (0x79, 'JROF', 0, 'JumpRelativeOnFalse', 2, 0), # e, offset -
- (0x78, 'JROT', 0, 'JumpRelativeOnTrue', 2, 0), # e, offset -
- (0x2a, 'LOOPCALL', 0, 'LoopAndCallFunction', 2, 0), # f, count -
- (0x50, 'LT', 0, 'LessThan', 2, 1), # e2, e1 b
- (0x51, 'LTEQ', 0, 'LessThenOrEqual', 2, 1), # e2, e1 b
- (0x8b, 'MAX', 0, 'Maximum', 2, 1), # e2, e1 max(e1, e2)
- (0x49, 'MD', 1, 'MeasureDistance', 2, 1), # p2,p1 d
- (0x2e, 'MDAP', 1, 'MoveDirectAbsPt', 1, 0), # p -
- (0xc0, 'MDRP', 5, 'MoveDirectRelPt', 1, 0), # p -
- (0x3e, 'MIAP', 1, 'MoveIndirectAbsPt', 2, 0), # n, p -
- (0x8c, 'MIN', 0, 'Minimum', 2, 1), # e2, e1 min(e1, e2)
- (0x26, 'MINDEX', 0, 'MoveXToTopStack', 1, 1), # k ek
- (0xe0, 'MIRP', 5, 'MoveIndirectRelPt', 2, 0), # n, p -
- (0x4b, 'MPPEM', 0, 'MeasurePixelPerEm', 0, 1), # - ppem
- (0x4c, 'MPS', 0, 'MeasurePointSize', 0, 1), # - pointSize
- (0x3a, 'MSIRP', 1, 'MoveStackIndirRelPt', 2, 0), # d, p -
- (0x63, 'MUL', 0, 'Multiply', 2, 1), # n2, n1 (n1 * n2)/64
- (0x65, 'NEG', 0, 'Negate', 1, 1), # n -n
- (0x55, 'NEQ', 0, 'NotEqual', 2, 1), # e2, e1 b
- (0x5c, 'NOT', 0, 'LogicalNot', 1, 1), # e ( not e )
- (0x6c, 'NROUND', 2, 'NoRound', 1, 1), # n1 n2
- (0x56, 'ODD', 0, 'Odd', 1, 1), # e b
- (0x5b, 'OR', 0, 'LogicalOr', 2, 1), # e2, e1 b
- (0x21, 'POP', 0, 'PopTopStack', 1, 0), # e -
- (0x45, 'RCVT', 0, 'ReadCVT', 1, 1), # location value
- (0x7d, 'RDTG', 0, 'RoundDownToGrid', 0, 0), # - -
- (0x7a, 'ROFF', 0, 'RoundOff', 0, 0), # - -
- (0x8a, 'ROLL', 0, 'RollTopThreeStack', 3, 3), # a,b,c b,a,c
- (0x68, 'ROUND', 2, 'Round', 1, 1), # n1 n2
- (0x43, 'RS', 0, 'ReadStore', 1, 1), # n v
- (0x3d, 'RTDG', 0, 'RoundToDoubleGrid', 0, 0), # - -
- (0x18, 'RTG', 0, 'RoundToGrid', 0, 0), # - -
- (0x19, 'RTHG', 0, 'RoundToHalfGrid', 0, 0), # - -
- (0x7c, 'RUTG', 0, 'RoundUpToGrid', 0, 0), # - -
- (0x77, 'S45ROUND', 0, 'SuperRound45Degrees', 1, 0), # n -
- (0x7e, 'SANGW', 0, 'SetAngleWeight', 1, 0), # weight -
- (0x85, 'SCANCTRL', 0, 'ScanConversionControl', 1, 0), # n -
- (0x8d, 'SCANTYPE', 0, 'ScanType', 1, 0), # n -
- (0x48, 'SCFS', 0, 'SetCoordFromStackFP', 2, 0), # c, p -
- (0x1d, 'SCVTCI', 0, 'SetCVTCutIn', 1, 0), # n -
- (0x5e, 'SDB', 0, 'SetDeltaBaseInGState', 1, 0), # n -
- (0x86, 'SDPVTL', 1, 'SetDualPVectorToLine', 2, 0), # p2, p1 -
- (0x5f, 'SDS', 0, 'SetDeltaShiftInGState', 1, 0), # n -
- (0x0b, 'SFVFS', 0, 'SetFVectorFromStack', 2, 0), # y, x -
- (0x04, 'SFVTCA', 1, 'SetFVectorToAxis', 0, 0), # - -
- (0x08, 'SFVTL', 1, 'SetFVectorToLine', 2, 0), # p2, p1 -
- (0x0e, 'SFVTPV', 0, 'SetFVectorToPVector', 0, 0), # - -
- (0x34, 'SHC', 1, 'ShiftContourByLastPt', 1, 0), # c -
- (0x32, 'SHP', 1, 'ShiftPointByLastPoint', -1, 0), # p1, p2, ..., ploopvalue -
- (0x38, 'SHPIX', 0, 'ShiftZoneByPixel', -1, 0), # d, p1, p2, ..., ploopvalue -
- (0x36, 'SHZ', 1, 'ShiftZoneByLastPoint', 1, 0), # e -
- (0x17, 'SLOOP', 0, 'SetLoopVariable', 1, 0), # n -
- (0x1a, 'SMD', 0, 'SetMinimumDistance', 1, 0), # distance -
- (0x0a, 'SPVFS', 0, 'SetPVectorFromStack', 2, 0), # y, x -
- (0x02, 'SPVTCA', 1, 'SetPVectorToAxis', 0, 0), # - -
- (0x06, 'SPVTL', 1, 'SetPVectorToLine', 2, 0), # p2, p1 -
- (0x76, 'SROUND', 0, 'SuperRound', 1, 0), # n -
- (0x10, 'SRP0', 0, 'SetRefPoint0', 1, 0), # p -
- (0x11, 'SRP1', 0, 'SetRefPoint1', 1, 0), # p -
- (0x12, 'SRP2', 0, 'SetRefPoint2', 1, 0), # p -
- (0x1f, 'SSW', 0, 'SetSingleWidth', 1, 0), # n -
- (0x1e, 'SSWCI', 0, 'SetSingleWidthCutIn', 1, 0), # n -
- (0x61, 'SUB', 0, 'Subtract', 2, 1), # n2, n1 (n1 - n2)
- (0x00, 'SVTCA', 1, 'SetFPVectorToAxis', 0, 0), # - -
- (0x23, 'SWAP', 0, 'SwapTopStack', 2, 2), # e2, e1 e1, e2
- (0x13, 'SZP0', 0, 'SetZonePointer0', 1, 0), # n -
- (0x14, 'SZP1', 0, 'SetZonePointer1', 1, 0), # n -
- (0x15, 'SZP2', 0, 'SetZonePointer2', 1, 0), # n -
- (0x16, 'SZPS', 0, 'SetZonePointerS', 1, 0), # n -
- (0x29, 'UTP', 0, 'UnTouchPt', 1, 0), # p -
- (0x70, 'WCVTF', 0, 'WriteCVTInFUnits', 2, 0), # n, l -
- (0x44, 'WCVTP', 0, 'WriteCVTInPixels', 2, 0), # v, l -
- (0x42, 'WS', 0, 'WriteStore', 2, 0), # v, l -
-]
-
-# fmt: on
-
-
-def bitRepr(value, bits):
- s = ""
- for i in range(bits):
- s = "01"[value & 0x1] + s
- value = value >> 1
- return s
-
-
-_mnemonicPat = re.compile(r"[A-Z][A-Z0-9]*$")
-
-
-def _makeDict(instructionList):
- opcodeDict = {}
- mnemonicDict = {}
- for op, mnemonic, argBits, name, pops, pushes in instructionList:
- assert _mnemonicPat.match(mnemonic)
- mnemonicDict[mnemonic] = op, argBits, name
- if argBits:
- argoffset = op
- for i in range(1 << argBits):
- opcodeDict[op + i] = mnemonic, argBits, argoffset, name
- else:
- opcodeDict[op] = mnemonic, 0, 0, name
- return opcodeDict, mnemonicDict
-
-
-streamOpcodeDict, streamMnemonicDict = _makeDict(streamInstructions)
-opcodeDict, mnemonicDict = _makeDict(instructions)
-
-
-class tt_instructions_error(Exception):
- def __init__(self, error):
- self.error = error
-
- def __str__(self):
- return "TT instructions error: %s" % repr(self.error)
-
-
-_comment = r"/\*.*?\*/"
-_instruction = r"([A-Z][A-Z0-9]*)\s*\[(.*?)\]"
-_number = r"-?[0-9]+"
-_token = "(%s)|(%s)|(%s)" % (_instruction, _number, _comment)
-
-_tokenRE = re.compile(_token)
-_whiteRE = re.compile(r"\s*")
-
-_pushCountPat = re.compile(r"[A-Z][A-Z0-9]*\s*\[.*?\]\s*/\* ([0-9]+).*?\*/")
-
-_indentRE = re.compile(r"^FDEF|IF|ELSE\[ \]\t.+")
-_unindentRE = re.compile(r"^ELSE|ENDF|EIF\[ \]\t.+")
-
-
-def _skipWhite(data, pos):
- m = _whiteRE.match(data, pos)
- newPos = m.regs[0][1]
- assert newPos >= pos
- return newPos
-
-
-class Program(object):
- def __init__(self) -> None:
- pass
-
- def fromBytecode(self, bytecode: bytes) -> None:
- self.bytecode = array.array("B", bytecode)
- if hasattr(self, "assembly"):
- del self.assembly
-
- def fromAssembly(self, assembly: List[str] | str) -> None:
- if isinstance(assembly, list):
- self.assembly = assembly
- elif isinstance(assembly, str):
- self.assembly = assembly.splitlines()
- else:
- raise TypeError(f"expected str or List[str], got {type(assembly).__name__}")
- if hasattr(self, "bytecode"):
- del self.bytecode
-
- def getBytecode(self) -> bytes:
- if not hasattr(self, "bytecode"):
- self._assemble()
- return self.bytecode.tobytes()
-
- def getAssembly(self, preserve=True) -> List[str]:
- if not hasattr(self, "assembly"):
- self._disassemble(preserve=preserve)
- return self.assembly
-
- def toXML(self, writer, ttFont) -> None:
- if (
- not hasattr(ttFont, "disassembleInstructions")
- or ttFont.disassembleInstructions
- ):
- try:
- assembly = self.getAssembly()
- except:
- import traceback
-
- tmp = StringIO()
- traceback.print_exc(file=tmp)
- msg = "An exception occurred during the decompilation of glyph program:\n\n"
- msg += tmp.getvalue()
- log.error(msg)
- writer.begintag("bytecode")
- writer.newline()
- writer.comment(msg.strip())
- writer.newline()
- writer.dumphex(self.getBytecode())
- writer.endtag("bytecode")
- writer.newline()
- else:
- if not assembly:
- return
- writer.begintag("assembly")
- writer.newline()
- i = 0
- indent = 0
- nInstr = len(assembly)
- while i < nInstr:
- instr = assembly[i]
- if _unindentRE.match(instr):
- indent -= 1
- writer.write(writer.indentwhite * indent)
- writer.write(instr)
- writer.newline()
- m = _pushCountPat.match(instr)
- i = i + 1
- if m:
- nValues = int(m.group(1))
- line: List[str] = []
- j = 0
- for j in range(nValues):
- if j and not (j % 25):
- writer.write(writer.indentwhite * indent)
- writer.write(" ".join(line))
- writer.newline()
- line = []
- line.append(assembly[i + j])
- writer.write(writer.indentwhite * indent)
- writer.write(" ".join(line))
- writer.newline()
- i = i + j + 1
- if _indentRE.match(instr):
- indent += 1
- writer.endtag("assembly")
- writer.newline()
- else:
- bytecode = self.getBytecode()
- if not bytecode:
- return
- writer.begintag("bytecode")
- writer.newline()
- writer.dumphex(bytecode)
- writer.endtag("bytecode")
- writer.newline()
-
- def fromXML(self, name, attrs, content, ttFont) -> None:
- if name == "assembly":
- self.fromAssembly(strjoin(content))
- self._assemble()
- del self.assembly
- else:
- assert name == "bytecode"
- self.fromBytecode(readHex(content))
-
- def _assemble(self) -> None:
- assembly = " ".join(getattr(self, "assembly", []))
- bytecode: List[int] = []
- push = bytecode.append
- lenAssembly = len(assembly)
- pos = _skipWhite(assembly, 0)
- while pos < lenAssembly:
- m = _tokenRE.match(assembly, pos)
- if m is None:
- raise tt_instructions_error(
- "Syntax error in TT program (%s)" % assembly[pos - 5 : pos + 15]
- )
- dummy, mnemonic, arg, number, comment = m.groups()
- pos = m.regs[0][1]
- if comment:
- pos = _skipWhite(assembly, pos)
- continue
-
- arg = arg.strip()
- if mnemonic.startswith("INSTR"):
- # Unknown instruction
- op = int(mnemonic[5:])
- push(op)
- elif mnemonic not in ("PUSH", "NPUSHB", "NPUSHW", "PUSHB", "PUSHW"):
- op, argBits, name = mnemonicDict[mnemonic]
- if len(arg) != argBits:
- raise tt_instructions_error(
- "Incorrect number of argument bits (%s[%s])" % (mnemonic, arg)
- )
- if arg:
- arg = binary2num(arg)
- push(op + arg)
- else:
- push(op)
- else:
- args = []
- pos = _skipWhite(assembly, pos)
- while pos < lenAssembly:
- m = _tokenRE.match(assembly, pos)
- if m is None:
- raise tt_instructions_error(
- "Syntax error in TT program (%s)" % assembly[pos : pos + 15]
- )
- dummy, _mnemonic, arg, number, comment = m.groups()
- if number is None and comment is None:
- break
- pos = m.regs[0][1]
- pos = _skipWhite(assembly, pos)
- if comment is not None:
- continue
- args.append(int(number))
- nArgs = len(args)
- if mnemonic == "PUSH":
- # Automatically choose the most compact representation
- nWords = 0
- while nArgs:
- while (
- nWords < nArgs
- and nWords < 255
- and not (0 <= args[nWords] <= 255)
- ):
- nWords += 1
- nBytes = 0
- while (
- nWords + nBytes < nArgs
- and nBytes < 255
- and 0 <= args[nWords + nBytes] <= 255
- ):
- nBytes += 1
- if (
- nBytes < 2
- and nWords + nBytes < 255
- and nWords + nBytes != nArgs
- ):
- # Will write bytes as words
- nWords += nBytes
- continue
-
- # Write words
- if nWords:
- if nWords <= 8:
- op, argBits, name = streamMnemonicDict["PUSHW"]
- op = op + nWords - 1
- push(op)
- else:
- op, argBits, name = streamMnemonicDict["NPUSHW"]
- push(op)
- push(nWords)
- for value in args[:nWords]:
- assert -32768 <= value < 32768, (
- "PUSH value out of range %d" % value
- )
- push((value >> 8) & 0xFF)
- push(value & 0xFF)
-
- # Write bytes
- if nBytes:
- pass
- if nBytes <= 8:
- op, argBits, name = streamMnemonicDict["PUSHB"]
- op = op + nBytes - 1
- push(op)
- else:
- op, argBits, name = streamMnemonicDict["NPUSHB"]
- push(op)
- push(nBytes)
- for value in args[nWords : nWords + nBytes]:
- push(value)
-
- nTotal = nWords + nBytes
- args = args[nTotal:]
- nArgs -= nTotal
- nWords = 0
- else:
- # Write exactly what we've been asked to
- words = mnemonic[-1] == "W"
- op, argBits, name = streamMnemonicDict[mnemonic]
- if mnemonic[0] != "N":
- assert nArgs <= 8, nArgs
- op = op + nArgs - 1
- push(op)
- else:
- assert nArgs < 256
- push(op)
- push(nArgs)
- if words:
- for value in args:
- assert -32768 <= value < 32768, (
- "PUSHW value out of range %d" % value
- )
- push((value >> 8) & 0xFF)
- push(value & 0xFF)
- else:
- for value in args:
- assert 0 <= value < 256, (
- "PUSHB value out of range %d" % value
- )
- push(value)
-
- pos = _skipWhite(assembly, pos)
-
- if bytecode:
- assert max(bytecode) < 256 and min(bytecode) >= 0
- self.bytecode = array.array("B", bytecode)
-
- def _disassemble(self, preserve=False) -> None:
- assembly = []
- i = 0
- bytecode = getattr(self, "bytecode", [])
- numBytecode = len(bytecode)
- while i < numBytecode:
- op = bytecode[i]
- try:
- mnemonic, argBits, argoffset, name = opcodeDict[op]
- except KeyError:
- if op in streamOpcodeDict:
- values = []
-
- # Merge consecutive PUSH operations
- while bytecode[i] in streamOpcodeDict:
- op = bytecode[i]
- mnemonic, argBits, argoffset, name = streamOpcodeDict[op]
- words = mnemonic[-1] == "W"
- if argBits:
- nValues = op - argoffset + 1
- else:
- i = i + 1
- nValues = bytecode[i]
- i = i + 1
- assert nValues > 0
- if not words:
- for j in range(nValues):
- value = bytecode[i]
- values.append(repr(value))
- i = i + 1
- else:
- for j in range(nValues):
- # cast to signed int16
- value = (bytecode[i] << 8) | bytecode[i + 1]
- if value >= 0x8000:
- value = value - 0x10000
- values.append(repr(value))
- i = i + 2
- if preserve:
- break
-
- if not preserve:
- mnemonic = "PUSH"
- nValues = len(values)
- if nValues == 1:
- assembly.append("%s[ ] /* 1 value pushed */" % mnemonic)
- else:
- assembly.append(
- "%s[ ] /* %s values pushed */" % (mnemonic, nValues)
- )
- assembly.extend(values)
- else:
- assembly.append("INSTR%d[ ]" % op)
- i = i + 1
- else:
- if argBits:
- assembly.append(
- mnemonic
- + "[%s] /* %s */" % (num2binary(op - argoffset, argBits), name)
- )
- else:
- assembly.append(mnemonic + "[ ] /* %s */" % name)
- i = i + 1
- self.assembly = assembly
-
- def __bool__(self) -> bool:
- """
- >>> p = Program()
- >>> bool(p)
- False
- >>> bc = array.array("B", [0])
- >>> p.fromBytecode(bc)
- >>> bool(p)
- True
- >>> p.bytecode.pop()
- 0
- >>> bool(p)
- False
-
- >>> p = Program()
- >>> asm = ['SVTCA[0]']
- >>> p.fromAssembly(asm)
- >>> bool(p)
- True
- >>> p.assembly.pop()
- 'SVTCA[0]'
- >>> bool(p)
- False
- """
- return (hasattr(self, "assembly") and len(self.assembly) > 0) or (
- hasattr(self, "bytecode") and len(self.bytecode) > 0
- )
-
- __nonzero__ = __bool__
-
- def __eq__(self, other) -> bool:
- if type(self) != type(other):
- return NotImplemented
- return self.__dict__ == other.__dict__
-
- def __ne__(self, other) -> bool:
- result = self.__eq__(other)
- return result if result is NotImplemented else not result
-
-
-def _test():
- """
- >>> _test()
- True
- """
-
- bc = b"""@;:9876543210/.-,+*)(\'&%$#"! \037\036\035\034\033\032\031\030\027\026\025\024\023\022\021\020\017\016\015\014\013\012\011\010\007\006\005\004\003\002\001\000,\001\260\030CXEj\260\031C`\260F#D#\020 \260FN\360M/\260\000\022\033!#\0213Y-,\001\260\030CX\260\005+\260\000\023K\260\024PX\261\000@8Y\260\006+\033!#\0213Y-,\001\260\030CXN\260\003%\020\362!\260\000\022M\033 E\260\004%\260\004%#Jad\260(RX!#\020\326\033\260\003%\020\362!\260\000\022YY-,\260\032CX!!\033\260\002%\260\002%I\260\003%\260\003%Ja d\260\020PX!!!\033\260\003%\260\003%I\260\000PX\260\000PX\270\377\3428!\033\260\0208!Y\033\260\000RX\260\0368!\033\270\377\3608!YYYY-,\001\260\030CX\260\005+\260\000\023K\260\024PX\271\000\000\377\3008Y\260\006+\033!#\0213Y-,N\001\212\020\261F\031CD\260\000\024\261\000F\342\260\000\025\271\000\000\377\3608\000\260\000<\260(+\260\002%\020\260\000<-,\001\030\260\000/\260\001\024\362\260\001\023\260\001\025M\260\000\022-,\001\260\030CX\260\005+\260\000\023\271\000\000\377\3408\260\006+\033!#\0213Y-,\001\260\030CXEdj#Edi\260\031Cd``\260F#D#\020 \260F\360/\260\000\022\033!! \212 \212RX\0213\033!!YY-,\001\261\013\012C#Ce\012-,\000\261\012\013C#C\013-,\000\260F#p\261\001F>\001\260F#p\261\002FE:\261\002\000\010\015-,\260\022+\260\002%E\260\002%Ej\260@\213`\260\002%#D!!!-,\260\023+\260\002%E\260\002%Ej\270\377\300\214`\260\002%#D!!!-,\260\000\260\022+!!!-,\260\000\260\023+!!!-,\001\260\006C\260\007Ce\012-, i\260@a\260\000\213 \261,\300\212\214\270\020\000b`+\014d#da\\X\260\003aY-,\261\000\003%EhT\260\034KPZX\260\003%E\260\003%E`h \260\004%#D\260\004%#D\033\260\003% Eh \212#D\260\003%Eh`\260\003%#DY-,\260\003% Eh \212#D\260\003%Edhe`\260\004%\260\001`#D-,\260\011CX\207!\300\033\260\022CX\207E\260\021+\260G#D\260Gz\344\033\003\212E\030i \260G#D\212\212\207 \260\240QX\260\021+\260G#D\260Gz\344\033!\260Gz\344YYY\030-, \212E#Eh`D-,EjB-,\001\030/-,\001\260\030CX\260\004%\260\004%Id#Edi\260@\213a \260\200bj\260\002%\260\002%a\214\260\031C`\260F#D!\212\020\260F\366!\033!!!!Y-,\001\260\030CX\260\002%E\260\002%Ed`j\260\003%Eja \260\004%Ej \212\213e\260\004%#D\214\260\003%#D!!\033 EjD EjDY-,\001 E\260\000U\260\030CZXEh#Ei\260@\213a \260\200bj \212#a \260\003%\213e\260\004%#D\214\260\003%#D!!\033!!\260\031+Y-,\001\212\212Ed#EdadB-,\260\004%\260\004%\260\031+\260\030CX\260\004%\260\004%\260\003%\260\033+\001\260\002%C\260@T\260\002%C\260\000TZX\260\003% E\260@aDY\260\002%C\260\000T\260\002%C\260@TZX\260\004% E\260@`DYY!!!!-,\001KRXC\260\002%E#aD\033!!Y-,\001KRXC\260\002%E#`D\033!!Y-,KRXED\033!!Y-,\001 \260\003%#I\260@`\260 c \260\000RX#\260\002%8#\260\002%e8\000\212c8\033!!!!!Y\001-,KPXED\033!!Y-,\001\260\005%\020# \212\365\000\260\001`#\355\354-,\001\260\005%\020# \212\365\000\260\001a#\355\354-,\001\260\006%\020\365\000\355\354-,F#F`\212\212F# F\212`\212a\270\377\200b# \020#\212\261KK\212pE` \260\000PX\260\001a\270\377\272\213\033\260F\214Y\260\020`h\001:-, E\260\003%FRX\260\002%F ha\260\003%\260\003%?#!8\033!\021Y-, E\260\003%FPX\260\002%F ha\260\003%\260\003%?#!8\033!\021Y-,\000\260\007C\260\006C\013-,\212\020\354-,\260\014CX!\033 F\260\000RX\270\377\3608\033\260\0208YY-, \260\000UX\270\020\000c\260\003%Ed\260\003%Eda\260\000SX\260\002\033\260@a\260\003Y%EiSXED\033!!Y\033!\260\002%E\260\002%Ead\260(QXED\033!!YY-,!!\014d#d\213\270@\000b-,!\260\200QX\014d#d\213\270 \000b\033\262\000@/+Y\260\002`-,!\260\300QX\014d#d\213\270\025Ub\033\262\000\200/+Y\260\002`-,\014d#d\213\270@\000b`#!-,KSX\260\004%\260\004%Id#Edi\260@\213a \260\200bj\260\002%\260\002%a\214\260F#D!\212\020\260F\366!\033!\212\021#\022 9/Y-,\260\002%\260\002%Id\260\300TX\270\377\3708\260\0108\033!!Y-,\260\023CX\003\033\002Y-,\260\023CX\002\033\003Y-,\260\012+#\020 <\260\027+-,\260\002%\270\377\3608\260(+\212\020# \320#\260\020+\260\005CX\300\033How to Download and Play Dark Riddle on PC
-What is Dark Riddle?
-dark riddle download pc
- A thrilling action game with puzzles and secrets
-A single-player adventure with different characters and creatures
-A challenging gameplay with obstacles, traps, and collectibles
-Why play Dark Riddle on PC?
-Enjoy a larger and better display
Playing Dark Riddle on PC will allow you to enjoy a larger and better display than your phone or tablet. You will be able to see more details and colors of the game's graphics and animations. You will also have a wider view of the game's environment and interface. You will be able to appreciate the game's design and art more on a bigger screen.
Playing Dark Riddle on PC will also give you a faster and smoother performance than your mobile device. You will not have to worry about lagging, crashing, or freezing issues that might ruin your gameplay. You will also not have to deal with battery drain, overheating, or storage problems that might affect your device. You will be able to play the game without any interruptions or distractions.
-Playing Dark Riddle on PC will also let you use keyboard and mouse controls for more accuracy and comfort. You will not have to rely on touch controls that might be inaccurate, unresponsive, or uncomfortable. You will be able to control your character and interact with the game's elements more easily and precisely. You will also be able to customize your key mapping and mouse sensitivity according to your preference. You will have a better gaming experience with keyboard and mouse controls.
-dark riddle game download for pc
-how to play dark riddle on pc
-dark riddle pc emulator
-dark riddle classic download pc
-dark riddle free download for windows
-dark riddle pc version
-dark riddle online game for pc
-dark riddle 2 download pc
-dark riddle pc gameplay
-dark riddle pc requirements
-dark riddle pc mod apk
-dark riddle for pc bluestacks
-dark riddle for windows 10
-dark riddle for mac download
-dark riddle pc cheats
-dark riddle pc hack
-dark riddle pc review
-dark riddle pc controls
-dark riddle pc update
-dark riddle pc full version
-dark riddle offline game for pc
-dark riddle for laptop download
-dark riddle for desktop download
-dark riddle for windows 7
-dark riddle for windows 8
-dark riddle for macbook pro
-dark riddle for macbook air
-dark riddle pc tips and tricks
-dark riddle pc walkthrough
-dark riddle pc guide
-dark riddle pc best settings
-dark riddle pc keyboard and mouse
-dark riddle pc nox player
-dark riddle pc ldplayer
-dark riddle pc memu play
-dark riddle pc gameloop
-dark riddle pc steam
-dark riddle pc epic games store
-dark riddle pc origin
-dark riddle pc gog.com
-dark riddle download for windows xp
-dark riddle download for windows vista
-dark riddle download for windows 11
-dark riddle download for mac os x
-dark riddle download for mac os catalina
-dark riddle download for mac os big sur
-dark riddle download for mac os monterey
Now that you know why playing Dark Riddle on PC is a good idea, you might be wondering how to do it. The answer is simple: you need an emulator. An emulator is a software that allows you to run Android apps and games on your PC or Mac. With an emulator, you can download and play Dark Riddle on PC just like you would on your mobile device. Here are the steps to follow:
-The first step is to choose a reliable and safe emulator that can run Dark Riddle on PC smoothly and securely. There are many emulators available online, but not all of them are trustworthy or compatible. Some emulators might contain malware, spyware, or viruses that might harm your PC or Mac. Some emulators might not support Dark Riddle or other games that you want to play. Some emulators might have poor performance, quality, or features that might affect your gameplay.
-Therefore, you need to do some research and comparison before choosing an emulator. You need to check the emulator's reputation, reviews, ratings, compatibility, security, performance, quality, and features. You need to make sure that the emulator can run Dark Riddle on PC without any problems or risks.
-One of the best emulators that we recommend is LDPlayer. LDPlayer is a free Android emulator for PC that can run Dark Riddle and other games smoothly and safely. LDPlayer has a high reputation, positive reviews, high ratings, wide compatibility, strong security, fast performance, excellent quality, and rich features. LDPlayer can provide you with the best gaming experience on PC.
-The second step is to install the emulator on your PC or Mac. This is a simple and easy process that will not take much time or effort. Here are the steps to follow:
-Congratulations! You have successfully installed LDPlayer on your PC or Mac.
-The third step is to sign in to Google Play Store or download the APK file of Dark Riddle on your PC or Mac. This is also a simple and easy process that will not take much time or effort. Here are the steps to follow:
-Alternatively, you can also download the APK file of Dark Riddle from a trusted source and drag it into LDPlayer. LDPlayer will automatically install it for you.
-Congratulations! You have successfully downloaded and installed Dark Riddle on your PC or Mac.
-The fourth step is
The fourth step is to install and launch Dark Riddle on the emulator. This is the final and most exciting step, as you will be able to play the game on your PC or Mac. Here are the steps to follow:
-Congratulations! You have successfully installed and launched Dark Riddle on your PC or Mac.
-Dark Riddle is a fantastic action game that will keep you entertained and engaged for hours. You will love the game's story, graphics, characters, puzzles, and secrets. You will also love playing the game on your PC or Mac, as you will get a better display, performance, and controls. All you need is an emulator like LDPlayer, and you can download and play Dark Riddle on PC easily and safely. So what are you waiting for? Download LDPlayer and Dark Riddle today and start your adventure!
-Here are some frequently asked questions about Dark Riddle and playing it on PC:
-Yes, Dark Riddle is free to play on both Android devices and PC or Mac with an emulator. However, the game does offer in-app purchases that can enhance your gameplay or unlock more features.
-Yes, Dark Riddle is safe to play on both Android devices and PC or Mac with an emulator. The game does not contain any harmful or inappropriate content that might affect your device or yourself. However, you should always be careful when downloading apps or games from unknown sources, as they might contain malware or viruses. You should also use a reliable and safe emulator like LDPlayer to play Dark Riddle on PC.
-Dark Riddle is a relatively long game that can take you several hours to complete. The game has many levels, puzzles, secrets, and endings that will keep you hooked and curious. The game also has a replay value, as you can try different choices and actions that might lead to different outcomes.
-No, Dark Riddle requires an internet connection to play on both Android devices and PC or Mac with an emulator. The game needs to access online features such as leaderboards, achievements, ads, etc. You also need an internet connection to download and update the game.
-No, Dark Riddle is a single-player game that does not support multiplayer or co-op modes. The game is designed to be a solo adventure that will challenge you to solve the mystery of your neighbor. However, you can still share your progress and achievements with your friends through social media or other platforms.
401be4b1e0WhatsApp is one of the most popular and widely used messaging and calling apps in the world. It allows you to send text messages, voice messages, photos, videos, documents, stickers, GIFs, and more to your contacts. You can also make voice and video calls with high quality and low data usage. But did you know that you can also use WhatsApp on your computer?
-Download ✶ https://urlca.com/2uOc1U
WhatsApp is a free app that uses your phone's internet connection (4G/3G/2G/EDGE or Wi-Fi, as available) to let you message and call friends and family. You can use it on your smartphone, tablet, or desktop. Using WhatsApp on your computer has many benefits, such as:
-To use WhatsApp on your computer, you need to have:
-You have two options for using WhatsApp on your computer:
-Both options are similar in functionality and appearance, but there are some differences. For example, WhatsApp Desktop allows you to use keyboard shortcuts, mute notifications, auto-start on login, etc. WhatsApp Web requires you to keep a browser tab open and may consume more battery power.
-How to download and install WhatsApp on PC in 2020
-WhatsApp for PC: A step-by-step guide to set up and use WhatsApp on your computer
-WhatsApp Web: How to access WhatsApp from your browser and sync with your phone
-How to run WhatsApp on Windows 10 with an emulator
-WhatsApp Desktop: The official app for using WhatsApp on your PC
-How to backup and restore WhatsApp chats on your PC
-How to use WhatsApp on multiple devices with one account
-How to send and receive files, photos, and videos with WhatsApp on your PC
-How to make video and voice calls with WhatsApp on your PC
-How to enable dark mode on WhatsApp for PC
-How to use WhatsApp stickers and emojis on your PC
-How to create and join WhatsApp groups on your PC
-How to mute and block contacts on WhatsApp for PC
-How to update WhatsApp on your PC and get the latest features
-How to fix common WhatsApp problems on your PC
-How to uninstall WhatsApp from your PC
-How to use WhatsApp Business on your PC
-How to secure your WhatsApp account on your PC
-How to transfer WhatsApp data from your phone to your PC
-How to use WhatsApp Web without scanning QR code
-How to download and run WhatsApp on Mac in 2020
-How to use WhatsApp on Linux with a web browser or an app
-How to use WhatsApp on Chromebook with Google Play Store or Chrome extension
-How to use keyboard shortcuts for WhatsApp on your PC
-How to change your WhatsApp profile picture and status on your PC
-How to delete messages and chats on WhatsApp for PC
-How to archive and pin chats on WhatsApp for PC
-How to manage notifications and sounds on WhatsApp for PC
-How to change language and theme settings on WhatsApp for PC
-How to clear cache and storage space on WhatsApp for PC
-How to verify your phone number and email address on WhatsApp for PC
-How to link your Facebook account with WhatsApp for PC
-How to use two-step verification and fingerprint lock on WhatsApp for PC
-How to report spam and abuse on WhatsApp for PC
-How to use live location and share contacts on WhatsApp for PC
-How to use QR codes and invite links for WhatsApp contacts and groups on your PC
-How to use disappearing messages and view once media on WhatsApp for PC
-How to use status updates and stories on WhatsApp for PC
-How to use chat wallpapers and custom notifications on WhatsApp for PC
-How to use broadcast lists and starred messages on WhatsApp for PC
In your computer's browser, go to the WhatsApp Download page. You will see different options for downloading WhatsApp Desktop for different operating systems.
-Select the version that matches your operating system. For example, if you have a Windows 10 64-bit computer, choose "Windows (64-bit)". If you have a Mac OS X 10.11 or newer computer, choose "Mac OS X". The download will start automatically.
-Once the download is complete, open the .exe or .d mg or .zip file and follow the prompts to install WhatsApp Desktop on your computer. The installation process may vary depending on your operating system, but it is usually simple and straightforward. You may need to agree to the terms and conditions, choose a destination folder, create a shortcut, etc.
-After the installation is complete, you can launch WhatsApp Desktop from your desktop, start menu, or applications folder. You will see a QR code on the screen that you need to scan with your phone.
-On your phone, open WhatsApp and tap the menu button (three dots) in the top right corner. Then tap "WhatsApp Web". You will see a camera screen that you need to point at the QR code on your computer. Once the scan is successful, you will be logged in to WhatsApp Desktop.
-Now you can use WhatsApp Desktop to chat and call with your contacts. You will see a familiar interface with your chats on the left and the chat window on the right. You can also access your settings, profile, status, etc. from the menu button in the top left corner. You can send and receive messages, media, documents, stickers, GIFs, etc. as you would on your phone. You can also make voice and video calls by clicking the phone or camera icon in the top right corner of the chat window.
-In your computer's browser, go to web.whatsapp.com. You will see a QR code on the screen that you need to scan with your phone.
-On your phone, open WhatsApp and tap the menu button (three dots) in the top right corner. Then tap "WhatsApp Web". You will see a camera screen that you need to point at the QR code on your computer. Once the scan is successful, you will be logged in to WhatsApp Web.
-Now you can use WhatsApp Web to chat and call with your contacts. You will see a similar interface as WhatsApp Desktop with your chats on the left and the chat window on the right. You can also access your settings, profile, status, etc. from the menu button in the top left corner. You can send and receive messages, media, documents, stickers, GIFs, etc. as you would on your phone. You can also make voice and video calls by clicking the phone or camera icon in the top right corner of the chat window.
-In this article, we have learned how to download and run WhatsApp on the computer. We have seen that there are two options for using WhatsApp on the computer: WhatsApp Desktop and WhatsApp Web. Both options allow you to message and call with your contacts from your computer using your phone's internet connection. Both options have similar functionality and appearance, but there are some differences in terms of features and performance.
-If you want to enjoy WhatsApp on your computer, we recommend that you try both options and see which one suits you better. You can download WhatsApp Desktop from the WhatsApp Download page or use WhatsApp Web from web.whatsapp.com. You will need to scan a QR code with your phone to log in to either option.
-We hope that this article has been helpful and informative for you. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy chatting!
- FAQs Q: Can I use WhatsApp on my computer without my phone? A: No, you cannot use WhatsApp on your computer without your phone. You need to have your phone connected to the internet and logged in to WhatsApp to use WhatsApp on your computer. Q: Can I use WhatsApp on multiple computers at the same time? A: No, you cannot use WhatsApp on multiple computers at the same time. You can only use one instance of WhatsApp Desktop or WhatsApp Web at a time. If you try to log in to another computer, you will be logged out from the previous one. Q: How can I log out of WhatsApp on my computer? A: To log out of WhatsApp on your computer, you can either click the menu button in the top left corner and then click "Log out" or go to WhatsApp on your phone and tap the menu button (three dots) in the top right corner. Then tap "WhatsApp Web" and then tap "Log out from all devices". Q: How can I update WhatsApp on my computer? A: To update WhatsApp on your computer, you can either download the latest version from the WhatsApp Download page or wait for the automatic update notification. If you see a message that says "Update available" on WhatsApp Desktop or WhatsApp Web, you can click it to update to the latest version. Q: How can I secure my WhatsApp account on my computer? A: To secure your WhatsApp account on your computer, you can enable two-step verification and lock your computer when not in use. Two-step verification adds an extra layer of security by requiring a PIN when you register your phone number with WhatsApp. You can enable it from WhatsApp on your phone by tapping the menu button (three dots) in the top right corner. Then tap "Settings" and then tap "Account" and then tap "Two-step verification". Locking your computer prevents unauthorized access to your WhatsApp account on your computer. You can lock your computer by pressing Ctrl+Alt+Delete or Windows+L on Windows, Command+Control+Q on Mac, or Super+L on Linux. 197e85843dIf you are looking for a free and fun platformer game for your Android device, you might want to check out Geometry Dash Lite 2.21 APK. This is a program by RobTop Games AB that lets you jump, fly, and flip your way through various levels of geometric shapes and obstacles. You can also create your own levels and share them with other players online.
-Geometry Dash Lite is a simplified version of the popular game Geometry Dash, which was released in 2013. Geometry Dash Lite has fewer levels, features, and modes than the full version, but it still offers a lot of fun and challenge for players who enjoy rhythm-based platformer games.
-Download ––– https://urlca.com/2uO7ZG
The gameplay of Geometry Dash Lite is simple but addictive. You control a square-shaped character that can jump, fly, and flip in the air. Your goal is to avoid hitting any obstacles or spikes that appear on your way. You can also collect stars and coins to unlock new icons and colors for your character.
-The game is synchronized with the music, which means that you have to time your jumps and movements according to the beat and tempo of the soundtrack. The game also has a practice mode that lets you save checkpoints along the way, so you can resume from where you left off if you die.
-Geometry Dash Lite has several features that make it an enjoyable and engaging game for Android users. Some of these features are:
-If you want to download and install Geometry Dash Lite 2.21 APK on your Android device, you need to follow some simple steps. Before you do that, however, you need to make sure that your device meets the requirements for running the game.
-The requirements for Geometry Dash Lite 2.21 APK are:
-The steps to download and install Geometry Dash Lite 2.21 APK are:
-geometry dash lite 2.21 download for android
-geometry dash lite 2.21 free apk
-geometry dash lite 2.21 mod apk unlimited everything
-geometry dash lite 2.21 latest version apk
-geometry dash lite 2.21 apk filehippo
-geometry dash lite 2.21 robtop games
-geometry dash lite 2.21 update apk
-geometry dash lite 2.21 full version apk
-geometry dash lite 2.21 hack apk
-geometry dash lite 2.21 apk pure
-geometry dash lite 2.21 apk mirror
-geometry dash lite 2.21 apk uptodown
-geometry dash lite 2.21 apk old version
-geometry dash lite 2.21 apk no ads
-geometry dash lite 2.21 apk offline
-geometry dash lite 2.21 apk revdl
-geometry dash lite 2.21 apk rexdl
-geometry dash lite 2.21 apk mob.org
-geometry dash lite 2.21 apk android oyun club
-geometry dash lite 2.21 apk android republic
-geometry dash lite 2.21 apk apkpure.com
-geometry dash lite 2.21 apk happymod.com
-geometry dash lite 2.21 apk moddroid.com
-geometry dash lite 2.21 apk an1.com
-geometry dash lite 2.21 apk apkmody.io
-geometry dash lite 2.21 apk apkmirror.com
-geometry dash lite 2.21 apk apknite.com
-geometry dash lite 2.21 apk apktada.com
-geometry dash lite 2.21 apk apksfree.com
-geometry dash lite 2.21 apk apksfull.com
-geometry dash lite 2.21 apk apksmod.com
-geometry dash lite 2.21 apk apksmash.com
-geometry dash lite 2.21 apk apksnake.com
-geometry dash lite 2.21 apk apksolo.com
-geometry dash lite 2.21 apk apksopo.com
-geometry dash lite 2.21 apk apksparadise.com
-geometry dash lite 2.21 apk apk
Geometry Dash Lite 2.21 APK is a game that can provide you with hours of entertainment and challenge. Whether you are a casual gamer or a hardcore fan of platformer games, you will find something to enjoy in this game. Here are some of the reasons why you should play Geometry Dash Lite 2.21 APK:
-Playing Geometry Dash Lite 2.21 APK can have several benefits for you, such as:
-Playing Geometry Dash Lite 2.21 APK can also have some challenges for you, such as:
-Geometry Dash Lite 2.21 APK is a free and fun platformer game for Android devices that lets you jump, fly, and flip through various levels of geometric shapes and obstacles. You can also create your own levels and share them with other players online. The game has simple but addictive gameplay, synchronized with the music, and several features that make it enjoyable and engaging. However, the game also has some challenges that might make it difficult or frustrating for some players. If you are looking for a game that can challenge your skills, stimulate your senses, and entertain you for hours, you should give Geometry Dash Lite 2.21 APK a try.
-Here are some of the frequently asked questions about Geometry Dash Lite 2.21 APK:
-If you are a fan of football games, you have probably heard of FIFA, the most popular and successful football simulation series by EA Sports. Every year, EA releases a new installment of FIFA with updated rosters, graphics, features, and modes. But how does FIFA 22 compare to its predecessors? Is it worth buying? What are the new and improved aspects of the game? In this article, we will answer these questions and more as we review FIFA 22, the latest entry in the franchise that promises to bring the game even closer to the real thing.
-FIFA 22 is the 29th installment in the FIFA series, which dates back to 1993. It is a football game that lets you play as your favorite teams and players from around the world, in various modes and competitions. You can also create your own custom teams, players, and clubs, and customize them to your liking. You can play solo or with friends, online or offline, in matches, tournaments, leagues, or career modes.
-Download Zip ->->->-> https://urlca.com/2uOaaG
FIFA 22 was released on October 1, 2021 for PlayStation 4, PlayStation 5, Xbox One, Xbox Series X/S, Nintendo Switch, PC, and Stadia. It is developed by EA Vancouver and EA Romania, and published by EA Sports. It features more than 17,000 players, over 700 teams, and more than 30 leagues from around the world. It also includes some of the most prestigious tournaments in football history, such as the UEFA Champions League, the UEFA Europa League, the UEFA Europa Conference League, the CONMEBOL Libertadores, the CONMEBOL Sudamericana, the Premier League, La Liga, Bundesliga, Serie A, Ligue 1, MLS, and more.
-FIFA 22 boasts several new and improved features that make it stand out from previous games in the series. The most notable one is HyperMotion technology, which is exclusive to PlayStation 5, Xbox Series X/S, and Stadia. HyperMotion is a new motion-capture system that uses machine learning to create realistic animations for every player on the pitch. It also enhances player behaviors, reactions, interactions, and emotions. HyperMotion makes FIFA 22 look and feel more authentic than ever before.
-But HyperMotion is not the only innovation in FIFA 22. The game also introduces new gameplay features that change the way you play on the pitch. These include:
-Here is the continuation of the article:
-These new features aim to make FIFA 22 more immersive, dynamic, and fun to play.
-apkrabi fifa 22 download
-apkrabi fifa 22 mod apk
-apkrabi fifa 22 android
-apkrabi fifa 22 mobile
-apkrabi fifa 22 world cup mode
-apkrabi fifa 22 ultimate team
-apkrabi fifa 22 players ratings
-apkrabi fifa 22 gameplay
-apkrabi fifa 22 review
-apkrabi fifa 22 tips and tricks
-apkrabi fifa 22 manager mode
-apkrabi fifa 22 offline
-apkrabi fifa 22 online
-apkrabi fifa 22 cheats and hacks
-apkrabi fifa 22 best teams
-apkrabi fifa 22 icons and heroes
-apkrabi fifa 22 stadiums
-apkrabi fifa 22 kits and badges
-apkrabi fifa 22 coins and points
-apkrabi fifa 22 updates and news
-apkrabi fifa 22 vs pes 2022
-apkrabi fifa 22 vs real soccer
-apkrabi fifa 22 vs dream league soccer
-apkrabi fifa 22 vs score hero
-apkrabi fifa 22 vs soccer stars
-apkrabi fifa 22 free download
-apkrabi fifa 22 full version
-apkrabi fifa 22 cracked apk
-apkrabi fifa 22 premium apk
-apkrabi fifa 22 unlocked apk
-apkrabi fifa 22 latest version
-apkrabi fifa 22 old version
-apkrabi fifa 22 beta version
-apkrabi fifa 22 demo version
-apkrabi fifa 22 release date
-apkrabi fifa 22 system requirements
-apkrabi fifa 22 installation guide
-apkrabi fifa 22 how to play
-apkrabi fifa 22 features and benefits
-apkrabi fifa 22 pros and cons
FIFA 22 also offers a variety of modes to suit your preferences and playstyles. Whether you want to play solo or with friends, online or offline, casually or competitively, there is a mode for you. Some of the modes are:
-Career Mode is one of the most popular and long-running modes in FIFA. It lets you create your own player or manager and lead them to glory in their football career. You can choose from hundreds of clubs and leagues, and make decisions on and off the pitch that affect your performance, reputation, and relationships. You can also scout, sign, train, and sell players, as well as customize your team's tactics, kits, stadium, and more.
-Career Mode in FIFA 22 has been improved with more options and immersion. You can now create your own club from scratch and take them from the lower divisions to the top of the world. You can also enjoy an overhauled player career experience that gives you more ways to progress, achieve, and immerse yourself in your pro's journey through the game.
-Volta Football is a mode that brings back the street football vibe of FIFA Street. It lets you play in various urban locations around the world, with different rules, teams, and styles. You can create your own avatar and customize their appearance, skills, and gear. You can also join forces with other players online or offline, and compete in various modes such as Volta Squads, Volta Story, Volta League, Volta Arcade, and more.
-Volta Football in FIFA 22 returns with more flair and customization. You can now enjoy new locations such as Sydney, Paris, Dubai, Milan, and Cape Town. You can also unlock more items and outfits for your avatar, as well as new skill moves and celebrations. You can also play with or against real-life football stars in Volta Featured Battles.
-Ultimate Team is the most popular and lucrative mode in FIFA. It lets you build your dream team from scratch using players from different clubs, leagues, and nations. You can acquire players through packs, auctions, objectives, rewards, or events. You can also upgrade your players' attributes and chemistry using consumables. You can then compete with other players online or offline in various modes such as Division Rivals, Squad Battles, Friendlies, Drafts, and more.
-Ultimate Team in FIFA 22 introduces FUT Heroes and new ways to play. FUT Heroes are iconic players from the past who have a unique league-specific chemistry that boosts their links with other players from the same league. Some of the FUT Heroes are Mario Gomez, Tim Cahill, Diego Milito, Robbie Keane, Jorge Campos, and more. You can also enjoy new ways to play such as FUT Champions Finals (a revamped version of Weekend League), FUT Co-Op Seasons (a cooperative mode where you can play with a friend), FUT Events (a mode where you can join a team and contribute to global objectives), and more.
-Pro Clubs is a mode where you can create your own virtual pro and join a club with other players online. You can customize your pro's appearance, position, attributes, traits, and skills. You can also customize your club's name, logo, kit, stadium, tactics, and more. You can then play matches against other clubs online in various divisions and cups.
-Pro Clubs in FIFA 22 gets new customization and growth features. You can now choose from more than 30 archetypes for your pro's position and style. You can also unlock perks that enhance your pro's abilities on the pitch. You can also use skill points to improve your pro's attributes in different categories such as pace, shooting, passing, dribbling, defending,
Here is the continuation of the article:
-defending, and physical. You can also use new customization options for your club's logo, kit, stadium, and more.
-FIFA 22 is not a perfect game, and it has some drawbacks that may affect your enjoyment. Some of the drawbacks are:
-While some of the new gameplay features in FIFA 22 are welcome and beneficial, others are either unnecessary or unbalanced. For example, the explosive sprint mechanic can make some players too fast and hard to catch, especially on the wings. The new attacking tactics can also make some formations too defensive or offensive, creating unrealistic scenarios. The goalkeeper rewrite can also make some saves too easy or impossible, depending on the situation.
-One of the biggest criticisms of FIFA games is their reliance on microtransactions, especially in Ultimate Team mode. FIFA 22 is no exception, and it still encourages you to spend real money on FIFA Points, which you can use to buy packs, players, consumables, and other items. While you can earn some of these items through playing the game, the odds of getting high-rated players or rare items are very low, and the prices of some items are very high. This creates a pay-to-win environment where players who spend more money have an advantage over those who don't.
-Another common complaint about FIFA games is their menu design, which is often cluttered and confusing. FIFA 22 is no improvement, and it still has many menus that are hard to navigate or understand. For example, the main menu has too many tabs and icons that are not clearly labeled or explained. The career mode menu has too many submenus and options that are not intuitive or user-friendly. The ultimate team menu has too many screens and pop-ups that are annoying or distracting.
-FIFA 22 is a significant improvement over FIFA 21, and it delivers a new generation of football simulation. The HyperMotion technology and gameplay changes make it feel like a next-gen game, with realistic visuals, animations, and behaviors. The modes are refreshed and offer more variety and fun, with new features such as FUT Heroes, Volta Featured Battles, Pro Clubs archetypes and perks, and Career Mode club creation. However, some issues remain, such as microtransactions, menu design, and some unbalanced or unnecessary mechanics.
-If you are a fan of football games, FIFA 22 is worth buying, as it offers a lot of content and quality for your money. If you are new to football games, FIFA 22 is a good entry point, as it has many modes and options to suit your preferences and skill levels. If you are looking for a realistic, immersive, and enjoyable football game, FIFA 22 is a great choice.
-FIFA 22 is available on PlayStation 4, PlayStation 5, Xbox One, Xbox Series X/S, Nintendo Switch, PC, and Stadia. However, some features such as HyperMotion technology are exclusive to PlayStation 5, Xbox Series X/S, and Stadia.
-The standard edition of FIFA 22 costs $59.99 USD for PlayStation 4, Xbox One, PC (Origin), and Stadia; $69.99 USD for PlayStation 5 and Xbox Series X/S; and $49.99 USD for Nintendo Switch. There are also other editions such as the Ultimate Edition ($99.99 USD) and the Legacy Edition ($39.99 USD) that offer different bonuses and content.
-FIFA 22 is worth buying if you enjoy football games or want to try one for the first time. It offers a lot of content and quality for your money, with realistic graphics, gameplay, and modes. It also has a large and active online community that you can play with or against.
-You can download FIFA 22 from the official website of EA Sports or from the digital store of your platform of choice (such as PlayStation Store, Microsoft Store, Nintendo eShop, Steam, or Google Play). You will need an internet connection to download the game and to access some of its features.
-You can play FIFA 22 online by connecting to EA servers through your platform's online service (such as PlayStation Network or Xbox Live
Here is the continuation of the article:
-or Xbox Live). You will also need an EA account and an online subscription (such as PlayStation Plus or Xbox Live Gold) to play online. You can then choose from various online modes such as Ultimate Team, Volta Football, Pro Clubs, Online Seasons, Online Friendlies, Co-Op Seasons, and more. You can also join online events, tournaments, and challenges that offer rewards and prizes.
401be4b1e0GTA 5 is one of the most popular and amazing games that you can play on your Android device. However, to enjoy this game fully, you need to download the OBB file for GTA 5 along with the Apk file. In this article, we will show you what is OBB file, why you need it, and how to download and install it on your device.
-GTA 5 is a game developed by Rockstar Games that lets you experience the life of a criminal in a fictional city called Los Santos. You can explore the open world, complete missions, interact with other characters, drive vehicles, use weapons, and more. GTA 5 is one of the best-selling games of all time and has received critical acclaim for its graphics, gameplay, story, and online mode.
-Download File ——— https://urlca.com/2uO4W5
GTA 5 is the fifth installment in the Grand Theft Auto series that was released in 2013 for PlayStation 3 and Xbox 360, and later for PlayStation 4, Xbox One, and PC. In 2021, Rockstar Games announced that GTA 5 will be available for Android devices as well. However, unlike other games that you can download directly from the Google Play Store, GTA 5 requires an additional file called OBB file to run properly.
-OBB file stands for Opaque Binary Blob file and it is a data file that contains additional information that is not stored in the Apk file. OBB files are usually used by large games or apps that have high-quality graphics, sound, or video. OBB files are stored in a separate folder on your device's internal or external storage and are accessed by the app when needed.
-GTA 5 is a very large game that has a lot of data that cannot be stored in the Apk file alone. The Apk file only contains the basic information and code that allows the game to run on your device. The OBB file contains the rest of the data such as textures, models, sounds, videos, etc. that make the game look realistic and immersive. Without the OBB file, GTA 5 will not work properly or may not work at all on your device.
-Before you download and install GTA 5 on your Android device, you need to make sure that your device meets the minimum requirements for the game. These are:
-Your device must have Android 4.0 or higher to run GTA 5.
-Your device must have at least 2GB of RAM to run GTA 5 smoothly.
-Your device must have ARMv7 CPU architecture or higher (ARMv8-a compatible) to run GTA 5.
-Your device must have enough storage space to store both the Apk and OBB files of GTA 5. The Apk file size is about 3GB and the OBB file size is about about 35GB. Therefore, you need at least 40GB of free space on your device to install GTA 5.
-How to download obb file for gta 5 for android
-GTA 5 apk obb data latest download for android
-GTA 5 android apk free download with obb file
-GTA 5 apk obb data for mobile android download
-GTA 5 apk obb+data files for android (gta v)
-Download gta 5 apk and obb file for android
-GTA 5 apk obb data highly compressed download for android
-GTA 5 apk obb data offline download for android
-GTA 5 apk obb data full version download for android
-GTA 5 apk obb data mod download for android
-GTA 5 apk obb data no verification download for android
-GTA 5 apk obb data zip download for android
-GTA 5 apk obb data mega download for android
-GTA 5 apk obb data google drive download for android
-GTA 5 apk obb data mediafire download for android
-GTA 5 apk obb data online download for android
-GTA 5 apk obb data update download for android
-GTA 5 apk obb data size download for android
-GTA 5 apk obb data requirements download for android
-GTA 5 apk obb data features download for android
-Download gta 5 lite apk and obb file for android
-Download gta 5 real apk and obb file for android
-Download gta 5 beta apk and obb file for android
-Download gta 5 original apk and obb file for android
-Download gta 5 fan made apk and obb file for android
-Download gta 5 modded apk and obb file for android
-Download gta 5 unlimited money apk and obb file for android
-Download gta 5 cheats apk and obb file for android
-Download gta 5 graphics mod apk and obb file for android
-Download gta 5 san andreas mod apk and obb file for android
-Download gta 5 vice city mod apk and obb file for android
-Download gta 5 liberty city mod apk and obb file for android
-Download gta 5 iron man mod apk and obb file for android
-Download gta 5 spiderman mod apk and obb file for android
-Download gta 5 batman mod apk and obb file for android
-Download gta 5 superman mod apk and obb file for android
-Download gta 5 zombie mod apk and obb file for android
-Download gta 5 car mod apk and obb file for android
-Download gta 5 bike mod apk and obb file for android
-Download gta 5 weapon mod apk and obb file for android
-Download gta 5 skin mod apk and obb file for android
-Download gta 5 map mod apk and obb file for android
-Download gta 5 mission mod apk and obb file for android
-Download gta 5 sound mod apk and obb file for android
-Download gta 5 realistic mod apk and obb file for android
-Best site to download gta 5 apk and obb file for android
-How to install gta 5 apk and obb file on android
-How to play gta 5 on android with apk and obb file
-How to fix gta 5 not working on android with apk and obb file
-How to update gta 5 on android with apk and obb file
GTA 5 is not just a game, it is a masterpiece that offers you a lot of features and options to enjoy. Some of the features of GTA 5 Android Apk are:
-GTA 5 has stunning graphics that make you feel like you are in a real city. The game uses advanced lighting, shadows, reflections, and textures to create a realistic environment. You can see the details of every building, vehicle, character, and object in the game. You can also customize the graphics settings according to your device's performance.
-GTA 5 has an online mode called GTA Online that lets you play with other players from around the world. You can join or create your own crew, participate in various missions, races, heists, deathmatches, and more. You can also buy and customize your own properties, vehicles, weapons, clothes, and accessories. GTA Online is constantly updated with new content and features to keep you entertained.
-GTA 5 has a realistic gameplay that makes you feel like you are living in the game world. You can do whatever you want in the game, such as driving, shooting, fighting, stealing, flying, swimming, diving, parachuting, etc. You can also interact with other characters and objects in the game. The game has a dynamic weather system, day and night cycle, traffic system, radio stations, and more. The game also has a realistic physics engine that makes the game more fun and challenging.
-GTA 5 has an open world that lets you explore every corner of Los Santos and its surrounding areas. You can go anywhere you want in the game, such as the city center, the suburbs, the countryside, the mountains, the desert, the ocean, and more. The game has a lot of places to visit and activities to do in the game. You can also find hidden secrets and easter eggs in the game.
-Now that you know what GTA 5 is and what it offers, you might be wondering how to download and install it on your Android device. Well, don't worry because we will guide you through the process step by step. Just follow these instructions:
-The first thing you need to do is to download the GTA 5 Apk and OBB files from trusted sources. There are many websites that claim to provide these files but some of them may contain viruses or malware that can harm your device or steal your data. Therefore, you need to be careful and only download from reputable sources. One of the best sources to download GTA 5 Apk Obb files is [GTA5Mobile.com]. This website provides you with the latest version of GTA 5 Apk Obb files that are safe and secure.
-The next thing you need to do is to enable unknown sources in your device settings. This will allow you to install apps that are not from the Google Play Store. To do this, go to your device settings > security > unknown sources > enable. This may vary depending on your device model and Android version.
-The next thing you need to do is to install GTA 5 Apk file on your device. To do this, locate the downloaded GTA 5 Apk file on your device using a file manager app. Tap on the file and follow the instructions on the screen to install it.
-The next thing you need to do is to extract GTA 5 OBB file to the Android/OBB folder using a file manager app. To do this, locate the downloaded GTA 5 OBB file on your device using a file manager app. Tap on the file and select extract option. Choose the destination folder as Android/OBB and wait for the extraction process to finish.
-The final thing you need to do is to launch GTA 5 and enjoy the game. To do this, go to your app drawer and tap on the GTA 5 icon. The game will start loading and verify the data files. After that, you will see the main menu of the game. You can choose to play the story mode or the online mode. You can also adjust the settings and controls according to your preference. That's it, you have successfully installed GTA 5 on your Android device and you can enjoy the game.
-GTA 5 is a fantastic game that you can play on your Android device. However, to play this game, you need to download and install the OBB file for GTA 5 along with the Apk file. In this article, we have explained what is OBB file, why you need it, and how to download and install it on your device. We have also provided you with the requirements and features of GTA 5 Android Apk. We hope that this article has helped you and answered your questions. If you have any doubts or queries, feel free to ask us in the comments section below.
-Here are some of the frequently asked questions about GTA 5 Android Apk Obb files:
-A: Yes, GTA 5 Android Apk Obb files are free to download from [GTA5Mobile.com]. However, you may need to complete some surveys or offers to unlock the download links.
-A: Yes, GTA 5 Android Apk Obb files are safe and secure to download and install on your device. They do not contain any viruses or malware that can harm your device or steal your data.
-A: The time it takes to download and install GTA 5 Android Apk Obb files depends on your internet speed and device performance. Generally, it may take from 30 minutes to 2 hours to complete the process.
-A: Yes, you can play GTA 5 offline on your Android device by choosing the story mode option. However, you will need an internet connection to play the online mode.
-A: No, you cannot use cheats or mods in GTA 5 Android Apk as they are not supported by the game. If you try to use them, you may face errors or crashes in the game.
401be4b1e0If you are a fan of basketball games, you might have heard of NBA 2K20, one of the most popular and realistic games in the genre. NBA 2K20 features amazing graphics, gameplay, modes, and customization options that let you create your own player and team. You can play with current or all-time great NBA teams, or compete in streetball tournaments in different locations. You can also enjoy a new story mode that follows your career from high school to the NBA.
-However, NBA 2K20 is not available for free on the Google Play Store. You have to pay a certain amount of money to download and install it on your Android device. But what if you want to play it for free? Is there a way to do that?
-Download File →→→ https://urlca.com/2uObaw
The answer is yes, there is. You can download and install NBA 2K20 APK+data files on your Android device for free. APK stands for Android Package Kit, which is a file format that contains all the necessary components of an app. Data files are additional files that contain game assets, such as graphics, sounds, and settings. By downloading and installing these files, you can bypass the Google Play Store and enjoy NBA 2K20 on your Android device.
-But how do you do that? What are the requirements and steps involved? In this article, we will show you how to download and install NBA 2K20 APK+data files on your Android device in a simple and easy way. Just follow these steps and you will be playing NBA 2K20 in no time.
-Before you start downloading and installing NBA 2K20 APK+data files, you need to make sure that your Android device meets some minimum requirements. These are:
-If your device meets these requirements, you are ready to proceed with the next steps.
-The first step is to download the NBA 2K20 APK and data files from a trusted source. There are many websites that offer these files for free, but not all of them are safe and reliable. Some of them may contain malware, viruses, or outdated versions of the game. Therefore, you need to be careful and choose a reputable website that provides the latest and working files.
-One of the websites that we recommend is APKPure, which is a popular and trusted platform for downloading APK and data files for various Android apps and games. You can download NBA 2K20 APK and data files from this website by following these steps:
-You can also use other websites that offer NBA 2K20 APK and data files, but make sure to check the following things before downloading:
-nba 2k20 mobile apk+data download
-nba 2k20 android apk+obb free
-nba 2k20 apk+data offline mod
-nba 2k20 apk+data highly compressed
-nba 2k20 apk+data latest version
-nba 2k20 apk+data for pc
-nba 2k20 apk+data unlimited money
-nba 2k20 apk+data full unlocked
-nba 2k20 apk+data no verification
-nba 2k20 apk+data gameplay
-nba 2k20 apk+data requirements
-nba 2k20 apk+data size
-nba 2k20 apk+data update
-nba 2k20 apk+data cheats
-nba 2k20 apk+data review
-nba 2k20 apk+data best settings
-nba 2k20 apk+data controller support
-nba 2k20 apk+data multiplayer
-nba 2k20 apk+data my career
-nba 2k20 apk+data run the streets mode
-nba 2k20 apk+data blacktop mode
-nba 2k20 apk+data all star teams
-nba 2k20 apk+data classic teams
-nba 2k20 apk+data legends teams
-nba 2k20 apk+data roster update
-nba 2k20 apk+data download link
-nba 2k20 apk+data google drive
-nba 2k20 apk+data mega.nz
-nba 2k20 apk+data mediafire.com
-nba 2k20 apk+data zippyshare.com
-nba 2k20 apk+data install guide
-nba 2k20 apk+data error fix
-nba 2k20 apk+data mod menu
-nba 2k20 apk+data unlimited vc
-nba 2k20 apk+data hack version
-nba 2k20 apk+data cracked version
-nba 2k20 apk+data premium version
-nba 2k20 apk+data pro version
-nba 2k20 apk+data vip version
-nba 2k20 apk+data original version
-nba 2k20 apk+data official version
-nba 2k20 apk+data safe version
-nba 2k20 apk+data virus free version
-nba 2k20 apk+data malware free version
-nba 2k20 apk+data ad free version
-nba 2k20 apk+data no root version
-nba 2k20 apk+data online version
-nba 2k20 apk+data offline version
The next step is to enable unknown sources on your device. This is a security setting that allows you to install apps from sources other than the Google Play Store. By default, this option is disabled on most Android devices, so you need to enable it manually before installing NBA 2K20 APK file. To do this, follow these steps:
-You have now enabled unknown sources on your device. This will allow you to install NBA 2K20 APK file without any issues. However, you should only install apps from trusted sources and disable unknown sources after installing NBA 2K20 APK file.
-The third step is to install the NBA 2K20 APK file on your device. This is a simple process that involves locating the downloaded file and tapping on it. To install NBA 2K20 APK file on your device, follow these steps:
-You have now installed NBA 2K20 APK file on your device. However, you are not done yet. You still need to extract and copy the data files to the obb folder on your device. This is the final and most important step to play NBA 2K20 on your Android device.
-The last step is to extract and copy the NBA 2K20 data files to the obb folder on your device. The obb folder is a special folder that stores game data for apps that are not downloaded from the Google Play Store. You need to copy the NBA 2K20 data files to this folder in order to load the game assets and settings properly. To do this, follow these steps:
-You have now extracted and copied the NBA 2K20 data files to the obb folder on your device. You are ready to play NBA 2K20 on your Android device.
-In this article, we have shown you how to download and install NBA 2K20 APK+data files on your Android device for free. By following these steps, you can enjoy one of the best basketball games on your mobile device without paying anything. You can play with your favorite NBA teams and players, customize your own character and team, and compete in various modes and challenges.
-NBA 2K20 is a fun and addictive game that will keep you entertained for hours. Whether you want to play solo or with friends, online or offline, NBA 2K20 has something for everyone. You can also update the game regularly to get new features and improvements.
-We hope you found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. We would love to hear from you. And if you liked this article, please share it with your friends who might be interested in playing NBA 2K20 on their Android devices.
-If you are a fan of basketball and want to experience the thrill of playing with your favorite NBA stars and teams on your Android device, then you might want to check out NBA 2K20 APK. This is a modified version of the official NBA 2K20 game that allows you to play it offline without any internet connection. In this article, we will tell you everything you need to know about NBA 2K20 APK, including its features, how to download and install it, its pros and cons, and some alternatives that you can try.
-NBA 2K20 APK is an Android application package that contains the game files of NBA 2K20, a popular basketball simulation game developed by 2K Games. The game features updated graphics, player models, animations, and gameplay mechanics, making it one of the most realistic basketball games available. You can play various game modes, such as Run The Streets, NBA Stories, MyCareer, The Association, and Multiplayer, with current or all-time great NBA teams and players.
-DOWNLOAD ⚹⚹⚹ https://urlca.com/2uOboN
There are several reasons why you might want to download NBA 2K20 APK offline instead of the official version from the Google Play Store. Here are some of them:
-NBA 2K20 APK has many features that make it an exciting and enjoyable game for basketball fans. Here are some of them:
-For the first time in any NBA 2K game, you can take your MyPlayer around the world in a series of 3-on-3 streetball competitions. You can get on a hot streak and takeover the game with greatly improved abilities and attributes. You can also compete against other players for a place on the Ranked Leaderboard or see how far you can go through the Championship.
-You can experience the history of some of the most famous NBA players and teams with 5 new NBA Stories to play through. You can relive or recreate some of the most memorable moments and games in NBA history, such as the 2016 NBA Finals, the 2001 Lakers, and the 1985 Celtics.
-You can build your own custom MyPlayer and go on a personal journey from college to the NBA. You can make choices that affect your path to stardom and interact with various characters, including Idris Elba, Rosario Dawson, and LeBron James. You can also improve your skills and attributes by playing games, practicing, and training.
-You can take full control of a NBA franchise and manage its every aspect, from roster moves, trades, scouting, finances, and game plans. You can play through multiple seasons and try to build a dynasty. You can also create your own custom league with up to 30 teams and adjust various settings and rules.
-nba 2k20 apk mod offline free download
-nba 2k20 apk obb offline free download
-nba 2k20 apk data offline free download
-nba 2k20 apk android offline free download
-nba 2k20 apk full version offline free download
-nba 2k20 apk latest version offline free download
-nba 2k20 apk unlimited money offline free download
-nba 2k20 apk no verification offline free download
-nba 2k20 apk cracked offline free download
-nba 2k20 apk hack offline free download
-nba 2k20 apk for pc offline free download
-nba 2k20 apk for ios offline free download
-nba 2k20 apk for tablet offline free download
-nba 2k20 apk for mobile offline free download
-nba 2k20 apk for laptop offline free download
-nba 2k20 apk for windows offline free download
-nba 2k20 apk for mac offline free download
-nba 2k20 apk for chromebook offline free download
-nba 2k20 apk for firestick offline free download
-nba 2k20 apk for smart tv offline free download
-nba 2k20 apk gameplay offline free download
-nba 2k20 apk graphics offline free download
-nba 2k20 apk update offline free download
-nba 2k20 apk patch offline free download
-nba 2k20 apk cheats offline free download
-nba 2k20 apk tips offline free download
-nba 2k20 apk tricks offline free download
-nba 2k20 apk guide offline free download
-nba 2k20 apk review offline free download
-nba 2k20 apk rating offline free download
-nba 2k20 apk best settings offline free download
-nba 2k20 apk best players offline free download
-nba 2k20 apk best teams offline free download
-nba 2k20 apk best modes offline free download
-nba 2k20 apk best stories offline free download
-nba 2k20 apk best soundtrack offline free download
-nba 2k20 apk how to play offline free download
-nba 2k20 apk how to install offline free download
-nba 2k20 apk how to update offline free download
-nba 2k20 apk how to unlock offline free download
-nba 2k20 apk how to customize offline free download
-nba 2k20 apk how to earn money offline free download
-nba 2k20 apk how to fix errors offline free download
-nba 2k20 apk where to find offline free download
-nba 2k20 apk where to get offline free download
-nba 2k20 apk where to buy offline free download
-nba 2k20 apk where to watch offline free download
-nba 2k20 apk where to stream offline free download
-nba 2k20 apk where to share offline free download
You can play online with or against other players in various modes, such as Quick Match, Ranked Match, Blacktop, and Online Association. You can also join or create your own crew with up to 10 players and compete in 5-on-5 matches with other crews.
-You can enjoy a diverse and dynamic soundtrack featuring songs from Drake, Diplo, T-Pain, Billie Eilish, Post Malone, and more. You can also discover new music from emerging artists through the UnitedMasters platform.
-If you want to play NBA 2K20 APK offline on your Android device, you need to follow these steps:
-You need to download two files: the APK file, which is the application file, and the OBB file, which is the data file. You can find these files from various sources on the internet, but make sure they are safe and compatible with your device. For example, you can download them from this link: [NBA 2K20 APK + OBB].
-Before you install the APK file, you need to enable the installation of apps from unknown sources on your device. To do this, go to Settings > Security > Unknown Sources and toggle it on. Then, locate the downloaded APK file on your device and tap on it to start the installation process. Follow the instructions on the screen and wait for it to finish.
-After you install the APK file, you need to extract the OBB file using a file manager app or a zip extractor app. You can find these apps on the Google Play Store or download them from other sources. Once you extract the OBB file, you need to copy it to the following location on your device: Android > obb > com.t2ksports.nba2k20and. Make sure that the OBB file is inside a folder named com.t2ksports.nba2k20and.
-Now that you have installed both the APK and OBB files, you are ready to launch the game and play it offline. Just tap on the game icon on your home screen or app drawer and start playing. You can access all the features and content of the game without any internet connection.
-NBA 2K20 APK offline has many advantages and disadvantages that you should consider before downloading it. Here are some of them:
-NBA 2K20 APK offline delivers a smooth and lifelike gameplay experience that captures the essence of basketball. The game features improved animations, physics, lighting, shadows, and textures that make every move and shot look natural and authentic. The game also has a revamped control scheme that gives you more options and precision when dribbling, passing, shooting, defending, and rebounding.
-NBA 2K20 APK offline has a compelling and immersive MyCareer mode that lets you create your own legend in the NBA. The game has a well-written and acted story that features star-studded cast members like Idris Elba, Rosario Dawson, LeBron James, Anthony Davis, and more. The game also has a branching storyline that changes based on your choices and actions. You can also customize your MyPlayer with various hairstyles, tattoos, accessories, and clothing.
-NBA 2K20 APK offline has a more balanced and rewarding progression system that makes it easier and faster to level up your MyPlayer. The game has reduced the amount of VC (virtual currency) required to upgrade your attributes and skills, and increased the amount of VC earned from playing games and completing challenges. The game also has a new dynamic potential feature that allows your MyPlayer to improve beyond their initial ratings based on their performance and consistency.
-NBA 2K20 APK offline does not have many significant changes or innovations compared to the previous NBA 2K games. The game mostly reuses and tweaks the existing features and modes, such as MyTeam, My Neighborhood, MyGM, and MyLeague. The game also has some bugs and glitches that affect the gameplay and performance.
-NBA 2K20 APK offline has the same My Neighborhood mode as NBA 2K19, which is a social hub where you can interact with other players, shop for items, play mini-games, and access various game modes. The game does not have any new or improved locations, activities, or events in the My Neighborhood mode, making it boring and repetitive.
-NBA 2K20 APK offline has a controversial MyTeam mode that encourages gambling and spending real money. The game has added new features such as slot machines, prize wheels, ball drops, and card packs that are based on luck and randomness. The game also has a pay-to-win system that favors players who spend more money on buying VC and acquiring better cards.
-If you are looking for other basketball games that you can play offline on your Android device, here are some alternatives that you can try:
-This is a simple but fun basketball game that lets you swipe your finger to shoot hoops. You can play various modes, such as Arcade, Time Attack, Tournament, and Multiplayer. You can also customize your player with different outfits, balls, and accessories.
-This is a multiplayer basketball game that lets you challenge other players online or offline in 1-on-1 matches. You can show off your skills and tricks by dribbling, feinting, shooting, blocking, and stealing. You can also unlock new courts, balls, and items for your player.
-This is a arcade-style basketball game that lets you play 2-on-2 matches against the computer or another player on the same device. You can perform dunks, alley-oops, crossovers, and blocks with easy controls. You can also upgrade your players and coaches with coins earned from winning matches.
-NBA 2K20 APK offline is a great option for basketball fans who want to play the best basketball game on their Android device without any internet connection. The game has many features and modes that offer realistic and immersive gameplay experience. However, the game also has some drawbacks that might disappoint some players, such as lack of innovation, copy-paste content, and gambling elements. If you are looking for other basketball games that you can play offline, you can try Swipe Basketball 2, Basketball Stars, or Basketball Battle.
-Do you love playing block-style games with your friends? Do you want to experience a sandbox game that lets you create, share, and explore different worlds? If yes, then you should try Blockman Go, a free app that includes minigames, chatting, and making friends. You can play various block style minigames here.
-Download File ✒ ✒ ✒ https://urlca.com/2uO9Ty
But what if you want to play Blockman Go on your PC instead of your mobile device? Is it possible to enjoy this fun game on a larger screen with better graphics and controls? The answer is yes! In this article, we will show you how to download Blockman Go for PC using an emulator software. We will also tell you what Blockman Go is, why you should play it on PC, what are some features of the game, and what are some alternatives to Blockman Go. Let's get started!
-Blockman Go is an arcade game developed by Blockman GO Studio. It is available for Android devices on Google Play Store and for iOS devices on App Store. It is also compatible with Windows 10 devices through Microsoft Store. According to the official website, Blockman Go is:
-Blockman Go allows you to join or create rooms with your friends or other players from all over the world. You can chat with them using text or voice messages, send emojis, stickers, or gifts, and add them as friends. You can also join clans or guilds to participate in clan wars or events.
Blockman Go gives you the freedom to create your own worlds using blocks and items. You can build anything you can imagine, from houses, castles, gardens, to cities, islands, or planets. You can also decorate your worlds with furniture, plants, animals, or NPCs. You can share your creations with other players or visit their worlds to see what they have made.
-How to download blockman go on pc for free
-Blockman go pc download windows 10
-Blockman go pc version download
-Blockman go blocky mods download pc
-Download blockman go mod apk for pc
-Blockman go online play on pc
-Blockman go pc game download
-Blockman go emulator for pc
-Blockman go download for laptop
-Blockman go download for mac
-Blockman go pc requirements
-Blockman go pc gameplay
-Blockman go pc controls
-Blockman go pc hack
-Blockman go pc cheat engine
-Blockman go bed wars download pc
-Blockman go sky block download pc
-Blockman go anime fighting simulator download pc
-Blockman go egg war download pc
-Blockman go free city download pc
-Download blockman go for windows 7
-Download blockman go for windows 8
-Download blockman go for windows xp
-Download blockman go for linux
-Download blockman go for ubuntu
-Download blockman go offline installer for pc
-Download blockman go latest version for pc
-Download blockman go beta version for pc
-Download blockman go old version for pc
-Download blockman go update for pc
-Download blockman go launcher for pc
-Download blockman go skins for pc
-Download blockman go maps for pc
-Download blockman go mods for pc
-Download blockman go texture packs for pc
-Download blockman go shaders for pc
-Download blockman go resource packs for pc
-Download blockman go server software for pc
-Download blockman go plugins for pc
-Download blockman go scripts for pc
-Download blockman go tools for pc
-Download blockman go editor for pc
-Download blockman go studio for pc
-Download blockman go sandbox mode for pc
-Download blockman go creative mode for pc
-Download blockman go survival mode for pc
-Download blockman go multiplayer mode for pc
-Download blockman go single player mode for pc
-Download blockman go custom games for pc
-Download blockman go mini games for pc
Blockman Go offers a variety of minigames that you can play with your friends or other players. You can choose from action, adventure, role playing, strategy, simulation, and more. Some of the popular minigames are Bed Wars, Egg War, Sky Block, Anime Fighting Simulator, Build and Shoot, and WWE School Simulator. You can also find new minigames every week on the app. Each minigame has its own rules, objectives, and rewards. You can also create your own minigames using the game editor.
While Blockman Go is designed for mobile devices, you can also play it on your PC using an emulator software. An emulator is a program that allows you to run Android or iOS apps on your computer. There are many benefits of playing Blockman Go on PC, such as:
-Playing Blockman Go on PC will give you a better visual experience than playing on a small screen. You can see more details and colors of the blocks and the worlds. You can also adjust the resolution and the graphics settings to suit your preferences.
Playing Blockman Go on PC will also give you more control over the game. You can use your keyboard and mouse to move, aim, shoot, jump, and interact with the game. You can also customize the key mapping and the sensitivity to fit your style. You will have an advantage over other players who use touch controls.
Playing Blockman Go on PC will also allow you to access other apps and tools on your computer while playing. You can use your browser, chat apps, video players, music players, or any other programs that you need. You can also record your gameplay, take screenshots, or stream your game online using various software.
To download Blockman Go for PC, you will need an emulator software that can run Android or iOS apps on your computer. There are many emulators available online, but we recommend using BlueStacks, MuMu Player, or MEmu as they are easy to use and compatible with most games. Here are the steps to download Blockman Go for PC using an emulator:
-You can download the emulator from their official websites or from other sources. Make sure you have enough space and system requirements to run the emulator smoothly. Follow the instructions to install the emulator on your PC.
After installing the emulator, launch it and sign in to your Google account. This will allow you to access Google Play Store and download apps from there. You can find Blockman Go in the app center or by typing its name in the search bar.
Once you find Blockman Go, click on it and install it on your emulator. It may take a few minutes depending on your internet speed and device performance. After installing Blockman Go, you can start playing it on your PC by clicking on its icon.
Blockman Go is a fun and exciting game that offers many features for its players. Some of these features are:
-Blockman Go allows you to create your own avatar using different blocks and items. You can change your hair style, skin color, eye color, clothes, shoes, hats, glasses, masks, wings, tails, and more. You can also buy more accessories and clothes from the shop using gold or diamonds.
Blockman Go enables you to communicate with other players using text or voice messages. You can chat with them in public rooms or private messages. You can also send them emojis, stickers, or gifts to express your feelings. You can add them as friends and join their rooms or invite them to yours.
Blockman Go rewards you with gold for playing minigames. You can use gold to buy items from the shop, such as accessories, clothes, furniture, blocks, or game tickets. You can also earn diamonds by completing tasks, watching ads, or buying them with real money. Diamonds can be used to buy premium items or VIP membership.
Blockman Go offers a wide range of minigames that you can play with your friends or other players. You can choose from different genres and themes, such as action, adventure, role playing, strategy, simulation, and more. You can also find new minigames every week on the app. Each minigame has its own rules, objectives, and rewards. You can also create your own minigames using the game editor.
If you are looking for more games like Blockman Go, you can try some of these alternatives:
-Name | -Description | -
---|---|
Minetest |
-Minetest is an open source voxel game engine that contains a wide variety of features. You can create and explore infinite worlds made of blocks, craft items and tools, build structures and machines, fight monsters and other players, and more. You can also download mods and texture packs to customize your game. Minetest is available for Windows, Linux, Mac OS X, Android, and iOS devices. |
-
Roblox |
-Roblox is a popular online game platform and game creation system. You can play millions of games created by other users or create your own games using Roblox Studio. You can also customize your avatar with clothes and accessories, chat and socialize with other players, join groups and communities, and earn Robux by selling your creations or buying premium membership. Roblox is available for Windows, Mac OS X, iOS, Android, Xbox One, and Oculus Rift devices. |
-
MineClone 2 |
-MineClone 2 is a free and open source Minecraft clone that runs on Minetest engine. It aims to be a faithful recreation of Minecraft in terms of gameplay, graphics, sounds, and features. You can play in survival or creative mode, mine blocks and resources, craft items and tools, build structures and farms, fight enemies and bosses, explore biomes and dungeons, and more. MineClone 2 is available for Windows, Linux, Mac OS X, Android, and iOS devices. |
-
Creativerse: The Definitive Edition |
-Creativerse: The Definitive Edition is a sandbox adventure game that lets you explore a vast world of blocks. You can collect resources and craft items, build houses and castles, tame animals and pets, fight monsters and bosses, complete quests and achievements, and more. You can also play with your friends online or offline in co-op mode. Creativerse: The Definitive Edition is available for Windows devices. |
-
LEGO Worlds |
-LEGO Worlds is a sandbox game developed by Traveller's Tales and published by Warner Bros. Interactive Entertainment. You can build anything you can imagine using LEGO bricks and pieces. You can also explore different worlds filled with LEGO characters and creatures. You can play solo or with your friends online or offline in co-op mode. LEGO Worlds is available for Windows, Xbox One, PlayStation 4, and Nintendo Switch devices. |
-
Blockman Go is a fun and exciting game that lets you play, craft, and share your experiences with your friends or other players. You can play various block style minigames with different genres and themes, customize your avatar with fashionable accessories and clothes, chat and meet new friends from all over the world, earn gold by playing minigames and use it to buy items, and explore the wonderland of minigames and discover new adventures every day.
-If you want to enjoy this game on your PC, you can download Blockman Go for PC using an emulator software. This will allow you to enjoy a larger screen and better graphics, use keyboard and mouse controls for more accuracy and convenience, and access thousands of productivity apps and tools on your computer.
-If you are looking for more games like Blockman Go, you can try some of the alternatives we have mentioned above, such as Minetest, Roblox, MineClone 2, Creativerse: The Definitive Edition, and LEGO Worlds. They are all sandbox games that offer similar features and gameplay to Blockman Go.
-We hope this article has helped you learn more about Blockman Go and how to download it for PC. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!
-Yes, Blockman Go is free to play. You can download it from Google Play Store, App Store, or Microsoft Store without paying anything. However, some items and features may require in-app purchases or premium membership.
Yes, Blockman Go is safe to play. It has been rated 12+ by Google Play Store and 9+ by App Store for moderate violence, mild horror, infrequent mild profanity or crude humor, infrequent mild sexual content or nudity, infrequent mild mature or suggestive themes, infrequent mild alcohol, tobacco or drug use or references. It also has parental controls and privacy settings that allow you to restrict or block certain content or users.
To update Blockman Go, you need to check if there is a new version available on the app store where you downloaded it from. If there is, you can tap on the update button and wait for the download and installation to finish. You can also enable automatic updates on your device settings to get the latest version of Blockman Go whenever it is released.
To contact Blockman Go support, you can visit their official website and click on the "Contact Us" button at the bottom of the page. You can also email them at service@blockmango.net or follow them on their social media accounts on Facebook, Twitter, Instagram, YouTube, or Discord. They will try to respond to your queries or issues as soon as possible.
To delete Blockman Go, you need to uninstall it from your device. You can do this by long-pressing the app icon and tapping on the uninstall option. You can also go to your device settings and find the app in the list of installed apps. Then tap on it and select the uninstall option. This will remove Blockman Go from your device along with its data and cache.
DOWNLOAD — https://ssurll.com/2uzxgX
Download 🗹 https://gohhs.com/2uFTuU
Download File ☆ https://gohhs.com/2uFU3g
counter-strike: global offensive (cs: go) is the latest installment of the series. developed by valve corporation, it was released on 26 october 2012, for microsoft windows, xbox 360, playstation 3, and linux. counter-strike: global offensive includes the game, counter-strike: source, the original mod by valve, and a huge amount of new maps and weapons. download counter strike: global offensive for pc and play counter-strike: global offensive free of charge. counter-strike: global offensive is a mod for counter-strike: source, developed by valve corporation, to be released in october 2012. counter-strike: global offensive is a remake of counter-strike: source. it is a remake of the original mod by valve called counter-strike: source, which was made available for download in november 2003. the game is available in four different versions. each version is available for different platforms. the standalone version of the game is available for windows. the standalone version of the game is available for windows pc and xbox 360 consoles. the steam version is available for windows, xbox 360 and the playstation 3. in addition to the counter-strike: global offensive standalone version, the steam version can be played through steam. counter-strike: global offensive is a remake of the original mod by valve called counter-strike: source, which was made available for download in november 2003.
-DOWNLOAD › https://gohhs.com/2uFVij
in the meantime, someone has started to port the original half-life game to source engine. half-life 2 is already a full-blown, multi-player game with hundreds of available maps, characters, and weapons. it also boasts a very advanced physics engine that allows for many challenging gameplay dynamics. while source is much easier to learn than unreal, it does take some more time to master, as it differs significantly from the traditional unreal engine.
899543212bFamily isnt who youre born with, its who you die for.
After years of combat in Vietnam, Lincoln Clay knows this truth: family isnt who youre born with, its who you die for. When his surrogate family, the black mob, is betrayed and wiped out by the Italian Mafia, Lincoln builds a new family on the ashes of the old and blazes a path of military-grade revenge and redemption through the Mafioso responsible.
-Billed as Swedish House Mafia x the Weeknd, the set was essentially split into two halves, with the former opening and roaring through a tight set of their own hits, then performing briefly with the Weeknd for a couple of the recent songs theyve released together, then ceding the stage to him for a tight megamix of his songs, ranging from global smashes like Blinding Lights, I Cant Feel My Face and Starboy to his verses on high-profile collabs with Kanye West, Drake, Future and Ty Dolla $ign (who were not present), Hurricane, Crew Love, Low Life and Or Nah, respectively.
-Download Zip ✅ https://gohhs.com/2uFUzU
Part one of the Mafia crime saga - 1930s, Lost Heaven, IL
Re-made from the ground up, rise through the ranks of the Mafia during the Prohibition era of organized crime. After a run-in with the mob, cab driver Tommy Angelo is thrust into a deadly underworld. Initially uneasy about falling in with the Salieri crime family, Tommy soon finds that the rewards are too big to ignore.
The set opened with the fan-favorite One After 909, with the band spinning gently back and forth across the stage. The crowd was awed by the complex and slyly pounding ritualistic jazz that flowed from the Swedes and then suddenly erupted into a throbbing, baying rave uproar, with towering billows of sound filling the massive space. For a second, it looked like they might be setting the stage for something truly special.
When the next song, Mary, came out and the crowd went wild, the effect was stunning. But only for a second. The headlining stage was freezing and the tent was not as big as the Radiohead tent. And even though Good Times was the next song, the group was already setting up and, well, House Mafia couldn't just do a four-minute encore. Except they could. They did, and what a way to cap this night, one that had been admittedly lacking in memorable moments until the closing set.
Download Zip ✺✺✺ https://gohhs.com/2uFUjt
If you are looking for a fun and action-packed game that will keep you entertained for hours, you might want to try Robokill, a top-view arcade shooter game that lets you control a robot that has to fight against an army of hostile robots. In this article, we will show you how to free download Robokill full version and enjoy its features.
-Robokill is a Flash-powered game developed by RockSolid Arcade that was released in 2008. It is a top-view arcade shooter game that combines elements of RPG and sci-fi. The game has two versions: Robokill: Titan Prime and Robokill 2: Leviathan Five. Both versions have similar gameplay and graphics, but different settings and levels.
-DOWNLOAD –––––>>> https://urlca.com/2uDbUI
The game's story revolves around your robot that has to liberate a space station from a hostile robot army. You have to clear out every room of the station by shooting and destroying all the enemies. Along the way, you can collect cash, weapons, items and experience points that will help you upgrade your robot and make it more powerful. You can also buy better weapons and items from the shop if you have enough money.
-The game has simple controls and gameplay. You use the arrow keys or WASD keys to move your robot and the mouse to aim and shoot. You can also use the number keys or the mouse wheel to switch weapons. The game has stunning graphics and awesome soundtrack that create an immersive atmosphere. The game also has smart enemy AI that will challenge your skills and reflexes.
-Robokill is a freeware game that you can download from various websites. However, some websites may offer only the demo version or require you to register or pay before downloading. To avoid these hassles, we recommend you to download Robokill full version from Softpedia, a trusted website that offers free software downloads.
-To download Robokill full version from Softpedia, follow these steps:
-Congratulations! You have successfully downloaded Robokill full version for free. Enjoy playing this amazing game and have fun!
-Robokill is a top-view arcade shooter game that will keep you hooked for hours with its action-packed gameplay, stunning graphics and awesome soundtrack. You can free download Robokill full version from Softpedia, a trusted website that offers free software downloads. Follow our simple steps above and start playing this amazing game right away!
-``` d5da3c52bfArticle with HTML Formatting | -
---|
- Flute Ringtone Download Love: How to Find and Enjoy Beautiful Flute Sounds for Your Phone-Do you love the sound of a flute? Do you want to make your phone more unique and pleasant with flute ringtones? If so, you are not alone. Flute ringtones are one of the most popular types of ringtones among people who appreciate music and nature. -Flute ringtones are melodies or tunes that are played by a flute, which is a wind instrument that produces sound by blowing air across an opening. Flute ringtones can be soothing, relaxing, uplifting, or romantic, depending on the style and mood of the music. -flute ringtone download loveDownload ❤ https://urllie.com/2uNweJ - In this article, you will learn about different types of flute ringtones, how to download them for free or for a fee, how to set them as your phone's ringtone, and how to enjoy them in various ways. Whether you are looking for a classical flute ringtone, a romantic flute ringtone, a Bollywood flute ringtone, or any other kind of flute ringtone, you will find something that suits your taste and personality. -Types of Flute Ringtones-There are many types of flute ringtones available on the internet, but here are some of the most common and popular ones: -Classical Flute Ringtones-If you are a fan of classical music, you will love classical flute ringtones. These are ringtones that feature flute solos or flute parts from famous classical compositions, such as Mozart's Flute Concerto in G Major, Bach's Suite No. 2 in B Minor, or Vivaldi's Flute Concerto in D Major. Classical flute ringtones are elegant, sophisticated, and timeless. They can make you feel calm, inspired, or joyful. -Romantic Flute Ringtones-If you are looking for a flute ringtone that expresses your love or romance, you will love romantic flute ringtones. These are ringtones that feature flute melodies that are soft, sweet, and sentimental. They can be from romantic songs, movies, or TV shows, such as Titanic's My Heart Will Go On, The Notebook's Main Theme, or Game of Thrones' The Rains of Castamere. Romantic flute ringtones are perfect for setting the mood for a date, a proposal, or a wedding. -Bollywood Flute Ringtones-If you are a fan of Bollywood movies and music, you will love Bollywood flute ringtones. These are ringtones that feature flute tunes from popular Bollywood songs, such as Dilwale Dulhania Le Jayenge's Tujhe Dekha To Ye Jaana Sanam, Kabhi Khushi Kabhie Gham's Suraj Hua Maddham, or Dhadak's Zingaat. Bollywood flute ringtones are catchy, lively, and colorful. They can make you feel happy, energetic, or nostalgic. -flute ringtone download love mp3 Instrumental Flute Ringtones-If you prefer instrumental music over vocal music, you will love instrumental flute ringtones. These are ringtones that feature flute music that is not accompanied by any lyrics or singing. They can be from various genres, such as jazz, blues, rock, or folk. Some examples of instrumental flute ringtones are Jethro Tull's Locomotive Breath, Herbie Mann's Memphis Underground, or Ian Anderson's Bourée. Instrumental flute ringtones are cool, creative, and diverse. They can showcase the versatility and skill of the flute player. -Other Flute Ringtones-Of course, there are many other types of flute ringtones that you can explore and enjoy. For instance, you can find flute ringtones that are inspired by different cultures and traditions, such as Native American flute ringtones, Chinese flute ringtones, or Irish flute ringtones. You can also find flute ringtones that are based on different themes and moods, such as nature flute ringtones, meditation flute ringtones, or funny flute ringtones. The possibilities are endless! -How to Download Flute Ringtones-Now that you know about the different types of flute ringtones, you might be wondering how to download them for your phone. There are two main ways to do this: using websites or using apps. -Websites that offer free or paid flute ringtones-One way to download flute ringtones is to use websites that offer free or paid downloads of various ringtones. Some examples of such websites are Zedge, Myxer, and Mobile9. These websites have large collections of flute ringtones that you can browse by category, genre, or popularity. You can listen to the previews of the ringtones before downloading them. You can also rate, comment, or share the ringtones with others. To download the ringtones, you need to register for a free account on the website and follow the instructions. Some websites may charge a fee for certain ringtones or require you to complete a survey or an offer before downloading them. -Apps that allow you to create or customize flute ringtones-Another way to download flute ringtones is to use apps that allow you to create or customize your own ringtones. Some examples of such apps are Ringtone Maker, Audiko, and MP3 Cutter and Ringtone Maker. These apps let you use your own music files or recordings, or choose from a library of flute sounds and music. You can edit, trim, mix, or add effects to the ringtones. You can also assign different ringtones to different contacts or notifications. To download the ringtones, you need to install the app on your phone and follow the instructions. Some apps may have in-app purchases or ads that you can remove by paying a fee. -How to Set Flute Ringtones as Your Default or Contact-Specific Ringtone-Once you have downloaded your favorite flute ringtones, you might want to set them as your default or contact-specific ringtone. This means that your phone will play the flute ringtone whenever you receive a call or a message, or when a specific person calls or texts you. To do this, you need to follow these steps: -
How to Enjoy Flute Ringtones-Now that you have set your flute ringtones, you might be wondering how to enjoy them in various ways. Here are some tips and suggestions on how to make the most of your flute ringtones: -Tips on how to choose the right flute ringtone for your mood or occasion-Flute ringtones can have different effects on your mood or occasion, depending on the style and mood of the music. For instance, if you are feeling stressed or anxious, you might want to choose a soothing or relaxing flute ringtone, such as a classical or nature flute ringtone. If you are feeling happy or cheerful, you might want to choose a lively or upbeat flute ringtone, such as a Bollywood or instrumental flute ringtone. If you are feeling romantic or sentimental, you might want to choose a sweet or emotional flute ringtone, such as a romantic or movie flute ringtone. -You can also choose your flute ringtone based on the occasion or event that you are attending or hosting. For example, if you are going to a formal or professional event, you might want to choose a elegant or sophisticated flute ringtone, such as a classical or jazz flute ringtone. If you are going to a casual or fun event, you might want to choose a cool or creative flute ringtone, such as a rock or folk flute ringtone. If you are going to a special or festive event, you might want to choose a catchy or colorful flute ringtone, such as a Bollywood or instrumental flute ringtone. -Suggestions on how to mix and match flute ringtones with other sounds or music-Flute ringtones can also be mixed and matched with other sounds or music to create a unique and personalized ringtone. For example, you can combine a flute ringtone with a drum beat, a guitar riff, a piano melody, or a vocal track. You can also blend a flute ringtone with a sound effect, such as a bird chirp, a water splash, a bell ring, or a whistle blow. You can use apps that allow you to create or customize ringtones to do this, or you can use online tools that let you mix and match different sounds and music. -Ideas on how to share or gift flute ringtones to your loved ones or friends-Flute ringtones can also be shared or gifted to your loved ones or friends as a way of expressing your feelings or appreciation. For example, you can send a flute ringtone to your partner as a romantic gesture, to your family as a greeting, to your friend as a joke, or to your colleague as a thank you. You can also surprise someone with a flute ringtone as a birthday present, an anniversary gift, a congratulations message, or an apology note. You can use websites or apps that allow you to send ringtones via email, text, or social media to do this, or you can use Bluetooth or Wi-Fi to transfer ringtones directly from your phone. -Conclusion-Flute ringtones are beautiful and versatile sounds that can make your phone more unique and pleasant. They come in various types and styles that suit different tastes and personalities. They can be downloaded for free or for a fee from websites or apps that offer various ringtones. They can be set as your default or contact-specific ringtone easily and quickly. They can also be enjoyed in various ways by choosing the right one for your mood or occasion, mixing and matching them with other sounds or music, and sharing or gifting them to your loved ones or friends. -If you love the sound of a flute, why not try out some flute ringtones for yourself? You might be surprised by how much they can enhance your phone experience and brighten up your day. To find more flute ringtones, you can visit [this website] that has a large collection of flute ringtones that you can download for free. -FAQs-Here are some frequently asked questions about flute ringtones: -What is the best flute ringtone for love?-The best flute ringtone for love depends on your personal preference and the message that you want to convey. However, some general suggestions are romantic flute ringtones that feature soft, sweet, and sentimental melodies, such as Titanic's My Heart Will Go On, The Notebook's Main Theme, or Game of Thrones' The Rains of Castamere. -How can I make my own flute ringtone?-You can make your own flute ringtone by using apps that allow you to create or customize ringtones. Some examples of such apps are Ringtone Maker, Audiko, and MP3 Cutter and Ringtone Maker. These apps let you use your own music files or recordings, or choose from a library of flute sounds and music. You can edit, trim, mix, or add effects to the ringtones. You can also assign different ringtones to different contacts or notifications. -Where can I find more flute ringtones?-You can find more flute ringtones by using websites or apps that offer various ringtones. Some examples of such websites are Zedge, Myxer, and Mobile9. Some examples of such apps are Ringtone Maker, Audiko, and MP3 Cutter and Ringtone Maker. These websites and apps have large collections of flute ringtones that you can browse by category, genre, or popularity. You can also search the web for specific types or styles of flute ringtones that you are interested in. -How can I change my flute ringtone?-You can change your flute ringtone by following the same steps that you used to set it as your default or contact-specific ringtone. Go to your phone's settings and look for the sound or ringtone option. Select the default ringtone option or the contact that you want to customize. Browse through your downloaded flute ringtones and choose the one that you want to use as your new ringtone. Confirm your selection and enjoy your new flute ringtone. -How can I delete my flute ringtone?-You can delete your flute ringtone by going to your phone's file manager or storage app and looking for the folder where your downloaded ringtones are stored. Find the flute ringtone that you want to delete and tap on it. Select the delete option and confirm your action. Alternatively, you can use apps that allow you to manage your ringtones, such as Ringtone Maker, Audiko, or MP3 Cutter and Ringtone Maker. These apps let you view, edit, or delete your ringtones easily and quickly. - |
-
If you are looking for a fun and casual game that lets you take care of a cute alien pet, then you might want to try Pou. Pou is a popular virtual pet game that has millions of fans around the world. In this game, you can feed, clean, play with, and watch your Pou grow up while leveling up and unlocking different wallpapers and outfits.
-Download Zip ✫✫✫ https://gohhs.com/2uPm9c
But what if you want to enjoy the game without spending real money or watching ads? What if you want to unlock all the items and features in the game without waiting for levels or achievements? What if you want to play the game without any limits or interruptions?
-Well, you are in luck, because there is a way to do all that and more. It is called Pou APK Hack Monedas Infinitas 2022. This is a modified version of the original Pou game that gives you unlimited coins and access to all the items and features in the game. With this hack, you can enjoy the game without any restrictions or interruptions.
-In this article, we will tell you everything you need to know about Pou APK Hack Monedas Infinitas 2022. We will explain the features of the Pou game, the benefits of the hack, how to download and install the hack, and some tips and tricks for playing Pou with the hack. By the end of this article, you will be ready to get unlimited coins and enjoy the game like never before.
-Pou is a game that simulates having a virtual pet. You can choose the name, gender, and color of your Pou, and then take care of it as if it were a real pet. Here are some of the features of the Pou game:
-One of the main tasks in the game is to feed and take care of your Pou. You can feed your Pou with different types of food, such as fruits, vegetables, candy, pizza, etc. You can also clean your Pou by taking it to the bathroom, showering it, or brushing its teeth. You can also play with your Pou by tickling it, petting it, or making it laugh. Your Pou will grow and level up as you take care of it.
-Another fun feature of the game is playing games in the game room. There are many mini-games that you can play with your Pou, such as Match Tap Color, Sky Jump, Hill Drive, Connect 2 Pou, etc. These games are not only entertaining but also help you earn coins that you can use to buy items and features in the game.
-pou infinito apk mod dinheiro ilimitado 2022
-pou hackeado apk download grátis monedas infinitas 2022
-pou apk mod monedas infinitas y nivel maximo 2022
-pou dinheiro infinito atualizado 2022 baixar apk
-pou hack apk mediafire monedas infinitas 2022
-pou mod apk unlimited coins and level 2022
-pou apk hack monedas infinitas sin root 2022
-pou infinito 2022 apk atualizado download
-pou hackeado monedas infinitas apk mega 2022
-pou mod apk monedas infinitas y ropa gratis 2022
-pou dinheiro infinito 2022 apk mod hack
-pou hackeado monedas infinitas descargar apk 2022
-pou mod apk unlimited money and potions 2022
-pou apk hack monedas infinitas android 1 2022
-pou infinito 2022 download grátis para celular
-pou hackeado monedas infinitas sin internet 2022
-pou mod apk unlimited coins and max level 2022
-pou apk hack monedas infinitas uptodown 2022
-pou dinheiro infinito 2022 baixar grátis mediafire
-pou hackeado monedas infinitas y diamantes 2022
-pou mod apk unlimited coins and food 2022
-pou apk hack monedas infinitas no ads 2022
-pou dinheiro infinito 2022 baixar pelo google drive
-pou hackeado monedas infinitas y juegos desbloqueados 2022
-pou mod apk unlimited coins and energy 2022
-pou apk hack monedas infinitas offline 2022
-pou dinheiro infinito 2022 baixar pelo mega
-pou hackeado monedas infinitas y sombreros gratis 2022
-pou mod apk unlimited coins and skins 2022
-pou apk hack monedas infinitas online 2022
-pou dinheiro infinito 2022 baixar pelo mediafire atualizado
-pou hackeado monedas infinitas y trucos secretos 2022
-pou mod apk unlimited coins and stars 2022
-pou apk hack monedas infinitas para pc 2022
-pou dinheiro infinito 2022 baixar pelo play store
-pou hackeado monedas infinitas y mascotas gratis 2022
-pou mod apk unlimited coins and potions unlocked 2022
-pou apk hack monedas infinitas para ios 2022
-pou dinheiro infinito 2022 baixar pelo aptoide
-pou hackeado monedas infinitas y todos los niveles 2022
If you want to change your Pou's appearance and abilities, you can experiment with potions in the lab. There are many potions that you can use on your Pou, such as Fat Burner, Energy Drink, Baby Potion, Adult Potion, etc. These potions can make your Pou bigger or smaller, faster or slower, younger or older, etc. Some potions have temporary effects while others have permanent effects.
-You can also customize your Pou's appearance and rooms according to your preference. You can dress up your Pou with different outfits, hats, eyeglasses, etc. You can also decorate your Pou's rooms with different wallpapers, floors, furniture, etc. There are many options to choose from and you can mix and match them as you like.
-As you play the game, you can unlock achievements and special items that will make your game more fun and rewarding. You can unlock achievements by completing certain tasks or reaching certain milestones in the game. You can also unlock special items by collecting stars or finding hidden objects in the game. These items include coins, potions, clothes, etc.
-You can also visit and play with friends who also have Pous. You can connect with other players through Facebook or other platforms and visit their Pous. You can chat with them, play games with them, or exchange gifts with them. You can also see their Pous' appearance and rooms and compare them with yours.
-Pou APK Hack Monedas Infinitas 2022 is a modified version of the original Pou game that gives you unlimited coins and access to all the items and features in the game. With this hack, you can enjoy the game without any restrictions or interruptions. Here are some of the benefits of using this hack:
-One of the main benefits of using this hack is that you get unlimited coins for free. Coins are the currency in the game that you need to buy items and features in the game. Normally, you have to earn coins by playing games or watching ads in the game. But with this hack, you get unlimited coins without spending real money or watching ads. You can use these coins to buy anything you want in the game.
-Another benefit of using this hack is that you unlock all items and features in the game. Normally, you have to wait for levels or achievements to unlock certain items and features in the game. But with this hack, you unlock all items and features from the start. You can access all the outfits, wallpapers, potions, games, etc. in the game without any restrictions.
-A final benefit of using this hack is that you enjoy the game without any restrictions or interruptions. Normally, you have to deal with limits or pop-ups in the game that can affect your gameplay. For example, you have to wait for your Pou's energy to refill, watch ads to get coins or items, or pay real money to get premium features. But with this hack, you don't have to worry about any of that. You can play the game as much as you want, without any ads or payments.
-If you are interested in using Pou APK Hack Monedas Infinitas 2022, you need to download and install it on your device. Here are the steps that you need to follow:
-The first step is to uninstall the original version of Pou from your device if you have it. This is because the hack version will not work if you have the original version installed. To uninstall the original version, go to your device settings, find the app manager, select Pou, and tap on uninstall.
-The next step is to download the hack APK file from a trusted source. An APK file is a file format that allows you to install apps on your device that are not available on the official app store. However, not all APK files are safe or reliable, so you need to be careful where you download them from. To download the hack APK file, you can use this link: Pou APK Hack Monedas Infinitas 2022 Download. This link will take you to a website where you can download the hack APK file safely and securely.
-The third step is to enable unknown sources on your device settings. This is because your device will not allow you to install apps from unknown sources by default, for security reasons. To enable unknown sources, go to your device settings, find the security option, and toggle on the unknown sources option.
-The fourth step is to install the hack APK file on your device. To do this, locate the hack APK file that you downloaded in step 2, and tap on it. You will see a pop-up window asking you to confirm the installation. Tap on install and wait for the process to finish.
-The final step is to launch the game and enjoy the hack. To do this, find the Pou icon on your device screen and tap on it. You will see a new screen with the hack logo and features. Tap on start and enjoy the game with unlimited coins and access to all items and features.
-Now that you have downloaded and installed Pou APK Hack Monedas Infinitas 2022, you can start playing the game with more fun and excitement. Here are some tips and tricks that will help you make the most of your gameplay:
-Potions are one of the most interesting features of the game, as they can change your Pou's appearance and abilities. However, they can also have some side effects or consequences that you need to be aware of. For example, some potions can make your Pou sick or unhappy, while others can make it harder to feed or clean your Pou. Therefore, use potions wisely and sparingly, and always check their effects before using them.
-Playing games in the game room is one of the best ways to earn coins in the game. However, not all games are equal in terms of difficulty or reward. Some games are easier or more fun than others, while some games give more coins than others. Therefore, play different games to find out which ones suit your preference and skill level, and which ones give more coins. You can also use potions or items to boost your performance or score in some games.
-Customizing your Pou according to your preference is one of the most fun and creative features of the game. You can dress up your Pou with different outfits, hats, eyeglasses, etc. You can also decorate your Pou's rooms with different wallpapers, floors, furniture, etc. There are many options to choose from and you can mix and match them as you like. However, you should also consider your Pou's mood and personality when customizing it. For example, some Pous may prefer certain colors or styles over others, while some Pous may have different reactions to certain items or decorations. Therefore, customize your Pou according to your preference, but also pay attention to your Pou's feedback and expression.
-Sharing your Pou with your friends and family is one of the most social and interactive features of the game. You can connect with other players through Facebook or other platforms and visit their Pous. You can chat with them, play games with them, or exchange gifts with them. You can also see their Pous' appearance and rooms and compare them with yours. Sharing your Pou with your friends and family can make your game more fun and engaging, as you can learn from each other, compete with each other, or cooperate with each other. However, you should also respect your friends and family's privacy and preferences when sharing your Pou with them. For example, some players may not want to share their Pou's name or gender, while some players may not want to receive certain gifts or messages. Therefore, share your Pou with your friends and family, but also be polite and considerate of their feelings and choices.
-Keeping your Pou happy and healthy is one of the most important and rewarding features of the game. You can keep your Pou happy and healthy by feeding it, cleaning it, playing with it, sleeping with it, etc. Your Pou will show its happiness and health by its mood, expression, color, etc. Keeping your Pou happy and healthy can make your game more enjoyable and satisfying, as you can see your Pou grow up and level up. However, you should also balance your Pou's needs and wants when keeping it happy and healthy. For example, some Pous may want more food or games than others, while some Pous may need more sleep or potions than others. Therefore, keep your Pou happy and healthy, but also be attentive and responsive to your Pou's signals and requests.
-Pou is a fun and casual game that lets you take care of a cute alien pet. You can feed, clean, play with, and watch your Pou grow up while leveling up and unlocking different wallpapers and outfits. However, if you want to enjoy the game without spending real money or watching ads, if you want to unlock all the items and features in the game without waiting for levels or achievements, if you want to play the game without any limits or interruptions, then you should try Pou APK Hack Monedas Infinitas 2022. This is a modified version of the original Pou game that gives you unlimited coins and access to all the items and features in the game.
-In this article, we have told you everything you need to know about Pou APK Hack Monedas Infinitas 2022. We have explained the features of the Pou game, the benefits of the hack, how to download and install the hack, and some tips and tricks for playing Pou with the hack. By following these steps and tips, you will be ready to get unlimited coins and enjoy the game like never before. We hope you have found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy gaming!
-Here are some of the frequently asked questions about Pou APK Hack Monedas Infinitas 2022:
-Yes, Pou APK Hack Monedas Infinitas 2022 is safe to use, as long as you download it from a trusted source and follow the installation steps correctly. However, you should always be careful when downloading and installing any APK file, as some of them may contain viruses or malware that can harm your device or data. Therefore, you should always scan the APK file with an antivirus software before installing it, and backup your data before using it.
-No, Pou APK Hack Monedas Infinitas 2022 is not legal to use, as it violates the terms and conditions of the original Pou game. By using this hack, you are modifying the game's code and data, which is considered as cheating and piracy. This can result in legal actions or penalties from the game developers or authorities. Therefore, you should use this hack at your own risk and discretion, and respect the rights and property of the game creators and owners.
-Pou APK Hack Monedas Infinitas 2022 will work on most devices that support Android operating system. However, some devices may not be compatible with the hack due to different specifications or settings. Therefore, you should check the requirements and compatibility of the hack before downloading and installing it. You should also make sure that your device has enough storage space and battery life to run the hack smoothly.
-No, you cannot update Pou APK Hack Monedas Infinitas 2022, as it is a modified version of the original Pou game. If you try to update the hack, you will lose all the hack features and revert back to the original version of the game. Therefore, you should avoid updating the hack, and enjoy it as it is.
-Yes, you can use Pou APK Hack Monedas Infinitas 2022 offline, as it does not require an internet connection to run. However, some features of the game may not work properly offline, such as visiting and playing with friends, sharing your Pou on social media, or accessing some online content or services. Therefore, you should use the hack online whenever possible, to enjoy all the features of the game.
197e85843d
- Turn any image into a video !
- To use this demo, simply upload an image and hit the Submit button.
- Don't forget to share your results with the Community ;)
-
{utils.format_directory(OUTPUT_DIR)}
- """, every=3, elem_id="files"
- )
- download_btn = gr.Button("Download All Files")
-
- chat_history = gr.State([[None, None]])
- api = gr.State(None)
-
- def start(open_ai_key, ai_name, ai_role, top_5_goals):
- auto_api = AutoAPI(open_ai_key, ai_name, ai_role, top_5_goals)
- return gr.Column.update(visible=False), gr.Column.update(visible=True), auto_api
-
- def bot_response(chat, api):
- messages = []
- for message in api.get_chatbot_response():
- messages.append(message)
- chat[-1][1] = "\n".join(messages) + "..."
- yield chat
- chat[-1][1] = "\n".join(messages)
- yield chat
-
- def send_message(count, chat, api, message="Y"):
- if message != "Y":
- count = 1
- for i in range(count):
- chat.append([message, None])
- yield chat, count - i
- api.send_message(message)
- for updated_chat in bot_response(chat, api):
- yield updated_chat, count - i
-
- def activate_inputs():
- return {
- yes_btn: gr.Button.update(interactive=True),
- consecutive_yes: gr.Slider.update(interactive=True),
- custom_response: gr.Textbox.update(interactive=True),
- }
-
- def deactivate_inputs():
- return {
- yes_btn: gr.Button.update(interactive=False),
- consecutive_yes: gr.Slider.update(interactive=False),
- custom_response: gr.Textbox.update(interactive=False),
- }
-
- start_btn.click(
- start,
- [open_ai_key, ai_name, ai_role, top_5_goals],
- [setup_pane, main_pane, api],
- ).then(bot_response, [chat_history, api], chatbot).then(
- activate_inputs, None, [yes_btn, consecutive_yes, custom_response]
- )
-
- yes_btn.click(
- deactivate_inputs, None, [yes_btn, consecutive_yes, custom_response]
- ).then(
- send_message, [consecutive_yes, chat_history, api], [chatbot, consecutive_yes]
- ).then(
- activate_inputs, None, [yes_btn, consecutive_yes, custom_response]
- )
- custom_response.submit(
- deactivate_inputs, None, [yes_btn, consecutive_yes, custom_response]
- ).then(
- send_message,
- [consecutive_yes, chat_history, api, custom_response],
- [chatbot, consecutive_yes],
- ).then(
- activate_inputs, None, [yes_btn, consecutive_yes, custom_response]
- )
-
- def download_all_files():
- shutil.make_archive("outputs", "zip", OUTPUT_DIR)
-
- download_btn.click(download_all_files).then(None, _js=utils.DOWNLOAD_OUTPUTS_JS)
-
-app.queue(concurrency_count=20).launch(file_directories=[OUTPUT_DIR])
diff --git a/spaces/georgefen/Face-Landmark-ControlNet/ldm/models/diffusion/ddpm.py b/spaces/georgefen/Face-Landmark-ControlNet/ldm/models/diffusion/ddpm.py
deleted file mode 100644
index f71a44af48c8cba8e97849b7e6813b3e6f9fe83c..0000000000000000000000000000000000000000
--- a/spaces/georgefen/Face-Landmark-ControlNet/ldm/models/diffusion/ddpm.py
+++ /dev/null
@@ -1,1797 +0,0 @@
-"""
-wild mixture of
-https://github.com/lucidrains/denoising-diffusion-pytorch/blob/7706bdfc6f527f58d33f84b7b522e61e6e3164b3/denoising_diffusion_pytorch/denoising_diffusion_pytorch.py
-https://github.com/openai/improved-diffusion/blob/e94489283bb876ac1477d5dd7709bbbd2d9902ce/improved_diffusion/gaussian_diffusion.py
-https://github.com/CompVis/taming-transformers
--- merci
-"""
-
-import torch
-import torch.nn as nn
-import numpy as np
-import pytorch_lightning as pl
-from torch.optim.lr_scheduler import LambdaLR
-from einops import rearrange, repeat
-from contextlib import contextmanager, nullcontext
-from functools import partial
-import itertools
-from tqdm import tqdm
-from torchvision.utils import make_grid
-from pytorch_lightning.utilities.distributed import rank_zero_only
-from omegaconf import ListConfig
-
-from ldm.util import log_txt_as_img, exists, default, ismap, isimage, mean_flat, count_params, instantiate_from_config
-from ldm.modules.ema import LitEma
-from ldm.modules.distributions.distributions import normal_kl, DiagonalGaussianDistribution
-from ldm.models.autoencoder import IdentityFirstStage, AutoencoderKL
-from ldm.modules.diffusionmodules.util import make_beta_schedule, extract_into_tensor, noise_like
-from ldm.models.diffusion.ddim import DDIMSampler
-
-
-__conditioning_keys__ = {'concat': 'c_concat',
- 'crossattn': 'c_crossattn',
- 'adm': 'y'}
-
-
-def disabled_train(self, mode=True):
- """Overwrite model.train with this function to make sure train/eval mode
- does not change anymore."""
- return self
-
-
-def uniform_on_device(r1, r2, shape, device):
- return (r1 - r2) * torch.rand(*shape, device=device) + r2
-
-
-class DDPM(pl.LightningModule):
- # classic DDPM with Gaussian diffusion, in image space
- def __init__(self,
- unet_config,
- timesteps=1000,
- beta_schedule="linear",
- loss_type="l2",
- ckpt_path=None,
- ignore_keys=[],
- load_only_unet=False,
- monitor="val/loss",
- use_ema=True,
- first_stage_key="image",
- image_size=256,
- channels=3,
- log_every_t=100,
- clip_denoised=True,
- linear_start=1e-4,
- linear_end=2e-2,
- cosine_s=8e-3,
- given_betas=None,
- original_elbo_weight=0.,
- v_posterior=0., # weight for choosing posterior variance as sigma = (1-v) * beta_tilde + v * beta
- l_simple_weight=1.,
- conditioning_key=None,
- parameterization="eps", # all assuming fixed variance schedules
- scheduler_config=None,
- use_positional_encodings=False,
- learn_logvar=False,
- logvar_init=0.,
- make_it_fit=False,
- ucg_training=None,
- reset_ema=False,
- reset_num_ema_updates=False,
- ):
- super().__init__()
- assert parameterization in ["eps", "x0", "v"], 'currently only supporting "eps" and "x0" and "v"'
- self.parameterization = parameterization
- print(f"{self.__class__.__name__}: Running in {self.parameterization}-prediction mode")
- self.cond_stage_model = None
- self.clip_denoised = clip_denoised
- self.log_every_t = log_every_t
- self.first_stage_key = first_stage_key
- self.image_size = image_size # try conv?
- self.channels = channels
- self.use_positional_encodings = use_positional_encodings
- self.model = DiffusionWrapper(unet_config, conditioning_key)
- count_params(self.model, verbose=True)
- self.use_ema = use_ema
- if self.use_ema:
- self.model_ema = LitEma(self.model)
- print(f"Keeping EMAs of {len(list(self.model_ema.buffers()))}.")
-
- self.use_scheduler = scheduler_config is not None
- if self.use_scheduler:
- self.scheduler_config = scheduler_config
-
- self.v_posterior = v_posterior
- self.original_elbo_weight = original_elbo_weight
- self.l_simple_weight = l_simple_weight
-
- if monitor is not None:
- self.monitor = monitor
- self.make_it_fit = make_it_fit
- if reset_ema: assert exists(ckpt_path)
- if ckpt_path is not None:
- self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys, only_model=load_only_unet)
- if reset_ema:
- assert self.use_ema
- print(f"Resetting ema to pure model weights. This is useful when restoring from an ema-only checkpoint.")
- self.model_ema = LitEma(self.model)
- if reset_num_ema_updates:
- print(" +++++++++++ WARNING: RESETTING NUM_EMA UPDATES TO ZERO +++++++++++ ")
- assert self.use_ema
- self.model_ema.reset_num_updates()
-
- self.register_schedule(given_betas=given_betas, beta_schedule=beta_schedule, timesteps=timesteps,
- linear_start=linear_start, linear_end=linear_end, cosine_s=cosine_s)
-
- self.loss_type = loss_type
-
- self.learn_logvar = learn_logvar
- logvar = torch.full(fill_value=logvar_init, size=(self.num_timesteps,))
- if self.learn_logvar:
- self.logvar = nn.Parameter(self.logvar, requires_grad=True)
- else:
- self.register_buffer('logvar', logvar)
-
- self.ucg_training = ucg_training or dict()
- if self.ucg_training:
- self.ucg_prng = np.random.RandomState()
-
- def register_schedule(self, given_betas=None, beta_schedule="linear", timesteps=1000,
- linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3):
- if exists(given_betas):
- betas = given_betas
- else:
- betas = make_beta_schedule(beta_schedule, timesteps, linear_start=linear_start, linear_end=linear_end,
- cosine_s=cosine_s)
- alphas = 1. - betas
- alphas_cumprod = np.cumprod(alphas, axis=0)
- alphas_cumprod_prev = np.append(1., alphas_cumprod[:-1])
-
- timesteps, = betas.shape
- self.num_timesteps = int(timesteps)
- self.linear_start = linear_start
- self.linear_end = linear_end
- assert alphas_cumprod.shape[0] == self.num_timesteps, 'alphas have to be defined for each timestep'
-
- to_torch = partial(torch.tensor, dtype=torch.float32)
-
- self.register_buffer('betas', to_torch(betas))
- self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod))
- self.register_buffer('alphas_cumprod_prev', to_torch(alphas_cumprod_prev))
-
- # calculations for diffusion q(x_t | x_{t-1}) and others
- self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod)))
- self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod)))
- self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod)))
- self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod)))
- self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod - 1)))
-
- # calculations for posterior q(x_{t-1} | x_t, x_0)
- posterior_variance = (1 - self.v_posterior) * betas * (1. - alphas_cumprod_prev) / (
- 1. - alphas_cumprod) + self.v_posterior * betas
- # above: equal to 1. / (1. / (1. - alpha_cumprod_tm1) + alpha_t / beta_t)
- self.register_buffer('posterior_variance', to_torch(posterior_variance))
- # below: log calculation clipped because the posterior variance is 0 at the beginning of the diffusion chain
- self.register_buffer('posterior_log_variance_clipped', to_torch(np.log(np.maximum(posterior_variance, 1e-20))))
- self.register_buffer('posterior_mean_coef1', to_torch(
- betas * np.sqrt(alphas_cumprod_prev) / (1. - alphas_cumprod)))
- self.register_buffer('posterior_mean_coef2', to_torch(
- (1. - alphas_cumprod_prev) * np.sqrt(alphas) / (1. - alphas_cumprod)))
-
- if self.parameterization == "eps":
- lvlb_weights = self.betas ** 2 / (
- 2 * self.posterior_variance * to_torch(alphas) * (1 - self.alphas_cumprod))
- elif self.parameterization == "x0":
- lvlb_weights = 0.5 * np.sqrt(torch.Tensor(alphas_cumprod)) / (2. * 1 - torch.Tensor(alphas_cumprod))
- elif self.parameterization == "v":
- lvlb_weights = torch.ones_like(self.betas ** 2 / (
- 2 * self.posterior_variance * to_torch(alphas) * (1 - self.alphas_cumprod)))
- else:
- raise NotImplementedError("mu not supported")
- lvlb_weights[0] = lvlb_weights[1]
- self.register_buffer('lvlb_weights', lvlb_weights, persistent=False)
- assert not torch.isnan(self.lvlb_weights).all()
-
- @contextmanager
- def ema_scope(self, context=None):
- if self.use_ema:
- self.model_ema.store(self.model.parameters())
- self.model_ema.copy_to(self.model)
- if context is not None:
- print(f"{context}: Switched to EMA weights")
- try:
- yield None
- finally:
- if self.use_ema:
- self.model_ema.restore(self.model.parameters())
- if context is not None:
- print(f"{context}: Restored training weights")
-
- @torch.no_grad()
- def init_from_ckpt(self, path, ignore_keys=list(), only_model=False):
- sd = torch.load(path, map_location="cpu")
- if "state_dict" in list(sd.keys()):
- sd = sd["state_dict"]
- keys = list(sd.keys())
- for k in keys:
- for ik in ignore_keys:
- if k.startswith(ik):
- print("Deleting key {} from state_dict.".format(k))
- del sd[k]
- if self.make_it_fit:
- n_params = len([name for name, _ in
- itertools.chain(self.named_parameters(),
- self.named_buffers())])
- for name, param in tqdm(
- itertools.chain(self.named_parameters(),
- self.named_buffers()),
- desc="Fitting old weights to new weights",
- total=n_params
- ):
- if not name in sd:
- continue
- old_shape = sd[name].shape
- new_shape = param.shape
- assert len(old_shape) == len(new_shape)
- if len(new_shape) > 2:
- # we only modify first two axes
- assert new_shape[2:] == old_shape[2:]
- # assumes first axis corresponds to output dim
- if not new_shape == old_shape:
- new_param = param.clone()
- old_param = sd[name]
- if len(new_shape) == 1:
- for i in range(new_param.shape[0]):
- new_param[i] = old_param[i % old_shape[0]]
- elif len(new_shape) >= 2:
- for i in range(new_param.shape[0]):
- for j in range(new_param.shape[1]):
- new_param[i, j] = old_param[i % old_shape[0], j % old_shape[1]]
-
- n_used_old = torch.ones(old_shape[1])
- for j in range(new_param.shape[1]):
- n_used_old[j % old_shape[1]] += 1
- n_used_new = torch.zeros(new_shape[1])
- for j in range(new_param.shape[1]):
- n_used_new[j] = n_used_old[j % old_shape[1]]
-
- n_used_new = n_used_new[None, :]
- while len(n_used_new.shape) < len(new_shape):
- n_used_new = n_used_new.unsqueeze(-1)
- new_param /= n_used_new
-
- sd[name] = new_param
-
- missing, unexpected = self.load_state_dict(sd, strict=False) if not only_model else self.model.load_state_dict(
- sd, strict=False)
- print(f"Restored from {path} with {len(missing)} missing and {len(unexpected)} unexpected keys")
- if len(missing) > 0:
- print(f"Missing Keys:\n {missing}")
- if len(unexpected) > 0:
- print(f"\nUnexpected Keys:\n {unexpected}")
-
- def q_mean_variance(self, x_start, t):
- """
- Get the distribution q(x_t | x_0).
- :param x_start: the [N x C x ...] tensor of noiseless inputs.
- :param t: the number of diffusion steps (minus 1). Here, 0 means one step.
- :return: A tuple (mean, variance, log_variance), all of x_start's shape.
- """
- mean = (extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start)
- variance = extract_into_tensor(1.0 - self.alphas_cumprod, t, x_start.shape)
- log_variance = extract_into_tensor(self.log_one_minus_alphas_cumprod, t, x_start.shape)
- return mean, variance, log_variance
-
- def predict_start_from_noise(self, x_t, t, noise):
- return (
- extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t -
- extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) * noise
- )
-
- def predict_start_from_z_and_v(self, x_t, t, v):
- # self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod)))
- # self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod)))
- return (
- extract_into_tensor(self.sqrt_alphas_cumprod, t, x_t.shape) * x_t -
- extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_t.shape) * v
- )
-
- def predict_eps_from_z_and_v(self, x_t, t, v):
- return (
- extract_into_tensor(self.sqrt_alphas_cumprod, t, x_t.shape) * v +
- extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_t.shape) * x_t
- )
-
- def q_posterior(self, x_start, x_t, t):
- posterior_mean = (
- extract_into_tensor(self.posterior_mean_coef1, t, x_t.shape) * x_start +
- extract_into_tensor(self.posterior_mean_coef2, t, x_t.shape) * x_t
- )
- posterior_variance = extract_into_tensor(self.posterior_variance, t, x_t.shape)
- posterior_log_variance_clipped = extract_into_tensor(self.posterior_log_variance_clipped, t, x_t.shape)
- return posterior_mean, posterior_variance, posterior_log_variance_clipped
-
- def p_mean_variance(self, x, t, clip_denoised: bool):
- model_out = self.model(x, t)
- if self.parameterization == "eps":
- x_recon = self.predict_start_from_noise(x, t=t, noise=model_out)
- elif self.parameterization == "x0":
- x_recon = model_out
- if clip_denoised:
- x_recon.clamp_(-1., 1.)
-
- model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t)
- return model_mean, posterior_variance, posterior_log_variance
-
- @torch.no_grad()
- def p_sample(self, x, t, clip_denoised=True, repeat_noise=False):
- b, *_, device = *x.shape, x.device
- model_mean, _, model_log_variance = self.p_mean_variance(x=x, t=t, clip_denoised=clip_denoised)
- noise = noise_like(x.shape, device, repeat_noise)
- # no noise when t == 0
- nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1)))
- return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise
-
- @torch.no_grad()
- def p_sample_loop(self, shape, return_intermediates=False):
- device = self.betas.device
- b = shape[0]
- img = torch.randn(shape, device=device)
- intermediates = [img]
- for i in tqdm(reversed(range(0, self.num_timesteps)), desc='Sampling t', total=self.num_timesteps):
- img = self.p_sample(img, torch.full((b,), i, device=device, dtype=torch.long),
- clip_denoised=self.clip_denoised)
- if i % self.log_every_t == 0 or i == self.num_timesteps - 1:
- intermediates.append(img)
- if return_intermediates:
- return img, intermediates
- return img
-
- @torch.no_grad()
- def sample(self, batch_size=16, return_intermediates=False):
- image_size = self.image_size
- channels = self.channels
- return self.p_sample_loop((batch_size, channels, image_size, image_size),
- return_intermediates=return_intermediates)
-
- def q_sample(self, x_start, t, noise=None):
- noise = default(noise, lambda: torch.randn_like(x_start))
- return (extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start +
- extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) * noise)
-
- def get_v(self, x, noise, t):
- return (
- extract_into_tensor(self.sqrt_alphas_cumprod, t, x.shape) * noise -
- extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x.shape) * x
- )
-
- def get_loss(self, pred, target, mean=True):
- if self.loss_type == 'l1':
- loss = (target - pred).abs()
- if mean:
- loss = loss.mean()
- elif self.loss_type == 'l2':
- if mean:
- loss = torch.nn.functional.mse_loss(target, pred)
- else:
- loss = torch.nn.functional.mse_loss(target, pred, reduction='none')
- else:
- raise NotImplementedError("unknown loss type '{loss_type}'")
-
- return loss
-
- def p_losses(self, x_start, t, noise=None):
- noise = default(noise, lambda: torch.randn_like(x_start))
- x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise)
- model_out = self.model(x_noisy, t)
-
- loss_dict = {}
- if self.parameterization == "eps":
- target = noise
- elif self.parameterization == "x0":
- target = x_start
- elif self.parameterization == "v":
- target = self.get_v(x_start, noise, t)
- else:
- raise NotImplementedError(f"Parameterization {self.parameterization} not yet supported")
-
- loss = self.get_loss(model_out, target, mean=False).mean(dim=[1, 2, 3])
-
- log_prefix = 'train' if self.training else 'val'
-
- loss_dict.update({f'{log_prefix}/loss_simple': loss.mean()})
- loss_simple = loss.mean() * self.l_simple_weight
-
- loss_vlb = (self.lvlb_weights[t] * loss).mean()
- loss_dict.update({f'{log_prefix}/loss_vlb': loss_vlb})
-
- loss = loss_simple + self.original_elbo_weight * loss_vlb
-
- loss_dict.update({f'{log_prefix}/loss': loss})
-
- return loss, loss_dict
-
- def forward(self, x, *args, **kwargs):
- # b, c, h, w, device, img_size, = *x.shape, x.device, self.image_size
- # assert h == img_size and w == img_size, f'height and width of image must be {img_size}'
- t = torch.randint(0, self.num_timesteps, (x.shape[0],), device=self.device).long()
- return self.p_losses(x, t, *args, **kwargs)
-
- def get_input(self, batch, k):
- x = batch[k]
- if len(x.shape) == 3:
- x = x[..., None]
- x = rearrange(x, 'b h w c -> b c h w')
- x = x.to(memory_format=torch.contiguous_format).float()
- return x
-
- def shared_step(self, batch):
- x = self.get_input(batch, self.first_stage_key)
- loss, loss_dict = self(x)
- return loss, loss_dict
-
- def training_step(self, batch, batch_idx):
- for k in self.ucg_training:
- p = self.ucg_training[k]["p"]
- val = self.ucg_training[k]["val"]
- if val is None:
- val = ""
- for i in range(len(batch[k])):
- if self.ucg_prng.choice(2, p=[1 - p, p]):
- batch[k][i] = val
-
- loss, loss_dict = self.shared_step(batch)
-
- self.log_dict(loss_dict, prog_bar=True,
- logger=True, on_step=True, on_epoch=True)
-
- self.log("global_step", self.global_step,
- prog_bar=True, logger=True, on_step=True, on_epoch=False)
-
- if self.use_scheduler:
- lr = self.optimizers().param_groups[0]['lr']
- self.log('lr_abs', lr, prog_bar=True, logger=True, on_step=True, on_epoch=False)
-
- return loss
-
- @torch.no_grad()
- def validation_step(self, batch, batch_idx):
- _, loss_dict_no_ema = self.shared_step(batch)
- with self.ema_scope():
- _, loss_dict_ema = self.shared_step(batch)
- loss_dict_ema = {key + '_ema': loss_dict_ema[key] for key in loss_dict_ema}
- self.log_dict(loss_dict_no_ema, prog_bar=False, logger=True, on_step=False, on_epoch=True)
- self.log_dict(loss_dict_ema, prog_bar=False, logger=True, on_step=False, on_epoch=True)
-
- def on_train_batch_end(self, *args, **kwargs):
- if self.use_ema:
- self.model_ema(self.model)
-
- def _get_rows_from_list(self, samples):
- n_imgs_per_row = len(samples)
- denoise_grid = rearrange(samples, 'n b c h w -> b n c h w')
- denoise_grid = rearrange(denoise_grid, 'b n c h w -> (b n) c h w')
- denoise_grid = make_grid(denoise_grid, nrow=n_imgs_per_row)
- return denoise_grid
-
- @torch.no_grad()
- def log_images(self, batch, N=8, n_row=2, sample=True, return_keys=None, **kwargs):
- log = dict()
- x = self.get_input(batch, self.first_stage_key)
- N = min(x.shape[0], N)
- n_row = min(x.shape[0], n_row)
- x = x.to(self.device)[:N]
- log["inputs"] = x
-
- # get diffusion row
- diffusion_row = list()
- x_start = x[:n_row]
-
- for t in range(self.num_timesteps):
- if t % self.log_every_t == 0 or t == self.num_timesteps - 1:
- t = repeat(torch.tensor([t]), '1 -> b', b=n_row)
- t = t.to(self.device).long()
- noise = torch.randn_like(x_start)
- x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise)
- diffusion_row.append(x_noisy)
-
- log["diffusion_row"] = self._get_rows_from_list(diffusion_row)
-
- if sample:
- # get denoise row
- with self.ema_scope("Plotting"):
- samples, denoise_row = self.sample(batch_size=N, return_intermediates=True)
-
- log["samples"] = samples
- log["denoise_row"] = self._get_rows_from_list(denoise_row)
-
- if return_keys:
- if np.intersect1d(list(log.keys()), return_keys).shape[0] == 0:
- return log
- else:
- return {key: log[key] for key in return_keys}
- return log
-
- def configure_optimizers(self):
- lr = self.learning_rate
- params = list(self.model.parameters())
- if self.learn_logvar:
- params = params + [self.logvar]
- opt = torch.optim.AdamW(params, lr=lr)
- return opt
-
-
-class LatentDiffusion(DDPM):
- """main class"""
-
- def __init__(self,
- first_stage_config,
- cond_stage_config,
- num_timesteps_cond=None,
- cond_stage_key="image",
- cond_stage_trainable=False,
- concat_mode=True,
- cond_stage_forward=None,
- conditioning_key=None,
- scale_factor=1.0,
- scale_by_std=False,
- force_null_conditioning=False,
- *args, **kwargs):
- self.force_null_conditioning = force_null_conditioning
- self.num_timesteps_cond = default(num_timesteps_cond, 1)
- self.scale_by_std = scale_by_std
- assert self.num_timesteps_cond <= kwargs['timesteps']
- # for backwards compatibility after implementation of DiffusionWrapper
- if conditioning_key is None:
- conditioning_key = 'concat' if concat_mode else 'crossattn'
- if cond_stage_config == '__is_unconditional__' and not self.force_null_conditioning:
- conditioning_key = None
- ckpt_path = kwargs.pop("ckpt_path", None)
- reset_ema = kwargs.pop("reset_ema", False)
- reset_num_ema_updates = kwargs.pop("reset_num_ema_updates", False)
- ignore_keys = kwargs.pop("ignore_keys", [])
- super().__init__(conditioning_key=conditioning_key, *args, **kwargs)
- self.concat_mode = concat_mode
- self.cond_stage_trainable = cond_stage_trainable
- self.cond_stage_key = cond_stage_key
- try:
- self.num_downs = len(first_stage_config.params.ddconfig.ch_mult) - 1
- except:
- self.num_downs = 0
- if not scale_by_std:
- self.scale_factor = scale_factor
- else:
- self.register_buffer('scale_factor', torch.tensor(scale_factor))
- self.instantiate_first_stage(first_stage_config)
- self.instantiate_cond_stage(cond_stage_config)
- self.cond_stage_forward = cond_stage_forward
- self.clip_denoised = False
- self.bbox_tokenizer = None
-
- self.restarted_from_ckpt = False
- if ckpt_path is not None:
- self.init_from_ckpt(ckpt_path, ignore_keys)
- self.restarted_from_ckpt = True
- if reset_ema:
- assert self.use_ema
- print(
- f"Resetting ema to pure model weights. This is useful when restoring from an ema-only checkpoint.")
- self.model_ema = LitEma(self.model)
- if reset_num_ema_updates:
- print(" +++++++++++ WARNING: RESETTING NUM_EMA UPDATES TO ZERO +++++++++++ ")
- assert self.use_ema
- self.model_ema.reset_num_updates()
-
- def make_cond_schedule(self, ):
- self.cond_ids = torch.full(size=(self.num_timesteps,), fill_value=self.num_timesteps - 1, dtype=torch.long)
- ids = torch.round(torch.linspace(0, self.num_timesteps - 1, self.num_timesteps_cond)).long()
- self.cond_ids[:self.num_timesteps_cond] = ids
-
- @rank_zero_only
- @torch.no_grad()
- def on_train_batch_start(self, batch, batch_idx, dataloader_idx):
- # only for very first batch
- if self.scale_by_std and self.current_epoch == 0 and self.global_step == 0 and batch_idx == 0 and not self.restarted_from_ckpt:
- assert self.scale_factor == 1., 'rather not use custom rescaling and std-rescaling simultaneously'
- # set rescale weight to 1./std of encodings
- print("### USING STD-RESCALING ###")
- x = super().get_input(batch, self.first_stage_key)
- x = x.to(self.device)
- encoder_posterior = self.encode_first_stage(x)
- z = self.get_first_stage_encoding(encoder_posterior).detach()
- del self.scale_factor
- self.register_buffer('scale_factor', 1. / z.flatten().std())
- print(f"setting self.scale_factor to {self.scale_factor}")
- print("### USING STD-RESCALING ###")
-
- def register_schedule(self,
- given_betas=None, beta_schedule="linear", timesteps=1000,
- linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3):
- super().register_schedule(given_betas, beta_schedule, timesteps, linear_start, linear_end, cosine_s)
-
- self.shorten_cond_schedule = self.num_timesteps_cond > 1
- if self.shorten_cond_schedule:
- self.make_cond_schedule()
-
- def instantiate_first_stage(self, config):
- model = instantiate_from_config(config)
- self.first_stage_model = model.eval()
- self.first_stage_model.train = disabled_train
- for param in self.first_stage_model.parameters():
- param.requires_grad = False
-
- def instantiate_cond_stage(self, config):
- if not self.cond_stage_trainable:
- if config == "__is_first_stage__":
- print("Using first stage also as cond stage.")
- self.cond_stage_model = self.first_stage_model
- elif config == "__is_unconditional__":
- print(f"Training {self.__class__.__name__} as an unconditional model.")
- self.cond_stage_model = None
- # self.be_unconditional = True
- else:
- model = instantiate_from_config(config)
- self.cond_stage_model = model.eval()
- self.cond_stage_model.train = disabled_train
- for param in self.cond_stage_model.parameters():
- param.requires_grad = False
- else:
- assert config != '__is_first_stage__'
- assert config != '__is_unconditional__'
- model = instantiate_from_config(config)
- self.cond_stage_model = model
-
- def _get_denoise_row_from_list(self, samples, desc='', force_no_decoder_quantization=False):
- denoise_row = []
- for zd in tqdm(samples, desc=desc):
- denoise_row.append(self.decode_first_stage(zd.to(self.device),
- force_not_quantize=force_no_decoder_quantization))
- n_imgs_per_row = len(denoise_row)
- denoise_row = torch.stack(denoise_row) # n_log_step, n_row, C, H, W
- denoise_grid = rearrange(denoise_row, 'n b c h w -> b n c h w')
- denoise_grid = rearrange(denoise_grid, 'b n c h w -> (b n) c h w')
- denoise_grid = make_grid(denoise_grid, nrow=n_imgs_per_row)
- return denoise_grid
-
- def get_first_stage_encoding(self, encoder_posterior):
- if isinstance(encoder_posterior, DiagonalGaussianDistribution):
- z = encoder_posterior.sample()
- elif isinstance(encoder_posterior, torch.Tensor):
- z = encoder_posterior
- else:
- raise NotImplementedError(f"encoder_posterior of type '{type(encoder_posterior)}' not yet implemented")
- return self.scale_factor * z
-
- def get_learned_conditioning(self, c):
- if self.cond_stage_forward is None:
- if hasattr(self.cond_stage_model, 'encode') and callable(self.cond_stage_model.encode):
- c = self.cond_stage_model.encode(c)
- if isinstance(c, DiagonalGaussianDistribution):
- c = c.mode()
- else:
- c = self.cond_stage_model(c)
- else:
- assert hasattr(self.cond_stage_model, self.cond_stage_forward)
- c = getattr(self.cond_stage_model, self.cond_stage_forward)(c)
- return c
-
- def meshgrid(self, h, w):
- y = torch.arange(0, h).view(h, 1, 1).repeat(1, w, 1)
- x = torch.arange(0, w).view(1, w, 1).repeat(h, 1, 1)
-
- arr = torch.cat([y, x], dim=-1)
- return arr
-
- def delta_border(self, h, w):
- """
- :param h: height
- :param w: width
- :return: normalized distance to image border,
- wtith min distance = 0 at border and max dist = 0.5 at image center
- """
- lower_right_corner = torch.tensor([h - 1, w - 1]).view(1, 1, 2)
- arr = self.meshgrid(h, w) / lower_right_corner
- dist_left_up = torch.min(arr, dim=-1, keepdims=True)[0]
- dist_right_down = torch.min(1 - arr, dim=-1, keepdims=True)[0]
- edge_dist = torch.min(torch.cat([dist_left_up, dist_right_down], dim=-1), dim=-1)[0]
- return edge_dist
-
- def get_weighting(self, h, w, Ly, Lx, device):
- weighting = self.delta_border(h, w)
- weighting = torch.clip(weighting, self.split_input_params["clip_min_weight"],
- self.split_input_params["clip_max_weight"], )
- weighting = weighting.view(1, h * w, 1).repeat(1, 1, Ly * Lx).to(device)
-
- if self.split_input_params["tie_braker"]:
- L_weighting = self.delta_border(Ly, Lx)
- L_weighting = torch.clip(L_weighting,
- self.split_input_params["clip_min_tie_weight"],
- self.split_input_params["clip_max_tie_weight"])
-
- L_weighting = L_weighting.view(1, 1, Ly * Lx).to(device)
- weighting = weighting * L_weighting
- return weighting
-
- def get_fold_unfold(self, x, kernel_size, stride, uf=1, df=1): # todo load once not every time, shorten code
- """
- :param x: img of size (bs, c, h, w)
- :return: n img crops of size (n, bs, c, kernel_size[0], kernel_size[1])
- """
- bs, nc, h, w = x.shape
-
- # number of crops in image
- Ly = (h - kernel_size[0]) // stride[0] + 1
- Lx = (w - kernel_size[1]) // stride[1] + 1
-
- if uf == 1 and df == 1:
- fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride)
- unfold = torch.nn.Unfold(**fold_params)
-
- fold = torch.nn.Fold(output_size=x.shape[2:], **fold_params)
-
- weighting = self.get_weighting(kernel_size[0], kernel_size[1], Ly, Lx, x.device).to(x.dtype)
- normalization = fold(weighting).view(1, 1, h, w) # normalizes the overlap
- weighting = weighting.view((1, 1, kernel_size[0], kernel_size[1], Ly * Lx))
-
- elif uf > 1 and df == 1:
- fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride)
- unfold = torch.nn.Unfold(**fold_params)
-
- fold_params2 = dict(kernel_size=(kernel_size[0] * uf, kernel_size[0] * uf),
- dilation=1, padding=0,
- stride=(stride[0] * uf, stride[1] * uf))
- fold = torch.nn.Fold(output_size=(x.shape[2] * uf, x.shape[3] * uf), **fold_params2)
-
- weighting = self.get_weighting(kernel_size[0] * uf, kernel_size[1] * uf, Ly, Lx, x.device).to(x.dtype)
- normalization = fold(weighting).view(1, 1, h * uf, w * uf) # normalizes the overlap
- weighting = weighting.view((1, 1, kernel_size[0] * uf, kernel_size[1] * uf, Ly * Lx))
-
- elif df > 1 and uf == 1:
- fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride)
- unfold = torch.nn.Unfold(**fold_params)
-
- fold_params2 = dict(kernel_size=(kernel_size[0] // df, kernel_size[0] // df),
- dilation=1, padding=0,
- stride=(stride[0] // df, stride[1] // df))
- fold = torch.nn.Fold(output_size=(x.shape[2] // df, x.shape[3] // df), **fold_params2)
-
- weighting = self.get_weighting(kernel_size[0] // df, kernel_size[1] // df, Ly, Lx, x.device).to(x.dtype)
- normalization = fold(weighting).view(1, 1, h // df, w // df) # normalizes the overlap
- weighting = weighting.view((1, 1, kernel_size[0] // df, kernel_size[1] // df, Ly * Lx))
-
- else:
- raise NotImplementedError
-
- return fold, unfold, normalization, weighting
-
- @torch.no_grad()
- def get_input(self, batch, k, return_first_stage_outputs=False, force_c_encode=False,
- cond_key=None, return_original_cond=False, bs=None, return_x=False):
- x = super().get_input(batch, k)
- if bs is not None:
- x = x[:bs]
- x = x.to(self.device)
- encoder_posterior = self.encode_first_stage(x)
- z = self.get_first_stage_encoding(encoder_posterior).detach()
-
- if self.model.conditioning_key is not None and not self.force_null_conditioning:
- if cond_key is None:
- cond_key = self.cond_stage_key
- if cond_key != self.first_stage_key:
- if cond_key in ['caption', 'coordinates_bbox', "txt"]:
- xc = batch[cond_key]
- elif cond_key in ['class_label', 'cls']:
- xc = batch
- else:
- xc = super().get_input(batch, cond_key).to(self.device)
- else:
- xc = x
- if not self.cond_stage_trainable or force_c_encode:
- if isinstance(xc, dict) or isinstance(xc, list):
- c = self.get_learned_conditioning(xc)
- else:
- c = self.get_learned_conditioning(xc.to(self.device))
- else:
- c = xc
- if bs is not None:
- c = c[:bs]
-
- if self.use_positional_encodings:
- pos_x, pos_y = self.compute_latent_shifts(batch)
- ckey = __conditioning_keys__[self.model.conditioning_key]
- c = {ckey: c, 'pos_x': pos_x, 'pos_y': pos_y}
-
- else:
- c = None
- xc = None
- if self.use_positional_encodings:
- pos_x, pos_y = self.compute_latent_shifts(batch)
- c = {'pos_x': pos_x, 'pos_y': pos_y}
- out = [z, c]
- if return_first_stage_outputs:
- xrec = self.decode_first_stage(z)
- out.extend([x, xrec])
- if return_x:
- out.extend([x])
- if return_original_cond:
- out.append(xc)
- return out
-
- @torch.no_grad()
- def decode_first_stage(self, z, predict_cids=False, force_not_quantize=False):
- if predict_cids:
- if z.dim() == 4:
- z = torch.argmax(z.exp(), dim=1).long()
- z = self.first_stage_model.quantize.get_codebook_entry(z, shape=None)
- z = rearrange(z, 'b h w c -> b c h w').contiguous()
-
- z = 1. / self.scale_factor * z
- return self.first_stage_model.decode(z)
-
- @torch.no_grad()
- def encode_first_stage(self, x):
- return self.first_stage_model.encode(x)
-
- def shared_step(self, batch, **kwargs):
- x, c = self.get_input(batch, self.first_stage_key)
- loss = self(x, c)
- return loss
-
- def forward(self, x, c, *args, **kwargs):
- t = torch.randint(0, self.num_timesteps, (x.shape[0],), device=self.device).long()
- if self.model.conditioning_key is not None:
- assert c is not None
- if self.cond_stage_trainable:
- c = self.get_learned_conditioning(c)
- if self.shorten_cond_schedule: # TODO: drop this option
- tc = self.cond_ids[t].to(self.device)
- c = self.q_sample(x_start=c, t=tc, noise=torch.randn_like(c.float()))
- return self.p_losses(x, c, t, *args, **kwargs)
-
- def apply_model(self, x_noisy, t, cond, return_ids=False):
- if isinstance(cond, dict):
- # hybrid case, cond is expected to be a dict
- pass
- else:
- if not isinstance(cond, list):
- cond = [cond]
- key = 'c_concat' if self.model.conditioning_key == 'concat' else 'c_crossattn'
- cond = {key: cond}
-
- x_recon = self.model(x_noisy, t, **cond)
-
- if isinstance(x_recon, tuple) and not return_ids:
- return x_recon[0]
- else:
- return x_recon
-
- def _predict_eps_from_xstart(self, x_t, t, pred_xstart):
- return (extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - pred_xstart) / \
- extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape)
-
- def _prior_bpd(self, x_start):
- """
- Get the prior KL term for the variational lower-bound, measured in
- bits-per-dim.
- This term can't be optimized, as it only depends on the encoder.
- :param x_start: the [N x C x ...] tensor of inputs.
- :return: a batch of [N] KL values (in bits), one per batch element.
- """
- batch_size = x_start.shape[0]
- t = torch.tensor([self.num_timesteps - 1] * batch_size, device=x_start.device)
- qt_mean, _, qt_log_variance = self.q_mean_variance(x_start, t)
- kl_prior = normal_kl(mean1=qt_mean, logvar1=qt_log_variance, mean2=0.0, logvar2=0.0)
- return mean_flat(kl_prior) / np.log(2.0)
-
- def p_losses(self, x_start, cond, t, noise=None):
- noise = default(noise, lambda: torch.randn_like(x_start))
- x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise)
- model_output = self.apply_model(x_noisy, t, cond)
-
- loss_dict = {}
- prefix = 'train' if self.training else 'val'
-
- if self.parameterization == "x0":
- target = x_start
- elif self.parameterization == "eps":
- target = noise
- elif self.parameterization == "v":
- target = self.get_v(x_start, noise, t)
- else:
- raise NotImplementedError()
-
- loss_simple = self.get_loss(model_output, target, mean=False).mean([1, 2, 3])
- loss_dict.update({f'{prefix}/loss_simple': loss_simple.mean()})
-
- logvar_t = self.logvar[t].to(self.device)
- loss = loss_simple / torch.exp(logvar_t) + logvar_t
- # loss = loss_simple / torch.exp(self.logvar) + self.logvar
- if self.learn_logvar:
- loss_dict.update({f'{prefix}/loss_gamma': loss.mean()})
- loss_dict.update({'logvar': self.logvar.data.mean()})
-
- loss = self.l_simple_weight * loss.mean()
-
- loss_vlb = self.get_loss(model_output, target, mean=False).mean(dim=(1, 2, 3))
- loss_vlb = (self.lvlb_weights[t] * loss_vlb).mean()
- loss_dict.update({f'{prefix}/loss_vlb': loss_vlb})
- loss += (self.original_elbo_weight * loss_vlb)
- loss_dict.update({f'{prefix}/loss': loss})
-
- return loss, loss_dict
-
- def p_mean_variance(self, x, c, t, clip_denoised: bool, return_codebook_ids=False, quantize_denoised=False,
- return_x0=False, score_corrector=None, corrector_kwargs=None):
- t_in = t
- model_out = self.apply_model(x, t_in, c, return_ids=return_codebook_ids)
-
- if score_corrector is not None:
- assert self.parameterization == "eps"
- model_out = score_corrector.modify_score(self, model_out, x, t, c, **corrector_kwargs)
-
- if return_codebook_ids:
- model_out, logits = model_out
-
- if self.parameterization == "eps":
- x_recon = self.predict_start_from_noise(x, t=t, noise=model_out)
- elif self.parameterization == "x0":
- x_recon = model_out
- else:
- raise NotImplementedError()
-
- if clip_denoised:
- x_recon.clamp_(-1., 1.)
- if quantize_denoised:
- x_recon, _, [_, _, indices] = self.first_stage_model.quantize(x_recon)
- model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t)
- if return_codebook_ids:
- return model_mean, posterior_variance, posterior_log_variance, logits
- elif return_x0:
- return model_mean, posterior_variance, posterior_log_variance, x_recon
- else:
- return model_mean, posterior_variance, posterior_log_variance
-
- @torch.no_grad()
- def p_sample(self, x, c, t, clip_denoised=False, repeat_noise=False,
- return_codebook_ids=False, quantize_denoised=False, return_x0=False,
- temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None):
- b, *_, device = *x.shape, x.device
- outputs = self.p_mean_variance(x=x, c=c, t=t, clip_denoised=clip_denoised,
- return_codebook_ids=return_codebook_ids,
- quantize_denoised=quantize_denoised,
- return_x0=return_x0,
- score_corrector=score_corrector, corrector_kwargs=corrector_kwargs)
- if return_codebook_ids:
- raise DeprecationWarning("Support dropped.")
- model_mean, _, model_log_variance, logits = outputs
- elif return_x0:
- model_mean, _, model_log_variance, x0 = outputs
- else:
- model_mean, _, model_log_variance = outputs
-
- noise = noise_like(x.shape, device, repeat_noise) * temperature
- if noise_dropout > 0.:
- noise = torch.nn.functional.dropout(noise, p=noise_dropout)
- # no noise when t == 0
- nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1)))
-
- if return_codebook_ids:
- return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise, logits.argmax(dim=1)
- if return_x0:
- return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise, x0
- else:
- return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise
-
- @torch.no_grad()
- def progressive_denoising(self, cond, shape, verbose=True, callback=None, quantize_denoised=False,
- img_callback=None, mask=None, x0=None, temperature=1., noise_dropout=0.,
- score_corrector=None, corrector_kwargs=None, batch_size=None, x_T=None, start_T=None,
- log_every_t=None):
- if not log_every_t:
- log_every_t = self.log_every_t
- timesteps = self.num_timesteps
- if batch_size is not None:
- b = batch_size if batch_size is not None else shape[0]
- shape = [batch_size] + list(shape)
- else:
- b = batch_size = shape[0]
- if x_T is None:
- img = torch.randn(shape, device=self.device)
- else:
- img = x_T
- intermediates = []
- if cond is not None:
- if isinstance(cond, dict):
- cond = {key: cond[key][:batch_size] if not isinstance(cond[key], list) else
- list(map(lambda x: x[:batch_size], cond[key])) for key in cond}
- else:
- cond = [c[:batch_size] for c in cond] if isinstance(cond, list) else cond[:batch_size]
-
- if start_T is not None:
- timesteps = min(timesteps, start_T)
- iterator = tqdm(reversed(range(0, timesteps)), desc='Progressive Generation',
- total=timesteps) if verbose else reversed(
- range(0, timesteps))
- if type(temperature) == float:
- temperature = [temperature] * timesteps
-
- for i in iterator:
- ts = torch.full((b,), i, device=self.device, dtype=torch.long)
- if self.shorten_cond_schedule:
- assert self.model.conditioning_key != 'hybrid'
- tc = self.cond_ids[ts].to(cond.device)
- cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond))
-
- img, x0_partial = self.p_sample(img, cond, ts,
- clip_denoised=self.clip_denoised,
- quantize_denoised=quantize_denoised, return_x0=True,
- temperature=temperature[i], noise_dropout=noise_dropout,
- score_corrector=score_corrector, corrector_kwargs=corrector_kwargs)
- if mask is not None:
- assert x0 is not None
- img_orig = self.q_sample(x0, ts)
- img = img_orig * mask + (1. - mask) * img
-
- if i % log_every_t == 0 or i == timesteps - 1:
- intermediates.append(x0_partial)
- if callback: callback(i)
- if img_callback: img_callback(img, i)
- return img, intermediates
-
- @torch.no_grad()
- def p_sample_loop(self, cond, shape, return_intermediates=False,
- x_T=None, verbose=True, callback=None, timesteps=None, quantize_denoised=False,
- mask=None, x0=None, img_callback=None, start_T=None,
- log_every_t=None):
-
- if not log_every_t:
- log_every_t = self.log_every_t
- device = self.betas.device
- b = shape[0]
- if x_T is None:
- img = torch.randn(shape, device=device)
- else:
- img = x_T
-
- intermediates = [img]
- if timesteps is None:
- timesteps = self.num_timesteps
-
- if start_T is not None:
- timesteps = min(timesteps, start_T)
- iterator = tqdm(reversed(range(0, timesteps)), desc='Sampling t', total=timesteps) if verbose else reversed(
- range(0, timesteps))
-
- if mask is not None:
- assert x0 is not None
- assert x0.shape[2:3] == mask.shape[2:3] # spatial size has to match
-
- for i in iterator:
- ts = torch.full((b,), i, device=device, dtype=torch.long)
- if self.shorten_cond_schedule:
- assert self.model.conditioning_key != 'hybrid'
- tc = self.cond_ids[ts].to(cond.device)
- cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond))
-
- img = self.p_sample(img, cond, ts,
- clip_denoised=self.clip_denoised,
- quantize_denoised=quantize_denoised)
- if mask is not None:
- img_orig = self.q_sample(x0, ts)
- img = img_orig * mask + (1. - mask) * img
-
- if i % log_every_t == 0 or i == timesteps - 1:
- intermediates.append(img)
- if callback: callback(i)
- if img_callback: img_callback(img, i)
-
- if return_intermediates:
- return img, intermediates
- return img
-
- @torch.no_grad()
- def sample(self, cond, batch_size=16, return_intermediates=False, x_T=None,
- verbose=True, timesteps=None, quantize_denoised=False,
- mask=None, x0=None, shape=None, **kwargs):
- if shape is None:
- shape = (batch_size, self.channels, self.image_size, self.image_size)
- if cond is not None:
- if isinstance(cond, dict):
- cond = {key: cond[key][:batch_size] if not isinstance(cond[key], list) else
- list(map(lambda x: x[:batch_size], cond[key])) for key in cond}
- else:
- cond = [c[:batch_size] for c in cond] if isinstance(cond, list) else cond[:batch_size]
- return self.p_sample_loop(cond,
- shape,
- return_intermediates=return_intermediates, x_T=x_T,
- verbose=verbose, timesteps=timesteps, quantize_denoised=quantize_denoised,
- mask=mask, x0=x0)
-
- @torch.no_grad()
- def sample_log(self, cond, batch_size, ddim, ddim_steps, **kwargs):
- if ddim:
- ddim_sampler = DDIMSampler(self)
- shape = (self.channels, self.image_size, self.image_size)
- samples, intermediates = ddim_sampler.sample(ddim_steps, batch_size,
- shape, cond, verbose=False, **kwargs)
-
- else:
- samples, intermediates = self.sample(cond=cond, batch_size=batch_size,
- return_intermediates=True, **kwargs)
-
- return samples, intermediates
-
- @torch.no_grad()
- def get_unconditional_conditioning(self, batch_size, null_label=None):
- if null_label is not None:
- xc = null_label
- if isinstance(xc, ListConfig):
- xc = list(xc)
- if isinstance(xc, dict) or isinstance(xc, list):
- c = self.get_learned_conditioning(xc)
- else:
- if hasattr(xc, "to"):
- xc = xc.to(self.device)
- c = self.get_learned_conditioning(xc)
- else:
- if self.cond_stage_key in ["class_label", "cls"]:
- xc = self.cond_stage_model.get_unconditional_conditioning(batch_size, device=self.device)
- return self.get_learned_conditioning(xc)
- else:
- raise NotImplementedError("todo")
- if isinstance(c, list): # in case the encoder gives us a list
- for i in range(len(c)):
- c[i] = repeat(c[i], '1 ... -> b ...', b=batch_size).to(self.device)
- else:
- c = repeat(c, '1 ... -> b ...', b=batch_size).to(self.device)
- return c
-
- @torch.no_grad()
- def log_images(self, batch, N=8, n_row=4, sample=True, ddim_steps=50, ddim_eta=0., return_keys=None,
- quantize_denoised=True, inpaint=True, plot_denoise_rows=False, plot_progressive_rows=True,
- plot_diffusion_rows=True, unconditional_guidance_scale=1., unconditional_guidance_label=None,
- use_ema_scope=True,
- **kwargs):
- ema_scope = self.ema_scope if use_ema_scope else nullcontext
- use_ddim = ddim_steps is not None
-
- log = dict()
- z, c, x, xrec, xc = self.get_input(batch, self.first_stage_key,
- return_first_stage_outputs=True,
- force_c_encode=True,
- return_original_cond=True,
- bs=N)
- N = min(x.shape[0], N)
- n_row = min(x.shape[0], n_row)
- log["inputs"] = x
- log["reconstruction"] = xrec
- if self.model.conditioning_key is not None:
- if hasattr(self.cond_stage_model, "decode"):
- xc = self.cond_stage_model.decode(c)
- log["conditioning"] = xc
- elif self.cond_stage_key in ["caption", "txt"]:
- xc = log_txt_as_img((x.shape[2], x.shape[3]), batch[self.cond_stage_key], size=x.shape[2] // 25)
- log["conditioning"] = xc
- elif self.cond_stage_key in ['class_label', "cls"]:
- try:
- xc = log_txt_as_img((x.shape[2], x.shape[3]), batch["human_label"], size=x.shape[2] // 25)
- log['conditioning'] = xc
- except KeyError:
- # probably no "human_label" in batch
- pass
- elif isimage(xc):
- log["conditioning"] = xc
- if ismap(xc):
- log["original_conditioning"] = self.to_rgb(xc)
-
- if plot_diffusion_rows:
- # get diffusion row
- diffusion_row = list()
- z_start = z[:n_row]
- for t in range(self.num_timesteps):
- if t % self.log_every_t == 0 or t == self.num_timesteps - 1:
- t = repeat(torch.tensor([t]), '1 -> b', b=n_row)
- t = t.to(self.device).long()
- noise = torch.randn_like(z_start)
- z_noisy = self.q_sample(x_start=z_start, t=t, noise=noise)
- diffusion_row.append(self.decode_first_stage(z_noisy))
-
- diffusion_row = torch.stack(diffusion_row) # n_log_step, n_row, C, H, W
- diffusion_grid = rearrange(diffusion_row, 'n b c h w -> b n c h w')
- diffusion_grid = rearrange(diffusion_grid, 'b n c h w -> (b n) c h w')
- diffusion_grid = make_grid(diffusion_grid, nrow=diffusion_row.shape[0])
- log["diffusion_row"] = diffusion_grid
-
- if sample:
- # get denoise row
- with ema_scope("Sampling"):
- samples, z_denoise_row = self.sample_log(cond=c, batch_size=N, ddim=use_ddim,
- ddim_steps=ddim_steps, eta=ddim_eta)
- # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True)
- x_samples = self.decode_first_stage(samples)
- log["samples"] = x_samples
- if plot_denoise_rows:
- denoise_grid = self._get_denoise_row_from_list(z_denoise_row)
- log["denoise_row"] = denoise_grid
-
- if quantize_denoised and not isinstance(self.first_stage_model, AutoencoderKL) and not isinstance(
- self.first_stage_model, IdentityFirstStage):
- # also display when quantizing x0 while sampling
- with ema_scope("Plotting Quantized Denoised"):
- samples, z_denoise_row = self.sample_log(cond=c, batch_size=N, ddim=use_ddim,
- ddim_steps=ddim_steps, eta=ddim_eta,
- quantize_denoised=True)
- # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True,
- # quantize_denoised=True)
- x_samples = self.decode_first_stage(samples.to(self.device))
- log["samples_x0_quantized"] = x_samples
-
- if unconditional_guidance_scale > 1.0:
- uc = self.get_unconditional_conditioning(N, unconditional_guidance_label)
- if self.model.conditioning_key == "crossattn-adm":
- uc = {"c_crossattn": [uc], "c_adm": c["c_adm"]}
- with ema_scope("Sampling with classifier-free guidance"):
- samples_cfg, _ = self.sample_log(cond=c, batch_size=N, ddim=use_ddim,
- ddim_steps=ddim_steps, eta=ddim_eta,
- unconditional_guidance_scale=unconditional_guidance_scale,
- unconditional_conditioning=uc,
- )
- x_samples_cfg = self.decode_first_stage(samples_cfg)
- log[f"samples_cfg_scale_{unconditional_guidance_scale:.2f}"] = x_samples_cfg
-
- if inpaint:
- # make a simple center square
- b, h, w = z.shape[0], z.shape[2], z.shape[3]
- mask = torch.ones(N, h, w).to(self.device)
- # zeros will be filled in
- mask[:, h // 4:3 * h // 4, w // 4:3 * w // 4] = 0.
- mask = mask[:, None, ...]
- with ema_scope("Plotting Inpaint"):
- samples, _ = self.sample_log(cond=c, batch_size=N, ddim=use_ddim, eta=ddim_eta,
- ddim_steps=ddim_steps, x0=z[:N], mask=mask)
- x_samples = self.decode_first_stage(samples.to(self.device))
- log["samples_inpainting"] = x_samples
- log["mask"] = mask
-
- # outpaint
- mask = 1. - mask
- with ema_scope("Plotting Outpaint"):
- samples, _ = self.sample_log(cond=c, batch_size=N, ddim=use_ddim, eta=ddim_eta,
- ddim_steps=ddim_steps, x0=z[:N], mask=mask)
- x_samples = self.decode_first_stage(samples.to(self.device))
- log["samples_outpainting"] = x_samples
-
- if plot_progressive_rows:
- with ema_scope("Plotting Progressives"):
- img, progressives = self.progressive_denoising(c,
- shape=(self.channels, self.image_size, self.image_size),
- batch_size=N)
- prog_row = self._get_denoise_row_from_list(progressives, desc="Progressive Generation")
- log["progressive_row"] = prog_row
-
- if return_keys:
- if np.intersect1d(list(log.keys()), return_keys).shape[0] == 0:
- return log
- else:
- return {key: log[key] for key in return_keys}
- return log
-
- def configure_optimizers(self):
- lr = self.learning_rate
- params = list(self.model.parameters())
- if self.cond_stage_trainable:
- print(f"{self.__class__.__name__}: Also optimizing conditioner params!")
- params = params + list(self.cond_stage_model.parameters())
- if self.learn_logvar:
- print('Diffusion model optimizing logvar')
- params.append(self.logvar)
- opt = torch.optim.AdamW(params, lr=lr)
- if self.use_scheduler:
- assert 'target' in self.scheduler_config
- scheduler = instantiate_from_config(self.scheduler_config)
-
- print("Setting up LambdaLR scheduler...")
- scheduler = [
- {
- 'scheduler': LambdaLR(opt, lr_lambda=scheduler.schedule),
- 'interval': 'step',
- 'frequency': 1
- }]
- return [opt], scheduler
- return opt
-
- @torch.no_grad()
- def to_rgb(self, x):
- x = x.float()
- if not hasattr(self, "colorize"):
- self.colorize = torch.randn(3, x.shape[1], 1, 1).to(x)
- x = nn.functional.conv2d(x, weight=self.colorize)
- x = 2. * (x - x.min()) / (x.max() - x.min()) - 1.
- return x
-
-
-class DiffusionWrapper(pl.LightningModule):
- def __init__(self, diff_model_config, conditioning_key):
- super().__init__()
- self.sequential_cross_attn = diff_model_config.pop("sequential_crossattn", False)
- self.diffusion_model = instantiate_from_config(diff_model_config)
- self.conditioning_key = conditioning_key
- assert self.conditioning_key in [None, 'concat', 'crossattn', 'hybrid', 'adm', 'hybrid-adm', 'crossattn-adm']
-
- def forward(self, x, t, c_concat: list = None, c_crossattn: list = None, c_adm=None):
- if self.conditioning_key is None:
- out = self.diffusion_model(x, t)
- elif self.conditioning_key == 'concat':
- xc = torch.cat([x] + c_concat, dim=1)
- out = self.diffusion_model(xc, t)
- elif self.conditioning_key == 'crossattn':
- if not self.sequential_cross_attn:
- cc = torch.cat(c_crossattn, 1)
- else:
- cc = c_crossattn
- out = self.diffusion_model(x, t, context=cc)
- elif self.conditioning_key == 'hybrid':
- xc = torch.cat([x] + c_concat, dim=1)
- cc = torch.cat(c_crossattn, 1)
- out = self.diffusion_model(xc, t, context=cc)
- elif self.conditioning_key == 'hybrid-adm':
- assert c_adm is not None
- xc = torch.cat([x] + c_concat, dim=1)
- cc = torch.cat(c_crossattn, 1)
- out = self.diffusion_model(xc, t, context=cc, y=c_adm)
- elif self.conditioning_key == 'crossattn-adm':
- assert c_adm is not None
- cc = torch.cat(c_crossattn, 1)
- out = self.diffusion_model(x, t, context=cc, y=c_adm)
- elif self.conditioning_key == 'adm':
- cc = c_crossattn[0]
- out = self.diffusion_model(x, t, y=cc)
- else:
- raise NotImplementedError()
-
- return out
-
-
-class LatentUpscaleDiffusion(LatentDiffusion):
- def __init__(self, *args, low_scale_config, low_scale_key="LR", noise_level_key=None, **kwargs):
- super().__init__(*args, **kwargs)
- # assumes that neither the cond_stage nor the low_scale_model contain trainable params
- assert not self.cond_stage_trainable
- self.instantiate_low_stage(low_scale_config)
- self.low_scale_key = low_scale_key
- self.noise_level_key = noise_level_key
-
- def instantiate_low_stage(self, config):
- model = instantiate_from_config(config)
- self.low_scale_model = model.eval()
- self.low_scale_model.train = disabled_train
- for param in self.low_scale_model.parameters():
- param.requires_grad = False
-
- @torch.no_grad()
- def get_input(self, batch, k, cond_key=None, bs=None, log_mode=False):
- if not log_mode:
- z, c = super().get_input(batch, k, force_c_encode=True, bs=bs)
- else:
- z, c, x, xrec, xc = super().get_input(batch, self.first_stage_key, return_first_stage_outputs=True,
- force_c_encode=True, return_original_cond=True, bs=bs)
- x_low = batch[self.low_scale_key][:bs]
- x_low = rearrange(x_low, 'b h w c -> b c h w')
- x_low = x_low.to(memory_format=torch.contiguous_format).float()
- zx, noise_level = self.low_scale_model(x_low)
- if self.noise_level_key is not None:
- # get noise level from batch instead, e.g. when extracting a custom noise level for bsr
- raise NotImplementedError('TODO')
-
- all_conds = {"c_concat": [zx], "c_crossattn": [c], "c_adm": noise_level}
- if log_mode:
- # TODO: maybe disable if too expensive
- x_low_rec = self.low_scale_model.decode(zx)
- return z, all_conds, x, xrec, xc, x_low, x_low_rec, noise_level
- return z, all_conds
-
- @torch.no_grad()
- def log_images(self, batch, N=8, n_row=4, sample=True, ddim_steps=200, ddim_eta=1., return_keys=None,
- plot_denoise_rows=False, plot_progressive_rows=True, plot_diffusion_rows=True,
- unconditional_guidance_scale=1., unconditional_guidance_label=None, use_ema_scope=True,
- **kwargs):
- ema_scope = self.ema_scope if use_ema_scope else nullcontext
- use_ddim = ddim_steps is not None
-
- log = dict()
- z, c, x, xrec, xc, x_low, x_low_rec, noise_level = self.get_input(batch, self.first_stage_key, bs=N,
- log_mode=True)
- N = min(x.shape[0], N)
- n_row = min(x.shape[0], n_row)
- log["inputs"] = x
- log["reconstruction"] = xrec
- log["x_lr"] = x_low
- log[f"x_lr_rec_@noise_levels{'-'.join(map(lambda x: str(x), list(noise_level.cpu().numpy())))}"] = x_low_rec
- if self.model.conditioning_key is not None:
- if hasattr(self.cond_stage_model, "decode"):
- xc = self.cond_stage_model.decode(c)
- log["conditioning"] = xc
- elif self.cond_stage_key in ["caption", "txt"]:
- xc = log_txt_as_img((x.shape[2], x.shape[3]), batch[self.cond_stage_key], size=x.shape[2] // 25)
- log["conditioning"] = xc
- elif self.cond_stage_key in ['class_label', 'cls']:
- xc = log_txt_as_img((x.shape[2], x.shape[3]), batch["human_label"], size=x.shape[2] // 25)
- log['conditioning'] = xc
- elif isimage(xc):
- log["conditioning"] = xc
- if ismap(xc):
- log["original_conditioning"] = self.to_rgb(xc)
-
- if plot_diffusion_rows:
- # get diffusion row
- diffusion_row = list()
- z_start = z[:n_row]
- for t in range(self.num_timesteps):
- if t % self.log_every_t == 0 or t == self.num_timesteps - 1:
- t = repeat(torch.tensor([t]), '1 -> b', b=n_row)
- t = t.to(self.device).long()
- noise = torch.randn_like(z_start)
- z_noisy = self.q_sample(x_start=z_start, t=t, noise=noise)
- diffusion_row.append(self.decode_first_stage(z_noisy))
-
- diffusion_row = torch.stack(diffusion_row) # n_log_step, n_row, C, H, W
- diffusion_grid = rearrange(diffusion_row, 'n b c h w -> b n c h w')
- diffusion_grid = rearrange(diffusion_grid, 'b n c h w -> (b n) c h w')
- diffusion_grid = make_grid(diffusion_grid, nrow=diffusion_row.shape[0])
- log["diffusion_row"] = diffusion_grid
-
- if sample:
- # get denoise row
- with ema_scope("Sampling"):
- samples, z_denoise_row = self.sample_log(cond=c, batch_size=N, ddim=use_ddim,
- ddim_steps=ddim_steps, eta=ddim_eta)
- # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True)
- x_samples = self.decode_first_stage(samples)
- log["samples"] = x_samples
- if plot_denoise_rows:
- denoise_grid = self._get_denoise_row_from_list(z_denoise_row)
- log["denoise_row"] = denoise_grid
-
- if unconditional_guidance_scale > 1.0:
- uc_tmp = self.get_unconditional_conditioning(N, unconditional_guidance_label)
- # TODO explore better "unconditional" choices for the other keys
- # maybe guide away from empty text label and highest noise level and maximally degraded zx?
- uc = dict()
- for k in c:
- if k == "c_crossattn":
- assert isinstance(c[k], list) and len(c[k]) == 1
- uc[k] = [uc_tmp]
- elif k == "c_adm": # todo: only run with text-based guidance?
- assert isinstance(c[k], torch.Tensor)
- #uc[k] = torch.ones_like(c[k]) * self.low_scale_model.max_noise_level
- uc[k] = c[k]
- elif isinstance(c[k], list):
- uc[k] = [c[k][i] for i in range(len(c[k]))]
- else:
- uc[k] = c[k]
-
- with ema_scope("Sampling with classifier-free guidance"):
- samples_cfg, _ = self.sample_log(cond=c, batch_size=N, ddim=use_ddim,
- ddim_steps=ddim_steps, eta=ddim_eta,
- unconditional_guidance_scale=unconditional_guidance_scale,
- unconditional_conditioning=uc,
- )
- x_samples_cfg = self.decode_first_stage(samples_cfg)
- log[f"samples_cfg_scale_{unconditional_guidance_scale:.2f}"] = x_samples_cfg
-
- if plot_progressive_rows:
- with ema_scope("Plotting Progressives"):
- img, progressives = self.progressive_denoising(c,
- shape=(self.channels, self.image_size, self.image_size),
- batch_size=N)
- prog_row = self._get_denoise_row_from_list(progressives, desc="Progressive Generation")
- log["progressive_row"] = prog_row
-
- return log
-
-
-class LatentFinetuneDiffusion(LatentDiffusion):
- """
- Basis for different finetunas, such as inpainting or depth2image
- To disable finetuning mode, set finetune_keys to None
- """
-
- def __init__(self,
- concat_keys: tuple,
- finetune_keys=("model.diffusion_model.input_blocks.0.0.weight",
- "model_ema.diffusion_modelinput_blocks00weight"
- ),
- keep_finetune_dims=4,
- # if model was trained without concat mode before and we would like to keep these channels
- c_concat_log_start=None, # to log reconstruction of c_concat codes
- c_concat_log_end=None,
- *args, **kwargs
- ):
- ckpt_path = kwargs.pop("ckpt_path", None)
- ignore_keys = kwargs.pop("ignore_keys", list())
- super().__init__(*args, **kwargs)
- self.finetune_keys = finetune_keys
- self.concat_keys = concat_keys
- self.keep_dims = keep_finetune_dims
- self.c_concat_log_start = c_concat_log_start
- self.c_concat_log_end = c_concat_log_end
- if exists(self.finetune_keys): assert exists(ckpt_path), 'can only finetune from a given checkpoint'
- if exists(ckpt_path):
- self.init_from_ckpt(ckpt_path, ignore_keys)
-
- def init_from_ckpt(self, path, ignore_keys=list(), only_model=False):
- sd = torch.load(path, map_location="cpu")
- if "state_dict" in list(sd.keys()):
- sd = sd["state_dict"]
- keys = list(sd.keys())
- for k in keys:
- for ik in ignore_keys:
- if k.startswith(ik):
- print("Deleting key {} from state_dict.".format(k))
- del sd[k]
-
- # make it explicit, finetune by including extra input channels
- if exists(self.finetune_keys) and k in self.finetune_keys:
- new_entry = None
- for name, param in self.named_parameters():
- if name in self.finetune_keys:
- print(
- f"modifying key '{name}' and keeping its original {self.keep_dims} (channels) dimensions only")
- new_entry = torch.zeros_like(param) # zero init
- assert exists(new_entry), 'did not find matching parameter to modify'
- new_entry[:, :self.keep_dims, ...] = sd[k]
- sd[k] = new_entry
-
- missing, unexpected = self.load_state_dict(sd, strict=False) if not only_model else self.model.load_state_dict(
- sd, strict=False)
- print(f"Restored from {path} with {len(missing)} missing and {len(unexpected)} unexpected keys")
- if len(missing) > 0:
- print(f"Missing Keys: {missing}")
- if len(unexpected) > 0:
- print(f"Unexpected Keys: {unexpected}")
-
- @torch.no_grad()
- def log_images(self, batch, N=8, n_row=4, sample=True, ddim_steps=200, ddim_eta=1., return_keys=None,
- quantize_denoised=True, inpaint=True, plot_denoise_rows=False, plot_progressive_rows=True,
- plot_diffusion_rows=True, unconditional_guidance_scale=1., unconditional_guidance_label=None,
- use_ema_scope=True,
- **kwargs):
- ema_scope = self.ema_scope if use_ema_scope else nullcontext
- use_ddim = ddim_steps is not None
-
- log = dict()
- z, c, x, xrec, xc = self.get_input(batch, self.first_stage_key, bs=N, return_first_stage_outputs=True)
- c_cat, c = c["c_concat"][0], c["c_crossattn"][0]
- N = min(x.shape[0], N)
- n_row = min(x.shape[0], n_row)
- log["inputs"] = x
- log["reconstruction"] = xrec
- if self.model.conditioning_key is not None:
- if hasattr(self.cond_stage_model, "decode"):
- xc = self.cond_stage_model.decode(c)
- log["conditioning"] = xc
- elif self.cond_stage_key in ["caption", "txt"]:
- xc = log_txt_as_img((x.shape[2], x.shape[3]), batch[self.cond_stage_key], size=x.shape[2] // 25)
- log["conditioning"] = xc
- elif self.cond_stage_key in ['class_label', 'cls']:
- xc = log_txt_as_img((x.shape[2], x.shape[3]), batch["human_label"], size=x.shape[2] // 25)
- log['conditioning'] = xc
- elif isimage(xc):
- log["conditioning"] = xc
- if ismap(xc):
- log["original_conditioning"] = self.to_rgb(xc)
-
- if not (self.c_concat_log_start is None and self.c_concat_log_end is None):
- log["c_concat_decoded"] = self.decode_first_stage(c_cat[:, self.c_concat_log_start:self.c_concat_log_end])
-
- if plot_diffusion_rows:
- # get diffusion row
- diffusion_row = list()
- z_start = z[:n_row]
- for t in range(self.num_timesteps):
- if t % self.log_every_t == 0 or t == self.num_timesteps - 1:
- t = repeat(torch.tensor([t]), '1 -> b', b=n_row)
- t = t.to(self.device).long()
- noise = torch.randn_like(z_start)
- z_noisy = self.q_sample(x_start=z_start, t=t, noise=noise)
- diffusion_row.append(self.decode_first_stage(z_noisy))
-
- diffusion_row = torch.stack(diffusion_row) # n_log_step, n_row, C, H, W
- diffusion_grid = rearrange(diffusion_row, 'n b c h w -> b n c h w')
- diffusion_grid = rearrange(diffusion_grid, 'b n c h w -> (b n) c h w')
- diffusion_grid = make_grid(diffusion_grid, nrow=diffusion_row.shape[0])
- log["diffusion_row"] = diffusion_grid
-
- if sample:
- # get denoise row
- with ema_scope("Sampling"):
- samples, z_denoise_row = self.sample_log(cond={"c_concat": [c_cat], "c_crossattn": [c]},
- batch_size=N, ddim=use_ddim,
- ddim_steps=ddim_steps, eta=ddim_eta)
- # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True)
- x_samples = self.decode_first_stage(samples)
- log["samples"] = x_samples
- if plot_denoise_rows:
- denoise_grid = self._get_denoise_row_from_list(z_denoise_row)
- log["denoise_row"] = denoise_grid
-
- if unconditional_guidance_scale > 1.0:
- uc_cross = self.get_unconditional_conditioning(N, unconditional_guidance_label)
- uc_cat = c_cat
- uc_full = {"c_concat": [uc_cat], "c_crossattn": [uc_cross]}
- with ema_scope("Sampling with classifier-free guidance"):
- samples_cfg, _ = self.sample_log(cond={"c_concat": [c_cat], "c_crossattn": [c]},
- batch_size=N, ddim=use_ddim,
- ddim_steps=ddim_steps, eta=ddim_eta,
- unconditional_guidance_scale=unconditional_guidance_scale,
- unconditional_conditioning=uc_full,
- )
- x_samples_cfg = self.decode_first_stage(samples_cfg)
- log[f"samples_cfg_scale_{unconditional_guidance_scale:.2f}"] = x_samples_cfg
-
- return log
-
-
-class LatentInpaintDiffusion(LatentFinetuneDiffusion):
- """
- can either run as pure inpainting model (only concat mode) or with mixed conditionings,
- e.g. mask as concat and text via cross-attn.
- To disable finetuning mode, set finetune_keys to None
- """
-
- def __init__(self,
- concat_keys=("mask", "masked_image"),
- masked_image_key="masked_image",
- *args, **kwargs
- ):
- super().__init__(concat_keys, *args, **kwargs)
- self.masked_image_key = masked_image_key
- assert self.masked_image_key in concat_keys
-
- @torch.no_grad()
- def get_input(self, batch, k, cond_key=None, bs=None, return_first_stage_outputs=False):
- # note: restricted to non-trainable encoders currently
- assert not self.cond_stage_trainable, 'trainable cond stages not yet supported for inpainting'
- z, c, x, xrec, xc = super().get_input(batch, self.first_stage_key, return_first_stage_outputs=True,
- force_c_encode=True, return_original_cond=True, bs=bs)
-
- assert exists(self.concat_keys)
- c_cat = list()
- for ck in self.concat_keys:
- cc = rearrange(batch[ck], 'b h w c -> b c h w').to(memory_format=torch.contiguous_format).float()
- if bs is not None:
- cc = cc[:bs]
- cc = cc.to(self.device)
- bchw = z.shape
- if ck != self.masked_image_key:
- cc = torch.nn.functional.interpolate(cc, size=bchw[-2:])
- else:
- cc = self.get_first_stage_encoding(self.encode_first_stage(cc))
- c_cat.append(cc)
- c_cat = torch.cat(c_cat, dim=1)
- all_conds = {"c_concat": [c_cat], "c_crossattn": [c]}
- if return_first_stage_outputs:
- return z, all_conds, x, xrec, xc
- return z, all_conds
-
- @torch.no_grad()
- def log_images(self, *args, **kwargs):
- log = super(LatentInpaintDiffusion, self).log_images(*args, **kwargs)
- log["masked_image"] = rearrange(args[0]["masked_image"],
- 'b h w c -> b c h w').to(memory_format=torch.contiguous_format).float()
- return log
-
-
-class LatentDepth2ImageDiffusion(LatentFinetuneDiffusion):
- """
- condition on monocular depth estimation
- """
-
- def __init__(self, depth_stage_config, concat_keys=("midas_in",), *args, **kwargs):
- super().__init__(concat_keys=concat_keys, *args, **kwargs)
- self.depth_model = instantiate_from_config(depth_stage_config)
- self.depth_stage_key = concat_keys[0]
-
- @torch.no_grad()
- def get_input(self, batch, k, cond_key=None, bs=None, return_first_stage_outputs=False):
- # note: restricted to non-trainable encoders currently
- assert not self.cond_stage_trainable, 'trainable cond stages not yet supported for depth2img'
- z, c, x, xrec, xc = super().get_input(batch, self.first_stage_key, return_first_stage_outputs=True,
- force_c_encode=True, return_original_cond=True, bs=bs)
-
- assert exists(self.concat_keys)
- assert len(self.concat_keys) == 1
- c_cat = list()
- for ck in self.concat_keys:
- cc = batch[ck]
- if bs is not None:
- cc = cc[:bs]
- cc = cc.to(self.device)
- cc = self.depth_model(cc)
- cc = torch.nn.functional.interpolate(
- cc,
- size=z.shape[2:],
- mode="bicubic",
- align_corners=False,
- )
-
- depth_min, depth_max = torch.amin(cc, dim=[1, 2, 3], keepdim=True), torch.amax(cc, dim=[1, 2, 3],
- keepdim=True)
- cc = 2. * (cc - depth_min) / (depth_max - depth_min + 0.001) - 1.
- c_cat.append(cc)
- c_cat = torch.cat(c_cat, dim=1)
- all_conds = {"c_concat": [c_cat], "c_crossattn": [c]}
- if return_first_stage_outputs:
- return z, all_conds, x, xrec, xc
- return z, all_conds
-
- @torch.no_grad()
- def log_images(self, *args, **kwargs):
- log = super().log_images(*args, **kwargs)
- depth = self.depth_model(args[0][self.depth_stage_key])
- depth_min, depth_max = torch.amin(depth, dim=[1, 2, 3], keepdim=True), \
- torch.amax(depth, dim=[1, 2, 3], keepdim=True)
- log["depth"] = 2. * (depth - depth_min) / (depth_max - depth_min) - 1.
- return log
-
-
-class LatentUpscaleFinetuneDiffusion(LatentFinetuneDiffusion):
- """
- condition on low-res image (and optionally on some spatial noise augmentation)
- """
- def __init__(self, concat_keys=("lr",), reshuffle_patch_size=None,
- low_scale_config=None, low_scale_key=None, *args, **kwargs):
- super().__init__(concat_keys=concat_keys, *args, **kwargs)
- self.reshuffle_patch_size = reshuffle_patch_size
- self.low_scale_model = None
- if low_scale_config is not None:
- print("Initializing a low-scale model")
- assert exists(low_scale_key)
- self.instantiate_low_stage(low_scale_config)
- self.low_scale_key = low_scale_key
-
- def instantiate_low_stage(self, config):
- model = instantiate_from_config(config)
- self.low_scale_model = model.eval()
- self.low_scale_model.train = disabled_train
- for param in self.low_scale_model.parameters():
- param.requires_grad = False
-
- @torch.no_grad()
- def get_input(self, batch, k, cond_key=None, bs=None, return_first_stage_outputs=False):
- # note: restricted to non-trainable encoders currently
- assert not self.cond_stage_trainable, 'trainable cond stages not yet supported for upscaling-ft'
- z, c, x, xrec, xc = super().get_input(batch, self.first_stage_key, return_first_stage_outputs=True,
- force_c_encode=True, return_original_cond=True, bs=bs)
-
- assert exists(self.concat_keys)
- assert len(self.concat_keys) == 1
- # optionally make spatial noise_level here
- c_cat = list()
- noise_level = None
- for ck in self.concat_keys:
- cc = batch[ck]
- cc = rearrange(cc, 'b h w c -> b c h w')
- if exists(self.reshuffle_patch_size):
- assert isinstance(self.reshuffle_patch_size, int)
- cc = rearrange(cc, 'b c (p1 h) (p2 w) -> b (p1 p2 c) h w',
- p1=self.reshuffle_patch_size, p2=self.reshuffle_patch_size)
- if bs is not None:
- cc = cc[:bs]
- cc = cc.to(self.device)
- if exists(self.low_scale_model) and ck == self.low_scale_key:
- cc, noise_level = self.low_scale_model(cc)
- c_cat.append(cc)
- c_cat = torch.cat(c_cat, dim=1)
- if exists(noise_level):
- all_conds = {"c_concat": [c_cat], "c_crossattn": [c], "c_adm": noise_level}
- else:
- all_conds = {"c_concat": [c_cat], "c_crossattn": [c]}
- if return_first_stage_outputs:
- return z, all_conds, x, xrec, xc
- return z, all_conds
-
- @torch.no_grad()
- def log_images(self, *args, **kwargs):
- log = super().log_images(*args, **kwargs)
- log["lr"] = rearrange(args[0]["lr"], 'b h w c -> b c h w')
- return log
diff --git a/spaces/gfhayworth/chat_qa_demo2/azure_utils.py b/spaces/gfhayworth/chat_qa_demo2/azure_utils.py
deleted file mode 100644
index 4173eaa689abe9b7b6b66ed3fcf1ede591655a53..0000000000000000000000000000000000000000
--- a/spaces/gfhayworth/chat_qa_demo2/azure_utils.py
+++ /dev/null
@@ -1,155 +0,0 @@
-# This class stores Azure voice data. Specifically, the class stores several records containing
-# language, lang_code, gender, voice_id and engine. The class also has a method to return the
-# voice_id, lang_code and engine given a language and gender.
-
-NEURAL_ENGINE = "neural"
-STANDARD_ENGINE = "standard"
-
-
-class AzureVoiceData:
- def get_voice(self, language, gender):
- for voice in self.voice_data:
- if voice['language'] == language and voice['gender'] == gender:
- return voice['azure_voice']
- return None
-
- def __init__(self):
- self.voice_data = [
- {'language': 'Arabic',
- 'azure_voice': 'ar-EG-ShakirNeural',
- 'gender': 'Male'},
- {'language': 'Arabic (Gulf)',
- 'azure_voice': 'ar-KW-FahedNeural',
- 'gender': 'Male'},
- {'language': 'Catalan',
- 'azure_voice': 'ca-ES-EnricNeural',
- 'gender': 'Male'},
- {'language': 'Chinese (Cantonese)',
- 'azure_voice': 'yue-CN-YunSongNeural',
- 'gender': 'Male'},
- {'language': 'Chinese (Mandarin)',
- 'azure_voice': 'zh-CN-YunxiNeural',
- 'gender': 'Male'},
- {'language': 'Danish',
- 'azure_voice': 'da-DK-JeppeNeural',
- 'gender': 'Male'},
- {'language': 'Dutch',
- 'azure_voice': 'nl-NL-MaartenNeural',
- 'gender': 'Male'},
- {'language': 'English (Australian)',
- 'azure_voice': 'en-AU-KenNeural',
- 'gender': 'Male'},
- {'language': 'English (British)',
- 'azure_voice': 'en-GB-RyanNeural',
- 'gender': 'Male'},
- {'language': 'English (Indian)',
- 'azure_voice': 'en-IN-PrabhatNeural',
- 'gender': 'Male'},
- {'language': 'English (New Zealand)',
- 'azure_voice': 'en-NZ-MitchellNeural',
- 'gender': 'Male'},
- {'language': 'English (South African)',
- 'azure_voice': 'en-ZA-LukeNeural',
- 'gender': 'Male'},
- {'language': 'English (US)',
- 'azure_voice': 'en-US-ChristopherNeural',
- 'gender': 'Male'},
- {'language': 'English (Welsh)',
- 'azure_voice': 'cy-GB-AledNeural',
- 'gender': 'Male'},
- {'language': 'Finnish',
- 'azure_voice': 'fi-FI-HarriNeural',
- 'gender': 'Male'},
- {'language': 'French',
- 'azure_voice': 'fr-FR-HenriNeural',
- 'gender': 'Male'},
- {'language': 'French (Canadian)',
- 'azure_voice': 'fr-CA-AntoineNeural',
- 'gender': 'Male'},
- {'language': 'German',
- 'azure_voice': 'de-DE-KlausNeural',
- 'gender': 'Male'},
- {'language': 'German (Austrian)',
- 'azure_voice': 'de-AT-JonasNeural',
- 'gender': 'Male'},
- {'language': 'Hindi',
- 'azure_voice': 'hi-IN-MadhurNeural',
- 'gender': 'Male'},
- {'language': 'Icelandic',
- 'azure_voice': 'is-IS-GunnarNeural',
- 'gender': 'Male'},
- {'language': 'Italian',
- 'azure_voice': 'it-IT-GianniNeural',
- 'gender': 'Male'},
- {'language': 'Japanese',
- 'azure_voice': 'ja-JP-KeitaNeural',
- 'gender': 'Male'},
- {'language': 'Korean',
- 'azure_voice': 'ko-KR-GookMinNeural',
- 'gender': 'Male'},
- {'language': 'Norwegian',
- 'azure_voice': 'nb-NO-FinnNeural',
- 'gender': 'Male'},
- {'language': 'Polish',
- 'azure_voice': 'pl-PL-MarekNeural',
- 'gender': 'Male'},
- {'language': 'Portuguese (Brazilian)',
- 'azure_voice': 'pt-BR-NicolauNeural',
- 'gender': 'Male'},
- {'language': 'Portuguese (European)',
- 'azure_voice': 'pt-PT-DuarteNeural',
- 'gender': 'Male'},
- {'language': 'Romanian',
- 'azure_voice': 'ro-RO-EmilNeural',
- 'gender': 'Male'},
- {'language': 'Russian',
- 'azure_voice': 'ru-RU-DmitryNeural',
- 'gender': 'Male'},
- {'language': 'Spanish (European)',
- 'azure_voice': 'es-ES-TeoNeural',
- 'gender': 'Male'},
- {'language': 'Spanish (Mexican)',
- 'azure_voice': 'es-MX-LibertoNeural',
- 'gender': 'Male'},
- {'language': 'Spanish (US)',
- 'azure_voice': 'es-US-AlonsoNeural"',
- 'gender': 'Male'},
- {'language': 'Swedish',
- 'azure_voice': 'sv-SE-MattiasNeural',
- 'gender': 'Male'},
- {'language': 'Turkish',
- 'azure_voice': 'tr-TR-AhmetNeural',
- 'gender': 'Male'},
- {'language': 'Welsh',
- 'azure_voice': 'cy-GB-AledNeural',
- 'gender': 'Male'},
- ]
-
-
-# Run from the command-line
-if __name__ == '__main__':
- azure_voice_data = AzureVoiceData()
-
- azure_voice = azure_voice_data.get_voice('English (US)', 'Male')
- print('English (US)', 'Male', azure_voice)
-
- azure_voice = azure_voice_data.get_voice('English (US)', 'Female')
- print('English (US)', 'Female', azure_voice)
-
- azure_voice = azure_voice_data.get_voice('French', 'Female')
- print('French', 'Female', azure_voice)
-
- azure_voice = azure_voice_data.get_voice('French', 'Male')
- print('French', 'Male', azure_voice)
-
- azure_voice = azure_voice_data.get_voice('Japanese', 'Female')
- print('Japanese', 'Female', azure_voice)
-
- azure_voice = azure_voice_data.get_voice('Japanese', 'Male')
- print('Japanese', 'Male', azure_voice)
-
- azure_voice = azure_voice_data.get_voice('Hindi', 'Female')
- print('Hindi', 'Female', azure_voice)
-
- azure_voice = azure_voice_data.get_voice('Hindi', 'Male')
- print('Hindi', 'Male', azure_voice)
diff --git a/spaces/godot-demo/godot-2d-threads/index.html b/spaces/godot-demo/godot-2d-threads/index.html
deleted file mode 100644
index efb2a1f785a0ade51d7abe55e7f9a3d9e12f9bf8..0000000000000000000000000000000000000000
--- a/spaces/godot-demo/godot-2d-threads/index.html
+++ /dev/null
@@ -1,247 +0,0 @@
-
-
-
-
-
- Download Zip ———>>> https://urlin.us/2uEvJk
DOWNLOAD ✦✦✦ https://urlin.us/2uEwB6
Kong: Skull Island is a 2017 action-adventure film that is a reboot of the King Kong franchise. It is the second film in the MonsterVerse, following Godzilla (2014). The film follows a team of scientists and soldiers who explore an uncharted island in the Pacific, where they encounter the giant ape Kong and other monstrous creatures. The film stars Tom Hiddleston, Samuel L. Jackson, Brie Larson, John Goodman, and John C. Reilly.
-If you are a fan of King Kong or monster movies, you should watch Kong: Skull Island (English) dual audio English Hindi. Here are some of the reasons why:
-Download Zip ✵ https://urlin.us/2uEyC4
Downloading Kong: Skull Island (English) dual audio English Hindi is easy and fast. Here are the steps you need to follow:
-Note: You may need to update your drivers or install some patches to run the film smoothly. You may also need to select your preferred audio track from the settings menu.
-Kong: Skull Island (English) dual audio English Hindi is a great film that offers you a fun and exciting adventure. It has amazing visuals, a gripping story, a stellar cast, and a dual audio option. If you want to download Kong: Skull Island (English) dual audio English Hindi, just follow the steps above and start your journey to Skull Island. You won't regret it!
-Kong: Skull Island (English) dual audio English Hindi is a high-quality version of the film that offers you many features that enhance your viewing experience. Here are some of them:
-Downloading Kong: Skull Island (English) dual audio English Hindi has many advantages that you cannot get from other sources. Here are some of them:
-Kong: Skull Island (English) dual audio English Hindi is a great version of the film that offers you a fun and exciting adventure. It has amazing features, advantages, and options that enhance your viewing experience. If you want to download Kong: Skull Island (English) dual audio English Hindi, just follow the steps above and start your journey to Skull Island. You won't regret it!
-Downloading Kong: Skull Island (English) dual audio English Hindi may seem easy and convenient, but it also comes with some challenges that you should be aware of. Here are some of them:
-Downloading Kong: Skull Island (English) dual audio English Hindi is not impossible or hopeless. There are ways to overcome the challenges that you may face while downloading the film. Here are some of them:
- -Kong: Skull Island (English) dual audio English Hindi is a great version of the film that offers you a fun and exciting adventure. It has amazing features, advantages, and options that enhance your viewing experience. However, it also has some challenges that you may face while downloading it. If you want to download Kong: Skull Island (English) dual audio English Hindi, just follow the steps above and overcome these challenges. You will surely enjoy your journey to Skull Island!
-Watching Kong: Skull Island (English) dual audio English Hindi is not only entertaining, but also beneficial for you. Here are some of the benefits that you can get from watching the film:
-Watching Kong: Skull Island (English) dual audio English Hindi is easy and fun, but there are some tips that you can follow to make your viewing experience even better. Here are some of them:
-Kong: Skull Island (English) dual audio English Hindi is a great version of the film that offers you a fun and exciting adventure. It has amazing features, advantages, options, benefits, and tips that enhance your viewing experience. If you want to watch Kong: Skull Island (English) dual audio English Hindi, just follow the steps above and enjoy your journey to Skull Island!
-Kong: Skull Island (English) dual audio English Hindi is a great version of the film that offers you a fun and exciting adventure. It has amazing features, advantages, options, benefits, and tips that enhance your viewing experience. If you want to watch Kong: Skull Island (English) dual audio English Hindi, just follow the steps above and enjoy your journey to Skull Island!
3cee63e6c2Download ⭐ https://urlin.us/2uExY6
Download Zip » https://tiurll.com/2uCksz
insecticides, such as insect repellents and traps, can be linked to health hazards. using repellents near children and pets is a significant health risk. some repellents, such as the "back bite" and "back kick" products, contain toxic pesticides. these products are generally not recommended. in addition, some insect repellents containing toxic pesticides have been removed from the market, including those that have been linked to neurological diseases.
-even if insecticides are used as directed, untreated areas of lawns may become heavily infested. avoid purchasing lawn care products from garden center vendors who do not provide epa-certified site assessments and who do not monitor pesticide use, if possible. contact your local public health department for help in identifying the most toxic chemicals in your garden and other yard activities.
-Download File »»» https://tiurll.com/2uClDY
your local public health department can provide a referral for lawn care services that are epa-certified. the american academy of pediatrics (aap) recommends that children and pregnant women be completely protected from lawn care applications to the arms and legs because of the potential health risks from these exposures. epa-certified guidelines limit how much a child may be exposed to the chemicals.
-consult your local building inspection department to determine which pesticides have been registered for use indoors. contact your local public health department for information about chemical pesticide use in lawns.
-if you have children, consult your pediatrician to determine whether they should avoid being exposed to toxic lawn chemicals. the health effects of pesticide exposure in children are poorly understood.
899543212bDino Time is a 2012 animated film that tells the story of three kids who travel back in time to the dinosaur era and meet a friendly T-rex named Tyra and her son Dodger. The film is directed by Yoon-suk Choi and John Kafka and features the voices of Rob Schneider, Melanie Griffith, Pamela Adlon, Jane Lynch, and Tara Strong.
- -The film was originally released in South Korea in 2012 and later dubbed in Hindi for the Indian audience. The film is available to watch online or download in Hindi on various platforms, such as YouTube, Dead Toons India, and Cartoon Network India. The film is also known as Back to the Jurassic or Dino King in other countries.
-Download Zip ✑ ✑ ✑ https://tiurll.com/2uClOC
The film begins with Ernie (Rob Schneider), a rebellious kid who loves dinosaurs and hates his mother Sue (Melanie Griffith), who is a paleontologist. Ernie sneaks into his mother's museum with his best friend Max (Pamela Adlon) and his sister Julia (Tara Strong) and finds a mysterious device that can transport them back in time.
- -Ernie activates the device and accidentally sends himself, Max, Julia, and a dinosaur egg back to the Cretaceous period. There, they meet Tyra (Jane Lynch), a motherly T-rex who thinks that Ernie is her son Dodger (Yuri Lowenthal), who was separated from her during an earthquake. Tyra adopts Ernie as her son and protects him from other dinosaurs.
- -Meanwhile, Dodger meets Sue, who has followed Ernie back in time using another device. Sue tries to find Ernie and bring him back to the present, but faces many dangers along the way. She also learns to appreciate Ernie's love for dinosaurs and understand his feelings.
- -Ernie, Max, Julia, and Dodger have many adventures in the dinosaur world, such as escaping from a pack of raptors, riding on a pteranodon, befriending a triceratops, and witnessing a volcanic eruption. They also learn to work together as a team and care for each other as a family.
- -The film ends with Ernie, Max, Julia, Dodger, and Sue returning to the present with the help of Tyra, who sacrifices herself to save them from a meteor strike. Ernie and Sue reconcile their differences and hug each other. Ernie also keeps the dinosaur egg as a souvenir and names it Tyra Jr.
- -Dino Time Hindi is a fun-filled adventure film that appeals to kids and adults alike with its colorful animation, humorous dialogues, thrilling action scenes, and heartwarming message. The film does not have a complex plot or logic, but relies on the charm and chemistry of the characters to entertain the audience.
- - -The film has some flaws, such as the cliched portrayal of dinosaurs, the weak characterization of the villains, the cheesy dialogues, -and the predictable twists. The film also has some scenes that are violent or scary for younger viewers, -such as the dinosaur attacks, -the volcanic eruption, -and the meteor strike.
- -The film's strengths are its animation, -voice acting, -music, -and theme. -The film has some impressive animation -that captures the beauty -and diversity -of the dinosaur world. -The film also has some expressive -and lively -voice acting -by Rob Schneider, -Melanie Griffith, -Pamela Adlon, -Jane Lynch, -and Tara Strong, -who bring their characters -to life. -The film also has some catchy songs -composed by Stephen Barton -and Chris Ridenhour, -such as "Dino Time", -"Back to Life", -and "We're Family". -The film also has a positive theme -of family, -friendship, -and adventure, -that inspires -and touches -the audience.
- -Dino Time Hindi is a fun-filled adventure film that appeals to kids and adults alike with its colorful animation, humorous dialogues, thrilling action scenes, and heartwarming message. The film is not meant to be taken seriously or critically analyzed, but enjoyed as a popcorn flick that celebrates dinosaurs, family, and friendship. The film is a perfect watch for anyone who loves dinosaurs or animation.
- -If you are interested in watching Dino Time Hindi online or download it in Hindi, -you can find it on various platforms, -such as YouTube, -Dead Toons India, -and Cartoon Network India. -You can also find reviews of the film on various websites, -such as IMDb, -Rotten Tomatoes, -Bollywood Hungama, -Times of India, -and Hindustan Times. -You can also write your own review of the film -and share your thoughts -and feelings -with others.
- -Dino Time Hindi is a film that will make you laugh, -cry, -and roar. -Watch it today -and enjoy the dino time -with Ernie, -Max, -Julia, -Dodger, -and Tyra.
-If you want to watch Dino Time Hindi offline or save it on your device, you might be looking for ways to download it for free. However, you should be careful about downloading movies from unauthorized sources, as it is illegal and unethical. Moreover, you might risk exposing your device to malware or viruses by visiting such websites.
- -The best way to download Dino Time Hindi for free is to use a trusted and legal platform that offers free downloads or streaming of movies. Some of these platforms are YouTube, Dead Toons India, and Cartoon Network India. These platforms have the official rights to distribute Dino Time Hindi and offer high-quality downloads or streaming of the movie.
- -To download Dino Time Hindi from YouTube, you need to follow these steps:
- -To download Dino Time Hindi from Dead Toons India, you need to follow these steps:
- -To download Dino Time Hindi from Cartoon Network India, you need to follow these steps:
- -By using these platforms, you can download Dino Time Hindi for free and watch it anytime you want. However, you should also respect the rights of the filmmakers and actors and avoid sharing or distributing the movie without their permission. You should also support them by buying or renting the DVD or Blu-ray of the movie from a trusted source.
-Dino Time Hindi is a movie that will take you on a journey to the past and make you experience the wonders and dangers of the dinosaur world. The movie is a blend of comedy, action, adventure, and drama that will keep you entertained and engaged throughout. The movie is suitable for kids and adults who love dinosaurs or animation.
- -The movie has some elements that you can expect from Dino Time Hindi, such as:
- -The movie also has some surprises and twists that you might not expect from Dino Time Hindi, such as:
- -Dino Time Hindi is a movie that will make you laugh, cry, and roar. The movie has a lot of fun and excitement that will appeal to your senses and emotions. The movie also has a lot of heart and message that will inspire and touch you. The movie is a must-watch for anyone who loves dinosaurs or animation.
-Dino Time Hindi is a movie that you can enjoy with your family and friends, as it has something for everyone. The movie is a fun-filled adventure that will make you laugh, cry, and roar. The movie is suitable for kids and adults who love dinosaurs or animation.
- -There are many ways to enjoy Dino Time Hindi with your family and friends, such as:
- -By enjoying Dino Time Hindi with your family and friends, you can have a memorable and fun time that will strengthen your bond and create lasting memories. You can also learn more about dinosaurs and appreciate their beauty and diversity. You can also discover more about yourself and others by relating to the characters and their emotions.
- -Dino Time Hindi is a fun-filled adventure film that appeals to kids and adults alike with its colorful animation, humorous dialogues, thrilling action scenes, and heartwarming message. The film is not meant to be taken seriously or critically analyzed, but enjoyed as a popcorn flick that celebrates dinosaurs, family, and friendship. The film is a perfect watch for anyone who loves dinosaurs or animation.
- -If you are interested in watching Dino Time Hindi online or download it in Hindi, -you can find it on various platforms, -such as YouTube, -Dead Toons India, -and Cartoon Network India. -You can also find reviews of the film on various websites, -such as IMDb, -Rotten Tomatoes, -Bollywood Hungama, -Times of India, -and Hindustan Times. -You can also write your own review of the film -and share your thoughts -and feelings -with others.
- -You can also enjoy Dino Time Hindi with your family and friends -by watching it together, -playing games or quizzes, -making crafts or drawings, -sharing opinions and feedback, -or recommending it to others. -You can have a memorable and fun time -that will strengthen your bond -and create lasting memories. -You can also learn more about dinosaurs -and appreciate their beauty -and diversity. -You can also discover more about yourself -and others -by relating to the characters -and their emotions.
- -Dino Time Hindi is a film that will make you laugh, -cry, -and roar. -Watch it today -and enjoy the dino time -with Ernie, -Max, -Julia, -Dodger, -and Tyra.
-Dino Time Hindi is a fun-filled adventure film that appeals to kids and adults alike with its colorful animation, humorous dialogues, thrilling action scenes, and heartwarming message. The film is not meant to be taken seriously or critically analyzed, but enjoyed as a popcorn flick that celebrates dinosaurs, family, and friendship. The film is a perfect watch for anyone who loves dinosaurs or animation.
- -If you are interested in watching Dino Time Hindi online or download it in Hindi, -you can find it on various platforms, -such as YouTube, -Dead Toons India, -and Cartoon Network India. -You can also find reviews of the film on various websites, -such as IMDb, -Rotten Tomatoes, -Bollywood Hungama, -Times of India, -and Hindustan Times. -You can also write your own review of the film -and share your thoughts -and feelings -with others.
- -You can also enjoy Dino Time Hindi with your family and friends -by watching it together, -playing games or quizzes, -making crafts or drawings, -sharing opinions and feedback, -or recommending it to others. -You can have a memorable and fun time -that will strengthen your bond -and create lasting memories. -You can also learn more about dinosaurs -and appreciate their beauty -and diversity. -You can also discover more about yourself -and others -by relating to the characters -and their emotions.
- -Dino Time Hindi is a film that will make you laugh, -cry, -and roar. -Watch it today -and enjoy the dino time -with Ernie, -Max, -Julia, -Dodger, -and Tyra.
3cee63e6c2-
Fine-tuning XLS-R for Multi-Lingual ASR with 🤗 Transformers
Assassins Creed Syndicate is the ninth installment in the popular action-adventure franchise that takes players to the Victorian era London. The game follows the story of two twin assassins, Jacob and Evie Frye, who lead a gang of rebels against the corrupt Templars who control the city. The game features a vast open world with many historical landmarks, characters and events, as well as a dynamic combat system, stealth mechanics and a variety of weapons and gadgets.
-Download File https://bytlly.com/2uGxMj
The PC version of Assassins Creed Syndicate is available for download from various sources, including ^^nosTEAM^^ and SKIDROW. These are two well-known groups that provide cracked games for free. However, downloading games from these sources may come with some risks and drawbacks, such as malware, viruses, bugs, glitches, missing files, outdated patches and poor performance. Therefore, it is advisable to always scan your files before installing them and to backup your data regularly.
-If you want to enjoy Assassins Creed Syndicate PC full game ^^nosTEAM^^ SKIDROW without any problems, you may need to follow some steps and requirements. First of all, you need to have a decent PC that meets the minimum or recommended system specifications for the game. You can check them on the official website or on Steam. Secondly, you need to have enough free space on your hard drive to install the game and its updates. The game size is about 37.8 GB[^1^]. Thirdly, you need to have a stable internet connection to download the game files and to access some online features of the game, such as Uplay rewards and multiplayer modes.
-Once you have downloaded the game files from ^^nosTEAM^^ or SKIDROW, you need to extract them using a program like WinRAR or 7-Zip. Then, you need to run the setup.exe file and follow the instructions to install the game on your PC. You may also need to install some additional software, such as DirectX, Visual C++ or PhysX. After that, you can launch the game from the desktop shortcut or from the game folder. You may also need to apply some cracks or patches to make the game work properly.
-Assassins Creed Syndicate PC full game ^^nosTEAM^^ SKIDROW includes all the DLCs and extra content available for the game, such as The Last Maharaja, Dreadful Crimes, Jack The Ripper, The Darwin and Dickenâs Conspiracy, Runaway Train and Gold Edition Content[^1^]. It also comes with a soundtrack in mp3 format and an optional Uplay rewards unlocker[^1^]. However, some features of the game may not work correctly or at all, such as cloud saves, achievements, leaderboards and online co-op.
- -In conclusion, Assassins Creed Syndicate PC full game ^^nosTEAM^^ SKIDROW is a great way to experience one of the best games in the Assassins Creed series for free. However, it also comes with some risks and limitations that may affect your enjoyment of the game. Therefore, it is recommended to always support the developers and buy the original game if you can afford it.
d5da3c52bfDownload — https://bytlly.com/2uGwtg
Download ->->->-> https://bytlly.com/2uGyaI