diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Anno 2070 Deep Ocean [PCDVD Crack][Multi6] (2012) CODEX The Ultimate Review of the Award-Winning Simulation Game.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Anno 2070 Deep Ocean [PCDVD Crack][Multi6] (2012) CODEX The Ultimate Review of the Award-Winning Simulation Game.md deleted file mode 100644 index de19ecc4e48d277c04c879d72cbc9fb75fc3475d..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Anno 2070 Deep Ocean [PCDVD Crack][Multi6] (2012) CODEX The Ultimate Review of the Award-Winning Simulation Game.md +++ /dev/null @@ -1,145 +0,0 @@ - -

Anno 2070 Deep Ocean: A Review of the Expansion Pack

-

Anno 2070, then the latest entry in Ubisoft's long-running real-time strategy series, received a major expansion pack in 2012, titled Deep Ocean. This add-on brings a new civilization level, new production chains and resources, new buildings and vehicles, new challenges and quests, and many other features and improvements to the game. In this article, we will review what Anno 2070 Deep Ocean has to offer, and how to install it on your PC.

-

What is Anno 2070 Deep Ocean?

-

Anno 2070 Deep Ocean is the expansion to Anno 2070, which was released in 2011. It is set in the year 2070, when global warming has melted the ice caps and raised the sea level, forcing humanity to adapt to the new conditions. The game features three factions: the Ecos, who are environmentally friendly and use renewable energy sources; the Tycoons, who are industrial and use fossil fuels; and the Techs, who are scientific and use advanced technology. The player can choose to ally with one or more factions, and build their own civilization on various islands and underwater plateaus.

-




-

The new civilization level: the Geniuses

-

For the first time in the history of the Anno series, an add-on brings a new civilization level: the Tech faction is expanded by the Genius population class. These are highly intelligent and innovative people who require neuroimplants, immunity drugs, laboratory instruments, and bionic suits to satisfy their needs. To produce these goods, new fertilities have been added to the underwater islands, such as coral, sponges, lithium, platinum, and enzymes. The Geniuses also unlock access to the Tech monument: the Science Forum, which opens up all building restrictions on the island and gives special tasks from F.A.T.H.E.R. 2.0, the artificial intelligence that guides the Techs.

-

The new production chains and resources

-

Anno 2070 Deep Ocean adds several new production chains and resources to the game, especially for the underwater islands. Some of them are:

- -

The new buildings and vehicles

-

Anno 2070 Deep Ocean also adds over 50 new buildings and vehicles to the game, some of which are:

- -

What are the benefits of playing Anno 2070 Deep Ocean?

-

Anno 2070 Deep Ocean not only adds more content to the game but also enhances its gameplay experience in various ways. Some of them are:

-

The new challenges and quests

-

The expansion pack introduces a new campaign mode that consists of six missions that follow the story of F.A.T.H.E.R.'s evolution. It also adds several new scenarios that test the player's skills in different situations. Moreover, it adds more random events and disasters that affect both land and sea, such as tsunamis, oil spills, meteor showers, etc.

-

The new features and improvements

-

The expansion pack also brings many new features and improvements to the game's mechanics and interface. Some of them are:

- -

The new graphics and sound effects

-

Anno 2070 Deep Ocean also improves the game's graphics and sound effects by adding more details and variety to its environments and animations. Some of them are:

- -

How to install Anno 2070 Deep Ocean [PCDVD Crack][Multi6] (2012) CODEX?

-

If you want to play Anno 2070 Deep Ocean on your PC, you need to have Anno 2070 installed first. Then you need to download Anno 2070 Deep Ocean [PCDVD Crack][Multi6] (2012) CODEX from a reliable source such as Steam or Ubisoft Store. Here are some steps to guide you through the installation process:

-


-

The system requirements

-

Before you download Anno 2070 Deep Ocean [PCDVD Crack][Multi6] (2012) CODEX, you need to make sure that your PC meets the minimum system requirements for running it smoothly. These are:

| | Minimum System Requirements | Recommended System Requirements |
| --- | --- | --- |
| OS | Windows XP / Windows Vista / Windows 7 | Windows XP / Windows Vista / Windows 7 |
| Processor | Core 2 Duo E4400 @ 2.0 GHz or AMD Athlon64 X2 3800+ @ 2.0 GHz | Intel Core 2 Duo E6700 @ 2.6 GHz or AMD Athlon64 X2 6000+ @ 3.0 GHz or better |
| Memory | 2 GB RAM | 4 GB RAM |
| Graphics | 512 MB DirectX 9.0c-compatible with Shader Model 3.0 or higher (see supported list)* | 512 MB DirectX 9.0c-compatible with Shader Model 3.0 or higher (see supported list)* |
| Hard Drive | 5 GB HD space | 5 GB HD space |
| Sound | DirectX 9.0c-compliant | DirectX 9.0c-compliant |
-

*Supported Video Cards at Time of Release: AMD Radeon™ HD2600XT or better / 3000 / 4000 / 5000 / 6000 desktop series; NVIDIA® GeForce® 8600GTS or better / 9 / GT200 / GT400 / GT500 desktop series. Laptop versions of these cards may work but are NOT supported. These chipsets are the only ones that will run this game.

-

The download and installation steps

-

Once you have checked your system requirements, you can proceed to download Anno 2070 Deep Ocean [PCDVD Crack][Multi6] (2012) CODEX from your preferred source. Here are some steps to follow:

-
    -
  1. Download the file Anno_2070_Deep_Ocean_[PCDVD_Crack][Multi6]_(2012)_CODEX.rar from the link provided by the source.
  2. -
  3. Extract the file using a program such as WinRAR or 7-Zip.
  4. -
  5. Mount or burn the image Anno_2070_Deep_Ocean_[PCDVD_Crack][Multi6]_(2012)_CODEX.iso using a program such as Daemon Tools or PowerISO.
  6. -
  7. Run the setup.exe file and follow the instructions to install the game.
  8. -
  9. Copy the contents of the folder CODEX to the installation folder of Anno 2070.
  10. -
  11. Run the game from the desktop shortcut or the launcher.exe file in the installation folder.
  12. -
  13. Enjoy playing Anno 2070 Deep Ocean!
  14. -
-

The troubleshooting tips

-

If you encounter any problems while installing or playing Anno 2070 Deep Ocean [PCDVD Crack][Multi6] (2012) CODEX, here are some tips to help you fix them:

- -

Conclusion

-

Anno 2070 Deep Ocean is a great expansion pack for Anno 2070 that adds a lot of new content and features to the game. It allows you to explore and exploit the underwater world, build and manage a new civilization level, face new challenges and quests, and enjoy improved graphics and sound effects. If you are a fan of real-time strategy games and futuristic scenarios, you should definitely give Anno 2070 Deep Ocean a try. You can download it from Steam or Ubisoft Store, or use Anno 2070 Deep Ocean [PCDVD Crack][Multi6] (2012) CODEX to install it on your PC.

-

FAQs

-

Here are some frequently asked questions about Anno 2070 Deep Ocean:

- -

-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Direct Tax Laws Tn Manoharan Pdf REPACK Download.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Direct Tax Laws Tn Manoharan Pdf REPACK Download.md deleted file mode 100644 index f32f3cfa9aeae4a7710ca7d9c2eacd60e8b0d0fb..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Direct Tax Laws Tn Manoharan Pdf REPACK Download.md +++ /dev/null @@ -1,30 +0,0 @@ - -

How to Download Direct Tax Laws by TN Manoharan PDF for CA Final Exams

-

Direct Tax Laws by TN Manoharan is one of the most popular and comprehensive books for CA Final students who are preparing for the Direct Tax and International Taxation paper. The book covers the latest syllabus and amendments as per the Finance Act 2022 and provides numerous practical problems, case studies, illustrations and MCQs for practice.

-

If you are looking for a reliable source to download Direct Tax Laws by TN Manoharan PDF for free, you may be disappointed to know that there is no official or legal way to do so. The book is protected by copyright laws and any unauthorized distribution or reproduction of it is a violation of the intellectual property rights of the author and the publisher.

-




-

However, there are some alternative ways to access the book online without downloading it. Here are some of them:

- -

We hope this article helps you find the best way to access Direct Tax Laws by TN Manoharan PDF for your CA Final exams. Remember, reading the book is not enough; you also need to practice and revise the concepts regularly. All the best!

- -

Why Direct Tax Laws by TN Manoharan is a Must-Read for CA Final Students

-

Direct Tax Laws by TN Manoharan is a must-read for CA Final students because it covers the entire syllabus of Direct Tax and International Taxation in a lucid and comprehensive manner. The book is written by an eminent author and a former president of the Institute of Chartered Accountants of India (ICAI), who has vast experience and expertise in the field of taxation. The book is updated with the latest amendments and notifications as per the Finance Act 2022 and the Income Tax Act 1961.

-

The book is divided into two volumes: Volume I deals with Direct Tax Laws and Volume II deals with International Taxation. The book follows a systematic and logical approach to explain the concepts and provisions of the tax laws. The book also provides numerous examples, illustrations, case laws, MCQs and practical problems to help the students understand and apply the tax laws in various situations. The book also contains previous year question papers and suggested answers for reference and revision.

- -

How to Study Direct Tax Laws by TN Manoharan Effectively for CA Final Exams

-

Studying Direct Tax Laws by TN Manoharan effectively for CA Final exams requires proper planning and strategy. Here are some tips to help you study the book efficiently:

-

- -

By following these tips, you can study Direct Tax Laws by TN Manoharan effectively for CA Final exams and score high marks in the paper.

-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Eyeon Fusion 6.4 Crack Portable !!LINK!!.md b/spaces/1gistliPinn/ChatGPT4/Examples/Eyeon Fusion 6.4 Crack Portable !!LINK!!.md deleted file mode 100644 index a5802c2964b00acebee9e3c5b52ec41a908496a7..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Eyeon Fusion 6.4 Crack Portable !!LINK!!.md +++ /dev/null @@ -1,73 +0,0 @@ -
-

Eyeon Fusion 6.4 Crack Portable: A Review

-

Eyeon Fusion 6.4 is a powerful and versatile software for creating stunning visual effects and motion graphics. It is used by professionals and enthusiasts alike for various projects such as films, commercials, games, and more. However, the software is not cheap and requires a license to use. If you want to try Eyeon Fusion 6.4 without paying for it, you might be tempted to look for a cracked version of the software that can run on any Windows device without activation. This is what Eyeon Fusion 6.4 crack portable claims to offer.

-

In this article, we will review some of the features and benefits of Eyeon Fusion 6.4 crack portable and why you should or should not use it.

-




-

What is Eyeon Fusion 6.4 crack portable?

-

Eyeon Fusion 6.4 crack portable is a software that claims to be a cracked version of Eyeon Fusion 6.4 that can run on any Windows device without activation. It is supposed to have all the features and functions of the original software, such as:

- -

How to download and install Eyeon Fusion 6.4 crack portable?

-

To download and install Eyeon Fusion 6.4 crack portable, you need to find a reliable source that offers the cracked version of the software. There are many websites that claim to provide this service, but most of them are fake or malicious. They might contain viruses, malware, spyware, or adware that can harm your device or steal your personal information. They might also require you to complete surveys, download additional software, or enter your credit card details before giving you access to the download link.

-

Therefore, you need to be very careful and cautious when looking for Eyeon Fusion 6.4 crack portable online. You should always scan the files with a reputable antivirus program before opening them. You should also avoid clicking on suspicious links or pop-ups that might redirect you to malicious websites or download unwanted software.

-

Here are some steps to download and install Eyeon Fusion 6.4 crack portable safely:

-
    -
  1. Go to a trusted website that offers Eyeon Fusion 6.4 crack portable for free.
  2. -
  3. Click on the download link and save the file to your device.
  4. -
  5. Extract the file using a program such as WinRAR or 7-Zip.
  6. -
  7. Run the executable file (EyeonFusion.exe) from the extracted folder.
  8. -
  9. Enjoy using Eyeon Fusion 6.4 crack portable without activation.
  10. -
-

What are the pros and cons of Eyeon Fusion 6.4 crack portable?

-

Eyeon Fusion 6.4 crack portable has some pros and cons that you should consider before using it. Here are some of them:

| Pros | Cons |
| --- | --- |
| Free: You can use Eyeon Fusion 6.4 without paying for it. | Illegal: You are violating the terms and conditions of the original software by using a cracked version of it. |
| Portable: You can run Eyeon Fusion 6.4 from any removable device without installing it on your computer. | Unstable: You might encounter bugs, errors, crashes, or compatibility issues when using Eyeon Fusion 6.4 crack portable. |
| Feature-rich: You can access all the features and functions of Eyeon Fusion 6.4 as if you were using the original software. | Unsafe: You might expose your device or personal information to viruses, malware, spyware, or adware when downloading or using Eyeon Fusion 6.4 crack portable. |
-

-

How to use Eyeon Fusion 6.4 crack portable?

-

Eyeon Fusion 6.4 crack portable is a software that claims to be a cracked version of Eyeon Fusion 6.4 that can run on any Windows device without activation. It is supposed to have all the features and functions of the original software, such as a node-based interface, a wide range of tools and plugins, a fast and high-quality rendering engine, a flexible and customizable workflow, and a comprehensive documentation and tutorial system.

-

To use Eyeon Fusion 6.4 crack portable, you need to download and install it on your device as explained in the previous section. Then, you can run the software from the extracted folder and start creating your visual effects and motion graphics projects.

-

Using Eyeon Fusion 6.4 crack portable is very easy and intuitive. Here are some steps to get you started:

-
    -
  1. Run the executable file (EyeonFusion.exe) from the extracted folder.
  2. -
  3. Choose a project template or create a new project from scratch.
  4. -
  5. Add nodes to your flow by dragging them from the toolbar or using the right-click menu.
  6. -
  7. Connect nodes by dragging their output to another node's input.
  8. -
  9. Edit node properties by double-clicking on them or using the inspector panel.
  10. -
  11. Preview your results by clicking on the viewer button or pressing F4.
  12. -
  13. Render your project by clicking on the render button or pressing F5.
  14. -
  15. Save your project by clicking on the save button or pressing Ctrl+S.
  16. -
-

What are some tips and tricks for using Eyeon Fusion 6.4 crack portable?

-

Eyeon Fusion 6.4 crack portable is a powerful and versatile software that can help you create stunning visual effects and motion graphics. However, it also has some tips and tricks that can help you improve your workflow and results. Here are some of them:

- -

Summary

-

Eyeon Fusion 6.4 crack portable is a software that claims to be a cracked version of Eyeon Fusion 6.4 that can run on any Windows device without activation. It is supposed to have all the features and benefits of the original software, such as a node-based interface, a wide range of tools and plugins, a fast and high-quality rendering engine, a flexible and customizable workflow, and a comprehensive documentation and tutorial system.

-

However, Eyeon Fusion 6.4 crack portable also has some drawbacks that you should consider before using it, such as being illegal, unstable, and unsafe. You might violate the terms and conditions of the original software by using a cracked version of it. You might encounter bugs, errors, crashes, or compatibility issues when using Eyeon Fusion 6.4 crack portable. You might expose your device or personal information to viruses, malware, spyware, or adware when downloading or using Eyeon Fusion 6.4 crack portable.

-

If you want to try Eyeon Fusion 6.4 without paying for it, you might be tempted to look for Eyeon Fusion 6.4 crack portable online. However, you need to be very careful and cautious when looking for it online as most websites that offer it are fake or malicious. You need to scan the files with a reputable antivirus program before opening them and avoid clicking on suspicious links or pop-ups that might redirect you to malicious websites or download unwanted software.

-

If you want to use Eyeon Fusion 6.4 legally and safely, you should buy a license from the official website or any other authorized source and use it on your computer with activation.

-

-
-
\ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download GTA 5 Mobile Grand Theft Auto and Experience the Thrill of Action and Crime.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download GTA 5 Mobile Grand Theft Auto and Experience the Thrill of Action and Crime.md deleted file mode 100644 index 741131249f5904bfc8d051b47f1d7b0e69a5fd08..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download GTA 5 Mobile Grand Theft Auto and Experience the Thrill of Action and Crime.md +++ /dev/null @@ -1,105 +0,0 @@ - -

GTA 5 3D Game Download for Android: How to Play the Best Open-World Game on Your Smartphone

-

Introduction

-

GTA 5 is one of the most popular and acclaimed games of all time. It is an action-adventure open-world game that lets you experience the life of a criminal in the fictional city of Los Santos. You can play as one of three protagonists, each with their own story, personality, and skills. You can also switch between them at any time, creating a dynamic and immersive gameplay.

-




-

GTA 5 is not only a game, but also a cultural phenomenon. It has sold over 150 million copies worldwide, making it one of the best-selling games ever. It has also received numerous awards and accolades, such as Game of the Year, Best Game Design, Best Soundtrack, and more. It has also inspired many other games, movies, TV shows, and memes.

-

But what if you want to play GTA 5 on your Android device? Is it possible? And if so, how can you do it? In this article, we will answer these questions and show you how to download GTA 5 for Android and enjoy playing it on your smartphone. We will also give you some tips and tricks to make the most out of your gaming experience.

-

How to Download GTA 5 for Android

-

Unfortunately, GTA 5 is not officially available for Android devices. Rockstar Games, the developer of GTA 5, has not released a mobile version of the game yet. However, there are some ways to play GTA 5 on your Android device using some third-party apps and services. Here are two methods that you can try:

-

Method 1: Using Steam Link

-

Steam Link is an app that allows you to stream games from your PC to your Android device over a local network. You can use it to play GTA 5 on your Android device as long as you have a PC that can run the game and a stable Wi-Fi or Bluetooth connection. Here are the steps to follow:

-

Step 1: Download and install Steam Link on your Android device

-

You can download Steam Link from the Google Play Store for free. Once you have installed it, open it and tap on Settings. Then tap on Computer and scan for devices in the Bluetooth range or on the same Wi-Fi network as your PC.

-

-

Step 2: Connect your Android device and PC via Bluetooth or Wi-Fi

-

Once you have found your PC on the list of devices, tap on it and enter the PIN code that appears on your PC screen. This will pair your Steam Link app with your PC and allow you to stream games from it.

-

Step 3: Pair your Steam Link app with your PC and launch GTA 5

-

On your PC, open Steam and make sure that GTA 5 is installed and updated. Then, on your Android device, tap on Start Playing on the Steam Link app. This will launch Steam on your PC and show you your library of games. Find GTA 5 and tap on it to start the game.

-

Step 4: Enjoy playing GTA 5 on your Android device

-

Once the game is running, you can use your Android device as a touch screen controller or connect a compatible controller via Bluetooth or USB. You can also adjust the streaming quality and settings on the Steam Link app to optimize your experience. You can now play GTA 5 on your Android device as if you were playing it on your PC.

-

Method 2: Using Epic Games Store

-

Epic Games Store is another platform that allows you to download and play games on your PC. It also offers free games every week, and one of them was GTA 5 in May 2020. If you have claimed GTA 5 from Epic Games Store, you can use it to play the game on your Android device using a similar method as Steam Link. Here are the steps to follow:

-

Step 1: Download and install Epic Games Store on your PC

-

You can download Epic Games Store from its official website for free. Once you have installed it, open it and create an account or sign in with your existing one.

-

Step 2: Find GTA 5 on the store and download it for free

-

If you claimed GTA 5 from Epic Games Store when it was free, you can find it in your library of games. If not, you can buy it from the store for $29.99. Once you have the game, download and install it on your PC.

-

Step 3: Use Steam Link or any other remote play app to stream GTA 5 from your PC to your Android device

-

Since Epic Games Store does not have its own streaming app, you can use Steam Link or any other app that allows you to stream games from your PC to your Android device. Some examples are Parsec, Moonlight, and Rainway. You can follow the same steps as Method 1 to connect your Android device and PC and launch GTA 5.

-

Step 4: Enjoy playing GTA 5 on your Android device

-

Once the game is running, you can use your Android device as a touch screen controller or connect a compatible controller via Bluetooth or USB. You can also adjust the streaming quality and settings on the app to optimize your experience. You can now play GTA 5 on your Android device as if you were playing it on your PC.

-

Tips and Tricks for Playing GTA 5 on Android

-

Playing GTA 5 on Android can be a lot of fun, but it can also be challenging and frustrating at times. Here are some tips and tricks to help you enjoy the game more:

-

Adjust the graphics settings to optimize performance and battery life

-

GTA 5 is a very demanding game that requires a lot of resources from your PC and Android device. To avoid lagging, crashing, overheating, or draining your battery too fast, you should adjust the graphics settings of the game on your PC and the streaming app on your Android device. You can lower the resolution, frame rate, texture quality, shadows, anti-aliasing, and other options to make the game run smoother and save power.

-

Use a controller or a keyboard and mouse for better control and accuracy

-

GTA 5 is a game that involves a lot of shooting, driving, flying, and other actions that require precise and responsive controls. Using a touch screen controller may not be the best option for this game, as it can be inaccurate, uncomfortable, or obstructive. You may want to use a controller or a keyboard and mouse instead for better control and accuracy. You can connect them to your Android device via Bluetooth or USB, or use them directly on your PC if you are close enough.

-

Explore the vast open-world map and discover hidden secrets and easter eggs

-

GTA 5 has a huge open-world map that is full of details, variety, and surprises. You can explore different areas, such as the city, the countryside, the mountains, the desert, the ocean, and more. You can also find hidden secrets and easter eggs that reference other games, movies, TV shows, celebrities, or real-life events. Some examples are UFOs, Bigfoot, aliens, zombies, ghosts, and more. You can also interact with various characters, animals, vehicles, and objects that make the game more realistic and fun.

Try out different game modes and activities, such as races, heists, missions, and more

-

GTA 5 is not just a single-player game. It also has a multiplayer mode called GTA Online, where you can play with or against other players from around the world. You can join or create different game modes and activities, such as races, heists, missions, deathmatches, survival, and more. You can also customize your character, vehicle, weapons, and properties. GTA Online is constantly updated with new content and features, so you will never run out of things to do.

-

Conclusion

-

GTA 5 is one of the best games ever made, and you can play it on your Android device using some third-party apps and services. You can use Steam Link or Epic Games Store to stream the game from your PC to your Android device over a local network. You can also adjust the graphics settings, use a controller or a keyboard and mouse, explore the open-world map, and try out different game modes and activities to enhance your gaming experience.

-

If you are a fan of GTA 5 or want to try it out for the first time, you should definitely download it for Android and play it on your smartphone. It is a game that will keep you entertained for hours and hours. You will not regret it.

-

Have you played GTA 5 on Android? What are your thoughts on it? Let us know in the comments below!

-

FAQs

-

Here are some frequently asked questions about GTA 5 3D game download for Android:

-

Q: Is GTA 5 free for Android?

-

A: No, GTA 5 is not free for Android. You need to buy the game from Steam or Epic Games Store for your PC first. Then you can use Steam Link or any other remote play app to stream the game from your PC to your Android device.

-

Q: Is GTA 5 compatible with all Android devices?

-

A: No, GTA 5 is not compatible with all Android devices. You need to have a device that meets the minimum requirements for streaming games from your PC. These include a fast processor, a good amount of RAM, a decent graphics card, and a stable Wi-Fi or Bluetooth connection.

-

Q: Can I play GTA 5 offline on Android?

-

A: No, you cannot play GTA 5 offline on Android. You need to have an internet connection to stream the game from your PC to your Android device. You also need to have an internet connection to play GTA Online.

-

Q: Can I play GTA 5 with my friends on Android?

-

A: Yes, you can play GTA 5 with your friends on Android. You can join them in GTA Online or invite them to your private session. You can also chat with them using voice or text messages.

-

Q: How much storage space does GTA 5 take on Android?

-

A: GTA 5 does not take any storage space on Android. The game is stored on your PC and streamed to your Android device. However, you may need some storage space for the streaming app that you use.

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download FIFA Mobile MOD APK (Unlocked All Money Menu) and Relive the Worlds Greatest Soccer Tournament with 32 Qualified Nations.md b/spaces/1phancelerku/anime-remove-background/Download FIFA Mobile MOD APK (Unlocked All Money Menu) and Relive the Worlds Greatest Soccer Tournament with 32 Qualified Nations.md deleted file mode 100644 index 9dfdad910fa123582f62972c90149e255476865e..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download FIFA Mobile MOD APK (Unlocked All Money Menu) and Relive the Worlds Greatest Soccer Tournament with 32 Qualified Nations.md +++ /dev/null @@ -1,197 +0,0 @@ -
-

How to Download FIFA Mobile Mod APK for Android and iOS

-

If you are a fan of soccer games, you have probably heard of FIFA Mobile, the popular football simulation game developed by EA Sports. The game features real-world teams, players, stadiums, and tournaments, allowing you to create your own ultimate team and compete against others online. But what if you want to enjoy the game with more features, such as unlimited coins, unlocked players, menu mod, speed hack, and more? In that case, you might want to download FIFA Mobile mod APK, a modified version of the game that gives you access to these features and more.

-

In this article, we will show you how to download FIFA Mobile mod APK for Android and iOS devices, as well as the benefits and risks of using it. We will also give you some tips and tricks for playing FIFA Mobile and improving your skills. But before we get into that, let's take a look at some of the features and gameplay of FIFA Mobile.

-




-

FIFA Mobile Features and Gameplay

-

FIFA Mobile is one of the most popular soccer games on mobile devices, with over 100 million downloads on Google Play Store alone. The game offers a variety of modes, events, challenges, and rewards for you to enjoy. Here are some of the main features and gameplay aspects of FIFA Mobile:

-

FIFA World Cup 2022 Mode

-

Relive the world's greatest soccer tournament with FIFA World Cup 2022 mode, the only licensed FIFA World Cup mobile game where you can replay the official tournament brackets with any of the 32 qualified nations. You can also choose from 15 non-qualified nations and rewrite history by taking them to glory. You can play with authentic World Cup kits, badges, balls, and stadiums, as well as enjoy localized commentary that brings the match atmosphere to life.

-

Soccer Icons and Heroes

-

Build your ultimate team with over 100 soccer icons and heroes from different leagues and eras. You can score big with world soccer icons like Paolo Maldini, Diego Maradona, Zinedine Zidane, and Cristiano Ronaldo, or discover new soccer heroes like Erling Haaland, Kylian Mbappé, and Bruno Fernandes. You can also upgrade your players with skill boosts, chemistry, and training to make them even more powerful.

-

Immersive Next-Level Soccer Simulation

-

Experience realistic soccer simulation with stunning graphics, immersive audio commentary, and 60 frames per second gameplay. You can play in over 30 official leagues, 700 clubs, and 17,000 players from around the world. You can also customize your controls, camera angles, and difficulty levels to suit your preferences. Whether you prefer fast-paced arcade action or tactical simulation, FIFA Mobile has something for you.

-

Manager Mode

-

Be the soccer manager of your own dream team and adjust your tactics in real time. You can choose from different formations, styles, and instructions to outsmart your opponents. You can also scout and sign new players, train and develop your squad, and manage your club's finances. You can compete in various leagues and tournaments to earn rewards and climb the leaderboards.

-

FIFA Mobile System Requirements and Compatibility

-

Before you download FIFA Mobile mod APK for Android or iOS devices, you need to make sure that your device meets the minimum system requirements and compatibility for the game. Here are the details:

-


-

Minimum Requirements for Downloading FIFA Mobile

-

To download FIFA Mobile on your Android or iOS device, you need to have at least 1 GB of free storage space and a stable internet connection. The game also requires the following operating system versions:

- -

Minimum Requirements for Playing Head to Head Mode and 60 FPS Mode

-

To play Head to Head mode and 60 FPS mode in FIFA Mobile, you need to have a device that supports these features. The game also requires the following specifications:

- -

List of Supported and Unsupported Devices

-

Here is a list of some of the supported and unsupported devices for FIFA Mobile:

| Supported Devices | Unsupported Devices |
| --- | --- |
| Samsung Galaxy S7 and above | Samsung Galaxy S6 and below |
| Huawei P10 and above | Huawei P9 and below |
| OnePlus 5T and above | OnePlus 5 and below |
| Google Pixel 2 and above | Google Pixel and below |
| iPhone 7 and above | iPhone 6s and below |
| iPad Air 2 and above | iPad Air and below |

If your device is not listed here, you can check the compatibility by visiting the official website of FIFA Mobile or by contacting the customer support team.

How to Download FIFA Mobile Mod APK for Android

-

If you have an Android device and you want to download FIFA Mobile mod APK, you need to follow these steps:

-

Step 1: Find a reliable source for the modded APK file and download it to your device

-

There are many websites that offer FIFA Mobile mod APK files, but not all of them are safe and trustworthy. Some of them may contain malware, viruses, or outdated versions of the game. Therefore, you need to be careful and do some research before downloading any file from the internet. You can use Google or any other search engine to find some reputable sources for FIFA Mobile mod APK files. You can also check the reviews, ratings, and comments of other users to see if they had any problems with the file.

-

Once you find a reliable source, you need to download the modded APK file to your device. You can use your browser or any other app that allows you to download files from the internet. Make sure that the file name ends with .apk and that the file size matches the one shown on the website. You can also scan the file with an antivirus app before installing it to make sure that it is safe.

-

Step 2: Enable unknown sources in your device settings and install the APK file

-

By default, Android devices do not allow you to install apps from unknown sources, which means sources other than Google Play Store. This is a security measure to prevent you from installing harmful or malicious apps on your device. However, if you want to install FIFA Mobile mod APK, you need to enable unknown sources in your device settings. Here is how you can do that:

- -

Now that you have enabled unknown sources, you can install the APK file that you downloaded in step 1. Here is how you can do that:

- -

Step 3: Launch the game and enjoy the modded features

-

Congratulations! You have successfully installed FIFA Mobile mod APK on your Android device. Now you can launch the game and enjoy the modded features, such as unlimited coins, unlocked players, menu mod, speed hack, and more. You can also access all the modes, events, challenges, and rewards that FIFA Mobile has to offer.

How to Download FIFA Mobile Mod APK for iOS

-

If you have an iOS device and you want to download FIFA Mobile mod APK, you need to follow these steps:

-

Step 1: Find a reliable source for the modded IPA file and download it to your computer

-

An IPA file is the equivalent of an APK file for iOS devices. It is the file format that contains the app data and code for iOS apps. To download FIFA Mobile mod APK for iOS devices, you need to find a reliable source for the modded IPA file and download it to your computer. You can use the same methods as you did for finding the modded APK file for Android devices, such as using Google or any other search engine, checking the reviews, ratings, and comments of other users, and scanning the file with an antivirus app.

-

Once you find a reliable source, you need to download the modded IPA file to your computer. You can use your browser or any other app that allows you to download files from the internet. Make sure that the file name ends with .ipa and that the file size matches the one shown on the website.

-

Step 2: Install Cydia Impactor on your computer and connect your iOS device to it

-

Cydia Impactor is a tool that allows you to install IPA files on your iOS device without jailbreaking it. You need to install Cydia Impactor on your computer and connect your iOS device to it in order to install FIFA Mobile mod APK on your iOS device. Here is how you can do that:

- -

Step 3: Drag and drop the IPA file to Cydia Impactor and enter your Apple ID and password

-

Now that you have Cydia Impactor and your iOS device ready, you can install FIFA Mobile mod APK on your iOS device. Here is how you can do that:

- -

Step 4: Trust the developer profile on your iOS device and launch the game

-

Congratulations! You have successfully installed FIFA Mobile mod APK on your iOS device. However, before you can launch the game, you need to trust the developer profile on your iOS device. Here is how you can do that:

-

Benefits and Risks of Using FIFA Mobile Mod APK

-

As you can see, downloading FIFA Mobile mod APK for Android or iOS devices can give you many advantages and enhance your gaming experience. However, it also comes with some drawbacks and dangers that you should be aware of. Here are some of the benefits and risks of using FIFA Mobile mod APK:

-

Benefits

-

Some of the benefits of using FIFA Mobile mod APK are:

- -

Risks

-

Some of the risks of using FIFA Mobile mod APK are:

- -

Tips and Tricks for Playing FIFA Mobile

-

If you want to play FIFA Mobile like a pro and improve your skills, you need to follow some tips and tricks that can help you win more matches and earn more rewards. Here are some of them:

-

Attack Mode Tips

-

Attack Mode is an asynchronous mode where you play against other players in turn-based matches. You only control your team's attacking moves, while your opponent controls their defending moves. Here are some tips for playing Attack Mode:

-

Head to Head Tips

-

Head to Head is a real-time mode where you play against other players in full matches. You control your team's attacking and defending moves, while your opponent does the same. Here are some tips for playing Head to Head:

- -

Manager Mode Tips

-

Manager Mode is an idle mode where you play as the soccer manager of your own team. You can plan your strategy, adjust your tactics, scout and sign new players, train and develop your squad, and manage your club's finances. Here are some tips for playing Manager Mode:

- -

General Tips

-

Here are some general tips that apply to all modes in FIFA Mobile:

- -

Conclusion

-

FIFA Mobile is a fun and exciting soccer game that you can play on your Android or iOS device. The game offers you various modes, features, and gameplay aspects that can keep you entertained for hours. However, if you want to enjoy the game with more features and advantages, you can download FIFA Mobile mod APK for Android or iOS devices.

-

In this article, we showed you how to download FIFA Mobile mod APK for Android or iOS devices using Cydia Impactor or unknown sources settings. We also showed you some of the benefits and risks of using FIFA Mobile mod APK, such as unlimited coins, unlocked players, menu mod, speed hack, and more. We also gave you some tips and tricks for playing FIFA Mobile and improving your skills, such as using skill moves, shooting smartly, adjusting your tactics, leveling up your players, improving your chemistry, and completing events.

-

We hope that this article was helpful and informative for you. If you have any questions or feedback, please feel free to leave a comment below. We would love to hear from you. And if you liked this article, please share it with your friends and family who might be interested in FIFA Mobile mod APK.

-

Thank you for reading and happy gaming!

-

FAQs

-

Here are some of the frequently asked questions about FIFA Mobile mod APK:

-

Q: Is FIFA Mobile mod APK safe to use?

-

A: FIFA Mobile mod APK is not an official version of the game and it may contain malware, viruses, or spyware that can harm your device or steal your personal information. You should only download FIFA Mobile mod APK from reliable sources and scan the file with an antivirus app before installing it. You should also backup your data and files before using FIFA Mobile mod APK.

-

Q: Is FIFA Mobile mod APK legal to use?

-

A: FIFA Mobile mod APK is not a legal version of the game and it violates the terms of service of EA Sports. You may face legal consequences for using FIFA Mobile mod APK, such as account ban, game crashes, or lawsuits. You should only use FIFA Mobile mod APK at your own risk and responsibility.

-

Q: How can I update FIFA Mobile mod APK?

-

A: FIFA Mobile mod APK may not be compatible with the latest updates of the game. You may need to uninstall the modded version of the game and install the updated version from the official source or from a reliable source for the modded version. You should also check the compatibility and requirements of the updated version before installing it.

-

Q: How can I uninstall FIFA Mobile mod APK?

-

A: If you want to uninstall FIFA Mobile mod APK from your device, you can follow these steps:

- -

Q: How can I contact EA Sports for support or feedback?

-

A: If you want to contact EA Sports for support or feedback regarding FIFA Mobile or any other game, you can use these methods:

-

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Enjoy FIFA Mobile 2023 on Your Smartphone Download the Apk Without Any Hassle.md b/spaces/1phancelerku/anime-remove-background/Enjoy FIFA Mobile 2023 on Your Smartphone Download the Apk Without Any Hassle.md deleted file mode 100644 index db63cc13ac4a1c527f9d42f9e9e6f7c174b26714..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Enjoy FIFA Mobile 2023 on Your Smartphone Download the Apk Without Any Hassle.md +++ /dev/null @@ -1,98 +0,0 @@ -
-

FIFA 23 APK Download No Verification: How to Play the Latest FIFA Mobile Game on Your Android Device

-

Introduction

-

If you are a fan of soccer games, you must have heard of FIFA 23, the latest installment in the popular FIFA series by EA Sports. FIFA 23 is a game that offers realistic graphics, gameplay, and features that will make you feel like you are on the pitch. However, if you want to play FIFA 23 on your Android device, you might encounter some challenges. For one thing, the game is not yet officially released on the Google Play Store. For another, you might need to verify your device or account before you can play the game. This can be frustrating and time-consuming, especially if you just want to enjoy the game right away. Fortunately, there is a way to bypass these obstacles and play FIFA 23 on your Android device without any verification. All you need to do is download the FIFA 23 APK file and install it on your device. In this article, we will show you how to do that, as well as what features and benefits you can expect from playing FIFA 23 Mobile.

-

What is FIFA 23 Mobile?

-

FIFA 23 Mobile is a version of FIFA 23 that is designed for mobile devices. It is a game that allows you to experience the thrill and excitement of soccer on your smartphone or tablet. You can create your own custom lineup, choose from hundreds of players and teams, and compete in various modes and events. You can also play online with other players from around the world, or challenge your friends in head-to-head matches. FIFA 23 Mobile is a game that will keep you entertained and engaged for hours.

-




-

Why do you need to download the APK file?

-

An APK file is an application package file that contains all the data and files needed to run an app on an Android device. Normally, when you download an app from the Google Play Store, it automatically installs the APK file on your device. However, since FIFA 23 is not yet available on the Google Play Store, you need to download the APK file manually from another source. This way, you can install and play the game without waiting for the official release or verification.

-

How to download and install the FIFA 23 APK file?

-

To download and install the FIFA 23 APK file on your Android device, follow these steps:

-
    -
  1. Download the FIFA 23 APK file from one of these links:
  2. -
  3. Go to Settings > Security > Unknown Sources and enable installation from unknown sources.
  4. -
  5. Locate the downloaded APK file on your device and tap on it.
  6. -
  7. Follow the instructions on the screen to install the game.
  8. -
  9. Open the game and log in as a guest.
  10. -
  11. Enjoy playing FIFA 23 Mobile!
  12. -
-

Features of FIFA 23 Mobile

-

FIFA 23 Mobile is a game that offers many features that will enhance your gaming experience. Some of these features are:

-

New menus and UI

-

FIFA 23 Mobile has a new and improved user interface that makes it easier and faster to navigate through the game. The menus are more intuitive and responsive, and the graphics are more crisp and clear. You can also customize your home screen with your favorite players and teams.

-

Custom lineups

FIFA 23 Mobile allows you to create your own custom lineups with your favorite players and formations. You can also adjust the tactics and roles of each player according to your strategy. You can save up to five different lineups and switch between them anytime.

-

Advanced passing

-

FIFA 23 Mobile introduces a new and advanced passing system that gives you more control and accuracy over your passes. You can use gestures, buttons, or a combination of both to execute different types of passes, such as through balls, lobbed passes, or backheels. You can also use the new pass and move feature to make your players run after passing the ball.

-

Updated player roster and event players

-

FIFA 23 Mobile features an updated player roster with the latest transfers and ratings. You can choose from over 700 teams and 17,000 players from various leagues and countries. You can also unlock special event players with boosted stats and skills. These players are available for a limited time during certain events, such as the Champions League, the World Cup, or Halloween.

-

Updated audio commentary

-

FIFA 23 Mobile has a new and improved audio commentary that adds more realism and immersion to the game. The commentary is more dynamic and responsive to the actions on the pitch, and it also includes more languages and accents. You can also customize the volume and language of the commentary in the settings.

-

Live OVR mini-events

-

FIFA 23 Mobile has a new feature called Live OVR mini-events that lets you boost your team's overall rating (OVR) by completing certain tasks. These tasks include scoring goals, providing assists, winning matches, and playing with specific players. The higher your OVR, the better your chances of winning matches and earning rewards.

-

VS Attack and Head to Head modes

-

FIFA 23 Mobile offers two modes for online multiplayer: VS Attack and Head to Head. VS Attack is a mode where you play against another player in a turn-based match. Each turn lasts for 90 seconds, and you have to score as many goals as possible while defending your own goal. The player with the most goals at the end of the match wins. Head to Head is a mode where you play against another player in a real-time match. You have full control over your players and you can use various tactics and strategies to outsmart your opponent. The match lasts for six minutes, and the player with the most goals at the end of the match wins.

-

Benefits of playing FIFA 23 Mobile

-

Playing FIFA 23 Mobile has many benefits that will make you enjoy the game even more. Some of these benefits are:

-

Enjoy the realistic graphics and gameplay

-

FIFA 23 Mobile has stunning graphics that will make you feel like you are watching a real soccer match. The game uses the Frostbite engine, which is also used for other EA games such as Battlefield and Need for Speed. The game also has realistic gameplay that simulates the physics, movements, and behaviors of real soccer players. You can see the expressions, emotions, and reactions of the players as they play on the pitch.

-

Compete with other players online

-

FIFA 23 Mobile lets you compete with other players online in various modes and events. You can test your skills and strategies against players from different countries and regions. You can also join leagues and tournaments to win trophies and prizes. You can also chat with other players and make friends through the game's social features.

-

Earn rewards and bonuses

-

FIFA 23 Mobile rewards you with coins, gems, packs, players, and other items for playing the game. You can earn these rewards by completing matches, events, achievements, or daily tasks. You can also get bonuses for logging in every day, watching ads, or inviting friends to play the game. You can use these rewards to upgrade your team, unlock new features, or buy more items in the game's store.

-

Conclusion

-

FIFA 23 Mobile is a game that will satisfy any soccer fan's cravings. It offers realistic graphics, gameplay, and features that make you feel like you are on the pitch; it lets you play online against players from around the world or challenge your friends in head-to-head matches; and it rewards you with coins, gems, packs, players, and other items just for playing. In short, it is a game that will keep you entertained and engaged for hours.

-

FAQs

-

-
-
\ No newline at end of file diff --git a/spaces/2ndelement/voicevox/test/__init__.py b/spaces/2ndelement/voicevox/test/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/801artistry/RVC801/infer/lib/infer_pack/modules/F0Predictor/DioF0Predictor.py b/spaces/801artistry/RVC801/infer/lib/infer_pack/modules/F0Predictor/DioF0Predictor.py deleted file mode 100644 index 55abcfdb87636a9ee85b8df5cdc1bec64098b5da..0000000000000000000000000000000000000000 --- a/spaces/801artistry/RVC801/infer/lib/infer_pack/modules/F0Predictor/DioF0Predictor.py +++ /dev/null @@ -1,91 +0,0 @@ -import numpy as np -import pyworld - -from infer.lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor - - -class DioF0Predictor(F0Predictor): - def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100): - self.hop_length = hop_length - self.f0_min = f0_min - self.f0_max = f0_max - self.sampling_rate = sampling_rate - - def interpolate_f0(self, f0): - """ - 对F0进行插值处理 - """ - - data = np.reshape(f0, (f0.size, 1)) - - vuv_vector = np.zeros((data.size, 1), dtype=np.float32) - vuv_vector[data > 0.0] = 1.0 - vuv_vector[data <= 0.0] = 0.0 - - ip_data = data - - frame_number = data.size - last_value = 0.0 - for i in range(frame_number): - if data[i] <= 0.0: - j = i + 1 - for j in range(i + 1, frame_number): - if data[j] > 0.0: - break - if j < frame_number - 1: - if last_value > 0.0: - step = (data[j] - data[i - 1]) / float(j - i) - for k in range(i, j): - ip_data[k] = data[i - 1] + step * (k - i + 1) - else: - for k in range(i, j): - ip_data[k] = data[j] - else: - for k in range(i, frame_number): - ip_data[k] = last_value - else: - ip_data[i] = data[i] # 这里可能存在一个没有必要的拷贝 - last_value = data[i] - - return ip_data[:, 0], vuv_vector[:, 0] - - def resize_f0(self, x, target_len): - source = np.array(x) - source[source < 0.001] = np.nan - target = np.interp( - np.arange(0, len(source) * target_len, len(source)) / target_len, - np.arange(0, len(source)), - source, - ) - res = np.nan_to_num(target) - return res - - def compute_f0(self, wav, p_len=None): - if p_len is None: - p_len = wav.shape[0] // self.hop_length - f0, t = pyworld.dio( - wav.astype(np.double), - fs=self.sampling_rate, - f0_floor=self.f0_min, - f0_ceil=self.f0_max, - frame_period=1000 * self.hop_length / self.sampling_rate, - ) - f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate) - for index, pitch in enumerate(f0): - f0[index] = round(pitch, 1) - return self.interpolate_f0(self.resize_f0(f0, p_len))[0] - - def compute_f0_uv(self, wav, p_len=None): - if p_len is None: - p_len = wav.shape[0] // self.hop_length - f0, t = pyworld.dio( - wav.astype(np.double), - fs=self.sampling_rate, - f0_floor=self.f0_min, - f0_ceil=self.f0_max, - frame_period=1000 * self.hop_length / self.sampling_rate, - ) - f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate) - for index, pitch in enumerate(f0): - f0[index] = round(pitch, 1) - return self.interpolate_f0(self.resize_f0(f0, p_len)) diff --git a/spaces/AIFILMS/Image-Animation-using-Thin-Plate-Spline-Motion-Model/README.md b/spaces/AIFILMS/Image-Animation-using-Thin-Plate-Spline-Motion-Model/README.md deleted file mode 100644 index 2530d1d0b19ac755a71446269b5e5bcb32c5079d..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/Image-Animation-using-Thin-Plate-Spline-Motion-Model/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Image Animation Using Thin Plate Spline 
Motion Model -emoji: 👁 -colorFrom: indigo -colorTo: indigo -sdk: gradio -sdk_version: 3.0.19 -app_file: app.py -pinned: false -duplicated_from: gronkomatic/Image-Animation-using-Thin-Plate-Spline-Motion-Model ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/training/data.py b/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/training/data.py deleted file mode 100644 index 1d80d598be97d4e04f1b7f3e53a877cfe82ce667..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/training/data.py +++ /dev/null @@ -1,977 +0,0 @@ -import ast -import json -import logging -import math -import os -import random -# import h5py -from dataclasses import dataclass -from audioldm.clap.training.params import parse_args -# import braceexpand -import numpy as np -import pandas as pd -import torch -import torch.nn as nn -import torch.nn.functional as F -import torchvision.datasets as datasets -import torchvision.transforms -# import webdataset as wds -from PIL import Image -from torch.utils.data import Dataset, DataLoader, SubsetRandomSampler -from torch.utils.data.distributed import DistributedSampler -from functools import partial -import soundfile as sf -import io -from pathlib import Path -# import wget - -from audioldm.clap.open_clip.utils import ( - get_tar_path_from_dataset_name, - dataset_split, -) -from audioldm.clap.open_clip.utils import load_p, load_class_label -import copy - -try: - import horovod.torch as hvd -except ImportError: - hvd = None - -try: - import torchaudio -except ImportError: - torchaudio = None - -from audioldm.clap.open_clip import tokenize - - -def tokenizer(text): - return tokenize(text).squeeze(0) - - -from transformers import RobertaTokenizer - -tokenize = RobertaTokenizer.from_pretrained("roberta-base") - - -def tokenizer(text): - result = tokenize( - text, - padding="max_length", - truncation=True, - max_length=77, - return_tensors="pt", - ) - return {k: v.squeeze(0) for k, v in result.items()} - - -# initizlied the audioset map -_AUDIOSET_MAP_PATH = os.path.join(Path(__file__).parent, "audioset_textmap.npy") -_AUDIOSET_MAP = np.load(_AUDIOSET_MAP_PATH, allow_pickle=True) - - -def int16_to_float32(x): - return (x / 32767.0).astype(np.float32) - - -def float32_to_int16(x): - x = np.clip(x, a_min=-1.0, a_max=1.0) - return (x * 32767.0).astype(np.int16) - - -# For Toy Dataset -# class ToyDataset(Dataset): -# def __init__(self, index_path, ipc, config, eval_mode=False): -# """Toy Dataset for testing the audioset input with text labels -# Parameters -# ---------- -# index_path: str -# the link to the h5 file of each audio -# idc: str -# the link to the npy file, the number of samples in each class -# config: dict -# the audio cfg file -# eval_model (bool): to indicate if the dataset is a testing dataset -# """ -# self.audio_cfg = config["audio_cfg"] -# self.text_cfg = config["text_cfg"] -# self.fp = h5py.File(index_path, "r") -# self.ipc = np.load(ipc, allow_pickle=True) -# self.total_size = len(self.fp["audio_name"]) -# self.classes_num = self.audio_cfg["class_num"] -# self.eval_mode = eval_mode - -# if not eval_mode: -# self.generate_queue() -# else: -# self.queue = [] -# for i in range(self.total_size): -# target = self.fp["target"][i] -# if np.sum(target) > 0: -# self.queue.append(i) -# self.total_size = len(self.queue) -# logging.info("total dataset size: %d" % (self.total_size)) -# 
logging.info("class num: %d" % (self.classes_num)) - -# def time_shifting(self, x): -# frame_num = len(x) -# shift_len = random.randint(0, frame_num - 1) -# new_sample = np.concatenate([x[shift_len:], x[:shift_len]], axis=0) -# return new_sample - -# def generate_queue(self): -# self.queue = [] -# while len(self.queue) < self.total_size: -# class_set = [*range(self.classes_num)] -# random.shuffle(class_set) -# self.queue += [ -# self.ipc[d][random.randint(0, len(self.ipc[d]) - 1)] for d in class_set -# ] -# self.queue = self.queue[: self.total_size] - -# logging.info("queue regenerated:%s" % (self.queue[-5:])) - -# def crop_wav(self, x): -# crop_size = self.audio_cfg["crop_size"] -# crop_pos = random.randint(0, len(x) - crop_size - 1) -# return x[crop_pos : crop_pos + crop_size] - -# def prompt_text(self, target): -# events = _AUDIOSET_MAP[np.where(target > 0)] -# event_text = "The sounds of " + ", ".join(events[:-1]) + " and " + events[-1] -# text = tokenize(event_text)[0] -# return text - -# def __getitem__(self, index): -# """Load waveform, text, and target of an audio clip - -# Parameters -# ---------- -# index: int -# the index number -# Return -# ------ -# output: dict { -# "hdf5_path": str, -# "index_in_hdf5": int, -# "audio_name": str, -# "waveform": list (audio_length,), -# "target": list (class_num, ), -# "text": torch.tensor (context_length,) -# } -# the output dictionary -# """ -# s_index = self.queue[index] - -# audio_name = self.fp["audio_name"][s_index].decode() -# # Hardcode here CHANGE -# hdf5_path = ( -# self.fp["hdf5_path"][s_index] -# .decode() -# .replace( -# "../workspace", -# "/home/la/kechen/Research/ke_zsasp/workspace", -# ) -# ) -# r_idx = self.fp["index_in_hdf5"][s_index] -# target = self.fp["target"][s_index].astype(np.float32) -# text = self.prompt_text(target) -# with h5py.File(hdf5_path, "r") as f: -# waveform = int16_to_float32(f["waveform"][r_idx])[ -# : self.audio_cfg["clip_samples"] -# ] -# assert ( -# len(waveform) == self.audio_cfg["clip_samples"] -# ), "The sample length is not match" -# # Time shift -# # if (self.config.enable_time_shift) and (not self.eval_mode): -# # waveform = self.time_shifting(waveform) -# # # Label Enhance -# # if (self.config.crop_size is not None) and (not self.eval_mode): -# # waveform = self.crop_wav(waveform) -# # # the label enhance rate is fixed 0.5 -# # if (self.config.enable_label_enhance) and (not self.eval_mode) and random.random() < 0.5: -# # kidx = np.where(target)[0] -# # for k in kidx: -# # for add_key in self.class_map[k][1]: -# # target[add_key] = 1.0 -# # if len(self.class_map[k][2]) > 0: -# # add_key = random.choice(self.class_map[k][2]) -# # target[add_key] = 1.0 - -# # missing the text input -# mel_spec = get_mel(torch.from_numpy(waveform), self.audio_cfg)[None, :, :] -# mel_spec = ( -# torch.cat( -# [mel_spec, mel_spec.clone(), mel_spec.clone(), mel_spec.clone()], dim=0 -# ) -# .cpu() -# .numpy() -# ) -# longer = random.choice([True, False]) -# if longer == False: -# mel_spec[1:, :, :] = 0.0 -# data_dict = { -# "hdf5_path": hdf5_path, -# "index_in_hdf5": r_idx, -# "audio_name": audio_name, -# "waveform": waveform, -# "class_label": target, -# "text": text, -# "longer": longer, -# "mel_fusion": mel_spec, -# } -# return data_dict - -# def __len__(self): -# return self.total_size - - -class CsvDataset(Dataset): - def __init__(self, input_filename, transforms, img_key, caption_key, sep="\t"): - logging.debug(f"Loading csv data from {input_filename}.") - df = pd.read_csv(input_filename, sep=sep) - - self.images 
= df[img_key].tolist() - self.captions = df[caption_key].tolist() - self.transforms = transforms - logging.debug("Done loading data.") - - def __len__(self): - return len(self.captions) - - def __getitem__(self, idx): - images = self.transforms(Image.open(str(self.images[idx]))) - texts = tokenize([str(self.captions[idx])])[0] - return images, texts - - -@dataclass -class DataInfo: - dataloader: DataLoader - sampler: DistributedSampler - - -def preprocess_txt(text): - return tokenize([str(text)])[0] - - -def get_dataset_size(shards, sizefilepath_=None, is_local=True): - if isinstance(shards, list): - size_list = [] - for s in shards: - size_list.append( - get_dataset_size(s, sizefilepath_=sizefilepath_, is_local=is_local)[0] - ) - else: - if not is_local: - for n in dataset_split.keys(): - if n in shards.split("/"): - break - for s in dataset_split[n]: - if s in shards.split("/"): - break - sizefilepath_ = f"./json_files/{n}/{s}/sizes.json" - shards_list = list(braceexpand.braceexpand(shards)) - dir_path = os.path.dirname(shards) - if sizefilepath_ is not None: - sizes = json.load(open(sizefilepath_, "r")) - total_size = sum( - [ - int(sizes[os.path.basename(shard.replace(".tar -", ".tar"))]) - for shard in shards_list - ] - ) - else: - sizes_filename = os.path.join(dir_path, "sizes.json") - len_filename = os.path.join(dir_path, "__len__") - if os.path.exists(sizes_filename): - sizes = json.load(open(sizes_filename, "r")) - total_size = sum( - [int(sizes[os.path.basename(shard)]) for shard in shards_list] - ) - elif os.path.exists(len_filename): - # FIXME this used to be eval(open(...)) but that seemed rather unsafe - total_size = ast.literal_eval(open(len_filename, "r").read()) - else: - raise Exception( - "Cannot find sizes file for dataset. Please specify the path to the file." 
- ) - # total_size = None # num samples undefined - # some common dataset sizes (at time of authors last download) - # cc3m-train: 2905954 - # cc12m: 10968539 - # LAION-400m: 407332084 - num_shards = len(shards_list) - if isinstance(shards, list): - return sum(size_list), len(shards) - else: - return total_size, num_shards - - -def get_imagenet(args, preprocess_fns, split): - assert split in ["train", "val", "v2"] - is_train = split == "train" - preprocess_train, preprocess_val = preprocess_fns - - if split == "v2": - from imagenetv2_pytorch import ImageNetV2Dataset - - dataset = ImageNetV2Dataset(location=args.imagenet_v2, transform=preprocess_val) - else: - if is_train: - data_path = args.imagenet_train - preprocess_fn = preprocess_train - else: - data_path = args.imagenet_val - preprocess_fn = preprocess_val - assert data_path - - dataset = datasets.ImageFolder(data_path, transform=preprocess_fn) - - if is_train: - idxs = np.zeros(len(dataset.targets)) - target_array = np.array(dataset.targets) - k = 50 - for c in range(1000): - m = target_array == c - n = len(idxs[m]) - arr = np.zeros(n) - arr[:k] = 1 - np.random.shuffle(arr) - idxs[m] = arr - - idxs = idxs.astype("int") - sampler = SubsetRandomSampler(np.where(idxs)[0]) - else: - sampler = None - - dataloader = torch.utils.data.DataLoader( - dataset, - batch_size=args.batch_size, - num_workers=args.workers, - sampler=sampler, - ) - - return DataInfo(dataloader, sampler) - - -def count_samples(dataloader): - os.environ["WDS_EPOCH"] = "0" - n_elements, n_batches = 0, 0 - for images, texts in dataloader: - n_batches += 1 - n_elements += len(images) - assert len(images) == len(texts) - return n_elements, n_batches - - -def filter_no_caption(sample): - return "txt" in sample - - -def log_and_continue(exn): - """Call in an exception handler to ignore any exception, isssue a warning, and continue.""" - logging.warning(f"Handling webdataset error ({repr(exn)}). Ignoring.") - return True - - -_SHARD_SHUFFLE_SIZE = 2000 -_SHARD_SHUFFLE_INITIAL = 500 -_SAMPLE_SHUFFLE_SIZE = 5000 -_SAMPLE_SHUFFLE_INITIAL = 1000 - - -def sample_prop(sizefile, inputs, proportion, is_local=True): - """ - Sample a proportion of the data. 
- """ - file_path_dict = { - os.path.split(inputs[i])[1]: os.path.split(inputs[i])[0] - for i in range(len(inputs)) - } - sampled_filepath_dict = {} - sampled_size_dict = {} - if not is_local: - if os.path.exists("sizes.json"): - os.remove("sizes.json") - wget.download(sizefile, "sizes.json") - sizefile = "sizes.json" - with open(sizefile, "r", encoding="UTF-8") as f: - load_dict = json.load(f) - L = int(len(file_path_dict) * proportion) - subkeys = random.sample(file_path_dict.keys(), L) - for k in subkeys: - sampled_size_dict[k] = load_dict[k] - sampled_filepath_dict[k] = file_path_dict[k] - return ( - sum(sampled_size_dict.values()), - L, - [os.path.join(v, k) for k, v in sampled_filepath_dict.items()], - sampled_size_dict, - ) - - -def get_mel(audio_data, audio_cfg): - # mel shape: (n_mels, T) - mel = torchaudio.transforms.MelSpectrogram( - sample_rate=audio_cfg["sample_rate"], - n_fft=audio_cfg["window_size"], - win_length=audio_cfg["window_size"], - hop_length=audio_cfg["hop_size"], - center=True, - pad_mode="reflect", - power=2.0, - norm=None, - onesided=True, - n_mels=64, - f_min=audio_cfg["fmin"], - f_max=audio_cfg["fmax"], - ).to(audio_data.device) - mel = mel(audio_data) - # Align to librosa: - # librosa_melspec = librosa.feature.melspectrogram( - # waveform, - # sr=audio_cfg['sample_rate'], - # n_fft=audio_cfg['window_size'], - # hop_length=audio_cfg['hop_size'], - # win_length=audio_cfg['window_size'], - # center=True, - # pad_mode="reflect", - # power=2.0, - # n_mels=64, - # norm=None, - # htk=True, - # f_min=audio_cfg['fmin'], - # f_max=audio_cfg['fmax'] - # ) - # we use log mel spectrogram as input - mel = torchaudio.transforms.AmplitudeToDB(top_db=None)(mel) - return mel.T # (T, n_mels) - - -def get_audio_features( - sample, audio_data, max_len, data_truncating, data_filling, audio_cfg -): - """ - Calculate and add audio features to sample. - Sample: a dict containing all the data of current sample. - audio_data: a tensor of shape (T) containing audio data. - max_len: the maximum length of audio data. - data_truncating: the method of truncating data. - data_filling: the method of filling data. - audio_cfg: a dict containing audio configuration. Comes from model_cfg['audio_cfg']. - """ - with torch.no_grad(): - if len(audio_data) > max_len: - if data_truncating == "rand_trunc": - longer = torch.tensor([True]) - elif data_truncating == "fusion": - # fusion - mel = get_mel(audio_data, audio_cfg) - # split to three parts - chunk_frames = ( - max_len // audio_cfg["hop_size"] + 1 - ) # the +1 related to how the spectrogram is computed - total_frames = mel.shape[0] - if chunk_frames == total_frames: - # there is a corner case where the audio length is - # larger than max_len but smaller than max_len+hop_size. - # In this case, we just use the whole audio. 
- mel_fusion = torch.stack([mel, mel, mel, mel], dim=0) - sample["mel_fusion"] = mel_fusion - longer = torch.tensor([False]) - else: - ranges = np.array_split( - list(range(0, total_frames - chunk_frames + 1)), 3 - ) - # print('total_frames-chunk_frames:', total_frames-chunk_frames, - # 'len(audio_data):', len(audio_data), - # 'chunk_frames:', chunk_frames, - # 'total_frames:', total_frames) - if len(ranges[1]) == 0: - # if the audio is too short, we just use the first chunk - ranges[1] = [0] - if len(ranges[2]) == 0: - # if the audio is too short, we just use the first chunk - ranges[2] = [0] - # randomly choose index for each part - idx_front = np.random.choice(ranges[0]) - idx_middle = np.random.choice(ranges[1]) - idx_back = np.random.choice(ranges[2]) - # select mel - mel_chunk_front = mel[idx_front : idx_front + chunk_frames, :] - mel_chunk_middle = mel[idx_middle : idx_middle + chunk_frames, :] - mel_chunk_back = mel[idx_back : idx_back + chunk_frames, :] - - # shrink the mel - mel_shrink = torchvision.transforms.Resize(size=[chunk_frames, 64])( - mel[None] - )[0] - # logging.info(f"mel_shrink.shape: {mel_shrink.shape}") - - # stack - mel_fusion = torch.stack( - [mel_chunk_front, mel_chunk_middle, mel_chunk_back, mel_shrink], - dim=0, - ) - sample["mel_fusion"] = mel_fusion - longer = torch.tensor([True]) - else: - raise NotImplementedError( - f"data_truncating {data_truncating} not implemented" - ) - # random crop to max_len (for compatibility) - overflow = len(audio_data) - max_len - idx = np.random.randint(0, overflow + 1) - audio_data = audio_data[idx : idx + max_len] - - else: # padding if too short - if len(audio_data) < max_len: # do nothing if equal - if data_filling == "repeatpad": - n_repeat = int(max_len / len(audio_data)) - audio_data = audio_data.repeat(n_repeat) - # audio_data = audio_data.unsqueeze(0).unsqueeze(0).unsqueeze(0) - # audio_data = F.interpolate(audio_data,size=max_len,mode="bicubic")[0,0,0] - audio_data = F.pad( - audio_data, - (0, max_len - len(audio_data)), - mode="constant", - value=0, - ) - elif data_filling == "pad": - audio_data = F.pad( - audio_data, - (0, max_len - len(audio_data)), - mode="constant", - value=0, - ) - elif data_filling == "repeat": - n_repeat = int(max_len / len(audio_data)) - audio_data = audio_data.repeat(n_repeat + 1)[:max_len] - else: - raise NotImplementedError( - f"data_filling {data_filling} not implemented" - ) - if data_truncating == "fusion": - mel = get_mel(audio_data, audio_cfg) - mel_fusion = torch.stack([mel, mel, mel, mel], dim=0) - sample["mel_fusion"] = mel_fusion - longer = torch.tensor([False]) - - sample["longer"] = longer - sample["waveform"] = audio_data - - return sample - - -def preprocess( - sample, - audio_ext, - text_ext, - max_len, - audio_cfg, - class_index_dict=None, - data_filling="pad", - data_truncating="rand_trunc", - text_augment_selection=None, -): - """ - Preprocess a single sample for wdsdataloader. 
- """ - audio_data, orig_sr = sf.read(io.BytesIO(sample[audio_ext])) - audio_data = int16_to_float32(float32_to_int16(audio_data)) - audio_data = torch.tensor(audio_data).float() - - # TODO: (yusong) to be include in the future - # # if torchaudio not installed, use soundfile to load audio - # if torchaudio is None: - # audio_data, orig_sr = sf.read(io.BytesIO(sample[audio_ext])) - # audio_data = torch.tensor(audio_data).float() - # else: - # # https://github.com/webdataset/webdataset/blob/main/webdataset/autodecode.py - # with tempfile.TemporaryDirectory() as dirname: - # os.makedirs(dirname, exist_ok=True) - # fname = os.path.join(dirname, f"file.flac") - # with open(fname, "wb") as stream: - # stream.write(sample[audio_ext]) - # audio_data, orig_sr = torchaudio.load(fname) - # audio_data = audio_data[0, :].float() - - sample = get_audio_features( - sample, audio_data, max_len, data_truncating, data_filling, audio_cfg - ) - del sample[audio_ext] - - try: - json_dict_raw = json.loads(sample[text_ext].decode("utf-8")) - except: - print("sample[__url__]:", sample["__url__"]) - - # For selecting augmented text from dataset - if text_augment_selection is None or text_augment_selection == "none": - texts = json_dict_raw["text"] - elif text_augment_selection == "all": - if "text_augment_all" in json_dict_raw.keys(): - texts = json_dict_raw["text_augment_all"] - else: - texts = json_dict_raw["text"] - elif text_augment_selection == "augment_only": - if "text_augment_all" in json_dict_raw.keys(): - if json_dict_raw["text_augment_t5"] is None: - texts = json_dict_raw["text"] - else: - texts = json_dict_raw["text_augment_t5"] - else: - texts = json_dict_raw["text"] - else: - raise NotImplementedError( - f"text_augment_selection {text_augment_selection} not implemented" - ) - sample["full_text"] = texts - - if isinstance(texts, list) and isinstance(texts[0], str) and len(texts) > 1: - texts = random.choice(texts) - sample["raw_text"] = texts - sample["text"] = tokenizer(texts) # text shape: [num_token] - if class_index_dict is not None: - # https://stackoverflow.com/questions/48004243/how-to-share-large-read-only-dictionary-list-across-processes-in-multiprocessing - # https://stackoverflow.com/questions/45693949/storing-strings-in-a-multiprocessing-sharedctypes-array - # key, val = class_index_dict - # key = key[:].split('\n') - # _dict = {k: v for k, v in zip(key, val)} - sample["class_label"] = np.zeros(len(class_index_dict.keys())) - for x in json_dict_raw["tag"]: - sample["class_label"][class_index_dict[x]] = 1 - sample["class_label"] = torch.tensor(sample["class_label"]).float() - del sample[text_ext] - sample["audio_name"] = sample["__key__"].split("/")[-1] + "." + audio_ext - sample["text_name"] = sample["__key__"].split("/")[-1] + "." + text_ext - sample["audio_orig_sr"] = orig_sr - return sample - - -def collate_fn(batch): - """ - Collate function for wdsdataloader. - batch: a list of dict, each dict is a sample - """ - # concatenate values in each dictionary. if it is a tensor, concatenate. if it is a list, extend. 
- batch_dict = {} - for k in batch[0].keys(): - if isinstance(batch[0][k], dict): # dealwith bert tokenizer output - batch_dict[k] = {} - for kk in batch[0][k].keys(): - tmp = [] - for i in range(len(batch)): - tmp.append(batch[i][k][kk]) - batch_dict[k][kk] = torch.vstack(tmp) - elif isinstance(batch[0][k], torch.Tensor): - batch_dict[k] = torch.stack([sample[k] for sample in batch]) - elif isinstance(batch[0][k], np.ndarray): - batch_dict[k] = torch.tensor(np.stack([sample[k] for sample in batch])) - else: - batch_dict[k] = [sample[k] for sample in batch] - return batch_dict - - -def get_wds_dataset( - args, - model_cfg, - is_train, - audio_ext="flac", - text_ext="json", - max_len=480000, - proportion=1.0, - sizefilepath_=None, - is_local=None, -): - """ - Get a dataset for wdsdataloader. - """ - if is_local is None and (not args.remotedata is None): - is_local = not args.remotedata - - input_shards = args.train_data if is_train else args.val_data - assert input_shards is not None - - if not sizefilepath_ is None: - sizefilepath = sizefilepath_ - else: - sizefilepath = os.path.join(os.path.dirname(input_shards[0]), "sizes.json") - - if proportion != 1.0: - num_samples, num_shards, input_shards, _ = sample_prop( - sizefilepath, input_shards, proportion, is_local=is_local - ) - else: - num_samples, num_shards = get_dataset_size( - input_shards, sizefilepath_=sizefilepath_, is_local=is_local - ) - - if not num_samples: - if is_train: - num_samples = args.train_num_samples - if not num_samples: - raise RuntimeError( - "Currently, number of dataset samples must be specified for training dataset. " - "Please specify via `--train-num-samples` if no dataset length info present." - ) - else: - num_samples = ( - args.val_num_samples or 0 - ) # eval will just exhaust the iterator if not specified - - pipeline = [wds.SimpleShardList(input_shards)] - # at this point we have an iterator over all the shards - # TODO: (yusong): add a if statement of distributed. If not, we don't need to split_by_node - if is_train or args.parallel_eval: - pipeline.extend( - [ - wds.detshuffle( - bufsize=_SHARD_SHUFFLE_SIZE, - initial=_SHARD_SHUFFLE_INITIAL, - seed=args.seed, - ), - wds.split_by_node, - wds.split_by_worker, - # at this point, we have an iterator over the shards assigned to each worker at each node - wds.tarfile_to_samples(handler=log_and_continue), - wds.shuffle( - bufsize=_SAMPLE_SHUFFLE_SIZE, - initial=_SAMPLE_SHUFFLE_INITIAL, - rng=random.Random(args.seed), - ), - # wds.repeatedly, # FIXME determine if this is beneficial - ] - ) - else: - pipeline.extend( - [ - wds.split_by_worker, - # at this point, we have an iterator over the shards assigned to each worker - wds.tarfile_to_samples(handler=log_and_continue), - ] - ) - pipeline.append( - wds.map( - partial( - preprocess, - audio_ext=audio_ext, - text_ext=text_ext, - max_len=max_len, - audio_cfg=model_cfg["audio_cfg"], - class_index_dict=copy.deepcopy(args.class_index_dict), - data_filling=args.data_filling, - data_truncating=args.data_truncating, - text_augment_selection=args.text_augment_selection, - ) - ), - ) - - pipeline.append( - wds.batched( - args.batch_size, - partial=not (is_train or args.parallel_eval), - collation_fn=collate_fn, - ) - ) - - dataset = wds.DataPipeline(*pipeline) - if is_train or args.parallel_eval: - # (yusong): Currently parallel evaluation will be not precise as we are repeat the last few samples. - # (yusong): See comments below. 
- # roll over and repeat a few samples to get same number of full batches on each node - global_batch_size = args.batch_size * args.world_size - num_batches = math.ceil(num_samples / global_batch_size) - num_workers = max(1, args.workers) - num_worker_batches = math.ceil( - num_batches / num_workers - ) # per dataloader worker - num_batches = num_worker_batches * num_workers - num_samples = num_batches * global_batch_size - dataset = dataset.with_epoch( - num_worker_batches - ) # each worker is iterating over this - else: - # last batches are partial, eval is done on single (master) node - num_batches = math.ceil(num_samples / args.batch_size) - - kwargs = {} - if args.horovod: # multi-node training on summit - kwargs["multiprocessing_context"] = "forkserver" - - dataloader = wds.WebLoader( - dataset, batch_size=None, shuffle=False, num_workers=args.workers, **kwargs - ) - - # FIXME not clear which approach is better, with_epoch before vs after dataloader? - # hoping to resolve via https://github.com/webdataset/webdataset/issues/169 - # if is_train: - # # roll over and repeat a few samples to get same number of full batches on each node - # global_batch_size = args.batch_size * args.world_size - # num_batches = math.ceil(num_samples / global_batch_size) - # num_workers = max(1, args.workers) - # num_batches = math.ceil(num_batches / num_workers) * num_workers - # num_samples = num_batches * global_batch_size - # dataloader = dataloader.with_epoch(num_batches) - # else: - # # last batches are partial, eval is done on single (master) node - # num_batches = math.ceil(num_samples / args.batch_size) - - # add meta-data to dataloader instance for convenience - dataloader.num_batches = num_batches - dataloader.num_samples = num_samples - - return DataInfo(dataloader, None) - - -def wds_batch_list2dict( - batch, - keys=[ - "__url__", - "__key__", - "waveform", - "text", - "raw_text", - "audio_name", - "text_name", - "audio_orig_sr", - ], -): - """ - Return a dictionary of the batch, with keys as the names of the fields. 
- """ - assert len(keys) == len( - batch - ), "batch must have same number of keys as keys argument" - return {keys[i]: batch[i] for i in range(len(batch))} - - -def get_csv_dataset(args, preprocess_fn, is_train): - input_filename = args.train_data if is_train else args.val_data - assert input_filename - dataset = CsvDataset( - input_filename, - preprocess_fn, - img_key=args.csv_img_key, - caption_key=args.csv_caption_key, - sep=args.csv_separator, - ) - num_samples = len(dataset) - sampler = DistributedSampler(dataset) if args.distributed and is_train else None - shuffle = is_train and sampler is None - - dataloader = DataLoader( - dataset, - batch_size=args.batch_size, - shuffle=shuffle, - num_workers=args.workers, - pin_memory=True, - sampler=sampler, - drop_last=is_train, - ) - dataloader.num_samples = num_samples - dataloader.num_batches = len(dataloader) - - return DataInfo(dataloader, sampler) - - -def get_toy_dataset(args, model_cfg, is_train): - index_path = args.train_data if is_train else args.val_data - ipc_path = args.train_ipc if is_train else args.val_ipc - assert index_path and ipc_path - eval_mode = not is_train - dataset = ToyDataset(index_path, ipc_path, model_cfg, eval_mode=eval_mode) - - num_samples = len(dataset) - sampler = ( - DistributedSampler(dataset, shuffle=False) - if args.distributed and is_train - else None - ) - - dataloader = DataLoader( - dataset, - batch_size=args.batch_size, - shuffle=False, - num_workers=args.workers, - sampler=sampler, - drop_last=is_train, - ) - dataloader.num_samples = num_samples - dataloader.num_batches = len(dataloader) - - return DataInfo(dataloader, sampler) - - -def get_dataset_fn(data_path, dataset_type): - if dataset_type == "webdataset": - return get_wds_dataset - elif dataset_type == "csv": - return get_csv_dataset - elif dataset_type == "auto": - ext = data_path.split(".")[-1] - if ext in ["csv", "tsv"]: - return get_csv_dataset - elif ext in ["tar"]: - return get_wds_dataset - else: - raise ValueError( - f"Tried to figure out dataset type, but failed for extention {ext}." 
- ) - elif dataset_type == "toy": - return get_toy_dataset - else: - raise ValueError(f"Unsupported dataset type: {dataset_type}") - - -def get_data(args, model_cfg): - data = {} - - args.class_index_dict = load_class_label(args.class_label_path) - - if args.datasetinfos is None: - args.datasetinfos = ["train", "unbalanced_train", "balanced_train"] - if args.dataset_type == "webdataset": - args.train_data = get_tar_path_from_dataset_name( - args.datasetnames, - args.datasetinfos, - islocal=not args.remotedata, - proportion=args.dataset_proportion, - dataset_path=args.datasetpath, - full_dataset=args.full_train_dataset, - ) - - if args.full_train_dataset is None: - args.full_train_dataset = [] - if args.exclude_eval_dataset is None: - args.exclude_eval_dataset = [] - excluded_eval_datasets = args.full_train_dataset + args.exclude_eval_dataset - - val_dataset_names = ( - [n for n in args.datasetnames if n not in excluded_eval_datasets] - if excluded_eval_datasets - else args.datasetnames - ) - args.val_dataset_names = val_dataset_names - args.val_data = get_tar_path_from_dataset_name( - val_dataset_names, - ["valid", "test", "eval"], - islocal=not args.remotedata, - proportion=1, - dataset_path=args.datasetpath, - full_dataset=None, - ) - - if args.train_data: - data["train"] = get_dataset_fn(args.train_data, args.dataset_type)( - args, model_cfg, is_train=True - ) - - if args.val_data: - data["val"] = get_dataset_fn(args.val_data, args.dataset_type)( - args, model_cfg, is_train=False - ) - - return data diff --git a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/training/infer_demo.py b/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/training/infer_demo.py deleted file mode 100644 index 7d1f4784898dbfeb69affefb6f624711adc8cb42..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/training/infer_demo.py +++ /dev/null @@ -1,105 +0,0 @@ -import sys - -import os -import torch -import librosa -from open_clip import create_model -from training.data import get_audio_features -from training.data import int16_to_float32, float32_to_int16 -from transformers import RobertaTokenizer - -tokenize = RobertaTokenizer.from_pretrained("roberta-base") - - -def tokenizer(text): - result = tokenize( - text, - padding="max_length", - truncation=True, - max_length=77, - return_tensors="pt", - ) - return {k: v.squeeze(0) for k, v in result.items()} - - -PRETRAINED_PATH = "/mnt/fast/nobackup/users/hl01486/projects/contrastive_pretraining/CLAP/assets/checkpoints/epoch_top_0_audioset_no_fusion.pt" -WAVE_48k_PATH = "/mnt/fast/nobackup/users/hl01486/projects/contrastive_pretraining/CLAP/assets/audio/machine.wav" - - -def infer_text(): - device = "cuda:0" if torch.cuda.is_available() else "cpu" - precision = "fp32" - amodel = "HTSAT-tiny" # or 'PANN-14' - tmodel = "roberta" # the best text encoder in our training - enable_fusion = False # False if you do not want to use the fusion model - fusion_type = "aff_2d" - pretrained = PRETRAINED_PATH - - model, model_cfg = create_model( - amodel, - tmodel, - pretrained, - precision=precision, - device=device, - enable_fusion=enable_fusion, - fusion_type=fusion_type, - ) - # load the text, can be a list (i.e. 
batch size) - text_data = ["I love the contrastive learning", "I love the pretrain model"] - # tokenize for roberta, if you want to tokenize for another text encoder, please refer to data.py#L43-90 - text_data = tokenizer(text_data) - - text_embed = model.get_text_embedding(text_data) - print(text_embed.size()) - - -def infer_audio(): - - device = "cuda:0" if torch.cuda.is_available() else "cpu" - precision = "fp32" - amodel = "HTSAT-tiny" # or 'PANN-14' - tmodel = "roberta" # the best text encoder in our training - enable_fusion = False # False if you do not want to use the fusion model - fusion_type = "aff_2d" - pretrained = PRETRAINED_PATH - - model, model_cfg = create_model( - amodel, - tmodel, - pretrained, - precision=precision, - device=device, - enable_fusion=enable_fusion, - fusion_type=fusion_type, - ) - - # load the waveform of the shape (T,), should resample to 48000 - audio_waveform, sr = librosa.load(WAVE_48k_PATH, sr=48000) - # quantize - audio_waveform = int16_to_float32(float32_to_int16(audio_waveform)) - audio_waveform = torch.from_numpy(audio_waveform).float() - audio_dict = {} - - # the 'fusion' truncate mode can be changed to 'rand_trunc' if run in unfusion mode - import ipdb - - ipdb.set_trace() - audio_dict = get_audio_features( - audio_dict, - audio_waveform, - 480000, - data_truncating="fusion", - data_filling="repeatpad", - audio_cfg=model_cfg["audio_cfg"], - ) - # can send a list to the model, to process many audio tracks in one time (i.e. batch size) - audio_embed = model.get_audio_embedding([audio_dict]) - print(audio_embed.size()) - import ipdb - - ipdb.set_trace() - - -if __name__ == "__main__": - infer_text() - infer_audio() diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/data_gen/tts/wav_processors/common_processors.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/data_gen/tts/wav_processors/common_processors.py deleted file mode 100644 index 5cf79cfd118bc8ab13355ff57435a244688e4b22..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/data_gen/tts/wav_processors/common_processors.py +++ /dev/null @@ -1,86 +0,0 @@ -import os -import subprocess -import librosa -import numpy as np -from text_to_speech.data_gen.tts.wav_processors.base_processor import BaseWavProcessor, register_wav_processors -from text_to_speech.utils.audio import trim_long_silences -from text_to_speech.utils.audio.io import save_wav -from text_to_speech.utils.audio.rnnoise import rnnoise -from text_to_speech.utils.commons.hparams import hparams - - -@register_wav_processors(name='sox_to_wav') -class ConvertToWavProcessor(BaseWavProcessor): - @property - def name(self): - return 'ToWav' - - def process(self, input_fn, sr, tmp_dir, processed_dir, item_name, preprocess_args): - if input_fn[-4:] == '.wav': - return input_fn, sr - else: - output_fn = self.output_fn(input_fn) - subprocess.check_call(f'sox -v 0.95 "{input_fn}" -t wav "{output_fn}"', shell=True) - return output_fn, sr - - -@register_wav_processors(name='sox_resample') -class ResampleProcessor(BaseWavProcessor): - @property - def name(self): - return 'Resample' - - def process(self, input_fn, sr, tmp_dir, processed_dir, item_name, preprocess_args): - output_fn = self.output_fn(input_fn) - sr_file = librosa.core.get_samplerate(input_fn) - if sr != sr_file: - subprocess.check_call(f'sox -v 0.95 "{input_fn}" -r{sr} "{output_fn}"', shell=True) - y, _ = librosa.core.load(input_fn, sr=sr) - y, _ = librosa.effects.trim(y) - save_wav(y, output_fn, sr) - return output_fn, sr - else: - return input_fn, 
sr - - -@register_wav_processors(name='trim_sil') -class TrimSILProcessor(BaseWavProcessor): - @property - def name(self): - return 'TrimSIL' - - def process(self, input_fn, sr, tmp_dir, processed_dir, item_name, preprocess_args): - output_fn = self.output_fn(input_fn) - y, _ = librosa.core.load(input_fn, sr=sr) - y, _ = librosa.effects.trim(y) - save_wav(y, output_fn, sr) - return output_fn - - -@register_wav_processors(name='trim_all_sil') -class TrimAllSILProcessor(BaseWavProcessor): - @property - def name(self): - return 'TrimSIL' - - def process(self, input_fn, sr, tmp_dir, processed_dir, item_name, preprocess_args): - output_fn = self.output_fn(input_fn) - y, audio_mask, _ = trim_long_silences( - input_fn, vad_max_silence_length=preprocess_args.get('vad_max_silence_length', 12)) - save_wav(y, output_fn, sr) - if preprocess_args['save_sil_mask']: - os.makedirs(f'{processed_dir}/sil_mask', exist_ok=True) - np.save(f'{processed_dir}/sil_mask/{item_name}.npy', audio_mask) - return output_fn, sr - - -@register_wav_processors(name='denoise') -class DenoiseProcessor(BaseWavProcessor): - @property - def name(self): - return 'Denoise' - - def process(self, input_fn, sr, tmp_dir, processed_dir, item_name, preprocess_args): - output_fn = self.output_fn(input_fn) - rnnoise(input_fn, output_fn, out_sample_rate=sr) - return output_fn, sr diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Lockchat.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Lockchat.py deleted file mode 100644 index 1bce74035403bf8615e68ccfcc9deb7e0151817a..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Lockchat.py +++ /dev/null @@ -1,32 +0,0 @@ -import requests -import os -import json -from ...typing import sha256, Dict, get_type_hints -url = 'http://supertest.lockchat.app' -model = ['gpt-4', 'gpt-3.5-turbo'] -supports_stream = True -needs_auth = False - -def _create_completion(model: str, messages: list, stream: bool, temperature: float = 0.7, **kwargs): - - payload = { - "temperature": 0.7, - "messages": messages, - "model": model, - "stream": True, - } - headers = { - "user-agent": "ChatX/39 CFNetwork/1408.0.4 Darwin/22.5.0", - } - response = requests.post("http://supertest.lockchat.app/v1/chat/completions", - json=payload, headers=headers, stream=True) - for token in response.iter_lines(): - if b'The model: `gpt-4` does not exist' in token: - print('error, retrying...') - _create_completion(model=model, messages=messages, stream=stream, temperature=temperature, **kwargs) - if b"content" in token: - token = json.loads(token.decode('utf-8').split('data: ')[1])['choices'][0]['delta'].get('content') - if token: yield (token) - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) \ No newline at end of file diff --git a/spaces/AdamOswald1/finetuned_diffusion/utils.py b/spaces/AdamOswald1/finetuned_diffusion/utils.py deleted file mode 100644 index ff1c065d186347ca51b47d010a697dbe1814695c..0000000000000000000000000000000000000000 --- a/spaces/AdamOswald1/finetuned_diffusion/utils.py +++ /dev/null @@ -1,6 +0,0 @@ -def is_google_colab(): - try: - import google.colab - return True - except: - return False \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/inputtext/Factory.d.ts 
b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/inputtext/Factory.d.ts deleted file mode 100644 index 24c4f07c0bcaff2b97d7a94963a0a8d9e5e5fedb..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/inputtext/Factory.d.ts +++ /dev/null @@ -1,5 +0,0 @@ -import InputText from './InputText.js'; - -export default function ( - config?: InputText.IConfig -): InputText; diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/pinch/Factory.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/pinch/Factory.d.ts deleted file mode 100644 index 3af0755e0f00da1f815731e886a5a505db183a05..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/pinch/Factory.d.ts +++ /dev/null @@ -1,7 +0,0 @@ -// import * as Phaser from 'phaser'; -import Pinch from "./Pinch"; - -export default function ( - gameObject: Phaser.GameObjects.GameObject | Phaser.Scene, - config?: Pinch.IConfig -): Pinch; \ No newline at end of file diff --git a/spaces/AkitoP/umamusume_bert_vits2/text/chinese_bert.py b/spaces/AkitoP/umamusume_bert_vits2/text/chinese_bert.py deleted file mode 100644 index 581e683b7c9112296770b0094371a594a51b32e9..0000000000000000000000000000000000000000 --- a/spaces/AkitoP/umamusume_bert_vits2/text/chinese_bert.py +++ /dev/null @@ -1,108 +0,0 @@ -import torch -import sys -from transformers import AutoTokenizer, AutoModelForMaskedLM -import os -#如果D:\pyprojs\Bert-VITS2\bert\chinese-roberta-wwm-ext-large\pytorch_model存在就用这个 -local_bert = False -if os.path.exists("./bert/chinese-roberta-wwm-ext-large/pytorch_model.bin"): - local_bert = True - - -tokenizer = AutoTokenizer.from_pretrained("./bert/chinese-roberta-wwm-ext-large") if local_bert else AutoTokenizer.from_pretrained("hfl/chinese-roberta-wwm-ext-large") - -models = dict() - - -def get_bert_feature(text, word2ph, device=None): - if ( - sys.platform == "darwin" - and torch.backends.mps.is_available() - and device == "cpu" - ): - device = "mps" - if not device: - device = "cuda" - if device not in models.keys(): - models[device] = AutoModelForMaskedLM.from_pretrained( - "./bert/chinese-roberta-wwm-ext-large" - ).to(device) if local_bert else AutoModelForMaskedLM.from_pretrained( - "hfl/chinese-roberta-wwm-ext-large" - ).to(device) - with torch.no_grad(): - inputs = tokenizer(text, return_tensors="pt") - for i in inputs: - inputs[i] = inputs[i].to(device) - res = models[device](**inputs, output_hidden_states=True) - res = torch.cat(res["hidden_states"][-3:-2], -1)[0].cpu() - - assert len(word2ph) == len(text) + 2 - word2phone = word2ph - phone_level_feature = [] - for i in range(len(word2phone)): - repeat_feature = res[i].repeat(word2phone[i], 1) - phone_level_feature.append(repeat_feature) - - phone_level_feature = torch.cat(phone_level_feature, dim=0) - - return phone_level_feature.T - - -if __name__ == "__main__": - import torch - - word_level_feature = torch.rand(38, 1024) # 12个词,每个词1024维特征 - word2phone = [ - 1, - 2, - 1, - 2, - 2, - 1, - 2, - 2, - 1, - 2, - 2, - 1, - 2, - 2, - 2, - 2, - 2, - 1, - 1, - 2, - 2, - 1, - 2, - 2, - 2, - 2, - 1, - 2, - 2, - 2, - 2, - 2, - 1, - 2, - 2, - 2, - 2, - 1, - ] - - # 计算总帧数 - total_frames = sum(word2phone) - print(word_level_feature.shape) - print(word2phone) - phone_level_feature = [] - for i in range(len(word2phone)): - print(word_level_feature[i].shape) - - # 对每个词重复word2phone[i]次 - repeat_feature = 
word_level_feature[i].repeat(word2phone[i], 1) - phone_level_feature.append(repeat_feature) - - phone_level_feature = torch.cat(phone_level_feature, dim=0) - print(phone_level_feature.shape) # torch.Size([36, 1024]) diff --git a/spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/train.py b/spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/train.py deleted file mode 100644 index 55eca2d0ad9463415970e09bccab8b722e496704..0000000000000000000000000000000000000000 --- a/spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/train.py +++ /dev/null @@ -1,141 +0,0 @@ -import argparse -import logging -import os - -import torch -import torch.distributed as dist -import torch.nn.functional as F -import torch.utils.data.distributed -from torch.nn.utils import clip_grad_norm_ - -import losses -from backbones import get_model -from dataset import MXFaceDataset, SyntheticDataset, DataLoaderX -from partial_fc import PartialFC -from utils.utils_amp import MaxClipGradScaler -from utils.utils_callbacks import CallBackVerification, CallBackLogging, CallBackModelCheckpoint -from utils.utils_config import get_config -from utils.utils_logging import AverageMeter, init_logging - - -def main(args): - cfg = get_config(args.config) - try: - world_size = int(os.environ['WORLD_SIZE']) - rank = int(os.environ['RANK']) - dist.init_process_group('nccl') - except KeyError: - world_size = 1 - rank = 0 - dist.init_process_group(backend='nccl', init_method="tcp://127.0.0.1:12584", rank=rank, world_size=world_size) - - local_rank = args.local_rank - torch.cuda.set_device(local_rank) - os.makedirs(cfg.output, exist_ok=True) - init_logging(rank, cfg.output) - - if cfg.rec == "synthetic": - train_set = SyntheticDataset(local_rank=local_rank) - else: - train_set = MXFaceDataset(root_dir=cfg.rec, local_rank=local_rank) - - train_sampler = torch.utils.data.distributed.DistributedSampler(train_set, shuffle=True) - train_loader = DataLoaderX( - local_rank=local_rank, dataset=train_set, batch_size=cfg.batch_size, - sampler=train_sampler, num_workers=2, pin_memory=True, drop_last=True) - backbone = get_model(cfg.network, dropout=0.0, fp16=cfg.fp16, num_features=cfg.embedding_size).to(local_rank) - - if cfg.resume: - try: - backbone_pth = os.path.join(cfg.output, "backbone.pth") - backbone.load_state_dict(torch.load(backbone_pth, map_location=torch.device(local_rank))) - if rank == 0: - logging.info("backbone resume successfully!") - except (FileNotFoundError, KeyError, IndexError, RuntimeError): - if rank == 0: - logging.info("resume fail, backbone init successfully!") - - backbone = torch.nn.parallel.DistributedDataParallel( - module=backbone, broadcast_buffers=False, device_ids=[local_rank]) - backbone.train() - margin_softmax = losses.get_loss(cfg.loss) - module_partial_fc = PartialFC( - rank=rank, local_rank=local_rank, world_size=world_size, resume=cfg.resume, - batch_size=cfg.batch_size, margin_softmax=margin_softmax, num_classes=cfg.num_classes, - sample_rate=cfg.sample_rate, embedding_size=cfg.embedding_size, prefix=cfg.output) - - opt_backbone = torch.optim.SGD( - params=[{'params': backbone.parameters()}], - lr=cfg.lr / 512 * cfg.batch_size * world_size, - momentum=0.9, weight_decay=cfg.weight_decay) - opt_pfc = torch.optim.SGD( - params=[{'params': module_partial_fc.parameters()}], - lr=cfg.lr / 512 * cfg.batch_size * world_size, - momentum=0.9, weight_decay=cfg.weight_decay) - - num_image = len(train_set) - total_batch_size = cfg.batch_size * world_size - cfg.warmup_step = num_image // total_batch_size 
* cfg.warmup_epoch - cfg.total_step = num_image // total_batch_size * cfg.num_epoch - - def lr_step_func(current_step): - cfg.decay_step = [x * num_image // total_batch_size for x in cfg.decay_epoch] - if current_step < cfg.warmup_step: - return current_step / cfg.warmup_step - else: - return 0.1 ** len([m for m in cfg.decay_step if m <= current_step]) - - scheduler_backbone = torch.optim.lr_scheduler.LambdaLR( - optimizer=opt_backbone, lr_lambda=lr_step_func) - scheduler_pfc = torch.optim.lr_scheduler.LambdaLR( - optimizer=opt_pfc, lr_lambda=lr_step_func) - - for key, value in cfg.items(): - num_space = 25 - len(key) - logging.info(": " + key + " " * num_space + str(value)) - - val_target = cfg.val_targets - callback_verification = CallBackVerification(2000, rank, val_target, cfg.rec) - callback_logging = CallBackLogging(50, rank, cfg.total_step, cfg.batch_size, world_size, None) - callback_checkpoint = CallBackModelCheckpoint(rank, cfg.output) - - loss = AverageMeter() - start_epoch = 0 - global_step = 0 - grad_amp = MaxClipGradScaler(cfg.batch_size, 128 * cfg.batch_size, growth_interval=100) if cfg.fp16 else None - for epoch in range(start_epoch, cfg.num_epoch): - train_sampler.set_epoch(epoch) - for step, (img, label) in enumerate(train_loader): - global_step += 1 - features = F.normalize(backbone(img)) - x_grad, loss_v = module_partial_fc.forward_backward(label, features, opt_pfc) - if cfg.fp16: - features.backward(grad_amp.scale(x_grad)) - grad_amp.unscale_(opt_backbone) - clip_grad_norm_(backbone.parameters(), max_norm=5, norm_type=2) - grad_amp.step(opt_backbone) - grad_amp.update() - else: - features.backward(x_grad) - clip_grad_norm_(backbone.parameters(), max_norm=5, norm_type=2) - opt_backbone.step() - - opt_pfc.step() - module_partial_fc.update() - opt_backbone.zero_grad() - opt_pfc.zero_grad() - loss.update(loss_v, 1) - callback_logging(global_step, loss, epoch, cfg.fp16, scheduler_backbone.get_last_lr()[0], grad_amp) - callback_verification(global_step, backbone) - scheduler_backbone.step() - scheduler_pfc.step() - callback_checkpoint(global_step, backbone, module_partial_fc) - dist.destroy_process_group() - - -if __name__ == "__main__": - torch.backends.cudnn.benchmark = True - parser = argparse.ArgumentParser(description='PyTorch ArcFace Training') - parser.add_argument('config', type=str, help='py config file') - parser.add_argument('--local_rank', type=int, default=0, help='local_rank') - main(parser.parse_args()) diff --git a/spaces/Altinas/vits-uma-genshin-honkais/commons.py b/spaces/Altinas/vits-uma-genshin-honkais/commons.py deleted file mode 100644 index 40fcc05364d4815971f5c6f9dbb8dcef8e3ec1e9..0000000000000000000000000000000000000000 --- a/spaces/Altinas/vits-uma-genshin-honkais/commons.py +++ /dev/null @@ -1,172 +0,0 @@ -import math -import torch -from torch.nn import functional as F -import torch.jit - - -def script_method(fn, _rcb=None): - return fn - - -def script(obj, optimize=True, _frames_up=0, _rcb=None): - return obj - - -torch.jit.script_method = script_method -torch.jit.script = script - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = 
lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. * logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (num_timescales - 1)) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 
0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. / norm_type) - return total_norm diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/audio_diffusion/pipeline_audio_diffusion.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/audio_diffusion/pipeline_audio_diffusion.py deleted file mode 100644 index 74737560cd8ee8167e2c7527ba4a8d08131e58bc..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/audio_diffusion/pipeline_audio_diffusion.py +++ /dev/null @@ -1,329 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -from math import acos, sin -from typing import List, Tuple, Union - -import numpy as np -import torch -from PIL import Image - -from ...models import AutoencoderKL, UNet2DConditionModel -from ...schedulers import DDIMScheduler, DDPMScheduler -from ...utils import randn_tensor -from ..pipeline_utils import AudioPipelineOutput, BaseOutput, DiffusionPipeline, ImagePipelineOutput -from .mel import Mel - - -class AudioDiffusionPipeline(DiffusionPipeline): - """ - Pipeline for audio diffusion. - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods - implemented for all pipelines (downloading, saving, running on a particular device, etc.). - - Parameters: - vqae ([`AutoencoderKL`]): - Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. - unet ([`UNet2DConditionModel`]): - A `UNet2DConditionModel` to denoise the encoded image latents. - mel ([`Mel`]): - Transform audio into a spectrogram. - scheduler ([`DDIMScheduler`] or [`DDPMScheduler`]): - A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of - [`DDIMScheduler`] or [`DDPMScheduler`]. - """ - - _optional_components = ["vqvae"] - - def __init__( - self, - vqvae: AutoencoderKL, - unet: UNet2DConditionModel, - mel: Mel, - scheduler: Union[DDIMScheduler, DDPMScheduler], - ): - super().__init__() - self.register_modules(unet=unet, scheduler=scheduler, mel=mel, vqvae=vqvae) - - def get_default_steps(self) -> int: - """Returns default number of steps recommended for inference. - - Returns: - `int`: - The number of steps. 
- """ - return 50 if isinstance(self.scheduler, DDIMScheduler) else 1000 - - @torch.no_grad() - def __call__( - self, - batch_size: int = 1, - audio_file: str = None, - raw_audio: np.ndarray = None, - slice: int = 0, - start_step: int = 0, - steps: int = None, - generator: torch.Generator = None, - mask_start_secs: float = 0, - mask_end_secs: float = 0, - step_generator: torch.Generator = None, - eta: float = 0, - noise: torch.Tensor = None, - encoding: torch.Tensor = None, - return_dict=True, - ) -> Union[ - Union[AudioPipelineOutput, ImagePipelineOutput], - Tuple[List[Image.Image], Tuple[int, List[np.ndarray]]], - ]: - """ - The call function to the pipeline for generation. - - Args: - batch_size (`int`): - Number of samples to generate. - audio_file (`str`): - An audio file that must be on disk due to [Librosa](https://librosa.org/) limitation. - raw_audio (`np.ndarray`): - The raw audio file as a NumPy array. - slice (`int`): - Slice number of audio to convert. - start_step (int): - Step to start diffusion from. - steps (`int`): - Number of denoising steps (defaults to `50` for DDIM and `1000` for DDPM). - generator (`torch.Generator`): - A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make - generation deterministic. - mask_start_secs (`float`): - Number of seconds of audio to mask (not generate) at start. - mask_end_secs (`float`): - Number of seconds of audio to mask (not generate) at end. - step_generator (`torch.Generator`): - A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) used to denoise. - None - eta (`float`): - Corresponds to parameter eta (η) from the [DDIM](https://arxiv.org/abs/2010.02502) paper. Only applies - to the [`~schedulers.DDIMScheduler`], and is ignored in other schedulers. - noise (`torch.Tensor`): - A noise tensor of shape `(batch_size, 1, height, width)` or `None`. - encoding (`torch.Tensor`): - A tensor for [`UNet2DConditionModel`] of shape `(batch_size, seq_length, cross_attention_dim)`. - return_dict (`bool`): - Whether or not to return a [`AudioPipelineOutput`], [`ImagePipelineOutput`] or a plain tuple. - - Examples: - - For audio diffusion: - - ```py - import torch - from IPython.display import Audio - from diffusers import DiffusionPipeline - - device = "cuda" if torch.cuda.is_available() else "cpu" - pipe = DiffusionPipeline.from_pretrained("teticio/audio-diffusion-256").to(device) - - output = pipe() - display(output.images[0]) - display(Audio(output.audios[0], rate=mel.get_sample_rate())) - ``` - - For latent audio diffusion: - - ```py - import torch - from IPython.display import Audio - from diffusers import DiffusionPipeline - - device = "cuda" if torch.cuda.is_available() else "cpu" - pipe = DiffusionPipeline.from_pretrained("teticio/latent-audio-diffusion-256").to(device) - - output = pipe() - display(output.images[0]) - display(Audio(output.audios[0], rate=pipe.mel.get_sample_rate())) - ``` - - For other tasks like variation, inpainting, outpainting, etc: - - ```py - output = pipe( - raw_audio=output.audios[0, 0], - start_step=int(pipe.get_default_steps() / 2), - mask_start_secs=1, - mask_end_secs=1, - ) - display(output.images[0]) - display(Audio(output.audios[0], rate=pipe.mel.get_sample_rate())) - ``` - - Returns: - `List[PIL Image]`: - A list of Mel spectrograms (`float`, `List[np.ndarray]`) with the sample rate and raw audio. 
- """ - - steps = steps or self.get_default_steps() - self.scheduler.set_timesteps(steps) - step_generator = step_generator or generator - # For backwards compatibility - if type(self.unet.config.sample_size) == int: - self.unet.config.sample_size = (self.unet.config.sample_size, self.unet.config.sample_size) - if noise is None: - noise = randn_tensor( - ( - batch_size, - self.unet.config.in_channels, - self.unet.config.sample_size[0], - self.unet.config.sample_size[1], - ), - generator=generator, - device=self.device, - ) - images = noise - mask = None - - if audio_file is not None or raw_audio is not None: - self.mel.load_audio(audio_file, raw_audio) - input_image = self.mel.audio_slice_to_image(slice) - input_image = np.frombuffer(input_image.tobytes(), dtype="uint8").reshape( - (input_image.height, input_image.width) - ) - input_image = (input_image / 255) * 2 - 1 - input_images = torch.tensor(input_image[np.newaxis, :, :], dtype=torch.float).to(self.device) - - if self.vqvae is not None: - input_images = self.vqvae.encode(torch.unsqueeze(input_images, 0)).latent_dist.sample( - generator=generator - )[0] - input_images = self.vqvae.config.scaling_factor * input_images - - if start_step > 0: - images[0, 0] = self.scheduler.add_noise(input_images, noise, self.scheduler.timesteps[start_step - 1]) - - pixels_per_second = ( - self.unet.config.sample_size[1] * self.mel.get_sample_rate() / self.mel.x_res / self.mel.hop_length - ) - mask_start = int(mask_start_secs * pixels_per_second) - mask_end = int(mask_end_secs * pixels_per_second) - mask = self.scheduler.add_noise(input_images, noise, torch.tensor(self.scheduler.timesteps[start_step:])) - - for step, t in enumerate(self.progress_bar(self.scheduler.timesteps[start_step:])): - if isinstance(self.unet, UNet2DConditionModel): - model_output = self.unet(images, t, encoding)["sample"] - else: - model_output = self.unet(images, t)["sample"] - - if isinstance(self.scheduler, DDIMScheduler): - images = self.scheduler.step( - model_output=model_output, - timestep=t, - sample=images, - eta=eta, - generator=step_generator, - )["prev_sample"] - else: - images = self.scheduler.step( - model_output=model_output, - timestep=t, - sample=images, - generator=step_generator, - )["prev_sample"] - - if mask is not None: - if mask_start > 0: - images[:, :, :, :mask_start] = mask[:, step, :, :mask_start] - if mask_end > 0: - images[:, :, :, -mask_end:] = mask[:, step, :, -mask_end:] - - if self.vqvae is not None: - # 0.18215 was scaling factor used in training to ensure unit variance - images = 1 / self.vqvae.config.scaling_factor * images - images = self.vqvae.decode(images)["sample"] - - images = (images / 2 + 0.5).clamp(0, 1) - images = images.cpu().permute(0, 2, 3, 1).numpy() - images = (images * 255).round().astype("uint8") - images = list( - (Image.fromarray(_[:, :, 0]) for _ in images) - if images.shape[3] == 1 - else (Image.fromarray(_, mode="RGB").convert("L") for _ in images) - ) - - audios = [self.mel.image_to_audio(_) for _ in images] - if not return_dict: - return images, (self.mel.get_sample_rate(), audios) - - return BaseOutput(**AudioPipelineOutput(np.array(audios)[:, np.newaxis, :]), **ImagePipelineOutput(images)) - - @torch.no_grad() - def encode(self, images: List[Image.Image], steps: int = 50) -> np.ndarray: - """ - Reverse the denoising step process to recover a noisy image from the generated image. - - Args: - images (`List[PIL Image]`): - List of images to encode. - steps (`int`): - Number of encoding steps to perform (defaults to `50`). 
- - Returns: - `np.ndarray`: - A noise tensor of shape `(batch_size, 1, height, width)`. - """ - - # Only works with DDIM as this method is deterministic - assert isinstance(self.scheduler, DDIMScheduler) - self.scheduler.set_timesteps(steps) - sample = np.array( - [np.frombuffer(image.tobytes(), dtype="uint8").reshape((1, image.height, image.width)) for image in images] - ) - sample = (sample / 255) * 2 - 1 - sample = torch.Tensor(sample).to(self.device) - - for t in self.progress_bar(torch.flip(self.scheduler.timesteps, (0,))): - prev_timestep = t - self.scheduler.config.num_train_timesteps // self.scheduler.num_inference_steps - alpha_prod_t = self.scheduler.alphas_cumprod[t] - alpha_prod_t_prev = ( - self.scheduler.alphas_cumprod[prev_timestep] - if prev_timestep >= 0 - else self.scheduler.final_alpha_cumprod - ) - beta_prod_t = 1 - alpha_prod_t - model_output = self.unet(sample, t)["sample"] - pred_sample_direction = (1 - alpha_prod_t_prev) ** (0.5) * model_output - sample = (sample - pred_sample_direction) * alpha_prod_t_prev ** (-0.5) - sample = sample * alpha_prod_t ** (0.5) + beta_prod_t ** (0.5) * model_output - - return sample - - @staticmethod - def slerp(x0: torch.Tensor, x1: torch.Tensor, alpha: float) -> torch.Tensor: - """Spherical Linear intERPolation. - - Args: - x0 (`torch.Tensor`): - The first tensor to interpolate between. - x1 (`torch.Tensor`): - Second tensor to interpolate between. - alpha (`float`): - Interpolation between 0 and 1 - - Returns: - `torch.Tensor`: - The interpolated tensor. - """ - - theta = acos(torch.dot(torch.flatten(x0), torch.flatten(x1)) / torch.norm(x0) / torch.norm(x1)) - return sin((1 - alpha) * theta) * x0 / sin(theta) + sin(alpha * theta) * x1 / sin(theta) diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/consistency_models/test_consistency_models.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/consistency_models/test_consistency_models.py deleted file mode 100644 index 8dce903185053c68012281530414ecdb398c1732..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/consistency_models/test_consistency_models.py +++ /dev/null @@ -1,288 +0,0 @@ -import gc -import unittest - -import numpy as np -import torch -from torch.backends.cuda import sdp_kernel - -from diffusers import ( - CMStochasticIterativeScheduler, - ConsistencyModelPipeline, - UNet2DModel, -) -from diffusers.utils import randn_tensor, slow, torch_device -from diffusers.utils.testing_utils import enable_full_determinism, require_torch_2, require_torch_gpu - -from ..pipeline_params import UNCONDITIONAL_IMAGE_GENERATION_BATCH_PARAMS, UNCONDITIONAL_IMAGE_GENERATION_PARAMS -from ..test_pipelines_common import PipelineTesterMixin - - -enable_full_determinism() - - -class ConsistencyModelPipelineFastTests(PipelineTesterMixin, unittest.TestCase): - pipeline_class = ConsistencyModelPipeline - params = UNCONDITIONAL_IMAGE_GENERATION_PARAMS - batch_params = UNCONDITIONAL_IMAGE_GENERATION_BATCH_PARAMS - - # Override required_optional_params to remove num_images_per_prompt - required_optional_params = frozenset( - [ - "num_inference_steps", - "generator", - "latents", - "output_type", - "return_dict", - "callback", - "callback_steps", - ] - ) - - @property - def dummy_uncond_unet(self): - unet = UNet2DModel.from_pretrained( - "diffusers/consistency-models-test", - subfolder="test_unet", - ) - return unet - - @property - def dummy_cond_unet(self): - unet = 
UNet2DModel.from_pretrained( - "diffusers/consistency-models-test", - subfolder="test_unet_class_cond", - ) - return unet - - def get_dummy_components(self, class_cond=False): - if class_cond: - unet = self.dummy_cond_unet - else: - unet = self.dummy_uncond_unet - - # Default to CM multistep sampler - scheduler = CMStochasticIterativeScheduler( - num_train_timesteps=40, - sigma_min=0.002, - sigma_max=80.0, - ) - - components = { - "unet": unet, - "scheduler": scheduler, - } - - return components - - def get_dummy_inputs(self, device, seed=0): - if str(device).startswith("mps"): - generator = torch.manual_seed(seed) - else: - generator = torch.Generator(device=device).manual_seed(seed) - - inputs = { - "batch_size": 1, - "num_inference_steps": None, - "timesteps": [22, 0], - "generator": generator, - "output_type": "np", - } - - return inputs - - def test_consistency_model_pipeline_multistep(self): - device = "cpu" # ensure determinism for the device-dependent torch.Generator - components = self.get_dummy_components() - pipe = ConsistencyModelPipeline(**components) - pipe = pipe.to(device) - pipe.set_progress_bar_config(disable=None) - - inputs = self.get_dummy_inputs(device) - image = pipe(**inputs).images - assert image.shape == (1, 32, 32, 3) - - image_slice = image[0, -3:, -3:, -1] - expected_slice = np.array([0.3572, 0.6273, 0.4031, 0.3961, 0.4321, 0.5730, 0.5266, 0.4780, 0.5004]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-3 - - def test_consistency_model_pipeline_multistep_class_cond(self): - device = "cpu" # ensure determinism for the device-dependent torch.Generator - components = self.get_dummy_components(class_cond=True) - pipe = ConsistencyModelPipeline(**components) - pipe = pipe.to(device) - pipe.set_progress_bar_config(disable=None) - - inputs = self.get_dummy_inputs(device) - inputs["class_labels"] = 0 - image = pipe(**inputs).images - assert image.shape == (1, 32, 32, 3) - - image_slice = image[0, -3:, -3:, -1] - expected_slice = np.array([0.3572, 0.6273, 0.4031, 0.3961, 0.4321, 0.5730, 0.5266, 0.4780, 0.5004]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-3 - - def test_consistency_model_pipeline_onestep(self): - device = "cpu" # ensure determinism for the device-dependent torch.Generator - components = self.get_dummy_components() - pipe = ConsistencyModelPipeline(**components) - pipe = pipe.to(device) - pipe.set_progress_bar_config(disable=None) - - inputs = self.get_dummy_inputs(device) - inputs["num_inference_steps"] = 1 - inputs["timesteps"] = None - image = pipe(**inputs).images - assert image.shape == (1, 32, 32, 3) - - image_slice = image[0, -3:, -3:, -1] - expected_slice = np.array([0.5004, 0.5004, 0.4994, 0.5008, 0.4976, 0.5018, 0.4990, 0.4982, 0.4987]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-3 - - def test_consistency_model_pipeline_onestep_class_cond(self): - device = "cpu" # ensure determinism for the device-dependent torch.Generator - components = self.get_dummy_components(class_cond=True) - pipe = ConsistencyModelPipeline(**components) - pipe = pipe.to(device) - pipe.set_progress_bar_config(disable=None) - - inputs = self.get_dummy_inputs(device) - inputs["num_inference_steps"] = 1 - inputs["timesteps"] = None - inputs["class_labels"] = 0 - image = pipe(**inputs).images - assert image.shape == (1, 32, 32, 3) - - image_slice = image[0, -3:, -3:, -1] - expected_slice = np.array([0.5004, 0.5004, 0.4994, 0.5008, 0.4976, 0.5018, 0.4990, 0.4982, 0.4987]) - - assert 
np.abs(image_slice.flatten() - expected_slice).max() < 1e-3 - - -@slow -@require_torch_gpu -class ConsistencyModelPipelineSlowTests(unittest.TestCase): - def tearDown(self): - super().tearDown() - gc.collect() - torch.cuda.empty_cache() - - def get_inputs(self, seed=0, get_fixed_latents=False, device="cpu", dtype=torch.float32, shape=(1, 3, 64, 64)): - generator = torch.manual_seed(seed) - - inputs = { - "num_inference_steps": None, - "timesteps": [22, 0], - "class_labels": 0, - "generator": generator, - "output_type": "np", - } - - if get_fixed_latents: - latents = self.get_fixed_latents(seed=seed, device=device, dtype=dtype, shape=shape) - inputs["latents"] = latents - - return inputs - - def get_fixed_latents(self, seed=0, device="cpu", dtype=torch.float32, shape=(1, 3, 64, 64)): - if type(device) == str: - device = torch.device(device) - generator = torch.Generator(device=device).manual_seed(seed) - latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype) - return latents - - def test_consistency_model_cd_multistep(self): - unet = UNet2DModel.from_pretrained("diffusers/consistency_models", subfolder="diffusers_cd_imagenet64_l2") - scheduler = CMStochasticIterativeScheduler( - num_train_timesteps=40, - sigma_min=0.002, - sigma_max=80.0, - ) - pipe = ConsistencyModelPipeline(unet=unet, scheduler=scheduler) - pipe.to(torch_device=torch_device) - pipe.set_progress_bar_config(disable=None) - - inputs = self.get_inputs() - image = pipe(**inputs).images - assert image.shape == (1, 64, 64, 3) - - image_slice = image[0, -3:, -3:, -1] - - expected_slice = np.array([0.0888, 0.0881, 0.0666, 0.0479, 0.0292, 0.0195, 0.0201, 0.0163, 0.0254]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 2e-2 - - def test_consistency_model_cd_onestep(self): - unet = UNet2DModel.from_pretrained("diffusers/consistency_models", subfolder="diffusers_cd_imagenet64_l2") - scheduler = CMStochasticIterativeScheduler( - num_train_timesteps=40, - sigma_min=0.002, - sigma_max=80.0, - ) - pipe = ConsistencyModelPipeline(unet=unet, scheduler=scheduler) - pipe.to(torch_device=torch_device) - pipe.set_progress_bar_config(disable=None) - - inputs = self.get_inputs() - inputs["num_inference_steps"] = 1 - inputs["timesteps"] = None - image = pipe(**inputs).images - assert image.shape == (1, 64, 64, 3) - - image_slice = image[0, -3:, -3:, -1] - - expected_slice = np.array([0.0340, 0.0152, 0.0063, 0.0267, 0.0221, 0.0107, 0.0416, 0.0186, 0.0217]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 2e-2 - - @require_torch_2 - def test_consistency_model_cd_multistep_flash_attn(self): - unet = UNet2DModel.from_pretrained("diffusers/consistency_models", subfolder="diffusers_cd_imagenet64_l2") - scheduler = CMStochasticIterativeScheduler( - num_train_timesteps=40, - sigma_min=0.002, - sigma_max=80.0, - ) - pipe = ConsistencyModelPipeline(unet=unet, scheduler=scheduler) - pipe.to(torch_device=torch_device, torch_dtype=torch.float16) - pipe.set_progress_bar_config(disable=None) - - inputs = self.get_inputs(get_fixed_latents=True, device=torch_device) - # Ensure usage of flash attention in torch 2.0 - with sdp_kernel(enable_flash=True, enable_math=False, enable_mem_efficient=False): - image = pipe(**inputs).images - assert image.shape == (1, 64, 64, 3) - - image_slice = image[0, -3:, -3:, -1] - - expected_slice = np.array([0.1875, 0.1428, 0.1289, 0.2151, 0.2092, 0.1477, 0.1877, 0.1641, 0.1353]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-3 - - @require_torch_2 - def 
test_consistency_model_cd_onestep_flash_attn(self): - unet = UNet2DModel.from_pretrained("diffusers/consistency_models", subfolder="diffusers_cd_imagenet64_l2") - scheduler = CMStochasticIterativeScheduler( - num_train_timesteps=40, - sigma_min=0.002, - sigma_max=80.0, - ) - pipe = ConsistencyModelPipeline(unet=unet, scheduler=scheduler) - pipe.to(torch_device=torch_device, torch_dtype=torch.float16) - pipe.set_progress_bar_config(disable=None) - - inputs = self.get_inputs(get_fixed_latents=True, device=torch_device) - inputs["num_inference_steps"] = 1 - inputs["timesteps"] = None - # Ensure usage of flash attention in torch 2.0 - with sdp_kernel(enable_flash=True, enable_math=False, enable_mem_efficient=False): - image = pipe(**inputs).images - assert image.shape == (1, 64, 64, 3) - - image_slice = image[0, -3:, -3:, -1] - - expected_slice = np.array([0.1663, 0.1948, 0.2275, 0.1680, 0.1204, 0.1245, 0.1858, 0.1338, 0.2095]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-3 diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/apcnet/apcnet_r50-d8_512x1024_80k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/apcnet/apcnet_r50-d8_512x1024_80k_cityscapes.py deleted file mode 100644 index 62a0627ae2e9bb17974068e56ee660093e944e0d..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/apcnet/apcnet_r50-d8_512x1024_80k_cityscapes.py +++ /dev/null @@ -1,4 +0,0 @@ -_base_ = [ - '../_base_/models/apcnet_r50-d8.py', '../_base_/datasets/cityscapes.py', - '../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py' -] diff --git a/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/CLIP/data/yfcc100m.md b/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/CLIP/data/yfcc100m.md deleted file mode 100644 index 575c54bc4bab3972878291c8d227a313c9fc766e..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/CLIP/data/yfcc100m.md +++ /dev/null @@ -1,14 +0,0 @@ -# The YFCC100M Subset - -In the paper, we performed a dataset ablation using a subset of the YFCC100M dataset and showed that the performance remained largely similar. - -The subset contains 14,829,396 images, about 15% of the full dataset, which have been filtered to only keep those with natural languag titles and/or descriptions in English. - -We provide the list of (line number, photo identifier, photo hash) of each image contained in this subset. These correspond to the first three columns in the dataset's metadata TSV file. - -``` -wget https://openaipublic.azureedge.net/clip/data/yfcc100m_subset_data.tsv.bz2 -bunzip2 yfcc100m_subset_data.tsv.bz2 -``` - -Use of the underlying media files is subject to the Creative Commons licenses chosen by their creators/uploaders. For more information about the YFCC100M dataset, visit [the official website](https://multimediacommons.wordpress.com/yfcc100m-core-dataset/). 
\ No newline at end of file diff --git a/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/optimization/losses.py b/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/optimization/losses.py deleted file mode 100644 index bbe076f59af9259fab74ab7c2a02645b1dd3ab93..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/optimization/losses.py +++ /dev/null @@ -1,17 +0,0 @@ -from torch.nn import functional as F - - -def d_clip_loss(x, y, use_cosine=False): - x = F.normalize(x, dim=-1) - y = F.normalize(y, dim=-1) - - if use_cosine: - distance = 1 - (x @ y.t()).squeeze() - else: - distance = (x - y).norm(dim=-1).div(2).arcsin().pow(2).mul(2) - - return distance - - -def range_loss(input): - return (input - input.clamp(-1, 1)).pow(2).mean([1, 2, 3]) diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/ldm/models/diffusion/ddim.py b/spaces/Anonymous-sub/Rerender/ControlNet/ldm/models/diffusion/ddim.py deleted file mode 100644 index 27ead0ea914c64c747b64e690662899fb3801144..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/ldm/models/diffusion/ddim.py +++ /dev/null @@ -1,336 +0,0 @@ -"""SAMPLING ONLY.""" - -import torch -import numpy as np -from tqdm import tqdm - -from ldm.modules.diffusionmodules.util import make_ddim_sampling_parameters, make_ddim_timesteps, noise_like, extract_into_tensor - - -class DDIMSampler(object): - def __init__(self, model, schedule="linear", **kwargs): - super().__init__() - self.model = model - self.ddpm_num_timesteps = model.num_timesteps - self.schedule = schedule - - def register_buffer(self, name, attr): - if type(attr) == torch.Tensor: - if attr.device != torch.device("cuda"): - attr = attr.to(torch.device("cuda")) - setattr(self, name, attr) - - def make_schedule(self, ddim_num_steps, ddim_discretize="uniform", ddim_eta=0., verbose=True): - self.ddim_timesteps = make_ddim_timesteps(ddim_discr_method=ddim_discretize, num_ddim_timesteps=ddim_num_steps, - num_ddpm_timesteps=self.ddpm_num_timesteps,verbose=verbose) - alphas_cumprod = self.model.alphas_cumprod - assert alphas_cumprod.shape[0] == self.ddpm_num_timesteps, 'alphas have to be defined for each timestep' - to_torch = lambda x: x.clone().detach().to(torch.float32).to(self.model.device) - - self.register_buffer('betas', to_torch(self.model.betas)) - self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod)) - self.register_buffer('alphas_cumprod_prev', to_torch(self.model.alphas_cumprod_prev)) - - # calculations for diffusion q(x_t | x_{t-1}) and others - self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod.cpu()))) - self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod.cpu()))) - self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod.cpu()))) - self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu()))) - self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu() - 1))) - - # ddim sampling parameters - ddim_sigmas, ddim_alphas, ddim_alphas_prev = make_ddim_sampling_parameters(alphacums=alphas_cumprod.cpu(), - ddim_timesteps=self.ddim_timesteps, - eta=ddim_eta,verbose=verbose) - self.register_buffer('ddim_sigmas', ddim_sigmas) - self.register_buffer('ddim_alphas', ddim_alphas) - self.register_buffer('ddim_alphas_prev', ddim_alphas_prev) - self.register_buffer('ddim_sqrt_one_minus_alphas', np.sqrt(1. 
- ddim_alphas)) - sigmas_for_original_sampling_steps = ddim_eta * torch.sqrt( - (1 - self.alphas_cumprod_prev) / (1 - self.alphas_cumprod) * ( - 1 - self.alphas_cumprod / self.alphas_cumprod_prev)) - self.register_buffer('ddim_sigmas_for_original_num_steps', sigmas_for_original_sampling_steps) - - @torch.no_grad() - def sample(self, - S, - batch_size, - shape, - conditioning=None, - callback=None, - normals_sequence=None, - img_callback=None, - quantize_x0=False, - eta=0., - mask=None, - x0=None, - temperature=1., - noise_dropout=0., - score_corrector=None, - corrector_kwargs=None, - verbose=True, - x_T=None, - log_every_t=100, - unconditional_guidance_scale=1., - unconditional_conditioning=None, # this has to come in the same format as the conditioning, # e.g. as encoded tokens, ... - dynamic_threshold=None, - ucg_schedule=None, - **kwargs - ): - if conditioning is not None: - if isinstance(conditioning, dict): - ctmp = conditioning[list(conditioning.keys())[0]] - while isinstance(ctmp, list): ctmp = ctmp[0] - cbs = ctmp.shape[0] - if cbs != batch_size: - print(f"Warning: Got {cbs} conditionings but batch-size is {batch_size}") - - elif isinstance(conditioning, list): - for ctmp in conditioning: - if ctmp.shape[0] != batch_size: - print(f"Warning: Got {cbs} conditionings but batch-size is {batch_size}") - - else: - if conditioning.shape[0] != batch_size: - print(f"Warning: Got {conditioning.shape[0]} conditionings but batch-size is {batch_size}") - - self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=verbose) - # sampling - C, H, W = shape - size = (batch_size, C, H, W) - print(f'Data shape for DDIM sampling is {size}, eta {eta}') - - samples, intermediates = self.ddim_sampling(conditioning, size, - callback=callback, - img_callback=img_callback, - quantize_denoised=quantize_x0, - mask=mask, x0=x0, - ddim_use_original_steps=False, - noise_dropout=noise_dropout, - temperature=temperature, - score_corrector=score_corrector, - corrector_kwargs=corrector_kwargs, - x_T=x_T, - log_every_t=log_every_t, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning, - dynamic_threshold=dynamic_threshold, - ucg_schedule=ucg_schedule - ) - return samples, intermediates - - @torch.no_grad() - def ddim_sampling(self, cond, shape, - x_T=None, ddim_use_original_steps=False, - callback=None, timesteps=None, quantize_denoised=False, - mask=None, x0=None, img_callback=None, log_every_t=100, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None, - unconditional_guidance_scale=1., unconditional_conditioning=None, dynamic_threshold=None, - ucg_schedule=None): - device = self.model.betas.device - b = shape[0] - if x_T is None: - img = torch.randn(shape, device=device) - else: - img = x_T - - if timesteps is None: - timesteps = self.ddpm_num_timesteps if ddim_use_original_steps else self.ddim_timesteps - elif timesteps is not None and not ddim_use_original_steps: - subset_end = int(min(timesteps / self.ddim_timesteps.shape[0], 1) * self.ddim_timesteps.shape[0]) - 1 - timesteps = self.ddim_timesteps[:subset_end] - - intermediates = {'x_inter': [img], 'pred_x0': [img]} - time_range = reversed(range(0,timesteps)) if ddim_use_original_steps else np.flip(timesteps) - total_steps = timesteps if ddim_use_original_steps else timesteps.shape[0] - print(f"Running DDIM Sampling with {total_steps} timesteps") - - iterator = tqdm(time_range, desc='DDIM Sampler', total=total_steps) - - for i, step in enumerate(iterator): - index = 
total_steps - i - 1 - ts = torch.full((b,), step, device=device, dtype=torch.long) - - if mask is not None: - assert x0 is not None - img_orig = self.model.q_sample(x0, ts) # TODO: deterministic forward pass? - img = img_orig * mask + (1. - mask) * img - - if ucg_schedule is not None: - assert len(ucg_schedule) == len(time_range) - unconditional_guidance_scale = ucg_schedule[i] - - outs = self.p_sample_ddim(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps, - quantize_denoised=quantize_denoised, temperature=temperature, - noise_dropout=noise_dropout, score_corrector=score_corrector, - corrector_kwargs=corrector_kwargs, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning, - dynamic_threshold=dynamic_threshold) - img, pred_x0 = outs - if callback: callback(i) - if img_callback: img_callback(pred_x0, i) - - if index % log_every_t == 0 or index == total_steps - 1: - intermediates['x_inter'].append(img) - intermediates['pred_x0'].append(pred_x0) - - return img, intermediates - - @torch.no_grad() - def p_sample_ddim(self, x, c, t, index, repeat_noise=False, use_original_steps=False, quantize_denoised=False, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None, - unconditional_guidance_scale=1., unconditional_conditioning=None, - dynamic_threshold=None): - b, *_, device = *x.shape, x.device - - if unconditional_conditioning is None or unconditional_guidance_scale == 1.: - model_output = self.model.apply_model(x, t, c) - else: - x_in = torch.cat([x] * 2) - t_in = torch.cat([t] * 2) - if isinstance(c, dict): - assert isinstance(unconditional_conditioning, dict) - c_in = dict() - for k in c: - if isinstance(c[k], list): - c_in[k] = [torch.cat([ - unconditional_conditioning[k][i], - c[k][i]]) for i in range(len(c[k]))] - else: - c_in[k] = torch.cat([ - unconditional_conditioning[k], - c[k]]) - elif isinstance(c, list): - c_in = list() - assert isinstance(unconditional_conditioning, list) - for i in range(len(c)): - c_in.append(torch.cat([unconditional_conditioning[i], c[i]])) - else: - c_in = torch.cat([unconditional_conditioning, c]) - model_uncond, model_t = self.model.apply_model(x_in, t_in, c_in).chunk(2) - model_output = model_uncond + unconditional_guidance_scale * (model_t - model_uncond) - - if self.model.parameterization == "v": - e_t = self.model.predict_eps_from_z_and_v(x, t, model_output) - else: - e_t = model_output - - if score_corrector is not None: - assert self.model.parameterization == "eps", 'not implemented' - e_t = score_corrector.modify_score(self.model, e_t, x, t, c, **corrector_kwargs) - - alphas = self.model.alphas_cumprod if use_original_steps else self.ddim_alphas - alphas_prev = self.model.alphas_cumprod_prev if use_original_steps else self.ddim_alphas_prev - sqrt_one_minus_alphas = self.model.sqrt_one_minus_alphas_cumprod if use_original_steps else self.ddim_sqrt_one_minus_alphas - sigmas = self.model.ddim_sigmas_for_original_num_steps if use_original_steps else self.ddim_sigmas - # select parameters corresponding to the currently considered timestep - a_t = torch.full((b, 1, 1, 1), alphas[index], device=device) - a_prev = torch.full((b, 1, 1, 1), alphas_prev[index], device=device) - sigma_t = torch.full((b, 1, 1, 1), sigmas[index], device=device) - sqrt_one_minus_at = torch.full((b, 1, 1, 1), sqrt_one_minus_alphas[index],device=device) - - # current prediction for x_0 - if self.model.parameterization != "v": - pred_x0 = (x - sqrt_one_minus_at * e_t) / 
a_t.sqrt() - else: - pred_x0 = self.model.predict_start_from_z_and_v(x, t, model_output) - - if quantize_denoised: - pred_x0, _, *_ = self.model.first_stage_model.quantize(pred_x0) - - if dynamic_threshold is not None: - raise NotImplementedError() - - # direction pointing to x_t - dir_xt = (1. - a_prev - sigma_t**2).sqrt() * e_t - noise = sigma_t * noise_like(x.shape, device, repeat_noise) * temperature - if noise_dropout > 0.: - noise = torch.nn.functional.dropout(noise, p=noise_dropout) - x_prev = a_prev.sqrt() * pred_x0 + dir_xt + noise - return x_prev, pred_x0 - - @torch.no_grad() - def encode(self, x0, c, t_enc, use_original_steps=False, return_intermediates=None, - unconditional_guidance_scale=1.0, unconditional_conditioning=None, callback=None): - num_reference_steps = self.ddpm_num_timesteps if use_original_steps else self.ddim_timesteps.shape[0] - - assert t_enc <= num_reference_steps - num_steps = t_enc - - if use_original_steps: - alphas_next = self.alphas_cumprod[:num_steps] - alphas = self.alphas_cumprod_prev[:num_steps] - else: - alphas_next = self.ddim_alphas[:num_steps] - alphas = torch.tensor(self.ddim_alphas_prev[:num_steps]) - - x_next = x0 - intermediates = [] - inter_steps = [] - for i in tqdm(range(num_steps), desc='Encoding Image'): - t = torch.full((x0.shape[0],), i, device=self.model.device, dtype=torch.long) - if unconditional_guidance_scale == 1.: - noise_pred = self.model.apply_model(x_next, t, c) - else: - assert unconditional_conditioning is not None - e_t_uncond, noise_pred = torch.chunk( - self.model.apply_model(torch.cat((x_next, x_next)), torch.cat((t, t)), - torch.cat((unconditional_conditioning, c))), 2) - noise_pred = e_t_uncond + unconditional_guidance_scale * (noise_pred - e_t_uncond) - - xt_weighted = (alphas_next[i] / alphas[i]).sqrt() * x_next - weighted_noise_pred = alphas_next[i].sqrt() * ( - (1 / alphas_next[i] - 1).sqrt() - (1 / alphas[i] - 1).sqrt()) * noise_pred - x_next = xt_weighted + weighted_noise_pred - if return_intermediates and i % ( - num_steps // return_intermediates) == 0 and i < num_steps - 1: - intermediates.append(x_next) - inter_steps.append(i) - elif return_intermediates and i >= num_steps - 2: - intermediates.append(x_next) - inter_steps.append(i) - if callback: callback(i) - - out = {'x_encoded': x_next, 'intermediate_steps': inter_steps} - if return_intermediates: - out.update({'intermediates': intermediates}) - return x_next, out - - @torch.no_grad() - def stochastic_encode(self, x0, t, use_original_steps=False, noise=None): - # fast, but does not allow for exact reconstruction - # t serves as an index to gather the correct alphas - if use_original_steps: - sqrt_alphas_cumprod = self.sqrt_alphas_cumprod - sqrt_one_minus_alphas_cumprod = self.sqrt_one_minus_alphas_cumprod - else: - sqrt_alphas_cumprod = torch.sqrt(self.ddim_alphas) - sqrt_one_minus_alphas_cumprod = self.ddim_sqrt_one_minus_alphas - - if noise is None: - noise = torch.randn_like(x0) - return (extract_into_tensor(sqrt_alphas_cumprod, t, x0.shape) * x0 + - extract_into_tensor(sqrt_one_minus_alphas_cumprod, t, x0.shape) * noise) - - @torch.no_grad() - def decode(self, x_latent, cond, t_start, unconditional_guidance_scale=1.0, unconditional_conditioning=None, - use_original_steps=False, callback=None): - - timesteps = np.arange(self.ddpm_num_timesteps) if use_original_steps else self.ddim_timesteps - timesteps = timesteps[:t_start] - - time_range = np.flip(timesteps) - total_steps = timesteps.shape[0] - print(f"Running DDIM Sampling with {total_steps} 
timesteps") - - iterator = tqdm(time_range, desc='Decoding image', total=total_steps) - x_dec = x_latent - for i, step in enumerate(iterator): - index = total_steps - i - 1 - ts = torch.full((x_latent.shape[0],), step, device=x_latent.device, dtype=torch.long) - x_dec, _ = self.p_sample_ddim(x_dec, cond, ts, index=index, use_original_steps=use_original_steps, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning) - if callback: callback(i) - return x_dec \ No newline at end of file diff --git a/spaces/Apex-X/Tm/roop/processors/__init__.py b/spaces/Apex-X/Tm/roop/processors/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Arnx/MusicGenXvAKN/audiocraft/models/encodec.py b/spaces/Arnx/MusicGenXvAKN/audiocraft/models/encodec.py deleted file mode 100644 index 69621a695887b0b41614c51cae020f6fd0af221d..0000000000000000000000000000000000000000 --- a/spaces/Arnx/MusicGenXvAKN/audiocraft/models/encodec.py +++ /dev/null @@ -1,302 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from abc import ABC, abstractmethod -import typing as tp - -from einops import rearrange -import torch -from torch import nn - -from .. import quantization as qt - - -class CompressionModel(ABC, nn.Module): - - @abstractmethod - def forward(self, x: torch.Tensor) -> qt.QuantizedResult: - ... - - @abstractmethod - def encode(self, x: torch.Tensor) -> tp.Tuple[torch.Tensor, tp.Optional[torch.Tensor]]: - """See `EncodecModel.encode`""" - ... - - @abstractmethod - def decode(self, codes: torch.Tensor, scale: tp.Optional[torch.Tensor] = None): - """See `EncodecModel.decode`""" - ... - - @property - @abstractmethod - def channels(self) -> int: - ... - - @property - @abstractmethod - def frame_rate(self) -> int: - ... - - @property - @abstractmethod - def sample_rate(self) -> int: - ... - - @property - @abstractmethod - def cardinality(self) -> int: - ... - - @property - @abstractmethod - def num_codebooks(self) -> int: - ... - - @property - @abstractmethod - def total_codebooks(self) -> int: - ... - - @abstractmethod - def set_num_codebooks(self, n: int): - """Set the active number of codebooks used by the quantizer. - """ - ... - - -class EncodecModel(CompressionModel): - """Encodec model operating on the raw waveform. - - Args: - encoder (nn.Module): Encoder network. - decoder (nn.Module): Decoder network. - quantizer (qt.BaseQuantizer): Quantizer network. - frame_rate (int): Frame rate for the latent representation. - sample_rate (int): Audio sample rate. - channels (int): Number of audio channels. - causal (bool): Whether to use a causal version of the model. - renormalize (bool): Whether to renormalize the audio before running the model. - """ - # we need assignement to override the property in the abstract class, - # I couldn't find a better way... 
- frame_rate: int = 0 - sample_rate: int = 0 - channels: int = 0 - - def __init__(self, - encoder: nn.Module, - decoder: nn.Module, - quantizer: qt.BaseQuantizer, - frame_rate: int, - sample_rate: int, - channels: int, - causal: bool = False, - renormalize: bool = False): - super().__init__() - self.encoder = encoder - self.decoder = decoder - self.quantizer = quantizer - self.frame_rate = frame_rate - self.sample_rate = sample_rate - self.channels = channels - self.renormalize = renormalize - self.causal = causal - if self.causal: - # we force disabling here to avoid handling linear overlap of segments - # as supported in original EnCodec codebase. - assert not self.renormalize, 'Causal model does not support renormalize' - - @property - def total_codebooks(self): - """Total number of quantizer codebooks available. - """ - return self.quantizer.total_codebooks - - @property - def num_codebooks(self): - """Active number of codebooks used by the quantizer. - """ - return self.quantizer.num_codebooks - - def set_num_codebooks(self, n: int): - """Set the active number of codebooks used by the quantizer. - """ - self.quantizer.set_num_codebooks(n) - - @property - def cardinality(self): - """Cardinality of each codebook. - """ - return self.quantizer.bins - - def preprocess(self, x: torch.Tensor) -> tp.Tuple[torch.Tensor, tp.Optional[torch.Tensor]]: - scale: tp.Optional[torch.Tensor] - if self.renormalize: - mono = x.mean(dim=1, keepdim=True) - volume = mono.pow(2).mean(dim=2, keepdim=True).sqrt() - scale = 1e-8 + volume - x = x / scale - scale = scale.view(-1, 1) - else: - scale = None - return x, scale - - def postprocess(self, - x: torch.Tensor, - scale: tp.Optional[torch.Tensor] = None) -> torch.Tensor: - if scale is not None: - assert self.renormalize - x = x * scale.view(-1, 1, 1) - return x - - def forward(self, x: torch.Tensor) -> qt.QuantizedResult: - assert x.dim() == 3 - length = x.shape[-1] - x, scale = self.preprocess(x) - - emb = self.encoder(x) - q_res = self.quantizer(emb, self.frame_rate) - out = self.decoder(q_res.x) - - # remove extra padding added by the encoder and decoder - assert out.shape[-1] >= length, (out.shape[-1], length) - out = out[..., :length] - - q_res.x = self.postprocess(out, scale) - - return q_res - - def encode(self, x: torch.Tensor) -> tp.Tuple[torch.Tensor, tp.Optional[torch.Tensor]]: - """Encode the given input tensor to quantized representation along with scale parameter. - - Args: - x (torch.Tensor): Float tensor of shape [B, C, T] - - Returns: - codes, scale (tp.Tuple[torch.Tensor, torch.Tensor]): Tuple composed of: - codes a float tensor of shape [B, K, T] with K the number of codebooks used and T the timestep. - scale a float tensor containing the scale for audio renormalizealization. - """ - assert x.dim() == 3 - x, scale = self.preprocess(x) - emb = self.encoder(x) - codes = self.quantizer.encode(emb) - return codes, scale - - def decode(self, codes: torch.Tensor, scale: tp.Optional[torch.Tensor] = None): - """Decode the given codes to a reconstructed representation, using the scale to perform - audio denormalization if needed. - - Args: - codes (torch.Tensor): Int tensor of shape [B, K, T] - scale (tp.Optional[torch.Tensor]): Float tensor containing the scale value. - - Returns: - out (torch.Tensor): Float tensor of shape [B, C, T], the reconstructed audio. 
- """ - emb = self.quantizer.decode(codes) - out = self.decoder(emb) - out = self.postprocess(out, scale) - # out contains extra padding added by the encoder and decoder - return out - - -class FlattenedCompressionModel(CompressionModel): - """Wraps a CompressionModel and flatten its codebooks, e.g. - instead of returning [B, K, T], return [B, S, T * (K // S)] with - S the number of codebooks per step, and `K // S` the number of 'virtual steps' - for each real time step. - - Args: - model (CompressionModel): compression model to wrap. - codebooks_per_step (int): number of codebooks to keep per step, - this must divide the number of codebooks provided by the wrapped model. - extend_cardinality (bool): if True, and for instance if codebooks_per_step = 1, - if each codebook has a cardinality N, then the first codebook will - use the range [0, N - 1], and the second [N, 2 N - 1] etc. - On decoding, this can lead to potentially invalid sequences. - Any invalid entry will be silently remapped to the proper range - with a modulo. - """ - def __init__(self, model: CompressionModel, codebooks_per_step: int = 1, - extend_cardinality: bool = True): - super().__init__() - self.model = model - self.codebooks_per_step = codebooks_per_step - self.extend_cardinality = extend_cardinality - - @property - def total_codebooks(self): - return self.model.total_codebooks - - @property - def num_codebooks(self): - """Active number of codebooks used by the quantizer. - - ..Warning:: this reports the number of codebooks after the flattening - of the codebooks! - """ - assert self.model.num_codebooks % self.codebooks_per_step == 0 - return self.codebooks_per_step - - def set_num_codebooks(self, n: int): - """Set the active number of codebooks used by the quantizer. - - ..Warning:: this sets the number of codebooks **before** the flattening - of the codebooks. - """ - assert n % self.codebooks_per_step == 0 - self.model.set_num_codebooks(n) - - @property - def num_virtual_steps(self) -> int: - """Return the number of virtual steps, e.g. one real step - will be split into that many steps. - """ - return self.model.num_codebooks // self.codebooks_per_step - - @property - def frame_rate(self) -> int: - return self.model.frame_rate * self.num_virtual_steps - - @property - def sample_rate(self) -> int: - return self.model.sample_rate - - @property - def channels(self) -> int: - return self.model.channels - - @property - def cardinality(self): - """Cardinality of each codebook. - """ - if self.extend_cardinality: - return self.model.cardinality * self.num_virtual_steps - else: - return self.model.cardinality - - def forward(self, x: torch.Tensor) -> qt.QuantizedResult: - raise NotImplementedError("Not supported, use encode and decode.") - - def encode(self, x: torch.Tensor) -> tp.Tuple[torch.Tensor, tp.Optional[torch.Tensor]]: - indices, scales = self.model.encode(x) - B, K, T = indices.shape - indices = rearrange(indices, 'b (k v) t -> b k t v', k=self.codebooks_per_step) - if self.extend_cardinality: - for virtual_step in range(1, self.num_virtual_steps): - indices[..., virtual_step] += self.model.cardinality * virtual_step - indices = rearrange(indices, 'b k t v -> b k (t v)') - return (indices, scales) - - def decode(self, codes: torch.Tensor, scale: tp.Optional[torch.Tensor] = None): - B, K, T = codes.shape - assert T % self.num_virtual_steps == 0 - codes = rearrange(codes, 'b k (t v) -> b (k v) t', v=self.num_virtual_steps) - # We silently ignore potential errors from the LM when - # using extend_cardinality. 
- codes = codes % self.model.cardinality - return self.model.decode(codes, scale) diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/command/alias.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/command/alias.py deleted file mode 100644 index 452a9244ea6766d8cf94425fb583583ef740baee..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/command/alias.py +++ /dev/null @@ -1,78 +0,0 @@ -from distutils.errors import DistutilsOptionError - -from setuptools.command.setopt import edit_config, option_base, config_file - - -def shquote(arg): - """Quote an argument for later parsing by shlex.split()""" - for c in '"', "'", "\\", "#": - if c in arg: - return repr(arg) - if arg.split() != [arg]: - return repr(arg) - return arg - - -class alias(option_base): - """Define a shortcut that invokes one or more commands""" - - description = "define a shortcut to invoke one or more commands" - command_consumes_arguments = True - - user_options = [ - ('remove', 'r', 'remove (unset) the alias'), - ] + option_base.user_options - - boolean_options = option_base.boolean_options + ['remove'] - - def initialize_options(self): - option_base.initialize_options(self) - self.args = None - self.remove = None - - def finalize_options(self): - option_base.finalize_options(self) - if self.remove and len(self.args) != 1: - raise DistutilsOptionError( - "Must specify exactly one argument (the alias name) when " - "using --remove" - ) - - def run(self): - aliases = self.distribution.get_option_dict('aliases') - - if not self.args: - print("Command Aliases") - print("---------------") - for alias in aliases: - print("setup.py alias", format_alias(alias, aliases)) - return - - elif len(self.args) == 1: - alias, = self.args - if self.remove: - command = None - elif alias in aliases: - print("setup.py alias", format_alias(alias, aliases)) - return - else: - print("No alias definition found for %r" % alias) - return - else: - alias = self.args[0] - command = ' '.join(map(shquote, self.args[1:])) - - edit_config(self.filename, {'aliases': {alias: command}}, self.dry_run) - - -def format_alias(name, aliases): - source, command = aliases[name] - if source == config_file('global'): - source = '--global-config ' - elif source == config_file('user'): - source = '--user-config ' - elif source == config_file('local'): - source = '' - else: - source = '--filename=%r' % source - return source + name + ' ' + command diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/grit/modeling/meta_arch/grit.py b/spaces/Awiny/Image2Paragraph/models/grit_src/grit/modeling/meta_arch/grit.py deleted file mode 100644 index 101725fd455e723360eaafc26db37beb226a9233..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/grit/modeling/meta_arch/grit.py +++ /dev/null @@ -1,66 +0,0 @@ -from typing import Dict, List, Optional, Tuple -import torch -from detectron2.config import configurable -from detectron2.structures import ImageList, Instances, Boxes -from detectron2.modeling.meta_arch.build import META_ARCH_REGISTRY -from detectron2.modeling.meta_arch.rcnn import GeneralizedRCNN - - -@META_ARCH_REGISTRY.register() -class GRiT(GeneralizedRCNN): - @configurable - def __init__( - self, - **kwargs): - super().__init__(**kwargs) - assert self.proposal_generator is not None - - @classmethod - def from_config(cls, cfg): - ret = super().from_config(cfg) - return ret 
- - def inference( - self, - batched_inputs: Tuple[Dict[str, torch.Tensor]], - detected_instances: Optional[List[Instances]] = None, - do_postprocess: bool = True, - ): - assert not self.training - assert detected_instances is None - - images = self.preprocess_image(batched_inputs) - features = self.backbone(images.tensor) - proposals, _ = self.proposal_generator(images, features, None) - results, _ = self.roi_heads(features, proposals) - if do_postprocess: - assert not torch.jit.is_scripting(), \ - "Scripting is not supported for postprocess." - return GRiT._postprocess( - results, batched_inputs, images.image_sizes) - else: - return results - - def forward(self, batched_inputs: List[Dict[str, torch.Tensor]]): - if not self.training: - return self.inference(batched_inputs) - - images = self.preprocess_image(batched_inputs) - - gt_instances = [x["instances"].to(self.device) for x in batched_inputs] - - targets_task = batched_inputs[0]['task'] - for anno_per_image in batched_inputs: - assert targets_task == anno_per_image['task'] - - features = self.backbone(images.tensor) - proposals, proposal_losses = self.proposal_generator( - images, features, gt_instances) - proposals, roihead_textdecoder_losses = self.roi_heads( - features, proposals, gt_instances, targets_task=targets_task) - - losses = {} - losses.update(roihead_textdecoder_losses) - losses.update(proposal_losses) - - return losses \ No newline at end of file diff --git a/spaces/BalaBhaskarudu/Balu/README.md b/spaces/BalaBhaskarudu/Balu/README.md deleted file mode 100644 index fdfd9f595330d2c7be5b58790e0f37b278038831..0000000000000000000000000000000000000000 --- a/spaces/BalaBhaskarudu/Balu/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Balu -emoji: 📚 -colorFrom: gray -colorTo: gray -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Benson/text-generation/Examples/Cmo Descargar Bloodbox.md b/spaces/Benson/text-generation/Examples/Cmo Descargar Bloodbox.md deleted file mode 100644 index 363e45eb01e2af770315e1d7c0db7d9266e2817a..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Cmo Descargar Bloodbox.md +++ /dev/null @@ -1,113 +0,0 @@ - -

Ship Simulator PC Download: How to Experience the Realistic Simulation of Sailing Various Vessels

-

Have you ever dreamed of sailing a huge container ship, an icebreaker, a rescue vessel, a tanker, or a cruise liner? Do you want to explore the most beautiful and challenging waters in the world, from Antarctica to Bora Bora? Do you want to feel the thrill of facing the perfect storm, saving endangered whales, or managing a busy port?

-

If you answered yes to any of these questions, then you might be interested in playing a ship simulator game on your PC. A ship simulator game is a type of simulation game that lets you control and operate different kinds of vessels in realistic environments and scenarios. You can learn how to navigate, maneuver, dock, and handle the various situations that real ship captains face every day.

-

how to download bloodbox


Download ->>->>->> https://bltlly.com/2v6LS4



-

In this article, we will show you how to choose the best ship simulator game for your PC, how to download and install it, and how to enjoy the realistic simulation of sailing various vessels. We will also answer some frequently asked questions about ship simulator games. Let's get started!

-

What is a ship simulator game?

-

A brief history of ship simulator games

-

Ship simulator games are not a new phenomenon. They have been around since the early days of computer gaming, going back to the 1980s. Some of the earliest examples of ship simulator games are Harpoon, Shipwreck, Ports of Call, and Silent Service. These games focused on naval warfare, trading, or submarine simulation.

-

As technology advanced, so did the graphics, physics, and realism of ship simulator games. In the 1990s and 2000s, some of the most popular ship simulator games were Titanic: Adventure Out of Time, Virtual Sailor, Ship Simulator, and European Ship Simulator. These games brought more variety, interactivity, and customization to the genre.

- -

The benefits of playing ship simulator games

-

Playing ship simulator games can be fun, relaxing, educational, and rewarding. Here are some of the benefits of playing ship simulator games:

- -

How to choose the best ship simulator game for your PC

-

The features to look for in a ship simulator game

-

There are many ship simulator games available on the market, but not all of them are worth your time and money. To help you choose the best ship simulator game for your PC, here are some of the features to look for:

- -

The top 3 ship simulator games on Steam

-

To save you time and effort, we have selected the top 3 ship simulator games on Steam based on their ratings, reviews, features, and popularity. Here they are:

-

Ship Simulator Extremes

-

Ship Simulator Extremes is one of the most popular and acclaimed ship simulation games on Steam. It was released in 2010 by VSTEP and Paradox Interactive. It features more than 30 vessels, from speedboats to cruise ships; more than 50 missions, from rescue operations to environmental campaigns; more than 40 locations, from Sydney to San Francisco; and realistic weather effects such as rain, fog, wind, and waves.

-

Ship Simulator Extremes also has a campaign mode that lets you experience the stories of real-life captains; a free roam mode that lets you explore the world at your own pace; a multiplayer mode that lets you play with or against other players online; and a mission editor that lets you create your own scenarios. You can also download additional content from the Steam Workshop or the official website.

-

-

Ship Simulator Extremes is available on Steam for $19.99 USD. It requires Windows XP or higher; a 3 GHz processor or better; 2 GB of RAM or more; an NVIDIA GeForce 8800 or better; DirectX 9.0c or higher; 3 GB of disk space or more; and a broadband Internet connection.

-

Ship Simulator Realistic

- -

Ship Simulator Realistic also has a career mode that lets you start as a novice captain and progress through different ranks and licenses; a sandbox mode that lets you customize your vessels and scenarios; a multiplayer mode that lets you cooperate or compete with other players online; and mod support that lets you add your own content to the game.

-

Ship Simulator Realistic is available on Steam for $24.99 USD. It requires Windows 7 or higher; an Intel i5-6400 or better; 8 GB of RAM or more; an NVIDIA GTX 970 or better; DirectX 11 or higher; 10 GB of disk space or more; and a broadband Internet connection.

-

Ships 2022

-

Ships 2022 is one of the most anticipated ship simulation games on Steam. It is expected to be released in late 2022 by Games Box S.A. It features more than 30 vessels, from fishing boats to warships; more than 15 locations, from the Baltic Sea to the Caribbean Sea; more than 200 missions, from fishing to piracy; and realistic graphics, sound, weather, and water.

-

Ships 2022 also has a career mode that lets you build your own fleet and company; a sandbox mode that lets you sail freely and experiment with different vessels and settings; a multiplayer mode that lets you join or host online sessions with other players; and Workshop support that lets you access and share user-generated content.

-

Ships 2022 is available for pre-order on Steam for $29.99 USD. It requires Windows 10 or higher; an Intel Core i5-8400 or better; 16 GB of RAM or more; an NVIDIA GeForce GTX 1060 or better; DirectX 12 or higher; 20 GB of disk space or more; and a broadband Internet connection.

-

How to download and install a ship simulator game on your PC

-

The requirements to run a ship simulator game on your PC

- - -

If your PC does not meet the requirements, you may experience poor performance, low graphics quality, lag, crashes, or other problems during play. You may need to upgrade your PC components or lower the game settings to improve gameplay.
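
As a quick way to compare your machine against the figures quoted above, here is a minimal sketch (not part of the original article) that checks installed RAM and free disk space on a Windows PC. It assumes the third-party psutil package is available, and the thresholds are simply the 2 GB RAM / 3 GB disk numbers listed for Ship Simulator Extremes in this article, not values published by the developer.

```python
import platform
import shutil

import psutil  # third-party package: pip install psutil

MIN_RAM_GB = 2    # minimum RAM quoted above for Ship Simulator Extremes
MIN_DISK_GB = 3   # minimum free disk space quoted above
INSTALL_DRIVE = "C:\\" if platform.system() == "Windows" else "/"

def meets_minimum() -> bool:
    # Total physical memory and free space on the install drive, in GiB.
    ram_gb = psutil.virtual_memory().total / 1024**3
    free_gb = shutil.disk_usage(INSTALL_DRIVE).free / 1024**3
    print(f"OS: {platform.system()} {platform.release()}")
    print(f"RAM: {ram_gb:.1f} GB (need >= {MIN_RAM_GB} GB)")
    print(f"Free disk on {INSTALL_DRIVE}: {free_gb:.1f} GB (need >= {MIN_DISK_GB} GB)")
    return ram_gb >= MIN_RAM_GB and free_gb >= MIN_DISK_GB

if __name__ == "__main__":
    print("Minimum spec met" if meets_minimum() else "Below minimum spec")
```
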

-

The steps to download and install a ship simulator game on your PC

-

Once you have confirmed that your PC meets the requirements, you can proceed to download and install a ship simulator game on your PC. Here are the steps to follow (a small optional scripted shortcut is sketched after the list):

-
  1. Go to the Steam website and create an account or sign in to your existing account.
  2. Download and install the Steam client on your PC.
  3. Launch the Steam client and sign in to your account.
  4. Go to the Store tab and search for the ship simulator game you want to buy.
  5. Click on the game title and then click the Add to Cart button.
  6. Go to your cart and click the Purchase for Myself button.
  7. Choose your payment method and complete the transaction.
  8. Go to the Library tab and find the ship simulator game you purchased.
  9. Click on the game title and then click the Install button.
  10. Wait for the game to download and install on your PC.
  11. Click the Play button and enjoy the game!
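
As an optional shortcut to step 4, a small sketch (not from the original article) is shown below: it asks an already-installed Steam client to open a game's store page via the steam:// URL protocol, falling back to the web store otherwise. APP_ID is a placeholder, not the real ID of any game reviewed here; replace it with the number shown in the game's store page URL.

```python
import webbrowser

APP_ID = 123456  # hypothetical Steam app ID; take the real one from the store URL

def open_store_page(app_id: int) -> bool:
    # Hands the steam:// URI to the operating system's protocol handler;
    # this only works if the Steam client is installed and registered.
    return webbrowser.open(f"steam://store/{app_id}")

if __name__ == "__main__":
    if not open_store_page(APP_ID):
        # Fall back to the normal web store page in the default browser.
        webbrowser.open(f"https://store.steampowered.com/app/{APP_ID}/")
```
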

How to enjoy the realistic simulation of sailing various vessels in a ship simulator game

-

The types of vessels you can sail in a ship simulator game

- - -

Each type of vessel has its own characteristics, advantages, disadvantages, and challenges. You need to learn how to operate it properly and efficiently, as well as how to deal with the specific situations and risks it may encounter.

-

The scenarios and missions you can experience in a ship simulator game

-

Another aspect of playing a ship simulator game is that you can experience various scenarios and missions that test your skills and knowledge as a ship captain. Depending on the game you choose, you can experience:

- -

Each scenario and mission has its own objectives, rewards, and consequences. You need to plan your strategy carefully and execute it skillfully. You also need to adapt to the changing conditions and circumstances that may affect your performance.

-

The tips and tricks to improve your sailing skills in a ship simulator game

-

Playing a ship simulator game can be fun and easy if you know what you are doing. However, if you are new to the genre or want to improve your sailing skills even further, here are some tips and tricks that can help you:

-