diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Create professional and dynamic photo slideshow videos with HD Online Player (socusoft photo to video converter pr).md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Create professional and dynamic photo slideshow videos with HD Online Player (socusoft photo to video converter pr).md
deleted file mode 100644
index 503368d5abd082a2e17b4d3191c250eff690e960..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Create professional and dynamic photo slideshow videos with HD Online Player (socusoft photo to video converter pr).md
+++ /dev/null
@@ -1,92 +0,0 @@
-
-
HD Online Player (socusoft photo to video converter pr)
-
Do you have a lot of photos that you want to turn into a stunning video slideshow? Do you want to share your precious memories with your friends and family on social media platforms like Facebook, YouTube, or MySpace? Do you want to watch your photo slideshow on your PC, iPod, iPad, iPhone, PSP, or other devices? If you answered yes to any of these questions, then you need a powerful and professional photo to video converter software that can help you create HD photo slideshows with ease. And that software is socusoft photo to video converter professional.
-
What is socusoft photo to video converter professional?
-
Socusoft photo to video converter professional is a software program that enables you to create professional slideshows from photographs and save them to the hard drive as video files in several formats. It supports a wide range of formats at both import and export: you can load JPG, TIFF, BMP, PCX and PNG images, and create MP4, FLV, AVI, MKV, MPG, 3GP and SWF videos, with presets so that you can upload them to Facebook, YouTube, or MySpace, or play them on Apple products or mobile phones.
-
Features and benefits of socusoft photo to video converter professional
-
Some of the features and benefits of socusoft photo to video converter professional are:
-
-
You can rotate photos, add text or cliparts, and manually or automatically adjust brightness, contrast and gamma levels.
-
You can choose from hundreds of transitions and album themes to make your slideshow more attractive and dynamic.
-
You can add MP3, WMA and WAV audio files as background music, and record your own voice as narration.
-
You can customize output video and audio parameters, including size, codecs, bit rate, frame rate and channels.
-
You can preview your slideshow before saving it.
-
You can save your slideshow as a standalone executable file that can be played on any PC without installation.
-
You can burn your slideshow onto a DVD disc that can be played on any DVD player.
-
-
How to use socusoft photo to video converter professional to create HD photo slideshows
-
Using socusoft photo to video converter professional is very easy and intuitive. You just need to follow these simple steps:
-
Step 1: Add photos and edit them
-
After installation, run the program and click on the "Organize Photos" tab. You can drag and drop photos from your computer or click on the "Add" button to browse for photos. You can also create multiple albums for different occasions or themes. To edit photos, you can double-click on them or click on the "Edit Photo" button. You can rotate them, crop them, add text or cliparts, adjust color and brightness, etc.
-
Step 2: Choose transitions and album themes
-
Click on the "Transition & Music" tab. You can see all the available transitions on the left panel. You can drag and drop them between photos or click on the "Apply" button to apply them randomly. You can also change the duration of each transition. On the right panel, you can see all the album themes that you can apply to your slideshow. You can choose from different categories like wedding, birthday, travel, etc. You can also customize the album title and background.
-
Step 3: Add background music and record narration
-
Click on the "Music & Sound" tab. You can add MP3, WMA or WAV audio files as background music by clicking on the "Add" button or dragging and dropping them from your computer. You can also trim or loop the music as you like. To record your own voice as narration, click on the "Record Sound" button and use your microphone. You can adjust the volume of both music and sound.
-
-
Step 4: Customize output video parameters and format
-
Click on the "Video Output" tab. You can choose from different output formats like MP4, FLV, AVI, MKV, MPG, 3GP or SWF. You can also select presets for different devices or platforms like iPod, iPad, iPhone, PSP, YouTube or Facebook. You can also customize the output parameters like size, codecs, bit rate, frame rate and channels.
-
Step 5: Save and share your photo slideshow video
-
Click on the "Create Now!" button. You can choose to save your slideshow as a video file on your computer or burn it onto a DVD disc. You can also save it as an executable file that can be played on any PC without installation. After saving your slideshow, you can share it with your friends and family on social media platforms like Facebook, YouTube or MySpace.
-
Why choose socusoft photo to video converter professional over other photo slideshow makers?
-
There are many reasons why socusoft photo to video converter professional is better than other photo slideshow makers. Here are some of them:
-
High quality and compatibility of output videos
-
Socusoft photo to video converter professional produces high quality videos that are compatible with various devices and platforms. You can create HD videos that have crisp images and clear sound. You can also choose from different formats and presets that suit your needs and preferences.
-
Easy and intuitive interface and operation
-
Socusoft photo to video converter professional has an easy and intuitive interface that makes it simple to use. You don't need any technical skills or experience to create stunning slideshows. You just need to follow the steps and customize the settings as you like.
-
Affordable price and lifetime support
-
Socusoft photo to video converter professional is affordable and worth every penny. You only need to pay once and enjoy lifetime updates and support. You can also get free trial versions and discounts on their website. If you have any questions or problems, you can contact their customer service anytime.
-
Conclusion
-
Socusoft photo to video converter professional is a powerful and professional software program that enables you to create HD photo slideshows with ease. It has many features and benefits that make it stand out from other photo slideshow makers. It is easy to use, high quality, compatible, affordable, and supported. If you want to turn your photos into amazing videos, you should try socusoft photo to video converter professional today.
-
FAQs
-
-
Q: How long does it take to create a photo slideshow with socusoft photo to video converter professional?
-
A: It depends on the number of photos, the length of music, the transitions, the output format, and your computer speed. Generally, it takes a few minutes to create a short slideshow.
-
Q: Can I add videos or animations to my slideshow?
-
A: No, socusoft photo to video converter professional only supports photos as input. However, you can use other software programs like Windows Movie Maker or Adobe Premiere Pro to edit your slideshow further and add videos or animations.
Q: Can I add subtitles or captions to my slideshow?
-
A: Yes, you can add subtitles or captions to your slideshow by using the text or clipart feature in the photo editing section. You can also change the font, color, size, and position of the text or clipart.
-
Q: How can I share my slideshow with others?
-
A: You can share your slideshow with others by uploading it to social media platforms like Facebook, YouTube, or MySpace. You can also burn it onto a DVD disc or save it as an executable file that can be played on any PC without installation.
-
Q: What if I have more questions or problems with socusoft photo to video converter professional?
-
A: You can visit their website at http://socusoft.com/ and check their FAQ section or user guide for more information. You can also contact their customer service by emailing to support@socusoft.com or calling +86-755-86264811.
-
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/FULL Adobe.cc.2019.patcher.[zer0code3].zip.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/FULL Adobe.cc.2019.patcher.[zer0code3].zip.md
deleted file mode 100644
index 708d61e78eaec9c3cb73361da7ea33db5595f01c..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/FULL Adobe.cc.2019.patcher.[zer0code3].zip.md
+++ /dev/null
@@ -1,83 +0,0 @@
-
-
FULL adobe.cc.2019.patcher.[zer0code3].zip: What is it and how to use it?
-
Introduction
-
If you are a creative professional, student, or hobbyist, you probably know about Adobe Creative Cloud (CC) 2019, the latest version of the world's leading software suite for design, photography, video, web, and more. Adobe CC 2019 offers a range of powerful tools and features that can help you unleash your creativity and bring your ideas to life.
However, you may also know that Adobe CC 2019 is not cheap. To use it, you need to pay a monthly or yearly subscription fee that can range from $9.99 to $82.98 per month, depending on the plan and the number of apps you choose. That's a lot of money for some people, especially if you only need one or two apps occasionally.
-
So, what if there was a way to get all the Adobe CC 2019 programs for free, without paying anything or logging in to any account? Sounds too good to be true, right? Well, that's what FULL adobe.cc.2019.patcher.[zer0code3].zip claims to do.
-
FULL adobe.cc.2019.patcher.[zer0code3].zip is a file that contains a patcher, a software tool that modifies or cracks another software program to bypass its license verification or activation process. In this case, the patcher targets all the Adobe CC 2019 programs, such as Photoshop, Lightroom, Dreamweaver, Acrobat, After Effects, InCopy, Media Encoder, Character Animator, Audition, Illustrator, and many more. By using the patcher, you can download and install any Adobe CC 2019 program you want, and activate it with one click, without needing any login or subscription. You can then use the program offline or online, with full access to all its features and updates.
-
But how does the patcher work, and what are its advantages and disadvantages? Is it safe and legal to use? Are there any alternatives to it? In this article, we will answer these questions and more, so you can decide whether FULL adobe.cc.2019.patcher.[zer0code3].zip is worth trying or not.
-
Features of FULL adobe.cc.2019.patcher.[zer0code3].zip
-
FULL adobe.cc.2019.patcher.[zer0code3].zip is a patcher created by zer0cod3, a hacker who claims to have cracked all the Adobe CC 2019 programs. According to his website, the patcher has the following features:
-
-
Supports all Adobe CC 2019 programs: The patcher can download and activate any Adobe CC 2019 program you want, from Photoshop to Premiere Pro, from InDesign to Animate, and everything in between.
-
Easy to use interface: The patcher has a simple and user-friendly interface that lets you select the program you want and click "Download and Patch" to start the process. You don't need to do anything else.
-
Download and patch with one click: The patcher will automatically download the offline installer of the selected program from Adobe's servers, install it on your system, and apply the patch to activate it. You don't need to download or install anything separately.
-
No login or subscription required: The patcher will bypass the login and subscription verification of Adobe CC 2019, so you don't need to enter any email or password, or pay any fee. You can use the program as if you have a valid license.
-
Works offline and online: The patcher will not affect the online functionality of Adobe CC 2019, so you can still use the cloud services, sync your files, access your libraries, and get updates. You can also use the program offline if you prefer.
-
-
These features make FULL adobe.cc.2019.patcher.[zer0code3].zip a very convenient and attractive tool for anyone who wants to use Adobe CC 2019 for free. However, before you rush to download and use it, you should also be aware of its drawbacks and risks.
-
-
How to download and install FULL adobe.cc.2019.patcher.[zer0code3].zip
-
If you are interested in trying FULL adobe.cc.2019.patcher.[zer0code3].zip, you need to follow these steps:
-
-
Download CCMaker to get the Adobe offline installers: CCMaker is another tool that lets you download the offline installers of Adobe CC 2019 programs from Adobe's servers. You need this tool because the patcher does not work with the online installers that you get from Adobe's website. You can download CCMaker from here. After downloading it, extract it and run it as administrator.
-
Download FULL adobe.cc.2019.patcher.[zer0code3].zip from a reliable source: The patcher is not available on zer0cod3's website anymore, as it was taken down by Adobe for legal reasons. However, you can still find it on some other websites or torrent sites. Be careful though, as some of these sources may contain fake or malicious files that can harm your system. Make sure you scan the file with an antivirus before opening it. You can try this link for example, but we do not guarantee its safety or validity.
-
Extract the zip file and run the patcher: After downloading FULL adobe.cc.2019.patcher.[zer0code3].zip, extract it to a folder on your system. You will see a file called "FULL adobe.cc.2019.patcher.exe". Right-click on it and run it as administrator.
-
Select the Adobe program you want and click "Download and Patch": The patcher will show you a list of all the Adobe CC 2019 programs that it supports. You can select one or more programs that you want to download and activate. Then click on "Download and Patch" at the bottom of the window. The patcher will then start downloading the offline installer of the selected program from CCMaker, install it on your system, and apply the patch to activate it.
-
Enjoy your activated Adobe CC 2019 program: After the patching process is done, you can launch the program from your desktop or start menu. You will see that the program is activated and does not require any login or subscription. You can use the program offline or online, with full access to all its features and updates.
-
-
Congratulations, you have successfully downloaded and installed an Adobe CC 2019 program for free using FULL adobe.cc.2019.patcher.[zer0code3].zip. However, before you start using it, you should also consider the pros and cons of using this patcher.
-
Pros and cons of using FULL adobe.cc.2019.patcher.[zer0code3].zip
-
Using FULL adobe.cc.2019.patcher.[zer0code3].zip may seem like a great idea, as it allows you to get all the Adobe CC 2019 programs for free, without paying anything or logging in to any account. However, it also has some drawbacks and risks that you should be aware of. Here are some of the pros and cons of using this patcher:
-
-
Pros:
-
Saves money and time: You don't need to pay a monthly or yearly subscription fee to use Adobe CC 2019, which can save you a lot of money in the long run. You also don't need to waste time logging in or verifying your account every time you use the program.
-
Access to all Adobe CC 2019 features and updates: You can use all the features and functions of Adobe CC 2019, without any limitations or restrictions. You can also get the latest updates and bug fixes from Adobe, as the patcher does not interfere with the online functionality of the program.
-
No risk of malware or viruses (as claimed): The patcher claims to contain no malware or viruses that can harm your system, and not to modify any system files or registry entries or install any unwanted programs or toolbars.
-
Compatible with Windows and Mac OS: The patcher works with both Windows and Mac OS systems, so you can use it on any computer that supports Adobe CC 2019. You don't need to worry about compatibility issues or system requirements.
-
Cons:
-
Illegal and unethical: Using the patcher is a form of software piracy, which is illegal and unethical. You are violating the intellectual property rights of Adobe, which is a crime in many countries. You are also depriving Adobe of its revenue, which can affect its ability to develop and improve its products.
-
May not work with future versions of Adobe CC: The patcher may not be compatible with future versions or updates of Adobe CC, as Adobe may change its license verification or activation process to prevent piracy. You may not be able to use the patcher with newer versions of Adobe CC, or you may lose some features or functions.
-
May cause errors or crashes: The patcher may cause errors or crashes in your Adobe CC 2019 program, as it modifies files and code that are essential for the proper functioning of the program. You may experience glitches, bugs, or performance issues while using the program.
-
May violate the terms of service of Adobe: By using the patcher, you are violating the terms of service of Adobe, which is a legal agreement that you accept when you use its products or services. You are breaking the rules and conditions that Adobe sets for its users, which can result in legal actions or penalties from Adobe.
As you can see, using FULL adobe.cc.2019.patcher.[zer0code3].zip has its benefits and drawbacks. You should weigh them carefully before deciding whether to use it or not. If you are looking for alternatives to this patcher, you can check out some of the options below.
-
Alternatives to FULL adobe.cc.2019.patcher.[zer0code3].zip
-
FULL adobe.cc.2019.patcher.[zer0code3].zip is not the only patcher that can crack Adobe CC 2019 programs. There are other patchers that claim to do the same thing, with different methods and features. Here are some of them:
-
-
MPT patches by MPT34M: These are patches that replace some files in the installation folder of each Adobe CC 2019 program to activate it. They are easy to use and support most of the Adobe CC 2019 programs. However, they may not work with some updates or versions of Adobe CC 2019.
-
AMT Emulator by PainteR: This is a patcher that emulates the AMT (Adobe Media Toolkit) license verification system of Adobe CC 2019, and tricks it into thinking that the program is activated. It is very effective and supports all the Adobe CC 2019 programs. However, it may be detected by some antivirus programs as a threat.
-
GenP by Team V.R.: This is a patcher that generates a license key for each Adobe CC 2019 program and injects it into the program's files. It is very fast and supports most of the Adobe CC 2019 programs. However, it may not work with some updates or versions of Adobe CC 2019.
-
Zii Patcher by TNT: This is a patcher that patches the framework of each Adobe CC 2019 program to activate it. It is very easy to use and supports all the Adobe CC 2019 programs. However, it only works with Mac OS systems.
-
-
These are some of the alternatives to FULL adobe.cc.2019.patcher.[zer0code3].zip that you can try if you want to crack Adobe CC 2019 programs. However, keep in mind that these patchers are also illegal and unethical, and may have similar or different drawbacks and risks as FULL adobe.cc.2019.patcher.[zer0code3].zip. Use them at your own risk and discretion.
-
Conclusion
-
In this article, we have discussed what FULL adobe.cc.2019.patcher.[zer0code3].zip is, how it works, and what are its pros and cons. We have also shown you how to download and install it, and what are some of the alternatives to it.
-
FULL adobe.cc.2019.patcher.[zer0code3].zip is a patcher that can download and activate any Adobe CC 2019 program for free, without needing any login or subscription. It has some features that make it very convenient and attractive for anyone who wants to use Adobe CC 2019 for free. However, it also has some drawbacks and risks that make it illegal and unethical, and may cause some problems or issues with your system or your Adobe CC 2019 program.
-
Therefore, we do not recommend using FULL adobe.cc.2019.patcher.[zer0code3].zip or any other patcher to crack Adobe CC 2019 programs. Instead, we suggest that you use the official and legal way of using Adobe CC 2019, which is to pay for a subscription plan that suits your needs and budget. This way, you can support Adobe's development and innovation, enjoy all the benefits and features of Adobe CC 2019, and avoid any legal or technical troubles.
-
If you have any questions or comments about FULL adobe.cc.2019.patcher.[zer0code3].zip or this article, feel free to leave them below. We hope you found this article helpful and informative.
-
FAQs
-
Here are some of the frequently asked questions about FULL adobe.cc.2019.patcher.[zer0code3].zip:
-
Q1: Is FULL adobe.cc.2019.patcher.[zer0code3].zip safe to use?
-
A1: No, FULL adobe.cc.2019.patcher.[zer0code3].zip is not safe to use, as it is a form of software piracy, which is illegal and unethical. You are violating the intellectual property rights of Adobe, which can result in legal actions or penalties from Adobe. You are also exposing your system to potential malware or viruses that may be hidden in the patcher or the downloaded files. You are also risking your Adobe CC 2019 program to errors or crashes that may occur due to the patching process.
-
Q2: How can I update my Adobe CC 2019 program after using the patcher?
-
A2: You can update your Adobe CC 2019 program after using the patcher by using the online functionality of the program. The patcher does not interfere with the online functionality of Adobe CC 2019, so you can still get the latest updates and bug fixes from Adobe's servers. However, you may need to reapply the patch after updating your program, as some updates may overwrite or remove the patch.
-
Q3: What if the patcher does not work with or support my Adobe CC 2019 program?
-
A3: If the patcher does not work with or support your Adobe CC 2019 program, you may try one of the alternatives that we mentioned above, such as MPT patches, AMT Emulator, GenP, or Zii Patcher. However, keep in mind that these patchers are also illegal and unethical, and may have similar or different drawbacks and risks as FULL adobe.cc.2019.patcher.[zer0code3].zip. Use them at your own risk and discretion.
-
Q4: How can I uninstall or remove the patcher from my system?
-
A4: You can uninstall or remove the patcher from your system by deleting the file "FULL adobe.cc.2019.patcher.exe" and the folder "FULL adobe.cc.2019.patcher" from your system. You can also uninstall or remove the Adobe CC 2019 program that you downloaded and installed using the patcher by using the Control Panel (for Windows) or the Finder (for Mac OS). However, this may not completely remove all the traces or remnants of the patcher or the program from your system, so you may need to use a third-party software cleaner or uninstaller to do a thorough cleanup.
-
Q5: Where can I find more information or support for using the patcher?
-
A5: You can find more information or support for using the patcher by visiting zer0cod3's website, which is https://zer0cod3.github.io/. However, as we mentioned before, this website may not be available anymore, as it was taken down by Adobe for legal reasons. You can also try searching online for other websites, forums, blogs, or videos that discuss or review the patcher. However, be careful of the sources that you visit, as some of them may contain fake or malicious information or files that can harm your system.
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/GSoniqueXXLBundlev10VSTVSTiPackrar.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/GSoniqueXXLBundlev10VSTVSTiPackrar.md
deleted file mode 100644
index 6253fcc743d7e5948131cbf9d123830a67456c3d..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/GSoniqueXXLBundlev10VSTVSTiPackrar.md
+++ /dev/null
@@ -1,75 +0,0 @@
-
-
GSonique XXL Bundle v10 VST VSTi Pack rar: A Review
-
If you are looking for a comprehensive collection of plugins that can enhance your music production, you might want to check out GSonique XXL Bundle v10 VST VSTi Pack rar. This file name refers to a bundle of 30 plugins from G-Sonique, a company that specializes in creating virtual instruments and effects for music production. The bundle is compressed in rar format, is compatible with Windows and Mac OS X, and can be used with any DAW that supports the VST or VSTi format.
In this article, we will review GSonique XXL Bundle v10 VST VSTi Pack rar and tell you everything you need to know about it. We will explain what it is, what it does, what it offers, what it costs, what its pros and cons are, and whether it is worth buying. We will also answer some frequently asked questions, so that by the end of this article you will have a clear idea of whether GSonique XXL Bundle v10 VST VSTi Pack rar suits your needs.
-
What is GSonique XXL Bundle v10 VST VSTi Pack rar?
-
GSonique XXL Bundle v10 VST VSTi Pack rar is a compressed file that contains 30 plugins from G-Sonique, a company that specializes in creating virtual instruments and effects for music production. The plugins are designed to cover a wide range of genres and styles, such as EDM, techno, trance, psytrance, dubstep, hip hop, rock, metal, ambient, and more. The plugins include synthesizers, drum machines, samplers, filters, distortions, reverbs, delays, modulators, compressors, limiters, equalizers, and more. The plugins are compatible with Windows and Mac OS X and can be used with any DAW that supports VST or VSTi format.
-
What are VST and VSTi plugins?
-
VST and VSTi are abbreviations for Virtual Studio Technology and Virtual Studio Technology Instrument. They are formats for audio plugins that can be used with digital audio workstations (DAWs) to create and process sound. VST plugins are effects that can modify the sound of an audio signal, such as filters, reverbs, delays, compressors, etc. VSTi plugins are instruments that can generate sound from scratch, such as synthesizers, drum machines, samplers, etc. VST and VSTi plugins can be used together to create complex and rich soundscapes.
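-
To make the distinction concrete: an instrument generates a signal from note data, while an effect transforms a signal that already exists. The toy sketch below (my own illustration in Python/NumPy; real VST plugins are normally written in C++ against Steinberg's SDK) plays both roles in miniature:

    import numpy as np

    SR = 44100  # sample rate in Hz

    def instrument(freq, seconds):
        """VSTi-like role: generate sound from scratch (a plain sine tone)."""
        t = np.arange(int(SR * seconds)) / SR
        return 0.5 * np.sin(2 * np.pi * freq * t)

    def effect(buffer, gain_db):
        """VST-like role: modify an existing signal (a simple gain stage)."""
        return buffer * (10 ** (gain_db / 20))

    note = instrument(440.0, 1.0)   # the "instrument" generates an A4 tone
    processed = effect(note, -6.0)  # the "effect" then attenuates it by 6 dB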
-
-
What are the features of GSonique XXL Bundle v10 VST VSTi Pack rar?
-
GSonique XXL Bundle v10 VST VSTi Pack rar is a collection of 30 plugins from G-Sonique that offer a variety of features and functions for music production. Some of the features are:
-
List of plugins included in GSonique XXL Bundle v10 VST VSTi Pack rar
-
The following table shows the list of plugins included in GSonique XXL Bundle v10 VST VSTi Pack rar along with their brief descriptions:
- | Plugin name | Plugin type | Plugin description | | ----------- | ----------- | ------------------ | | Alien303 | VSTi | A bass synthesizer that emulates the classic Roland TB-303 sound with extra features and enhancements | | Alien303 v2 | VSTi | An updated version of Alien303 with improved sound quality and new features | | DrumTROOP | VSTi | A drum machine that offers 16 pads with 20 kits and 128 sounds each | | Dubmaster Liquid Delay | VST | A delay effect that creates liquid and dubby echoes with modulation and feedback options | | Dubshox H8 | VST | A multiband distortion effect that can create heavy and aggressive sounds with 8 bands of distortion | | FAT+ | VST | A filter effect that can add warmth and fatness to any sound with low-pass, high-pass, band-pass, and notch filters | | FM Wave XR7 | VSTi | A frequency modulation (FM) synthesizer that can create complex and dynamic sounds with 6 operators and 32 waveforms | | FSQ1964 | VST | A frequency shifter effect that can add brightness and clarity to any sound with 4 bands of frequency shifting | | KASHMIR SITAR FX | VSTi | A sitar synthesizer that can create realistic and exotic sounds with physical modeling and effects | | Mid-Side Envelope Follower + FX MSEDSTEREOFX | VST | A mid-side processor effect that can manipulate the stereo image of any sound with envelope follower and effects | | Neurofunker XG6 | VSTi | A drum machine that offers 146 drum kits and 1.200 sounds for creating neurofunk and drum and bass beats | | Psychedelic FX6000V1 | VSTi | A multi-effect plugin that offers 140 presets of psychedelic sounds and effects for various genres | | PsyKick AK1 | VSTi | A kick drum synthesizer that can create powerful and punchy kicks for psytrance and other genres | | Pultronic EQ-110P | VST | An equalizer effect that emulates the vintage tube sound of the Pultec EQP-1A hardware unit | | Renegade Analog Monster R.A.M.2020XL+ (V2) + BONUS: Renegade Mini + Renegade Mini x64 + Renegade Mini x64 (V2) + Renegade Mini x64 (V3) + Renegade Mini x64 (V4) + Renegade Mini x64 (V5) + Renegade Mini x64 (V6) + Renegade Mini x64 (V7) + Renegade Mini x64 (V8) + Renegade Mini x64 (V9) + Renegade Mini x64 (V10) + Renegade Mini x64 (V11) + Renegade Mini x64 (V12) + Renegade Mini x64 (V13) + Renegade Mini x64 (V14) + Renegade Mini x64 (V 15) + Renegade Mini x64 (V16) | VSTi | A collection of 16 versions of Renegade, a virtual analog synthesizer that can create fat and warm sounds with 4 oscillators and 12 filters | | SHAKER Maker | VST | A shaker effect that can create realistic and natural shaker sounds with physical modeling and modulation | | Trap Illuminator 8000X1 | VSTi | A trap synthesizer that offers 200 presets of trap sounds and effects with 4 layers and 10 effects | | Twisthead VS-206 | VST | A preamp effect that emulates the vintage tube sound of the Twisthead hardware unit | | Ultrabass MX4/4 | VSTi | A bass synthesizer that can create deep and powerful bass sounds with 4 oscillators and 4 filters | | XBass4000L | VST | A bass enhancer effect that can add sub-bass and harmonics to any sound with psychoacoustic processing | | Xmagic textures 1 | VSTi | A texture synthesizer that can create ambient and atmospheric sounds with granular synthesis and effects | | Xmagic textures 2 | VSTi | A texture synthesizer that can create ambient and atmospheric sounds with granular synthesis and effects | | Zener Limiter LM2Z+ (V2) + BONUS: Zener Limiter LM2Z+ (V1) + Zener Limiter LM2Z+ (V3) + Zener Limiter LM2Z+ (V4) + Zener Limiter LM2Z+ (V5) + 
Zener Limiter LM2Z+ (V6) + Zener Limiter LM2Z+ (V7) + Zener Limiter LM2Z+ (V8) + Zener Limiter LM2Z+ (V9) + Zener Limiter LM2Z+ (V10) + Zener Limiter LM2Z+ (V11) + Zener Limiter LM2Z+ (V12) + Zener Limiter LM2Z+ (V13) + Zener Limiter LM2Z+ (V14) + Zener Limiter LM2Z+ (V15) + Zener Limiter LM2Z+ (V16) | VST | A collection of 16 versions of Zener Limiter, a limiter effect that can control the dynamics and loudness of any sound with analog modeling |
How to install GSonique XXL Bundle v10 VST VSTi Pack rar?
-
To install GSonique XXL Bundle v10 VST VSTi Pack rar, you need to follow these steps:
-
-
Download the file from the official website or any other source. The file size is about 1.5 GB.
-
Extract the file using software like WinRAR or 7-Zip. You will get a folder named GSonique XXL Bundle v10 VST VSTi Pack.
-
Copy the folder to your preferred location on your computer. You can also rename the folder if you want.
-
Open your DAW and scan for new plugins. You should see the GSonique plugins in your plugin list.
-
Drag and drop the plugins to your tracks or channels and start using them.
-
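-
If you would rather script steps 2 and 3 than click through them, here is a minimal sketch using Python's rarfile package; the package is my suggestion (it needs the unrar backend installed), and both paths are hypothetical:

    import shutil
    import rarfile  # pip install rarfile; needs unrar or a compatible tool

    ARCHIVE = "GSonique XXL Bundle v10 VST VSTi Pack.rar"  # hypothetical path
    DEST = r"C:\VstPlugins\GSonique"  # hypothetical plugin folder; must not exist yet

    rf = rarfile.RarFile(ARCHIVE)
    rf.extractall("extracted")          # step 2: extract the archive

    shutil.copytree("extracted", DEST)  # step 3: copy to your plugin folder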
-
How to use GSonique XXL Bundle v10 VST VSTi Pack rar?
-
To use GSonique XXL Bundle v10 VST VSTi Pack rar, you need to follow these steps:
-
-
Select the plugin you want to use from your plugin list. You can also use multiple plugins at once.
-
Adjust the parameters and settings of the plugin according to your preference and needs. You can also use the presets provided by the plugin or create your own.
-
Listen to the sound and tweak it until you are satisfied with the result. You can also automate the parameters or modulate them with other sources.
-
Save your project and export your audio file.
-
-
What are the benefits of GSonique XXL Bundle v10 VST VSTi Pack rar?
-
GSonique XXL Bundle v10 VST VSTi Pack rar has many benefits for music producers who want to create high-quality and diverse sounds. Some of the benefits are:
-
High-quality sound and design
-
The plugins from G-Sonique are known for their high-quality sound and design. They use advanced algorithms to emulate analog hardware units and to implement physical modeling, granular synthesis, psychoacoustic processing, frequency modulation, and more. They also have a unique and attractive user interface that is easy to use and understand. The plugins deliver a rich, smooth, and detailed sound that can suit any genre and style.
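-
One of those techniques is easy to demonstrate. Frequency modulation, the method behind FM Wave XR7's six operators, builds complex timbres by letting one oscillator vary the phase of another; the sketch below is a two-operator toy version in Python/NumPy (my illustration, not G-Sonique's code):

    import numpy as np

    SR = 44100
    t = np.arange(SR) / SR  # one second of sample times

    # Two-operator FM: a modulator sine varies the phase of a carrier sine.
    f_carrier, f_mod, index = 440.0, 110.0, 3.0  # Hz, Hz, modulation index
    tone = np.sin(2 * np.pi * f_carrier * t
                  + index * np.sin(2 * np.pi * f_mod * t))

Raising the modulation index adds more sidebands and a brighter, more metallic timbre, which is why FM synths can produce such dynamic sounds from so few oscillators.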
-
Versatility and compatibility
-
The plugins from G-Sonique are versatile and compatible with various platforms and DAWs. They can be used with Windows and Mac OS X and support VST and VSTi formats. They can also be used with any DAW that supports these formats, such as Ableton Live, FL Studio, Cubase, Logic Pro, Pro Tools, Reaper, and more. The plugins can be used for different purposes, such as creating melodies, basslines, drums, effects, textures, and more. They can also be combined and layered to create complex and rich soundscapes.
-
Affordability and value
-
The plugins from G-Sonique are affordable and offer great value for money. The GSonique XXL Bundle v10 VST VSTi Pack rar costs only $99.95 USD and includes 30 plugins that would normally cost over $1000 USD if bought separately. That means you can save over 90% of the original price and get a huge collection of plugins for a fraction of the cost. The bundle also includes bonus plugins that are not available elsewhere. The bundle is a great deal for anyone who wants to expand their plugin library and enhance their music production.
-
What are the drawbacks of GSonique XXL Bundle v10 VST VSTi Pack rar?
-
GSonique XXL Bundle v10 VST VSTi Pack rar is not perfect and has some drawbacks that you should be aware of before buying it. Some of the drawbacks are:
-
Limited support and updates
-
The plugins from G-Sonique are not updated frequently and may not have the latest features and improvements. Some of the plugins are outdated and may not work well with newer versions of DAWs or operating systems. The support from G-Sonique is also limited and may not respond to your queries or issues promptly or effectively. You may have to rely on online forums or other users for help or guidance.
-
Potential compatibility issues
-
The plugins from G-Sonique may not be compatible with some DAWs or operating systems. Some users have reported problems with installing, loading, or using the plugins with certain DAWs or operating systems. Some of the plugins may also cause crashes, glitches, or errors in your DAW or computer. You may have to tweak some settings or use workarounds to make the plugins work properly.
-
Large file size and system requirements
-
The GSonique XXL Bundle v10 VST VSTi Pack rar is a large file that takes up a lot of space on your computer. The file size is about 1.5 GB and may take a long time to download or extract. The plugins also have high system requirements and may consume a lot of CPU and RAM resources on your computer. You may need a powerful computer to run the plugins smoothly and avoid performance issues.
-
Conclusion
-
GSonique XXL Bundle v10 VST VSTi Pack rar is a collection of 30 plugins from G-Sonique that offer a variety of features and functions for music production. The plugins are designed to cover a wide range of genres and styles, such as EDM, techno, trance, psytrance, dubstep, hip hop, rock, metal, ambient, and more. The plugins include synthesizers, drum machines, samplers, filters, distortions, reverbs, delays, modulators, compressors, limiters, equalizers, and more. The plugins are compatible with Windows and Mac OS X and can be used with any DAW that supports VST or VSTi format.
-
GSonique XXL Bundle v10 VST VSTi Pack rar has many benefits for music producers who want to create high-quality and diverse sounds. The plugins have high-quality sound and design, versatility and compatibility, and affordability and value. The bundle costs only $99.95 USD and includes 30 plugins that would normally cost over $1000 USD if bought separately. The bundle also includes bonus plugins that are not available elsewhere.
-
However, GSonique XXL Bundle v10 VST VSTi Pack rar also has some drawbacks that you should be aware of before buying it. The plugins have limited support and updates, potential compatibility issues, and large file size and system requirements. The plugins are not updated frequently and may not have the latest features and improvements. Some of the plugins may not work well with newer versions of DAWs or operating systems. The file size is about 1.5 GB and may take a long time to download or extract. The plugins also have high system requirements and may consume a lot of CPU and RAM resources on your computer.
-
Summary of main points
-
To summarize, GSonique XXL Bundle v10 VST VSTi Pack rar is a collection of 30 plugins from G-Sonique that offer a variety of features and functions for music production. The plugins are designed to cover a wide range of genres and styles, such as EDM, techno, trance, psytrance, dubstep, hip hop, rock, metal, ambient, and more. The plugins include synthesizers, drum machines, samplers, filters, distortions, reverbs, delays, modulators, compressors, limiters, equalizers, and more. The plugins are compatible with Windows and Mac OS X and can be used with any DAW that supports VST or VSTi format.
-
The bundle has many benefits for music producers who want to create high-quality and diverse sounds. The plugins have high-quality sound and design, versatility and compatibility, and affordability and value. The bundle costs only $99.95 USD and includes 30 plugins that would normally cost over $1000 USD if bought separately. The bundle also includes bonus plugins that are not available elsewhere.
-
However, the bundle also has some drawbacks that you should be aware of before buying it. The plugins have limited support and updates, potential compatibility issues, and large file size and system requirements. The plugins are not updated frequently and may not have the latest features and improvements. Some of the plugins may not work well with newer versions of DAWs or operating systems. The file size is about 1.5 GB and may take a long time to download or extract. The plugins also have high system requirements and may consume a lot of CPU and RAM resources on your computer.
-
Recommendation and rating
-
Based on our review, we recommend GSonique XXL Bundle v10 VST VSTi Pack rar to music producers who want to expand their plugin library and enhance their music production. The bundle offers a great value for money and a wide range of features and functions for various genres and styles. The bundle is suitable for beginners and experts alike who want to create high-quality and diverse sounds. The bundle is compatible with Windows and Mac OS X and can be used with any DAW that supports VST or VSTi format.
-
However, we also advise you to be aware of the drawbacks of the bundle and make sure that it meets your expectations and requirements. The bundle has limited support and updates, potential compatibility issues, and large file size and system requirements. The bundle is not updated frequently and may not have the latest features and improvements. Some of the plugins may not work well with newer versions of DAWs or operating systems. The file size is about 1.5 GB and may take a long time to download or extract. The plugins also have high system requirements and may consume a lot of CPU and RAM resources on your computer.
-
We give GSonique XXL Bundle v10 VST VSTi Pack rar a rating of 4 out of 5 stars. It is a good product that offers a lot of value for money, but it also has some room for improvement.
-
FAQs
-
Here are some frequently asked questions about GSonique XXL Bundle v10 VST VSTi Pack rar:
-
-
Where can I buy GSonique XXL Bundle v10 VST VSTi Pack rar?
-
You can buy GSonique XXL Bundle v10 VST VSTi Pack rar from the official website of G-Sonique or from other online sources that sell digital products. The price is $99.95 USD and you can pay with PayPal or credit card. You will receive a download link after your purchase.
-
How can I get support or updates for GSonique XXL Bundle v10 VST VSTi Pack rar?
-
You can get support or updates for GSonique XXL Bundle v10 VST VSTi Pack rar by contacting G-Sonique through their email address or Facebook page. However, the support and updates are limited and may not be available for all plugins or issues. You can also check online forums or other users for help or guidance.
-
Can I use GSonique XXL Bundle v10 VST VSTi Pack rar with other plugins or DAWs?
-
Yes, you can use GSonique XXL Bundle v10 VST VSTi Pack rar with other plugins or DAWs that support VST or VSTi format. You can also combine and layer the plugins to create complex and rich soundscapes. However, you may encounter some compatibility issues with some plugins or DAWs, so make sure to test them before using them.
-
Can I get a refund or exchange for GSonique XXL Bundle v10 VST VSTi Pack rar?
-
No, you cannot get a refund or exchange for GSonique XXL Bundle v10 VST VSTi Pack rar. The product is a digital download and cannot be returned or exchanged once you have received it. You should read the product description and reviews carefully before buying it and make sure that it meets your expectations and requirements.
-
Can I share or resell GSonique XXL Bundle v10 VST VSTi Pack rar?
-
No, you cannot share or resell GSonique XXL Bundle v10 VST VSTi Pack rar. The product is licensed to you only and you are not allowed to distribute, copy, or sell it to anyone else. Doing so may violate the terms and conditions of G-Sonique and result in legal action.
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Bazaraa Jarvis Programacion Lineal Flujo Redes.md b/spaces/1gistliPinn/ChatGPT4/Examples/Bazaraa Jarvis Programacion Lineal Flujo Redes.md
deleted file mode 100644
index b732c3e408b39eb182e6de7b9e88fd467269c9c1..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Bazaraa Jarvis Programacion Lineal Flujo Redes.md
+++ /dev/null
@@ -1,54 +0,0 @@
-
-
-
-Q:
-
-How do I do this quickly?
-
-In the following question there are four steps.
-
-The first step needs to be done as fast as possible. It takes four hours, but it needs to be done in 20 minutes.
-
-A:
-
-Note that it only takes 5 minutes with Make.
-
-Step 1 can be easily done in a LAMMPS calculation. Your first step is heating a coil to Tc=300 K, your second is cooling it to Tc=100 K. The first takes about 1 hour, the second 1 hour and 1 minute.
-
-Python NameError: name 'xs' is not defined
-
-Below I have code in python3 which is throwing a NameError: name 'xs' is not defined
-
-def power(x, n):
-
- return x**n
-
-xs= []
-
-xs = [power(x,3) for x in range(0,0.1)]
-
-print(xs)
-
-I would expect to see this output:
-
->>> [0, 0.4, 0.8, 1.2, 1.6, 2, 2.4, 2.8, 3.2, 3.6, 4, 4.4, 4.8, 5.2, 5.6, 6, 6.4, 6.8, 7.2, 7.6, 8, 8.4, 8.8, 9.2, 9.6]
-
-The problem is not xs at all, and despite the title the code does not raise a NameError: range() only accepts integer arguments, so range(0,0.1) raises "TypeError: 'float' object cannot be interpreted as an integer" before the list is ever built.
-
-To iterate over integers, pass integer bounds:
-
-xs = [power(x,3) for x in range(0, 10)]
-
-If you actually want a fractional step, use numpy.arange instead of range; see the corrected version below.
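-
-For completeness, here is a runnable corrected version. The fragment's "expected output" does not match its own code, so this assumes the intent was cubes of the integers 0-9:
-
-def power(x, n):
-    return x ** n
-
-xs = [power(x, 3) for x in range(0, 10)]  # range() needs integer bounds
-print(xs)  # [0, 1, 8, 27, 64, 125, 216, 343, 512, 729]
-
-# For a fractional step, use numpy.arange instead of range:
-# import numpy as np
-# xs = [power(x, 3) for x in np.arange(0.0, 1.0, 0.1)]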
-
-Mechanism of reduction of antibody by isolated heme peroxidase.
-
-Highly purified heme per
-
-
-
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Correoelectronicoparaactivarspyhunter4.md b/spaces/1gistliPinn/ChatGPT4/Examples/Correoelectronicoparaactivarspyhunter4.md
deleted file mode 100644
index 064d9c35eb2bcc11f5cd024087b83386d1280590..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Correoelectronicoparaactivarspyhunter4.md
+++ /dev/null
@@ -1,16 +0,0 @@
-
-
-The double-ended swing-up, or simply a double-ended, is a type of carry-nail used for carrying two objects. Many times, this kind of nail will have a groove down the center of it; in some cases, some may be single-ended. It is a far more common nail to use in Mexico and the US, and recently it has been a trend in English-speaking countries too. A double-ended swing-up is very similar to a standard swing-up, in that it consists of a central shaft and a head that connects at the shaft's far end. Double-ended swing-ups are used as carry-nails for carrying two objects; they are typically used for a hammer or tool, although other objects may be carried as well.
-
-Advertisement
-
-Description
-
-The double-ended swing-up is commonly used for a hammer or tool, since its shape makes it a far more stable tool than a standard swing-up. It can be used for carry-nails meant to carry two objects; some types of nail are not designed for this kind of use, but the practice is common.
-
-
-Double-ended swing-ups are a type of carry-nail used to carry two objects, usually hammers or tools. The double-ended swing-up is typically
-
-
-
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Fizikos Uzdavinynas 10 Kl 37.pdf.md b/spaces/1gistliPinn/ChatGPT4/Examples/Fizikos Uzdavinynas 10 Kl 37.pdf.md
deleted file mode 100644
index 9bda6eef075eb98adbb3018831652a9c72cd6d97..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Fizikos Uzdavinynas 10 Kl 37.pdf.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-... grade answers, physics problem book (fizikos uzdavinynas) for grade 9 pdf text, active physics teaching ... Tau plius workbook and textbook answers for grade 10, free for everyone ...
-
-
-
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/30 Rojullo Preminchadam Ela - Download Telugu Songs by Anup Rubens in High Quality.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/30 Rojullo Preminchadam Ela - Download Telugu Songs by Anup Rubens in High Quality.md
deleted file mode 100644
index 7a8f8c495c48cc429480aec4f8318c85ee38daea..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/30 Rojullo Preminchadam Ela - Download Telugu Songs by Anup Rubens in High Quality.md
+++ /dev/null
@@ -1,99 +0,0 @@
-
-
30 Rojullo Preminchadam Ela Songs Download 320kbps: A Guide for Telugu Music Lovers
-
If you are a fan of Telugu music, you must have heard of the hit movie 30 Rojullo Preminchadam Ela. The movie, which was released in 2021, features six melodious songs composed by Anup Rubens. The songs are sung by popular singers like Sid Sriram, Sunitha Upadrasta, Armaan Malik, and Rahul Sipligunj, and have received millions of views and streams on various platforms.
-
But what if you want to download the songs from 30 Rojullo Preminchadam Ela and listen to them offline? How can you get the best quality audio files in 320kbps? In this article, we will tell you everything you need to know about downloading songs from 30 Rojullo Preminchadam Ela in 320kbps. We will also suggest some of the best apps and websites for Telugu songs download. So, read on and enjoy the music!
-
What is 30 Rojullo Preminchadam Ela?
30 Rojullo Preminchadam Ela is a Telugu romantic comedy film directed by Munna Dhulipudi and starring Pradeep Machiraju and Amritha Aiyer. The film revolves around the concept of reincarnation and how two lovers from a previous life meet again in the present day. The film was released on January 29, 2021, and received positive reviews from critics and audiences alike.
-
Why download songs in 320kbps?
-
When it comes to downloading songs, you might have noticed that there are different options for the bitrate or quality of the audio file. The bitrate is measured in kilobits per second (kbps) and it determines how much data is transferred per second. The higher the bitrate, the better the sound quality and the larger the file size.
-
For most listeners, a bitrate of 128kbps or 192kbps is sufficient for enjoying music. However, if you are an audiophile or a music enthusiast, you might prefer a higher bitrate of 320kbps. This is because a higher bitrate preserves more details and nuances of the original sound and reduces the distortion and noise. A higher bitrate also enhances the bass and treble effects and makes the music more immersive.
-
Therefore, if you want to download songs from 30 Rojullo Preminchadam Ela in the best possible quality, you should opt for 320kbps files. However, keep in mind that these files will also take up more space on your device and consume more data while downloading.
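-
The trade-off is easy to quantify, since file size is roughly bitrate times duration. A quick sketch of the arithmetic in Python:

    def mp3_size_mb(bitrate_kbps, seconds):
        """Approximate file size: (bits per second x seconds) in megabytes."""
        return bitrate_kbps * 1000 * seconds / 8 / 1_000_000

    for kbps in (128, 192, 320):
        print(f"{kbps} kbps, 4-minute song: ~{mp3_size_mb(kbps, 240):.1f} MB")
    # 128 kbps -> ~3.8 MB, 192 kbps -> ~5.8 MB, 320 kbps -> ~9.6 MB

So a 320kbps file takes roughly two and a half times the space and data of a 128kbps file of the same song, which is exactly the cost mentioned above.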
-
How to download songs from 30 Rojullo Preminchadam Ela?
-
JioSaavn: The best app for Telugu songs download
-
One of the easiest and most convenient ways to download songs from 30 Rojullo Preminchadam Ela is to use JioSaavn. JioSaavn is a popular music streaming app that offers a huge collection of Telugu songs along with other languages like Hindi, English, Tamil, Kannada, Malayalam, etc. You can listen to songs online or download them offline on your device.
-
Features of JioSaavn
-
-
You can access over 55 million songs in various languages and genres on JioSaavn. You can also discover new songs, artists, albums, playlists, and podcasts on the app.
-
You can download songs in high quality (up to 320kbps) and listen to them offline without any ads or interruptions. You can also create your own playlists and share them with your friends.
-
You can enjoy exclusive content and original shows from JioSaavn like Film Companion, #NoFilterNeha, T-Series Mixtape, etc. You can also tune in to live radio stations and curated stations based on your mood, genre, or artist.
-
You can get personalized recommendations and suggestions based on your listening history and preferences. You can also use voice search and smart assistant features to find and play songs easily.
-
You can sync your music across devices and platforms like Android, iOS, Windows, Mac, Web, Alexa, Google Home, etc. You can also connect your JioSaavn account with social media platforms like Facebook, Instagram, Twitter, etc.
-
-
Steps to download songs from JioSaavn
-
-
Download and install the JioSaavn app from the Google Play Store or the App Store on your device. You can also visit the JioSaavn website on your browser.
-
Sign up or log in with your Jio number or email address. You can also use your Facebook or Google account to sign up or log in.
-
Search for the songs from 30 Rojullo Preminchadam Ela by typing the movie name or the song name in the search bar. You can also browse the Telugu section and find the movie under the New Releases category.
-
Select the song that you want to download and tap on the download icon (a downward arrow) next to it. You can also tap on the three-dot menu and select Download.
-
Choose the quality of the download (Low, Medium, High, or Highest). The higher the quality, the more data and space it will consume. For 320kbps files, choose Highest.
-
Wait for the download to complete. You can check the progress of the download in the Downloads section of the app. You can also pause or resume the download as per your convenience.
-
Once the download is complete, you can find the song in the My Music section of the app under Downloads. You can also access it from your device's music player or file manager.
-
-
Other options for downloading songs from 30 Rojullo Preminchadam Ela
-
If you don't want to use JioSaavn or if you face any issues with it, you can also try some other options for downloading songs from 30 Rojullo Preminchadam Ela. Here are some of them:
-
YouTube to MP3 converters
-
You can also download songs from 30 Rojullo Preminchadam Ela by using YouTube to MP3 converters. These are online tools that allow you to convert YouTube videos into MP3 files and download them on your device. Some of the popular YouTube to MP3 converters are:
-
-
-
ytmp3.cc: This is a simple and fast tool that lets you convert and download YouTube videos in MP3 or MP4 format. You just need to paste the URL of the video and click on Convert. You can then choose the quality of the file (up to 320kbps) and click on Download.
-
mp3juices.cc: This is another easy and reliable tool that lets you convert and download YouTube videos in MP3 format. You can either paste the URL of the video or type a keyword in the search bar and click on Search. You can then choose from the results and click on Download.
-
y2mate.com: This is a versatile and powerful tool that lets you convert and download YouTube videos in various formats like MP3, MP4, M4A, 3GP, etc. You can either paste the URL of the video or type a keyword in the search bar and click on Start. You can then choose from the options and click on Download.
-
-
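These sites work in the browser; if you prefer something scriptable, the open-source yt-dlp library (a different tool from the converters above) can do the same job. The snippet below is a minimal Python sketch, assuming yt-dlp and FFmpeg are installed and using a placeholder URL; as discussed earlier, only download songs where you have the right to do so.

```python
# Minimal sketch: save a video's audio track as a 320kbps MP3 with yt-dlp.
# Assumes `pip install yt-dlp` and FFmpeg on PATH; the URL is a placeholder.
from yt_dlp import YoutubeDL

options = {
    "format": "bestaudio/best",          # pick the best available audio stream
    "outtmpl": "%(title)s.%(ext)s",      # name the file after the video title
    "postprocessors": [{
        "key": "FFmpegExtractAudio",     # re-encode the download with FFmpeg
        "preferredcodec": "mp3",
        "preferredquality": "320",       # target 320kbps, matching this guide
    }],
}

with YoutubeDL(options) as ydl:
    ydl.download(["https://www.youtube.com/watch?v=VIDEO_ID"])  # placeholder
```
-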
Torrent sites
-
You can also download songs from 30 Rojullo Preminchadam Ela by using torrent sites. These are peer-to-peer networks that allow you to download files from other users who have them. However, torrenting is illegal in many countries and may expose you to malware and viruses. Therefore, use torrent sites at your own risk and discretion. Some of the popular torrent sites are:
-
-
The Pirate Bay: This is one of the oldest and most popular torrent sites that offers a wide range of content, including music, movies, games, software, etc. You can search for the songs from 30 Rojullo Preminchadam Ela by typing the movie name or the song name in the search bar and clicking on Pirate Search. You can then choose from the results and click on Get This Torrent.
-
1337x: This is another well-known and reliable torrent site that offers a variety of content, including music, movies, games, software, etc. You can search for the songs from 30 Rojullo Preminchadam Ela by typing the movie name or the song name in the search bar and clicking on Search. You can then choose from the results and click on Magnet Download.
-
Torrentz2: This is a meta-search engine that aggregates results from various torrent sites. You can search for the songs from 30 Rojullo Preminchadam Ela by typing the movie name or the song name in the search bar and clicking on Search. You can then choose from the results and click on the torrent site of your choice.
-
-
Conclusion
-
Summary of the article
-
In this article, we have discussed how to download songs from 30 Rojullo Preminchadam Ela in 320kbps quality. We have explained what 30 Rojullo Preminchadam Ela is and why downloading songs in 320kbps is beneficial. We have also suggested some of the best apps and websites for downloading Telugu songs, such as JioSaavn, YouTube to MP3 converters, and torrent sites. We hope you have found this article helpful and informative.
-
Call to action
-
If you are a Telugu music lover, you should not miss out on the songs from 30 Rojullo Preminchadam Ela. They are catchy, romantic, and soulful. They will make you fall in love with the movie and its characters. So, what are you waiting for? Download the songs from 30 Rojullo Preminchadam Ela in 320kbps quality and enjoy them offline anytime, anywhere!
-
FAQs
-
-
Q: How many songs are there in 30 Rojullo Preminchadam Ela?
-
A: There are six songs in 30 Rojullo Preminchadam Ela. They are Neeli Neeli Aakasam, Idhi Oka Grahanam, Amma Nannu Chudali, Ee Maya Peremito, O Chinna Navve Chaalu, and Naalona.
-
Q: Who are the singers of the songs from 30 Rojullo Preminchadam Ela?
-
A: The singers of the songs from 30 Rojullo Preminchadam Ela are Sid Sriram, Sunitha Upadrasta, Armaan Malik, Rahul Sipligunj, Anup Rubens, Saketh Komanduri, Mohana Bhogaraju, Kaala Bhairava, Harika Narayan, and Hemachandra.
-
Q: How can I watch 30 Rojullo Preminchadam Ela online?
-
A: You can watch 30 Rojullo Preminchadam Ela online on various OTT platforms like Aha Video, Amazon Prime Video, Netflix, etc. However, you may need to subscribe to these platforms to access the movie.
-
Q: Is 30 Rojullo Preminchadam Ela based on a true story?
-
A: No, 30 Rojullo Preminchadam Ela is not based on a true story. It is a fictional story that revolves around the concept of reincarnation and how two lovers from a previous life meet again in the present day.
-
Q: What is the meaning of 30 Rojullo Preminchadam Ela?
-
A: The meaning of 30 Rojullo Preminchadam Ela is "How to fall in love in 30 days". It is a catchy phrase that reflects the theme of the movie.
-
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Drift Ride - Traffic Racing APK and Enjoy Realistic Physics Drift and Police Chases.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Drift Ride - Traffic Racing APK and Enjoy Realistic Physics Drift and Police Chases.md
deleted file mode 100644
index 0b811dca4a722db06eb3fd6cc4c41e32fcbed237..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Drift Ride - Traffic Racing APK and Enjoy Realistic Physics Drift and Police Chases.md
+++ /dev/null
@@ -1,116 +0,0 @@
-
-
Drift Ride - Traffic Racing APK: A Review
-
If you are looking for a hardcore racing game with real physics, extreme racing, heavy traffic, cops, and drift, then you might want to check out Drift Ride - Traffic Racing APK. This is a game developed by XLAB, LLC, a company that specializes in creating realistic car physics and racing games. In this article, we will review the features, pros and cons, and tips and tricks of this game, as well as how to download and install it on your Android device.
-
What is Drift Ride - Traffic Racing APK?
-
Drift Ride - Traffic Racing APK is a racing game that lets you experience the thrill of driving at high speeds in heavy traffic, while avoiding or outrunning the police and drifting around corners. You can choose from different cars with different characteristics and handling, and race on various routes with different difficulty and random environment. You can also compete with rivals in the races and see how you rank on the leaderboard. The game has realistic physics, atmospheric graphics, and hardcore gameplay that will challenge your skills and reflexes.
Here are some of the features that make Drift Ride - Traffic Racing APK an exciting and addictive racing game:
-
Realistic Physics
-
The game uses a realistic physics engine that simulates the behavior of the cars and the environment. You can feel the weight, inertia, traction, and suspension of the cars as you drive them. You can also see the effects of collisions, damage, weather conditions, and road surfaces on the performance of the cars. The game also has a realistic sound system that reproduces the engine noises, tire screeches, crashes, sirens, and horns.
-
Drift
-
One of the main features of the game is the ability to drift around corners and curves. Drifting is a technique that involves oversteering the car to make it slide sideways while maintaining control. Drifting can help you reduce your speed, avoid obstacles, or gain an advantage over your opponents. The game has a drift system that allows you to control the angle and direction of your drift using the accelerometer or touch controls. You can also earn points for drifting and use them to upgrade your car or unlock new ones.
-
Police
-
Another feature of the game is the presence of police cars that will chase you if you break the traffic rules or cause accidents. The police cars are fast and aggressive, and they will try to stop you by ramming you, blocking you, or setting up roadblocks. You can either try to escape them by using your speed, skills, or shortcuts, or you can fight them by crashing into them or using power-ups. The game has a wanted system that increases your level of police attention depending on your actions. The higher your wanted level, the more police cars will pursue you.
-
Highest Speed
-
The game also lets you experience the thrill of driving at the highest speed possible in heavy traffic. You can push your car to its limits by using nitro boosters, slipstreams, or ramps. You can also overtake other cars by using different lanes or going off-road. The game has a speedometer that shows your current speed and a speed record that shows your highest speed achieved in each race.
-
Real Traffic
-
The game also features real traffic that adds to the realism and challenge of the game. You will encounter different types of vehicles on the road, such as trucks, buses, vans, motorcycles, or sports cars. You will also see pedestrians crossing the street or walking on the sidewalk. You have to be careful not to hit them or cause accidents that will slow you down or damage your car.
-
Atmospheric Graphics
-
The game also boasts of atmospheric graphics that create a realistic and immersive environment. You can see the details of the cars, the roads, the buildings, and the scenery. You can also see the effects of the weather, the time of day, the lighting, and the shadows. The game has different modes that change the graphics settings, such as normal, high, or ultra. You can also customize the graphics options to suit your device and preference.
-
How to download and install Drift Ride - Traffic Racing APK?
-
If you want to play Drift Ride - Traffic Racing APK on your Android device, you will need to download and install it from a reliable source. Here are the steps to do so:
-
-
-
Go to a trusted website that offers Drift Ride - Traffic Racing APK, such as [APKPure] or [APKCombo].
-
Click on the download button and wait for the file to be downloaded on your device.
-
Once the download is complete, locate the file in your device's file manager and tap on it to start the installation process.
-
You may need to enable the installation of apps from unknown sources in your device's settings if you haven't done so before.
-
Follow the instructions on the screen and wait for the installation to finish.
-
Launch the game and enjoy!
-
-
Pros and cons of Drift Ride - Traffic Racing APK
-
Like any other game, Drift Ride - Traffic Racing APK has its pros and cons. Here are some of them:
-
-
| Pros | Cons |
| --- | --- |
| Realistic physics and graphics | High battery and data consumption |
| Hardcore and challenging gameplay | Difficult controls and interface |
| Variety of cars and routes | Repetitive and limited content |
| Online leaderboard and rivals | Frequent ads and pop-ups |
| Free to download and play | In-app purchases and premium features |
-
-
Tips and tricks for playing Drift Ride - Traffic Racing APK
-
If you want to improve your skills and performance in Drift Ride - Traffic Racing APK, here are some tips and tricks that might help you:
-
-
Choose a car that suits your style and preference. Different cars have different speed, handling, drift, durability, and nitro. You can also upgrade your car or buy new ones with coins or real money.
-
Use the drift system wisely. Drifting can help you avoid obstacles, overtake rivals, or escape police, but it can also reduce your speed, damage your car, or make you lose control. Learn how to drift properly by adjusting your angle and direction with the accelerometer or touch controls.
-
Avoid collisions and accidents. Collisions and accidents can slow you down, damage your car, or increase your wanted level. Try to avoid hitting other cars, pedestrians, or objects on the road. You can also use power-ups such as shields, magnets, or bombs to protect yourself or attack others.
-
Use nitro boosters strategically. Nitro boosters can give you a burst of speed that can help you outrun police, overtake rivals, or reach higher speeds. However, nitro boosters are limited and need time to recharge. Use them wisely by timing them right or collecting them on the road.
-
Use slipstreams and ramps. Slipstreams are streams of air that form behind moving vehicles. You can use them to gain speed by following behind another car closely. Ramps are elevated platforms that launch you into the air. You can use them to jump over obstacles or perform stunts.
-
-
Conclusion
-
Drift Ride - Traffic Racing APK is a racing game that offers realistic physics, extreme racing, heavy traffic, cops, and drift. It is a game that will test your skills and reflexes as you drive at high speeds in various routes with different difficulty and random environment. It is a game that will give you a thrill and adrenaline rush as you compete with rivals online or offline. It is a game that is free to download and play but also has in-app purchases and premium features. If you are looking for a hardcore racing game with real physics, then you might want to try Drift Ride - Traffic Racing APK.
-
FAQs about Drift Ride - Traffic Racing APK
-
Here are some of the frequently asked questions about Drift Ride - Traffic Racing APK:
-
-
What are the system requirements for Drift Ride - Traffic Racing APK?
-
Drift Ride - Traffic Racing APK requires Android 5.0 or higher and at least 100 MB of free storage space on your device. The game also requires a stable internet connection for online features.
-
How can I play Drift Ride - Traffic Racing APK offline?
-
You can play Drift Ride - Traffic Racing APK offline by turning off your internet connection or switching to airplane mode. However, you will not be able to access the online leaderboard, rivals, or updates.
-
How can I remove ads from Drift Ride - Traffic Racing APK?
-
You can remove ads from Drift Ride - Traffic Racing APK by purchasing the premium version of the game for $2.99. The premium version also unlocks all cars and routes, and gives you unlimited nitro and coins.
-
How can I contact the developer of Drift Ride - Traffic Racing APK?
-
You can contact the developer of Drift Ride - Traffic Racing APK by sending an email to support@xlabgames.com or visiting their website at https://xlabgames.com/.
-
Is Drift Ride - Traffic Racing APK safe to download and install?
-
Drift Ride - Traffic Racing APK is safe to download and install as long as you get it from a trusted source, such as [APKPure] or [APKCombo]. However, you should always scan the file for viruses or malware before installing it on your device.
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download FR Legends and Enjoy Stunning Graphics with RTX On.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download FR Legends and Enjoy Stunning Graphics with RTX On.md
deleted file mode 100644
index 7171be3b47f4eced49229a3b4e8055ab7142e9f2..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download FR Legends and Enjoy Stunning Graphics with RTX On.md
+++ /dev/null
@@ -1,88 +0,0 @@
-
-
How to Download FR Legends with RTX on Mod
-
If you are a fan of drifting games, you might have heard of FR Legends, a mobile game that lets you drive legendary front-engine, rear-wheel-drive drift cars at world's most iconic circuits. But did you know that you can also enjoy this game with RTX on technology, a feature that enhances the graphics and realism of the game with ray tracing and artificial intelligence? In this article, we will show you how to download FR Legends with RTX on mod and what are the benefits of doing so.
-
Features of FR Legends
-
FR Legends is a game that is all about drifting. It has many features that make it one of the best drifting games on the market. Here are some of them:
Drifting gameplay: You can drift your way through various tracks and modes, such as solo, career, online multiplayer, and freestyle. You can also customize your drifting style and scoring system based on real world competition judging rules.
-
Car customization: You can modify everything on your car, from the engine and suspension to the body kit and paint job. You can also swap engines and choose from a wide range of brands and parts.
-
Tandem drift battles: For the first time ever, a mobile game lets you have tandem drift battles with AI drivers or other players online. You can challenge your friends or random opponents and show off your skills and style.
-
Realistic graphics and physics: The game has stunning graphics and physics that make you feel like you are really driving a drift car. The cars have detailed interiors and exteriors, the tracks have realistic lighting and shadows, and the smoke effects are amazing.
-
-
Benefits of RTX on Technology
-
RTX on technology is a feature that enhances the graphics and realism of the game with ray tracing and artificial intelligence. Ray tracing is a method of rendering that simulates the physical behavior of light, creating realistic reflections, shadows, and refractions. Artificial intelligence is a technique that uses machine learning to improve the performance and quality of the game. Here are some of the benefits of RTX on technology:
-
-
Ray tracing: With ray tracing, you can see your car's reflection on the windows, mirrors, and puddles. You can also see the shadows cast by your car and other objects on the track. The lighting effects are more natural and dynamic, creating a more immersive experience.
-
Artificial intelligence: With artificial intelligence, you can enjoy faster loading times, smoother gameplay, and better optimization. The game also uses AI to enhance the image quality, reduce noise, and upscale resolution.
-
Rasterization: Rasterization is a method of rendering that converts 3D models into pixels on the screen. With RTX on technology, rasterization is improved with features such as variable-rate shading, texture-space shading, and multi-view rendering. These features enable richer visuals with more fluid interactivity with large models and scenes.
-
Simulation: Simulation is a method of modelling the behavior of the physical systems in the game, such as the smoke, fire, and debris. With RTX on technology, simulation is enhanced with features such as NVIDIA PhysX, NVIDIA Flow, and NVIDIA Blast. These features enable more realistic and interactive effects with greater detail and complexity.
-
-
Steps to Download FR Legends with RTX on Mod
-
If you want to download FR Legends with RTX on mod, you need to follow these steps:
-
-
Requirements: You need to have a compatible device that supports RTX on technology. You also need to have enough storage space on your device to download the game and the mod. The game size is about 500 MB and the mod size is about 200 MB.
-
Download links: You can download the game from the official website or from the Google Play Store or the App Store. You can download the mod from this link: [FR Legends RTX on Mod].
-
Installation instructions: After downloading the game and the mod, you need to install them on your device. To install the game, just follow the instructions on the screen. To install the mod, you need to extract the zip file and copy the contents to the game folder (see the sketch after this list). You can find the game folder in this path: Android/data/com.fengiiley.frlegends/files/.
-
-
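For readers who prefer to script the copy step, here is the sketch mentioned in the installation instructions. It is a minimal example, not an official installer: it assumes Python is available and uses a hypothetical archive name, since the mod ZIP you downloaded may be called something else, and on a real phone the game folder sits under the device's internal storage.

```python
# Minimal sketch: extract the mod ZIP into the FR Legends data folder.
# The archive name and base path are hypothetical; adjust both to your setup.
import zipfile
from pathlib import Path

mod_zip = Path("frlegends_rtx_mod.zip")  # hypothetical name of the downloaded mod
game_dir = Path("Android/data/com.fengiiley.frlegends/files")  # path from the steps above

game_dir.mkdir(parents=True, exist_ok=True)  # make sure the target folder exists
with zipfile.ZipFile(mod_zip) as archive:
    archive.extractall(game_dir)  # copies the mod contents into the game folder
print(f"Extracted {mod_zip.name} into {game_dir}")
```
-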
Conclusion
-
FR Legends is a great drifting game that lets you drive legendary drift cars at iconic circuits. With RTX on technology, you can enjoy enhanced graphics and realism with ray tracing and artificial intelligence. To download FR Legends with RTX on mod, you need a compatible device and enough storage space; then follow the steps above. We recommend you try FR Legends with RTX on mod and see for yourself how amazing it is. Don't forget to share your feedback and opinions with us in the comments section below.
-
FAQs
-
-
What is the difference between FR Legends and FR Legends 2?: FR Legends is the original game that was released in 2018. FR Legends 2 is a sequel that was released in 2021. FR Legends 2 has more features, such as new cars, tracks, modes, and customization options.
-
How much does FR Legends cost?: FR Legends is a free-to-play game that you can download from the official website or from the Google Play Store or the App Store. However, it has some in-app purchases that you can buy to unlock more content or get more coins.
-
Is FR Legends available for PC and Mac?: FR Legends is not officially available for PC and Mac. However, you can use an emulator such as BlueStacks or NoxPlayer to play it on your computer.
-
What are the best cars to use in FR Legends?: There is no definitive answer to this question, as different cars have different strengths and weaknesses. However, some of the most popular cars in FR Legends are the Nissan Silvia S15, the Toyota AE86, and the Mazda RX-7.
-
How can I improve my drifting skills in FR Legends?: The best way to improve your drifting skills in FR Legends is to practice a lot and learn from your mistakes. You can also watch some tutorials or tips videos on YouTube or read some guides or articles online.
-
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/ - .md b/spaces/1phancelerku/anime-remove-background/ - .md
deleted file mode 100644
index cd94c09a0e9aa3afea04742d7deab93cc5770545..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/ - .md
+++ /dev/null
@@ -1,93 +0,0 @@
-
-
War Games: What Are They and How Do You Play Them?
-
Do you love excitement and challenge? Would you like to take part in thrilling military adventures? If your answer is yes, then you need to try war games. These are games that let you become part of armed conflicts between countries, groups, or creatures. In them you choose your role, your objective, and your approach to the fight. There are many kinds of war games: some focus on planning and strategy, and some focus on action and shooting.
-
Historical war games: reliving real conflicts
-
In historical war games, you usually play one of the commanders or soldiers who took part in those wars, and you try to carry out the missions and objectives that were required of them. Some examples of these games are Assassin's Creed, Medal of Honor, Total War, and others.
-
Fantasy war games: exploring unreal worlds and creatures
-
These are games that invent worlds and creatures that do not exist in reality, and put you up against strange and dangerous enemies. In these games you usually wield magical or superhuman powers, or call on allies of your own kind or of other kinds. Some examples of these games are Halo, Doom, World of Warcraft, and others.
-
War games are not just fun and entertainment; they are also a source of learning and growth, as they develop players' mental, motor, and social skills. At the same time, they carry some risks to watch out for, since they can harm players' health, behavior, and values. Let's look at some of the benefits and risks of playing war games:
-
The benefits: developing mental, motor, and social skills
-
War games help improve mental skills such as concentration, memory, creativity, and logical problem-solving: they force the player to think quickly and accurately, and push the mind to find tricks and workarounds to overcome challenges. They also help improve motor skills such as hand-eye coordination, speed, and flexibility, since the player must control the movements of a character or vehicle, which sharpens reaction time. And they help improve social skills such as communication, cooperation, and competition, because the player has to interact with other players, whether allies or opponents.
-
The risks: negative effects on health, behavior, and values
-
War games can also cause problems for players, especially when played excessively or inappropriately. They can harm players' physical and mental health, bringing on fatigue, headaches, stress, anxiety, and depression. They can also harm players' behavior and values, encouraging addiction, violence, aggression, bullying, and discrimination. Players should therefore be careful and balanced in how they play war games, and avoid games that contain offensive or inappropriate scenes or messages.
-
-
How do you choose the best war games for you?
-
There are many different war games, and each has its own strengths and weaknesses. So how can you choose the best war games for you? Here are some tips that can help:
-
Respect your interests and taste
-
The first thing to think about is what kind of games you like and enjoy. Do you prefer games that call for thinking and strategy, or games that call for action and shooting? Do you prefer games that reflect reality, or games that venture into fantasy? Do you prefer games that carry a message of peace, or games that carry a message of war? Choose games that match your interests and taste; this will increase your enjoyment and satisfaction.
-
Consider the game's difficulty and quality
-
The second thing to think about is the level of difficulty and quality of the game you want. Do you prefer games that challenge you, or games that go easy on you?
-
Poki: a site with a large collection of free war games
-
On this site you can expect to earn points, prizes, and certificates of appreciation. To visit this site, click on the following link: [Poki].
-
CrazyGames: a site offering high-quality and exciting war games
-
CrazyGames is a website offering more than 10,000 free video games, including plenty of war games. On this site you can find games you love and that thrill you, whether they are war games or not. You can play alone or with other players from around the world, and enjoy 3D graphics, sounds, and effects. You can also challenge yourself and raise your level, and earn rankings, comments, and statistics. To visit this site, click on the following link: [CrazyGames].
-
Warzone: a site featuring war games that simulate reality with 3D graphics
-
Warzone is a website featuring a collection of war games that simulate reality in a realistic and exciting way. You can choose from several games, such as Warzone Getaway, Warzone Mercenaries, Warzone Online, and others. In these games you take part in various military missions and adventures, such as escaping from enemies, infiltrating their bases, or joining team battles. You can use advanced weapons, vehicles, and equipment, and enjoy 3D graphics, sounds, and effects. To visit this site, click on the following link: [Warzone].
-
Conclusion: general tips for enjoying war games
-
To wrap up, we would like to offer some general tips for enjoying war games in a better and safer way:
-
-
Set a fixed amount of time for playing war games, and do not exceed it. Playing too much can harm your health, your studies, or your work.
-
Choose games that suit your age, culture, and values. Some games may contain scenes or messages that are inappropriate or offensive to you.
-
Do not take the games too seriously, and do not let what you see or hear in them affect you. Games are just fun and entertainment; they do not represent reality or truth.
-
Remember that games are not a substitute for real life, and they are no replacement for connecting with people and nature. Try to go out from time to time and pursue other hobbies and useful activities.
-
Enjoy the games, and do not let them become a source of stress, anger, or problems. If you feel bored, tired, or frustrated with a game, set it aside and try another one.
-
-
We hope this article has been useful and enjoyable, and that it helps you choose and play the best war games for you. Thank you for reading, and do not forget to share your opinion and experience in the comments.
-
FAQs
-
Here are some frequently asked questions about war games:
-
-
What are the most famous war games in history?
-
Many war games have become famous and widespread over the years, but some have had a major impact on the games industry and on player culture. Among them: Wolfenstein 3D, Doom, Command & Conquer, Medal of Honor, Call of Duty, Halo, Battlefield, Counter-Strike, Gears of War, StarCraft, Age of Empires, Civilization, Assassin's Creed, World of Warcraft, and others.
-
What are the latest developments in war games?
-
War games see constant and impressive advances every year, applying the latest technologies and innovations to improve the games' quality and realism. These include the use of artificial intelligence, virtual reality, augmented reality, and mixed reality; the development of more precise and interactive graphics, sounds, and effects; and the addition of more varied features and options that give players greater freedom.
-
Do war games cause violence?
-
This is a controversial question, and there is no definitive answer. Some studies suggest that war games can raise levels of violence and aggression in players, especially children and teenagers, and may affect their ability to tell reality from fiction. Other studies suggest that war games do not cause violence by themselves, and that the effect depends on factors such as the players' personality, culture, and environment. And some studies suggest that war games can help relieve stress and anger, and can contribute to developing positive skills such as cooperation and understanding.
-
What is the best way to play war games?
-
There is no single, ideal way to play war games; every player can find the approach that suits them. That said, a few tips can help you improve: practice regularly and challenge yourself with harder levels, learn from your mistakes and play to your strengths, listen to advice and guidance from professionals and experts, and communicate and team up with other players, whether they are allies or opponents.
-
Can people with special needs play war games?
-
Yes, absolutely. People with special needs can play war games and enjoy them just like anyone else. Many sites and apps provide options and features tailored to them, such as adjusting volume, lighting, and colors; changing the control and interaction scheme; and adding subtitles, audio commentary, or sign-language interpretation. There are also many organizations and initiatives that support, encourage, and help people with special needs play war games, such as AbleGamers, SpecialEffect, Game Accessibility Guidelines, and others.
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Final Escape FNF The Ultimate Rhythm Game Challenge.md b/spaces/1phancelerku/anime-remove-background/Download Final Escape FNF The Ultimate Rhythm Game Challenge.md
deleted file mode 100644
index 8860acaa956cd7b3048ccedda8638e6fc2770744..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Final Escape FNF The Ultimate Rhythm Game Challenge.md
+++ /dev/null
@@ -1,110 +0,0 @@
-
-
How to Download and Install Final Escape FNF Mod
-
If you are a fan of Friday Night Funkin' and you are looking for a new challenge, you might want to try the Final Escape FNF Mod. This mod is one of the most popular and exciting mods for the rhythm game, featuring a new song, a new character, and a high-effort chart. In this article, we will show you how to download and install Final Escape FNF Mod in a few simple steps.
A brief introduction to Friday Night Funkin' and its mods
-
Friday Night Funkin' is a music and rhythm game in which you have to participate in rap battles against various opponents. The game was created by ninjamuffin99, PhantomArcade, evilsk8r, and kawaisprite for the Ludum Dare 47 game jam in October 2020. Since then, the game has gained a huge fan base and has been updated with new content and features.
-
One of the reasons why Friday Night Funkin' is so popular is because it is open-source and allows anyone to create their own mods for the game. Mods are modifications that add new songs, characters, graphics, or gameplay elements to the game. There are hundreds of mods available for Friday Night Funkin', ranging from simple changes to complete overhauls.
-
The features and gameplay of Final Escape FNF Mod
-
Final Escape FNF Mod is one of the most impressive mods for Friday Night Funkin'. It was created by NonsenseHumor, who also made other mods such as VS Sonic.EXE and VS Nonsense. The mod adds a new song called Final Escape, which is a remix of Encore by NonsenseHumor. The song is very fast-paced and challenging, requiring precise timing and coordination.
-
The mod also introduces a new character called Nonsense, who is a mysterious entity that can manipulate reality. Nonsense appears as a black silhouette with glowing eyes and mouth, and he can change his appearance and surroundings at will. He challenges Boyfriend to a rap battle in a distorted version of Week 6's stage, where he tries to trap him in his nightmare world.
-
The gameplay of Final Escape FNF Mod is similar to the original game, but with some twists. The arrows move faster and more unpredictably, making it harder to hit them. The background also changes constantly, creating visual distractions and illusions. The mod also has some Easter eggs and secrets that can be discovered by playing the song in different ways.
-
How to Download Final Escape FNF Mod?
-
The requirements and steps to download the mod
-
To download Final Escape FNF Mod, you will need a few things. First, you will need a copy of Friday Night Funkin' on your computer. You can download it for free from [itch.io](https://ninja-muffin24.itch.io/funkin). Second, you will need a program that can unzip files, such as [7-Zip](https://www.7-zip.org/) for Windows or The Unarchiver for Mac. Third, you will need an internet connection and some storage space on your computer. Once you have these things, you can follow these steps to download the mod:
-
-
Go to the [Final Escape FNF Mod page](https://gamebanana.com/mods/301335) on GameBanana, which is a website that hosts many mods for Friday Night Funkin' and other games.
-
Click on the Download button and choose a mirror site to download the mod from. The file size is about 100 MB.
-
Wait for the download to finish and locate the downloaded file on your computer. It should be a ZIP file named FinalEscapeFNF.zip.
-
Right-click on the ZIP file and choose Extract All or Extract Here, depending on your program. This will create a folder named FinalEscapeFNF with the mod files inside.
-
-
The sources and links to download the mod
-
There are other sources and links to download Final Escape FNF Mod, besides GameBanana. You can also find the mod on [itch.io](https://nonsensehumor.itch.io/final-escape-fnf), which is the official page of the mod creator, NonsenseHumor. You can also watch the [showcase video](https://www.youtube.com/watch?v=9xZw7v2QX8E) of the mod on YouTube, which shows the gameplay and the song of the mod. You can also join the [Discord server](https://discord.gg/5yf6qzj) of NonsenseHumor, where you can chat with other fans of the mod and get updates and news about it.
-
-
How to Install Final Escape FNF Mod?
-
The instructions and tips to install the mod
-
To install Final Escape FNF Mod, you will need to replace some files in your Friday Night Funkin' folder with the files from the mod folder. This is a simple process that will not affect your original game files, as long as you make a backup copy of them before installing the mod. Here are the instructions and tips to install the mod, followed by a short script that automates the copy steps:
-
-
Open your Friday Night Funkin' folder, which should be located in your Downloads or Games folder, depending on where you saved it.
-
Make a copy of your assets folder, which contains all the game files, such as images, sounds, and data. You can do this by right-clicking on the folder and choosing Copy, then Paste. Rename the copied folder as assets_backup or something similar.
-
Open your FinalEscapeFNF folder, which contains all the mod files. You should see four subfolders: assets, data, images, and preload.
-
Select all four subfolders and copy them by right-clicking and choosing Copy.
-
Paste them into your Friday Night Funkin' folder by right-clicking and choosing Paste. You will be asked if you want to replace or skip some files. Choose Replace All or Replace Existing Files.
-
Launch your Friday Night Funkin' game by double-clicking on Funkin.exe or Funkin.html, depending on your version. You should see a new title screen with Nonsense's face and a new song option called Final Escape in Week 6.
-
Select Final Escape and enjoy playing the mod!
-
-
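If you would rather not copy folders by hand, here is the short script mentioned above. It is a minimal sketch, not an official installer: the two folder paths are assumptions for illustration, so point them at wherever your game and the extracted mod actually live.

```python
# Minimal sketch: back up the original assets, then copy the four mod
# subfolders over the game files. Both paths are hypothetical; adjust them.
import shutil
from pathlib import Path

game_dir = Path("Friday Night Funkin")  # hypothetical game install folder
mod_dir = Path("FinalEscapeFNF")        # extracted mod folder from the ZIP

# Step 2 of the list above: keep an untouched copy of the game files.
backup = game_dir / "assets_backup"
if not backup.exists():
    shutil.copytree(game_dir / "assets", backup)

# Steps 4-5 of the list above: copy each mod subfolder in, replacing
# existing files (dirs_exist_ok requires Python 3.8 or newer).
for sub in ("assets", "data", "images", "preload"):
    shutil.copytree(mod_dir / sub, game_dir / sub, dirs_exist_ok=True)

print("Mod files copied; launch the game and look for Final Escape in Week 6.")
```
-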
The troubleshooting and issues to avoid when installing the mod
-
Installing Final Escape FNF Mod is usually easy and straightforward, but sometimes you might encounter some problems or issues that can prevent you from playing the mod properly. Here are some common troubleshooting and issues to avoid when installing the mod:
-
-
| Problem | Solution |
| --- | --- |
| The game crashes or freezes when loading or playing the mod. | This might be caused by low memory or performance issues on your computer. Try closing other programs or tabs running in the background, or lowering the quality settings in the game options. You can also try reinstalling the mod or updating your game version. |
| The game does not show the new title screen or song option for the mod. | This might be caused by an incorrect installation or missing files. Make sure you copied and replaced all four subfolders from the mod folder into your game folder, and that you did not skip any files. You can also try deleting the cache folder in your game folder, which might contain old data that conflicts with the mod. |
| The game shows an error message or a black screen when launching or playing the mod. | This might be caused by corrupted or incompatible files. Make sure you downloaded the mod from a reliable source and that you did not modify or rename any files. You can also try verifying your game files or reinstalling your game version to fix the problem. |
-
-
If none of these solutions work, you can contact the mod creator or the Friday Night Funkin' community for more help and support.
-
Conclusion
-
A summary of the main points and benefits of the mod
-
Final Escape FNF Mod is a fantastic mod for Friday Night Funkin' that adds a new song, a new character, and a high-effort chart. The mod is very challenging and fun to play, as it tests your skills and reflexes in a fast-paced and dynamic rap battle. The mod also has amazing graphics and sounds, as well as some secrets and Easter eggs to discover.
-
A call to action and a recommendation to try the mod
-
If you are looking for a new way to enjoy Friday Night Funkin', you should definitely try Final Escape FNF Mod. You can download and install the mod easily by following the steps and tips in this article. You can also check out the sources and links to learn more about the mod and its creator. Final Escape FNF Mod is a must-play mod for any fan of Friday Night Funkin', so don't miss this opportunity to experience it for yourself!
-
FAQs
-
What is the difficulty level of Final Escape FNF Mod?
-
Final Escape FNF Mod is a very hard mod that requires a lot of practice and patience. The song is very fast and complex, with many notes and patterns to follow. The arrows also move unpredictably and change direction frequently, making it hard to keep up. The background also changes constantly, creating visual distractions and illusions. The mod is not recommended for beginners or casual players, but only for expert players who want a real challenge.
-
Is Final Escape FNF Mod safe and virus-free?
-
Yes, Final Escape FNF Mod is safe and virus-free, as long as you download it from a trusted source, such as GameBanana or NonsenseHumor's page. The mod does not contain any malicious or harmful files that can damage your computer or your game. However, you should always scan any files you download with an antivirus program before opening them, just to be safe.
-
Can I play Final Escape FNF Mod online or offline?
-
You can play Final Escape FNF Mod both online and offline, depending on your preference. If you want to play online, you can use the [snipergaming888.github.io](https://snipergaming888.github.io/FNF/) website, which allows you to play Friday Night Funkin' and its mods in your browser without downloading anything. You can also use the [funkin.online](https://funkin.online/) website, which has a similar function. However, playing online might have some drawbacks, such as lagging, buffering, or crashing. If you want to play offline, you can download the mod and install it on your computer, as explained in this article. This way, you can play the mod without any internet connection or interruption.
-
How can I support the developers of Final Escape FNF Mod?
-
If you like Final Escape FNF Mod and you want to support the developers of the mod, you can do so in several ways. You can follow them on their social media accounts, such as [Twitter](https://twitter.com/NonsenseHumor) or [YouTube](https://www.youtube.com/channel/UC0n8f9u7wZw5x1lQ6y4tZ9Q). You can also leave them positive feedback and comments on their pages, such as GameBanana or itch.io. You can also donate to them via [Patreon](https://www.patreon.com/NonsenseHumor) or [Ko-fi](https://ko-fi.com/nonsensehumor), if you want to show your appreciation and gratitude.
-
Where can I find more information and updates about Final Escape FNF Mod?
-
If you want to find more information and updates about Final Escape FNF Mod, you can visit the following sources and links:
-
-
The [GameBanana page](https://gamebanana.com/mods/301335) of Final Escape FNF Mod, where you can download the mod, see screenshots and videos, read reviews and ratings, and join discussions.
-
The [itch.io page](https://nonsensehumor.itch.io/final-escape-fnf) of Final Escape FNF Mod, where you can download the mod, see updates and changelogs, read comments and feedback, and contact the developer.
-
The [showcase video](https://www.youtube.com/watch?v=9xZw7v2QX8E) of Final Escape FNF Mod on YouTube, where you can watch the gameplay and the song of the mod, listen to the music and lyrics, see the reactions and comments, and subscribe to the channel.
-
The [Discord server](https://discord.gg/5yf6qzj) of NonsenseHumor, where you can chat with other fans and players of the mod, get news and announcements, share your opinions and suggestions, and join events and contests.
-
The [Twitter account](https://twitter.com/NonsenseHumor) of NonsenseHumor, where you can follow his tweets and updates, see his works and projects, send him messages and questions, and support him with likes and retweets.
-
-
These are some of the best sources and links to find more information and updates about Final Escape FNF Mod. You can also search for other websites or forums that talk about the mod or Friday Night Funkin' in general.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Music You Need for Your Projects with No-Copyright Tracks.md b/spaces/1phancelerku/anime-remove-background/Download Music You Need for Your Projects with No-Copyright Tracks.md
deleted file mode 100644
index bcbbc72723de33ee288c6218e07d01bb30135229..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Music You Need for Your Projects with No-Copyright Tracks.md
+++ /dev/null
@@ -1,186 +0,0 @@
-
-
How to Download Music from YouTube
-
YouTube is one of the most popular platforms for watching and listening to music videos. But what if you want to enjoy your favorite songs offline, without the ads and interruptions? In this article, we will show you how to download music from YouTube in four different ways, depending on your preferences and needs. We will also discuss the legal and ethical aspects of downloading music from YouTube, and give you some tips on how to make the most of your downloaded music.
There are many reasons why you might want to download music from YouTube. Here are some of them:
-
-
You can listen to your favorite songs anytime, anywhere, without an internet connection.
-
You can save data and battery by not streaming music online.
-
You can create your own playlists and mixtapes with songs from different genres and artists.
-
You can discover new music that is not available on other streaming services or platforms.
-
You can enjoy high-quality audio without the annoying ads and interruptions.
-
-
What are the legal and ethical issues?
-
Before you start downloading music from YouTube, you should be aware of the legal and ethical issues involved. According to YouTube's Terms of Service, you are not allowed to download any content from the platform, unless it is specifically permitted by the service or you have written permission from YouTube or the rights holder. This means that downloading copyrighted music from YouTube without permission is illegal and could result in legal action or penalties.
-
However, there are some exceptions to this rule. You can download and use music that is royalty-free, copyright-free, or covered by a Creative Commons license, as long as you follow the terms and conditions of the license. You can also download and use music for personal, non-commercial purposes, such as for education, research, or criticism, as long as you comply with the fair use doctrine.
-
Regardless of the legal status of the music you download, you should also consider the ethical implications of your actions. Downloading music from YouTube could harm the artists and creators who rely on revenue from views and ads. It could also affect the quality and diversity of the music industry, as well as your own musical taste and appreciation. Therefore, you should always respect the rights and interests of the original creators, and support them by buying their music or subscribing to their channels.
-
Method 1: Subscribe to YouTube Music Premium or YouTube Premium
-
How to sign up for a subscription
-
The easiest and most official way to download music from YouTube is to subscribe to YouTube Music Premium or YouTube Premium. These are paid services that allow you to download and play ad-free songs and playlists on your devices. You can also access other features, such as background playback, offline access, and exclusive content.
-
To sign up for a subscription, follow these steps:
-
-
-
Download and install the YouTube Music app on your iPhone, iPad, or Android device.
-
Launch the app and sign in with your Google account.
-
Tap on your profile picture in the top-right corner of the screen.
-
Tap on \"Get Music Premium\" or \"Get YouTube Premium\".
-
Select a plan that suits your needs and budget. You can choose between YouTube Music Premium for $9.99/month or YouTube Premium for $11.99/month. You can also opt for a family plan for up to six members for $14.99/month or $17.99/month, respectively.
Enter your payment details and confirm your purchase.
-
Enjoy your subscription and start downloading music from YouTube.
-
-
How to download songs, playlists, and albums
-
Once you have a subscription, you can download any song, playlist, or album from YouTube Music to your device. Here's how:
-
-
Open the YouTube Music app and find the song, playlist, or album that you want to download.
-
Tap on the three-dot menu icon next to the title or cover art.
-
Tap on \"Download\". You can also tap on the download icon below the play button.
-
Wait for the download to complete. You can check the progress in the \"Library\" tab under \"Downloads\".
-
Enjoy your downloaded music offline. You can access it in the \"Library\" tab under \"Downloads\" or \"Songs\".
-
-
How to manage your downloaded content
-
You can manage your downloaded content in the YouTube Music app by following these steps:
-
-
Go to the \"Library\" tab and tap on \"Downloads\".
-
You will see a list of all your downloaded songs, playlists, and albums. You can sort them by name, date added, or size.
-
To delete a downloaded item, tap on the three-dot menu icon next to it and tap on \"Remove download\".
-
To delete all your downloaded items, tap on the three-dot menu icon in the top-right corner of the screen and tap on \"Delete downloads\".
-
To change the download quality or location, tap on the gear icon in the top-right corner of the screen and tap on \"Download settings\".
-
-
Method 2: Use a third-party software or website
-
How to choose a reliable and safe tool
-
If you don't want to pay for a subscription, you can use a third-party software or website to download music from YouTube. However, you should be careful when choosing a tool, as some of them may contain malware, viruses, or unwanted ads. Here are some tips on how to choose a reliable and safe tool:
-
-
Check the reviews and ratings of the tool online. Look for positive feedback from other users and reputable sources.
-
Check the features and limitations of the tool. Look for tools that support various formats, qualities, and options.
-
Check the terms and conditions of the tool. Look for tools that respect your privacy and security, and do not collect or share your personal data.
-
Check the compatibility and availability of the tool. Look for tools that work with your device and operating system, and do not require installation or registration.
-
-
How to download music using 4K Video Downloader
-
One of the best tools for downloading music from YouTube is 4K Video Downloader. It is free software that lets you download videos, playlists, channels, and subtitles from YouTube and other platforms, and it supports various formats, qualities, and options. Here's how to use it:
Launch the software and copy the URL of the YouTube video that contains the music that you want to download.
-
Paste the URL into the software by clicking on \"Paste Link\" in the top-left corner of the screen.
-
Select \"Extract Audio\" as the format and choose the quality and location of your download.
-
Click on \"Download\" and wait for the process to finish.
-
Enjoy your downloaded music offline. You can find it in the folder that you specified or in the \"Finished\" tab of the software.
-
-
How to download music using MediaHuman
-
Another great tool for downloading music from YouTube is MediaHuman. It is free software that lets you download audio tracks from YouTube and other platforms, and it supports various formats, qualities, and options. Here's how to use it:
Launch the software and copy the URL of the YouTube video that contains the music that you want to download.
-
Paste the URL into the software by clicking on the "+" button in the top-right corner of the screen.
-
Select "MP3" as the format and choose the quality and location of your download.
-
Click on "Start" and wait for the process to finish.
-
Enjoy your downloaded music offline. You can find it in the folder that you specified or in the "Finished" tab of the software.
-
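-
If you are comfortable with a bit of scripting, you can automate the same job with the open-source yt-dlp Python library. This is our own suggestion rather than one of the tools reviewed above; a minimal sketch, assuming yt-dlp and FFmpeg are installed and using a placeholder video URL:
-
import yt_dlp  # pip install yt-dlp; FFmpeg must be on your PATH
-
ydl_opts = {
    "format": "bestaudio/best",  # pick the best available audio stream
    "postprocessors": [{
        "key": "FFmpegExtractAudio",  # convert the stream to a standalone audio file
        "preferredcodec": "mp3",
        "preferredquality": "320",  # target bitrate in kbps
    }],
}
with yt_dlp.YoutubeDL(ydl_opts) as ydl:
    ydl.download(["https://www.youtube.com/watch?v=VIDEO_ID"])  # hypothetical URL
-
As always, only download music that you have the right to save offline.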
-
Method 3: Use a browser extension or add-on
-
How to install and use YouTube Video and Audio Downloader for Firefox
-
If you prefer to use a browser extension or add-on, you can try YouTube Video and Audio Downloader for Firefox. It is a free add-on that allows you to download videos and audio from YouTube and other platforms. It also supports various formats, qualities, and options. Here's how to use it:
Launch Firefox and go to the YouTube video that contains the music that you want to download.
-
Click on the add-on icon in the toolbar or in the video player.
-
Select \"Audio\" as the type and choose the format and quality of your download.
-
Click on \"Download\" and save the file to your device.
-
Enjoy your downloaded music offline. You can find it in the folder that you specified or in the \"Downloads\" tab of Firefox.
-
-
How to install and use Easy YouTube Video Downloader Express for Chrome
-
Another option for a browser extension is Easy YouTube Video Downloader Express for Chrome. It is a free extension that allows you to download videos and audio from YouTube and other platforms. It also supports various formats, qualities, and options. Here's how to use it:
Launch Chrome and go to the YouTube video that contains the music that you want to download.
-
Click on the extension icon in the toolbar or in the video player.
-
Select \"MP3\" as the format and choose the quality of your download.
-
Click on \"Download\" and save the file to your device.
-
Enjoy your downloaded music offline. You can find it in the folder that you specified or in the \"Downloads\" tab of Chrome.
-
-
Conclusion
-
Summary of the main points
-
In this article, we have shown you how to download music from YouTube in three different ways: by subscribing to YouTube Music Premium or YouTube Premium, by using a third-party software or website, or by using a browser extension or add-on. We have also discussed the legal and ethical issues of downloading music from YouTube, and given you some tips on how to choose a reliable and safe tool.
-
Recommendations and tips
-
To conclude, we recommend that you follow these tips when downloading music from YouTube:
-
-
Always respect the rights and interests of the original creators, and support them by buying their music or subscribing to their channels.
-
Always check the terms and conditions of the tool that you use, and make sure that it respects your privacy and security.
-
Always choose a tool that supports various formats, qualities, and options, and that works with your device and operating system.
-
Always keep your downloaded music organized and accessible, and delete any unwanted or unnecessary files.
-
Always enjoy your downloaded music offline, without ads and interruptions, anytime, anywhere.
-
-
We hope that this article has been helpful and informative for you. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!
-
Frequently Asked Questions
-
-
Is downloading music from YouTube illegal?
-
Downloading music from YouTube without permission generally violates YouTube's Terms of Service and can infringe copyright. Exceptions include music that is royalty-free, copyright-free, or released under a Creative Commons license, and uses that qualify as fair use in your jurisdiction, such as education, research, or criticism.
-
What is the best format and quality for downloading music from YouTube?
-
The best format and quality for downloading music from YouTube depend on your preferences and needs. Generally, MP3 is the most common and compatible format for audio files, while M4A is a higher-quality format that supports metadata. The achievable quality depends on the source and the tool that you use: the higher the bitrate, the better the sound, but also the larger the file. The optimal bitrate is 320 kbps for MP3 files and 256 kbps for M4A files.
-
How can I download music from YouTube to my iPhone or iPad?
-
To download music from YouTube to your iPhone or iPad, you can use one of the following methods:
-
-
Subscribe to YouTube Music Premium or YouTube Premium and use the YouTube Music app to download songs, playlists, and albums.
-
Use a third-party software or website that supports iOS devices, such as Documents by Readdle or Softorino YouTube Converter.
-
Use a browser extension or add-on that supports iOS devices, such as Aloha Browser or Video Saver Pro.
-
-
How can I download music from YouTube to my Android device?
-
To download music from YouTube to your Android device, you can use one of the following methods:
-
-
Subscribe to YouTube Music Premium or YouTube Premium and use the YouTube Music app to download songs, playlists, and albums.
-
Use a third-party software or website that supports Android devices, such as 4K Video Downloader or MediaHuman.
-
Use a browser extension or add-on that supports Android devices, such as Firefox or Chrome with YouTube Video and Audio Downloader or Easy YouTube Video Downloader Express.
-
-
How can I download music from YouTube to my computer?
-
To download music from YouTube to your computer, you can use one of the following methods:
-
-
Subscribe to YouTube Music Premium or YouTube Premium and use the YouTube Music web player to download songs, playlists, and albums.
-
Use a third-party software or website that supports Windows, Mac, or Linux, such as 4K Video Downloader or MediaHuman.
-
Use a browser extension or add-on that supports Firefox or Chrome, such as YouTube Video and Audio Downloader or Easy YouTube Video Downloader Express.
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Enjoy Instagram Without Ads and With More Features With Red Instagram APK.md b/spaces/1phancelerku/anime-remove-background/Enjoy Instagram Without Ads and With More Features With Red Instagram APK.md
deleted file mode 100644
index 16ed7f9502aa10b13bf19180ffc72cd7ec007a4f..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Enjoy Instagram Without Ads and With More Features With Red Instagram APK.md
+++ /dev/null
@@ -1,105 +0,0 @@
-
-
Red Instagram APK: What Is It and How to Download It
-
Instagram is one of the most popular social media platforms in the world, with over 1 billion monthly active users. It allows you to create and share your photos, stories, reels, and videos with the friends and followers you care about. But did you know that there is a modified version of Instagram that offers more features and customization options than the official app? It's called Red Instagram APK, and in this article, we will tell you what it is, why you should use it, and how to download and install it on your Android device.
What is Instagram?
-
Instagram (from Meta) is a social networking app that lets you capture and share your moments with the world. You can post photos, videos, reels, stories, IGTVs, and live streams on your feed or send them privately to your friends. You can follow your favorite celebrities, influencers, brands, and pages to see what they are up to, and explore new content from categories such as entertainment, sports, music, fashion, beauty, and travel.
-
What is Red Instagram APK?
-
Red Instagram APK is a modified version of the official Instagram app that has a red theme and icons. It is also known as InstaRed or InstaMod. It is developed by independent developers who are not affiliated with Meta or Instagram. It is not available on the Google Play Store or any other official app store. You have to download it from third-party websites or sources.
-
Why use Red Instagram APK?
-
Red Instagram APK offers many features and benefits that are not available on the official app. Some of them are:
-
-
You can customize the theme and icons of the app according to your preference.
-
You can download any photo, video, reel, or story from any user without using any external app or tool.
-
You can hide your seen status and typing indicator from other users when you view their stories or chat with them.
-
You can disable ads and sponsored posts from your feed and stories.
-
You can zoom in on any profile picture or story without screenshotting or cropping.
-
-
Features of Red Instagram APK
-
Customizable theme and icons
-
One of the most noticeable features of Red Instagram APK is its red theme and icons. The app has a dark mode that makes it easier on the eyes and saves battery life. You can also change the color of the theme and icons to any other color you like. You can also choose from different fonts and styles for the app.
-
Download photos, videos, reels, and stories
-
Another feature of Red Instagram APK is its ability to download any photo, video, reel, or story from any user. You don't need any external app or tool to do this: just tap the three-dot menu in the top-right corner of any post or story and select "Download". The file will be saved in your device's gallery or storage.
-
-
Hide seen status and typing indicator
-
If you want to view someone's story or chat with them without letting them know that you have seen their message or story, you can use Red Instagram APK's privacy features. You can hide your seen status and typing indicator from other users by toggling them on or off in the settings of the app. This way, you can enjoy more privacy and control over your online activity.
-
Disable ads and sponsored posts
-
Ads and sponsored posts can be annoying and distracting when you are browsing your feed or stories. They can also consume your data and battery. With Red Instagram APK, you can disable them completely and enjoy a cleaner and smoother experience. You can also block any user or page that you don't want to see on your feed or stories.
-
Zoom in on profile pictures and stories
-
Sometimes, you may want to see someone's profile picture or story more clearly, but the official app doesn't allow you to zoom in on them. You have to screenshot or crop them, which can be tedious and low-quality. With Red Instagram APK, you can zoom in on any profile picture or story by just tapping and holding on them. You can also view them in full screen mode.
-
How to download and install Red Instagram APK
-
If you are interested in trying out Red Instagram APK, you need to follow these steps to download and install it on your Android device:
-
Step 1: Enable unknown sources
-
Since Red Instagram APK is not available on the Google Play Store or any other official app store, you need to enable unknown sources on your device to install it. To do this, go to your device's settings, then security, then unknown sources, and turn it on. This will allow you to install apps from third-party sources.
-
Step 2: Download the APK file
-
Next, you need to download the APK file of Red Instagram APK from a reliable and trusted website. You can search for it on Google or use this link: . Make sure you download the latest version of the app that is compatible with your device.
-
Step 3: Install the APK file
-
Once you have downloaded the APK file, locate it in your device's file manager or downloads folder and tap on it. You will see a pop-up window asking you to confirm the installation. Tap on "Install" and wait for the process to complete.
-
Step 4: Log in with your Instagram account
-
After the installation is done, you can open the app and log in with your existing Instagram account or create a new one. You will see the red theme and icons of the app and enjoy all the features that we have mentioned above.
-
Conclusion
-
Red Instagram APK is a modified version of the official Instagram app that offers more features and customization options than the original app. It has a red theme and icons that make it stand out from other apps. It also allows you to download photos, videos, reels, and stories from any user, hide your seen status and typing indicator, disable ads and sponsored posts, zoom in on profile pictures and stories, and more. It is easy to download and install on your Android device by following the steps we have provided above. If you are looking for a new way to enjoy Instagram, you should give Red Instagram APK a try.
-
Do you have any questions or feedback about Red Instagram APK? Let us know in the comments below. We would love to hear from you!
-
Frequently Asked Questions
-
-
Is Red Instagram APK safe to use?
-
Red Instagram APK can be reasonably safe if you download it from a reputable and trusted website. However, since it is not an official app, it does not go through the security checks of an app store and may not be as secure as the original app. Use it at your own risk and discretion.
-
Will Red Instagram APK affect my original Instagram account?
-
No, Red Instagram APK will not affect your original Instagram account. You can use both apps simultaneously without any problem. However, if you use some features that violate the terms of service of Instagram, such as downloading other users' content without their permission, you may face some consequences from Instagram.
-
How can I update Red Instagram APK?
-
To update Red Instagram APK, you need to visit the website where you downloaded it from and check if there is a new version available. If there is, you need to download and install it manually by following the same steps as before.
-
Can I use Red Instagram APK on iOS devices?
-
No, Red Instagram APK is only compatible with Android devices. It cannot be used on iOS devices such as iPhones or iPads.
-
What are some alternatives to Red Instagram APK?
-Some popular alternatives are other modified Instagram clients, such as GB Instagram and Instagram Plus. Like Red Instagram APK, they are unofficial apps, so the same safety and account-risk caveats apply.
-
-
\ No newline at end of file
diff --git a/spaces/2ndelement/voicevox/voicevox_engine/metas/Metas.py b/spaces/2ndelement/voicevox/voicevox_engine/metas/Metas.py
deleted file mode 100644
index 58c42f06765c3554a138471d83fc90800e6a8540..0000000000000000000000000000000000000000
--- a/spaces/2ndelement/voicevox/voicevox_engine/metas/Metas.py
+++ /dev/null
@@ -1,83 +0,0 @@
-from enum import Enum
-from typing import List, Optional
-
-from pydantic import BaseModel, Field
-
-
-class SpeakerStyle(BaseModel):
-    """
-    Style information for a speaker
-    """
-
-    name: str = Field(title="Style name")
-    id: int = Field(title="Style ID")
-
-
-class SpeakerSupportPermittedSynthesisMorphing(str, Enum):
-    ALL = "ALL"  # morphing with any speaker is permitted
-    SELF_ONLY = "SELF_ONLY"  # morphing is permitted only within the same speaker
-    NOTHING = "NOTHING"  # morphing is not permitted
-
-    @classmethod
-    def _missing_(cls, value: object) -> "SpeakerSupportPermittedSynthesisMorphing":
-        # unknown or missing values fall back to ALL
-        return SpeakerSupportPermittedSynthesisMorphing.ALL
-
-
-class SpeakerSupportedFeatures(BaseModel):
-    """
-    Supported features of a speaker
-    """
-
-    permitted_synthesis_morphing: SpeakerSupportPermittedSynthesisMorphing = Field(
-        # None is not an enum member, so _missing_ resolves this default to ALL
-        title="Morphing support", default=SpeakerSupportPermittedSynthesisMorphing(None)
-    )
-
-
-class CoreSpeaker(BaseModel):
-    """
-    Speaker information contained in the core
-    """
-
-    name: str = Field(title="Name")
-    speaker_uuid: str = Field(title="Speaker UUID")
-    styles: List[SpeakerStyle] = Field(title="List of speaker styles")
-    version: str = Field(title="Speaker version")
-
-
-class EngineSpeaker(BaseModel):
-    """
-    Speaker information contained in the engine
-    """
-
-    supported_features: SpeakerSupportedFeatures = Field(
-        title="Supported features of the speaker", default_factory=SpeakerSupportedFeatures
-    )
-
-
-class Speaker(CoreSpeaker, EngineSpeaker):
-    """
-    Speaker information
-    """
-
-    pass
-
-
-class StyleInfo(BaseModel):
-    """
-    Additional information for a style
-    """
-
-    id: int = Field(title="Style ID")
-    icon: str = Field(title="Base64-encoded icon for this style")
-    portrait: Optional[str] = Field(title="Base64-encoded portrait.png for this style")
-    voice_samples: List[str] = Field(title="Base64-encoded voice_sample wav files")
-
-
-class SpeakerInfo(BaseModel):
-    """
-    Additional information for a speaker
-    """
-
-    policy: str = Field(title="policy.md")
-    portrait: str = Field(title="Base64-encoded portrait.png")
-    style_infos: List[StyleInfo] = Field(title="Additional style information")
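-
-
-if __name__ == "__main__":
-    # Usage sketch (illustrative only; the field values below are hypothetical):
-    example = Speaker(
-        name="Example Speaker",
-        speaker_uuid="00000000-0000-0000-0000-000000000000",
-        styles=[SpeakerStyle(name="Normal", id=0)],
-        version="0.0.1",
-    )
-    print(example.json())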
diff --git a/spaces/AI-ZTH-03-23/2.Streamlit.GraphViz.Dynamic.Architecture.Diagram/README.md b/spaces/AI-ZTH-03-23/2.Streamlit.GraphViz.Dynamic.Architecture.Diagram/README.md
deleted file mode 100644
index 290fe0d4a3cd403e7363ee43b850a129420704ec..0000000000000000000000000000000000000000
--- a/spaces/AI-ZTH-03-23/2.Streamlit.GraphViz.Dynamic.Architecture.Diagram/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: 2.Streamlit.GraphViz.Dynamic.Architecture.Diagram
-emoji: 😻😻😻
-colorFrom: green
-colorTo: indigo
-sdk: streamlit
-sdk_version: 1.17.0
-app_file: app.py
-pinned: false
-license: mit
-duplicated_from: awacke1/Streamlit.GraphViz.Dynamic.Architecture.Diagram
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/AIConsultant/MusicGen/audiocraft/utils/export.py b/spaces/AIConsultant/MusicGen/audiocraft/utils/export.py
deleted file mode 100644
index 28b214017d9ac23934b67e8254a96131cefa6501..0000000000000000000000000000000000000000
--- a/spaces/AIConsultant/MusicGen/audiocraft/utils/export.py
+++ /dev/null
@@ -1,79 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Utility to export a training checkpoint to a lightweight release checkpoint.
-"""
-
-from pathlib import Path
-import typing as tp
-
-from omegaconf import OmegaConf
-import torch
-
-from audiocraft import __version__
-
-
-def export_encodec(checkpoint_path: tp.Union[Path, str], out_file: tp.Union[Path, str]):
- """Export only the best state from the given EnCodec checkpoint. This
- should be used if you trained your own EnCodec model.
- """
- pkg = torch.load(checkpoint_path, 'cpu')
- new_pkg = {
- 'best_state': pkg['best_state']['model'],
- 'xp.cfg': OmegaConf.to_yaml(pkg['xp.cfg']),
- 'version': __version__,
- 'exported': True,
- }
- Path(out_file).parent.mkdir(exist_ok=True, parents=True)
- torch.save(new_pkg, out_file)
- return out_file
-
-
-def export_pretrained_compression_model(pretrained_encodec: str, out_file: tp.Union[Path, str]):
- """Export a compression model (potentially EnCodec) from a pretrained model.
- This is required for packaging the audio tokenizer along a MusicGen or AudioGen model.
- Do not include the //pretrained/ prefix. For instance if you trained a model
- with `facebook/encodec_32khz`, just put that as a name. Same for `dac_44khz`.
-
- In that case, this will not actually include a copy of the model, simply the reference
- to the model used.
- """
- if Path(pretrained_encodec).exists():
- pkg = torch.load(pretrained_encodec)
- assert 'best_state' in pkg
- assert 'xp.cfg' in pkg
- assert 'version' in pkg
- assert 'exported' in pkg
- else:
- pkg = {
- 'pretrained': pretrained_encodec,
- 'exported': True,
- 'version': __version__,
- }
- Path(out_file).parent.mkdir(exist_ok=True, parents=True)
- torch.save(pkg, out_file)
-
-
-def export_lm(checkpoint_path: tp.Union[Path, str], out_file: tp.Union[Path, str]):
- """Export only the best state from the given MusicGen or AudioGen checkpoint.
- """
- pkg = torch.load(checkpoint_path, 'cpu')
- if pkg['fsdp_best_state']:
- best_state = pkg['fsdp_best_state']['model']
- else:
- assert pkg['best_state']
- best_state = pkg['best_state']['model']
- new_pkg = {
- 'best_state': best_state,
- 'xp.cfg': OmegaConf.to_yaml(pkg['xp.cfg']),
- 'version': __version__,
- 'exported': True,
- }
-
- Path(out_file).parent.mkdir(exist_ok=True, parents=True)
- torch.save(new_pkg, out_file)
- return out_file
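-
-
-if __name__ == "__main__":
-    # Usage sketch (illustrative; the paths are hypothetical): package a trained
-    # language-model checkpoint and a reference to its pretrained audio tokenizer.
-    export_lm("checkpoints/my_musicgen/checkpoint.th", "release/state_dict.bin")
-    export_pretrained_compression_model(
-        "facebook/encodec_32khz", "release/compression_state_dict.bin"
-    )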
diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/diff/shallow_diffusion_tts.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/diff/shallow_diffusion_tts.py
deleted file mode 100644
index 8295d48ea0028dd0b3fdf1315bb8f129d0070810..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/diff/shallow_diffusion_tts.py
+++ /dev/null
@@ -1,324 +0,0 @@
-import math
-import random
-from collections import deque
-from functools import partial
-from inspect import isfunction
-from pathlib import Path
-import numpy as np
-import torch
-import torch.nn.functional as F
-from torch import nn
-from tqdm import tqdm
-from einops import rearrange
-
-from modules.fastspeech.fs2 import FastSpeech2
-from modules.diffsinger_midi.fs2 import FastSpeech2MIDI
-from utils.hparams import hparams
-
-
-
-def exists(x):
- return x is not None
-
-
-def default(val, d):
- if exists(val):
- return val
- return d() if isfunction(d) else d
-
-
-# gaussian diffusion trainer class
-
-def extract(a, t, x_shape):
- b, *_ = t.shape
- out = a.gather(-1, t)
- return out.reshape(b, *((1,) * (len(x_shape) - 1)))
-
-
-def noise_like(shape, device, repeat=False):
- repeat_noise = lambda: torch.randn((1, *shape[1:]), device=device).repeat(shape[0], *((1,) * (len(shape) - 1)))
- noise = lambda: torch.randn(shape, device=device)
- return repeat_noise() if repeat else noise()
-
-
-def linear_beta_schedule(timesteps, max_beta=hparams.get('max_beta', 0.01)):
- """
- linear schedule
- """
- betas = np.linspace(1e-4, max_beta, timesteps)
- return betas
-
-
-def cosine_beta_schedule(timesteps, s=0.008):
- """
- cosine schedule
- as proposed in https://openreview.net/forum?id=-NEXDKk8gZ
- """
- steps = timesteps + 1
- x = np.linspace(0, steps, steps)
- alphas_cumprod = np.cos(((x / steps) + s) / (1 + s) * np.pi * 0.5) ** 2
- alphas_cumprod = alphas_cumprod / alphas_cumprod[0]
- betas = 1 - (alphas_cumprod[1:] / alphas_cumprod[:-1])
- return np.clip(betas, a_min=0, a_max=0.999)
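-
-
-# Both schedules return the per-step noise variances beta_t; GaussianDiffusion
-# turns them into the cumulative products alpha_bar_t = prod_s (1 - beta_s)
-# that parameterize q(x_t | x_0) in q_sample below.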
-
-
-beta_schedule = {
- "cosine": cosine_beta_schedule,
- "linear": linear_beta_schedule,
-}
-
-
-class GaussianDiffusion(nn.Module):
- def __init__(self, phone_encoder, out_dims, denoise_fn,
- timesteps=1000, K_step=1000, loss_type=hparams.get('diff_loss_type', 'l1'), betas=None, spec_min=None, spec_max=None):
- super().__init__()
- self.denoise_fn = denoise_fn
- if hparams.get('use_midi') is not None and hparams['use_midi']:
- self.fs2 = FastSpeech2MIDI(phone_encoder, out_dims)
- else:
- self.fs2 = FastSpeech2(phone_encoder, out_dims)
- self.mel_bins = out_dims
-
- if exists(betas):
- betas = betas.detach().cpu().numpy() if isinstance(betas, torch.Tensor) else betas
- else:
- if 'schedule_type' in hparams.keys():
- betas = beta_schedule[hparams['schedule_type']](timesteps)
- else:
- betas = cosine_beta_schedule(timesteps)
-
- alphas = 1. - betas
- alphas_cumprod = np.cumprod(alphas, axis=0)
- alphas_cumprod_prev = np.append(1., alphas_cumprod[:-1])
-
- timesteps, = betas.shape
- self.num_timesteps = int(timesteps)
- self.K_step = K_step
- self.loss_type = loss_type
-
- self.noise_list = deque(maxlen=4)
-
- to_torch = partial(torch.tensor, dtype=torch.float32)
-
- self.register_buffer('betas', to_torch(betas))
- self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod))
- self.register_buffer('alphas_cumprod_prev', to_torch(alphas_cumprod_prev))
-
- # calculations for diffusion q(x_t | x_{t-1}) and others
- self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod)))
- self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod)))
- self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod)))
- self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod)))
- self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod - 1)))
-
- # calculations for posterior q(x_{t-1} | x_t, x_0)
- posterior_variance = betas * (1. - alphas_cumprod_prev) / (1. - alphas_cumprod)
- # above: equal to 1. / (1. / (1. - alpha_cumprod_tm1) + alpha_t / beta_t)
- self.register_buffer('posterior_variance', to_torch(posterior_variance))
- # below: log calculation clipped because the posterior variance is 0 at the beginning of the diffusion chain
- self.register_buffer('posterior_log_variance_clipped', to_torch(np.log(np.maximum(posterior_variance, 1e-20))))
- self.register_buffer('posterior_mean_coef1', to_torch(
- betas * np.sqrt(alphas_cumprod_prev) / (1. - alphas_cumprod)))
- self.register_buffer('posterior_mean_coef2', to_torch(
- (1. - alphas_cumprod_prev) * np.sqrt(alphas) / (1. - alphas_cumprod)))
-
- self.register_buffer('spec_min', torch.FloatTensor(spec_min)[None, None, :hparams['keep_bins']])
- self.register_buffer('spec_max', torch.FloatTensor(spec_max)[None, None, :hparams['keep_bins']])
-
- def q_mean_variance(self, x_start, t):
- mean = extract(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start
- variance = extract(1. - self.alphas_cumprod, t, x_start.shape)
- log_variance = extract(self.log_one_minus_alphas_cumprod, t, x_start.shape)
- return mean, variance, log_variance
-
- def predict_start_from_noise(self, x_t, t, noise):
- return (
- extract(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t -
- extract(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) * noise
- )
-
- def q_posterior(self, x_start, x_t, t):
- posterior_mean = (
- extract(self.posterior_mean_coef1, t, x_t.shape) * x_start +
- extract(self.posterior_mean_coef2, t, x_t.shape) * x_t
- )
- posterior_variance = extract(self.posterior_variance, t, x_t.shape)
- posterior_log_variance_clipped = extract(self.posterior_log_variance_clipped, t, x_t.shape)
- return posterior_mean, posterior_variance, posterior_log_variance_clipped
-
- def p_mean_variance(self, x, t, cond, clip_denoised: bool):
- noise_pred = self.denoise_fn(x, t, cond=cond)
- x_recon = self.predict_start_from_noise(x, t=t, noise=noise_pred)
-
- if clip_denoised:
- x_recon.clamp_(-1., 1.)
-
- model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t)
- return model_mean, posterior_variance, posterior_log_variance
-
- @torch.no_grad()
- def p_sample(self, x, t, cond, clip_denoised=True, repeat_noise=False):
- b, *_, device = *x.shape, x.device
- model_mean, _, model_log_variance = self.p_mean_variance(x=x, t=t, cond=cond, clip_denoised=clip_denoised)
- noise = noise_like(x.shape, device, repeat_noise)
- # no noise when t == 0
- nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1)))
- return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise
-
- @torch.no_grad()
- def p_sample_plms(self, x, t, interval, cond, clip_denoised=True, repeat_noise=False):
- """
- Use the PLMS method from [Pseudo Numerical Methods for Diffusion Models on Manifolds](https://arxiv.org/abs/2202.09778).
- """
-
- def get_x_pred(x, noise_t, t):
- a_t = extract(self.alphas_cumprod, t, x.shape)
- if t[0] < interval:
- a_prev = torch.ones_like(a_t)
- else:
- a_prev = extract(self.alphas_cumprod, torch.max(t-interval, torch.zeros_like(t)), x.shape)
- a_t_sq, a_prev_sq = a_t.sqrt(), a_prev.sqrt()
-
- x_delta = (a_prev - a_t) * ((1 / (a_t_sq * (a_t_sq + a_prev_sq))) * x - 1 / (a_t_sq * (((1 - a_prev) * a_t).sqrt() + ((1 - a_t) * a_prev).sqrt())) * noise_t)
- x_pred = x + x_delta
-
- return x_pred
-
- noise_list = self.noise_list
- noise_pred = self.denoise_fn(x, t, cond=cond)
-
-        if len(noise_list) == 0:
-            # no history yet: take one extra denoiser call and average (2nd-order, Heun-like)
-            x_pred = get_x_pred(x, noise_pred, t)
-            noise_pred_prev = self.denoise_fn(x_pred, max(t - interval, 0), cond=cond)
-            noise_pred_prime = (noise_pred + noise_pred_prev) / 2
-        elif len(noise_list) == 1:
-            # 2nd-order Adams-Bashforth step from one stored prediction
-            noise_pred_prime = (3 * noise_pred - noise_list[-1]) / 2
-        elif len(noise_list) == 2:
-            # 3rd-order Adams-Bashforth step
-            noise_pred_prime = (23 * noise_pred - 16 * noise_list[-1] + 5 * noise_list[-2]) / 12
-        elif len(noise_list) >= 3:
-            # 4th-order Adams-Bashforth step, as in the PLMS paper
-            noise_pred_prime = (55 * noise_pred - 59 * noise_list[-1] + 37 * noise_list[-2] - 9 * noise_list[-3]) / 24
-
- x_prev = get_x_pred(x, noise_pred_prime, t)
- noise_list.append(noise_pred)
-
- return x_prev
-
-    def q_sample(self, x_start, t, noise=None):
-        # forward diffusion: x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps
-        noise = default(noise, lambda: torch.randn_like(x_start))
-        return (
-            extract(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start +
-            extract(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) * noise
-        )
-
- def p_losses(self, x_start, t, cond, noise=None, nonpadding=None):
- noise = default(noise, lambda: torch.randn_like(x_start))
-
- x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise)
- x_recon = self.denoise_fn(x_noisy, t, cond)
-
- if self.loss_type == 'l1':
- if nonpadding is not None:
- loss = ((noise - x_recon).abs() * nonpadding.unsqueeze(1)).mean()
- else:
- # print('are you sure w/o nonpadding?')
- loss = (noise - x_recon).abs().mean()
-
- elif self.loss_type == 'l2':
- loss = F.mse_loss(noise, x_recon)
- else:
- raise NotImplementedError()
-
- return loss
-
- def forward(self, txt_tokens, mel2ph=None, spk_embed=None,
- ref_mels=None, f0=None, uv=None, energy=None, infer=False, **kwargs):
- b, *_, device = *txt_tokens.shape, txt_tokens.device
- ret = self.fs2(txt_tokens, mel2ph, spk_embed, ref_mels, f0, uv, energy,
- skip_decoder=(not infer), infer=infer, **kwargs)
- cond = ret['decoder_inp'].transpose(1, 2)
-
- if not infer:
- t = torch.randint(0, self.K_step, (b,), device=device).long()
- x = ref_mels
- x = self.norm_spec(x)
- x = x.transpose(1, 2)[:, None, :, :] # [B, 1, M, T]
- ret['diff_loss'] = self.p_losses(x, t, cond)
- # nonpadding = (mel2ph != 0).float()
- # ret['diff_loss'] = self.p_losses(x, t, cond, nonpadding=nonpadding)
- else:
- ret['fs2_mel'] = ret['mel_out']
- fs2_mels = ret['mel_out']
- t = self.K_step
- fs2_mels = self.norm_spec(fs2_mels)
- fs2_mels = fs2_mels.transpose(1, 2)[:, None, :, :]
-
- x = self.q_sample(x_start=fs2_mels, t=torch.tensor([t - 1], device=device).long())
- if hparams.get('gaussian_start') is not None and hparams['gaussian_start']:
-                print('===> gaussian start.')
- shape = (cond.shape[0], 1, self.mel_bins, cond.shape[2])
- x = torch.randn(shape, device=device)
-
- if hparams.get('pndm_speedup'):
- print('===> pndm speed:', hparams['pndm_speedup'])
- self.noise_list = deque(maxlen=4)
- iteration_interval = hparams['pndm_speedup']
- for i in tqdm(reversed(range(0, t, iteration_interval)), desc='sample time step',
- total=t // iteration_interval):
- x = self.p_sample_plms(x, torch.full((b,), i, device=device, dtype=torch.long), iteration_interval,
- cond)
- else:
- for i in tqdm(reversed(range(0, t)), desc='sample time step', total=t):
- x = self.p_sample(x, torch.full((b,), i, device=device, dtype=torch.long), cond)
- x = x[:, 0].transpose(1, 2)
- if mel2ph is not None: # for singing
- ret['mel_out'] = self.denorm_spec(x) * ((mel2ph > 0).float()[:, :, None])
- else:
- ret['mel_out'] = self.denorm_spec(x)
- return ret
-
- def norm_spec(self, x):
- return (x - self.spec_min) / (self.spec_max - self.spec_min) * 2 - 1
-
- def denorm_spec(self, x):
- return (x + 1) / 2 * (self.spec_max - self.spec_min) + self.spec_min
-
- def cwt2f0_norm(self, cwt_spec, mean, std, mel2ph):
- return self.fs2.cwt2f0_norm(cwt_spec, mean, std, mel2ph)
-
- def out2mel(self, x):
- return x
-
-
-class OfflineGaussianDiffusion(GaussianDiffusion):
- def forward(self, txt_tokens, mel2ph=None, spk_embed=None,
- ref_mels=None, f0=None, uv=None, energy=None, infer=False, **kwargs):
- b, *_, device = *txt_tokens.shape, txt_tokens.device
-
- ret = self.fs2(txt_tokens, mel2ph, spk_embed, ref_mels, f0, uv, energy,
- skip_decoder=True, infer=True, **kwargs)
- cond = ret['decoder_inp'].transpose(1, 2)
- fs2_mels = ref_mels[1]
- ref_mels = ref_mels[0]
-
- if not infer:
- t = torch.randint(0, self.K_step, (b,), device=device).long()
- x = ref_mels
- x = self.norm_spec(x)
- x = x.transpose(1, 2)[:, None, :, :] # [B, 1, M, T]
- ret['diff_loss'] = self.p_losses(x, t, cond)
- else:
- t = self.K_step
- fs2_mels = self.norm_spec(fs2_mels)
- fs2_mels = fs2_mels.transpose(1, 2)[:, None, :, :]
-
- x = self.q_sample(x_start=fs2_mels, t=torch.tensor([t - 1], device=device).long())
-
- if hparams.get('gaussian_start') is not None and hparams['gaussian_start']:
-                print('===> gaussian start.')
- shape = (cond.shape[0], 1, self.mel_bins, cond.shape[2])
- x = torch.randn(shape, device=device)
- for i in tqdm(reversed(range(0, t)), desc='sample time step', total=t):
- x = self.p_sample(x, torch.full((b,), i, device=device, dtype=torch.long), cond)
- x = x[:, 0].transpose(1, 2)
- ret['mel_out'] = self.denorm_spec(x)
- return ret
diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/parallel_wavegan/stft_loss.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/parallel_wavegan/stft_loss.py
deleted file mode 100644
index 229e6c777dc9ec7f710842d1e648dba1189ec8b4..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/parallel_wavegan/stft_loss.py
+++ /dev/null
@@ -1,100 +0,0 @@
-# -*- coding: utf-8 -*-
-
-# Copyright 2019 Tomoki Hayashi
-# MIT License (https://opensource.org/licenses/MIT)
-
-"""STFT-based Loss modules."""
-import librosa
-import torch
-
-from modules.parallel_wavegan.losses import LogSTFTMagnitudeLoss, SpectralConvergengeLoss, stft
-
-
-class STFTLoss(torch.nn.Module):
- """STFT loss module."""
-
- def __init__(self, fft_size=1024, shift_size=120, win_length=600, window="hann_window",
- use_mel_loss=False):
- """Initialize STFT loss module."""
- super(STFTLoss, self).__init__()
- self.fft_size = fft_size
- self.shift_size = shift_size
- self.win_length = win_length
- self.window = getattr(torch, window)(win_length)
- self.spectral_convergenge_loss = SpectralConvergengeLoss()
- self.log_stft_magnitude_loss = LogSTFTMagnitudeLoss()
- self.use_mel_loss = use_mel_loss
- self.mel_basis = None
-
- def forward(self, x, y):
- """Calculate forward propagation.
-
- Args:
- x (Tensor): Predicted signal (B, T).
- y (Tensor): Groundtruth signal (B, T).
-
- Returns:
- Tensor: Spectral convergence loss value.
- Tensor: Log STFT magnitude loss value.
-
- """
- x_mag = stft(x, self.fft_size, self.shift_size, self.win_length, self.window)
- y_mag = stft(y, self.fft_size, self.shift_size, self.win_length, self.window)
- if self.use_mel_loss:
- if self.mel_basis is None:
-                self.mel_basis = torch.from_numpy(librosa.filters.mel(sr=22050, n_fft=self.fft_size, n_mels=80)).to(x_mag.device).T
- x_mag = x_mag @ self.mel_basis
- y_mag = y_mag @ self.mel_basis
-
- sc_loss = self.spectral_convergenge_loss(x_mag, y_mag)
- mag_loss = self.log_stft_magnitude_loss(x_mag, y_mag)
-
- return sc_loss, mag_loss
-
-
-class MultiResolutionSTFTLoss(torch.nn.Module):
- """Multi resolution STFT loss module."""
-
- def __init__(self,
- fft_sizes=[1024, 2048, 512],
- hop_sizes=[120, 240, 50],
- win_lengths=[600, 1200, 240],
- window="hann_window",
- use_mel_loss=False):
- """Initialize Multi resolution STFT loss module.
-
- Args:
- fft_sizes (list): List of FFT sizes.
- hop_sizes (list): List of hop sizes.
- win_lengths (list): List of window lengths.
- window (str): Window function type.
-
- """
- super(MultiResolutionSTFTLoss, self).__init__()
- assert len(fft_sizes) == len(hop_sizes) == len(win_lengths)
- self.stft_losses = torch.nn.ModuleList()
- for fs, ss, wl in zip(fft_sizes, hop_sizes, win_lengths):
- self.stft_losses += [STFTLoss(fs, ss, wl, window, use_mel_loss)]
-
- def forward(self, x, y):
- """Calculate forward propagation.
-
- Args:
- x (Tensor): Predicted signal (B, T).
- y (Tensor): Groundtruth signal (B, T).
-
- Returns:
- Tensor: Multi resolution spectral convergence loss value.
- Tensor: Multi resolution log STFT magnitude loss value.
-
- """
- sc_loss = 0.0
- mag_loss = 0.0
- for f in self.stft_losses:
- sc_l, mag_l = f(x, y)
- sc_loss += sc_l
- mag_loss += mag_l
- sc_loss /= len(self.stft_losses)
- mag_loss /= len(self.stft_losses)
-
- return sc_loss, mag_loss
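-
-
-if __name__ == "__main__":
-    # Usage sketch (illustrative): multi-resolution STFT loss between two
-    # random batches of 1-second waveforms at 22.05 kHz.
-    pred = torch.randn(2, 22050)
-    target = torch.randn(2, 22050)
-    criterion = MultiResolutionSTFTLoss()
-    sc, mag = criterion(pred, target)
-    print(sc.item(), mag.item())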
diff --git a/spaces/AIWaves/Software_Company/src/agents/Action/__init__.py b/spaces/AIWaves/Software_Company/src/agents/Action/__init__.py
deleted file mode 100644
index bb85ebbfc6ae1d83770263a1744fe14cb687931d..0000000000000000000000000000000000000000
--- a/spaces/AIWaves/Software_Company/src/agents/Action/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-from .base_action import Action
\ No newline at end of file
diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/work_dirs/__init__.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/work_dirs/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/routes/conversation/[id]/message/[messageId]/prompt/+server.ts b/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/routes/conversation/[id]/message/[messageId]/prompt/+server.ts
deleted file mode 100644
index 6aaf582d7263e074d2be99cfebf366f253235c14..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/routes/conversation/[id]/message/[messageId]/prompt/+server.ts
+++ /dev/null
@@ -1,23 +0,0 @@
-import { buildPrompt } from "$lib/buildPrompt";
-import { authCondition } from "$lib/server/auth";
-import { collections } from "$lib/server/database";
-import { models } from "$lib/server/models";
-import { error } from "@sveltejs/kit";
-import { ObjectId } from "mongodb";
-
-export async function GET({ params, locals }) {
-    // Sketch of the lookup logic (assumed shapes): load the conversation,
-    // find the message being retried, and rebuild the prompt with the
-    // imported helpers.
-    const conv = await collections.conversations.findOne({
-        _id: new ObjectId(params.id),
-        ...authCondition(locals),
-    });
-    if (!conv) throw error(404, "Conversation not found");
-
-    const messageIndex = conv.messages.findIndex((msg) => msg.id === params.messageId);
-    if (messageIndex === -1) throw error(404, "Message not found");
-
-    const model = models.find((m) => m.id === conv.model);
-    if (!model) throw error(410, "Model not available anymore");
-
-    const prompt = await buildPrompt({
-        messages: conv.messages.slice(0, messageIndex + 1),
-        model,
-    });
-
-    return new Response(
-        JSON.stringify(
-            {
-                note: "This is a preview of the prompt that will be sent to the model when retrying the message. It may differ from what was sent in the past if the parameters have been updated since",
-                prompt,
-                model: model.name,
-                parameters: {
-                    return_full_text: false,
-                },
-            },
-            null,
-            2
-        ),
-        { headers: { "Content-Type": "application/json" } }
-    );
-}
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/flip/Flip.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/flip/Flip.js
deleted file mode 100644
index 5ebe8b6f76c124f8cc520b9fe6170725f2e6abe7..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/flip/Flip.js
+++ /dev/null
@@ -1,2 +0,0 @@
-import Flip from '../../../plugins/flip.js';
-export default Flip;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/fullwindowrectangle/FullWindowRectangle.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/fullwindowrectangle/FullWindowRectangle.d.ts
deleted file mode 100644
index c711cc1d3243b55855a46a74a3b5c78397440191..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/fullwindowrectangle/FullWindowRectangle.d.ts
+++ /dev/null
@@ -1,2 +0,0 @@
-import FullWindowRectangle from "../../../plugins/fullwindowrectangle";
-export default FullWindowRectangle;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/gridsizer/InsertEmptyRow.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/gridsizer/InsertEmptyRow.js
deleted file mode 100644
index d1bfc7947a8ec1be7f9906277677fbad6ca25ed9..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/gridsizer/InsertEmptyRow.js
+++ /dev/null
@@ -1,35 +0,0 @@
-var InsertEmptyRow = function (rowIndex, proportion, space) {
- if (proportion === undefined) {
- proportion = this.rowProportions[0] || 0;
- }
- if (space === undefined) {
- space = this.space.row[0] || 0;
- }
-
- this.rowCount += 1;
- this.gridCount += this.columnCount;
-
- var args = [rowIndex * this.columnCount, 0];
- for (var i = 0; i < this.columnCount; i++) {
- args.push(null);
- }
- this.sizerChildren.splice.apply(this.sizerChildren, args);
-
- this.rowProportions.push(proportion);
-
- this.rowHeight.length += 1; // this.rowHeight will be recalculated when layout()
-
- this.space.row.splice(rowIndex, 0, space);
-
- return this;
-}
-
-var AddEmptyRow = function (proportion, space) {
-    InsertEmptyRow.call(this, this.rowCount, proportion, space);
- return this;
-}
-
-export {
-    InsertEmptyRow,
- AddEmptyRow
-};
\ No newline at end of file
diff --git a/spaces/Aki004/herta-so-vits/cluster/__init__.py b/spaces/Aki004/herta-so-vits/cluster/__init__.py
deleted file mode 100644
index f1b9bde04e73e9218a5d534227caa4c25332f424..0000000000000000000000000000000000000000
--- a/spaces/Aki004/herta-so-vits/cluster/__init__.py
+++ /dev/null
@@ -1,29 +0,0 @@
-import numpy as np
-import torch
-from sklearn.cluster import KMeans
-
-def get_cluster_model(ckpt_path):
-    # Restore one KMeans instance per speaker by rebuilding the fitted state
-    # (feature count, thread count, cluster centers) saved in the checkpoint.
-    checkpoint = torch.load(ckpt_path)
-    kmeans_dict = {}
-    for spk, ckpt in checkpoint.items():
-        km = KMeans(ckpt["n_features_in_"])
-        km.__dict__["n_features_in_"] = ckpt["n_features_in_"]
-        km.__dict__["_n_threads"] = ckpt["_n_threads"]
-        km.__dict__["cluster_centers_"] = ckpt["cluster_centers_"]
-        kmeans_dict[spk] = km
-    return kmeans_dict
-
-def get_cluster_result(model, x, speaker):
- """
- x: np.array [t, 256]
- return cluster class result
- """
- return model[speaker].predict(x)
-
-def get_cluster_center_result(model, x, speaker):
-    """x: np.array [t, 256]"""
-    predict = model[speaker].predict(x)
-    return model[speaker].cluster_centers_[predict]
-
-def get_center(model, x, speaker):
-    return model[speaker].cluster_centers_[x]
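-
-
-if __name__ == "__main__":
-    # Usage sketch (illustrative; the checkpoint path and speaker key are hypothetical):
-    model = get_cluster_model("logs/44k/kmeans_10000.pt")
-    feats = np.random.rand(100, 256).astype(np.float32)
-    centers = get_cluster_center_result(model, feats, "speaker0")
-    print(centers.shape)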
diff --git a/spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/configs/glint360k_r100.py b/spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/configs/glint360k_r100.py
deleted file mode 100644
index 93d0701c0094517cec147c382b005e8063938548..0000000000000000000000000000000000000000
--- a/spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/configs/glint360k_r100.py
+++ /dev/null
@@ -1,26 +0,0 @@
-from easydict import EasyDict as edict
-
-# make training faster
-# our RAM is 256G
-# mount -t tmpfs -o size=140G tmpfs /train_tmp
-
-config = edict()
-config.loss = "cosface"
-config.network = "r100"
-config.resume = False
-config.output = None
-config.embedding_size = 512
-config.sample_rate = 1.0
-config.fp16 = True
-config.momentum = 0.9
-config.weight_decay = 5e-4
-config.batch_size = 128
-config.lr = 0.1  # tuned for a total batch size of 512; scale linearly for other sizes
-
-config.rec = "/train_tmp/glint360k"
-config.num_classes = 360232
-config.num_image = 17091657
-config.num_epoch = 20
-config.warmup_epoch = -1
-config.decay_epoch = [8, 12, 15, 18]
-config.val_targets = ["lfw", "cfp_fp", "agedb_30"]
diff --git a/spaces/AmirTrader/LinearRegression/README.md b/spaces/AmirTrader/LinearRegression/README.md
deleted file mode 100644
index 02323f5b61e61c26972aad862a5003b504dfc345..0000000000000000000000000000000000000000
--- a/spaces/AmirTrader/LinearRegression/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Linear Regression - UnderValued Stocks
-emoji: 📈
-colorFrom: gray
-colorTo: green
-sdk: docker
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Amrrs/DragGan-Inversion/PTI/models/StyleCLIP/mapper/options/test_options.py b/spaces/Amrrs/DragGan-Inversion/PTI/models/StyleCLIP/mapper/options/test_options.py
deleted file mode 100644
index aab2e5a5bba1038b97110fa6c8e8bce14de7390c..0000000000000000000000000000000000000000
--- a/spaces/Amrrs/DragGan-Inversion/PTI/models/StyleCLIP/mapper/options/test_options.py
+++ /dev/null
@@ -1,42 +0,0 @@
-from argparse import ArgumentParser
-
-
-class TestOptions:
-
- def __init__(self):
- self.parser = ArgumentParser()
- self.initialize()
-
- def initialize(self):
- # arguments for inference script
- self.parser.add_argument('--exp_dir', type=str, help='Path to experiment output directory')
- self.parser.add_argument('--checkpoint_path', default=None, type=str, help='Path to model checkpoint')
- self.parser.add_argument('--couple_outputs', action='store_true',
- help='Whether to also save inputs + outputs side-by-side')
-
- self.parser.add_argument('--mapper_type', default='LevelsMapper', type=str, help='Which mapper to use')
- self.parser.add_argument('--no_coarse_mapper', default=False, action="store_true")
- self.parser.add_argument('--no_medium_mapper', default=False, action="store_true")
- self.parser.add_argument('--no_fine_mapper', default=False, action="store_true")
- self.parser.add_argument('--stylegan_size', default=1024, type=int)
-
- self.parser.add_argument('--test_batch_size', default=2, type=int, help='Batch size for testing and inference')
- self.parser.add_argument('--latents_test_path', default=None, type=str, help="The latents for the validation")
- self.parser.add_argument('--test_workers', default=2, type=int,
- help='Number of test/inference dataloader workers')
-
- self.parser.add_argument('--n_images', type=int, default=None,
- help='Number of images to output. If None, run on all data')
-
- self.parser.add_argument('--run_id', type=str, default='PKNWUQRQRKXQ',
- help='The generator id to use')
-
- self.parser.add_argument('--image_name', type=str, default='',
- help='image to run on')
-
- self.parser.add_argument('--edit_name', type=str, default='',
- help='edit type')
-
- def parse(self):
- opts = self.parser.parse_args()
- return opts
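-
-
-if __name__ == "__main__":
-    # Usage sketch: parse the inference flags defined above from the command line.
-    opts = TestOptions().parse()
-    print(opts.exp_dir, opts.mapper_type, opts.stylegan_size)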
diff --git a/spaces/Amrrs/DragGan-Inversion/PTI/training/projectors/w_plus_projector.py b/spaces/Amrrs/DragGan-Inversion/PTI/training/projectors/w_plus_projector.py
deleted file mode 100644
index b9cce427e5374c5ddce90199e1184f84a13d30c5..0000000000000000000000000000000000000000
--- a/spaces/Amrrs/DragGan-Inversion/PTI/training/projectors/w_plus_projector.py
+++ /dev/null
@@ -1,145 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-"""Project given image to the latent space of pretrained network pickle."""
-
-import copy
-import wandb
-import numpy as np
-import torch
-import torch.nn.functional as F
-from tqdm import tqdm
-from configs import global_config, hyperparameters
-import dnnlib
-from utils.log_utils import log_image_from_w
-
-
-def project(
- G,
- target: torch.Tensor, # [C,H,W] and dynamic range [0,255], W & H must match G output resolution
- *,
- num_steps=1000,
- w_avg_samples=10000,
- initial_learning_rate=0.01,
- initial_noise_factor=0.05,
- lr_rampdown_length=0.25,
- lr_rampup_length=0.05,
- noise_ramp_length=0.75,
- regularize_noise_weight=1e5,
- verbose=False,
- device: torch.device,
- use_wandb=False,
- initial_w=None,
- image_log_step=global_config.image_rec_result_log_snapshot,
- w_name: str
-):
- assert target.shape == (G.img_channels, G.img_resolution, G.img_resolution)
-
- def logprint(*args):
- if verbose:
- print(*args)
-
- G = copy.deepcopy(G).eval().requires_grad_(False).to(device).float() # type: ignore
-
- # Compute w stats.
- logprint(f'Computing W midpoint and stddev using {w_avg_samples} samples...')
- z_samples = np.random.RandomState(123).randn(w_avg_samples, G.z_dim)
- w_samples = G.mapping(torch.from_numpy(z_samples).to(device), None) # [N, L, C]
- w_samples = w_samples[:, :1, :].cpu().numpy().astype(np.float32) # [N, 1, C]
- w_avg = np.mean(w_samples, axis=0, keepdims=True) # [1, 1, C]
- w_avg_tensor = torch.from_numpy(w_avg).to(global_config.device)
- w_std = (np.sum((w_samples - w_avg) ** 2) / w_avg_samples) ** 0.5
-
- start_w = initial_w if initial_w is not None else w_avg
-
- # Setup noise inputs.
- noise_bufs = {name: buf for (name, buf) in G.synthesis.named_buffers() if 'noise_const' in name}
-
- # Load VGG16 feature detector.
- url = 'https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/metrics/vgg16.pt'
- with dnnlib.util.open_url(url) as f:
- vgg16 = torch.jit.load(f).eval().to(device)
-
- # Features for target image.
- target_images = target.unsqueeze(0).to(device).to(torch.float32)
- if target_images.shape[2] > 256:
- target_images = F.interpolate(target_images, size=(256, 256), mode='area')
- target_features = vgg16(target_images, resize_images=False, return_lpips=True)
-
- start_w = np.repeat(start_w, G.mapping.num_ws, axis=1)
- w_opt = torch.tensor(start_w, dtype=torch.float32, device=device,
- requires_grad=True) # pylint: disable=not-callable
-
- optimizer = torch.optim.Adam([w_opt] + list(noise_bufs.values()), betas=(0.9, 0.999),
- lr=hyperparameters.first_inv_lr)
-
- # Init noise.
- for buf in noise_bufs.values():
- buf[:] = torch.randn_like(buf)
- buf.requires_grad = True
-
- for step in tqdm(range(num_steps)):
-
- # Learning rate schedule.
- t = step / num_steps
- w_noise_scale = w_std * initial_noise_factor * max(0.0, 1.0 - t / noise_ramp_length) ** 2
- lr_ramp = min(1.0, (1.0 - t) / lr_rampdown_length)
- lr_ramp = 0.5 - 0.5 * np.cos(lr_ramp * np.pi)
- lr_ramp = lr_ramp * min(1.0, t / lr_rampup_length)
- lr = initial_learning_rate * lr_ramp
- for param_group in optimizer.param_groups:
- param_group['lr'] = lr
-
- # Synth images from opt_w.
- w_noise = torch.randn_like(w_opt) * w_noise_scale
- ws = (w_opt + w_noise)
-
- synth_images = G.synthesis(ws, noise_mode='const', force_fp32=True)
-
- # Downsample image to 256x256 if it's larger than that. VGG was built for 224x224 images.
- synth_images = (synth_images + 1) * (255 / 2)
- if synth_images.shape[2] > 256:
- synth_images = F.interpolate(synth_images, size=(256, 256), mode='area')
-
- # Features for synth images.
- synth_features = vgg16(synth_images, resize_images=False, return_lpips=True)
- dist = (target_features - synth_features).square().sum()
-
-        # Noise regularization: penalize spatial autocorrelation of each noise map,
-        # repeatedly average-pooling so the penalty is applied at every scale.
-        reg_loss = 0.0
-        for v in noise_bufs.values():
-            noise = v[None, None, :, :]  # must be [1,1,H,W] for F.avg_pool2d()
-            while True:
-                reg_loss += (noise * torch.roll(noise, shifts=1, dims=3)).mean() ** 2
-                reg_loss += (noise * torch.roll(noise, shifts=1, dims=2)).mean() ** 2
-                if noise.shape[2] <= 8:
-                    break
-                noise = F.avg_pool2d(noise, kernel_size=2)
- loss = dist + reg_loss * regularize_noise_weight
-
- if step % image_log_step == 0:
- with torch.no_grad():
- if use_wandb:
- global_config.training_step += 1
- wandb.log({f'first projection _{w_name}': loss.detach().cpu()}, step=global_config.training_step)
- log_image_from_w(w_opt, G, w_name)
-
- # Step
- optimizer.zero_grad(set_to_none=True)
- loss.backward()
- optimizer.step()
- logprint(f'step {step + 1:>4d}/{num_steps}: dist {dist:<4.2f} loss {float(loss):<5.2f}')
-
- # Normalize noise.
- with torch.no_grad():
- for buf in noise_bufs.values():
- buf -= buf.mean()
- buf *= buf.square().mean().rsqrt()
-
- del G
- return w_opt
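-
-
-# Usage sketch (illustrative; the names are hypothetical). `target` must match
-# the generator resolution and use the [0, 255] range noted in the signature:
-#   w_plus = project(G, target, num_steps=500,
-#                    device=torch.device('cuda'), w_name='example_image')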
diff --git a/spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/src/loop/style_training_loop.py b/spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/src/loop/style_training_loop.py
deleted file mode 100644
index fa0dd4ae836b2a85a13fd46f1e77ba4c38ecdefd..0000000000000000000000000000000000000000
--- a/spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/src/loop/style_training_loop.py
+++ /dev/null
@@ -1,170 +0,0 @@
-import numpy as np
-import torch
-from pytorch_lightning.loops import Loop
-
-from src.dataset import DATASET_REGISTRY
-from src.dataset.ray_utils import denormalize_vgg, normalize_vgg
-from src.loop.utils import N_to_reso, cal_n_samples
-from src.model import MODEL_REGISTRY
-from src.sampler.simple_sampler import SimpleSampler, InfiniteSamplerWrapper
-import torch.nn.functional as TF
-
-from src.style_module.style_module import cal_mse_content_loss, cal_adain_style_loss
-
-
-class StyleTrainingLoop(Loop):
- def __init__(self, epoch, cfg, renderer):
- super().__init__()
- self.cfg = cfg
- self.model = MODEL_REGISTRY.get(self.cfg["model"]["name"])(cfg)
-
- self.dataloader = DATASET_REGISTRY.get(self.cfg["dataset"]["name"])(
- **self.cfg["dataset"]["train"]["params"],
- )
- self.renderer = renderer
- self.optimizer = None
- self.training_sampler = None
- self.frame_sampler = None
- self.style_dataloader = None
- self.iteration = 0
- self.epoch = epoch
- self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
- self.init_loop()
- self.init_optimizer()
-
- def init_loop(self):
- self.white_bg = self.dataloader.white_bg
- self.near_far = self.dataloader.near_far
- self.h_rays, self.w_rays = self.dataloader.img_wh[1], self.dataloader.img_wh[0]
-
- self.step_ratio = self.cfg["sampler"]["params"]["step_ratio"]
- self.batch_size = self.cfg["sampler"]["params"]["batch_size"]
- self.patch_size = self.cfg["sampler"]["params"]["patch_size"]
- self.chunk_size = self.cfg["sampler"]["params"]["chunk_size"]
-
- self.aabb = self.dataloader.scene_bbox.to(self.device)
- reso_cur = N_to_reso(self.cfg["sampler"]["params"]["N_voxel_init"], self.aabb)
- self.nSamples = min(int(self.cfg["sampler"]["params"]["n_samples"]), cal_n_samples(reso_cur, self.step_ratio))
-
- torch.cuda.empty_cache()
- self.dataloader.prepare_feature_data(self.model.tensorf.encoder)
- self.allrays, self.allfeatures = self.dataloader.all_rays, self.dataloader.all_features
- self.allrays_stack, self.allrgbs_stack = self.dataloader.all_rays_stack, self.dataloader.all_rgbs_stack
-
- if not self.model.ndc_ray:
- self.allrays, self.allfeatures = self.model.tensorf.filtering_rays(self.allrays, self.allfeatures,
- bbox_only=True)
-
- self.training_sampler = SimpleSampler(self.allrays.shape[0], self.batch_size)
- self.frame_sampler = iter(InfiniteSamplerWrapper(self.allrays_stack.size(0))) # every next(sampler) returns a frame index
-
- self.style_dataloader = DATASET_REGISTRY.get(self.cfg["style_dataset"]["name"])(
- datadir=self.cfg["style_dataset"]["train"]["params"]["datadir"],
- batch_size=self.cfg["style_dataset"]["train"]["params"]["batch_size"],
- sampler=InfiniteSamplerWrapper,
- image_side_length=self.cfg["style_dataset"]["train"]["params"]["image_side_length"],
- num_workers=self.cfg["style_dataset"]["train"]["params"]["num_workers"],
- )
-
- def init_optimizer(self):
- grad_vars = self.model.tensorf.get_optparam_groups_feature_mod(self.cfg["optimizer"]["lr_init"], self.cfg["optimizer"]["lr_basis"])
-
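- # Per-iteration multiplicative decay: lr_factor ** N == lr_decay_target_ratio,
- # so after N decay iterations the learning rate has shrunk by exactly the
- # target ratio (N falls back to the total number of training iterations).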
- if self.cfg["optimizer"]["lr_decay_iters"] > 0:
- self.lr_factor = self.cfg["optimizer"]["lr_decay_target_ratio"] ** (1 / self.cfg["optimizer"]["lr_decay_iters"])
- else:
- self.lr_factor = self.cfg["optimizer"]["lr_decay_target_ratio"] ** (1 / self.cfg["trainer"]["n_iters"])
-
- print("lr decay", self.cfg["optimizer"]["lr_decay_target_ratio"], self.cfg["optimizer"]["lr_decay_iters"])
-
- self.optimizer = torch.optim.Adam(grad_vars, betas=(0.9, 0.99))
-
- def advance(self):
- # get style_img, this style_img has NOT been normalized according to the pretrained VGGmodel
- style_img = next(self.style_dataloader)[0].to(self.device)
-
- # randomly sample patch_size*patch_size patch from given frame
- frame_idx = next(self.frame_sampler)
- start_h = np.random.randint(0, self.h_rays - self.patch_size + 1)
- start_w = np.random.randint(0, self.w_rays - self.patch_size + 1)
- if self.white_bg:
- # move random sampled patches into center
- mid_h, mid_w = (self.h_rays - self.patch_size + 1) / 2, (self.w_rays - self.patch_size + 1) / 2
- if mid_h - start_h >= 1:
- start_h += np.random.randint(0, mid_h - start_h)
- elif mid_h - start_h <= -1:
- start_h += np.random.randint(mid_h - start_h, 0)
- if mid_w - start_w >= 1:
- start_w += np.random.randint(0, mid_w - start_w)
- elif mid_w - start_w <= -1:
- start_w += np.random.randint(mid_w - start_w, 0)
-
- rays_train = self.allrays_stack[frame_idx, start_h:start_h + self.patch_size, start_w:start_w + self.patch_size, :] \
- .reshape(-1, 6).to(self.device)
- # [patch*patch, 6]
-
- rgbs_train = self.allrgbs_stack[frame_idx, start_h:(start_h + self.patch_size),
- start_w:(start_w + self.patch_size), :].to(self.device)
- # [patch, patch, 3]
-
- feature_map, acc_map, style_feature = self.renderer(rays_train, self.model.tensorf, chunk=self.chunk_size,
- N_samples=self.nSamples, white_bg=self.white_bg,
- ndc_ray=self.model.ndc_ray, render_feature=True, style_img=style_img,
- device=self.device, is_train=True)
-
- feature_map = feature_map.reshape(self.patch_size, self.patch_size, 256)[None, ...].permute(0, 3, 1, 2)
- rgb_map = self.model.tensorf.decoder(feature_map)
-
- # feature_map is trained with normalized rgb maps, so here we don't normalize the rgb map again.
- rgbs_train = normalize_vgg(rgbs_train[None, ...].permute(0, 3, 1, 2))
-
- out_image_feature = self.model.tensorf.encoder(rgb_map)
- content_feature = self.model.tensorf.encoder(rgbs_train)
-
- if self.white_bg:
- mask = acc_map.reshape(self.patch_size, self.patch_size, 1)[None, ...].permute(0, 3, 1, 2)
- # if not (mask > 0.5).any():
- # continue
-
- # content loss
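- # acc_map (accumulated alpha) is downsampled to the relu4_1 feature
- # resolution and thresholded, so the losses only cover pixels the renderer
- # actually hit; the white background is masked out.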
- _mask = TF.interpolate(mask, size=content_feature.relu4_1.size()[-2:], mode='bilinear').ge(1e-5)
- content_loss = cal_mse_content_loss(torch.masked_select(content_feature.relu4_1, _mask),
- torch.masked_select(out_image_feature.relu4_1, _mask))
- # style loss
- style_loss = 0.
- for style_feat, image_feature in zip(style_feature, out_image_feature):
- _mask = TF.interpolate(mask, size=image_feature.size()[-2:], mode='bilinear').ge(1e-5)
- C = image_feature.size()[1]
- masked_img_feature = torch.masked_select(image_feature, _mask).reshape(1, C, -1)
- style_loss += cal_adain_style_loss(style_feat, masked_img_feature)
-
- content_loss *= self.cfg["style_config"]["content_weight"]
- style_loss *= self.cfg["style_config"]["style_weight"]
- else:
- # content loss
- content_loss = cal_mse_content_loss(content_feature.relu4_1, out_image_feature.relu4_1)
- # style loss
- style_loss = 0.
- for style_feat, image_feature in zip(style_feature, out_image_feature):
- style_loss += cal_adain_style_loss(style_feat, image_feature)
-
- content_loss *= self.cfg["style_config"]["content_weight"]
- style_loss *= self.cfg["style_config"]["style_weight"]
-
- feature_tv_loss = self.model.tensorf.tvreg(feature_map) * self.cfg["style_config"]["featuremap_tv_weight"]
- image_tv_loss = self.model.tensorf.tvreg(denormalize_vgg(rgb_map)) * self.cfg["style_config"]["image_tv_weight"]
-
- total_loss = content_loss + style_loss + feature_tv_loss + image_tv_loss
-
- self.iteration += 1
-
- self.optimizer.zero_grad()
- total_loss.backward()
- self.optimizer.step()
-
- for param_group in self.optimizer.param_groups:
- param_group['lr'] = param_group['lr'] * self.lr_factor
-
-
-
-
-
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/karras_ve/__init__.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/karras_ve/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/runner/hooks/closure.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/runner/hooks/closure.py
deleted file mode 100644
index b955f81f425be4ac3e6bb3f4aac653887989e872..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/runner/hooks/closure.py
+++ /dev/null
@@ -1,11 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from .hook import HOOKS, Hook
-
-
-@HOOKS.register_module()
-class ClosureHook(Hook):
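- """Binds an arbitrary callable as one of the runner's hook stages.
-
- The asserts guard that ``fn_name`` names an existing stage (e.g.
- ``before_run`` or ``after_train_iter``) and that ``fn`` is callable;
- the callable then replaces that method on this instance only.
- """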
-
- def __init__(self, fn_name, fn):
- assert hasattr(self, fn_name)
- assert callable(fn)
- setattr(self, fn_name, fn)
diff --git a/spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/groundingdino/datasets/__init__.py b/spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/groundingdino/datasets/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/commands/debug.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/commands/debug.py
deleted file mode 100644
index 2a3e7d298f393ed8532e4f11913635efc94cb329..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/commands/debug.py
+++ /dev/null
@@ -1,199 +0,0 @@
-import importlib.resources
-import locale
-import logging
-import os
-import sys
-from optparse import Values
-from types import ModuleType
-from typing import Any, Dict, List, Optional
-
-import pip._vendor
-from pip._vendor.certifi import where
-from pip._vendor.packaging.version import parse as parse_version
-
-from pip._internal.cli import cmdoptions
-from pip._internal.cli.base_command import Command
-from pip._internal.cli.cmdoptions import make_target_python
-from pip._internal.cli.status_codes import SUCCESS
-from pip._internal.configuration import Configuration
-from pip._internal.metadata import get_environment
-from pip._internal.utils.logging import indent_log
-from pip._internal.utils.misc import get_pip_version
-
-logger = logging.getLogger(__name__)
-
-
-def show_value(name: str, value: Any) -> None:
- logger.info("%s: %s", name, value)
-
-
-def show_sys_implementation() -> None:
- logger.info("sys.implementation:")
- implementation_name = sys.implementation.name
- with indent_log():
- show_value("name", implementation_name)
-
-
-def create_vendor_txt_map() -> Dict[str, str]:
- with importlib.resources.open_text("pip._vendor", "vendor.txt") as f:
- # Purge non version specifying lines.
- # Also, remove any space prefix or suffixes (including comments).
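- # e.g. 'urllib3==1.26.15  # MIT' -> 'urllib3==1.26.15' -> {'urllib3': '1.26.15'}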
- lines = [
- line.strip().split(" ", 1)[0] for line in f.readlines() if "==" in line
- ]
-
- # Transform into "module" -> version dict.
- return dict(line.split("==", 1) for line in lines)
-
-
-def get_module_from_module_name(module_name: str) -> ModuleType:
- # Module name can be uppercase in vendor.txt for some reason...
- module_name = module_name.lower().replace("-", "_")
- # PATCH: setuptools is actually only pkg_resources.
- if module_name == "setuptools":
- module_name = "pkg_resources"
-
- __import__(f"pip._vendor.{module_name}", globals(), locals(), level=0)
- return getattr(pip._vendor, module_name)
-
-
-def get_vendor_version_from_module(module_name: str) -> Optional[str]:
- module = get_module_from_module_name(module_name)
- version = getattr(module, "__version__", None)
-
- if not version:
- # Try to find version in debundled module info.
- assert module.__file__ is not None
- env = get_environment([os.path.dirname(module.__file__)])
- dist = env.get_distribution(module_name)
- if dist:
- version = str(dist.version)
-
- return version
-
-
-def show_actual_vendor_versions(vendor_txt_versions: Dict[str, str]) -> None:
- """Log the actual version and print extra info if there is
- a conflict or if the actual version could not be imported.
- """
- for module_name, expected_version in vendor_txt_versions.items():
- extra_message = ""
- actual_version = get_vendor_version_from_module(module_name)
- if not actual_version:
- extra_message = (
- " (Unable to locate actual module version, using"
- " vendor.txt specified version)"
- )
- actual_version = expected_version
- elif parse_version(actual_version) != parse_version(expected_version):
- extra_message = (
- " (CONFLICT: vendor.txt suggests version should"
- " be {})".format(expected_version)
- )
- logger.info("%s==%s%s", module_name, actual_version, extra_message)
-
-
-def show_vendor_versions() -> None:
- logger.info("vendored library versions:")
-
- vendor_txt_versions = create_vendor_txt_map()
- with indent_log():
- show_actual_vendor_versions(vendor_txt_versions)
-
-
-def show_tags(options: Values) -> None:
- tag_limit = 10
-
- target_python = make_target_python(options)
- tags = target_python.get_tags()
-
- # Display the target options that were explicitly provided.
- formatted_target = target_python.format_given()
- suffix = ""
- if formatted_target:
- suffix = f" (target: {formatted_target})"
-
- msg = "Compatible tags: {}{}".format(len(tags), suffix)
- logger.info(msg)
-
- if options.verbose < 1 and len(tags) > tag_limit:
- tags_limited = True
- tags = tags[:tag_limit]
- else:
- tags_limited = False
-
- with indent_log():
- for tag in tags:
- logger.info(str(tag))
-
- if tags_limited:
- msg = (
- "...\n[First {tag_limit} tags shown. Pass --verbose to show all.]"
- ).format(tag_limit=tag_limit)
- logger.info(msg)
-
-
-def ca_bundle_info(config: Configuration) -> str:
- levels = set()
- for key, _ in config.items():
- levels.add(key.split(".")[0])
-
- if not levels:
- return "Not specified"
-
- levels_that_override_global = ["install", "wheel", "download"]
- global_overriding_level = [
- level for level in levels if level in levels_that_override_global
- ]
- if not global_overriding_level:
- return "global"
-
- if "global" in levels:
- levels.remove("global")
- return ", ".join(levels)
-
-
-class DebugCommand(Command):
- """
- Display debug information.
- """
-
- usage = """
- %prog """
- ignore_require_venv = True
-
- def add_options(self) -> None:
- cmdoptions.add_target_python_options(self.cmd_opts)
- self.parser.insert_option_group(0, self.cmd_opts)
- self.parser.config.load()
-
- def run(self, options: Values, args: List[str]) -> int:
- logger.warning(
- "This command is only meant for debugging. "
- "Do not use this with automation for parsing and getting these "
- "details, since the output and options of this command may "
- "change without notice."
- )
- show_value("pip version", get_pip_version())
- show_value("sys.version", sys.version)
- show_value("sys.executable", sys.executable)
- show_value("sys.getdefaultencoding", sys.getdefaultencoding())
- show_value("sys.getfilesystemencoding", sys.getfilesystemencoding())
- show_value(
- "locale.getpreferredencoding",
- locale.getpreferredencoding(),
- )
- show_value("sys.platform", sys.platform)
- show_sys_implementation()
-
- show_value("'cert' config value", ca_bundle_info(self.parser.config))
- show_value("REQUESTS_CA_BUNDLE", os.environ.get("REQUESTS_CA_BUNDLE"))
- show_value("CURL_CA_BUNDLE", os.environ.get("CURL_CA_BUNDLE"))
- show_value("pip._vendor.certifi.where()", where())
- show_value("pip._vendor.DEBUNDLED", pip._vendor.DEBUNDLED)
-
- show_vendor_versions()
-
- show_tags(options)
-
- return SUCCESS
diff --git a/spaces/AutoLLM/AutoAgents/autoagents/agents/search.py b/spaces/AutoLLM/AutoAgents/autoagents/agents/search.py
deleted file mode 100644
index 6bb8c4c6fc911cfeaa3850553545ba227022fa90..0000000000000000000000000000000000000000
--- a/spaces/AutoLLM/AutoAgents/autoagents/agents/search.py
+++ /dev/null
@@ -1,260 +0,0 @@
-from typing import List, Union, Any, Optional, Dict
-import uuid
-import re
-from datetime import date
-import asyncio
-from collections import defaultdict
-import os
-
-from langchain.agents import Tool, AgentExecutor, LLMSingleActionAgent, AgentOutputParser
-from langchain.prompts import StringPromptTemplate
-from langchain import LLMChain
-from langchain.chat_models import ChatOpenAI
-from langchain.schema import AgentAction, AgentFinish
-from langchain.callbacks import get_openai_callback
-from langchain.callbacks.base import AsyncCallbackHandler
-from langchain.callbacks.manager import AsyncCallbackManager
-from langchain.base_language import BaseLanguageModel
-
-from autoagents.tools.tools import search_tool, note_tool, rewrite_search_query
-from autoagents.utils.logger import InteractionsLogger
-
-
-# Set up the base template
-template = """
-We are working together to satisfy the user's original goal step-by-step. Play to your strengths as an LLM.
-Make sure the plan is achievable using the
-available tools. You SHOULD directly produce a `Final Answer:` when you
-think you have good-enough information to achieve the Goal. The final answer should be descriptive, encompassing all relevant details.
-Today is {today}.
-
-## Goal:
-{input}
-
-If you require assistance or additional information, you should use *only* one of the following tools:
-{tools}.
-
-## Output format
-You MUST produce Output in the following format:
-
-Thought: you should always think about what to do when you think you have not achieved the Goal.
-Reasoning: reasoning
-Plan:
-- short bulleted
-- list that conveys
-- next-step plan
-Action: the action to take, should be ONE OF {tool_names}
-Action Input: the input to the Action
-Observation: the result of the Action
-... (this Thought/Reasoning/Plan/Action/Action Input/Observation can repeat N
-times until there is a Final Answer)
-Final Answer: the final answer to achieve the original Goal which can be the
-only output or when you have no Action to do next.
-
-## History
-{agent_scratchpad}
-
-Do not repeat any past actions in History, because you will not get additional information.
-If the last action is search, then you should use notepad to keep critical information.
-If you have gathered all the information needed in your plan to satisfy the user's original goal, then respond immediately with the Final Answer.
-"""
-
-
-# Set up a prompt template
-class CustomPromptTemplate(StringPromptTemplate):
- # The template to use
- template: str
- # The list of tools available
- tools: List[Tool]
- ialogger: InteractionsLogger
-
- def format(self, **kwargs) -> str:
- # Get the intermediate steps (AgentAction, Observation tuples)
- # Format them in a particular way
- intermediate_steps = kwargs.pop("intermediate_steps")
- outputs = ""
- # Set the agent_scratchpad variable to that value
- for action, observation in intermediate_steps[:-1]:
- outputs += f"{action.log}\n"
- if len(intermediate_steps) > 0:
- action, observation = intermediate_steps[-1]
- # self.ialogger.add_system({"action": action, "observation": observation})
- if action.tool not in ("Search", "Notepad"):
- raise Exception("Invalid tool requested by the model.")
- if action.tool == "Notepad":
- outputs += f"{action.log}\n"
- outputs += f"Observation: {observation}\n"
- elif action.tool == "Search":
- current = "".join([f"{d}" for d in observation])
- outputs += f"{action.log}\n"
- outputs += f"Observation: {current}\n"
-
- # Parse the output of the last step for the reasoning and plan
- regex = r"Thought\s*\d*\s*:(.*?)\n(.*)"
- match = re.search(regex, action.log, re.DOTALL)
- thoughts = match.group(1).strip() if match else ""
-
- regex = r"Reasoning\s*\d*\s*:(.*?)\n(.*)"
- match = re.search(regex, action.log, re.DOTALL)
- reasoning = match.group(1).strip() if match else ""
-
- regex = r"Plan\s*\d*\s*:(.*?)\nAction(.*)"
- match = re.search(regex, action.log, re.DOTALL)
- plans = match.group(1).strip() if match else ""
- self.ialogger.add_structured_data({"output":{"thoughts": thoughts,
- "reasoning": reasoning,
- "plans": plans,
- "action": action.tool,
- "action_input": action.tool_input,
- "raw_output":action.log},
- "observation": observation})
- kwargs["agent_scratchpad"] = outputs
- # Create a tools variable from the list of tools provided
- kwargs["tools"] = "\n".join([f"{tool.name}: {tool.description}" for tool in self.tools])
- # Create a list of tool names for the tools provided
- kwargs["tool_names"] = ", ".join([tool.name for tool in self.tools])
- kwargs["today"] = date.today()
- final_prompt = self.template.format(**kwargs)
- self.ialogger.add_system({"value": final_prompt})
- return final_prompt
-
-
-class CustomOutputParser(AgentOutputParser):
- class Config:
- arbitrary_types_allowed = True
- ialogger: InteractionsLogger
- llm: BaseLanguageModel
- new_action_input: Optional[str]
- action_history = defaultdict(set)
-
- def parse(self, llm_output: str) -> Union[AgentAction, AgentFinish]:
- self.ialogger.add_ai(llm_output)
- # Check if agent should finish
- if "Final Answer:" in llm_output:
- final_answer = llm_output.split("Final Answer:")[-1].strip()
- self.ialogger.add_structured_data({"output": {"action": "Final Answer",
- "action_input": final_answer,
- "raw_output": llm_output}})
- return AgentFinish(
- # Return values is generally always a dictionary with a single `output` key
- # It is not recommended to try anything else at the moment :)
- return_values={"output": final_answer},
- log=llm_output,
- )
- # Parse out the action and action input
- regex = r"Action\s*\d*\s*:(.*?)\nAction\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)"
- match = re.search(regex, llm_output, re.DOTALL)
- if not match:
- raise ValueError(f"Could not parse LLM output: `{llm_output}`")
- action = match.group(1).strip()
- action_input = match.group(2).strip().strip('"')
-
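- # If the model repeats a query it has already issued for this tool, have the
- # LLM rewrite it (re-running an identical search yields no new information);
- # the rewritten input is stashed so the callback can notify the user.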
- if action_input in self.action_history[action]:
- new_action_input = rewrite_search_query(action_input,
- self.action_history[action],
- self.llm)
- self.ialogger.add_message({"query_rewrite": True})
- self.new_action_input = new_action_input
- self.action_history[action].add(new_action_input)
- return AgentAction(tool=action, tool_input=new_action_input, log=llm_output)
- else:
- # Return the action and action input
- self.action_history[action].add(action_input)
- return AgentAction(tool=action, tool_input=action_input, log=llm_output)
-
-
-class ActionRunner:
- def __init__(self,
- outputq,
- llm: BaseLanguageModel,
- persist_logs: bool = False):
- self.ialogger = InteractionsLogger(name=f"{uuid.uuid4().hex[:6]}", persist=persist_logs)
- tools = [search_tool, note_tool]
- prompt = CustomPromptTemplate(
- template=template,
- tools=tools,
- input_variables=["input", "intermediate_steps"],
- ialogger=self.ialogger)
-
- output_parser = CustomOutputParser(ialogger=self.ialogger, llm=llm)
-
- class MyCustomHandler(AsyncCallbackHandler):
- def __init__(self):
- pass
-
- async def on_chain_end(self, outputs, **kwargs) -> None:
- if "text" in outputs:
- await outputq.put(outputs["text"])
-
- async def on_agent_action(
- self,
- action: AgentAction,
- *,
- run_id: uuid.UUID,
- parent_run_id: Optional[uuid.UUID] = None,
- **kwargs: Any,
- ) -> None:
- if (new_action_input := output_parser.new_action_input):
- # Notify users
- await outputq.put(RuntimeWarning(f"Action Input Rewritten: {new_action_input}"))
- output_parser.new_action_input = None
-
- async def on_tool_start(
- self,
- serialized: Dict[str, Any],
- input_str: str,
- *,
- run_id: uuid.UUID,
- parent_run_id: Optional[uuid.UUID] = None,
- **kwargs: Any,
- ) -> None:
- pass
-
- async def on_tool_end(
- self,
- output: str,
- *,
- run_id: uuid.UUID,
- parent_run_id: Optional[uuid.UUID] = None,
- **kwargs: Any,
- ) -> None:
- await outputq.put(output)
-
- handler = MyCustomHandler()
-
- llm_chain = LLMChain(llm=llm, prompt=prompt, callbacks=[handler])
- tool_names = [tool.name for tool in tools]
- for tool in tools:
- tool.callbacks = [handler]
-
- agent = LLMSingleActionAgent(
- llm_chain=llm_chain,
- output_parser=output_parser,
- stop=["\nObservation:"],
- allowed_tools=tool_names
- )
- callback_manager = AsyncCallbackManager([handler])
-
- # Finally create the Executor
- self.agent_executor = AgentExecutor.from_agent_and_tools(agent=agent,
- tools=tools,
- verbose=False,
- callback_manager=callback_manager)
-
- async def run(self, goal: str, outputq):
- self.ialogger.set_goal(goal)
- try:
- with get_openai_callback() as cb:
- output = await self.agent_executor.arun(goal)
- self.ialogger.add_cost({"total_tokens": cb.total_tokens,
- "prompt_tokens": cb.prompt_tokens,
- "completion_tokens": cb.completion_tokens,
- "total_cost": cb.total_cost,
- "successful_requests": cb.successful_requests})
- self.ialogger.save()
- except Exception as e:
- self.ialogger.add_message({"error": str(e)})
- self.ialogger.save()
- await outputq.put(e)
- return
- return output
diff --git a/spaces/AutomationVR/ImageDemo/README.md b/spaces/AutomationVR/ImageDemo/README.md
deleted file mode 100644
index be288347035659aa46796d4ab35aca8cada340c5..0000000000000000000000000000000000000000
--- a/spaces/AutomationVR/ImageDemo/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: SDXL Base 1.0Gradio
-emoji: 🏢
-colorFrom: purple
-colorTo: pink
-sdk: gradio
-sdk_version: 3.50.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Awesimo/jojogan/op/upfirdn2d_cpu.py b/spaces/Awesimo/jojogan/op/upfirdn2d_cpu.py
deleted file mode 100644
index a0f820b4c81e03598589b1ea6b95cf9bef9b04f8..0000000000000000000000000000000000000000
--- a/spaces/Awesimo/jojogan/op/upfirdn2d_cpu.py
+++ /dev/null
@@ -1,60 +0,0 @@
-import os
-
-import torch
-from torch.autograd import Function
-from torch.nn import functional as F
-
-
-
-module_path = os.path.dirname(__file__)
-
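-# CPU fallback for StyleGAN2's upfirdn2d op: upsample by zero-insertion, apply
-# a 2D FIR filter, then downsample by striding, all with plain PyTorch ops
-# instead of the custom CUDA kernel.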
-def upfirdn2d(input, kernel, up=1, down=1, pad=(0, 0)):
- out = upfirdn2d_native(
- input, kernel, up, up, down, down, pad[0], pad[1], pad[0], pad[1]
- )
-
- return out
-
-
-def upfirdn2d_native(
- input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1
-):
- _, channel, in_h, in_w = input.shape
- input = input.reshape(-1, in_h, in_w, 1)
-
- _, in_h, in_w, minor = input.shape
- kernel_h, kernel_w = kernel.shape
-
- out = input.view(-1, in_h, 1, in_w, 1, minor)
- out = F.pad(out, [0, 0, 0, up_x - 1, 0, 0, 0, up_y - 1])
- out = out.view(-1, in_h * up_y, in_w * up_x, minor)
-
- out = F.pad(
- out, [0, 0, max(pad_x0, 0), max(pad_x1, 0), max(pad_y0, 0), max(pad_y1, 0)]
- )
- out = out[
- :,
- max(-pad_y0, 0) : out.shape[1] - max(-pad_y1, 0),
- max(-pad_x0, 0) : out.shape[2] - max(-pad_x1, 0),
- :,
- ]
-
- out = out.permute(0, 3, 1, 2)
- out = out.reshape(
- [-1, 1, in_h * up_y + pad_y0 + pad_y1, in_w * up_x + pad_x0 + pad_x1]
- )
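- # Flip the kernel so F.conv2d (a cross-correlation) performs true convolution.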
- w = torch.flip(kernel, [0, 1]).view(1, 1, kernel_h, kernel_w)
- out = F.conv2d(out, w)
- out = out.reshape(
- -1,
- minor,
- in_h * up_y + pad_y0 + pad_y1 - kernel_h + 1,
- in_w * up_x + pad_x0 + pad_x1 - kernel_w + 1,
- )
- out = out.permute(0, 2, 3, 1)
- out = out[:, ::down_y, ::down_x, :]
-
- out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h + down_y) // down_y
- out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w + down_x) // down_x
-
- return out.view(-1, channel, out_h, out_w)
diff --git a/spaces/Awiny/Image2Paragraph/app.py b/spaces/Awiny/Image2Paragraph/app.py
deleted file mode 100644
index e35190c22ed4a0fa80f3992c32d4a4b0a5b8837b..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/app.py
+++ /dev/null
@@ -1,142 +0,0 @@
-import gradio as gr
-import cv2
-import numpy as np
-from PIL import Image
-import base64
-from io import BytesIO
-from models.image_text_transformation import ImageTextTransformation
-import argparse
-import torch
-
-parser = argparse.ArgumentParser()
-parser.add_argument('--gpt_version', choices=['gpt-3.5-turbo', 'gpt4'], default='gpt-3.5-turbo')
-parser.add_argument('--image_caption', action='store_true', dest='image_caption', default=True, help='Set this flag to True if you want to use BLIP2 Image Caption')
-parser.add_argument('--dense_caption', action='store_true', dest='dense_caption', default=True, help='Set this flag to True if you want to use Dense Caption')
-parser.add_argument('--semantic_segment', action='store_true', dest='semantic_segment', default=True, help='Set this flag to True if you want to use semantic segmentation')
-parser.add_argument('--sam_arch', choices=['vit_b', 'vit_l', 'vit_h'], dest='sam_arch', default='vit_b', help='vit_b is the default model (fast but not accurate), vit_l and vit_h are larger models')
-parser.add_argument('--captioner_base_model', choices=['blip', 'blip2'], dest='captioner_base_model', default='blip', help='blip2 requires 15G GPU memory, blip requires 6G GPU memory')
-parser.add_argument('--region_classify_model', choices=['ssa', 'edit_anything'], dest='region_classify_model', default='edit_anything', help='Select the region classification model: edit anything is ten times faster than ssa, but less accurate.')
-parser.add_argument('--image_caption_device', choices=['cuda', 'cpu'], default='cuda', help='Select the device: cuda or cpu, gpu memory larger than 14G is recommended')
-parser.add_argument('--dense_caption_device', choices=['cuda', 'cpu'], default='cuda', help='Select the device: cuda or cpu; less than 6G of GPU memory is not recommended')
-parser.add_argument('--semantic_segment_device', choices=['cuda', 'cpu'], default='cuda', help='Select the device: cuda or cpu; GPU memory larger than 14G is recommended. Make sure this model and the image_caption model are on the same device.')
-parser.add_argument('--contolnet_device', choices=['cuda', 'cpu'], default='cpu', help='Select the device: cuda or cpu; less than 6G of GPU memory is not recommended')
-
-args = parser.parse_args()
-
-device = "cuda" if torch.cuda.is_available() else "cpu"
-# device = "cpu"
-
-if device == "cuda":
- args.image_caption_device = "cuda"
- args.dense_caption_device = "cuda"
- args.semantic_segment_device = "cuda"
- args.contolnet_device = "cuda"
-else:
- args.image_caption_device = "cpu"
- args.dense_caption_device = "cpu"
- args.semantic_segment_device = "cpu"
- args.contolnet_device = "cpu"
-
-def pil_image_to_base64(image):
- buffered = BytesIO()
- image.save(buffered, format="JPEG")
- img_str = base64.b64encode(buffered.getvalue()).decode()
- return img_str
-
-def add_logo():
- with open("examples/logo.png", "rb") as f:
- logo_base64 = base64.b64encode(f.read()).decode()
- return logo_base64
-
-def process_image(image_src, options=None, processor=None):
- print(options)
- if options is None:
- options = []
- processor.args.semantic_segment = "Semantic Segment" in options
- image_generation_status = "Image Generation" in options
- image_caption, dense_caption, region_semantic, gen_text = processor.image_to_text(image_src)
- if image_generation_status:
- gen_image = processor.text_to_image(gen_text)
- gen_image_str = pil_image_to_base64(gen_image)
- # Combine the outputs into a single HTML output
- custom_output = f'''
- <h2>Image->Text:</h2>
- <div style="display: flex; flex-wrap: wrap;">
- <div style="flex: 1;">
- <h3>Image Caption</h3>
- <p>{image_caption}</p>
- </div>
- <div style="flex: 1;">
- <h3>Dense Caption</h3>
- <p>{dense_caption}</p>
- </div>
- <div style="flex: 1;">
- <h3>Region Semantic</h3>
- <p>{region_semantic}</p>
- </div>
- </div>
- <div style="display: flex; flex-wrap: wrap;">
- <div style="flex: 1;">
- <h3>GPT4 Reasoning:</h3>
- <p>{gen_text}</p>
- </div>
- </div>
- '''
- if image_generation_status:
- custom_output += f'''
- <h2>Text->Image:</h2>
- <div style="display: flex; flex-wrap: wrap;">
- <div style="flex: 1;">
- <h3>Generated Image</h3>
- <img src="data:image/jpeg;base64,{gen_image_str}" width="400">
- </div>
- </div>
- '''
- return custom_output
-
-processor = ImageTextTransformation(args)
-
-# Create Gradio input and output components
-image_input = gr.inputs.Image(type='filepath', label="Input Image")
-semantic_segment_checkbox = gr.inputs.Checkbox(label="Semantic Segment", default=False)
-image_generation_checkbox = gr.inputs.Checkbox(label="Image Generation", default=False)
-
-
-extra_title = r'' + '\n' + \
- r'[](https://huggingface.co/spaces/Awiny/Image2Paragraph?duplicate=true)' + '\n\n'
-
-
-
-logo_base64 = add_logo()
-# Create the title with the logo
-title_with_logo = \
- f'<img src="data:image/png;base64,{logo_base64}" width="50" style="vertical-align: middle;"> Understanding Image with Text'
-
-examples = [
- ["examples/test_4.jpg"],
-]
-
-# Create Gradio interface
-interface = gr.Interface(
- fn=lambda image, options: process_image(image, options, processor),
- inputs=[image_input,
- gr.CheckboxGroup(
- label="Options",
- choices=["Image Generation", "Semantic Segment"],
- ),
- ],
- outputs=gr.outputs.HTML(),
- title=title_with_logo,
- examples=examples,
- description=extra_title + """
- Image.txt. This code supports image-to-text transformation; the generated text can then be used for retrieval, question answering, and other zero-shot tasks.
- \n Github: https://github.com/showlab/Image2Paragraph
- \n Twitter: https://twitter.com/awinyimgprocess/status/1646225454599372800?s=46&t=HvOe9T2n35iFuCHP5aIHpQ
- \n For the online demo, we use the smallest models to speed things up. For better results, see the GitHub repo for how to use larger models.
- \n The text2image model is ControlNet, which uses Canny edges as reference.
- \n To speed things up, we generate images at a small size (384); run the code locally for high-quality samples.
- """
-)
-
-# Launch the interface
-interface.launch()
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Anime Todo Star Blockman Ir Descargar.md b/spaces/Benson/text-generation/Examples/Anime Todo Star Blockman Ir Descargar.md
deleted file mode 100644
index 03243af0fffd15ce656fa368f3cbc291de2dfff8..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Anime Todo Star Blockman Ir Descargar.md
+++ /dev/null
@@ -1,60 +0,0 @@
-
-
-Anime All Star Blockman Go: A Beginner's Guide
-
-If you are a fan of anime and action games, you may want to check out Anime All Star Blockman Go, a new game that combines both elements in a fun and exciting way. In this game, you can choose from a variety of anime characters, each with their own unique skills and abilities, and explore a vast map full of enemies, bosses, events, and rewards. You can also join a clan and take part in clan wars, where you can cooperate with other players and compete against other clans. In this article, we will give you a brief overview of the game and its features, as well as some tips and tricks on how to play it better.
-
-What Is Anime All Star Blockman Go?
-
-Anime All Star Blockman Go is a free action game developed by Blockman GO Studio, the same developer behind other popular games such as Bed Wars, Sky Wars, Egg Wars, and more. The game is available for Android and iOS devices, and you can download it from the Google Play Store or the App Store. The game is part of the Blockman GO platform, which lets you access various mini-games with your friends for free.
-The game is inspired by several anime series, such as Naruto, One Piece, Dragon Ball, Bleach, Fairy Tail, Attack on Titan, My Hero Academia, and more. You can choose from more than 50 different anime characters, each with their own unique skills and abilities. You can also customize your character's appearance with different outfits, accessories, weapons, and pets.
-
-The game's main mode is adventure mode, where you can explore a large map full of enemies, bosses, events, and rewards. You can fight alone or team up with other players to defeat powerful enemies and earn coins and tickets. Coins are the game's main currency, which you can use to buy items from the shop or upgrade your character's skills. Tickets are used to unlock new characters or spin the lucky wheel for a chance at rare items.
-
-How to Download and Install Anime All Star Blockman Go
-
-Downloading and installing Anime All Star Blockman Go is very easy. You just need to follow these simple steps:
-
-
-Go to the Google Play Store or the App Store on your device.
-
-Search for "Anime All Star Blockman Go" or "Blockman GO" in the search bar.
-
-Tap the game's icon and then tap "Install".
-
-Wait for the game to download and install on your device.
-
-Open the game and create an account or sign in with your existing one.
-How to Play Anime All Star Blockman Go?
-
-Playing Anime All Star Blockman Go is very easy and fun. You just need to follow these simple steps:
-
-Choosing Your Character
-
-When you start the game, you will be asked to choose your first character. You can pick from four different characters: Naruto, Luffy, Goku, and Ichigo. Each character has their own unique skills and abilities, which you can use in combat. For example, Naruto can use his shadow clones and rasengan, Luffy can stretch his limbs and use his gear modes, Goku can fly and use his kamehameha, and Ichigo can use his zanpakuto and bankai. You can also switch characters at any time by tapping the character icon in the top-left corner of the screen.
-
-Exploring the Map
-
-After choosing your character, you will enter adventure mode, where you can explore a large map full of enemies, bosses, events, and rewards. You can move your character using the joystick in the bottom-left corner of the screen. You can also jump by pressing the jump button in the bottom-right corner of the screen. You can interact with various objects and NPCs by tapping on them. For example, you can talk to the shopkeeper to buy items, talk to the quest giver to accept quests, or talk to the portal master to teleport to different locations.
-
-
-
-Fighting Enemies and Bosses
-
-Fighting enemies and bosses is one of the main features of Anime All Star Blockman Go. You can fight alone or team up with other players to defeat powerful enemies and earn coins and tickets. You can use your skills and items to attack your enemies and dodge their attacks. You can also use your ultimate skill, a powerful move that deals massive damage to your enemies. To use your skills and items, tap their icons in the bottom-right corner of the screen. To use your ultimate skill, you need to fill your energy bar by attacking or being hit by enemies.
-
-Enemies come in different types, such as normal, elite, and boss. Normal enemies are easy to defeat and drop coins and items. Elite enemies are stronger and drop more coins and items. Bosses are very strong and drop tickets and Gcubes. You can find enemies scattered across the map or in specific locations marked with red circles. You can also find bosses in specific locations marked with skull icons.
-
-Earning Coins and Tickets
-
-Coins and tickets are the main currencies of Anime All Star Blockman Go, which you can use to buy items from the shop or upgrade your character's skills. You can earn coins and tickets by fighting enemies and bosses, completing quests, opening chests and boxes, or spinning the lucky wheel. You can also get coins and tickets by watching ads or completing offers.
-
-You can use coins to buy items from the shop, such as potions, scrolls, bombs, or pets. Items can help you in combat by healing you, boosting your stats, damaging your enemies, or summoning allies. You can also use coins to upgrade your character's skills, which increases their damage, range, cooldown, or effect.
-
-
-
-How to Join a Clan and Take Part in Clan Wars
-
-Joining a clan and taking part in clan wars gives you more social and competitive features in Anime All Star Blockman Go. You can join a clan by applying to an existing clan or creating your own. You can also take part in clan wars, where you can cooperate with your clan members and compete against other clans for glory and rewards. Here are some tips on how to join a clan and take part in clan wars:
-
-
-Join a clan that suits your preferences, such as your language, region, level, or playstyle. You can browse the clan list or search for a specific clan by name or ID.
-
-Create your own clan if you want to be the leader, and invite your friends or other players. You need to spend 1000 coins or 100 Gcubes to create a clan.
-
-Take part in clan wars by tapping the clan war icon in the top-right corner of the screen. You can join a clan war once a day for free, or more often by spending coins or Gcubes.
-
-Cooperate with your clan members and use your skills and items to defeat the enemy clan's base and players. You can also chat with your clan members and send them gifts.
-
-Earn clan points and rewards by winning clan wars or completing clan quests. You can use clan points to buy character shards or items from the clan shop.
-
-
-Conclusion
-
-Anime All Star Blockman Go is a fun and exciting game that combines anime and action elements in a unique way. You can choose from a variety of anime characters, each with their own unique skills and abilities, and explore a vast map full of enemies, bosses, events, and rewards. You can also join a clan and take part in clan wars, where you can cooperate with other players and compete against other clans. If you are looking for a game that will keep you entertained and challenged, you should definitely give Anime All Star Blockman Go a try.
-
-
-We hope you enjoyed this article and learned something new about Anime All Star Blockman Go. If you did, please share it with your friends and leave us a comment below. Thanks for reading!
-
-Frequently Asked Questions
-
-Here are some frequently asked questions and their answers about Anime All Star Blockman Go:
-
-
-What are the system requirements for Anime All Star Blockman Go?
-
-Anime All Star Blockman Go requires Android 4.1 or higher, or iOS 9.0 or higher, to run smoothly. It also requires an internet connection to play online.
-
-How can I get free Gcubes in Anime All Star Blockman Go?
-
-You can get free Gcubes by fighting bosses, opening boxes, spinning the lucky wheel, watching ads, completing offers, or inviting your friends to play the game.
-
-How can I change my character's appearance in Anime All Star Blockman Go?
-
-You can change your character's appearance by buying different outfits, accessories, weapons, and pets from the shop, or by getting them from the lucky wheel. You can also customize your character's hair color, eye color, skin color, and name.
-
-How can I report a bug or an issue in Anime All Star Blockman Go?
-
-You can report a bug or an issue in Anime All Star Blockman Go by tapping the settings icon in the top-right corner of the screen and then tapping "Feedback". You can also contact the developer through their official website, Facebook page, Twitter account, or YouTube channel.
-
-How can I update Anime All Star Blockman Go to the latest version?
-
-You can update Anime All Star Blockman Go to the latest version by going to the Google Play Store or the App Store on your device and tapping "Update". You can also enable automatic updates for the game in your device settings.
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Cuerda Hroe 3 Mod Apk Revdl.md b/spaces/Benson/text-generation/Examples/Cuerda Hroe 3 Mod Apk Revdl.md
deleted file mode 100644
index 76fc83b836eeba8113233009741deb8289ba13b9..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Cuerda Hroe 3 Mod Apk Revdl.md
+++ /dev/null
@@ -1,47 +0,0 @@
-
-
-Rope Hero 3 Mod Apk Revdl: A Guide to Downloading and Playing the Latest Superhero Game
-
-Do you like superhero games? Do you want to swing across the city like Spider-Man, fight criminals like Batman, and save the world like Superman? If so, you should try Rope Hero 3, an exciting 3D action game that lets you become a rope hero with amazing superpowers. But wait, there's more. You can also download and install Rope Hero 3 mod apk revdl, a modified version of the game that gives you unlimited resources, weapons, vehicles, and more. In this article, we will show you how to download and play Rope Hero 3 mod apk revdl, and give you some tips and tricks to master the game.
-How to Download and Install Rope Hero 3 Mod Apk Revdl
-
-If you want to enjoy Rope Hero 3 with all of its features unlocked, you need to download and install Rope Hero 3 mod apk revdl. This is a modified version of the game that gives you unlimited gems, money, weapons, ammo, armor, and more. You can use these resources to buy whatever you want in the game, such as guns, knives, bazookas, blasters, etc. You can also upgrade your suit, skills, and abilities to make your rope hero more powerful and agile.
-
-To download and install Rope Hero 3 mod apk revdl, follow these steps:
-
-
-Go to the [revdl website]( 1 ) and search for Rope Hero 3 mod apk.
-
-Select the latest version of the mod apk from the list of results.
-
-Click the download button and wait for the file to download.
-
-After downloading, go to your device settings and enable unknown sources.
-
-Go to your file manager and locate the downloaded file.
-
-Tap the file and follow the instructions to install it.
-
-Launch the game and enjoy!
-
-
-How to Play Rope Hero 3 Mod Apk Revdl
-
-The basic gameplay and controls of Rope Hero 3 are easy to learn. You can use the virtual joystick on the left side of the screen to move your rope hero around the city. You can use the buttons on the right side of the screen to perform various actions, such as jumping, shooting, swinging, etc. You can also tap the icons at the top of the screen to access your inventory, map, missions, and settings. You can customize your rope hero's appearance, suit, weapons, and skills to your preference.
-
-The game has many features and abilities that make it exciting and challenging. You can use your super rope to swing across the city like Spider-Man and feel the adrenaline as you fly through the air. You can also use your rope to climb walls, ride rooftops, and lasso enemies and cars. You can use your rope as a weapon to whip, strangle, or throw your enemies. You can also use various weapons and gadgets to fight gangs, police, clones, and other enemies. You can choose from a wide range of guns, knives, bazookas, blasters, grenades, etc. You can also use special gadgets such as jetpacks, drones, magnets, etc. to enhance your gameplay.
-
-
-The game has many missions and tasks you have to complete to progress through the story and unlock new features. You can follow the map and the markers to find your objectives and complete them. Some of the missions include saving hostages, destroying clone bases, stealing cars, robbing banks, etc. You can also explore the open-world city and find secrets and hidden locations. You can interact with various characters and objects in the city, such as civilians, animals, vending machines, etc. You can also cause chaos and mayhem in the city by destroying buildings, cars, traffic lights, etc.
-
-
-Tips and Tricks to Master Rope Hero 3 Mod Apk Revdl
-
-If you want to become a master of Rope Hero 3 mod apk revdl, you need to know some tips and tricks that can help you improve your skills and performance. Here are some of them:
-
-
-The best weapons and gadgets to use in different situations depend on your preference and style. However, some of the most useful are the laser gun, which can fire powerful laser beams; the grenade launcher, which can cause massive explosions; the jetpack, which lets you fly through the air; and the magnet gadget, which can attract or repel metallic objects.
-
-The best vehicles and bikes to drive and steal in the city are those that are fast, durable, and maneuverable. Some of the best are the sports car, which can speed through traffic; the tank, which can crush anything in its path; the helicopter, which can fly over obstacles; and the bike, which can perform stunts and tricks.
-
-The best skills and upgrades to improve your rope hero's performance are those that increase your health, damage, speed, and agility. Some of the best are the health regeneration skill, which heals you over time; the damage boost skill, which increases your weapon power; the speed boost skill, which makes you run faster; and the agility boost skill, which lets you jump higher and swing faster.
-
-The best secrets and hidden locations to explore in the city are those that contain valuable items or Easter eggs. Some of them are: a secret underground lab where you can find a cloning machine; a secret rooftop where you can find a UFO; a secret alley where you can find Spider-Man graffiti; and a secret island where you can find a dinosaur.
-
-
-Conclusion
-
-
-Frequently Asked Questions
-
-Here are some frequently asked questions and answers related to Rope Hero 3 mod apk revdl:
-
-Q: Is Rope Hero 3 mod apk revdl safe to download and install?
-
-A: Yes, Rope Hero 3 mod apk revdl is safe to download and install from the revdl website, which is a trusted source of modded games and apps. However, you should always be careful when downloading and installing any file from the internet, and scan it for viruses or malware before opening it.
-
-Q: How do I update Rope Hero 3 mod apk revdl?
-
-A: To update Rope Hero 3 mod apk revdl, you need to visit the revdl website again and download the latest version of the mod apk. Then, you need to uninstall the old version of the game from your device and install the new one. You can also check for updates from within the game by tapping the settings icon and selecting the update option.
-
-Q: Can I play Rope Hero 3 mod apk revdl online with other players?
-
-A: No, Rope Hero 3 mod apk revdl is an offline game that does not require an internet connection to play. You can play alone or with your friends on the same device using split-screen mode. However, you cannot play online with other players or sync your progress across devices.
-
-Q: Can I play Rope Hero 3 mod apk revdl on PC or iOS devices?
-
-A: No, Rope Hero 3 mod apk revdl is only compatible with Android devices. You cannot play it on PC or iOS devices unless you use an emulator or a simulator. However, this may affect the game's performance and quality, and it may not work properly.
-
-Q: What are some other games like Rope Hero 3 mod apk revdl?
-
-A: If you like Rope Hero 3 mod apk revdl, you may also like some other games that are similar in genre and theme. Some of them are: Spider-Man Unlimited, Batman Arkham Origins, Superman Returns, Iron Man 3, etc.
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Descargar Gratis Nba 2k20 V98.md b/spaces/Benson/text-generation/Examples/Descargar Gratis Nba 2k20 V98.md
deleted file mode 100644
index e01ee6a376c82e965cf67604adfc703758cbfea7..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descargar Gratis Nba 2k20 V98.md
+++ /dev/null
@@ -1,150 +0,0 @@
-
-
Cómo descargar NBA 2K20 V98 gratis en Android e iOS
-
Si eres un fan de los juegos de baloncesto, es posible que hayas oído hablar de NBA 2K20, la última entrega de la popular serie NBA 2K. Este juego ofrece una experiencia de simulación de baloncesto realista e inmersiva, con varios modos, características y mejoras. ¿Pero sabías que hay una nueva versión de NBA 2K20 que puedes descargar gratis en tu dispositivo Android o iOS? Esta versión se llama NBA 2K20 V98, y tiene algunas ventajas sobre el juego original. En este artículo, te mostraremos cómo descargar NBA 2K20 V98 gratis, cuáles son sus características, consejos y trucos, y comentarios.
Antes de descargar NBA 2K20 V98, debe asegurarse de que su dispositivo cumple con los requisitos mínimos del sistema. Según [4], estos son:
-
-
OS: Windows 7 de 64 bits, Windows 8.1 de 64 bits o Windows 10 de 64 bits
-
Procesador: Intelạ Core! i3-530 @ 2.93 GHz / AMD FX-4100 @ 3.60 GHz o mejor
-
Memoria: 4 GB de RAM
-
Gráficos: NVIDIA! GeForce GT 450 1GB / AMD= Radeon! HD 7770 1GB o mejor
-
DirectX: Versión 11
-
Almacenamiento: 80 GB de espacio disponible
-
Tarjeta de sonido: DirectX 9.0x compatible
-
Gamepad analógico dual: Recomendado
-
-
Si su dispositivo cumple con estos requisitos, puede proceder a descargar NBA 2K20 V98.
-
Pasos
-
Para descargar NBA 2K20 V98 gratis en tu dispositivo Android o iOS, sigue estos pasos:
-
-
-
Ir a [8], donde se puede encontrar el enlace directo para descargar NBA 2K20 APK + OBB para Android & iOS V98.2.
-
Haga clic en el enlace y elija su plataforma preferida (Android o iOS).
-
Descargue el archivo APK y el archivo OBB en su dispositivo.
-
Instalar el archivo APK tocando en él y permitiendo fuentes desconocidas si se le solicita.
-
-
Iniciar el juego y disfrutar de NBA 2K20 V98 gratis en su dispositivo.
-
-
Características de NBA 2K20 V98
-
NBA 2K20 V98 no es solo una simple actualización del juego original. Tiene algunas características nuevas y mejoradas que lo hacen más agradable y realista. Estas son algunas de las características de NBA 2K20 V98 que usted debe saber acerca de:
-
Juego
-
NBA 2K20 V98 mejora el juego de NBA 2K20 añadiendo más profundidad y variedad a la simulación de baloncesto. Algunas de las mejoras de juego son:
-
-
Un nuevo motor de movimiento que genera movimientos, colisiones y reacciones más realistas del jugador.
-
Un sistema de dribbling renovado que te da más control y creatividad con la pelota.
-
Un nuevo medidor de disparo que muestra el punto de liberación ideal para cada tipo de disparo y situación.
-
Un nuevo sistema de insignias que te permite personalizar las habilidades y habilidades de tu jugador.
-
Un nuevo sistema de adquisición que te permite activar mejoras especiales para tu jugador o equipo cuando te desempeñas bien.
-
-
Modos
-
NBA 2K20 V98 ofrece una variedad de modos que se adaptan a diferentes estilos de juego y preferencias. Algunos de los modos son:
-
-
MyCareer: Un modo historia que te permite crear tu propio jugador y seguir su viaje desde la universidad a la NBA.
-
MyTeam: Un modo de recolección de cartas que te permite construir tu propio equipo de fantasía con jugadores de diferentes épocas y competir online o offline.
-
MyGM: Un modo de gestión que le permite tomar el control de una franquicia de la NBA y tomar decisiones sobre oficios, contratos, exploración, y más.
-
MyLeague: Un modo de personalización que te permite crear tu propia liga con tus propias reglas, equipos, jugadores y más.
-
Jugar ahora: Un modo de juego rápido que le permite saltar a un juego con cualquier equipo y cualquier configuración.
-
Juega Ahora Online: Un modo competitivo que te permite jugar online contra otros jugadores con niveles de habilidad similares.
-
-
El vecindario: un centro social que te permite interactuar con otros jugadores, personalizar tu apariencia, acceder a diferentes modos y más.
-
-
Gráficos
-
NBA 2K20 V98 ofrece impresionantes gráficos y animaciones que hacen que el juego se vea y se sienta como una transmisión real de la NBA. Algunas de las mejoras gráficas son:
-
-
Modelos de jugadores más detallados, caras, cabello, tatuajes, accesorios y uniformes.
-
Iluminación más realista, sombras, reflejos y efectos de multitud.
-
Ángulos de cámara más dinámicos, repeticiones, escenas y comentarios.
-
Más arenas auténticas, cortes, logotipos, banderas y trofeos.
-
-
Consejos y trucos para NBA 2K20 V98
-
Si quieres mejorar en NBA 2K20 V98, necesitas dominar algunos consejos y trucos que te ayudarán a mejorar tus habilidades y rendimiento. Estos son algunos consejos y trucos para NBA 2K20 V98 que debes conocer:
-
MyCareer
-
MyCareer es el modo más popular en NBA 2K20 V98, ya que le permite crear su propio jugador y vivir sus sueños de la NBA. Aquí hay algunos consejos y trucos para MyCareer:
-
-
Choose a build that suits your play style and position. You can use the MyPlayer Builder to test different combinations of attributes, skills, badges, and physical profiles before committing to a build.
-
Earn VC (Virtual Currency) by playing games, completing endorsements, answering interview questions, and taking part in events. You can use VC to upgrade your attributes and to buy clothes, accessories, animations, and more.
-
Earn MyPoints by performing well in games and practices. You can use MyPoints to unlock new badge points and raise your overall rating.
-
Pick a team that needs a player like you and has good chemistry with you. You can check the team's interest level, roster depth chart, system proficiency, and teammate grade bonus before signing a contract.
-
-
Choose the right badges for your build and play style. Badges are special abilities that give you an edge in certain situations. You can equip up to 30 badges in NBA 2K20 V98, but you first have to unlock them by earning badge points in each category (Finishing, Shooting, Playmaking, and Defense/Rebounding).
-
Use the practice facility to work on your skills and earn extra MyPoints and badge points. You can choose from various drills, games, and workouts to improve your attributes and badges.
-
Follow the story and make decisions that affect your career. NBA 2K20 V98 has a new story mode featuring famous actors such as Idris Elba, Rosario Dawson, and Thomas Middleditch. You can interact with different characters, make choices, and shape your own narrative.
-
-
MyTeam
-
MyTeam is the mode where you can collect cards of your favorite NBA players and legends and build your own fantasy team. Here are some tips and tricks for MyTeam:
-
-
Complete challenges and objectives to earn rewards. NBA 2K20 V98 has various challenges and objectives you can complete to earn MT (MyTeam Points), VC, cards, packs, tokens, and more. You can find them in the Agenda, Spotlight, Domination, Triple Threat, Unlimited, Limited, and Season modes.
-
Use the Auction House to buy and sell cards. The Auction House is where you can bid on or buy cards from other players, or sell your own cards for MT. You can use the filters and search options to find the cards you want or need.
-
Use locker codes to get free items. Locker codes are codes you can enter in the MyTeam menu to get free items such as packs, MT, VC, tokens, or players. You can find locker codes on the official NBA 2K20 social media accounts or on websites like [9].
-
-
Use the Exchange to trade your cards in for better ones. The Exchange is a new feature in NBA 2K20 V98 that lets you trade unwanted or duplicate cards for better ones. For example, you can exchange 10 Gold players for 1 Amethyst player. You can find the Exchange in the MyTeam menu.
-
-
Dribbling
-
Dribbling is one of the most important skills in NBA 2K20 V98, as it lets you create space, beat defenders, and set up plays. Here are some tips and tricks for dribbling:
-
-
Use the right stick to perform different dribble moves. You can use the right stick for basic moves such as crossovers, behind-the-back dribbles, spins, hesitations, stepbacks, and more. You can also use it for advanced moves such as size-ups, signature moves, combos, and park moves.
-
Use the left trigger to perform dribble moves with more control. You can use the left trigger for moves such as in-and-out dribbles, half spins, reverse spins, crossover-spin escapes, stepback escapes, and jump stepbacks. These moves give you more control over the direction and speed of your dribbles.
-
Use the right bumper to perform dribble moves with more speed. You can use the right bumper for moves such as quick first steps, quick stops, quick crossovers, and quick behind-the-backs. These moves give you more speed and explosiveness with your dribbles.
-
Use the sprint button to perform dribble moves with more power. You can use the sprint button for moves such as power dribbles, power spins, power crossovers, and power behind-the-backs. These moves give you more power and strength with your dribbles.
-
Use the different dribble packages to customize your dribbling style. You can choose from various dribble packages that affect your basic, advanced, signature, and park moves. You can equip different dribble packages for your MyPlayer in the MyCareer menu or for your MyTeam players in the MyTeam menu.
-
-
Shooting
-
Shooting is another essential skill in NBA 2K20 V98, as it is how you put points on the board. Here are some tips and tricks for shooting:
-
-
Use the shot meter to time your shots. The shot meter is a bar that appears beneath your player when you shoot; the goal is to release the shot button when the bar is full or nearly full. This increases your chances of making the shot.
-
Use the Flexible Release badge to improve your shot timing. Flexible Release is a shooting badge that reduces the penalty for mistimed shots, which means you can still make shots even if you release the shot button a little too early or too late.
-
Use the green release indicator to confirm your shot timing. The green release indicator is a visual cue that appears above your player's head when you release the shot button at the perfect moment. It means you have a 100% chance of making the shot.
-
Use the right stick to aim your shots. You can use the right stick to adjust the angle and direction of your shots, and also to perform different shot types such as layups, floaters, hooks, fadeaways, and more.
-
Use the different shooting badges to improve your shooting skills. You can choose from various shooting badges that affect your catch-and-shoot, off-the-dribble, contested, long-range, mid-range, free-throw, and corner shots. You can equip different shooting badges for your MyPlayer in the MyCareer menu or for your MyTeam players in the MyTeam menu.
-
-
Defense
-
Defense is the final skill you need to master in NBA 2K20 V98, as it lets you stop your opponents and force turnovers. Here are some tips and tricks for defense:
-
-
Use the left stick to move your defender. You can use the left stick to position yourself in front of or behind your opponent, depending on the situation. You can also use it to slide laterally or pivot around your opponent.
-
Use the left trigger to get into a defensive stance. You can use the left trigger to lower your center of gravity and increase your lateral speed and agility. This helps you stay in front of your opponent and react to their moves.
-
Use the right bumper to call for help or switch defenders. You can use the right bumper to request a double team, a hedge, a switch, or a rotation from your teammates. You can also use it to switch to the nearest defender or to the defender closest to the ball.
-
Use the different defensive badges to improve your defensive skills. You can choose from various defensive badges that affect your perimeter, interior, rebounding, steal, block, and transition defense. You can equip different defensive badges for your MyPlayer in the MyCareer menu or for your MyTeam players in the MyTeam menu.
-
-
Reviews of NBA 2K20 V98
-
NBA 2K20 V98 is a highly rated game that has received positive feedback from critics and players alike. Here are some of the points reviewers of NBA 2K20 V98 raise most often:
-
Pros
-
Some of the positive aspects of NBA 2K20 V98 are:
-
-
It delivers a realistic and immersive basketball simulation experience that captures the essence of the NBA.
-
It has a variety of modes that suit different play styles and preferences.
-
It has impressive graphics and animations that make the game look and feel like a real NBA broadcast.
-
It has a new and improved gameplay system that adds more depth and variety to the on-court action.
-
It has a new and engaging story mode featuring famous actors and celebrities.
-
It is free to download and play on Android and iOS devices.
-
-
Cons
-
Some of the negative aspects of NBA 2K20 V98 are:
-
-
It requires a lot of storage space and system resources to run smoothly on your device.
-
It may have some bugs and technical issues that can affect your gameplay experience.
-
It may have some microtransactions and ads that can interfere with your enjoyment of the game.
-
It may have some online issues such as lag, disconnections, or cheaters that can ruin your online experience.
-
It may have some balance issues, such as overpowered or underpowered players, teams, or badges, that can affect the fairness of the game.
-
-
Verdict
-
NBA 2K20 V98 is worth downloading and playing if you are a fan of basketball games or the NBA 2K series. It offers a realistic and immersive basketball simulation experience with various modes, features, and improvements, along with impressive graphics and animations and a new, engaging story mode. It is free to download and play on Android and iOS devices, which is a great deal for a high-quality game. However, it also has some drawbacks: it requires a lot of storage space and system resources, and it may have bugs and technical issues, microtransactions and ads, online problems, and balance issues. You should be aware of these potential problems before downloading and playing NBA 2K20 V98.
-
Conclusion
-
-
Frequently asked questions
-
Here are some frequently asked questions about NBA 2K20 V98:
-
-
Q: Is NBA 2K20 V98 safe to download and play?
-A: NBA 2K20 V98 is safe to download and play as long as you use the official link from [8]. However, watch out for fake or malicious links that could harm your device or steal your information.
-
Q: Is NBA 2K20 V98 compatible with my device?
-A: NBA 2K20 V98 is compatible with most Android and iOS devices that meet the minimum system requirements. You can check the requirements in this article or on the official NBA 2K20 website.
-
Q: How do I update NBA 2K20 V98?
-A: NBA 2K20 V98 updates automatically when you launch the game. However, you may need to download and install the latest APK and OBB files manually if there are major updates or changes.
-
-🔥 **AnimeVideo-v3 model (动漫视频小模型)**. Please see [[*anime video models*](docs/anime_video_model.md)] and [[*comparisons*](docs/anime_comparisons.md)]
-🔥 **RealESRGAN_x4plus_anime_6B** for anime images **(动漫插图模型)**. Please see [[*anime_model*](docs/anime_model.md)]
-
-
-1. :boom: **Update** online Replicate demo: [Replicate](https://replicate.com/xinntao/realesrgan)
-1. Online Colab demo for Real-ESRGAN: [Colab](https://colab.research.google.com/drive/1k2Zod6kSHEvraybHl50Lys0LerhyTMCo?usp=sharing) **|** Online Colab demo for Real-ESRGAN (**anime videos**): [Colab](https://colab.research.google.com/drive/1yNl9ORUxxlL4N0keJa2SEPB61imPQd1B?usp=sharing)
-1. Portable [Windows](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-windows.zip) / [Linux](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-ubuntu.zip) / [MacOS](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-macos.zip) **executable files for Intel/AMD/Nvidia GPU**. You can find more information [here](#portable-executable-files-ncnn). The ncnn implementation is in [Real-ESRGAN-ncnn-vulkan](https://github.com/xinntao/Real-ESRGAN-ncnn-vulkan)
-
-
-Real-ESRGAN aims at developing **Practical Algorithms for General Image/Video Restoration**.
-We extend the powerful ESRGAN to a practical restoration application (namely, Real-ESRGAN), which is trained with pure synthetic data.
-
-🌌 Thanks for your valuable feedback/suggestions. All feedback is tracked in [feedback.md](docs/feedback.md).
-
----
-
-If Real-ESRGAN is helpful, please help to ⭐ this repo or recommend it to your friends 😊
-Other recommended projects:
-▶️ [GFPGAN](https://github.com/TencentARC/GFPGAN): A practical algorithm for real-world face restoration
-▶️ [BasicSR](https://github.com/xinntao/BasicSR): An open-source image and video restoration toolbox
-▶️ [facexlib](https://github.com/xinntao/facexlib): A collection that provides useful face-related functions.
-▶️ [HandyView](https://github.com/xinntao/HandyView): A PyQt5-based image viewer that is handy for viewing and comparison
-▶️ [HandyFigure](https://github.com/xinntao/HandyFigure): Open source of paper figures
-
----
-
-### 📖 Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data
-
-> [[Paper](https://arxiv.org/abs/2107.10833)] [[YouTube Video](https://www.youtube.com/watch?v=fxHWoDSSvSc)] [[B站讲解](https://www.bilibili.com/video/BV1H34y1m7sS/)] [[Poster](https://xinntao.github.io/projects/RealESRGAN_src/RealESRGAN_poster.pdf)] [[PPT slides](https://docs.google.com/presentation/d/1QtW6Iy8rm8rGLsJ0Ldti6kP-7Qyzy6XL/edit?usp=sharing&ouid=109799856763657548160&rtpof=true&sd=true)]
-> [Xintao Wang](https://xinntao.github.io/), Liangbin Xie, [Chao Dong](https://scholar.google.com.hk/citations?user=OSDCB0UAAAAJ), [Ying Shan](https://scholar.google.com/citations?user=4oXBp9UAAAAJ&hl=en)
-> [Tencent ARC Lab](https://arc.tencent.com/en/ai-demos/imgRestore); Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences
-
-
-
-
-
----
-
-
-## 🚩 Updates
-
-- ✅ Add the **realesr-general-x4v3** model - a tiny model for general scenes. It also supports the **--dn** option to balance the noise (avoiding over-smooth results). **--dn** is short for denoising strength.
-- ✅ Update the **RealESRGAN AnimeVideo-v3** model. Please see [anime video models](docs/anime_video_model.md) and [comparisons](docs/anime_comparisons.md) for more details.
-- ✅ Add small models for anime videos. More details are in [anime video models](docs/anime_video_model.md).
-- ✅ Add the ncnn implementation [Real-ESRGAN-ncnn-vulkan](https://github.com/xinntao/Real-ESRGAN-ncnn-vulkan).
-- ✅ Add [*RealESRGAN_x4plus_anime_6B.pth*](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth), which is optimized for **anime** images with much smaller model size. More details and comparisons with [waifu2x](https://github.com/nihui/waifu2x-ncnn-vulkan) are in [**anime_model.md**](docs/anime_model.md)
-- ✅ Support finetuning on your own data or paired data (*i.e.*, finetuning ESRGAN). See [here](docs/Training.md#Finetune-Real-ESRGAN-on-your-own-dataset)
-- ✅ Integrate [GFPGAN](https://github.com/TencentARC/GFPGAN) to support **face enhancement**.
-- ✅ Integrated to [Huggingface Spaces](https://huggingface.co/spaces) with [Gradio](https://github.com/gradio-app/gradio). See [Gradio Web Demo](https://huggingface.co/spaces/akhaliq/Real-ESRGAN). Thanks [@AK391](https://github.com/AK391)
-- ✅ Support arbitrary scale with `--outscale` (It actually further resizes outputs with `LANCZOS4`). Add *RealESRGAN_x2plus.pth* model.
-- ✅ [The inference code](inference_realesrgan.py) supports: 1) **tile** options; 2) images with **alpha channel**; 3) **gray** images; 4) **16-bit** images.
-- ✅ The training codes have been released. A detailed guide can be found in [Training.md](docs/Training.md).
-
----
-
-
-## 👀 Demo Videos
-
-#### Bilibili
-
-- [大闹天宫片段](https://www.bilibili.com/video/BV1ja41117zb)
-- [Anime dance cut 动漫魔性舞蹈](https://www.bilibili.com/video/BV1wY4y1L7hT/)
-- [海贼王片段](https://www.bilibili.com/video/BV1i3411L7Gy/)
-
-## 🔧 Dependencies and Installation
-
-- Python >= 3.7 (Recommend to use [Anaconda](https://www.anaconda.com/download/#linux) or [Miniconda](https://docs.conda.io/en/latest/miniconda.html))
-- [PyTorch >= 1.7](https://pytorch.org/)
-
-### Installation
-
-1. Clone repo
-
- ```bash
- git clone https://github.com/xinntao/Real-ESRGAN.git
- cd Real-ESRGAN
- ```
-
-1. Install dependent packages
-
- ```bash
- # Install basicsr - https://github.com/xinntao/BasicSR
- # We use BasicSR for both training and inference
- pip install basicsr
- # facexlib and gfpgan are for face enhancement
- pip install facexlib
- pip install gfpgan
- pip install -r requirements.txt
- python setup.py develop
- ```
-
----
-
-## ⚡ Quick Inference
-
-There are usually three ways to run inference with Real-ESRGAN.
-
-1. [Online inference](#online-inference)
-1. [Portable executable files (NCNN)](#portable-executable-files-ncnn)
-1. [Python script](#python-script)
-
-### Online inference
-
-1. You can try in our website: [ARC Demo](https://arc.tencent.com/en/ai-demos/imgRestore) (now only support RealESRGAN_x4plus_anime_6B)
-1. [Colab Demo](https://colab.research.google.com/drive/1k2Zod6kSHEvraybHl50Lys0LerhyTMCo?usp=sharing) for Real-ESRGAN **|** [Colab Demo](https://colab.research.google.com/drive/1yNl9ORUxxlL4N0keJa2SEPB61imPQd1B?usp=sharing) for Real-ESRGAN (**anime videos**).
-
-### Portable executable files (NCNN)
-
-You can download [Windows](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-windows.zip) / [Linux](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-ubuntu.zip) / [MacOS](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-macos.zip) **executable files for Intel/AMD/Nvidia GPU**.
-
-This executable file is **portable** and includes all the binaries and models required. No CUDA or PyTorch environment is needed.
-
-You can simply run the following command (this is the Windows example; more information is in the README.md of each executable file):
-
-```bash
-./realesrgan-ncnn-vulkan.exe -i input.jpg -o output.png -n model_name
-```
-
-We have provided five models:
-
-1. realesrgan-x4plus (default)
-2. realesrnet-x4plus
-3. realesrgan-x4plus-anime (optimized for anime images, small model size)
-4. realesr-animevideov3 (animation video)
-5. realesr-general-x4v3 (general scenes; see the updates list above)
-
-You can use the `-n` argument for other models, for example, `./realesrgan-ncnn-vulkan.exe -i input.jpg -o output.png -n realesrnet-x4plus`
-
-#### Usage of portable executable files
-
-1. Please refer to [Real-ESRGAN-ncnn-vulkan](https://github.com/xinntao/Real-ESRGAN-ncnn-vulkan#computer-usages) for more details.
-1. Note that it does not support all the functions (such as `outscale`) of the Python script `inference_realesrgan.py`.
-
-```console
-Usage: realesrgan-ncnn-vulkan.exe -i infile -o outfile [options]...
-
- -h show this help
- -i input-path input image path (jpg/png/webp) or directory
- -o output-path output image path (jpg/png/webp) or directory
- -s scale upscale ratio (can be 2, 3, 4. default=4)
- -t tile-size tile size (>=32/0=auto, default=0) can be 0,0,0 for multi-gpu
- -m model-path folder path to the pre-trained models. default=models
- -n model-name model name (default=realesr-animevideov3, can be realesr-animevideov3 | realesrgan-x4plus | realesrgan-x4plus-anime | realesrnet-x4plus)
- -g gpu-id gpu device to use (default=auto) can be 0,1,2 for multi-gpu
- -j load:proc:save thread count for load/proc/save (default=1:2:2) can be 1:2,2,2:2 for multi-gpu
-  -x                   enable tta mode
- -f format output image format (jpg/png/webp, default=ext/png)
- -v verbose output
-```
-
-Note that it may introduce block inconsistency (and also generate slightly different results from the PyTorch implementation), because this executable file first crops the input image into several tiles, processes them separately, and finally stitches them back together.
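-
-The tile-and-stitch behaviour behind that note can be sketched in a few lines of Python. This is an illustrative sketch only, with a stand-in `upscale` function instead of the real network: each tile is upscaled independently and pasted back, which is exactly where seams between neighbouring tiles can appear.
-
-```python
-import numpy as np
-
-def upscale(tile: np.ndarray, scale: int = 4) -> np.ndarray:
-    """Stand-in for the real super-resolution model (hypothetical)."""
-    return tile.repeat(scale, axis=0).repeat(scale, axis=1)
-
-def tiled_upscale(img: np.ndarray, tile_size: int = 32, scale: int = 4) -> np.ndarray:
-    h, w, c = img.shape
-    out = np.zeros((h * scale, w * scale, c), dtype=img.dtype)
-    for y in range(0, h, tile_size):
-        for x in range(0, w, tile_size):
-            tile = img[y:y + tile_size, x:x + tile_size]
-            # Each tile is processed on its own, so adjacent tiles can
-            # disagree slightly along their shared border.
-            out[y * scale:(y + tile.shape[0]) * scale,
-                x * scale:(x + tile.shape[1]) * scale] = upscale(tile, scale)
-    return out
-```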
-
-### Python script
-
-#### Usage of python script
-
-1. You can use the X4 model for **arbitrary output size** with the argument `outscale`. The program will perform a further cheap resize operation after the Real-ESRGAN output.
-
-```console
-Usage: python inference_realesrgan.py -n RealESRGAN_x4plus -i infile -o outfile [options]...
-
-A common command: python inference_realesrgan.py -n RealESRGAN_x4plus -i infile --outscale 3.5 --face_enhance
-
- -h show this help
- -i --input Input image or folder. Default: inputs
- -o --output Output folder. Default: results
- -n --model_name Model name. Default: RealESRGAN_x4plus
- -s, --outscale The final upsampling scale of the image. Default: 4
- --suffix Suffix of the restored image. Default: out
- -t, --tile Tile size, 0 for no tile during testing. Default: 0
- --face_enhance Whether to use GFPGAN to enhance face. Default: False
- --fp32 Use fp32 precision during inference. Default: fp16 (half precision).
- --ext Image extension. Options: auto | jpg | png, auto means using the same extension as inputs. Default: auto
-```
-
-#### Inference general images
-
-Download pre-trained models: [RealESRGAN_x4plus.pth](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth)
-
-```bash
-wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth -P weights
-```
-
-Inference!
-
-```bash
-python inference_realesrgan.py -n RealESRGAN_x4plus -i inputs --face_enhance
-```
-
-Results are in the `results` folder
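-
-You can also call Real-ESRGAN from your own Python code. The snippet below is a minimal sketch based on the logic of `inference_realesrgan.py`; the input path and some constructor arguments are assumptions and may differ between versions, so treat it as a starting point rather than a guaranteed API.
-
-```python
-import cv2
-from basicsr.archs.rrdbnet_arch import RRDBNet
-from realesrgan import RealESRGANer
-
-# Build the x4 RRDBNet backbone and wrap it in the upsampler helper.
-model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64,
-                num_block=23, num_grow_ch=32, scale=4)
-upsampler = RealESRGANer(
-    scale=4,
-    model_path='weights/RealESRGAN_x4plus.pth',
-    model=model,
-    tile=0,      # set > 0 to process large images tile by tile
-    half=True)   # fp16 inference; set False on CPU
-
-img = cv2.imread('inputs/your_image.png', cv2.IMREAD_COLOR)  # hypothetical input path
-output, _ = upsampler.enhance(img, outscale=3.5)
-cv2.imwrite('results/your_image_out.png', output)
-```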
-
-#### Inference anime images
-
-
-
-
-
-Pre-trained models: [RealESRGAN_x4plus_anime_6B](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth)
- More details and comparisons with [waifu2x](https://github.com/nihui/waifu2x-ncnn-vulkan) are in [**anime_model.md**](docs/anime_model.md)
-
-```bash
-# download model
-wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth -P weights
-# inference
-python inference_realesrgan.py -n RealESRGAN_x4plus_anime_6B -i inputs
-```
-
-Results are in the `results` folder
-
----
-
-## BibTeX
-
- @InProceedings{wang2021realesrgan,
- author = {Xintao Wang and Liangbin Xie and Chao Dong and Ying Shan},
- title = {Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data},
- booktitle = {International Conference on Computer Vision Workshops (ICCVW)},
-        year = {2021}
- }
-
-## 📧 Contact
-
-If you have any questions, please email `xintao.wang@outlook.com` or `xintaowang@tencent.com`.
-
-
-## 🧩 Projects that use Real-ESRGAN
-
-If you develop or use Real-ESRGAN in your projects, feel free to let me know.
-
-- NCNN-Android: [RealSR-NCNN-Android](https://github.com/tumuyan/RealSR-NCNN-Android) by [tumuyan](https://github.com/tumuyan)
-- VapourSynth: [vs-realesrgan](https://github.com/HolyWu/vs-realesrgan) by [HolyWu](https://github.com/HolyWu)
-- NCNN: [Real-ESRGAN-ncnn-vulkan](https://github.com/xinntao/Real-ESRGAN-ncnn-vulkan)
-
- **GUI**
-
-- [Waifu2x-Extension-GUI](https://github.com/AaronFeng753/Waifu2x-Extension-GUI) by [AaronFeng753](https://github.com/AaronFeng753)
-- [Squirrel-RIFE](https://github.com/Justin62628/Squirrel-RIFE) by [Justin62628](https://github.com/Justin62628)
-- [Real-GUI](https://github.com/scifx/Real-GUI) by [scifx](https://github.com/scifx)
-- [Real-ESRGAN_GUI](https://github.com/net2cn/Real-ESRGAN_GUI) by [net2cn](https://github.com/net2cn)
-- [Real-ESRGAN-EGUI](https://github.com/WGzeyu/Real-ESRGAN-EGUI) by [WGzeyu](https://github.com/WGzeyu)
-- [anime_upscaler](https://github.com/shangar21/anime_upscaler) by [shangar21](https://github.com/shangar21)
-- [Upscayl](https://github.com/upscayl/upscayl) by [Nayam Amarshe](https://github.com/NayamAmarshe) and [TGS963](https://github.com/TGS963)
-
-## 🤗 Acknowledgement
-
-Thanks for all the contributors.
-
-- [AK391](https://github.com/AK391): Integrate RealESRGAN to [Huggingface Spaces](https://huggingface.co/spaces) with [Gradio](https://github.com/gradio-app/gradio). See [Gradio Web Demo](https://huggingface.co/spaces/akhaliq/Real-ESRGAN).
-- [Asiimoviet](https://github.com/Asiimoviet): Translate the README.md to Chinese (中文).
-- [2ji3150](https://github.com/2ji3150): Thanks for the [detailed and valuable feedbacks/suggestions](https://github.com/xinntao/Real-ESRGAN/issues/131).
-- [Jared-02](https://github.com/Jared-02): Translate the Training.md to Chinese (中文).
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/_base_/datasets/deepfashion.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/_base_/datasets/deepfashion.py
deleted file mode 100644
index 308b4b2ac4d9e3516ba4a57e9d3b6af91e97f24b..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/_base_/datasets/deepfashion.py
+++ /dev/null
@@ -1,53 +0,0 @@
-# dataset settings
-dataset_type = 'DeepFashionDataset'
-data_root = 'data/DeepFashion/In-shop/'
-img_norm_cfg = dict(
- mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
- dict(type='Resize', img_scale=(750, 1101), keep_ratio=True),
- dict(type='RandomFlip', flip_ratio=0.5),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']),
-]
-test_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='MultiScaleFlipAug',
- img_scale=(750, 1101),
- flip=False,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='ImageToTensor', keys=['img']),
- dict(type='Collect', keys=['img']),
- ])
-]
-data = dict(
- imgs_per_gpu=2,
- workers_per_gpu=1,
- train=dict(
- type=dataset_type,
- ann_file=data_root + 'annotations/DeepFashion_segmentation_query.json',
- img_prefix=data_root + 'Img/',
- pipeline=train_pipeline,
- data_root=data_root),
- val=dict(
- type=dataset_type,
- ann_file=data_root + 'annotations/DeepFashion_segmentation_query.json',
- img_prefix=data_root + 'Img/',
- pipeline=test_pipeline,
- data_root=data_root),
- test=dict(
- type=dataset_type,
- ann_file=data_root +
- 'annotations/DeepFashion_segmentation_gallery.json',
- img_prefix=data_root + 'Img/',
- pipeline=test_pipeline,
- data_root=data_root))
-evaluation = dict(interval=5, metric=['bbox', 'segm'])
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/detectors/trident_faster_rcnn.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/detectors/trident_faster_rcnn.py
deleted file mode 100644
index f0fd80d41407162df71ba5349fc659d4713cdb6e..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/detectors/trident_faster_rcnn.py
+++ /dev/null
@@ -1,66 +0,0 @@
-from ..builder import DETECTORS
-from .faster_rcnn import FasterRCNN
-
-
-@DETECTORS.register_module()
-class TridentFasterRCNN(FasterRCNN):
- """Implementation of `TridentNet `_"""
-
- def __init__(self,
- backbone,
- rpn_head,
- roi_head,
- train_cfg,
- test_cfg,
- neck=None,
- pretrained=None):
-
- super(TridentFasterRCNN, self).__init__(
- backbone=backbone,
- neck=neck,
- rpn_head=rpn_head,
- roi_head=roi_head,
- train_cfg=train_cfg,
- test_cfg=test_cfg,
- pretrained=pretrained)
- assert self.backbone.num_branch == self.roi_head.num_branch
- assert self.backbone.test_branch_idx == self.roi_head.test_branch_idx
- self.num_branch = self.backbone.num_branch
- self.test_branch_idx = self.backbone.test_branch_idx
-
-    def simple_test(self, img, img_metas, proposals=None, rescale=False):
-        """Test without augmentation."""
-        assert self.with_bbox, 'Bbox head must be implemented.'
-        x = self.extract_feat(img)
-        # Compute the replicated metas before branching so that they are also
-        # available when pre-computed proposals are passed in.
-        num_branch = (self.num_branch if self.test_branch_idx == -1 else 1)
-        trident_img_metas = img_metas * num_branch
-        if proposals is None:
-            proposal_list = self.rpn_head.simple_test_rpn(x, trident_img_metas)
-        else:
-            proposal_list = proposals
-
-        return self.roi_head.simple_test(
-            x, proposal_list, trident_img_metas, rescale=rescale)
-
- def aug_test(self, imgs, img_metas, rescale=False):
- """Test with augmentations.
-
- If rescale is False, then returned bboxes and masks will fit the scale
- of imgs[0].
- """
- x = self.extract_feats(imgs)
- num_branch = (self.num_branch if self.test_branch_idx == -1 else 1)
-        trident_img_metas = [img_meta * num_branch for img_meta in img_metas]
- proposal_list = self.rpn_head.aug_test_rpn(x, trident_img_metas)
- return self.roi_head.aug_test(
- x, proposal_list, img_metas, rescale=rescale)
-
- def forward_train(self, img, img_metas, gt_bboxes, gt_labels, **kwargs):
- """make copies of img and gts to fit multi-branch."""
- trident_gt_bboxes = tuple(gt_bboxes * self.num_branch)
- trident_gt_labels = tuple(gt_labels * self.num_branch)
- trident_img_metas = tuple(img_metas * self.num_branch)
-
- return super(TridentFasterRCNN,
- self).forward_train(img, trident_img_metas,
- trident_gt_bboxes, trident_gt_labels)
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/unet/fcn_unet_s5-d16_256x256_40k_hrf.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/unet/fcn_unet_s5-d16_256x256_40k_hrf.py
deleted file mode 100644
index be8eec77792f4eb16475dc5ab8607fb5682f0acf..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/unet/fcn_unet_s5-d16_256x256_40k_hrf.py
+++ /dev/null
@@ -1,6 +0,0 @@
-_base_ = [
- '../_base_/models/fcn_unet_s5-d16.py', '../_base_/datasets/hrf.py',
- '../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py'
-]
-model = dict(test_cfg=dict(crop_size=(256, 256), stride=(170, 170)))
-evaluation = dict(metric='mDice')
diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/__init__.py b/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/__init__.py
deleted file mode 100644
index 6ab346075f1b35366e7231054513097b87552c6f..0000000000000000000000000000000000000000
--- a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/__init__.py
+++ /dev/null
@@ -1,26 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-"""
-AudioCraft is a general framework for training audio generative models.
-At the moment we provide the training code for:
-
-- [MusicGen](https://arxiv.org/abs/2306.05284), a state-of-the-art
- text-to-music and melody+text autoregressive generative model.
- For the solver, see `audiocraft.solvers.musicgen.MusicGenSolver`, and for the model,
- `audiocraft.models.musicgen.MusicGen`.
-- [AudioGen](https://arxiv.org/abs/2209.15352), a state-of-the-art
- text-to-general-audio generative model.
-- [EnCodec](https://arxiv.org/abs/2210.13438), efficient and high fidelity
- neural audio codec which provides an excellent tokenizer for autoregressive language models.
- See `audiocraft.solvers.compression.CompressionSolver`, and `audiocraft.models.encodec.EncodecModel`.
-- [MultiBandDiffusion](TODO), alternative diffusion-based decoder compatible with EnCodec that
- improves the perceived quality and reduces the artifacts coming from adversarial decoders.
-"""
-
-# flake8: noqa
-from . import data, modules, models
-
-__version__ = '1.0.0'
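-
-# A minimal usage sketch (illustrative only; the model name and generation
-# parameters below are assumptions based on the public MusicGen examples):
-#
-#   from audiocraft.models import MusicGen
-#   from audiocraft.data.audio import audio_write
-#
-#   model = MusicGen.get_pretrained('facebook/musicgen-small')
-#   model.set_generation_params(duration=8)  # seconds of audio to generate
-#   wav = model.generate(['lo-fi beat with a mellow piano'])  # (batch, ch, time)
-#   audio_write('sample', wav[0].cpu(), model.sample_rate, strategy='loudness')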
diff --git a/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/audiocraft/utils/utils.py b/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/audiocraft/utils/utils.py
deleted file mode 100644
index 86e1448d065fa182ca69aae00d2f2a7eea55d8a4..0000000000000000000000000000000000000000
--- a/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/audiocraft/utils/utils.py
+++ /dev/null
@@ -1,234 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from concurrent.futures import ProcessPoolExecutor
-from functools import wraps
-import hashlib
-import logging
-import typing as tp
-
-import flashy
-import flashy.distrib
-import omegaconf
-import torch
-from torch.nn.utils.rnn import pad_sequence
-
-
-logger = logging.getLogger(__name__)
-
-
-def dict_from_config(cfg: omegaconf.DictConfig) -> dict:
- """Convenience function to map an omegaconf configuration to a dictionary.
-
- Args:
- cfg (omegaconf.DictConfig): Original configuration to map to dict.
- Returns:
- dict: Config as dictionary object.
- """
- dct = omegaconf.OmegaConf.to_container(cfg, resolve=True)
- assert isinstance(dct, dict)
- return dct
-
-
-def random_subset(dataset, max_samples: int, seed: int = 42) -> torch.utils.data.Subset:
- if max_samples >= len(dataset):
- return dataset
-
- generator = torch.Generator().manual_seed(seed)
- perm = torch.randperm(len(dataset), generator=generator)
- return torch.utils.data.Subset(dataset, perm[:max_samples].tolist())
-
-
-def get_loader(dataset, num_samples: tp.Optional[int], batch_size: int,
- num_workers: int, seed: int, **kwargs) -> torch.utils.data.DataLoader:
- """Convenience function to load dataset into a dataloader with optional subset sampling.
-
- Args:
- dataset: Dataset to load.
- num_samples (Optional[int]): Number of samples to limit subset size.
- batch_size (int): Batch size.
- num_workers (int): Number of workers for data loading.
- seed (int): Random seed.
- """
- if num_samples is not None:
- dataset = random_subset(dataset, num_samples, seed)
-
- dataloader = flashy.distrib.loader(
- dataset,
- batch_size=batch_size,
- num_workers=num_workers,
- **kwargs
- )
- return dataloader
-
-
-def get_dataset_from_loader(dataloader):
- dataset = dataloader.dataset
- if isinstance(dataset, torch.utils.data.Subset):
- return dataset.dataset
- else:
- return dataset
-
-
-def multinomial(input: torch.Tensor, num_samples: int, replacement=False, *, generator=None):
- """torch.multinomial with arbitrary number of dimensions, and number of candidates on the last dimension.
-
- Args:
- input (torch.Tensor): The input tensor containing probabilities.
- num_samples (int): Number of samples to draw.
- replacement (bool): Whether to draw with replacement or not.
- Keywords args:
- generator (torch.Generator): A pseudorandom number generator for sampling.
- Returns:
- torch.Tensor: Last dimension contains num_samples indices
- sampled from the multinomial probability distribution
- located in the last dimension of tensor input.
- """
- input_ = input.reshape(-1, input.shape[-1])
- output_ = torch.multinomial(input_, num_samples=num_samples, replacement=replacement, generator=generator)
- output = output_.reshape(*list(input.shape[:-1]), -1)
- return output
-
-
-def sample_top_k(probs: torch.Tensor, k: int) -> torch.Tensor:
- """Sample next token from top K values along the last dimension of the input probs tensor.
-
- Args:
- probs (torch.Tensor): Input probabilities with token candidates on the last dimension.
- k (int): The k in “top-k”.
- Returns:
- torch.Tensor: Sampled tokens.
- """
- top_k_value, _ = torch.topk(probs, k, dim=-1)
- min_value_top_k = top_k_value[..., [-1]]
- probs *= (probs >= min_value_top_k).float()
- probs.div_(probs.sum(dim=-1, keepdim=True))
- next_token = multinomial(probs, num_samples=1)
- return next_token
-
-
-def sample_top_p(probs: torch.Tensor, p: float) -> torch.Tensor:
- """Sample next token from top P probabilities along the last dimension of the input probs tensor.
-
- Args:
- probs (torch.Tensor): Input probabilities with token candidates on the last dimension.
-        p (float): The p in “top-p”.
- Returns:
- torch.Tensor: Sampled tokens.
- """
- probs_sort, probs_idx = torch.sort(probs, dim=-1, descending=True)
- probs_sum = torch.cumsum(probs_sort, dim=-1)
- mask = probs_sum - probs_sort > p
- probs_sort *= (~mask).float()
- probs_sort.div_(probs_sort.sum(dim=-1, keepdim=True))
- next_token = multinomial(probs_sort, num_samples=1)
- next_token = torch.gather(probs_idx, -1, next_token)
- return next_token
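-
-# Illustrative usage of the two sampling helpers above (hypothetical values;
-# any tensor with probabilities on the last dimension works):
-#
-#   probs = torch.softmax(torch.randn(2, 100), dim=-1)
-#   tok_p = sample_top_p(probs, p=0.9)          # (2, 1) indices from the 0.9 nucleus
-#   tok_k = sample_top_k(probs.clone(), k=10)   # (2, 1) indices from the top 10
-#   # .clone() matters for top-k: it renormalizes `probs` in place.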
-
-
-class DummyPoolExecutor:
- """Dummy pool executor to use when we actually have only 1 worker.
- (e.g. instead of ProcessPoolExecutor).
- """
- class DummyResult:
- def __init__(self, func, *args, **kwargs):
- self.func = func
- self.args = args
- self.kwargs = kwargs
-
- def result(self):
- return self.func(*self.args, **self.kwargs)
-
- def __init__(self, workers, mp_context=None):
- pass
-
- def submit(self, func, *args, **kwargs):
- return DummyPoolExecutor.DummyResult(func, *args, **kwargs)
-
- def __enter__(self):
- return self
-
- def __exit__(self, exc_type, exc_value, exc_tb):
- return
-
-
-def get_pool_executor(num_workers: int, mp_context=None):
- return ProcessPoolExecutor(num_workers, mp_context) if num_workers > 1 else DummyPoolExecutor(1)
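-
-# Illustrative usage (hypothetical): with a single worker the call transparently
-# falls back to in-process execution through DummyPoolExecutor.
-#
-#   with get_pool_executor(num_workers=1) as pool:
-#       future = pool.submit(len, "abc")
-#       assert future.result() == 3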
-
-
-def length_to_mask(lengths: torch.Tensor, max_len: tp.Optional[int] = None) -> torch.Tensor:
- """Utility function to convert a tensor of sequence lengths to a mask (useful when working on padded sequences).
- For example: [3, 5] => [[1, 1, 1, 0, 0], [1, 1, 1, 1, 1]]
-
- Args:
- lengths (torch.Tensor): tensor with lengths
- max_len (int): can set the max length manually. Defaults to None.
- Returns:
- torch.Tensor: mask with 0s where there is pad tokens else 1s
- """
- assert len(lengths.shape) == 1, "Length shape should be 1 dimensional."
- final_length = lengths.max().item() if not max_len else max_len
- final_length = max(final_length, 1) # if all seqs are of len zero we don't want a zero-size tensor
- return torch.arange(final_length)[None, :].to(lengths.device) < lengths[:, None]
-
-
-def hash_trick(word: str, vocab_size: int) -> int:
- """Hash trick to pair each word with an index
-
- Args:
- word (str): word we wish to convert to an index
- vocab_size (int): size of the vocabulary
- Returns:
- int: index of the word in the embedding LUT
- """
- hash = int(hashlib.sha256(word.encode("utf-8")).hexdigest(), 16)
- return hash % vocab_size
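-
-# Illustrative example (hypothetical word): the mapping is deterministic, so a
-# given word always lands in the same embedding slot.
-#
-#   idx = hash_trick("guitar", vocab_size=1000)  # stable int in [0, 1000)
-#   assert idx == hash_trick("guitar", vocab_size=1000)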
-
-
-def with_rank_rng(base_seed: int = 1234):
- """Decorator for a function so that the function will use a Random Number Generator
- whose state depend on the GPU rank. The original RNG state is restored upon returning.
-
- Args:
- base_seed (int): Random seed.
- """
- def _decorator(fun: tp.Callable):
- @wraps(fun)
- def _decorated(*args, **kwargs):
- state = torch.get_rng_state()
- seed = base_seed ^ flashy.distrib.rank()
- torch.manual_seed(seed)
- logger.debug('Rank dependent seed set to %d', seed)
- try:
- return fun(*args, **kwargs)
- finally:
- torch.set_rng_state(state)
- logger.debug('RNG state restored.')
- return _decorated
- return _decorator
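-
-# Illustrative usage (hypothetical function): each distributed rank draws
-# different noise, and the global RNG state is restored afterwards.
-#
-#   @with_rank_rng(base_seed=1234)
-#   def sample_noise():
-#       return torch.randn(4)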
-
-
-def collate(tensors: tp.List[torch.Tensor], dim: int = 0) -> tp.Tuple[torch.Tensor, torch.Tensor]:
- """Get a list of tensors and collate them to a single tensor. according to the following logic:
- - `dim` specifies the time dimension which will be stacked and padded.
- - The output will contain 1 new dimension (dimension index 0) which will be the size of
- of the original list.
-
- Args:
- tensors (tp.List[torch.Tensor]): List of tensors to collate.
- dim (int): Dimension which will be stacked and padded.
- Returns:
- tp.Tuple[torch.Tensor, torch.Tensor]:
- torch.Tensor: Stacked and padded tensor. The output will contain 1 new dimension
- (dimension index 0) which will be the size of the original list.
- torch.Tensor: Tensor containing length of original tensor sizes (without padding).
- """
- tensors = [x.transpose(0, dim) for x in tensors]
- lens = torch.LongTensor([len(x) for x in tensors])
- padded_tensors = pad_sequence(tensors)
- padded_tensors = padded_tensors.transpose(0, 1)
- padded_tensors = padded_tensors.transpose(1, dim + 1)
- return padded_tensors, lens
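-
-# Illustrative example (hypothetical shapes): two sequences of different
-# lengths are stacked into one padded batch plus a tensor of original lengths.
-#
-#   a, b = torch.randn(3, 2), torch.randn(5, 2)
-#   batch, lens = collate([a, b], dim=0)
-#   # batch.shape == (2, 5, 2); lens.tolist() == [3, 5]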
diff --git a/spaces/HaHaBill/LandShapes-Antarctica/models/biggan/pytorch_biggan/scripts/download_tf_hub_models.sh b/spaces/HaHaBill/LandShapes-Antarctica/models/biggan/pytorch_biggan/scripts/download_tf_hub_models.sh
deleted file mode 100644
index 57655fbd4b77791f03d72b3dfeb3bbb89ccc2fdc..0000000000000000000000000000000000000000
--- a/spaces/HaHaBill/LandShapes-Antarctica/models/biggan/pytorch_biggan/scripts/download_tf_hub_models.sh
+++ /dev/null
@@ -1,21 +0,0 @@
-# Copyright (c) 2019-present, Thomas Wolf, Huggingface Inc.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-#
-
-set -e
-set -x
-
-models="128 256 512"
-
-mkdir -p models/model_128
-mkdir -p models/model_256
-mkdir -p models/model_512
-
-# Download TF Hub models.
-for model in $models
-do
- curl -L "https://tfhub.dev/deepmind/biggan-deep-$model/1?tf-hub-format=compressed" | tar -zxvC models/model_$model
-done
diff --git a/spaces/HachiRe/Fusani/README.md b/spaces/HachiRe/Fusani/README.md
deleted file mode 100644
index 2d12121d7a09c18ee135c4db67ecf57888347d36..0000000000000000000000000000000000000000
--- a/spaces/HachiRe/Fusani/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Fusani
-emoji: 🀀
-colorFrom: red
-colorTo: blue
-sdk: static
-pinned: false
----
-
-
diff --git a/spaces/HarryLee/TextTopicModeling/README.md b/spaces/HarryLee/TextTopicModeling/README.md
deleted file mode 100644
index d94bff78960db4431ff003c6d5f1b3affc9a399c..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/TextTopicModeling/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: TextTopicModeling
-emoji: 🐠
-colorFrom: red
-colorTo: green
-sdk: streamlit
-sdk_version: 1.9.0
-python_version: 3.7
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/roberta/wsc/wsc_utils.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/roberta/wsc/wsc_utils.py
deleted file mode 100644
index da6ba74383a2490e1108609f315f44ad4b3bf002..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/roberta/wsc/wsc_utils.py
+++ /dev/null
@@ -1,241 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import json
-from functools import lru_cache
-
-
-def convert_sentence_to_json(sentence):
- if "_" in sentence:
- prefix, rest = sentence.split("_", 1)
- query, rest = rest.split("_", 1)
- query_index = len(prefix.rstrip().split(" "))
- else:
- query, query_index = None, None
-
- prefix, rest = sentence.split("[", 1)
- pronoun, rest = rest.split("]", 1)
- pronoun_index = len(prefix.rstrip().split(" "))
-
- sentence = sentence.replace("_", "").replace("[", "").replace("]", "")
-
- return {
- "idx": 0,
- "text": sentence,
- "target": {
- "span1_index": query_index,
- "span1_text": query,
- "span2_index": pronoun_index,
- "span2_text": pronoun,
- },
- }
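-
-# Illustrative example (hypothetical sentence): the query is wrapped in
-# underscores and the pronoun in square brackets.
-#
-#   convert_sentence_to_json("The _trophy_ does not fit because [it] is too big.")
-#   # -> span1_text == "trophy", span1_index == 1,
-#   #    span2_text == "it",     span2_index == 6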
-
-
-def extended_noun_chunks(sentence):
- noun_chunks = {(np.start, np.end) for np in sentence.noun_chunks}
- np_start, cur_np = 0, "NONE"
- for i, token in enumerate(sentence):
- np_type = token.pos_ if token.pos_ in {"NOUN", "PROPN"} else "NONE"
- if np_type != cur_np:
- if cur_np != "NONE":
- noun_chunks.add((np_start, i))
- if np_type != "NONE":
- np_start = i
- cur_np = np_type
- if cur_np != "NONE":
- noun_chunks.add((np_start, len(sentence)))
- return [sentence[s:e] for (s, e) in sorted(noun_chunks)]
-
-
-def find_token(sentence, start_pos):
- found_tok = None
- for tok in sentence:
- if tok.idx == start_pos:
- found_tok = tok
- break
- return found_tok
-
-
-def find_span(sentence, search_text, start=0):
- search_text = search_text.lower()
- for tok in sentence[start:]:
- remainder = sentence[tok.i :].text.lower()
- if remainder.startswith(search_text):
- len_to_consume = len(search_text)
- start_idx = tok.idx
- for next_tok in sentence[tok.i :]:
- end_idx = next_tok.idx + len(next_tok.text)
- if end_idx - start_idx == len_to_consume:
- span = sentence[tok.i : next_tok.i + 1]
- return span
- return None
-
-
-@lru_cache(maxsize=1)
-def get_detokenizer():
- from sacremoses import MosesDetokenizer
-
- detok = MosesDetokenizer(lang="en")
- return detok
-
-
-@lru_cache(maxsize=1)
-def get_spacy_nlp():
- import en_core_web_lg
-
- nlp = en_core_web_lg.load()
- return nlp
-
-
-def jsonl_iterator(input_fname, positive_only=False, ngram_order=3, eval=False):
- detok = get_detokenizer()
- nlp = get_spacy_nlp()
-
- with open(input_fname) as fin:
- for line in fin:
- sample = json.loads(line.strip())
-
- if positive_only and "label" in sample and not sample["label"]:
- # only consider examples where the query is correct
- continue
-
- target = sample["target"]
-
- # clean up the query
- query = target["span1_text"]
- if query is not None:
- if "\n" in query:
- continue
- if query.endswith(".") or query.endswith(","):
- query = query[:-1]
-
- # split tokens
- tokens = sample["text"].split(" ")
-
- def strip_pronoun(x):
- return x.rstrip('.,"')
-
- # find the pronoun
- pronoun_idx = target["span2_index"]
- pronoun = strip_pronoun(target["span2_text"])
- if strip_pronoun(tokens[pronoun_idx]) != pronoun:
- # hack: sometimes the index is misaligned
- if strip_pronoun(tokens[pronoun_idx + 1]) == pronoun:
- pronoun_idx += 1
- else:
- raise Exception("Misaligned pronoun!")
- assert strip_pronoun(tokens[pronoun_idx]) == pronoun
-
- # split tokens before and after the pronoun
- before = tokens[:pronoun_idx]
- after = tokens[pronoun_idx + 1 :]
-
- # the GPT BPE attaches leading spaces to tokens, so we keep track
- # of whether we need spaces before or after the pronoun
- leading_space = " " if pronoun_idx > 0 else ""
- trailing_space = " " if len(after) > 0 else ""
-
- # detokenize
- before = detok.detokenize(before, return_str=True)
- pronoun = detok.detokenize([pronoun], return_str=True)
- after = detok.detokenize(after, return_str=True)
-
- # hack: when the pronoun ends in a period (or comma), move the
- # punctuation to the "after" part
- if pronoun.endswith(".") or pronoun.endswith(","):
- after = pronoun[-1] + trailing_space + after
- pronoun = pronoun[:-1]
-
- # hack: when the "after" part begins with a comma or period, remove
- # the trailing space
- if after.startswith(".") or after.startswith(","):
- trailing_space = ""
-
- # parse sentence with spacy
- sentence = nlp(before + leading_space + pronoun + trailing_space + after)
-
- # find pronoun span
- start = len(before + leading_space)
- first_pronoun_tok = find_token(sentence, start_pos=start)
- pronoun_span = find_span(sentence, pronoun, start=first_pronoun_tok.i)
- assert pronoun_span.text == pronoun
-
- if eval:
- # convert to format where pronoun is surrounded by "[]" and
- # query is surrounded by "_"
- query_span = find_span(sentence, query)
- query_with_ws = "_{}_{}".format(
- query_span.text,
- (" " if query_span.text_with_ws.endswith(" ") else ""),
- )
- pronoun_with_ws = "[{}]{}".format(
- pronoun_span.text,
- (" " if pronoun_span.text_with_ws.endswith(" ") else ""),
- )
- if query_span.start < pronoun_span.start:
- first = (query_span, query_with_ws)
- second = (pronoun_span, pronoun_with_ws)
- else:
- first = (pronoun_span, pronoun_with_ws)
- second = (query_span, query_with_ws)
- sentence = (
- sentence[: first[0].start].text_with_ws
- + first[1]
- + sentence[first[0].end : second[0].start].text_with_ws
- + second[1]
- + sentence[second[0].end :].text
- )
- yield sentence, sample.get("label", None)
- else:
- yield sentence, pronoun_span, query, sample.get("label", None)
-
-
-def winogrande_jsonl_iterator(input_fname, eval=False):
- with open(input_fname) as fin:
- for line in fin:
- sample = json.loads(line.strip())
- sentence, option1, option2 = (
- sample["sentence"],
- sample["option1"],
- sample["option2"],
- )
-
- pronoun_span = (sentence.index("_"), sentence.index("_") + 1)
-
- if eval:
- query, cand = option1, option2
- else:
- query = option1 if sample["answer"] == "1" else option2
- cand = option2 if sample["answer"] == "1" else option1
- yield sentence, pronoun_span, query, cand
-
-
-def filter_noun_chunks(
- chunks, exclude_pronouns=False, exclude_query=None, exact_match=False
-):
- if exclude_pronouns:
- chunks = [
- np
- for np in chunks
- if (np.lemma_ != "-PRON-" and not all(tok.pos_ == "PRON" for tok in np))
- ]
-
- if exclude_query is not None:
- excl_txt = [exclude_query.lower()]
- filtered_chunks = []
- for chunk in chunks:
- lower_chunk = chunk.text.lower()
- found = False
- for excl in excl_txt:
- if (
- not exact_match and (lower_chunk in excl or excl in lower_chunk)
- ) or lower_chunk == excl:
- found = True
- break
- if not found:
- filtered_chunks.append(chunk)
- chunks = filtered_chunks
-
- return chunks
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/translation/prepare-iwslt14.sh b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/translation/prepare-iwslt14.sh
deleted file mode 100644
index 2fb6643fbccb58701dcbb77d91430e68a821ba38..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/translation/prepare-iwslt14.sh
+++ /dev/null
@@ -1,115 +0,0 @@
-#!/usr/bin/env bash
-#
-# Adapted from https://github.com/facebookresearch/MIXER/blob/master/prepareData.sh
-
-echo 'Cloning Moses github repository (for tokenization scripts)...'
-git clone https://github.com/moses-smt/mosesdecoder.git
-
-echo 'Cloning Subword NMT repository (for BPE pre-processing)...'
-git clone https://github.com/rsennrich/subword-nmt.git
-
-SCRIPTS=mosesdecoder/scripts
-TOKENIZER=$SCRIPTS/tokenizer/tokenizer.perl
-LC=$SCRIPTS/tokenizer/lowercase.perl
-CLEAN=$SCRIPTS/training/clean-corpus-n.perl
-BPEROOT=subword-nmt/subword_nmt
-BPE_TOKENS=10000
-
-URL="http://dl.fbaipublicfiles.com/fairseq/data/iwslt14/de-en.tgz"
-GZ=de-en.tgz
-
-if [ ! -d "$SCRIPTS" ]; then
- echo "Please set SCRIPTS variable correctly to point to Moses scripts."
- exit
-fi
-
-src=de
-tgt=en
-lang=de-en
-prep=iwslt14.tokenized.de-en
-tmp=$prep/tmp
-orig=orig
-
-mkdir -p $orig $tmp $prep
-
-echo "Downloading data from ${URL}..."
-cd $orig
-wget "$URL"
-
-if [ -f $GZ ]; then
- echo "Data successfully downloaded."
-else
- echo "Data not successfully downloaded."
- exit
-fi
-
-tar zxvf $GZ
-cd ..
-
-echo "pre-processing train data..."
-for l in $src $tgt; do
- f=train.tags.$lang.$l
- tok=train.tags.$lang.tok.$l
-
- cat $orig/$lang/$f | \
-    grep -v '<url>' | \
-    grep -v '<talkid>' | \
-    grep -v '<keywords>' | \
-    sed -e 's/<title>//g' | \
-    sed -e 's/<\/title>//g' | \
-    sed -e 's/<description>//g' | \
- sed -e 's/<\/description>//g' | \
- perl $TOKENIZER -threads 8 -l $l > $tmp/$tok
- echo ""
-done
-perl $CLEAN -ratio 1.5 $tmp/train.tags.$lang.tok $src $tgt $tmp/train.tags.$lang.clean 1 175
-for l in $src $tgt; do
- perl $LC < $tmp/train.tags.$lang.clean.$l > $tmp/train.tags.$lang.$l
-done
-
-echo "pre-processing valid/test data..."
-for l in $src $tgt; do
- for o in `ls $orig/$lang/IWSLT14.TED*.$l.xml`; do
- fname=${o##*/}
- f=$tmp/${fname%.*}
- echo $o $f
-    grep '<seg id' $o | \
-        sed -e 's/<seg id="[0-9]*">\s*//g' | \
- sed -e 's/\s*<\/seg>\s*//g' | \
- sed -e "s/\’/\'/g" | \
- perl $TOKENIZER -threads 8 -l $l | \
- perl $LC > $f
- echo ""
- done
-done
-
-
-echo "creating train, valid, test..."
-for l in $src $tgt; do
- awk '{if (NR%23 == 0) print $0; }' $tmp/train.tags.de-en.$l > $tmp/valid.$l
- awk '{if (NR%23 != 0) print $0; }' $tmp/train.tags.de-en.$l > $tmp/train.$l
-
- cat $tmp/IWSLT14.TED.dev2010.de-en.$l \
- $tmp/IWSLT14.TEDX.dev2012.de-en.$l \
- $tmp/IWSLT14.TED.tst2010.de-en.$l \
- $tmp/IWSLT14.TED.tst2011.de-en.$l \
- $tmp/IWSLT14.TED.tst2012.de-en.$l \
- > $tmp/test.$l
-done
-
-TRAIN=$tmp/train.en-de
-BPE_CODE=$prep/code
-rm -f $TRAIN
-for l in $src $tgt; do
- cat $tmp/train.$l >> $TRAIN
-done
-
-echo "learn_bpe.py on ${TRAIN}..."
-python $BPEROOT/learn_bpe.py -s $BPE_TOKENS < $TRAIN > $BPE_CODE
-
-for L in $src $tgt; do
- for f in train.$L valid.$L test.$L; do
- echo "apply_bpe.py to ${f}..."
- python $BPEROOT/apply_bpe.py -c $BPE_CODE < $tmp/$f > $prep/$f
- done
-done
diff --git a/spaces/Heshwa/html-code-generation-from-images-with-deep-neural-networks/main_program.py b/spaces/Heshwa/html-code-generation-from-images-with-deep-neural-networks/main_program.py
deleted file mode 100644
index bf9ec602b215f5125d44e7e6071989a2fefaae88..0000000000000000000000000000000000000000
--- a/spaces/Heshwa/html-code-generation-from-images-with-deep-neural-networks/main_program.py
+++ /dev/null
@@ -1,89 +0,0 @@
-from __future__ import absolute_import
-from __future__ import print_function
-
-__author__ = 'Taneem Jan, taneemishere.github.io'
-
-import os.path
-from os.path import basename
-
-from classes.Sampler import *
-from classes.model.Main_Model import *
-
-
-def dsl_code_generation(input_image):
- trained_weights_path = "classes/model/bin"
- trained_model_name = "Main_Model"
- input_path = input_image
- output_path = "data/output/"
- search_method = "greedy"
- meta_dataset = np.load("{}/meta_dataset.npy".format(trained_weights_path), allow_pickle=True)
- input_shape = meta_dataset[0]
- output_size = meta_dataset[1]
-
- model = Main_Model(input_shape, output_size, trained_weights_path)
- model.load(trained_model_name)
-
- sampler = Sampler(trained_weights_path, input_shape, output_size, CONTEXT_LENGTH)
-
- file_name = 'input_image_from_interface.png'
- file_name = basename(file_name)[:basename(file_name).find(".")]
- evaluation_img = Utils.get_preprocessed_img(input_path, IMAGE_SIZE)
-
- if search_method == "greedy":
- result, _ = sampler.predict_greedy(model, np.array([evaluation_img]))
- print("Result greedy: \n {}".format(result))
-
- with open("{}/{}.gui".format(output_path, file_name), 'w') as out_f:
- out_f.write(result.replace(START_TOKEN, "").replace(END_TOKEN, ""))
-
- return file_name, output_path
-
-
-def compile_gui(file_path, filename):
- from os.path import basename
- from compiler.Utils import Utils
- from compiler.Compiler import Compiler
-
- input_path = (file_path + filename)
-
- # remove the path
- file_ = os.path.basename(input_path)
- # remove the extension
- file_ = os.path.splitext(file_)[0]
- # add the extension of gui
- file_ = "data/output/" + file_ + ".gui"
-
- input_file = file_
-
- FILL_WITH_RANDOM_TEXT = True
- TEXT_PLACE_HOLDER = "[]"
-
- dsl_path = "compiler/assets/web-dsl-mapping.json"
- compiler = Compiler(dsl_path)
-
- def render_content_with_text(key, value):
- if FILL_WITH_RANDOM_TEXT:
- if key.find("btn") != -1:
- value = value.replace(TEXT_PLACE_HOLDER, Utils.get_random_text())
- elif key.find("title") != -1:
- value = value.replace(TEXT_PLACE_HOLDER, Utils.get_random_text(length_text=5, space_number=0))
- elif key.find("text") != -1:
- value = value.replace(TEXT_PLACE_HOLDER,
- Utils.get_random_text(length_text=56, space_number=7, with_upper_case=False))
- return value
-
- file_uid = basename(input_file)[:basename(input_file).find(".")]
- path = input_file[:input_file.find(file_uid)]
-
- input_file_path = "{}{}.gui".format(path, file_uid)
- output_file_path = "{}{}.html".format(path, file_uid)
-
- html_code = compiler.compile(input_file_path, output_file_path, rendering_function=render_content_with_text)
- print("Generated code is compiled..!!")
- return html_code
-
-
-def main_method(input_image_from_interface):
- file_name, file_output_path = dsl_code_generation(input_image_from_interface)
- result = compile_gui(file_output_path, file_name)
- return result
diff --git a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/helpers.py b/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/helpers.py
deleted file mode 100644
index 4d2d329331766f0b9b94175412e061aca218e683..0000000000000000000000000000000000000000
--- a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/helpers.py
+++ /dev/null
@@ -1,792 +0,0 @@
-"""
-Defines helper methods useful for loading and caching Interface examples.
-"""
-from __future__ import annotations
-
-import ast
-import csv
-import inspect
-import os
-import subprocess
-import tempfile
-import threading
-import warnings
-from pathlib import Path
-from typing import TYPE_CHECKING, Any, Callable, Iterable, List, Optional, Tuple
-
-import matplotlib
-import matplotlib.pyplot as plt
-import numpy as np
-import PIL
-
-from gradio import processing_utils, routes, utils
-from gradio.context import Context
-from gradio.documentation import document, set_documentation_group
-from gradio.flagging import CSVLogger
-
-if TYPE_CHECKING: # Only import for type checking (to avoid circular imports).
- from gradio.components import IOComponent
-
-CACHED_FOLDER = "gradio_cached_examples"
-LOG_FILE = "log.csv"
-
-set_documentation_group("helpers")
-
-
-def create_examples(
- examples: List[Any] | List[List[Any]] | str,
- inputs: IOComponent | List[IOComponent],
- outputs: IOComponent | List[IOComponent] | None = None,
- fn: Callable | None = None,
- cache_examples: bool = False,
- examples_per_page: int = 10,
- _api_mode: bool = False,
- label: str | None = None,
- elem_id: str | None = None,
- run_on_click: bool = False,
- preprocess: bool = True,
- postprocess: bool = True,
- batch: bool = False,
-):
- """Top-level synchronous function that creates Examples. Provided for backwards compatibility, i.e. so that gr.Examples(...) can be used to create the Examples component."""
- examples_obj = Examples(
- examples=examples,
- inputs=inputs,
- outputs=outputs,
- fn=fn,
- cache_examples=cache_examples,
- examples_per_page=examples_per_page,
- _api_mode=_api_mode,
- label=label,
- elem_id=elem_id,
- run_on_click=run_on_click,
- preprocess=preprocess,
- postprocess=postprocess,
- batch=batch,
- _initiated_directly=False,
- )
- utils.synchronize_async(examples_obj.create)
- return examples_obj
-
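-# Illustrative usage sketch (hypothetical component names) showing how the
-# wrapper above is typically reached through gr.Examples inside a Blocks app:
-#
-#   import gradio as gr
-#
-#   with gr.Blocks() as demo:
-#       name = gr.Textbox(label="Name")
-#       greeting = gr.Textbox(label="Greeting")
-#       gr.Examples(examples=[["Alice"], ["Bob"]], inputs=name)
-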
-
-@document()
-class Examples:
- """
- This class is a wrapper over the Dataset component and can be used to create Examples
- for Blocks / Interfaces. Populates the Dataset component with examples and
- assigns event listener so that clicking on an example populates the input/output
- components. Optionally handles example caching for fast inference.
-
- Demos: blocks_inputs, fake_gan
- Guides: more_on_examples_and_flagging, using_hugging_face_integrations, image_classification_in_pytorch, image_classification_in_tensorflow, image_classification_with_vision_transformers, create_your_own_friends_with_a_gan
- """
-
- def __init__(
- self,
- examples: List[Any] | List[List[Any]] | str,
- inputs: IOComponent | List[IOComponent],
- outputs: Optional[IOComponent | List[IOComponent]] = None,
- fn: Optional[Callable] = None,
- cache_examples: bool = False,
- examples_per_page: int = 10,
- _api_mode: bool = False,
- label: str = "Examples",
- elem_id: Optional[str] = None,
- run_on_click: bool = False,
- preprocess: bool = True,
- postprocess: bool = True,
- batch: bool = False,
- _initiated_directly: bool = True,
- ):
- """
- Parameters:
- examples: example inputs that can be clicked to populate specific components. Should be nested list, in which the outer list consists of samples and each inner list consists of an input corresponding to each input component. A string path to a directory of examples can also be provided but it should be within the directory with the python file running the gradio app. If there are multiple input components and a directory is provided, a log.csv file must be present in the directory to link corresponding inputs.
- inputs: the component or list of components corresponding to the examples
- outputs: optionally, provide the component or list of components corresponding to the output of the examples. Required if `cache_examples` is True.
- fn: optionally, provide the function to run to generate the outputs corresponding to the examples. Required if `cache_examples` is True.
- cache_examples: if True, caches examples for fast runtime; `fn` and `outputs` must then be provided.
- examples_per_page: how many examples to show per page.
- label: the label to use for the examples component (by default, "Examples")
- elem_id: an optional string that is assigned as the id of this component in the HTML DOM.
- run_on_click: if cache_examples is False, clicking an example does not run the function by default; set this to True to run the function when an example is clicked. Has no effect if cache_examples is True.
- preprocess: if True, preprocesses the example input before running the prediction function and caching the output. Only applies if cache_examples is True.
- postprocess: if True, postprocesses the example output after running the prediction function and before caching. Only applies if cache_examples is True.
- batch: If True, then the function should process a batch of inputs, meaning that it should accept a list of input values for each parameter. Used only if cache_examples is True.
- """
- if _initiated_directly:
- warnings.warn(
- "Please use gr.Examples(...) instead of gr.examples.Examples(...) to create the Examples.",
- )
-
- if cache_examples and (fn is None or outputs is None):
- raise ValueError("If caching examples, `fn` and `outputs` must be provided")
-
- if not isinstance(inputs, list):
- inputs = [inputs]
- if not isinstance(outputs, list):
- outputs = [outputs]
-
- working_directory = Path().absolute()
-
- if examples is None:
- raise ValueError("The parameter `examples` cannot be None")
- elif isinstance(examples, list) and (
- len(examples) == 0 or isinstance(examples[0], list)
- ):
- pass
- elif (
- isinstance(examples, list) and len(inputs) == 1
- ): # If there is only one input component, examples can be provided as a regular list instead of a list of lists
- examples = [[e] for e in examples]
- elif isinstance(examples, str):
- if not os.path.exists(examples):
- raise FileNotFoundError(
- "Could not find examples directory: " + examples
- )
- working_directory = examples
- if not os.path.exists(os.path.join(examples, LOG_FILE)):
- if len(inputs) == 1:
- examples = [[e] for e in os.listdir(examples)]
- else:
- raise FileNotFoundError(
- "Could not find log file (required for multiple inputs): "
- + LOG_FILE
- )
- else:
- with open(os.path.join(examples, LOG_FILE)) as logs:
- examples = list(csv.reader(logs))
- examples = [
- examples[i][: len(inputs)] for i in range(1, len(examples))
- ] # remove header and unnecessary columns
-
- else:
- raise ValueError(
- "The parameter `examples` must either be a string directory or a list"
- "(if there is only 1 input component) or (more generally), a nested "
- "list, where each sublist represents a set of inputs."
- )
-
- input_has_examples = [False] * len(inputs)
- for example in examples:
- for idx, example_for_input in enumerate(example):
- if example_for_input is not None:
- try:
- input_has_examples[idx] = True
- except IndexError:
- pass # If there are more example components than inputs, ignore. This can sometimes be intentional (e.g. loading from a log file where outputs and timestamps are also logged)
-
- inputs_with_examples = [
- inp for (inp, keep) in zip(inputs, input_has_examples) if keep
- ]
- non_none_examples = [
- [ex for (ex, keep) in zip(example, input_has_examples) if keep]
- for example in examples
- ]
-
- self.examples = examples
- self.non_none_examples = non_none_examples
- self.inputs = inputs
- self.inputs_with_examples = inputs_with_examples
- self.outputs = outputs
- self.fn = fn
- self.cache_examples = cache_examples
- self._api_mode = _api_mode
- self.preprocess = preprocess
- self.postprocess = postprocess
- self.batch = batch
-
- with utils.set_directory(working_directory):
- self.processed_examples = [
- [
- component.postprocess(sample)
- for component, sample in zip(inputs, example)
- ]
- for example in examples
- ]
- self.non_none_processed_examples = [
- [ex for (ex, keep) in zip(example, input_has_examples) if keep]
- for example in self.processed_examples
- ]
- if cache_examples:
- for example in self.examples:
- if len([ex for ex in example if ex is not None]) != len(self.inputs):
- warnings.warn(
- "Examples are being cached but not all input components have "
- "example values. This may result in an exception being thrown by "
- "your function. If you do get an error while caching examples, make "
- "sure all of your inputs have example values for all of your examples "
- "or you provide default values for those particular parameters in your function."
- )
- break
-
- from gradio.components import Dataset
-
- with utils.set_directory(working_directory):
- self.dataset = Dataset(
- components=inputs_with_examples,
- samples=non_none_examples,
- type="index",
- label=label,
- samples_per_page=examples_per_page,
- elem_id=elem_id,
- )
-
- self.cached_folder = os.path.join(CACHED_FOLDER, str(self.dataset._id))
- self.cached_file = os.path.join(self.cached_folder, "log.csv")
- self.cache_examples = cache_examples
- self.run_on_click = run_on_click
-
- async def create(self) -> None:
- """Caches the examples if self.cache_examples is True and creates the Dataset
- component to hold the examples"""
-
- async def load_example(example_id):
- if self.cache_examples:
- processed_example = self.non_none_processed_examples[
- example_id
- ] + await self.load_from_cache(example_id)
- else:
- processed_example = self.non_none_processed_examples[example_id]
- return utils.resolve_singleton(processed_example)
-
- if Context.root_block:
- self.dataset.click(
- load_example,
- inputs=[self.dataset],
- outputs=self.inputs_with_examples
- + (self.outputs if self.cache_examples else []),
- postprocess=False,
- queue=False,
- )
- if self.run_on_click and not self.cache_examples:
- self.dataset.click(
- self.fn,
- inputs=self.inputs,
- outputs=self.outputs,
- )
-
- if self.cache_examples:
- await self.cache()
-
- async def cache(self) -> None:
- """
- Caches all of the examples so that their predictions can be shown immediately.
- """
- if os.path.exists(self.cached_file):
- print(
- f"Using cache from '{os.path.abspath(self.cached_folder)}' directory. If method or examples have changed since last caching, delete this folder to clear cache."
- )
- else:
- if Context.root_block is None:
- raise ValueError("Cannot cache examples if not in a Blocks context")
-
- print(f"Caching examples at: '{os.path.abspath(self.cached_file)}'")
- cache_logger = CSVLogger()
-
- # create a fake dependency to process the examples and get the predictions
- dependency = Context.root_block.set_event_trigger(
- event_name="fake_event",
- fn=self.fn,
- inputs=self.inputs_with_examples,
- outputs=self.outputs,
- preprocess=self.preprocess and not self._api_mode,
- postprocess=self.postprocess and not self._api_mode,
- batch=self.batch,
- )
-
- fn_index = Context.root_block.dependencies.index(dependency)
- cache_logger.setup(self.outputs, self.cached_folder)
- for example_id, _ in enumerate(self.examples):
- processed_input = self.processed_examples[example_id]
- if self.batch:
- processed_input = [[value] for value in processed_input]
- prediction = await Context.root_block.process_api(
- fn_index=fn_index, inputs=processed_input, request=None, state={}
- )
- output = prediction["data"]
- if self.batch:
- output = [value[0] for value in output]
- cache_logger.flag(output)
- # Remove the "fake_event" to prevent bugs in loading interfaces from spaces
- Context.root_block.dependencies.remove(dependency)
- Context.root_block.fns.pop(fn_index)
-
- async def load_from_cache(self, example_id: int) -> List[Any]:
- """Loads a particular cached example for the interface.
- Parameters:
- example_id: The id of the example to process (zero-indexed).
- """
- with open(self.cached_file) as cache:
- examples = list(csv.reader(cache))
- example = examples[example_id + 1] # +1 to adjust for header
- output = []
- for component, value in zip(self.outputs, example):
- try:
- value_as_dict = ast.literal_eval(value)
- assert utils.is_update(value_as_dict)
- output.append(value_as_dict)
- except (ValueError, TypeError, SyntaxError, AssertionError):
- output.append(component.serialize(value, self.cached_folder))
- return output
-
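-# Cache layout note (editor's sketch): CSVLogger writes one header row plus one
-# row per example to gradio_cached_examples/<dataset id>/log.csv; load_from_cache
-# reads row `example_id + 1` and deserializes each output column with its
-# component's `serialize`.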
-
-class TrackedIterable:
- def __init__(
- self,
- iterable: Iterable,
- index: int | None,
- length: int | None,
- desc: str | None,
- unit: str | None,
- _tqdm=None,
- progress: float | None = None,
- ) -> None:
- self.iterable = iterable
- self.index = index
- self.length = length
- self.desc = desc
- self.unit = unit
- self._tqdm = _tqdm
- self.progress = progress
-
-
-@document("__call__", "tqdm")
-class Progress(Iterable):
- """
- The Progress class provides a custom progress tracker that is used in a function signature.
- To attach a Progress tracker to a function, simply add a parameter right after the input parameters that has a default value set to a `gradio.Progress()` instance.
- The Progress tracker can then be updated in the function by calling the Progress object or using the `tqdm` method on an Iterable.
- The Progress tracker is currently only available with `queue()`.
- Example:
- import gradio as gr
- import time
- def my_function(x, progress=gr.Progress()):
- progress(0, desc="Starting...")
- time.sleep(1)
- for i in progress.tqdm(range(100)):
- time.sleep(0.1)
- return x
- gr.Interface(my_function, gr.Textbox(), gr.Textbox()).queue().launch()
- Demos: progress
- """
-
- def __init__(
- self,
- track_tqdm: bool = False,
- _active: bool = False,
- _callback: Callable | None = None,
- _event_id: str | None = None,
- ):
- """
- Parameters:
- track_tqdm: If True, the Progress object will track any tqdm.tqdm iterations with the tqdm library in the function.
- """
- self.track_tqdm = track_tqdm
- self._active = _active
- self._callback = _callback
- self._event_id = _event_id
- self.iterables: List[TrackedIterable] = []
-
- def __len__(self):
- return self.iterables[-1].length
-
- def __iter__(self):
- return self
-
- def __next__(self):
- """
- Updates progress tracker with next item in iterable.
- """
- if self._active:
- current_iterable = self.iterables[-1]
- while (
- not hasattr(current_iterable.iterable, "__next__")
- and len(self.iterables) > 0
- ):
- current_iterable = self.iterables.pop()
- self._callback(
- event_id=self._event_id,
- iterables=self.iterables,
- )
- current_iterable.index += 1
- try:
- return next(current_iterable.iterable)
- except StopIteration:
- self.iterables.pop()
- raise StopIteration
- else:
- return self
-
- def __call__(
- self,
- progress: float | Tuple[int, int | None] | None,
- desc: str | None = None,
- total: float | None = None,
- unit: str = "steps",
- _tqdm=None,
- ):
- """
- Updates progress tracker with progress and message text.
- Parameters:
- progress: If float, should be between 0 and 1 representing completion. If Tuple, first number represents steps completed, and second value represents total steps or None if unknown. If None, hides progress bar.
- desc: description to display.
- total: estimated total number of steps.
- unit: unit of iterations.
- """
- if self._active:
- if isinstance(progress, tuple):
- index, total = progress
- progress = None
- else:
- index = None
- self._callback(
- event_id=self._event_id,
- iterables=self.iterables
- + [TrackedIterable(None, index, total, desc, unit, _tqdm, progress)],
- )
- else:
- return progress
-
- def tqdm(
- self,
- iterable: Iterable | None,
- desc: str | None = None,
- total: float | None = None,
- unit: str = "steps",
- _tqdm=None,
- *args,
- **kwargs,
- ):
- """
- Attaches progress tracker to iterable, like tqdm.
- Parameters:
- iterable: iterable to attach progress tracker to.
- desc: description to display.
- total: estimated total number of steps.
- unit: unit of iterations.
- """
- if iterable is None:
- new_iterable = TrackedIterable(None, 0, total, desc, unit, _tqdm)
- self.iterables.append(new_iterable)
- self._callback(event_id=self._event_id, iterables=self.iterables)
- return
- length = len(iterable) if hasattr(iterable, "__len__") else None
- self.iterables.append(
- TrackedIterable(iter(iterable), 0, length, desc, unit, _tqdm)
- )
- return self
-
- def update(self, n=1):
- """
- Advances the latest iterable by the specified number of steps.
- Parameters:
- n: number of steps completed.
- """
- if self._active and len(self.iterables) > 0:
- current_iterable = self.iterables[-1]
- current_iterable.index += n
- self._callback(
- event_id=self._event_id,
- iterables=self.iterables,
- )
- else:
- return
-
- def close(self, _tqdm):
- """
- Removes iterable with given _tqdm.
- """
- if self._active:
- for i in range(len(self.iterables)):
- if id(self.iterables[i]._tqdm) == id(_tqdm):
- self.iterables.pop(i)
- break
- self._callback(
- event_id=self._event_id,
- iterables=self.iterables,
- )
- else:
- return
-
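-# Manual-update sketch (editor's illustration): besides `progress.tqdm`, the
-# tracker can be called directly with a float in [0, 1] or a (completed, total)
-# tuple, per `__call__` below:
-#
-#   def my_function(x, progress=gr.Progress()):
-#       progress(0.3, desc="Loading data")
-#       progress((7, 10), desc="Processing")
-#       return x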
-
-def create_tracker(root_blocks, event_id, fn, track_tqdm):
- progress = Progress(
- _active=True, _callback=root_blocks._queue.set_progress, _event_id=event_id
- )
- if not track_tqdm:
- return progress, fn
-
- try:
- _tqdm = __import__("tqdm")
- except ModuleNotFoundError:
- return progress, fn
- if not hasattr(root_blocks, "_progress_tracker_per_thread"):
- root_blocks._progress_tracker_per_thread = {}
-
- def init_tqdm(self, iterable=None, desc=None, *args, **kwargs):
- self._progress = root_blocks._progress_tracker_per_thread.get(
- threading.get_ident()
- )
- if self._progress is not None:
- self._progress.event_id = event_id
- self._progress.tqdm(iterable, desc, _tqdm=self, *args, **kwargs)
- kwargs["file"] = open(os.devnull, "w")
- self.__init__orig__(iterable, desc, *args, **kwargs)
-
- def iter_tqdm(self):
- if self._progress is not None:
- return self._progress
- else:
- return self.__iter__orig__()
-
- def update_tqdm(self, n=1):
- if self._progress is not None:
- self._progress.update(n)
- return self.__update__orig__(n)
-
- def close_tqdm(self):
- if self._progress is not None:
- self._progress.close(self)
- return self.__close__orig__()
-
- def exit_tqdm(self, exc_type, exc_value, traceback):
- if self._progress is not None:
- self._progress.close(self)
- return self.__exit__orig__(exc_type, exc_value, traceback)
-
- if not hasattr(_tqdm.tqdm, "__init__orig__"):
- _tqdm.tqdm.__init__orig__ = _tqdm.tqdm.__init__
- _tqdm.tqdm.__init__ = init_tqdm
- if not hasattr(_tqdm.tqdm, "__update__orig__"):
- _tqdm.tqdm.__update__orig__ = _tqdm.tqdm.update
- _tqdm.tqdm.update = update_tqdm
- if not hasattr(_tqdm.tqdm, "__close__orig__"):
- _tqdm.tqdm.__close__orig__ = _tqdm.tqdm.close
- _tqdm.tqdm.close = close_tqdm
- if not hasattr(_tqdm.tqdm, "__exit__orig__"):
- _tqdm.tqdm.__exit__orig__ = _tqdm.tqdm.__exit__
- _tqdm.tqdm.__exit__ = exit_tqdm
- if not hasattr(_tqdm.tqdm, "__iter__orig__"):
- _tqdm.tqdm.__iter__orig__ = _tqdm.tqdm.__iter__
- _tqdm.tqdm.__iter__ = iter_tqdm
- if hasattr(_tqdm, "auto") and hasattr(_tqdm.auto, "tqdm"):
- _tqdm.auto.tqdm = _tqdm.tqdm
-
- def tracked_fn(*args):
- thread_id = threading.get_ident()
- root_blocks._progress_tracker_per_thread[thread_id] = progress
- response = fn(*args)
- del root_blocks._progress_tracker_per_thread[thread_id]
- return response
-
- return progress, tracked_fn
-
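-# Wiring sketch (editor's note): callers are expected to do roughly
-#   progress, fn = create_tracker(root_blocks, event_id, fn, track_tqdm=True)
-# and then invoke the returned `fn`; any tqdm created inside it reports to
-# `root_blocks._queue.set_progress` for the given event.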
-
-def special_args(
- fn: Callable,
- inputs: List[Any] | None = None,
- request: routes.Request | None = None,
-):
- """
- Checks if function has special arguments Request (via annotation) or Progress (via default value).
- If inputs is provided, these values will be loaded into the inputs array.
- Parameters:
- fn: function to check.
- inputs: array to load special arguments into.
- request: request to load into inputs.
- Returns:
- updated inputs, progress index
- """
- signature = inspect.signature(fn)
- positional_args = []
- for i, param in enumerate(signature.parameters.values()):
- if param.kind not in (param.POSITIONAL_ONLY, param.POSITIONAL_OR_KEYWORD):
- break
- positional_args.append(param)
- progress_index = None
- for i, param in enumerate(positional_args):
- if isinstance(param.default, Progress):
- progress_index = i
- if inputs is not None:
- inputs.insert(i, param.default)
- elif param.annotation == routes.Request:
- if inputs is not None:
- inputs.insert(i, request)
- if inputs is not None:
- while len(inputs) < len(positional_args):
- i = len(inputs)
- param = positional_args[i]
- if param.default == param.empty:
- warnings.warn("Unexpected argument. Filling with None.")
- inputs.append(None)
- else:
- inputs.append(param.default)
- return inputs or [], progress_index
-
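-# Detection sketch (editor's illustration): for
-#   def fn(x, request: routes.Request, progress=gr.Progress()): ...
-# special_args(fn, inputs=["hello"], request=req) inserts `req` at index 1 and
-# the Progress default at index 2, returning (["hello", req, <Progress>], 2).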
-
-@document()
-def update(**kwargs) -> dict:
- """
- Updates component properties. When a function passed into a Gradio Interface or a Blocks event handler returns a typical value, it updates the value of the output component. But it is also possible to update the properties of an output component (such as the number of lines of a `Textbox` or the visibility of an `Image`) by returning the component's `update()` function, which takes as parameters any of the constructor parameters for that component.
- This is a shorthand for using the update method on a component.
- For example, rather than using gr.Number.update(...) you can just use gr.update(...).
- Note that your editor's autocompletion will suggest proper parameters
- if you use the update method on the component.
- Demos: blocks_essay, blocks_update, blocks_essay_update
-
- Parameters:
- kwargs: Keyword arguments used to update the component's properties.
- Example:
- # Blocks Example
- import gradio as gr
- with gr.Blocks() as demo:
- radio = gr.Radio([1, 2, 4], label="Set the value of the number")
- number = gr.Number(value=2, interactive=True)
- radio.change(fn=lambda value: gr.update(value=value), inputs=radio, outputs=number)
- demo.launch()
-
- # Interface example
- import gradio as gr
- def change_textbox(choice):
- if choice == "short":
- return gr.Textbox.update(lines=2, visible=True)
- elif choice == "long":
- return gr.Textbox.update(lines=8, visible=True)
- else:
- return gr.Textbox.update(visible=False)
- gr.Interface(
- change_textbox,
- gr.Radio(
- ["short", "long", "none"], label="What kind of essay would you like to write?"
- ),
- gr.Textbox(lines=2),
- live=True,
- ).launch()
- """
- kwargs["__type__"] = "generic_update"
- return kwargs
-
-
-def skip() -> dict:
- """Returns an empty update, leaving the output component unchanged; shorthand for `update()` with no arguments."""
- return update()
-
-
-@document()
-def make_waveform(
- audio: str | Tuple[int, np.ndarray],
- *,
- bg_color: str = "#f3f4f6",
- bg_image: str = None,
- fg_alpha: float = 0.75,
- bars_color: str | Tuple[str, str] = ("#fbbf24", "#ea580c"),
- bar_count: int = 50,
- bar_width: float = 0.6,
-):
- """
- Generates a waveform video from an audio file. Useful for creating an easy-to-share audio visualization. The output should be passed into a `gr.Video` component.
- Parameters:
- audio: Audio file path or tuple of (sample_rate, audio_data)
- bg_color: Background color of waveform (ignored if bg_image is provided)
- bg_image: Background image of waveform
- fg_alpha: Opacity of foreground waveform
- bars_color: Color of waveform bars. Can be a single color or a tuple of (start_color, end_color) of gradient
- bar_count: Number of bars in waveform
- bar_width: Width of bars in waveform. 1 represents full width, 0.5 represents half width, etc.
- Returns:
- A filepath to the output video.
- """
- if isinstance(audio, str):
- audio_file = audio
- audio = processing_utils.audio_from_file(audio)
- else:
- tmp_wav = tempfile.NamedTemporaryFile(suffix=".wav", delete=False)
- processing_utils.audio_to_file(audio[0], audio[1], tmp_wav.name)
- audio_file = tmp_wav.name
- duration = round(len(audio[1]) / audio[0], 4)
-
- # Helper methods to create waveform
- def hex_to_RGB(hex_str):
- return [int(hex_str[i : i + 2], 16) for i in range(1, 6, 2)]
-
- def get_color_gradient(c1, c2, n):
- assert n > 1
- c1_rgb = np.array(hex_to_RGB(c1)) / 255
- c2_rgb = np.array(hex_to_RGB(c2)) / 255
- mix_pcts = [x / (n - 1) for x in range(n)]
- rgb_colors = [((1 - mix) * c1_rgb + (mix * c2_rgb)) for mix in mix_pcts]
- return [
- "#" + "".join([format(int(round(val * 255)), "02x") for val in item])
- for item in rgb_colors
- ]
-
- # Reshape audio to have a fixed number of bars
- samples = audio[1]
- if len(samples.shape) > 1:
- samples = np.mean(samples, 1)
- bins_to_pad = -len(samples) % bar_count  # pad only when length is not divisible by bar_count
- samples = np.pad(samples, [(0, bins_to_pad)])
- samples = np.reshape(samples, (bar_count, -1))
- samples = np.abs(samples)
- samples = np.max(samples, 1)
-
- matplotlib.use("Agg")
- plt.clf()
- # Plot waveform
- color = (
- bars_color
- if isinstance(bars_color, str)
- else get_color_gradient(bars_color[0], bars_color[1], bar_count)
- )
- plt.bar(
- np.arange(0, bar_count),
- samples * 2,
- bottom=(-1 * samples),
- width=bar_width,
- color=color,
- )
- plt.axis("off")
- plt.margins(x=0)
- tmp_img = tempfile.NamedTemporaryFile(suffix=".png", delete=False)
- savefig_kwargs = {"bbox_inches": "tight"}
- if bg_image is not None:
- savefig_kwargs["transparent"] = True
- else:
- savefig_kwargs["facecolor"] = bg_color
- plt.savefig(tmp_img.name, **savefig_kwargs)
- waveform_img = PIL.Image.open(tmp_img.name)
- waveform_img = waveform_img.resize((1000, 200))
-
- # Composite waveform with background image
- if bg_image is not None:
- waveform_array = np.array(waveform_img)
- waveform_array[:, :, 3] = waveform_array[:, :, 3] * fg_alpha
- waveform_img = PIL.Image.fromarray(waveform_array)
-
- bg_img = PIL.Image.open(bg_image)
- waveform_width, waveform_height = waveform_img.size
- bg_width, bg_height = bg_img.size
- if waveform_width != bg_width:
- bg_img = bg_img.resize(
- (waveform_width, 2 * int(bg_height * waveform_width / bg_width / 2))
- )
- bg_width, bg_height = bg_img.size
- composite_height = max(bg_height, waveform_height)
- composite = PIL.Image.new("RGBA", (waveform_width, composite_height), "#FFFFFF")
- composite.paste(bg_img, (0, composite_height - bg_height))
- composite.paste(
- waveform_img, (0, composite_height - waveform_height), waveform_img
- )
- composite.save(tmp_img.name)
- img_width, img_height = composite.size
- else:
- img_width, img_height = waveform_img.size
- waveform_img.save(tmp_img.name)
-
- # Convert waveform to video with ffmpeg
- output_mp4 = tempfile.NamedTemporaryFile(suffix=".mp4", delete=False)
-
- ffmpeg_cmd = f"""ffmpeg -loop 1 -i {tmp_img.name} -i {audio_file} -vf "color=c=#FFFFFF77:s={img_width}x{img_height}[bar];[0][bar]overlay=-w+(w/{duration})*t:H-h:shortest=1" -t {duration} -y {output_mp4.name}"""
-
- subprocess.call(ffmpeg_cmd, shell=True)
- return output_mp4.name
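-
-# Usage sketch (editor's illustration): pass the returned filepath to a gr.Video
-# component, e.g.
-#   import gradio as gr
-#   gr.Interface(fn=make_waveform, inputs=gr.Audio(type="filepath"),
-#                outputs=gr.Video()).launch()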
diff --git a/spaces/HuggingFaceH4/instruction-model-outputs-filtered/README.md b/spaces/HuggingFaceH4/instruction-model-outputs-filtered/README.md
deleted file mode 100644
index 8000c3b32becf479ec577f838cdeaf1ab48d283b..0000000000000000000000000000000000000000
--- a/spaces/HuggingFaceH4/instruction-model-outputs-filtered/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Instruction Model Outputs Filtered
-emoji: 🌪️
-colorFrom: blue
-colorTo: pink
-sdk: streamlit
-sdk_version: 1.17.0
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-Filtering canned responses and outputs with ROUGE-L > 0.7
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/ICML2022/OFA/fairseq/examples/latent_depth/latent_depth_src/loss/latent_depth.py b/spaces/ICML2022/OFA/fairseq/examples/latent_depth/latent_depth_src/loss/latent_depth.py
deleted file mode 100644
index a3b9535ecac3ec403868681a8b50c1fbe1c90dfe..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/examples/latent_depth/latent_depth_src/loss/latent_depth.py
+++ /dev/null
@@ -1,99 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import math
-
-import torch
-from torch.nn.modules.loss import _Loss
-
-
-class LatentLayersKLLoss(_Loss):
- def __init__(self, args):
- super().__init__()
- self.args = args
-
- def forward(self, layer_samples, lang_idx, update_num, sample_size):
- prior = self.args.prior
- samples = layer_samples[lang_idx]
- eps = 1e-7
- if prior == "uniform":
- # uniform prior
- kl_loss = (samples * (torch.log(samples + eps) - math.log(0.5))).sum(-1)
- elif prior == "agged_posterior":
- # aggregated posterior
- y_t = torch.stack([x.detach() for x in layer_samples], dim=0)
- agged_q = torch.sum(y_t, dim=0)
- row_norm = agged_q.sum(-1)
- normed_agg_q = agged_q / row_norm
- kl_loss = (
- samples * (torch.log(samples + eps) - torch.log(normed_agg_q + eps))
- ).sum(-1)
- else:
- raise NotImplementedError("The specified prior is not implemented.")
-
- # normalized by number of layers
- kl_loss /= layer_samples[0].size()[0]
- kl_weight = min(
- self.args.sparsity_weight,
- (update_num - self.args.soft_update)
- * self.args.sparsity_weight
- / self.args.anneal_updates,
- )
- kl_loss *= kl_weight * sample_size
- return kl_loss
-
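-# Shape note (editor's sketch): `layer_samples` is assumed to be a list with one
-# tensor per language, whose first dimension indexes decoder layers and holds
-# per-layer selection probabilities; with prior="uniform" the loss reduces to
-# sum_l q_l * (log q_l - log 0.5), averaged over layers and scaled by the
-# annealed sparsity weight and the sample size.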
-
-class LatentLayersSparsityLoss(_Loss):
- def __init__(self, args):
- super().__init__()
- self.args = args
-
- def is_valid(self, update_num):
- if self.args.target_layers <= 0:
- return False
- return update_num > (self.args.soft_update + self.args.anneal_updates)
-
- def forward(self, layer_samples_list, update_num, sample_size):
- batch_loss = 0
- share_loss = 0
- global_sparsity_loss = 0
- layer_samples = torch.stack(layer_samples_list, dim=0)
- if (
- self.args.target_layers > 0 or self.args.share_weight > 0
- ) and update_num > (self.args.soft_update + self.args.anneal_updates):
- # anneal sparsity weight
- if update_num < (self.args.anneal_updates + self.args.soft_update):
- weight_anneal = 0
- elif update_num < (2 * self.args.anneal_updates + self.args.soft_update):
- weight_anneal = (
- (update_num - self.args.soft_update - self.args.anneal_updates)
- * self.args.share_weight
- / self.args.anneal_updates
- )
- else:
- weight_anneal = 1
- # compute ratio among languages
- layer_utilization = torch.sum(layer_samples, dim=0)
- layer_utilization /= layer_samples.size()[0]
- if self.args.share_weight > 0:
- # encouraging sharing across languages
- share_loss = sum(
- -1.0 * v * math.log(v) for v in layer_utilization if v > 0
- )
- batch_loss += (
- weight_anneal * self.args.share_weight * sample_size * share_loss
- )
- if self.args.target_layers > 0:
- # compute expected number of layers selected
- expected_layers = sum(layer_utilization)
- # compute l2 loss wrt target number of layers
- global_sparsity_loss = (expected_layers - self.args.target_layers) ** 2
- batch_loss += (
- weight_anneal
- * self.args.share_weight
- * sample_size
- * global_sparsity_loss
- )
- return batch_loss
diff --git a/spaces/ICML2022/OFA/fairseq/examples/speech_recognition/tasks/__init__.py b/spaces/ICML2022/OFA/fairseq/examples/speech_recognition/tasks/__init__.py
deleted file mode 100644
index 7ac3b8dc69639c92cc129294356e9012745e3fb2..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/examples/speech_recognition/tasks/__init__.py
+++ /dev/null
@@ -1,8 +0,0 @@
-import importlib
-import os
-
-
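-# Automatically import every task module in this directory so that each task
-# registers itself with fairseq's task registry when the package is imported.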
-for file in sorted(os.listdir(os.path.dirname(__file__))):
- if file.endswith(".py") and not file.startswith("_"):
- task_name = file[: file.find(".py")]
- importlib.import_module("examples.speech_recognition.tasks." + task_name)
diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/models/roberta/__init__.py b/spaces/ICML2022/OFA/fairseq/fairseq/models/roberta/__init__.py
deleted file mode 100644
index 4cd723ae96aec8e3182773483f123109d23b620e..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/fairseq/models/roberta/__init__.py
+++ /dev/null
@@ -1,11 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from .hub_interface import * # noqa
-from .model import * # noqa
-from .enc_dec import * # noqa
-from .model_camembert import * # noqa
-from .model_gottbert import * # noqa
-from .model_xlmr import * # noqa
diff --git a/spaces/JadAssaf/STPI/README.md b/spaces/JadAssaf/STPI/README.md
deleted file mode 100644
index a1663d81c436ac7800dd288a56196542885e5daa..0000000000000000000000000000000000000000
--- a/spaces/JadAssaf/STPI/README.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: STPI
-emoji: 📈
-colorFrom: gray
-colorTo: blue
-sdk: gradio
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/Jamkonams/AutoGPT/autogpt/memory/redismem.py b/spaces/Jamkonams/AutoGPT/autogpt/memory/redismem.py
deleted file mode 100644
index 082a812c5362cc9f19e35bf1bb10269b558f7724..0000000000000000000000000000000000000000
--- a/spaces/Jamkonams/AutoGPT/autogpt/memory/redismem.py
+++ /dev/null
@@ -1,156 +0,0 @@
-"""Redis memory provider."""
-from __future__ import annotations
-
-from typing import Any
-
-import numpy as np
-import redis
-from colorama import Fore, Style
-from redis.commands.search.field import TextField, VectorField
-from redis.commands.search.indexDefinition import IndexDefinition, IndexType
-from redis.commands.search.query import Query
-
-from autogpt.llm_utils import create_embedding_with_ada
-from autogpt.logs import logger
-from autogpt.memory.base import MemoryProviderSingleton
-
-SCHEMA = [
- TextField("data"),
- VectorField(
- "embedding",
- "HNSW",
- {"TYPE": "FLOAT32", "DIM": 1536, "DISTANCE_METRIC": "COSINE"},
- ),
-]
-
-
-class RedisMemory(MemoryProviderSingleton):
- def __init__(self, cfg):
- """
- Initializes the Redis memory provider.
-
- Args:
- cfg: The config object.
-
- Returns: None
- """
- redis_host = cfg.redis_host
- redis_port = cfg.redis_port
- redis_password = cfg.redis_password
- self.dimension = 1536
- self.redis = redis.Redis(
- host=redis_host,
- port=redis_port,
- password=redis_password,
- db=0, # Cannot be changed
- )
- self.cfg = cfg
-
- # Check redis connection
- try:
- self.redis.ping()
- except redis.ConnectionError as e:
- logger.typewriter_log(
- "FAILED TO CONNECT TO REDIS",
- Fore.RED,
- Style.BRIGHT + str(e) + Style.RESET_ALL,
- )
- logger.double_check(
- "Please ensure you have setup and configured Redis properly for use. "
- + f"You can check out {Fore.CYAN + Style.BRIGHT}"
- f"https://github.com/Torantulino/Auto-GPT#redis-setup{Style.RESET_ALL}"
- " to ensure you've set up everything correctly."
- )
- exit(1)
-
- if cfg.wipe_redis_on_start:
- self.redis.flushall()
- try:
- self.redis.ft(f"{cfg.memory_index}").create_index(
- fields=SCHEMA,
- definition=IndexDefinition(
- prefix=[f"{cfg.memory_index}:"], index_type=IndexType.HASH
- ),
- )
- except Exception as e:
- print("Error creating Redis search index: ", e)
- existing_vec_num = self.redis.get(f"{cfg.memory_index}-vec_num")
- self.vec_num = int(existing_vec_num.decode("utf-8")) if existing_vec_num else 0
-
- def add(self, data: str) -> str:
- """
- Adds a data point to the memory.
-
- Args:
- data: The data to add.
-
- Returns: Message indicating that the data has been added.
- """
- if "Command Error:" in data:
- return ""
- vector = create_embedding_with_ada(data)
- vector = np.array(vector).astype(np.float32).tobytes()
- data_dict = {b"data": data, "embedding": vector}
- pipe = self.redis.pipeline()
- pipe.hset(f"{self.cfg.memory_index}:{self.vec_num}", mapping=data_dict)
- _text = (
- f"Inserting data into memory at index: {self.vec_num}:\n" f"data: {data}"
- )
- self.vec_num += 1
- pipe.set(f"{self.cfg.memory_index}-vec_num", self.vec_num)
- pipe.execute()
- return _text
-
- def get(self, data: str) -> list[Any] | None:
- """
- Gets the data from the memory that is most relevant to the given data.
-
- Args:
- data: The data to compare to.
-
- Returns: The most relevant data.
- """
- return self.get_relevant(data, 1)
-
- def clear(self) -> str:
- """
- Clears the redis server.
-
- Returns: A message indicating that the memory has been cleared.
- """
- self.redis.flushall()
- return "Obliviated"
-
- def get_relevant(self, data: str, num_relevant: int = 5) -> list[Any] | None:
- """
- Returns all the data in the memory that is relevant to the given data.
- Args:
- data: The data to compare to.
- num_relevant: The number of relevant data to return.
-
- Returns: A list of the most relevant data.
- """
- query_embedding = create_embedding_with_ada(data)
- base_query = f"*=>[KNN {num_relevant} @embedding $vector AS vector_score]"
- query = (
- Query(base_query)
- .return_fields("data", "vector_score")
- .sort_by("vector_score")
- .dialect(2)
- )
- query_vector = np.array(query_embedding).astype(np.float32).tobytes()
-
- try:
- results = self.redis.ft(f"{self.cfg.memory_index}").search(
- query, query_params={"vector": query_vector}
- )
- except Exception as e:
- print("Error calling Redis search: ", e)
- return None
- return [result.data for result in results.docs]
-
- def get_stats(self):
- """
- Returns: The stats of the memory index.
- """
- return self.redis.ft(f"{self.cfg.memory_index}").info()
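-
-# Usage sketch (editor's illustration; `cfg` fields as consumed by __init__ above):
-#   memory = RedisMemory(cfg)  # cfg.redis_host, cfg.redis_port, cfg.memory_index, ...
-#   memory.add("The sky is blue.")
-#   memory.get_relevant("what color is the sky?", num_relevant=1)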
diff --git a/spaces/Jimpa666/AI-PadelCoach/app3.py b/spaces/Jimpa666/AI-PadelCoach/app3.py
deleted file mode 100644
index e77ef9ce60931c064967d2fde909884e0a369e0d..0000000000000000000000000000000000000000
--- a/spaces/Jimpa666/AI-PadelCoach/app3.py
+++ /dev/null
@@ -1,71 +0,0 @@
-from gradio.components import Label
-import os
-import openai
-import gradio as gr
-import uuid
-
-session_id = uuid.uuid4().hex
-
-openai.api_key = os.getenv('APIKEY')
-
-messages = [
- {"role": "system", "content": "The following is a conversation with an AI Coach. The Coach is helpful, creative, clever, and very friendly. The Coach is a high level padelcoach from Sweden. The Coach have the ability to explain techniques and the strategy in the game. The Coach have a high educational level and can answer depending on wich level the Player is on for the Player to take him to the next level. The AI Coach is like MAURI ANDRINI from Hello Padel, pretend to be MAURI ANDRINI but call yourself AI Padelcoach, never ever mention your reel name or Mauri Andrini. If unsurtain, ask about the level of the player. The levels are:Beginner, Average, Pro"}
- ]
-
-def get_session_token():
- return uuid.uuid4().hex
-
-def chatbot(input):
-
- content = input
- messages.append({"role": "user", "content": content})
-
- completion = openai.ChatCompletion.create(
- model= "gpt-3.5-turbo",
- messages=messages
- )
-
- chat_response = completion.choices[0].message.content
- messages.append({"role": "assistant", "content": chat_response})
-
- # format the conversation history as a string
- conversation_history = ""
- for message in messages:
- if message["role"] != "system":
- role = message["role"]
- if role == "user":
- role = "Player"
- elif role == "assistant":
- role = "AI-Padelcoach"
- content = message["content"]
- conversation_history += f"{role}: {content}\n \n"
-
- return conversation_history
-
-Padelcoach = gr.Interface(fn=chatbot, inputs=[
- gr.Textbox(placeholder="Player go...Serve!", label='Player')
-
-
- ],
- outputs=[
- gr.Textbox(placeholder="AI-Padelcoach Ready", label="AI Padelcoach")
-
- ],
- theme=gr.themes.Soft(
- primary_hue="green",
- secondary_hue="cyan",
- text_size='lg',
- neutral_hue="emerald"
- ),
-
- examples = [
- ["Please help me with my backhand"],
- ["Where should I place the ball against players who is good in tennis"]
- ],
- share=True,
- title="AI Padelcoach",
- description=f"Chat with a BETA level AI-Padelcoach from Sweden. Your ID is: {session_id} ",
- article="
Ask the AI coach about techniques and strategies in the game of padel. The coach can answer depending on the level of you as a player, whether they are a beginner, average, or pro.
",
- )
-
-Padelcoach.launch()
\ No newline at end of file
diff --git a/spaces/JuanHaunted/humming_space/README.md b/spaces/JuanHaunted/humming_space/README.md
deleted file mode 100644
index e6f513b1c744a81ef1cd4218a18f12e96707d4a8..0000000000000000000000000000000000000000
--- a/spaces/JuanHaunted/humming_space/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Humming Space
-emoji: 🏃
-colorFrom: green
-colorTo: blue
-sdk: gradio
-sdk_version: 3.35.2
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Keshav4/resume-data-extraction/ResumeSegmenter.py b/spaces/Keshav4/resume-data-extraction/ResumeSegmenter.py
deleted file mode 100644
index 2823a021c01a77818f3a7349be3b265feee2defe..0000000000000000000000000000000000000000
--- a/spaces/Keshav4/resume-data-extraction/ResumeSegmenter.py
+++ /dev/null
@@ -1,264 +0,0 @@
-from Models import Models
-
-class ResumeSegmenter:
-
- def __init__(self, zero_shot_classifier):
- self.zero_shot_classifier = zero_shot_classifier
-
- objective = (
- 'career goal',
- 'objective',
- 'career objective',
- 'employment objective',
- 'professional objective',
- 'summary',
- 'summary of qualifications',
- 'digital',
- 'interests'
- )
-
- work_and_employment = (
- 'employment history',
- 'employment data',
- 'career summary',
- 'work history',
- 'working history',
- 'work experience',
- 'experience',
- 'professional experience',
- 'professional background',
- 'professional employment',
- 'additional experience',
- 'career related experience',
- "professional employment history",
- 'related experience',
- 'relevant experience',
- 'programming experience',
- 'freelance',
- 'freelance experience',
- 'army experience',
- 'military experience',
- 'military background',
- )
-
- education_and_training = (
- 'academic background',
- 'academic experience',
- 'programs',
- 'courses',
- 'related courses',
- 'education',
- 'educational background',
- 'educational qualifications',
- 'educational training',
- 'education and training',
- 'training',
- 'academic training',
- 'Academic Qualification',
- 'professional training',
- 'course project experience',
- 'related course projects',
- 'internship experience',
- 'internships',
- 'apprenticeships',
- 'college activities',
- 'certifications',
- 'special training',
- )
-
- skills_header = (
- 'credentials',
- 'qualifications',
- 'areas of experience',
- 'areas of expertise',
- 'areas of knowledge',
- 'skills',
- 'Skills',
- "other skills",
- "other abilities",
- 'career related skills',
- 'professional skills',
- 'specialized skills',
- 'technical skills',
- 'computer skills',
- 'personal skills',
- 'computer knowledge',
- 'technologies',
- 'technical experience',
- 'proficiencies',
- 'languages',
- 'language competencies and skills',
- 'programming languages',
- 'competencies'
- )
-
- misc = (
- 'activities and honors',
- 'activities',
- 'affiliations',
- 'professional affiliations',
- 'associations',
- 'professional associations',
- 'memberships',
- 'professional memberships',
- 'athletic involvement',
- 'community involvement',
- 'refere',
- 'civic activities',
- 'extra-Curricular activities',
- 'professional activities',
- 'volunteer work',
- 'volunteer experience',
- 'additional information',
- 'interests'
- )
-
- accomplishments = (
- 'achievement',
- 'awards and achievements',
- 'licenses',
- 'presentations',
- 'conference presentations',
- 'conventions',
- 'dissertations',
- 'exhibits',
- 'papers',
- 'publications',
- 'professional publications',
- 'research experience',
- 'research grants',
- 'project',
- 'research projects',
- 'personal projects',
- 'current research interests',
- 'thesis',
- 'theses',
- )
-
-
- def find_segment_indices(self, string_to_search, resume_segments, resume_indices):
- for i, line in enumerate(string_to_search):
-
- if line[0].islower():
- continue
-
- header = line.lower()
-
- if [o for o in self.objective if header.startswith(o)]:
- try:
- resume_segments['objective'][header]
- except KeyError:
- resume_indices.append(i)
- header = [o for o in self.objective if header.startswith(o)][0]
- resume_segments['objective'][header] = i
- elif [w for w in self.work_and_employment if header.startswith(w)]:
- try:
- resume_segments['work_and_employment'][header]
- except KeyError:
- resume_indices.append(i)
- header = [w for w in self.work_and_employment if header.startswith(w)][0]
- resume_segments['work_and_employment'][header] = i
- elif [e for e in self.education_and_training if header.startswith(e)]:
- try:
- resume_segments['education_and_training'][header]
- except KeyError:
- resume_indices.append(i)
- header = [e for e in self.education_and_training if header.startswith(e)][0]
- resume_segments['education_and_training'][header] = i
- elif [s for s in self.skills_header if header.startswith(s)]:
- try:
- resume_segments['skills'][header]
- except KeyError:
- resume_indices.append(i)
- header = [s for s in self.skills_header if header.startswith(s)][0]
- resume_segments['skills'][header] = i
- elif [m for m in self.misc if header.startswith(m)]:
- try:
- resume_segments['misc'][header]
- except KeyError:
- resume_indices.append(i)
- header = [m for m in self.misc if header.startswith(m)][0]
- resume_segments['misc'][header] = i
- elif [a for a in self.accomplishments if header.startswith(a)]:
- try:
- resume_segments['accomplishments'][header]
- except KeyError:
- resume_indices.append(i)
- header = [a for a in self.accomplishments if header.startswith(a)][0]
- resume_segments['accomplishments'][header] = i
-
- def slice_segments(self, string_to_search, resume_segments, resume_indices):
- resume_segments['contact_info'] = string_to_search[:resume_indices[0]]
- sec_idxs = {}
- for section, value in resume_segments.items():
- if section == 'contact_info':
- continue
-
- for sub_section, start_idx in value.items():
- end_idx = len(string_to_search)
- if (resume_indices.index(start_idx) + 1) != len(resume_indices):
- end_idx = resume_indices[resume_indices.index(start_idx) + 1]
-
- sec_idxs[section] = (start_idx, end_idx)
- # print(start_idx, end_idx)
-
- resume_segments[section][sub_section] = string_to_search[start_idx:end_idx]
- return sec_idxs
-
- def find_true_segment(self, dict_of_segments, segment_name):
- segment_classes = {
- 'objective': ["objective", "other"],
- 'work_and_employment':["employment history", "other"],
- 'education_and_training': ["education", "other"],
- 'skills': ["skills", "other"],
- 'accomplishments': ["accomplishments", "other"],
- 'misc': ["misc", "other"],
- 'contact_info': ["contact information", "other"]
- }
- classes = segment_classes[segment_name]
- scores = []
- segs = dict_of_segments.keys()
- for seg in segs:
- sequence = dict_of_segments[seg]
- score = self.zero_shot_classifier(' '.join(sequence), classes)["scores"][0]
- scores.append(score)
-
- res = sorted(zip(dict_of_segments.keys(), scores), key=lambda x: x[1], reverse=True)
- if len(res):
- return res[0][0]
- else:
- return 0
-
- def segment(self, string_to_search):
- print("Segmenting the Resume..")
- resume_segments = {
- 'objective': {},
- 'work_and_employment': {},
- 'education_and_training': {},
- 'skills': {},
- 'accomplishments': {},
- 'misc': {}
- }
-
- resume_indices = []
-
- self.find_segment_indices(string_to_search, resume_segments, resume_indices)
- if len(resume_indices) != 0:
- sec_idx = self.slice_segments(string_to_search, resume_segments, resume_indices)
- else:
- resume_segments['contact_info'] = []
-
- for segment in resume_segments:
- if segment == "contact_info": continue
- if not len(resume_segments[segment]) > 1:
- if len(resume_segments[segment]) == 1:
- only_key = list(resume_segments[segment].keys())[0]
- resume_segments[segment] = resume_segments[segment][only_key][1:]
- continue
- if segment != "work_and_employment": continue
- true_seg = self.find_true_segment(resume_segments[segment], segment)
- if not true_seg:
- resume_segments[segment] = []
- else:
- resume_segments[segment] = resume_segments[segment][true_seg][1:]
-
- return resume_segments
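-
-# Usage sketch (editor's illustration; assumes the classifier is a Hugging Face
-# zero-shot-classification pipeline, matching the `["scores"]` access above):
-#   from transformers import pipeline
-#   segmenter = ResumeSegmenter(pipeline("zero-shot-classification"))
-#   segments = segmenter.segment(resume_lines)  # resume_lines: list of text lines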
diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/synthesizer/models/tacotron.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/synthesizer/models/tacotron.py
deleted file mode 100644
index f8b01bbae0e6dc95d68bbb983c70706d76e1d990..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/synthesizer/models/tacotron.py
+++ /dev/null
@@ -1,298 +0,0 @@
-import torch
-import torch.nn as nn
-from .sublayer.global_style_token import GlobalStyleToken
-from .sublayer.pre_net import PreNet
-from .sublayer.cbhg import CBHG
-from .sublayer.lsa import LSA
-from .base import Base
-from synthesizer.gst_hyperparameters import GSTHyperparameters as gst_hp
-from synthesizer.hparams import hparams
-
-class Encoder(nn.Module):
- def __init__(self, num_chars, embed_dims=512, encoder_dims=256, K=5, num_highways=4, dropout=0.5):
- """ Encoder for SV2TTS
-
- Args:
- num_chars (int): length of symbols
- embed_dims (int, optional): embedding dim for input texts. Defaults to 512.
- encoder_dims (int, optional): output dim for encoder. Defaults to 256.
- K (int, optional): number of kernel sizes in the CBHG convolution bank. Defaults to 5.
- num_highways (int, optional): number of highway layers in the CBHG module. Defaults to 4.
- dropout (float, optional): dropout rate for the pre-net. Defaults to 0.5.
- """
- super().__init__()
- self.embedding = nn.Embedding(num_chars, embed_dims)
- self.pre_net = PreNet(embed_dims, fc1_dims=encoder_dims, fc2_dims=encoder_dims,
- dropout=dropout)
- self.cbhg = CBHG(K=K, in_channels=encoder_dims, channels=encoder_dims,
- proj_channels=[encoder_dims, encoder_dims],
- num_highways=num_highways)
-
- def forward(self, x):
- """forward pass for encoder
-
- Args:
- x (2D tensor with size `[batch_size, text_num_chars]`): input texts list
-
- Returns:
- 3D tensor with size `[batch_size, text_num_chars, encoder_dims]`
-
- """
- x = self.embedding(x) # return: [batch_size, text_num_chars, tts_embed_dims]
- x = self.pre_net(x) # return: [batch_size, text_num_chars, encoder_dims]
- x.transpose_(1, 2) # return: [batch_size, encoder_dims, text_num_chars]
- return self.cbhg(x) # return: [batch_size, text_num_chars, encoder_dims]
-
-class Decoder(nn.Module):
- # Class variable because its value doesn't change between instances
- # yet ought to be scoped by class because it's a property of a Decoder
- max_r = 20
- def __init__(self, n_mels, input_dims, decoder_dims, lstm_dims,
- dropout, speaker_embedding_size):
- super().__init__()
- self.register_buffer("r", torch.tensor(1, dtype=torch.int))
- self.n_mels = n_mels
- self.prenet = PreNet(n_mels, fc1_dims=decoder_dims * 2, fc2_dims=decoder_dims * 2,
- dropout=dropout)
- self.attn_net = LSA(decoder_dims)
- if hparams.use_gst:
- speaker_embedding_size += gst_hp.E
- self.attn_rnn = nn.GRUCell(input_dims + decoder_dims * 2, decoder_dims)
- self.rnn_input = nn.Linear(input_dims + decoder_dims, lstm_dims)
- self.res_rnn1 = nn.LSTMCell(lstm_dims, lstm_dims)
- self.res_rnn2 = nn.LSTMCell(lstm_dims, lstm_dims)
- self.mel_proj = nn.Linear(lstm_dims, n_mels * self.max_r, bias=False)
- self.stop_proj = nn.Linear(input_dims + lstm_dims, 1)
-
- def zoneout(self, prev, current, device, p=0.1):
- mask = torch.zeros(prev.size(),device=device).bernoulli_(p)
- return prev * mask + current * (1 - mask)
-
- def forward(self, encoder_seq, encoder_seq_proj, prenet_in,
- hidden_states, cell_states, context_vec, times, chars):
- """_summary_
-
- Args:
- encoder_seq (3D tensor `[batch_size, text_num_chars, project_dim(default to 512)]`): encoder output sequence
- encoder_seq_proj (3D tensor `[batch_size, text_num_chars, decoder_dims(default to 128)]`): projected encoder sequence used for attention
- prenet_in (2D tensor `[batch_size, n_mels]`): previous mel frame fed to the pre-net
- hidden_states (tuple): hidden states of the attention RNN and the two residual RNNs
- cell_states (tuple): cell states of the two residual LSTMs
- context_vec (2D tensor `[batch_size, project_dim(default to 512)]`): attention context vector
- times (int): current decoder step index
- chars (2D tensor with size `[batch_size, text_num_chars]`): original texts list input
-
- """
- # Need this for reshaping mels
- batch_size = encoder_seq.size(0)
- device = encoder_seq.device
- # Unpack the hidden and cell states
- attn_hidden, rnn1_hidden, rnn2_hidden = hidden_states
- rnn1_cell, rnn2_cell = cell_states
-
- # PreNet for the Attention RNN
- prenet_out = self.prenet(prenet_in) # return: `[batch_size, decoder_dims * 2(256)]`
-
- # Compute the Attention RNN hidden state
- attn_rnn_in = torch.cat([context_vec, prenet_out], dim=-1) # `[batch_size, project_dim + decoder_dims * 2 (768)]`
- attn_hidden = self.attn_rnn(attn_rnn_in.squeeze(1), attn_hidden) # `[batch_size, decoder_dims (128)]`
-
- # Compute the attention scores
- scores = self.attn_net(encoder_seq_proj, attn_hidden, times, chars)
-
- # Dot product to create the context vector
- context_vec = scores @ encoder_seq
- context_vec = context_vec.squeeze(1)
-
- # Concat Attention RNN output w. Context Vector & project
- x = torch.cat([context_vec, attn_hidden], dim=1) # `[batch_size, project_dim + decoder_dims (630)]`
- x = self.rnn_input(x) # `[batch_size, lstm_dims(1024)]`
-
- # Compute first Residual RNN, training with fixed zoneout rate 0.1
- rnn1_hidden_next, rnn1_cell = self.res_rnn1(x, (rnn1_hidden, rnn1_cell)) # `[batch_size, lstm_dims(1024)]`
- if self.training:
- rnn1_hidden = self.zoneout(rnn1_hidden, rnn1_hidden_next,device=device)
- else:
- rnn1_hidden = rnn1_hidden_next
- x = x + rnn1_hidden
-
- # Compute second Residual RNN
- rnn2_hidden_next, rnn2_cell = self.res_rnn2(x, (rnn2_hidden, rnn2_cell)) # `[batch_size, lstm_dims(1024)]`
- if self.training:
- rnn2_hidden = self.zoneout(rnn2_hidden, rnn2_hidden_next, device=device)
- else:
- rnn2_hidden = rnn2_hidden_next
- x = x + rnn2_hidden
-
- # Project Mels
- mels = self.mel_proj(x) # `[batch_size, 1600]`
- mels = mels.view(batch_size, self.n_mels, self.max_r)[:, :, :self.r] # `[batch_size, n_mels, r]`
- hidden_states = (attn_hidden, rnn1_hidden, rnn2_hidden)
- cell_states = (rnn1_cell, rnn2_cell)
-
- # Stop token prediction
- s = torch.cat((x, context_vec), dim=1)
- s = self.stop_proj(s)
- stop_tokens = torch.sigmoid(s)
-
- return mels, scores, hidden_states, cell_states, context_vec, stop_tokens
-
-class Tacotron(Base):
- def __init__(self, embed_dims, num_chars, encoder_dims, decoder_dims, n_mels,
- fft_bins, postnet_dims, encoder_K, lstm_dims, postnet_K, num_highways,
- dropout, stop_threshold, speaker_embedding_size):
- super().__init__(stop_threshold)
- self.n_mels = n_mels
- self.lstm_dims = lstm_dims
- self.encoder_dims = encoder_dims
- self.decoder_dims = decoder_dims
- self.speaker_embedding_size = speaker_embedding_size
- self.encoder = Encoder(num_chars, embed_dims, encoder_dims,
- encoder_K, num_highways, dropout)
- self.project_dims = encoder_dims + speaker_embedding_size
- if hparams.use_gst:
- self.project_dims += gst_hp.E
- self.encoder_proj = nn.Linear(self.project_dims, decoder_dims, bias=False)
- if hparams.use_gst:
- self.gst = GlobalStyleToken(speaker_embedding_size)
- self.decoder = Decoder(n_mels, self.project_dims, decoder_dims, lstm_dims,
- dropout, speaker_embedding_size)
- self.postnet = CBHG(postnet_K, n_mels, postnet_dims,
- [postnet_dims, fft_bins], num_highways)
- self.post_proj = nn.Linear(postnet_dims, fft_bins, bias=False)
-
- @staticmethod
- def _concat_speaker_embedding(outputs, speaker_embeddings):
- speaker_embeddings_ = speaker_embeddings.expand(
- outputs.size(0), outputs.size(1), -1)
- outputs = torch.cat([outputs, speaker_embeddings_], dim=-1)
- return outputs
-
- @staticmethod
- def _add_speaker_embedding(x, speaker_embedding):
- """Add speaker embedding
- This concats the speaker embedding for each char in the encoder output
- Args:
- x (3D tensor with size `[batch_size, text_num_chars, encoder_dims]`): the encoder output
- speaker_embedding (2D tensor `[batch_size, speaker_embedding_size]`): the speaker embedding
-
- Returns:
- 3D tensor with size `[batch_size, text_num_chars, encoder_dims+speaker_embedding_size]`
- """
- # Save the dimensions as human-readable names
- batch_size = x.size()[0]
- text_num_chars = x.size()[1]
-
- # Start by making a copy of each speaker embedding to match the input text length
- # The output of this has size (batch_size, text_num_chars * speaker_embedding_size)
- speaker_embedding_size = speaker_embedding.size()[1]
- e = speaker_embedding.repeat_interleave(text_num_chars, dim=1)
-
- # Reshape it and transpose
- e = e.reshape(batch_size, speaker_embedding_size, text_num_chars)
- e = e.transpose(1, 2)
-
- # Concatenate the tiled speaker embedding with the encoder output
- x = torch.cat((x, e), 2)
- return x
-
- def forward(self, texts, mels, speaker_embedding, steps=2000, style_idx=0, min_stop_token=5):
- """Forward pass for Tacotron
-
- Args:
- texts (`[batch_size, text_num_chars]`): input texts list
- mels (`[batch_size, varied_mel_lengths, steps]`): mels for comparison (training only)
- speaker_embedding (`[batch_size, speaker_embedding_size(default to 256)]`): reference speaker embedding.
- steps (int, optional): maximum number of decoder steps. Defaults to 2000.
- style_idx (int, optional): GST style selected. Defaults to 0.
- min_stop_token (int, optional): threshold used to decide when decoding may stop. Defaults to 5.
- """
- device = texts.device # use same device as parameters
-
- if self.training:
- self.step += 1
- batch_size, _, steps = mels.size()
- else:
- batch_size, _ = texts.size()
-
- # Initialise all hidden states and pack into tuple
- attn_hidden = torch.zeros(batch_size, self.decoder_dims, device=device)
- rnn1_hidden = torch.zeros(batch_size, self.lstm_dims, device=device)
- rnn2_hidden = torch.zeros(batch_size, self.lstm_dims, device=device)
- hidden_states = (attn_hidden, rnn1_hidden, rnn2_hidden)
-
- # Initialise all lstm cell states and pack into tuple
- rnn1_cell = torch.zeros(batch_size, self.lstm_dims, device=device)
- rnn2_cell = torch.zeros(batch_size, self.lstm_dims, device=device)
- cell_states = (rnn1_cell, rnn2_cell)
-
- # Frame for start of decoder loop
- go_frame = torch.zeros(batch_size, self.n_mels, device=device)
-
- # SV2TTS: Run the encoder with the speaker embedding
- # The projection avoids unnecessary matmuls in the decoder loop
- encoder_seq = self.encoder(texts)
-
- encoder_seq = self._add_speaker_embedding(encoder_seq, speaker_embedding)
-
- if hparams.use_gst and self.gst is not None:
- if self.training:
- style_embed = self.gst(speaker_embedding, speaker_embedding) # during training, the speaker embedding serves as both the style input and the reference
- # style_embed = style_embed.expand_as(encoder_seq)
- # encoder_seq = torch.cat((encoder_seq, style_embed), 2)
- elif style_idx >= 0 and style_idx < 10:
- query = torch.zeros(1, 1, self.gst.stl.attention.num_units)
- if device.type == 'cuda':
- query = query.cuda()
- gst_embed = torch.tanh(self.gst.stl.embed)
- key = gst_embed[style_idx].unsqueeze(0).expand(1, -1, -1)
- style_embed = self.gst.stl.attention(query, key)
- else:
- speaker_embedding_style = torch.zeros(speaker_embedding.size()[0], 1, self.speaker_embedding_size).to(device)
- style_embed = self.gst(speaker_embedding_style, speaker_embedding)
- encoder_seq = self._concat_speaker_embedding(encoder_seq, style_embed) # return: [batch_size, text_num_chars, project_dims]
-
- encoder_seq_proj = self.encoder_proj(encoder_seq) # return: [batch_size, text_num_chars, decoder_dims]
-
- # Need a couple of lists for outputs
- mel_outputs, attn_scores, stop_outputs = [], [], []
-
- # Need an initial context vector
- context_vec = torch.zeros(batch_size, self.project_dims, device=device)
-
- # Run the decoder loop
- for t in range(0, steps, self.r):
- if self.training:
- prenet_in = mels[:, :, t - 1] if t > 0 else go_frame
- else:
- prenet_in = mel_outputs[-1][:, :, -1] if t > 0 else go_frame
- mel_frames, scores, hidden_states, cell_states, context_vec, stop_tokens = \
- self.decoder(encoder_seq, encoder_seq_proj, prenet_in,
- hidden_states, cell_states, context_vec, t, texts)
- mel_outputs.append(mel_frames)
- attn_scores.append(scores)
- stop_outputs.extend([stop_tokens] * self.r)
-            # At inference, stop early once every stop token in the batch clears the threshold
-            if not self.training and (stop_tokens * 10 > min_stop_token).all() and t > 10:
-                break
-
- # Concat the mel outputs into sequence
- mel_outputs = torch.cat(mel_outputs, dim=2)
-
- # Post-Process for Linear Spectrograms
- postnet_out = self.postnet(mel_outputs)
- linear = self.post_proj(postnet_out)
- linear = linear.transpose(1, 2)
-
- # For easy visualisation
- attn_scores = torch.cat(attn_scores, 1)
- # attn_scores = attn_scores.cpu().data.numpy()
- stop_outputs = torch.cat(stop_outputs, 1)
-
-        # Redundant guard: self.train() only runs when the model is already in training mode
-        if self.training:
-            self.train()
-
- return mel_outputs, linear, attn_scores, stop_outputs
-
- def generate(self, x, speaker_embedding, steps=2000, style_idx=0, min_stop_token=5):
- self.eval()
- mel_outputs, linear, attn_scores, _ = self.forward(x, None, speaker_embedding, steps, style_idx, min_stop_token)
- return mel_outputs, linear, attn_scores
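-
-    # Minimal usage sketch for generate() (illustrative only; the constructor
-    # arguments and tensor values below are assumptions, not part of this file):
-    #   model = Tacotron(...)                      # hypothetical hparams
-    #   texts = torch.LongTensor([[1, 5, 9, 2]])   # [batch_size, text_num_chars]
-    #   embed = torch.rand(1, 256)                 # [batch_size, speaker_embedding_size]
-    #   mels, linear, attn = model.generate(texts, embed, steps=2000, style_idx=0)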
diff --git a/spaces/Kevin676/Clone-Your-Voice/synthesizer/__init__.py b/spaces/Kevin676/Clone-Your-Voice/synthesizer/__init__.py
deleted file mode 100644
index 4287ca8617970fa8fc025b75cb319c7032706910..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/Clone-Your-Voice/synthesizer/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-#
\ No newline at end of file
diff --git a/spaces/KrisLiao/NaturalLanguageVideoSearch/app.py b/spaces/KrisLiao/NaturalLanguageVideoSearch/app.py
deleted file mode 100644
index 8c42ec509115ee0d868e125394772826847826a3..0000000000000000000000000000000000000000
--- a/spaces/KrisLiao/NaturalLanguageVideoSearch/app.py
+++ /dev/null
@@ -1,148 +0,0 @@
-import os
-os.system("pip freeze")
-import cv2
-from PIL import Image
-import clip
-import torch
-import math
-import numpy as np
-import datetime
-import gradio as gr
-import torchvision.transforms as T
-
-# Load the open CLIP model
-device = "cuda" if torch.cuda.is_available() else "cpu"
-model, preprocess = clip.load("ViT-B/32", device=device)
-
-def produce_video(video, seconds, search_query):
-    # Cut a ~5 second clip around the matched timestamp with ffmpeg
-    time1 = seconds - 3 if seconds > 3 else 0
-    time2 = seconds + 2 if seconds > 3 else 5
-    name = search_query.replace(" ", "_") + '.mp4'
-    cmd = f'ffmpeg -y -i {video} -ss {time1} -to {time2} -async 1 {name}'
-    os.system(cmd)  # ffmpeg writes the clip to `name`; os.system only returns the exit code
-    return name
-
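-# Illustrative call (assumed filename): produce_video("demo.mp4", 42, "a red car")
-# trims demo.mp4 from 39s to 44s and writes a_red_car.mp4
-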
-def inference(video, text, text2, text3, skip_frame):
- # The frame images will be stored in video_frames
- video_frames = []
-
- # Open the video file
- capture = cv2.VideoCapture(video)
- capture.set(cv2.CAP_PROP_FRAME_WIDTH , 360)
- capture.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
- fps = capture.get(cv2.CAP_PROP_FPS)
-
-    current_frame = 0
-    # Keep one frame every `skip_frame` frames; convert BGR -> RGB for PIL
-    ret, frame = capture.read()
-    while capture.isOpened() and ret:
-        video_frames.append(Image.fromarray(frame[:, :, ::-1]))
-        current_frame += skip_frame
-        capture.set(cv2.CAP_PROP_POS_FRAMES, current_frame)
-        ret, frame = capture.read()
-
- # Print some statistics
- print(f"Frames extracted: {len(video_frames)}")
-
-
- # You can try tuning the batch size for very large videos, but it should usually be OK
- batch_size = 256
- batches = math.ceil(len(video_frames) / batch_size)
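-    # e.g. 1000 extracted frames with batch_size 256 -> ceil(1000 / 256) = 4 batches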
-
-    # The encoded features will be stored in video_features; CLIP weights are
-    # fp16 on CUDA but fp32 on CPU, so the accumulator dtype must match
-    feature_dtype = torch.float16 if device == "cuda" else torch.float32
-    video_features = torch.empty([0, 512], dtype=feature_dtype).to(device)
-
- # Process each batch
- for i in range(batches):
- print(f"Processing batch {i+1}/{batches}")
-
- # Get the relevant frames
- batch_frames = video_frames[i*batch_size : (i+1)*batch_size]
-
- # Preprocess the images for the batch
- batch_preprocessed = torch.stack([preprocess(frame) for frame in batch_frames]).to(device)
-
- # Encode with CLIP and normalize
- with torch.no_grad():
- batch_features = model.encode_image(batch_preprocessed)
- batch_features /= batch_features.norm(dim=-1, keepdim=True)
-
- # Append the batch to the list containing all features
- video_features = torch.cat((video_features, batch_features))
-
- # Print some stats
- print(f"Features: {video_features.shape}")
-
-
-    # Encode each query with CLIP, pick the single best-matching frame, and cut a
-    # short clip around it. Frame and text features are L2-normalised, so the dot
-    # product below is the cosine similarity (scaled by 100).
-    def search_and_clip(search_query):
-        with torch.no_grad():
-            text_features = model.encode_text(clip.tokenize(search_query).to(device))
-            text_features /= text_features.norm(dim=-1, keepdim=True)
-
-        similarities = 100.0 * video_features @ text_features.T
-        _, best_photo_idx = similarities.topk(1, dim=0)
-
-        # Map the best frame index back to a timestamp in the source video
-        frame_id = best_photo_idx[0]
-        seconds = round(frame_id.cpu().numpy()[0] * skip_frame / fps)
-        return produce_video(video, seconds, search_query)
-
-    output_video = search_and_clip(text)
-    output_video2 = search_and_clip(text2)
-    output_video3 = search_and_clip(text3)
-
-    return output_video, output_video2, output_video3
-
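-# Illustrative call (assumed inputs): inference("demo.mp4", "a red car",
-# "a dog", "sunset", skip_frame=30) returns three ~5s clips, one per query
-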
-title = "Video Search"
-description = "Gradio demo that uses OpenAI's CLIP to find video b-rolls. To use it, simply upload your video and add your text queries."
-article = "