diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Be2works Full Version How to Repair Laptop Batteries with Ease.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Be2works Full Version How to Repair Laptop Batteries with Ease.md
deleted file mode 100644
index a842cddffe7105cd369e31fbc1f7f6c99bd541a3..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Be2works Full Version How to Repair Laptop Batteries with Ease.md
+++ /dev/null
@@ -1,122 +0,0 @@
-
-
Download GX Developer 8.7 Full Crack
-
If you are looking for reliable and powerful software to program and control Mitsubishi PLCs, you might want to try GX Developer. This software is widely used by engineers and technicians who work with Mitsubishi controllers in various industries. In this article, we will show you how to download GX Developer 8.7 full crack and how to use it effectively.
GX Developer is the basic controller programming environment, supporting the Q Process, Q, L and FX Series as well as the legacy A and AnS Series controllers. It is part of the MELSOFT series of software products developed by Mitsubishi Electric. It allows you to create, edit, debug and transfer programs for Mitsubishi PLCs using various programming languages, such as ladder logic, structured text, function block diagram, sequential function chart and MELSAP-L.
-
Features and benefits of GX Developer
-
Some of the features and benefits of GX Developer are:
-
-
It supports multiple controllers and communication protocols, such as RS-232C, RS-422/485, Ethernet, USB and CC-Link.
-
It has a user-friendly interface that allows you to easily access various functions and tools.
-
It has a powerful editor that supports syntax highlighting, auto-completion, drag-and-drop, copy-and-paste and undo-redo operations.
-
It has a built-in simulator that allows you to test your program without connecting to a real controller.
-
It has a comprehensive online help system that provides detailed information on each function and command.
-
It has a data logging function that allows you to record and analyze the data from the controller.
-
It has a password protection function that allows you to secure your program from unauthorized access or modification.
-
-
System requirements for GX Developer
-
The minimum system requirements for GX Developer are:
-
-
Operating system: Windows XP/Vista/7/8/10 (32-bit or 64-bit)
-
CPU: Pentium III 800 MHz or higher
-
Memory: 256 MB or more
-
HDD: 500 MB or more of free space
-
Display: XGA (1024 x 768) or higher resolution
-
Others: CD-ROM drive, mouse, keyboard, printer (optional)
-
-
How to download GX Developer 8.7 full crack?
-
To download GX Developer 8.7 full crack, you need to follow these steps:
-
-
Step 1: Visit the official website of Mitsubishi Electric
-
The first step is to visit the official website of Mitsubishi Electric and go to its software download section.
Step 2: Select the software category and GX Developer product
-
The next step is to select the software category and GX Developer product from the list. You can use the search box or the filters to narrow down your options. For example, you can type "GX Developer" in the search box or select "MELSOFT" as the large category and "GX series" as the small category. Then, you will see a list of manuals for different versions of GX Developer. You need to select "GX Developer Version 8 Operating Manual" as your desired product.
-
Step 3: Download the installation file and the crack file
-
The third step is to download the installation file and the crack file for GX Developer 8.7 full crack. You can click on the "Download" button next to each file name to start downloading them. The installation file is named "GXDEV807E.zip" and has a size of about 1 GB. The crack file is named "GXDEV807E_Crack.zip" and has a size of about 10 MB.
-
Step 4: Install GX Developer and apply the crack
-
The final step is to install GX Developer and apply the crack to activate it. You need to follow these sub-steps:
-
-
Extract both zip files using a tool like WinRAR or 7-Zip.
-
Run the setup.exe file from the extracted folder of "GXDEV807E.zip". Follow the instructions on the screen to complete the installation process.
-
Copy all the files from the extracted folder of "GXDEV807E_Crack.zip" and paste them into the installation folder of GX Developer (the folder you chose during setup). Overwrite any existing files if prompted.
-
Run GX Developer from your desktop shortcut or start menu. You should see a message saying "License registration completed" in the bottom right corner of the screen.
-
Congratulations! You have successfully installed GX Developer 8.7 full crack on your computer.
-
-
How to use GX Developer 8.7?
-
To use GX Developer 8.7 effectively, you need to follow these steps:
-
Create a new project or open an existing one
-
The first step is to create a new project or open an existing one in GX Developer. You can do this by clicking on the "File" menu and selecting "New Project" or "Open Project". A project is a collection of files that contain your program, settings, comments and other information related to your controller. You need to specify a project name, a controller type, a communication method and other options when creating a new project.
-
Configure the controller settings and communication parameters
-
The next step is to configure the controller settings and communication parameters in GX Developer. You can do this by clicking on the "Project" menu and selecting "Parameter". A parameter is a value that determines how your controller operates or communicates with other devices. You need to set up parameters such as input/output assignments, device comments, memory allocation, network configuration, password protection and others according to your needs.
-
Write, edit and debug your program using various tools and languages
-
The third step is to write, edit and debug your program using various tools and languages in GX Developer. You can do this by clicking on the "Program" menu and selecting "Edit". An edit window will open where you can write your program using different programming languages such as ladder logic (LD), structured text (ST), function block diagram (FBD), sequential function chart (SFC) or MELSAP-L (ML). You can also use various tools such as syntax check, cross reference, comment input/output, device monitor/editor and others to help you write your program more efficiently.
-
Transfer your program to your controller and monitor its operation
-
The final step is to transfer your program to your controller and monitor its operation in GX Developer. You can do this by clicking on the "Online" menu and selecting "Transfer To PLC" or "Transfer From PLC". A transfer window will open where you can select which files you want to transfer between your computer and your controller. You can also open a monitor window or a test window to monitor or test the operation of your controller and your program, using functions such as start/stop, force on/off, data change, data trace and others to control or observe them.
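Program transfer and monitoring normally happen inside the GX Developer GUI, but it can help to see what reading controller data from a PC looks like in code. The sketch below is a rough illustration only: it assumes the third-party pymcprotocol Python package (not part of GX Developer) and a PLC whose Ethernet module speaks the MC protocol, and the IP address, port and device names are placeholders for your own setup.

```python
import pymcprotocol  # third-party package, assumed installed: pip install pymcprotocol

# Create an MC-protocol client (Type 3E frame) and connect to the PLC's
# Ethernet module. The IP address and port below are placeholders.
plc = pymcprotocol.Type3E()
plc.connect("192.168.0.10", 5007)

# Read ten word devices starting at D100 and eight bit devices starting at X0,
# roughly the kind of data GX Developer's device monitor shows on screen.
word_values = plc.batchread_wordunits(headdevice="D100", readsize=10)
bit_values = plc.batchread_bitunits(headdevice="X0", readsize=8)
print(word_values, bit_values)

plc.close()
```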
-
Conclusion
-
In conclusion, GX Developer is a powerful and reliable software that allows you to program and control Mitsubishi PLCs. It has many features and benefits that make it easy and convenient to use. It supports various programming languages and communication protocols. It also has a built-in simulator and a data logging function. To download GX Developer 8.7 full crack, you need to visit the official website of Mitsubishi Electric, select the software category and GX Developer product, download the installation file and the crack file, install GX Developer and apply the crack. To use GX Developer 8.7 effectively, you need to create or open a project, configure the controller settings and communication parameters, write, edit and debug your program using various tools and languages, transfer your program to your controller and monitor its operation.
-
FAQs
-
Here are some frequently asked questions about GX Developer 8.7 full crack:
-
-
Q: Is it legal to download GX Developer 8.7 full crack?
-
A: No, it is not legal to download GX Developer 8.7 full crack. It is a violation of the software license agreement and intellectual property rights of Mitsubishi Electric. You should only download GX Developer from the official website of Mitsubishi Electric and pay for the license fee.
-
Q: Is it safe to download GX Developer 8.7 full crack?
-
A: No, it is not safe to download GX Developer 8.7 full crack. It may contain viruses, malware or spyware that can harm your computer or steal your personal information. You should only download GX Developer from the official website of Mitsubishi Electric and scan it with a reliable antivirus software.
-
Q: Is it compatible with Windows 10?
-
A: Yes, it is compatible with Windows 10. However, you may need to run it in compatibility mode or as an administrator if you encounter any problems.
-
Q: How can I update GX Developer 8.7?
-
A: You can update GX Developer 8.7 by visiting the official website of Mitsubishi Electric and downloading the latest version of GX Developer or the update patch file. You should uninstall the previous version of GX Developer before installing the new one.
-
Q: How can I get technical support for GX Developer 8.7?
-
A: You can get technical support for GX Developer 8.7 by contacting Mitsubishi Electric or its authorized distributors or service providers in your region. You can also refer to the online help system or the user manual of GX Developer for more information.
-
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Boris FX Optics A Comprehensive Review of Features Benefits and Drawbacks.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Boris FX Optics A Comprehensive Review of Features Benefits and Drawbacks.md
deleted file mode 100644
index ef404175a3d7023c18048f43582f2eb35f99a050..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Boris FX Optics A Comprehensive Review of Features Benefits and Drawbacks.md
+++ /dev/null
@@ -1,43 +0,0 @@
-
-
Boris FX Optics Review: A Powerful Plugin for Photo Editing
-
Boris FX Optics is a plugin that allows you to apply cinematic effects and filters to your photos in Photoshop, Lightroom, or as a standalone application. It offers over 160 filters and thousands of presets that can transform your images into stunning artworks.
-
In this Boris FX Optics review, we will take a look at some of the features and benefits of this plugin, as well as some of the drawbacks and limitations. We will also show you some examples of how you can use Boris FX Optics to enhance your photos.
Features and Benefits of Boris FX Optics
-
Boris FX Optics is designed to give you creative control over your photos. It lets you apply various effects and filters that can change the mood, tone, color, and style of your images. Some of the features and benefits of Boris FX Optics are:
-
-
It has over 160 filters and thousands of presets that cover a wide range of categories, such as film stocks, lens flares, color grading, lighting effects, textures, gradients, distortions, and more.
-
It has a user-friendly interface that allows you to preview and adjust the effects in real-time. You can also use masks, layers, blend modes, and opacity controls to fine-tune the results.
-
It has a powerful engine that supports high-resolution images up to 8K and delivers fast performance. You can also use it as a standalone application or as a plugin for Photoshop or Lightroom.
-
It has a comprehensive online library that provides tutorials, tips, and inspiration for using Boris FX Optics. You can also access the Boris FX community forum and support team for any questions or issues.
-
-
Drawbacks and Limitations of Boris FX Optics
-
Boris FX Optics is a plugin that is not without its flaws and limitations. Some of the drawbacks and limitations of Boris FX Optics are:
-
-
It is a premium plugin that costs $149 for a perpetual license or $99 for an annual subscription. It also requires an activation code and an internet connection to run.
-
It is compatible with Windows 10 and Mac OS 10.13 or higher, but not with Linux or older versions of Windows or Mac OS. It also requires Photoshop CC 2015.5 or higher or Lightroom Classic 6 or higher to work as a plugin.
-
Some of its filters and presets are similar or redundant, and some of its effects look outdated or unrealistic for modern photography.
-
It has some bugs and glitches that might affect the performance or quality of the plugin. For example, some users have reported issues with the installation process, the interface layout, the preview window, and the rendering speed.
-
-
Examples of Using Boris FX Optics
-
Boris FX Optics is a plugin that can help you create stunning photos with cinematic effects and filters. Here are some examples of how you can use Boris FX Optics to enhance your photos:
-
Example 1: Adding Film Grain and Color Grading
-
In this example, we will use Boris FX Optics to add some film grain and color grading to a portrait photo. Here are the steps:
-
-
Open the photo in Photoshop and launch Boris FX Optics from the Filter menu.
-
Select the Film Stocks filter from the Film Lab category.
-
Choose a preset that suits your style. For this example, we will use Kodak Portra 400 VC.
-
Adjust the amount of film grain using the Grain slider.
-
Adjust the color grading using the Color Correction sliders.
-
Click OK to apply the effect and return to Photoshop.
-
-
Here is the before and after comparison:
-
-
-
-
-
-
Example 2: Adding Lens Flare and Vignette
-
In this example, we will use Boris FX Optics to add some lens flare and vignette to a landscape photo. Here are the steps:
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Convert Videos to Any Device with Freemake Video Converter Gold 4.1.9.16 Portable Torrent.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Convert Videos to Any Device with Freemake Video Converter Gold 4.1.9.16 Portable Torrent.md
deleted file mode 100644
index 3fc29c670085529db5ef3ad68e67c0d21735b8cf..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Convert Videos to Any Device with Freemake Video Converter Gold 4.1.9.16 Portable Torrent.md
+++ /dev/null
@@ -1,200 +0,0 @@
-
-
-
-
-
-
-
-
Freemake Video Converter Gold 4.1.9.16 Portable Torrent: A Powerful and Versatile Video Converter
-
If you are looking for a reliable and easy-to-use video converter that can handle any video format, device, or platform, you might want to check out Freemake Video Converter Gold 4.1.9.16 Portable Torrent.
-
This is a portable version of one of the most popular video converters on the market, which means you can run it from any USB drive or external hard drive without installing it on your computer.
-
In this article, we will show you what Freemake Video Converter Gold 4.1.9.16 Portable Torrent can do for you, how to download and install it, and how to use it for various video conversion tasks.
-
What is Freemake Video Converter Gold 4.1.9.16 Portable Torrent?
-
Freemake Video Converter Gold 4.1.9.16 Portable Torrent is a software that allows you to convert video files from one format to another, as well as rip and burn DVDs and Blu-rays, edit videos, add subtitles, and upload videos online.
-
It supports over 200 input formats, including AVI, MP4, MKV, WMV, MPG, 3GP, SWF, FLV, MOV, DV, RM, QT, TS, MTS, etc.
-
-
It also supports output formats for various devices and platforms, such as iPod, iPhone, iPad, PSP, Android, BlackBerry, Nokia, Xbox, Apple TV, etc.
-
You can also convert videos to HTML5 formats (Ogg, WebM, H.264) for modern web browsers.
-
Freemake Video Converter Gold 4.1.9.16 Portable Torrent is fast and efficient thanks to its integrated CUDA and DXVA technologies that optimize the conversion process and reduce CPU usage.
-
It also has a gold pack feature that unlocks some exclusive options such as automatic black bar removal, custom DVD menus, backup function, advanced preset editor, etc.
-
How to download and install Freemake Video Converter Gold 4.1.9.16 Portable Torrent
-
To download Freemake Video Converter Gold 4.1.9.16 Portable Torrent, you need a torrent client such as uTorrent or BitTorrent.
Once you have downloaded the torrent file, open it with your torrent client and choose where to save the portable version of Freemake Video Converter Gold 4.1.9.16.
-
The download size is about 30 MB.
-
After the download is complete, you can run Freemake Video Converter Gold 4.1.9.16 Portable by double-clicking on the executable file (FreemakeVideoConverterPortable.exe).
-
You don't need to install anything on your computer or register any account.
-
How to use Freemake Video Converter Gold 4.1.9.16 Portable Torrent
-
How to add video files and choose output formats and settings
-
To add video files to Freemake Video Converter Gold 4.1.9.16 Portable Torrent, you can either drag and drop them from your computer or click on the +Video button at the top left corner of the interface.
-
You can also add audio files (MP3, AAC, WMA, WAV) or image files (JPG, BMP, PNG, GIF) if you want to create a slideshow or a music video.
-
Once you have added your files, you can choose the output format from the bottom row of icons.
-
You can either select a specific device or platform (such as iPhone or YouTube) or a general format (such as AVI or MP3).
-
You can also click on the cogwheel icon next to each format icon to customize some settings such as resolution, bitrate, frame rate, codec, etc.
-
How to convert videos to various devices and platforms
-
If you want to convert videos for a specific device or platform (such as iPod or Facebook), you just need to select it from the bottom row of icons.
-
Freemake Video Converter Gold 4.1.9.16 Portable Torrent will automatically adjust the output parameters according to the optimal compatibility and quality standards.
-
You can also preview how your video will look on your device or platform by clicking on the play button next to each format icon.
-
To start the conversion process, click on the Convert button at the bottom right corner of the interface.
-
You can choose where to save your converted files or open them directly after conversion.
-
How to rip and burn DVDs and Blu-rays
-
If you want to rip an unprotected DVD or Blu-ray disc (or an ISO image or a DVD folder) into a video file format (such as AVI or MP4), you just need to click on the +DVD button at the top left corner of the interface.
-
Then select your source disc (or image or folder) from your computer or optical drive.
-
Then choose your output format from the bottom row of icons (you can also select Blu-ray if you want to create a Blu-ray disc out of your source disc).
-
You can also edit some settings such as title selection, language selection, subtitle selection, etc. by clicking on the cogwheel icon next to each format icon.
-
To start the ripping process, click on the Convert button at the bottom right corner of the interface. You can choose where to save your ripped files or open them directly after ripping.
-
If you want to burn a video file (such as AVI or MP4) to a DVD or Blu-ray disc (or an ISO image or a DVD folder), you just need to click on the +Video button at the top left corner of the interface. Then add your video files from your computer and choose DVD or Blu-ray from the bottom row of icons. You can also create custom DVD menus by clicking on the Menu button next to each format icon, choosing from various templates and adding your own background image, title, etc. To start the burning process, click on the Burn button at the bottom right corner of the interface. You can choose where to save your burned files or open them directly after burning.
-
How to edit videos and add subtitles
-
If you want to edit your videos before converting them, you can use the built-in video editor by clicking on the scissors icon next to each video file in the list. You can perform various editing tasks such as trimming, cropping, rotating, flipping, joining, etc. You can also add transitions, effects, watermarks, etc. by clicking on the respective buttons at the top right corner of the editor window. To apply changes, click the OK button at the bottom right corner of the editor window. To cancel changes, click the Cancel button at the bottom left corner.
-
If you want to add subtitles to your videos before converting them, you can use the built-in subtitle tool by clicking on the SRT icon next to each video file in the list. You can either import external subtitle files (SSA/SRT/ASS) from your computer or search for subtitles online by clicking on the respective buttons at the top right corner of the subtitle window. You can also adjust settings such as font size, color, position, etc. by clicking on the cogwheel icon at the top left corner of the subtitle window. To apply changes, click the OK button at the bottom right corner of the subtitle window. To cancel changes, click the Cancel button at the bottom left corner.
-
How to upload videos online
-
If you want to upload your videos online after converting them, you can use the built-in uploader by clicking on the Upload button at the top right corner of the interface.
-
You can choose from various websites such as YouTube, Facebook, Vimeo, Dailymotion, etc., and enter your account details and video information.
-
You can also adjust some settings such as video quality, privacy options, tags, etc. by clicking on the cogwheel icon next to each website icon.
-
To start the uploading process, click on the Upload button at the bottom right corner of the interface. You can monitor the progress and status of your uploads and open the videos directly after uploading.
-
Conclusion
-
Freemake Video Converter Gold 4.1.9.16 Portable Torrent is a powerful and versatile video converter that can handle any video format, device, or platform.
-
It is also fast and efficient thanks to its integrated CUDA and DXVA technologies that optimize the conversion process and reduce CPU usage.
-
It also has a gold pack feature that unlocks some exclusive options such as automatic black bar removal, custom DVD menus, backup function, advanced preset editor, etc.
-
It is also easy to use and intuitive thanks to its simple and user-friendly interface and its built-in tools for editing, adding subtitles, and uploading videos online.
-
If you want to try out Freemake Video Converter Gold 4.1.9.16 Portable Torrent for yourself, you can download it from various websites such as nsaneforums.com or youngworldforum.forumfree.it using a torrent client such as uTorrent or BitTorrent.
-
You don't need to install anything on your computer or register any account. You just need to run it from any USB drive or external hard drive and start converting your videos to any format, device, or platform you want.
-
So what are you waiting for? Download Freemake Video Converter Gold 4.1.9.16 Portable Torrent today and enjoy the best video conversion experience ever!
-
FAQs
-
-
What are the system requirements for Freemake Video Converter Gold 4.1.9.16 Portable Torrent?
-
-
The system requirements for Freemake Video Converter Gold 4.1.9.16 Portable Torrent are:
-
-
Windows Vista/7/8/10
-
.NET Framework 4.0 or higher
-
256 MB or more RAM
-
50 MB free hard disc space
-
Stable Internet connection for video download & YouTube upload
-
DVD-ROM drive for burning DVD
-
BD-ROM drive for burning Blu-ray
-
-
-
What is the difference between the free version and the gold pack?
-
-
The free version of Freemake Video Converter allows you to convert video files to various formats, devices, and platforms, as well as rip and burn DVDs and Blu-rays, edit videos, add subtitles, and upload videos online.
-
The gold pack is a set of unique features that enhance the free version with some exclusive options such as automatic black bar removal, custom DVD menus, backup function, advanced preset editor, etc.
-
-
-
-
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Biologija Pries Egzamina Knyga Pdf 44.md b/spaces/1gistliPinn/ChatGPT4/Examples/Biologija Pries Egzamina Knyga Pdf 44.md
deleted file mode 100644
index cf8dd37324a2879b8c7aa5e4f552405f5eec385a..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Biologija Pries Egzamina Knyga Pdf 44.md
+++ /dev/null
@@ -1,32 +0,0 @@
-
-
Biologija Pries Egzamina Knyga Pdf 44: A Review of the Best Biology Book for Exam Preparation
-
-
If you are looking for a biology book that can help you ace your exams, you might want to check out Biologija Pries Egzamina Knyga Pdf 44. This book is a collection of 100 professional tips for biology exams, written by experts in the field. It covers topics such as genetics, ecology, evolution, cell biology, anatomy, physiology, and more. It also provides practice questions, answers, and explanations for each topic.
Biologija Pries Egzamina Knyga Pdf 44 is a digital book that you can download and read on your computer or mobile device. It is available in Lithuanian, and it has been praised by many students and teachers for its clarity, accuracy, and relevance. It is designed to help you prepare for the national biology exam in Lithuania, as well as for other biology tests and competitions.
-
-
In this article, we will review Biologija Pries Egzamina Knyga Pdf 44 and tell you why it is one of the best biology books for exam preparation. We will also give you some tips on how to use it effectively and where to get it online.
-
-
Why Biologija Pries Egzamina Knyga Pdf 44 is a Great Biology Book for Exam Preparation
-
-
There are many reasons why Biologija Pries Egzamina Knyga Pdf 44 is a great biology book for exam preparation. Here are some of them:
-
-
-
-
It is written by experts. The authors of Biologija Pries Egzamina Knyga Pdf 44 are experienced biology teachers and researchers who have extensive knowledge and expertise in the subject. They have also participated in various biology competitions and projects, so they know what kind of questions and challenges you might face in your exams.
-
It is comprehensive. Biologija Pries Egzamina Knyga Pdf 44 covers all the major topics and concepts that you need to know for your biology exams. It also includes subtopics and details that might not be covered in your textbooks or lectures, but are important for your understanding and application of biology.
-
It is practical. Biologija Pries Egzamina Knyga Pdf 44 does not only give you theoretical information, but also shows you how to apply it in real-life situations. It provides examples, diagrams, illustrations, tables, charts, and graphs that help you visualize and analyze biological phenomena. It also gives you practice questions, answers, and explanations that help you test your knowledge and skills.
-
It is relevant. Biologija Pries Egzamina Knyga Pdf 44 is based on the latest curriculum and standards of the national biology exam in Lithuania. It also follows the trends and developments in the field of biology, such as biotechnology, genetic engineering, environmental issues, and more. It helps you stay updated and informed about the current state of biology.
-
It is easy to use. Biologija Pries Egzamina Knyga Pdf 44 is a digital book that you can download and read on your computer or mobile device. You can access it anytime and anywhere you want. You can also print it out if you prefer a hard copy. You can search for keywords, bookmark pages, highlight text, take notes, and more.
-
-
-
How to Use Biologija Pries Egzamina Knyga Pdf 44 Effectively
-
-
To get the most out of Biologija Pries Egzamina Knyga Pdf 44, here are some tips on how to use it effectively:
-
-
-
Read it before your exams. Biologija Pries Egzamina Knyga Pdf 44 is not a substitute for your textbooks or lectures. It is a supplement that can help you review and reinforce what you have learned. Therefore, you should read it before your exams to refresh your memory and fill in any gaps in your knowledge.
-
Read it after your exams. Biologija Pries Egzamina Knyga Pdf 44 is not only useful for exam preparation, but also for exam analysis. After your exams, you can use it to review your answers, learn from your mistakes, and fill in any remaining gaps in your knowledge.
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Bongiovi Acoustics DPS Audio Enhancer 2.2.0.15 Activate HOT.md b/spaces/1gistliPinn/ChatGPT4/Examples/Bongiovi Acoustics DPS Audio Enhancer 2.2.0.15 Activate HOT.md
deleted file mode 100644
index 8614294a9328e0faa7f78fc1c130f8500649aeee..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Bongiovi Acoustics DPS Audio Enhancer 2.2.0.15 Activate HOT.md
+++ /dev/null
@@ -1,9 +0,0 @@
-
-
Bongiovi Audio Enhancer is a digital audio mastering suite that helps to clarify, polish and punch up the sound of any audio source, even your headphones or computer speakers. It is the perfect tool to enhance acoustic performances and to finally realise the true potential of your music.
The entire processing section, from input to loudness, is available for each of the audio playback sources (headphones, line-out, line-in) and can be used independently. Its performance is exactly the same in all application modes.
-
With Bongiovi Audio Enhancer you can drastically improve the sound of any audio source, even your headphones or laptop speakers. The processing effect is exactly the same in all applications. Buttons and sliders apply boost automatically to the audio input, headphones, line-in, output and the master output. Usually the strongest influence is just around the 80 dB mark, but boost values up to 90 dB are possible.
-
DPS Audio Enhancer enhances, produces or re-simulates the sound of the music you listen to, making it even more exciting! Overall, the concept of the Acoustics DPS plugin audio enhancer is a very good one. To start with, it is a free application. The only limitations I can see are the restricted bass and treble controls; those two settings are reserved for the pro version, which can apparently be bought for a fee (I can't seem to find more information about it at the moment). It can be readily downloaded and installed on most versions of Windows; just follow the download link.
-
-
\ No newline at end of file
diff --git a/spaces/1line/AutoGPT/autogpt/json_utils/json_fix_llm.py b/spaces/1line/AutoGPT/autogpt/json_utils/json_fix_llm.py
deleted file mode 100644
index 869aed125cfb8cd7a69ed02eeb389cc72a3e296b..0000000000000000000000000000000000000000
--- a/spaces/1line/AutoGPT/autogpt/json_utils/json_fix_llm.py
+++ /dev/null
@@ -1,220 +0,0 @@
-"""This module contains functions to fix JSON strings generated by LLM models, such as ChatGPT, using the assistance
-of the ChatGPT API or LLM models."""
-from __future__ import annotations
-
-import contextlib
-import json
-from typing import Any, Dict
-
-from colorama import Fore
-from regex import regex
-
-from autogpt.config import Config
-from autogpt.json_utils.json_fix_general import correct_json
-from autogpt.llm_utils import call_ai_function
-from autogpt.logs import logger
-from autogpt.speech import say_text
-
-JSON_SCHEMA = """
-{
- "command": {
- "name": "command name",
- "args": {
- "arg name": "value"
- }
- },
- "thoughts":
- {
- "text": "thought",
- "reasoning": "reasoning",
- "plan": "- short bulleted\n- list that conveys\n- long-term plan",
- "criticism": "constructive self-criticism",
- "speak": "thoughts summary to say to user"
- }
-}
-"""
-
-CFG = Config()
-
-
-def auto_fix_json(json_string: str, schema: str) -> str:
- """Fix the given JSON string to make it parseable and fully compliant with
- the provided schema using GPT-3.
-
- Args:
- json_string (str): The JSON string to fix.
- schema (str): The schema to use to fix the JSON.
- Returns:
- str: The fixed JSON string.
- """
- # Try to fix the JSON using GPT:
- function_string = "def fix_json(json_string: str, schema:str=None) -> str:"
- args = [f"'''{json_string}'''", f"'''{schema}'''"]
- description_string = (
- "This function takes a JSON string and ensures that it"
- " is parseable and fully compliant with the provided schema. If an object"
- " or field specified in the schema isn't contained within the correct JSON,"
- " it is omitted. The function also escapes any double quotes within JSON"
- " string values to ensure that they are valid. If the JSON string contains"
- " any None or NaN values, they are replaced with null before being parsed."
- )
-
- # If it doesn't already start with a "`", add one:
- if not json_string.startswith("`"):
- json_string = "```json\n" + json_string + "\n```"
- result_string = call_ai_function(
- function_string, args, description_string, model=CFG.fast_llm_model
- )
- logger.debug("------------ JSON FIX ATTEMPT ---------------")
- logger.debug(f"Original JSON: {json_string}")
- logger.debug("-----------")
- logger.debug(f"Fixed JSON: {result_string}")
- logger.debug("----------- END OF FIX ATTEMPT ----------------")
-
- try:
- json.loads(result_string) # just check the validity
- return result_string
- except json.JSONDecodeError: # noqa: E722
- # Get the call stack:
- # import traceback
- # call_stack = traceback.format_exc()
- # print(f"Failed to fix JSON: '{json_string}' "+call_stack)
- return "failed"
-
-
-def fix_json_using_multiple_techniques(assistant_reply: str) -> Dict[Any, Any]:
-    """Fix the given JSON string to make it parseable and fully compliant with two techniques.
-
-    Args:
-        json_string (str): The JSON string to fix.
-
-    Returns:
-        str: The fixed JSON string.
-    """
-
-    # Parse and print Assistant response
-    assistant_reply_json = fix_and_parse_json(assistant_reply)
-    if assistant_reply_json == {}:
-        assistant_reply_json = attempt_to_fix_json_by_finding_outermost_brackets(
-            assistant_reply
-        )
-
-    if assistant_reply_json != {}:
-        return assistant_reply_json
-
-    logger.error(
-        "Error: The following AI output couldn't be converted to a JSON:\n",
-        assistant_reply,
-    )
-    if CFG.speak_mode:
-        say_text("I have received an invalid JSON response from the OpenAI API.")
-
-    return {}
-
-
-def fix_and_parse_json(
-    json_to_load: str, try_to_fix_with_gpt: bool = True
-) -> Dict[Any, Any]:
-    """Fix and parse JSON string
-
-    Args:
-        json_to_load (str): The JSON string.
-        try_to_fix_with_gpt (bool, optional): Try to fix the JSON with GPT.
-            Defaults to True.
-
-    Returns:
-        str or dict[Any, Any]: The parsed JSON.
-    """
-
-    with contextlib.suppress(json.JSONDecodeError):
-        json_to_load = json_to_load.replace("\t", "")
-        return json.loads(json_to_load)
-
-    with contextlib.suppress(json.JSONDecodeError):
-        json_to_load = correct_json(json_to_load)
-        return json.loads(json_to_load)
-    # Let's do something manually:
-    # sometimes GPT responds with something BEFORE the braces:
-    # "I'm sorry, I don't understand. Please try again."
-    # {"text": "I'm sorry, I don't understand. Please try again.",
-    # "confidence": 0.0}
-    # So let's try to find the first brace and then parse the rest
-    # of the string
-    try:
-        brace_index = json_to_load.index("{")
-        maybe_fixed_json = json_to_load[brace_index:]
-        last_brace_index = maybe_fixed_json.rindex("}")
-        maybe_fixed_json = maybe_fixed_json[: last_brace_index + 1]
-        return json.loads(maybe_fixed_json)
-    except (json.JSONDecodeError, ValueError) as e:
-        return try_ai_fix(try_to_fix_with_gpt, e, json_to_load)
-
-
-def try_ai_fix(
-    try_to_fix_with_gpt: bool, exception: Exception, json_to_load: str
-) -> Dict[Any, Any]:
-    """Try to fix the JSON with the AI
-
-    Args:
-        try_to_fix_with_gpt (bool): Whether to try to fix the JSON with the AI.
-        exception (Exception): The exception that was raised.
-        json_to_load (str): The JSON string to load.
-
-    Raises:
-        exception: If try_to_fix_with_gpt is False.
-
-    Returns:
-        str or dict[Any, Any]: The JSON string or dictionary.
-    """
-    if not try_to_fix_with_gpt:
-        raise exception
-    if CFG.debug_mode:
-        logger.warn(
-            "Warning: Failed to parse AI output, attempting to fix."
-            "\n If you see this warning frequently, it's likely that"
-            " your prompt is confusing the AI. Try changing it up"
-            " slightly."
-        )
-    # Now try to fix this up using the ai_functions
-    ai_fixed_json = auto_fix_json(json_to_load, JSON_SCHEMA)
-
-    if ai_fixed_json != "failed":
-        return json.loads(ai_fixed_json)
-    # This allows the AI to react to the error message,
-    # which usually results in it correcting its ways.
-    # logger.error("Failed to fix AI output, telling the AI.")
-    return {}
-
-
-def attempt_to_fix_json_by_finding_outermost_brackets(json_string: str):
-    if CFG.speak_mode and CFG.debug_mode:
-        say_text(
-            "I have received an invalid JSON response from the OpenAI API. "
-            "Trying to fix it now."
-        )
-    logger.error("Attempting to fix JSON by finding outermost brackets\n")
-
-    try:
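-        # The third-party 'regex' module (unlike the stdlib 're') supports the
-        # recursive subpattern (?R), letting this pattern match balanced {...} blocks.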
-        json_pattern = regex.compile(r"\{(?:[^{}]|(?R))*\}")
-        json_match = json_pattern.search(json_string)
-
-        if json_match:
-            # Extract the valid JSON object from the string
-            json_string = json_match.group(0)
-            logger.typewriter_log(
-                title="Apparently json was fixed.", title_color=Fore.GREEN
-            )
-            if CFG.speak_mode and CFG.debug_mode:
-                say_text("Apparently json was fixed.")
-        else:
-            return {}
-
-    except (json.JSONDecodeError, ValueError):
-        if CFG.debug_mode:
-            logger.error(f"Error: Invalid JSON: {json_string}\n")
-        if CFG.speak_mode:
-            say_text("Didn't work. I will have to ignore this response then.")
-        logger.error("Error: Invalid JSON, setting it to empty JSON now.\n")
-        json_string = "{}"  # keep this a string so fix_and_parse_json can parse it
-
-    return fix_and_parse_json(json_string)
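-
-
-if __name__ == "__main__":
-    # Minimal usage sketch (illustrative only, not part of the original module):
-    # run a sloppy LLM reply through the repair pipeline. For input like this,
-    # the non-LLM paths (brace trimming) succeed, so no OpenAI API call is made;
-    # harder inputs fall back to auto_fix_json, which requires a configured key.
-    sloppy_reply = 'Sure, here it is: {"command": {"name": "browse", "args": {"url": "x"}}}'
-    print(fix_json_using_multiple_techniques(sloppy_reply))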
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Challenge Your Friends and the World with Quiz Planet APK for Android.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Challenge Your Friends and the World with Quiz Planet APK for Android.md
deleted file mode 100644
index 04f774db88776594d8967ba0b39ae14c13a7221e..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Challenge Your Friends and the World with Quiz Planet APK for Android.md
+++ /dev/null
@@ -1,133 +0,0 @@
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
Quiz Planet APK Download: A Fun and Social Trivia Game
-
Do you love trivia games? Do you enjoy challenging your friends and other players online? Do you want to test your knowledge with thousands of questions in 28 languages? If you answered yes to any of these questions, then you should download Quiz Planet APK for Android today!
Quiz Planet is a fun and social trivia game that lets you play with your friends or other players from around the world. You can choose from various quiz categories, such as sports, music, movies, geography, history, science, and more. You can also use power-ups to help you out or challenge yourself with daily quizzes and leaderboard rankings. Quiz Planet is the ultimate trivia game for anyone who loves learning new things and having a good time.
-
How to Download Quiz Planet APK for Android?
-
If you want to download Quiz Planet APK for Android, you can follow these simple steps:
-
-
Go to APKCombo, a trusted website that offers free and safe APK downloads.
-
Type "Quiz Planet" in the search bar and click on the result that matches the game.
-
Choose the latest version of Quiz Planet APK and click on the download button.
-
Wait for the download to finish and then open the file.
-
Allow the installation of apps from unknown sources if prompted by your device settings.
-
Follow the instructions on the screen and enjoy playing Quiz Planet!
-
-
Note: You can also download Quiz Planet APK from other sources, such as APKPure or Uptodown, but make sure they are reliable and virus-free.
-
Why Play Quiz Planet?
-
Quiz Planet is not just another trivia game. It has many features and benefits that make it stand out from the crowd. Here are some of them:
-
Test Your Knowledge with Thousands of Questions
-
Quiz Planet has over 10,000 questions in 16 different categories, ranging from easy to expert. You can choose from topics such as animals, art, books, celebrities, food, games, history, music, movies, science, sports, and more. You can also switch between four difficulty levels: easy, medium, hard, and expert. Whether you are a casual player or a trivia buff, you will always find something new and interesting to learn in Quiz Planet.
-
Compete Against Your Friends and Other Players
-
Quiz Planet is a social game that lets you play with your friends or other players from around the world. You can invite your friends via Facebook or WhatsApp, or you can play with random opponents online. You can also chat with your opponents during the game and send them emojis and stickers. Quiz Planet is a great way to have fun and make new friends while testing your knowledge.
-
-
Collect Cute Aliens and Climb the Leaderboard
-
Quiz Planet rewards you for playing well and answering correctly. You can collect cute aliens as you progress through the game and unlock new planets to explore. You can also earn coins and gems that you can use to buy power-ups or customize your profile. Moreover, you can climb the leaderboard and compete against your country or the world. Quiz Planet is a game that keeps you motivated and engaged.
-
Enjoy the Game in 28 Languages
-
Quiz Planet is a game that supports 28 languages, including English, Spanish, French, German, Italian, Portuguese, Russian, Turkish, Arabic, Chinese, Japanese, Korean, Hindi, and more. You can play the game in your native language or learn a new one by switching languages anytime. Quiz Planet is a game that is accessible and inclusive for everyone.
-
Tips and Tricks for Playing Quiz Planet
-
If you want to improve your skills and win more games in Quiz Planet, you can follow these tips and tricks:
-
Use Your Power-Ups Wisely
-
Quiz Planet gives you three power-ups that you can use during the game: skip, bomb, and freeze. Skip lets you skip a question if you don't know the answer or don't like it. Bomb eliminates two wrong answers from the four options. Freeze stops the timer for 10 seconds so you can think more carefully. You can use one power-up per question, but they are limited in number and cost coins or gems to buy more. Therefore, use them wisely and only when necessary.
-
Learn from Your Mistakes
-
Quiz Planet shows you the correct answer after each question. You can also review your answers at the end of each game. This is a great opportunity to learn from your mistakes and improve your knowledge. You can also search for more information about the topics that interest you or challenge you online. Quiz Planet is a game that helps you grow and learn.
-
Challenge Yourself with Daily Quizzes
-
Quiz Planet offers you daily quizzes that you can play every day. These quizzes are based on current events, trends, and popular topics. They are also more challenging and rewarding than the regular quizzes. You can compete against your country and see how you rank among other players. You can also earn more coins and gems by playing daily quizzes. Quiz Planet is a game that keeps you updated and entertained.
-
Frequently Asked Questions about Quiz Planet
-
Here are some of the most common questions and answers about Quiz Planet:
-
Is Quiz Planet Free to Play?
-
Yes, Quiz Planet is free to play, but it offers in-app purchases that you can buy with real money. These purchases include coins, gems, power-ups, and premium membership. You can use these items to enhance your gameplay and unlock more features. However, you can also enjoy the game without spending any money.
-
How Can I Contact Quiz Planet Support?
-
If you have any questions, problems, or feedback about Quiz Planet, you can contact the support team by sending an email to support@quizplanet.app. They will reply to you as soon as possible and help you with your issue.
-
How Can I Report a Bug or a Wrong Question?
-
If you encounter a bug or a wrong question in Quiz Planet, you can report it through the game settings. To do so, follow these steps:
-
-
Tap on the gear icon on the top right corner of the screen.
-
Tap on "Report a bug" or "Report a wrong question".
-
Fill in the form with the details of the bug or the question.
-
Tap on "Send".
-
-
The support team will review your report and fix the bug or the question as soon as possible.
-
How Can I Change My Language or Nickname?
-
If you want to change your language or nickname in Quiz Planet, you can do so through the game settings. To do so, follow these steps:
-
-
Tap on the gear icon on the top right corner of the screen.
-
Tap on "Language" or "Nickname".
-
Select your preferred language or enter your new nickname.
-
Tap on "Save".
-
-
Your language or nickname will be updated immediately.
-
How Can I Delete My Account or Data?
-
If you want to delete your account or data in Quiz Planet, you should be aware that this action is irreversible and will erase all your progress and achievements in the game. If you still want to proceed, you can follow this link: https://quizplanet.app/delete-account. You will need to enter your email address and confirm your decision. Once you delete your account or data, you will not be able to restore it or play Quiz Planet again with the same email address.
-
Conclusion
-
Quiz Planet is a fun and social trivia game that lets you test your knowledge with thousands of questions in 28 languages. You can play with your friends or other players online, collect cute aliens and climb the leaderboard, and enjoy the game in your own language. Quiz Planet is a game that is suitable for everyone who loves trivia and learning new things. If you want to download Quiz Planet APK for Android, you can follow our guide above and start playing today!
-
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Design Your Own 3D Worlds with 3DBear App for Mobile Devices.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Design Your Own 3D Worlds with 3DBear App for Mobile Devices.md
deleted file mode 100644
index 7ed0caf8de3f0c05e4ac709ee8ab6bbaf90bb0c4..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Design Your Own 3D Worlds with 3DBear App for Mobile Devices.md
+++ /dev/null
@@ -1,95 +0,0 @@
-
-
Download 3D Bear App: A Fun and Educational Way to Create Your Own Virtual Worlds
-
Have you ever dreamed of making your imagination a reality with 3D models? Do you want to learn new skills and express your creativity in a fun and engaging way? Do you want to share your amazing creations with others and see what they have made? If you answered yes to any of these questions, then you should download 3D Bear app, a mobile augmented reality solution that allows you to create and experience your own virtual worlds. In this article, we will tell you what 3D Bear app is, why you should download it, how to download it, and how to use it.
-
What is 3D Bear App?
-
3D Bear app is a mobile application that uses augmented reality (AR) technology to let you create and interact with 3D models in your everyday surroundings. AR is a technology that overlays digital information on top of the real world, creating a mixed reality experience. With 3D Bear app, you can use your smartphone or tablet as a window to see and manipulate 3D models in your environment.
Unlike virtual reality (VR), which requires special headsets and controllers, AR can be accessed with any device that has a camera and a screen. This makes AR more accessible and affordable for everyone. You can use 3D Bear app anywhere you want, whether it's at home, at school, or outdoors. You can also move around and explore your AR scenes from different angles and perspectives.
-
A tool for learning and creativity
-
3D Bear app is not just a game or a toy. It is also a tool for learning and creativity. You can use 3D Bear app to learn about various topics, such as science, history, art, and design. You can also use it to express your ideas and stories in a visual and interactive way. You can create anything you want with 3D models, from animals and plants, to buildings and vehicles, to characters and scenes.
-
-
A platform for sharing and collaboration
-
Another feature of 3D Bear app is that it allows you to share your AR creations with others and see what they have made. You can record videos of your AR scenes and post them on the 3DBear community platform, where you can earn points and unlock more features. You can also browse other users' videos and give them feedback or inspiration. You can also collaborate with other users on joint projects or challenges.
-
Why Download 3D Bear App?
-
Now that you know what 3D Bear app is, you might be wondering why you should download it. Here are some reasons why:
-
It's free and easy to use
-
One of the best things about 3D Bear app is that it's free to download and use. You don't need to pay anything or register an account to start creating your own virtual worlds. You also don't need any prior experience or knowledge of AR or 3D modeling. The app has a simple and intuitive interface that guides you through the process of creating your AR scenes.
-
It has a variety of 3D models and collections
-
Another reason why you should download 3D Bear app is that it has a large and diverse library of 3D models and collections that you can use for your AR scenes. You can choose from categories such as animals, dinosaurs, space, furniture, vehicles, and more. You can also access collections that are curated by 3DBear or by other users. These collections are based on themes such as Halloween, Christmas, fairy tales, and more. You can also create your own collections and share them with others.
-
It supports importing your own models and content
-
If you want to customize your AR scenes even more, you can also import your own 3D models and content into 3D Bear app. You can use any 3D modeling software or app to create your own models, or you can download them from online sources. You can also import images and videos from your device or from the web. You can then use these models and content to enhance your AR scenes and make them more unique and personal.
-
How to Download 3D Bear App?
-
Downloading 3D Bear app is very easy and fast. You just need to follow these steps:
-
For iOS devices
-
If you have an iPhone or an iPad, you can download 3D Bear app from the App Store. Here's how:
-
-
Open the App Store on your device.
-
Search for "3D Bear" in the search bar.
-
Tap on the app icon that says "3D Bear - Create Your Own World".
-
Tap on the "Get" button to download the app.
-
Wait for the app to install on your device.
-
Open the app and start creating your AR scenes.
-
-
For Android devices
-
If you have an Android phone or tablet, you can download 3D Bear app from the Google Play Store. Here's how:
-
-
Open the Google Play Store on your device.
-
Search for "3D Bear" in the search bar.
-
Tap on the app icon that says "3D Bear - Augmented Reality".
-
Tap on the "Install" button to download the app.
-
Wait for the app to install on your device.
-
Open the app and start creating your AR scenes.
-
-
How to Use 3D Bear App?
-
Using 3D Bear app is very simple and fun. You just need to follow these steps:
-
Choose a lesson plan or a challenge
-
When you open the app, you will see two options: lesson plans and challenges. Lesson plans are guided activities that teach you about various topics and skills using 3D models. Challenges are creative tasks that test your imagination and problem-solving abilities using 3D models. You can choose either option depending on your preference and goal.
-
Create your AR scene with 3D models
-
After choosing a lesson plan or a challenge, you will enter the AR mode of the app. Here, you will see a camera view of your surroundings with a menu of 3D models at the bottom. You can tap on any model to select it and drag it onto your screen. You can then use gestures to move, rotate, scale, and delete the model. You can also use buttons to change the color, texture, opacity, and animation of the model. You can add as many models as you want to create your AR scene.
-
Record and share your AR video
-
When you are happy with your AR scene, you can record a video of it using the record button at the top right corner of the screen. You can also use voice-over or music to narrate or enhance your video. After recording, you can preview, edit, or save your video. You can then share your video with others using the share button at the bottom right corner of the screen. You can post your video on the 3DBear community platform, where you can earn points and unlock more features. You can also share your video on other social media platforms, such as Facebook, Instagram, YouTube, or TikTok.
-
Conclusion
-
In conclusion, 3D Bear app is a fun and educational way to create your own virtual worlds using augmented reality technology. You can use 3D Bear app to learn about various topics, express your creativity, and share your creations with others. You can download 3D Bear app for free from the App Store or the Google Play Store and start creating your AR scenes in minutes. So what are you waiting for? Download 3D Bear app today and unleash your imagination!
FAQs
-
Q: What devices are compatible with 3D Bear app?
-
A: 3D Bear app is compatible with iOS devices that support ARKit and Android devices that support ARCore. You can check the list of compatible devices here for iOS and here for Android.
-
Q: How can I import my own 3D models and content into 3D Bear app?
-
A: You can import your own 3D models and content into 3D Bear app by using the import button at the bottom left corner of the screen. You can then choose to import from your device, from the web, or from Google Poly. You can also scan real objects using the scan button and turn them into 3D models.
-
Q: How can I collaborate with other users on 3D Bear app?
-
A: You can collaborate with other users on 3D Bear app by joining or creating a group project. You can find group projects on the 3DBear community platform or create your own by inviting other users. You can then work together on a common theme or challenge and share your AR scenes with each other.
-
Q: How can I earn points and unlock more features on 3D Bear app?
-
A: You can earn points and unlock more features on 3D Bear app by completing lesson plans, challenges, and group projects. You can also earn points by sharing your AR videos, giving feedback to other users, and inviting your friends to join the app. You can use your points to unlock more 3D models, collections, and features.
-
Q: How can I contact the 3DBear team for support or feedback?
-
A: You can contact the 3DBear team for support or feedback by using the contact form on their website or by sending an email to support@3dbear.io. You can also follow them on social media platforms, such as Facebook, Twitter, Instagram, and YouTube, to get the latest updates and news.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/BGMI 2.2 APK and OBB Download Guide New Map Crossbow and More.md b/spaces/1phancelerku/anime-remove-background/BGMI 2.2 APK and OBB Download Guide New Map Crossbow and More.md
deleted file mode 100644
index f740fe6d148a85c10658c585e386977eb8089104..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/BGMI 2.2 APK and OBB Download Guide New Map Crossbow and More.md
+++ /dev/null
@@ -1,98 +0,0 @@
-
-
BGMI 2.2 Update Download Apk: Everything You Need to Know
-
If you are a fan of Battlegrounds Mobile India (BGMI), you must be eagerly waiting for the latest update of the game. The BGMI 2.2 update is finally here and it brings a lot of new features, improvements, and surprises for the players. In this article, we will tell you everything you need to know about the BGMI 2.2 update, including what's new, how to download it, and some tips to enjoy it better.
-
What is BGMI?
-
BGMI is a popular battle royale game that was launched in India in July 2021 by Krafton, the developer of PUBG Mobile. The game is similar to PUBG Mobile, but with some changes and adaptations to suit the Indian market and regulations. BGMI allows you to play solo, duo, or squad matches with up to 100 players on various maps and modes. You can also customize your character, weapons, vehicles, and accessories with a variety of skins and outfits. The game is free to play, but you can also purchase in-game currency and items with real money.
The BGMI 2.2 update is one of the biggest updates of the game so far. It introduces a lot of new content and features that will make your gaming experience more exciting and fun. Here are some of the major highlights of the update:
-
New map: NUSA
-
NUSA is a new map that is exclusive to BGMI. It is based on Indonesia and features a tropical island with diverse landscapes, such as beaches, forests, villages, temples, and volcanoes. The map is 8x8 km in size and supports up to 64 players. NUSA also has some unique elements, such as hot air balloons, zip lines, waterfalls, and lava flows. You can explore the map and find hidden secrets and treasures along the way.
-
New weapon: Tactical Crossbow
-
The Tactical Crossbow is a new weapon that is available on NUSA map only. It is a silent and deadly weapon that can fire three types of bolts: explosive, electric, and tracking. The explosive bolt can deal damage to enemies and vehicles within a small radius. The electric bolt can stun enemies and disable vehicles for a short time. The tracking bolt can mark enemies on the mini-map for your teammates to see. The Tactical Crossbow has a limited range and ammo capacity, so use it wisely.
-
New voice pack: The Voice of Battlegrounds
-
The Voice of Battlegrounds is a new voice pack that you can use to communicate with your teammates and enemies in the game. It features the voice of Jonathan Frakes, a famous actor and director who is best known for his role as Commander Riker in Star Trek: The Next Generation. The voice pack has over 200 lines of dialogue that cover various situations and scenarios in the game. You can also customize the voice pack with different accents, tones, and languages.
-
New modes: Payload 2.0 and Infection Mode
-
Payload 2.0 is a new mode that is available on Erangel map only. It is an upgraded version of the original Payload mode that was introduced in PUBG Mobile. In this mode, you can use various heavy weapons and vehicles, such as rocket launchers, grenade launchers, helicopters, and armored trucks. You can also revive your teammates with a special device and call for air drops and airstrikes. The mode is fast-paced and action-packed, so be ready for some intense battles.
-
Infection Mode is a new mode that is available on NUSA map only. It is a zombie-themed mode that pits two teams of players against each other: defenders and zombies. The defenders have to survive until the end of the match, while the zombies have to infect all the defenders. The zombies have different abilities and classes, such as speedsters, stealthers, and brutes. The defenders have limited weapons and ammo, but they can use traps and barricades to protect themselves. The mode is thrilling and suspenseful, so be careful of the lurking zombies.
New events: Halloween and Diwali celebrations
-
The BGMI 2.2 update also brings some festive events to celebrate Halloween and Diwali. You can participate in these events and earn rewards, such as skins, outfits, emotes, and vouchers. You can also enjoy some special features, such as pumpkin lanterns, firecrackers, and fireworks. The events will run from October 31 to November 14, so don't miss this chance to have some fun and show your spirit.
-
New skins, outfits, and accessories
-
Of course, no update is complete without some new skins, outfits, and accessories to spice up your look. The BGMI 2.2 update offers a variety of new items that you can get from crates, spins, shops, or events. Some of the new items include:
-
-
NUSA-themed skins for weapons, vehicles, parachutes, and backpacks
-
Tactical Crossbow-themed skins for crossbows
-
The Voice of Battlegrounds-themed skins for helmets and vests
-
Halloween-themed skins for weapons and vehicles
-
Diwali-themed skins for weapons and vehicles
-
Star Trek-themed outfits and accessories
-
Zombie-themed outfits and accessories
-
And more!
-
-
You can mix and match these items to create your own unique style and impress your friends and foes.
-
-
How to download BGMI 2.2 update apk?
-
If you are wondering how to download the BGMI 2.2 update apk on your device, don't worry. We have got you covered. Just follow these simple steps and you will be able to enjoy the update in no time.
-
Requirements and precautions
-
Before you download the update, make sure that your device meets the following requirements:
-
-
Android version: 5.1.1 or above
-
RAM: 2 GB or more
-
Storage space: 1 GB or more
-
-
Also, make sure that you have a stable internet connection and enough battery power. It is recommended that you back up your game data before updating to avoid any loss or corruption.
-
Download link and instructions
-
To download the BGMI 2.2 update apk, you can use this link: [BGMI 2.2 Update Download Apk]. This is the official link from Krafton's website and it is safe and secure. Once you have downloaded the apk file, follow these instructions to install it on your device:
-
-
Locate the apk file in your device's file manager or downloads folder.
-
Tap on the apk file to start the installation process.
-
Allow the installation from unknown sources if prompted.
-
Wait for the installation to complete.
-
Launch the game and enjoy the update.
-
-
Note: If you have already installed BGMI on your device from Google Play Store or any other source, you don't need to uninstall it before installing the update. The update will overwrite the existing version without affecting your game data.
-
Troubleshooting tips
-
If you face any issues while downloading or installing the update, try these tips to solve them:
-
-
Check your internet connection and try again.
-
Clear your device's cache and storage space.
-
Restart your device and try again.
-
Contact Krafton's customer support if the problem persists.
-
-
We hope that these tips will help you resolve any problems that you may encounter.
-
Conclusion
-
The BGMI 2.2 update is a massive update that brings a lot of new content and features to the game. You can explore a new map, use a new weapon, communicate with a new voice pack, play new modes, join new events, and get new skins, outfits, and accessories. You can also download the update easily by following our guide above. So what are you waiting for? Download the BGMI 2.2 update apk now and experience the thrill of Battlegrounds Mobile India like never before!
-
FAQs
Here are some of the frequently asked questions (FAQs) about the BGMI 2.2 update:
-
-
-
Question
-
Answer
-
-
-
Is the BGMI 2.2 update available for iOS devices?
-
No, the BGMI 2.2 update is currently available for Android devices only. Krafton has not announced any plans to release the update for iOS devices yet.
-
-
-
How big is the BGMI 2.2 update apk file?
-
The BGMI 2.2 update apk file is about 1 GB in size. However, you may need additional storage space to download and install the update on your device.
-
-
-
Can I play with players from other regions or versions of the game?
-
No, you can only play with players from India who have the same version of the game as you. You cannot play with players from other regions or versions of the game, such as PUBG Mobile or PUBG Mobile Lite.
-
-
-
Will I lose my progress or rank if I update the game?
-
No, you will not lose your progress or rank if you update the game. Your game data will be saved and transferred to the new version without any loss or corruption.
-
-
-
How can I get more in-game currency and items in the game?
-
You can get more in-game currency and items in the game by completing missions, participating in events, opening crates, spinning wheels, or purchasing them with real money. You can also use some promo codes or coupons to get some free rewards.
-
-
-
I hope that these FAQs will answer some of your queries and doubts about the BGMI 2.2 update. If you have any other questions, feel free to leave a comment below or contact Krafton's customer support.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Data Structure Using C By Udit Agarwal Pdf Free Extra Quality.md b/spaces/1phancelerku/anime-remove-background/Data Structure Using C By Udit Agarwal Pdf Free Extra Quality.md
deleted file mode 100644
index 49779944006546f5b51ce5e019628429dbfd5665..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Data Structure Using C By Udit Agarwal Pdf Free Extra Quality.md
+++ /dev/null
@@ -1,72 +0,0 @@
-## Data Structure Using C By Udit Agarwal Pdf Free
-
-
-
-
-
-
-
-
-
-**Click Here ✦✦✦ [https://vittuv.com/2tBMuA](https://vittuv.com/2tBMuA)**
-
-
-
-
-
-
-
-
-
-
-
-
-
-# Data Structure Using C By Udit Agarwal: A Comprehensive Guide for Beginners
-
-
-
-Data structures are the fundamental building blocks of any computer program. They are used to store, organize, and manipulate data efficiently. Data structures can be classified into two types: linear and nonlinear. Linear data structures store data in a sequential manner, such as arrays, stacks, queues, and linked lists. Nonlinear data structures store data in a hierarchical or networked manner, such as trees, graphs, and heaps.
-
-
-
-Learning data structures is essential for any aspiring programmer, as it helps them to design and implement efficient algorithms for various problems. However, learning data structures can be challenging, especially for beginners who are not familiar with the syntax and concepts of programming languages. That is why Data Structure Using C By Udit Agarwal is a perfect book for beginners who want to learn data structures using C.
-
-
-
-Data Structure Using C By Udit Agarwal is a comprehensive book that covers all the topics related to data structures using C. The book starts with an introduction to the C programming language, followed by chapters on arrays, strings, pointers, structures, unions, files, and dynamic memory allocation. The book then explains the concepts and implementation of various linear and nonlinear data structures using C. The book also includes numerous examples, exercises, and case studies to help the readers understand and apply the concepts.
-
-
-
-The book is written in a simple and lucid language that makes it easy to follow for beginners. The book also follows a systematic and logical approach that helps the readers to grasp the concepts quickly. The book is suitable for students of computer science and engineering, as well as professionals who want to refresh their knowledge of data structures using C.
-
-
-
-Data Structure Using C By Udit Agarwal is available in PDF format for free download from various online sources[^1^] [^2^] [^3^]. The book also comes with a CD that contains the source code of all the programs discussed in the book. The book is a must-read for anyone who wants to learn data structures using C.
-
-
-
-## Linear Data Structures
-
-
-
-Linear data structures are the simplest type of data structures that store data in a sequential manner. They have a linear relationship between the elements, meaning that each element has a unique predecessor and successor, except for the first and last element. Linear data structures can be implemented using arrays or linked lists. Some of the common linear data structures are:
-
-
-
-- **Arrays:** An array is a collection of homogeneous elements that are stored in contiguous memory locations. Each element can be accessed by its index, which is a positive integer that represents its position in the array. Arrays have a fixed size and are easy to implement and use. However, arrays have some limitations, such as wasting memory space if not fully utilized, difficulty in inserting and deleting elements, and lack of flexibility in resizing.
-
- **Stacks:** A stack is a collection of elements that follows the Last-In-First-Out (LIFO) principle. This means that the last element inserted into the stack is the first one to be removed from it. A stack has two basic operations: push and pop. Push adds an element to the top of the stack, while pop removes and returns the top element from the stack. A stack can be used to implement recursion, reverse a string, check for balanced parentheses, and evaluate postfix expressions. A minimal C sketch of a stack appears right after this list.
-
-- **Queues:** A queue is a collection of elements that follows the First-In-First-Out (FIFO) principle. This means that the first element inserted into the queue is the first one to be removed from it. A queue has two basic operations: enqueue and dequeue. Enqueue adds an element to the rear of the queue, while dequeue removes and returns the front element from the queue. A queue can be used to implement scheduling, buffering, simulation, and breadth-first search.
-
- **Linked Lists:** A linked list is a collection of elements that are stored in non-contiguous memory locations. Each element is called a node, which contains some data and a pointer to the next node. The first node is called the head, and the last node is called the tail. A linked list has no fixed size and can grow or shrink dynamically. A linked list can overcome some of the limitations of arrays, such as wasting memory space, difficulty in inserting and deleting elements, and lack of flexibility in resizing. However, linked lists have some drawbacks, such as increased complexity in implementation and access time, difficulty in random access, and extra memory space for pointers. A companion C sketch of a linked list also follows this list.
-
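-To make the LIFO behavior concrete, here is a minimal C sketch of an array-based stack with push and pop. It is illustrative only, not code taken from the book, and the names (`Stack`, `push`, `pop`, `MAX`) are our own:
-
-```c
-#include <stdio.h>
-
-#define MAX 100  /* fixed capacity of the array-based stack */
-
-typedef struct {
-    int items[MAX];
-    int top;  /* index of the top element; -1 when empty */
-} Stack;
-
-void init(Stack *s) { s->top = -1; }
-
-int push(Stack *s, int value) {
-    if (s->top == MAX - 1) return 0;  /* overflow */
-    s->items[++s->top] = value;
-    return 1;
-}
-
-int pop(Stack *s, int *value) {
-    if (s->top == -1) return 0;  /* underflow */
-    *value = s->items[s->top--];
-    return 1;
-}
-
-int main(void) {
-    Stack s;
-    int v;
-    init(&s);
-    push(&s, 1);
-    push(&s, 2);
-    push(&s, 3);
-    while (pop(&s, &v))
-        printf("%d ", v);  /* prints 3 2 1: last in, first out */
-    printf("\n");
-    return 0;
-}
-```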
-
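-Similarly, here is a minimal C sketch of a singly linked list showing the node structure, insertion at the head, traversal, and cleanup. Again, this is our own illustration rather than the book's code:
-
-```c
-#include <stdio.h>
-#include <stdlib.h>
-
-typedef struct Node {
-    int data;
-    struct Node *next;  /* pointer to the successor node */
-} Node;
-
-/* Insert a new node at the head; returns the new head. */
-Node *insert_front(Node *head, int value) {
-    Node *node = malloc(sizeof(Node));
-    if (node == NULL) return head;  /* allocation failed; list unchanged */
-    node->data = value;
-    node->next = head;
-    return node;
-}
-
-int main(void) {
-    Node *head = NULL;
-    for (int i = 1; i <= 3; i++)
-        head = insert_front(head, i);
-
-    /* Traversal prints 3 2 1 because each insert happened at the head. */
-    for (Node *p = head; p != NULL; p = p->next)
-        printf("%d ", p->data);
-    printf("\n");
-
-    /* Free every node to avoid leaking memory. */
-    while (head != NULL) {
-        Node *tmp = head;
-        head = head->next;
-        free(tmp);
-    }
-    return 0;
-}
-```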
-
-
-
-
-
-
diff --git a/spaces/1phancelerku/anime-remove-background/Download LOST in BLUE and Explore a Mysterious Island on Android.md b/spaces/1phancelerku/anime-remove-background/Download LOST in BLUE and Explore a Mysterious Island on Android.md
deleted file mode 100644
index 7dae1bc2f7048a41d1adde53a637fb275c011e3d..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download LOST in BLUE and Explore a Mysterious Island on Android.md
+++ /dev/null
@@ -1,170 +0,0 @@
-
-
How to Download Lost in Blue for Android
-
Do you love survival games that challenge you to use your wits and skills to stay alive on a mysterious island? If so, you might want to check out Lost in Blue, a survival sandbox game developed by Volcano Force for Android devices. In this game, you will have to collect resources, craft items, build facilities, and fight enemies as you try to find a way back home. You can also team up with other players from around the world and form camps to cooperate and compete with each other. In this article, we will show you how to download Lost in Blue for Android and give you some tips on how to start playing.
-
What is Lost in Blue?
-
Lost in Blue is a survival sandbox game that was released globally in June 2023. The game is inspired by the popular Lost in Blue series of video games that were originally released for Nintendo DS. The game has received positive reviews from critics and players alike for its stunning graphics, immersive gameplay, and diverse content.
The game's story begins with a plane crash that leaves you stranded on a strange island. You will have to explore various natural environments like beaches, rainforests, volcanoes, glaciers, etc. and overcome different obstacles such as mutant zombies, militias, wild creatures, etc. You will also have to craft weapons and tools, build facilities and houses, and grow food to survive. You can choose from four different professions that affect your skills and abilities: hunter, gatherer, engineer, or doctor. You can also join or create camps with other players and cooperate or compete with them.
-
Why should you play Lost in Blue?
-
There are many reasons why you should download and play Lost in Blue if you are a fan of survival games. Here are some of them:
-
-
-
The game has amazing graphics that make you feel like you are really on an island. The game uses realistic lighting effects, dynamic weather changes, and detailed textures that create a stunning visual experience.
-
The game has engaging gameplay that keeps you hooked for hours. The game offers a lot of activities and challenges that test your survival skills. You can explore different locations, collect resources, craft items, build facilities, and fight enemies. You can also trade, chat, and cooperate with other players in real-time.
-
The game has diverse content that offers a lot of variety and replay value. The game has over 100 types of resources, over 200 types of items, over 50 types of facilities, and over 40 types of enemies. The game also has different modes, such as story mode, survival mode, and multiplayer mode.
-
-
How to download Lost in Blue for Android?
-
If you are interested in playing Lost in Blue, you will need to download and install the game on your Android device. The game is available for free on the Google Play Store, but it also has some in-app purchases that can enhance your gaming experience. Here is a step-by-step guide on how to download Lost in Blue for Android:
-
Requirements for downloading Lost in Blue
-
Before you download Lost in Blue, you should make sure that your device meets the minimum and recommended specifications for running the game smoothly. According to the game's official website, these are the requirements for downloading Lost in Blue:
-
-
-
Minimum
-
Recommended
-
-
-
Android 5.0 or higher
-
Android 8.0 or higher
-
-
-
2 GB of RAM
-
4 GB of RAM or more
-
-
-
2 GB of free storage space
-
4 GB of free storage space or more
-
-
-
Internet connection
-
Wi-Fi or 4G connection
-
-
-
Steps for downloading Lost in Blue
-
Once you have checked that your device meets the requirements, you can follow these steps to download Lost in Blue:
-
-
Open the Google Play Store app on your device and search for "Lost in Blue" or use this link to go directly to the game's page.
-
Tap on the "Install" button and wait for the game to download and install on your device. The game's size is about 1.5 GB, so it may take some time depending on your internet speed.
-
After the installation is complete, tap on the "Open" button or find the game's icon on your home screen and launch the game.
-
The game will ask you to grant some permissions, such as access to your storage, location, microphone, etc. You can choose to allow or deny these permissions, but some of them may be necessary for the game to function properly.
-
The game will also ask you to log in with your Google account or create a new account with your email address. You can choose either option, but logging in with your Google account will allow you to sync your progress across different devices and access some features like leaderboards and achievements.
-
The game will then load some data and resources and take you to the main menu. You can now start playing Lost in Blue!
-
-
Tips for downloading Lost in Blue
-
To avoid any issues or errors while downloading Lost in Blue, here are some tips that you should keep in mind:
-
-
Make sure that you have enough storage space on your device before downloading the game. You can check your available space by going to Settings > Storage on your device.
-
Make sure that you have a stable internet connection while downloading the game. You can use Wi-Fi or 4G if possible, as they are faster and more reliable than 3G or other networks.
-
Make sure that your device's battery is sufficiently charged before downloading the game. You can plug in your device to a power source or enable battery saver mode if needed.
-
If you encounter any problems while downloading the game, such as slow speed, interrupted download, or corrupted file, you can try clearing the cache and data of the Google Play Store app by going to Settings > Apps > Google Play Store > Storage > Clear Cache/Clear Data. You can also try restarting your device or using a different network.
-
If you have any questions or feedback about the game, you can contact the game's customer service by tapping on the "Help" button on the main menu or sending an email to support@volcanoforce.com.
-
-
How to start playing Lost in Blue?
in various events and activities. However, joining a camp also means that you will have to follow the camp's rules and share your resources with other members. You can also leave or change your camp at any time, but you will lose some of your progress and items.
-
To join a camp, follow these steps:
-
-
After choosing your profession, you will see a screen where you can join a camp. You can tap on the "Join Camp" button to see a list of available camps that you can join.
-
You can also tap on the "Create Camp" button to create your own camp. You will need to enter a camp name, a camp slogan, and a camp flag. You will also need to pay some gold coins as a creation fee.
-
Whether you join or create a camp, you will see a screen where you can see your camp's information, such as its name, slogan, flag, members, level, rank, etc. You can also see your camp's base on the map and visit it by tapping on the "Go" button.
-
You have successfully joined a camp!
-
-
How to survive in Lost in Blue?
-
Now that you have completed the initial steps, you are ready to start your survival adventure in Lost in Blue. However, surviving on the island is not easy, as you will face many dangers and difficulties. You will need to gather resources, craft items, build facilities, and fight enemies to stay alive. Here are some basic tips and tricks on how to survive in Lost in Blue:
-
Gathering resources
-
One of the most important aspects of survival is gathering resources. Resources are materials that you can collect from the environment and use for crafting or building. There are many types of resources in Lost in Blue, such as wood, stone, metal, fiber, food, water, etc. You can find them in different locations, such as forests, beaches, caves, etc.
-
To gather resources, follow these steps:
-
-
Find a resource node that you want to collect. You can see the name and icon of the resource on the top of the screen when you approach it.
-
Tap on the resource node to start gathering it. You will see a progress bar that shows how much of the resource you have collected.
-
If you have a tool that matches the resource type, such as an axe for wood or a pickaxe for stone, you can use it to gather faster and get more resources. You can equip a tool by tapping on its icon on the bottom right corner of the screen.
-
When you have collected enough of the resource, it will be added to your inventory. You can check your inventory by tapping on the backpack icon on the top right corner of the screen.
-
You have successfully gathered resources!
-
-
Crafting items
-
Another important aspect of survival is crafting items. Items are objects that you can create from the resources you gathered and use for various purposes. There are many types of items in Lost in Blue, such as weapons, tools, equipment, consumables, etc. You can use them to improve your survival chances, such as by increasing your attack power, defense, speed, health, etc.
-
To craft items, follow these steps:
-
-
Open your inventory by tapping on the backpack icon on the top right corner of the screen.
-
Tap on the "Craft" button on the bottom of the screen to see the list of items that you can craft.
-
Tap on the item that you want to craft to see its details, such as its name, description, icon, and required resources.
-
If you have enough resources to craft the item, you can tap on the "Craft" button below the item details. You will see a progress bar that shows how long it takes to craft the item.
-
When the crafting is complete, the item will be added to your inventory. You can equip or use the item by tapping on its icon in your inventory.
-
You have successfully crafted items!
-
-
Building facilities
-
Another important aspect of survival is building facilities. Facilities are structures that you can build from the resources you gathered and use for various purposes. There are many types of facilities in Lost in Blue, such as houses, workbenches, towers, fences, etc. You can use them to enhance your camp, such as by providing shelter, storage, production, defense, etc.
-
To build facilities, follow these steps:
-
-
Go to your camp's base by tapping on the "Camp" button on the top left corner of the screen and then tapping on the "Go" button next to your camp's name.
-
Tap on the "Build" button on the bottom of the screen to see the list of facilities that you can build.
-
Tap on the facility that you want to build to see its details, such as its name, description, icon, and required resources.
-
If you have enough resources to build the facility, you can tap on the "Build" button below the facility details. You will see a blueprint of the facility that you can drag and rotate on the ground.
-
When you have found a suitable spot for the facility, you can tap on the "Confirm" button to start building it. You will see a progress bar that shows how long it takes to build the facility.
-
When the building is complete, the facility will be ready to use. You can interact with it by tapping on it and choosing an option from the menu.
-
You have successfully built facilities!
-
-
Fighting enemies
-
The final aspect of survival that we will cover is fighting enemies. Enemies are hostile creatures or players that will try to attack you and harm you. There are many types of enemies in Lost in Blue, such as zombies, wild animals, militias, other players, etc. You will need to fight them to protect yourself and your camp, as well as to get loot and rewards from them.
-
To fight enemies, follow these steps:
-
-
Find an enemy that you want to fight. You can see the name and health bar of the enemy on the top of the screen when you target it.
-
Tap on the enemy to start attacking it. You will use your equipped weapon or tool to deal damage to the enemy. You can also use the virtual joystick and buttons to move around and dodge the enemy's attacks.
-
If you have any items or skills that can help you in combat, such as consumables, traps, or abilities, you can use them by tapping on their icons on the bottom right corner of the screen.
-
When you have reduced the enemy's health to zero, you will defeat it. You can loot the enemy's corpse by tapping on it and choosing an option from the menu. You can get various items and resources from the enemy, such as meat, leather, metal, etc.
-
You have successfully fought enemies!
-
-
Conclusion
-
In this article, we have shown you how to download Lost in Blue for Android and how to start playing the game. We have also given you some basic tips and tricks on how to survive on the island. We hope that you have found this article helpful and informative. If you are looking for a survival sandbox game that offers a lot of fun and challenge, you should definitely give Lost in Blue a try. You can download the game for free from the Google Play Store and start your adventure today!
-
FAQs
-
Here are some frequently asked questions about Lost in Blue and their answers:
-
-
Q: How can I play with my friends in Lost in Blue?
-
A: You can play with your friends in Lost in Blue by joining or creating a camp with them. You can invite your friends to your camp by tapping on the "Camp" button on the top left corner of the screen and then tapping on the "Invite" button next to your friend's name. You can also join your friend's camp by tapping on the "Join Camp" button on the main menu and then searching for your friend's camp name.
-
Q: How can I save my progress in Lost in Blue?
-
A: You can save your progress in Lost in Blue by logging in with your Google account or creating a new account with your email address. The game will automatically sync your progress across different devices and platforms. You can also manually save your progress by tapping on the "Settings" button on the top right corner of the screen and then tapping on the "Save" button.
-
Q: How can I change my profession in Lost in Blue?
-
A: You can change your profession in Lost in Blue by tapping on the "Profession" button on the top right corner of the screen and then tapping on the "Change" button below your current profession. You will need to pay some gold coins as a change fee. You can also change your profession for free once every 24 hours.
-
Q: How can I get more gold coins in Lost in Blue?
-
A: You can get more gold coins in Lost in Blue by completing various tasks and achievements, participating in events and activities, trading with other players, or purchasing them with real money.
-
Q: How can I contact the game's customer service?
-
A: You can contact the game's customer service by tapping on the "Help" button on the main menu or sending an email to support@volcanoforce.com.
-
-
-
\ No newline at end of file
diff --git a/spaces/2023Liu2023/bingo/tests/parse.ts b/spaces/2023Liu2023/bingo/tests/parse.ts
deleted file mode 100644
index 92940fe6315f1d7cb2b267ba5e5a7e26460a1de3..0000000000000000000000000000000000000000
--- a/spaces/2023Liu2023/bingo/tests/parse.ts
+++ /dev/null
@@ -1,13 +0,0 @@
-import { promises as fs } from 'fs'
-import { join } from 'path'
-import { parseHeadersFromCurl } from '@/lib/utils'
-
-(async () => {
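- // parse headers from a bash-style curl fixture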
- const content = await fs.readFile(join(__dirname, './fixtures/curl.txt'), 'utf-8')
- const headers = parseHeadersFromCurl(content)
- console.log(headers)
-
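- // parse headers from a Windows cmd-style curl fixture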
- const cmdContent = await fs.readFile(join(__dirname, './fixtures/cmd.txt'), 'utf-8')
- const cmdHeaders = parseHeadersFromCurl(cmdContent)
- console.log(cmdHeaders)
-})()
diff --git a/spaces/2ndelement/voicevox/build_util/check_release_build.py b/spaces/2ndelement/voicevox/build_util/check_release_build.py
deleted file mode 100644
index 71bf49c080f4fc39d1e08ccaa9cd6b1c35731ce8..0000000000000000000000000000000000000000
--- a/spaces/2ndelement/voicevox/build_util/check_release_build.py
+++ /dev/null
@@ -1,70 +0,0 @@
-"""
-Test the release build output
-"""
-import argparse
-import json
-import time
-from io import BytesIO
-from pathlib import Path
-from subprocess import Popen
-from urllib.parse import urlencode
-from urllib.request import Request, urlopen
-
-import soundfile
-
-base_url = "http://127.0.0.1:50021/"
-
-
-def test_release_build(dist_dir: Path, skip_run_process: bool) -> None:
- run_file = dist_dir / "run"
- if not run_file.exists():
- run_file = dist_dir / "run.exe"
-
- # launch the engine
- process = None
- if not skip_run_process:
- process = Popen([run_file.absolute()], cwd=dist_dir)
- time.sleep(60) # wait for the engine to start
-
- # version retrieval test
- req = Request(base_url + "version")
- with urlopen(req) as res:
- assert len(res.read()) > 0
-
- # text -> query
- text = "こんにちは、音声合成の世界へようこそ"
- req = Request(
- base_url + "audio_query?" + urlencode({"speaker": "1", "text": text}),
- method="POST",
- )
- with urlopen(req) as res:
- query = json.loads(res.read().decode("utf-8"))
-
- # query -> audio
- req = Request(base_url + "synthesis?speaker=1", method="POST")
- req.add_header("Content-Type", "application/json")
- req.data = json.dumps(query).encode("utf-8")
- with urlopen(req) as res:
- wave = res.read()
- soundfile.read(BytesIO(wave))
-
- # engine manifest
- req = Request(base_url + "engine_manifest", method="GET")
- with urlopen(req) as res:
- manifest = json.loads(res.read().decode("utf-8"))
- assert "uuid" in manifest
-
- if not skip_run_process:
- # プロセスが稼働中であることを確認
- assert process.poll() is None
-
- # shut down
- process.terminate()
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument("--dist_dir", type=Path, default=Path("dist/"))
- parser.add_argument("--skip_run_process", action="store_true")
- args = parser.parse_args()
- test_release_build(dist_dir=args.dist_dir, skip_run_process=args.skip_run_process)
diff --git a/spaces/7hao/bingo/src/lib/utils.ts b/spaces/7hao/bingo/src/lib/utils.ts
deleted file mode 100644
index 07feedb34e356b1b3cf867872f32d47a96ae12fb..0000000000000000000000000000000000000000
--- a/spaces/7hao/bingo/src/lib/utils.ts
+++ /dev/null
@@ -1,138 +0,0 @@
-import { clsx, type ClassValue } from 'clsx'
-import { customAlphabet } from 'nanoid'
-import { twMerge } from 'tailwind-merge'
-
-export function cn(...inputs: ClassValue[]) {
- return twMerge(clsx(inputs))
-}
-
-export const nanoid = customAlphabet(
- '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz',
- 7
-) // 7-character random string
-
-export function createChunkDecoder() {
- const decoder = new TextDecoder()
- return function (chunk: Uint8Array | undefined): string {
- if (!chunk) return ''
- return decoder.decode(chunk, { stream: true })
- }
-}
-
-export function random (start: number, end: number) {
- return start + Math.ceil(Math.random() * (end - start))
-}
-
-export function randomIP() {
- return `11.${random(104, 107)}.${random(1, 255)}.${random(1, 255)}`
-}
-
-export function parseHeadersFromCurl(content: string) {
- const re = /-H '([^:]+):\s*([^']+)/mg
- const headers: HeadersInit = {}
- content = content.replaceAll('-H "', '-H \'').replaceAll('" ^', '\'\\').replaceAll('^\\^"', '"') // convert a cmd-style curl command into bash-style curl
- content.replace(re, (_: string, key: string, value: string) => {
- headers[key] = value
- return ''
- })
-
- return headers
-}
-
-export const ChunkKeys = ['BING_HEADER', 'BING_HEADER1', 'BING_HEADER2']
-export function encodeHeadersToCookie(content: string) {
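- // base64-encode the headers and split into chunks of up to 4000 chars, one per cookie key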
- const base64Content = btoa(content)
- const contentChunks = base64Content.match(/.{1,4000}/g) || []
- return ChunkKeys.map((key, index) => `${key}=${contentChunks[index] ?? ''}`)
-}
-
-export function extraCurlFromCookie(cookies: Partial<{ [key: string]: string }>) {
- let base64Content = ''
- ChunkKeys.forEach((key) => {
- base64Content += (cookies[key] || '')
- })
- try {
- return atob(base64Content)
- } catch(e) {
- return ''
- }
-}
-
-export function extraHeadersFromCookie(cookies: Partial<{ [key: string]: string }>) {
- return parseHeadersFromCurl(extraCurlFromCookie(cookies))
-}
-
-export function formatDate(input: string | number | Date): string {
- const date = new Date(input)
- return date.toLocaleDateString('en-US', {
- month: 'long',
- day: 'numeric',
- year: 'numeric'
- })
-}
-
-export function parseCookie(cookie: string, cookieName: string) {
- const targetCookie = new RegExp(`(?:[; ]|^)${cookieName}=([^;]*)`).test(cookie) ? RegExp.$1 : cookie
- return targetCookie ? decodeURIComponent(targetCookie).trim() : cookie.indexOf('=') === -1 ? cookie.trim() : ''
-}
-
-export function parseCookies(cookie: string, cookieNames: string[]) {
- const cookies: { [key: string]: string } = {}
- cookieNames.forEach(cookieName => {
- cookies[cookieName] = parseCookie(cookie, cookieName)
- })
- return cookies
-}
-
-export const DEFAULT_UA = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36 Edg/115.0.0.0'
-export const DEFAULT_IP = process.env.BING_IP || randomIP()
-
-export function parseUA(ua?: string, default_ua = DEFAULT_UA) {
- return / EDGE?/i.test(decodeURIComponent(ua || '')) ? decodeURIComponent(ua!.trim()) : default_ua
-}
-
-export function createHeaders(cookies: Partial<{ [key: string]: string }>, defaultHeaders?: Partial<{ [key: string]: string }>) {
- let {
- BING_COOKIE = process.env.BING_COOKIE,
- BING_UA = process.env.BING_UA,
- BING_IP = process.env.BING_IP,
- BING_HEADER = process.env.BING_HEADER,
- } = cookies
-
- if (BING_HEADER) {
- return extraHeadersFromCookie({
- BING_HEADER,
- ...cookies,
- })
- }
-
- const ua = parseUA(BING_UA)
-
- if (!BING_COOKIE) {
- BING_COOKIE = defaultHeaders?.IMAGE_BING_COOKIE || 'xxx' // on Hugging Face it currently works without a real cookie
- }
-
- const parsedCookie = parseCookie(BING_COOKIE, '_U')
- if (!parsedCookie) {
- throw new Error('Invalid Cookie')
- }
- return {
- 'x-forwarded-for': BING_IP || DEFAULT_IP,
- 'Accept-Encoding': 'gzip, deflate, br',
- 'Accept-Language': 'zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6',
- 'User-Agent': ua!,
- 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32',
- cookie: `_U=${parsedCookie}`, // parsedCookie is guaranteed non-empty by the check above
- }
-}
-
-export class WatchDog {
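- // resettable watchdog: fn fires after timeout (plus up to 1s of jitter) unless watch() is called again or reset() cancels it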
- private tid = 0
- watch(fn: Function, timeout = 2000) {
- clearTimeout(this.tid)
- this.tid = setTimeout(fn, timeout + Math.random() * 1000)
- }
- reset() {
- clearTimeout(this.tid)
- }
-}
diff --git a/spaces/801artistry/RVC801/infer/lib/uvr5_pack/lib_v5/layers_33966KB.py b/spaces/801artistry/RVC801/infer/lib/uvr5_pack/lib_v5/layers_33966KB.py
deleted file mode 100644
index 9b127bc6427f5c60c8cf85603a3d8a093c3501c4..0000000000000000000000000000000000000000
--- a/spaces/801artistry/RVC801/infer/lib/uvr5_pack/lib_v5/layers_33966KB.py
+++ /dev/null
@@ -1,126 +0,0 @@
-import torch
-import torch.nn.functional as F
-from torch import nn
-
-from . import spec_utils
-
-
-class Conv2DBNActiv(nn.Module):
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU):
- super(Conv2DBNActiv, self).__init__()
- self.conv = nn.Sequential(
- nn.Conv2d(
- nin,
- nout,
- kernel_size=ksize,
- stride=stride,
- padding=pad,
- dilation=dilation,
- bias=False,
- ),
- nn.BatchNorm2d(nout),
- activ(),
- )
-
- def __call__(self, x):
- return self.conv(x)
-
-
-class SeperableConv2DBNActiv(nn.Module):
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU):
- super(SeperableConv2DBNActiv, self).__init__()
- self.conv = nn.Sequential(
- nn.Conv2d(
- nin,
- nin,
- kernel_size=ksize,
- stride=stride,
- padding=pad,
- dilation=dilation,
- groups=nin,
- bias=False,
- ),
- nn.Conv2d(nin, nout, kernel_size=1, bias=False),
- nn.BatchNorm2d(nout),
- activ(),
- )
-
- def __call__(self, x):
- return self.conv(x)
-
-
-class Encoder(nn.Module):
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.LeakyReLU):
- super(Encoder, self).__init__()
- self.conv1 = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ)
- self.conv2 = Conv2DBNActiv(nout, nout, ksize, stride, pad, activ=activ)
-
- def __call__(self, x):
- skip = self.conv1(x)
- h = self.conv2(skip)
-
- return h, skip
-
-
-class Decoder(nn.Module):
- def __init__(
- self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.ReLU, dropout=False
- ):
- super(Decoder, self).__init__()
- self.conv = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ)
- self.dropout = nn.Dropout2d(0.1) if dropout else None
-
- def __call__(self, x, skip=None):
- x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True)
- if skip is not None:
- skip = spec_utils.crop_center(skip, x)
- x = torch.cat([x, skip], dim=1)
- h = self.conv(x)
-
- if self.dropout is not None:
- h = self.dropout(h)
-
- return h
-
-
-class ASPPModule(nn.Module):
- def __init__(self, nin, nout, dilations=(4, 8, 16, 32, 64), activ=nn.ReLU):
- super(ASPPModule, self).__init__()
- self.conv1 = nn.Sequential(
- nn.AdaptiveAvgPool2d((1, None)),
- Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ),
- )
- self.conv2 = Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ)
- self.conv3 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[0], dilations[0], activ=activ
- )
- self.conv4 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[1], dilations[1], activ=activ
- )
- self.conv5 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[2], dilations[2], activ=activ
- )
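- # note: conv6 and conv7 reuse dilations[2]; BaseASPPNet in nets_33966KB passes a 4-element dilations tuple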
- self.conv6 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[2], dilations[2], activ=activ
- )
- self.conv7 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[2], dilations[2], activ=activ
- )
- self.bottleneck = nn.Sequential(
- Conv2DBNActiv(nin * 7, nout, 1, 1, 0, activ=activ), nn.Dropout2d(0.1)
- )
-
- def forward(self, x):
- _, _, h, w = x.size()
- feat1 = F.interpolate(
- self.conv1(x), size=(h, w), mode="bilinear", align_corners=True
- )
- feat2 = self.conv2(x)
- feat3 = self.conv3(x)
- feat4 = self.conv4(x)
- feat5 = self.conv5(x)
- feat6 = self.conv6(x)
- feat7 = self.conv7(x)
- out = torch.cat((feat1, feat2, feat3, feat4, feat5, feat6, feat7), dim=1)
- bottle = self.bottleneck(out)
- return bottle
diff --git a/spaces/801artistry/RVC801/lib/uvr5_pack/lib_v5/nets_33966KB.py b/spaces/801artistry/RVC801/lib/uvr5_pack/lib_v5/nets_33966KB.py
deleted file mode 100644
index b8986f968dc5383e65d35aac6e4367299de3378b..0000000000000000000000000000000000000000
--- a/spaces/801artistry/RVC801/lib/uvr5_pack/lib_v5/nets_33966KB.py
+++ /dev/null
@@ -1,122 +0,0 @@
-import torch
-from torch import nn
-import torch.nn.functional as F
-
-from . import layers_33966KB as layers
-
-
-class BaseASPPNet(nn.Module):
- def __init__(self, nin, ch, dilations=(4, 8, 16, 32)):
- super(BaseASPPNet, self).__init__()
- self.enc1 = layers.Encoder(nin, ch, 3, 2, 1)
- self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1)
- self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1)
- self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1)
-
- self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations)
-
- self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1)
- self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1)
- self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1)
- self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1)
-
- def __call__(self, x):
- h, e1 = self.enc1(x)
- h, e2 = self.enc2(h)
- h, e3 = self.enc3(h)
- h, e4 = self.enc4(h)
-
- h = self.aspp(h)
-
- h = self.dec4(h, e4)
- h = self.dec3(h, e3)
- h = self.dec2(h, e2)
- h = self.dec1(h, e1)
-
- return h
-
-
-class CascadedASPPNet(nn.Module):
- def __init__(self, n_fft):
- super(CascadedASPPNet, self).__init__()
- self.stg1_low_band_net = BaseASPPNet(2, 16)
- self.stg1_high_band_net = BaseASPPNet(2, 16)
-
- self.stg2_bridge = layers.Conv2DBNActiv(18, 8, 1, 1, 0)
- self.stg2_full_band_net = BaseASPPNet(8, 16)
-
- self.stg3_bridge = layers.Conv2DBNActiv(34, 16, 1, 1, 0)
- self.stg3_full_band_net = BaseASPPNet(16, 32)
-
- self.out = nn.Conv2d(32, 2, 1, bias=False)
- self.aux1_out = nn.Conv2d(16, 2, 1, bias=False)
- self.aux2_out = nn.Conv2d(16, 2, 1, bias=False)
-
- self.max_bin = n_fft // 2
- self.output_bin = n_fft // 2 + 1
-
- self.offset = 128
-
- def forward(self, x, aggressiveness=None):
- mix = x.detach()
- x = x.clone()
-
- x = x[:, :, : self.max_bin]
-
- bandw = x.size()[2] // 2
- aux1 = torch.cat(
- [
- self.stg1_low_band_net(x[:, :, :bandw]),
- self.stg1_high_band_net(x[:, :, bandw:]),
- ],
- dim=2,
- )
-
- h = torch.cat([x, aux1], dim=1)
- aux2 = self.stg2_full_band_net(self.stg2_bridge(h))
-
- h = torch.cat([x, aux1, aux2], dim=1)
- h = self.stg3_full_band_net(self.stg3_bridge(h))
-
- mask = torch.sigmoid(self.out(h))
- mask = F.pad(
- input=mask,
- pad=(0, 0, 0, self.output_bin - mask.size()[2]),
- mode="replicate",
- )
-
- if self.training:
- aux1 = torch.sigmoid(self.aux1_out(aux1))
- aux1 = F.pad(
- input=aux1,
- pad=(0, 0, 0, self.output_bin - aux1.size()[2]),
- mode="replicate",
- )
- aux2 = torch.sigmoid(self.aux2_out(aux2))
- aux2 = F.pad(
- input=aux2,
- pad=(0, 0, 0, self.output_bin - aux2.size()[2]),
- mode="replicate",
- )
- return mask * mix, aux1 * mix, aux2 * mix
- else:
- if aggressiveness:
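- # Raise the mask to a power > 1 to suppress low-confidence bins:
- # a gentler exponent below split_bin, a stronger one above it.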
- mask[:, :, : aggressiveness["split_bin"]] = torch.pow(
- mask[:, :, : aggressiveness["split_bin"]],
- 1 + aggressiveness["value"] / 3,
- )
- mask[:, :, aggressiveness["split_bin"] :] = torch.pow(
- mask[:, :, aggressiveness["split_bin"] :],
- 1 + aggressiveness["value"],
- )
-
- return mask * mix
-
- def predict(self, x_mag, aggressiveness=None):
- h = self.forward(x_mag, aggressiveness)
-
- if self.offset > 0:
- h = h[:, :, :, self.offset : -self.offset]
- assert h.size()[3] > 0
-
- return h
diff --git a/spaces/AI-Hobbyist/Hoyo-RVC/infer_batch_rvc.py b/spaces/AI-Hobbyist/Hoyo-RVC/infer_batch_rvc.py
deleted file mode 100644
index 311fe912b6c043f9852b5860ac296bcb46b57a71..0000000000000000000000000000000000000000
--- a/spaces/AI-Hobbyist/Hoyo-RVC/infer_batch_rvc.py
+++ /dev/null
@@ -1,217 +0,0 @@
-"""
-v1
-runtime\python.exe myinfer-v2-0528.py 0 "E:\codes\py39\RVC-beta\todo-songs" "E:\codes\py39\logs\mi-test\added_IVF677_Flat_nprobe_7.index" harvest "E:\codes\py39\RVC-beta\output" "E:\codes\py39\test-20230416b\weights\mi-test.pth" 0.66 cuda:0 True 3 0 1 0.33
-v2
-runtime\python.exe myinfer-v2-0528.py 0 "E:\codes\py39\RVC-beta\todo-songs" "E:\codes\py39\test-20230416b\logs\mi-test-v2\aadded_IVF677_Flat_nprobe_1_v2.index" harvest "E:\codes\py39\RVC-beta\output_v2" "E:\codes\py39\test-20230416b\weights\mi-test-v2.pth" 0.66 cuda:0 True 3 0 1 0.33
-"""
-import os
-import sys
-
-now_dir = os.getcwd()
-sys.path.append(now_dir)
-import torch
-import tqdm as tq
-from multiprocessing import cpu_count
-
-
-class Config:
- def __init__(self, device, is_half):
- self.device = device
- self.is_half = is_half
- self.n_cpu = 0
- self.gpu_name = None
- self.gpu_mem = None
- self.x_pad, self.x_query, self.x_center, self.x_max = self.device_config()
-
- def device_config(self) -> tuple:
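- """Select device/precision and return the (x_pad, x_query, x_center,
- x_max) windowing parameters used by the inference pipeline, shrunk
- for GPUs with 4 GB of memory or less."""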
- if torch.cuda.is_available():
- i_device = int(self.device.split(":")[-1])
- self.gpu_name = torch.cuda.get_device_name(i_device)
- if (
- ("16" in self.gpu_name and "V100" not in self.gpu_name.upper())
- or "P40" in self.gpu_name.upper()
- or "1060" in self.gpu_name
- or "1070" in self.gpu_name
- or "1080" in self.gpu_name
- ):
- print("16系/10系显卡和P40强制单精度")
- self.is_half = False
- for config_file in ["32k.json", "40k.json", "48k.json"]:
- with open(f"configs/{config_file}", "r") as f:
- strr = f.read().replace("true", "false")
- with open(f"configs/{config_file}", "w") as f:
- f.write(strr)
- with open("trainset_preprocess_pipeline_print.py", "r") as f:
- strr = f.read().replace("3.7", "3.0")
- with open("trainset_preprocess_pipeline_print.py", "w") as f:
- f.write(strr)
- else:
- self.gpu_name = None
- self.gpu_mem = int(
- torch.cuda.get_device_properties(i_device).total_memory
- / 1024
- / 1024
- / 1024
- + 0.4
- )
- if self.gpu_mem <= 4:
- with open("trainset_preprocess_pipeline_print.py", "r") as f:
- strr = f.read().replace("3.7", "3.0")
- with open("trainset_preprocess_pipeline_print.py", "w") as f:
- f.write(strr)
- elif torch.backends.mps.is_available():
- print("No supported NVIDIA GPU found, using MPS for inference")
- self.device = "mps"
- else:
- print("No supported NVIDIA GPU found, using CPU for inference")
- self.device = "cpu"
- self.is_half = False # half precision is not usable on CPU
-
- if self.n_cpu == 0:
- self.n_cpu = cpu_count()
-
- if self.is_half:
- # settings for 6 GB of VRAM
- x_pad = 3
- x_query = 10
- x_center = 60
- x_max = 65
- else:
- # settings for 5 GB of VRAM
- x_pad = 1
- x_query = 6
- x_center = 38
- x_max = 41
-
- if self.gpu_mem is not None and self.gpu_mem <= 4:
- x_pad = 1
- x_query = 5
- x_center = 30
- x_max = 32
-
- return x_pad, x_query, x_center, x_max
-
-
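-# Positional command-line arguments; see the usage examples in the module docstring.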
-f0up_key = sys.argv[1]
-input_path = sys.argv[2]
-index_path = sys.argv[3]
-f0method = sys.argv[4] # harvest or pm
-opt_path = sys.argv[5]
-model_path = sys.argv[6]
-index_rate = float(sys.argv[7])
-device = sys.argv[8]
-is_half = sys.argv[9].lower() in ("true", "1")  # bool() of any non-empty string is always True
-filter_radius = int(sys.argv[10])
-resample_sr = int(sys.argv[11])
-rms_mix_rate = float(sys.argv[12])
-protect = float(sys.argv[13])
-print(sys.argv)
-config = Config(device, is_half)
-now_dir = os.getcwd()
-sys.path.append(now_dir)
-from vc_infer_pipeline import VC
-from infer_pack.models import (
- SynthesizerTrnMs256NSFsid,
- SynthesizerTrnMs256NSFsid_nono,
- SynthesizerTrnMs768NSFsid,
- SynthesizerTrnMs768NSFsid_nono,
-)
-from my_utils import load_audio
-from fairseq import checkpoint_utils
-from scipy.io import wavfile
-
-hubert_model = None
-
-
-def load_hubert():
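- # Load the HuBERT content encoder ("hubert_base.pt") once and cache it in the module-global hubert_model.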
- global hubert_model
- models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task(
- ["hubert_base.pt"],
- suffix="",
- )
- hubert_model = models[0]
- hubert_model = hubert_model.to(device)
- if is_half:
- hubert_model = hubert_model.half()
- else:
- hubert_model = hubert_model.float()
- hubert_model.eval()
-
-
-def vc_single(sid, input_audio, f0_up_key, f0_file, f0_method, file_index, index_rate):
- global tgt_sr, net_g, vc, hubert_model, version
- if input_audio is None:
- return "You need to upload an audio", None
- f0_up_key = int(f0_up_key)
- audio = load_audio(input_audio, 16000)
- times = [0, 0, 0]
- if hubert_model is None:
- load_hubert()
- if_f0 = cpt.get("f0", 1)
- # audio_opt=vc.pipeline(hubert_model,net_g,sid,audio,times,f0_up_key,f0_method,file_index,file_big_npy,index_rate,if_f0,f0_file=f0_file)
- audio_opt = vc.pipeline(
- hubert_model,
- net_g,
- sid,
- audio,
- input_audio,
- times,
- f0_up_key,
- f0_method,
- file_index,
- index_rate,
- if_f0,
- filter_radius,
- tgt_sr,
- resample_sr,
- rms_mix_rate,
- version,
- protect,
- f0_file=f0_file,
- )
- print(times)
- return audio_opt
-
-
-def get_vc(model_path):
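- # Load an RVC checkpoint and build the matching synthesizer and pipeline (globals consumed by vc_single).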
- global n_spk, tgt_sr, net_g, vc, cpt, device, is_half, version
- print("loading pth %s" % model_path)
- cpt = torch.load(model_path, map_location="cpu")
- tgt_sr = cpt["config"][-1]
- cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk
- if_f0 = cpt.get("f0", 1)
- version = cpt.get("version", "v1")
- if version == "v1":
- if if_f0 == 1:
- net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=is_half)
- else:
- net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"])
- elif version == "v2":
- if if_f0 == 1: #
- net_g = SynthesizerTrnMs768NSFsid(*cpt["config"], is_half=is_half)
- else:
- net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"])
- del net_g.enc_q
- print(net_g.load_state_dict(cpt["weight"], strict=False))  # without this line the old weights are not cleanly replaced, oddly enough
- net_g.eval().to(device)
- if is_half:
- net_g = net_g.half()
- else:
- net_g = net_g.float()
- vc = VC(tgt_sr, config)
- n_spk = cpt["config"][-3]
- # return {"visible": True,"maximum": n_spk, "__type__": "update"}
-
-
-get_vc(model_path)
-audios = os.listdir(input_path)
-for file in tq.tqdm(audios):
- if file.endswith(".wav"):
- file_path = os.path.join(input_path, file)
- wav_opt = vc_single(
- 0, file_path, f0up_key, None, f0method, index_path, index_rate
- )
- out_path = os.path.join(opt_path, file)
- wavfile.write(out_path, tgt_sr, wav_opt)
diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/hifigan/hifigan.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/hifigan/hifigan.py
deleted file mode 100644
index ae7e61f56b00d60bcc49a18ece3edbe54746f7ea..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/hifigan/hifigan.py
+++ /dev/null
@@ -1,365 +0,0 @@
-import torch
-import torch.nn.functional as F
-import torch.nn as nn
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-
-from modules.parallel_wavegan.layers import UpsampleNetwork, ConvInUpsampleNetwork
-from modules.parallel_wavegan.models.source import SourceModuleHnNSF
-import numpy as np
-
-LRELU_SLOPE = 0.1
-
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-def apply_weight_norm(m):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- weight_norm(m)
-
-
-def get_padding(kernel_size, dilation=1):
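- # "same" padding for an odd kernel size at the given dilation rate.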
- return int((kernel_size * dilation - dilation) / 2)
-
-
-class ResBlock1(torch.nn.Module):
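- """HiFi-GAN residual block: three dilated convs, each paired with a
- plain conv, with LeakyReLU activations and residual additions."""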
- def __init__(self, h, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.h = h
- self.convs1 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2])))
- ])
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1)))
- ])
- self.convs2.apply(init_weights)
-
- def forward(self, x):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- xt = c2(xt)
- x = xt + x
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, h, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.h = h
- self.convs = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1])))
- ])
- self.convs.apply(init_weights)
-
- def forward(self, x):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- xt = c(xt)
- x = xt + x
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Conv1d1x1(Conv1d):
- """1x1 Conv1d with customized initialization."""
-
- def __init__(self, in_channels, out_channels, bias):
- """Initialize 1x1 Conv1d module."""
- super(Conv1d1x1, self).__init__(in_channels, out_channels,
- kernel_size=1, padding=0,
- dilation=1, bias=bias)
-
-
-class HifiGanGenerator(torch.nn.Module):
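- """HiFi-GAN generator: upsamples 80-bin mel frames with transposed convs
- and multi-kernel ResBlocks; when use_pitch_embed is set, an NSF harmonic
- source signal derived from f0 is injected at every upsampling stage."""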
- def __init__(self, h, c_out=1):
- super(HifiGanGenerator, self).__init__()
- self.h = h
- self.num_kernels = len(h['resblock_kernel_sizes'])
- self.num_upsamples = len(h['upsample_rates'])
-
- if h['use_pitch_embed']:
- self.harmonic_num = 8
- self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(h['upsample_rates']))
- self.m_source = SourceModuleHnNSF(
- sampling_rate=h['audio_sample_rate'],
- harmonic_num=self.harmonic_num)
- self.noise_convs = nn.ModuleList()
- self.conv_pre = weight_norm(Conv1d(80, h['upsample_initial_channel'], 7, 1, padding=3))
- resblock = ResBlock1 if h['resblock'] == '1' else ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(h['upsample_rates'], h['upsample_kernel_sizes'])):
- c_cur = h['upsample_initial_channel'] // (2 ** (i + 1))
- self.ups.append(weight_norm(
- ConvTranspose1d(c_cur * 2, c_cur, k, u, padding=(k - u) // 2)))
- if h['use_pitch_embed']:
- if i + 1 < len(h['upsample_rates']):
- stride_f0 = np.prod(h['upsample_rates'][i + 1:])
- self.noise_convs.append(Conv1d(
- 1, c_cur, kernel_size=stride_f0 * 2, stride=stride_f0, padding=stride_f0 // 2))
- else:
- self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1))
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = h['upsample_initial_channel'] // (2 ** (i + 1))
- for j, (k, d) in enumerate(zip(h['resblock_kernel_sizes'], h['resblock_dilation_sizes'])):
- self.resblocks.append(resblock(h, ch, k, d))
-
- self.conv_post = weight_norm(Conv1d(ch, c_out, 7, 1, padding=3))
- self.ups.apply(init_weights)
- self.conv_post.apply(init_weights)
-
- def forward(self, x, f0=None):
- if f0 is not None:
- # harmonic-source signal, noise-source signal, uv flag
- f0 = self.f0_upsamp(f0[:, None]).transpose(1, 2)
- har_source, noi_source, uv = self.m_source(f0)
- har_source = har_source.transpose(1, 2)
-
- x = self.conv_pre(x)
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, LRELU_SLOPE)
- x = self.ups[i](x)
- if f0 is not None:
- x_source = self.noise_convs[i](har_source)
- x = x + x_source
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- print('Removing weight norm...')
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
- remove_weight_norm(self.conv_pre)
- remove_weight_norm(self.conv_post)
-
-
-class DiscriminatorP(torch.nn.Module):
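- """Period discriminator: pads and folds the waveform into a 2-D
- (frames, period) grid so the 2-D convs can pick up periodic structure."""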
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False, use_cond=False, c_in=1):
- super(DiscriminatorP, self).__init__()
- self.use_cond = use_cond
- if use_cond:
- from utils.hparams import hparams
- t = hparams['hop_size']
- self.cond_net = torch.nn.ConvTranspose1d(80, 1, t * 2, stride=t, padding=t // 2)
- c_in = 2
-
- self.period = period
- norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList([
- norm_f(Conv2d(c_in, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))),
- norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))),
- norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))),
- norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))),
- norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(2, 0))),
- ])
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x, mel):
- fmap = []
- if self.use_cond:
- x_mel = self.cond_net(mel)
- x = torch.cat([x_mel, x], 1)
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
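- """Ensemble of DiscriminatorP instances at the prime periods 2, 3, 5, 7 and 11."""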
- def __init__(self, use_cond=False, c_in=1):
- super(MultiPeriodDiscriminator, self).__init__()
- self.discriminators = nn.ModuleList([
- DiscriminatorP(2, use_cond=use_cond, c_in=c_in),
- DiscriminatorP(3, use_cond=use_cond, c_in=c_in),
- DiscriminatorP(5, use_cond=use_cond, c_in=c_in),
- DiscriminatorP(7, use_cond=use_cond, c_in=c_in),
- DiscriminatorP(11, use_cond=use_cond, c_in=c_in),
- ])
-
- def forward(self, y, y_hat, mel=None):
- y_d_rs = []
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y, mel)
- y_d_g, fmap_g = d(y_hat, mel)
- y_d_rs.append(y_d_r)
- fmap_rs.append(fmap_r)
- y_d_gs.append(y_d_g)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class DiscriminatorS(torch.nn.Module):
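- """Scale discriminator: a stack of grouped 1-D convs over the raw or
- average-pooled waveform."""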
- def __init__(self, use_spectral_norm=False, use_cond=False, upsample_rates=None, c_in=1):
- super(DiscriminatorS, self).__init__()
- self.use_cond = use_cond
- if use_cond:
- t = np.prod(upsample_rates)
- self.cond_net = torch.nn.ConvTranspose1d(80, 1, t * 2, stride=t, padding=t // 2)
- c_in = 2
- norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList([
- norm_f(Conv1d(c_in, 128, 15, 1, padding=7)),
- norm_f(Conv1d(128, 128, 41, 2, groups=4, padding=20)),
- norm_f(Conv1d(128, 256, 41, 2, groups=16, padding=20)),
- norm_f(Conv1d(256, 512, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(512, 1024, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 1, groups=16, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ])
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x, mel):
- if self.use_cond:
- x_mel = self.cond_net(mel)
- x = torch.cat([x_mel, x], 1)
- fmap = []
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class MultiScaleDiscriminator(torch.nn.Module):
- def __init__(self, use_cond=False, c_in=1):
- super(MultiScaleDiscriminator, self).__init__()
- from utils.hparams import hparams
- self.discriminators = nn.ModuleList([
- DiscriminatorS(use_spectral_norm=True, use_cond=use_cond,
- upsample_rates=[4, 4, hparams['hop_size'] // 16],
- c_in=c_in),
- DiscriminatorS(use_cond=use_cond,
- upsample_rates=[4, 4, hparams['hop_size'] // 32],
- c_in=c_in),
- DiscriminatorS(use_cond=use_cond,
- upsample_rates=[4, 4, hparams['hop_size'] // 64],
- c_in=c_in),
- ])
- self.meanpools = nn.ModuleList([
- AvgPool1d(4, 2, padding=1),
- AvgPool1d(4, 2, padding=1)
- ])
-
- def forward(self, y, y_hat, mel=None):
- y_d_rs = []
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- if i != 0:
- y = self.meanpools[i - 1](y)
- y_hat = self.meanpools[i - 1](y_hat)
- y_d_r, fmap_r = d(y, mel)
- y_d_g, fmap_g = d(y_hat, mel)
- y_d_rs.append(y_d_r)
- fmap_rs.append(fmap_r)
- y_d_gs.append(y_d_g)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-def feature_loss(fmap_r, fmap_g):
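- # L1 feature-matching loss over all discriminator feature maps (scaled by 2 as in HiFi-GAN).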
- loss = 0
- for dr, dg in zip(fmap_r, fmap_g):
- for rl, gl in zip(dr, dg):
- loss += torch.mean(torch.abs(rl - gl))
-
- return loss * 2
-
-
-def discriminator_loss(disc_real_outputs, disc_generated_outputs):
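- # Least-squares GAN discriminator loss: real outputs pushed toward 1, generated toward 0.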
- r_losses = 0
- g_losses = 0
- for dr, dg in zip(disc_real_outputs, disc_generated_outputs):
- r_loss = torch.mean((1 - dr) ** 2)
- g_loss = torch.mean(dg ** 2)
- r_losses += r_loss
- g_losses += g_loss
- r_losses = r_losses / len(disc_real_outputs)
- g_losses = g_losses / len(disc_real_outputs)
- return r_losses, g_losses
-
-
-def cond_discriminator_loss(outputs):
- loss = 0
- for dg in outputs:
- g_loss = torch.mean(dg ** 2)
- loss += g_loss
- loss = loss / len(outputs)
- return loss
-
-
-def generator_loss(disc_outputs):
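- # Least-squares GAN generator loss: push scores on generated audio toward 1.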
- loss = 0
- for dg in disc_outputs:
- l = torch.mean((1 - dg) ** 2)
- loss += l
- loss = loss / len(disc_outputs)
- return loss
diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/models/diffusion/classifier.py b/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/models/diffusion/classifier.py
deleted file mode 100644
index 67e98b9d8ffb96a150b517497ace0a242d7163ef..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/models/diffusion/classifier.py
+++ /dev/null
@@ -1,267 +0,0 @@
-import os
-import torch
-import pytorch_lightning as pl
-from omegaconf import OmegaConf
-from torch.nn import functional as F
-from torch.optim import AdamW
-from torch.optim.lr_scheduler import LambdaLR
-from copy import deepcopy
-from einops import rearrange
-from glob import glob
-from natsort import natsorted
-
-from ldm.modules.diffusionmodules.openaimodel import EncoderUNetModel, UNetModel
-from ldm.util import log_txt_as_img, default, ismap, instantiate_from_config
-
-__models__ = {
- 'class_label': EncoderUNetModel,
- 'segmentation': UNetModel
-}
-
-
-def disabled_train(self, mode=True):
- """Overwrite model.train with this function to make sure train/eval mode
- does not change anymore."""
- return self
-
-
-class NoisyLatentImageClassifier(pl.LightningModule):
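- """Classifier trained on noised latents of a frozen diffusion model,
- predicting class labels or downsampled segmentation maps from
- q-sampled latents at random timesteps."""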
-
- def __init__(self,
- diffusion_path,
- num_classes,
- ckpt_path=None,
- pool='attention',
- label_key=None,
- diffusion_ckpt_path=None,
- scheduler_config=None,
- weight_decay=1.e-2,
- log_steps=10,
- monitor='val/loss',
- *args,
- **kwargs):
- super().__init__(*args, **kwargs)
- self.num_classes = num_classes
- # get latest config of diffusion model
- diffusion_config = natsorted(glob(os.path.join(diffusion_path, 'configs', '*-project.yaml')))[-1]
- self.diffusion_config = OmegaConf.load(diffusion_config).model
- self.diffusion_config.params.ckpt_path = diffusion_ckpt_path
- self.load_diffusion()
-
- self.monitor = monitor
- self.numd = self.diffusion_model.first_stage_model.encoder.num_resolutions - 1
- self.log_time_interval = self.diffusion_model.num_timesteps // log_steps
- self.log_steps = log_steps
-
- self.label_key = label_key if not hasattr(self.diffusion_model, 'cond_stage_key') \
- else self.diffusion_model.cond_stage_key
-
- assert self.label_key is not None, 'label_key neither in diffusion model nor in model.params'
-
- if self.label_key not in __models__:
- raise NotImplementedError()
-
- self.load_classifier(ckpt_path, pool)
-
- self.scheduler_config = scheduler_config
- self.use_scheduler = self.scheduler_config is not None
- self.weight_decay = weight_decay
-
- def init_from_ckpt(self, path, ignore_keys=list(), only_model=False):
- sd = torch.load(path, map_location="cpu")
- if "state_dict" in list(sd.keys()):
- sd = sd["state_dict"]
- keys = list(sd.keys())
- for k in keys:
- for ik in ignore_keys:
- if k.startswith(ik):
- print("Deleting key {} from state_dict.".format(k))
- del sd[k]
- missing, unexpected = self.load_state_dict(sd, strict=False) if not only_model else self.model.load_state_dict(
- sd, strict=False)
- print(f"Restored from {path} with {len(missing)} missing and {len(unexpected)} unexpected keys")
- if len(missing) > 0:
- print(f"Missing Keys: {missing}")
- if len(unexpected) > 0:
- print(f"Unexpected Keys: {unexpected}")
-
- def load_diffusion(self):
- model = instantiate_from_config(self.diffusion_config)
- self.diffusion_model = model.eval()
- self.diffusion_model.train = disabled_train
- for param in self.diffusion_model.parameters():
- param.requires_grad = False
-
- def load_classifier(self, ckpt_path, pool):
- model_config = deepcopy(self.diffusion_config.params.unet_config.params)
- model_config.in_channels = self.diffusion_config.params.unet_config.params.out_channels
- model_config.out_channels = self.num_classes
- if self.label_key == 'class_label':
- model_config.pool = pool
-
- self.model = __models__[self.label_key](**model_config)
- if ckpt_path is not None:
- print('#####################################################################')
- print(f'load from ckpt "{ckpt_path}"')
- print('#####################################################################')
- self.init_from_ckpt(ckpt_path)
-
- @torch.no_grad()
- def get_x_noisy(self, x, t, noise=None):
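- # Diffuse the clean latent x to timestep t using the frozen model's q(x_t | x_0).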
- noise = default(noise, lambda: torch.randn_like(x))
- continuous_sqrt_alpha_cumprod = None
- if self.diffusion_model.use_continuous_noise:
- continuous_sqrt_alpha_cumprod = self.diffusion_model.sample_continuous_noise_level(x.shape[0], t + 1)
- # todo: make sure t+1 is correct here
-
- return self.diffusion_model.q_sample(x_start=x, t=t, noise=noise,
- continuous_sqrt_alpha_cumprod=continuous_sqrt_alpha_cumprod)
-
- def forward(self, x_noisy, t, *args, **kwargs):
- return self.model(x_noisy, t)
-
- @torch.no_grad()
- def get_input(self, batch, k):
- x = batch[k]
- if len(x.shape) == 3:
- x = x[..., None]
- x = rearrange(x, 'b h w c -> b c h w')
- x = x.to(memory_format=torch.contiguous_format).float()
- return x
-
- @torch.no_grad()
- def get_conditioning(self, batch, k=None):
- if k is None:
- k = self.label_key
- assert k is not None, 'Needs to provide label key'
-
- targets = batch[k].to(self.device)
-
- if self.label_key == 'segmentation':
- targets = rearrange(targets, 'b h w c -> b c h w')
- for down in range(self.numd):
- h, w = targets.shape[-2:]
- targets = F.interpolate(targets, size=(h // 2, w // 2), mode='nearest')
-
- # targets = rearrange(targets,'b c h w -> b h w c')
-
- return targets
-
- def compute_top_k(self, logits, labels, k, reduction="mean"):
- _, top_ks = torch.topk(logits, k, dim=1)
- if reduction == "mean":
- return (top_ks == labels[:, None]).float().sum(dim=-1).mean().item()
- elif reduction == "none":
- return (top_ks == labels[:, None]).float().sum(dim=-1)
-
- def on_train_epoch_start(self):
- # save some memory
- self.diffusion_model.model.to('cpu')
-
- @torch.no_grad()
- def write_logs(self, loss, logits, targets):
- log_prefix = 'train' if self.training else 'val'
- log = {}
- log[f"{log_prefix}/loss"] = loss.mean()
- log[f"{log_prefix}/acc@1"] = self.compute_top_k(
- logits, targets, k=1, reduction="mean"
- )
- log[f"{log_prefix}/acc@5"] = self.compute_top_k(
- logits, targets, k=5, reduction="mean"
- )
-
- self.log_dict(log, prog_bar=False, logger=True, on_step=self.training, on_epoch=True)
- self.log('loss', log[f"{log_prefix}/loss"], prog_bar=True, logger=False)
- self.log('global_step', self.global_step, logger=False, on_epoch=False, prog_bar=True)
- lr = self.optimizers().param_groups[0]['lr']
- self.log('lr_abs', lr, on_step=True, logger=True, on_epoch=False, prog_bar=True)
-
- def shared_step(self, batch, t=None):
- x, *_ = self.diffusion_model.get_input(batch, k=self.diffusion_model.first_stage_key)
- targets = self.get_conditioning(batch)
- if targets.dim() == 4:
- targets = targets.argmax(dim=1)
- if t is None:
- t = torch.randint(0, self.diffusion_model.num_timesteps, (x.shape[0],), device=self.device).long()
- else:
- t = torch.full(size=(x.shape[0],), fill_value=t, device=self.device).long()
- x_noisy = self.get_x_noisy(x, t)
- logits = self(x_noisy, t)
-
- loss = F.cross_entropy(logits, targets, reduction='none')
-
- self.write_logs(loss.detach(), logits.detach(), targets.detach())
-
- loss = loss.mean()
- return loss, logits, x_noisy, targets
-
- def training_step(self, batch, batch_idx):
- loss, *_ = self.shared_step(batch)
- return loss
-
- def reset_noise_accs(self):
- self.noisy_acc = {t: {'acc@1': [], 'acc@5': []} for t in
- range(0, self.diffusion_model.num_timesteps, self.diffusion_model.log_every_t)}
-
- def on_validation_start(self):
- self.reset_noise_accs()
-
- @torch.no_grad()
- def validation_step(self, batch, batch_idx):
- loss, *_ = self.shared_step(batch)
-
- for t in self.noisy_acc:
- _, logits, _, targets = self.shared_step(batch, t)
- self.noisy_acc[t]['acc@1'].append(self.compute_top_k(logits, targets, k=1, reduction='mean'))
- self.noisy_acc[t]['acc@5'].append(self.compute_top_k(logits, targets, k=5, reduction='mean'))
-
- return loss
-
- def configure_optimizers(self):
- optimizer = AdamW(self.model.parameters(), lr=self.learning_rate, weight_decay=self.weight_decay)
-
- if self.use_scheduler:
- scheduler = instantiate_from_config(self.scheduler_config)
-
- print("Setting up LambdaLR scheduler...")
- scheduler = [
- {
- 'scheduler': LambdaLR(optimizer, lr_lambda=scheduler.schedule),
- 'interval': 'step',
- 'frequency': 1
- }]
- return [optimizer], scheduler
-
- return optimizer
-
- @torch.no_grad()
- def log_images(self, batch, N=8, *args, **kwargs):
- log = dict()
- x = self.get_input(batch, self.diffusion_model.first_stage_key)
- log['inputs'] = x
-
- y = self.get_conditioning(batch)
-
- if self.label_key == 'class_label':
- y = log_txt_as_img((x.shape[2], x.shape[3]), batch["human_label"])
- log['labels'] = y
-
- if ismap(y):
- log['labels'] = self.diffusion_model.to_rgb(y)
-
- for step in range(self.log_steps):
- current_time = step * self.log_time_interval
-
- _, logits, x_noisy, _ = self.shared_step(batch, t=current_time)
-
- log[f'inputs@t{current_time}'] = x_noisy
-
- pred = F.one_hot(logits.argmax(dim=1), num_classes=self.num_classes)
- pred = rearrange(pred, 'b h w c -> b c h w')
-
- log[f'pred@t{current_time}'] = self.diffusion_model.to_rgb(pred)
-
- for key in log:
- log[key] = log[key][:N]
-
- return log
diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/utils/commons/meters.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/utils/commons/meters.py
deleted file mode 100644
index 1b5716547cefd33fb68ab99dc2a6a70f55336625..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/utils/commons/meters.py
+++ /dev/null
@@ -1,42 +0,0 @@
-import time
-import torch
-
-
-class AvgrageMeter(object):
-
- def __init__(self):
- self.reset()
-
- def reset(self):
- self.avg = 0
- self.sum = 0
- self.cnt = 0
-
- def update(self, val, n=1):
- self.sum += val * n
- self.cnt += n
- self.avg = self.sum / self.cnt
-
-
-class Timer:
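- """Context manager that accumulates wall-clock time per name across all uses."""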
- timer_map = {}
-
- def __init__(self, name, enable=False):
- if name not in Timer.timer_map:
- Timer.timer_map[name] = 0
- self.name = name
- self.enable = enable
-
- def __enter__(self):
- if self.enable:
- # if torch.cuda.is_available():
- # torch.cuda.synchronize()
- self.t = time.time()
-
- def __exit__(self, exc_type, exc_val, exc_tb):
- if self.enable:
- # if torch.cuda.is_available():
- # torch.cuda.synchronize()
- Timer.timer_map[self.name] += time.time() - self.t
- print(f'[Timer] {self.name}: {Timer.timer_map[self.name]}')
diff --git a/spaces/AIZero2HeroBootcamp/ExperimentalChatGPTv1/README.md b/spaces/AIZero2HeroBootcamp/ExperimentalChatGPTv1/README.md
deleted file mode 100644
index 05244cc50f50d08eda7dc9508b81f64a4fd32842..0000000000000000000000000000000000000000
--- a/spaces/AIZero2HeroBootcamp/ExperimentalChatGPTv1/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: ExperimentalChatGPTv1
-emoji: 🔥
-colorFrom: pink
-colorTo: blue
-sdk: streamlit
-sdk_version: 1.25.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Abhi5ingh/fashionsd/README.md b/spaces/Abhi5ingh/fashionsd/README.md
deleted file mode 100644
index 0efa321f63e9e9a80799199f514651cf1be019c1..0000000000000000000000000000000000000000
--- a/spaces/Abhi5ingh/fashionsd/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Fashionsd
-emoji: 👀
-colorFrom: green
-colorTo: indigo
-sdk: streamlit
-sdk_version: 1.28.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/ChatBase.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/ChatBase.py
deleted file mode 100644
index b98fe56595a161bb5cfbcc7871ff94845edb3b3a..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/ChatBase.py
+++ /dev/null
@@ -1,62 +0,0 @@
-from __future__ import annotations
-
-from aiohttp import ClientSession
-
-from ..typing import AsyncGenerator
-from .base_provider import AsyncGeneratorProvider
-
-
-class ChatBase(AsyncGeneratorProvider):
- url = "https://www.chatbase.co"
- supports_gpt_35_turbo = True
- supports_gpt_4 = True
- working = True
-
- @classmethod
- async def create_async_generator(
- cls,
- model: str,
- messages: list[dict[str, str]],
- **kwargs
- ) -> AsyncGenerator:
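- # Map the model name onto one of the hard-coded public Chatbase chat ids.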
- if model == "gpt-4":
- chat_id = "quran---tafseer-saadi-pdf-wbgknt7zn"
- elif model == "gpt-3.5-turbo" or not model:
- chat_id = "chatbase--1--pdf-p680fxvnm"
- else:
- raise ValueError(f"Model are not supported: {model}")
- headers = {
- "User-Agent" : "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36",
- "Accept" : "*/*",
- "Accept-language" : "en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3",
- "Origin" : cls.url,
- "Referer" : cls.url + "/",
- "Sec-Fetch-Dest" : "empty",
- "Sec-Fetch-Mode" : "cors",
- "Sec-Fetch-Site" : "same-origin",
- }
- async with ClientSession(
- headers=headers
- ) as session:
- data = {
- "messages": messages,
- "captchaCode": "hadsa",
- "chatId": chat_id,
- "conversationId": f"kcXpqEnqUie3dnJlsRi_O-{chat_id}"
- }
- async with session.post("https://www.chatbase.co/api/fe/chat", json=data) as response:
- response.raise_for_status()
- async for stream in response.content.iter_any():
- yield stream.decode()
-
-
- @classmethod
- @property
- def params(cls):
- params = [
- ("model", "str"),
- ("messages", "list[dict[str, str]]"),
- ("stream", "bool"),
- ]
- param = ", ".join([": ".join(p) for p in params])
- return f"g4f.provider.{cls.__name__} supports: ({param})"
\ No newline at end of file
diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/unfinished/MikuChat.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/unfinished/MikuChat.py
deleted file mode 100644
index bf19631f4b59d39fa1eebe9ca2c0bce8d0a19982..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/unfinished/MikuChat.py
+++ /dev/null
@@ -1,97 +0,0 @@
-from __future__ import annotations
-
-import random, json
-from datetime import datetime
-from ...requests import StreamSession
-
-from ...typing import AsyncGenerator
-from ..base_provider import AsyncGeneratorProvider
-
-
-class MikuChat(AsyncGeneratorProvider):
- url = "https://ai.okmiku.com"
- supports_gpt_35_turbo = True
-
- @classmethod
- async def create_async_generator(
- cls,
- model: str,
- messages: list[dict[str, str]],
- **kwargs
- ) -> AsyncGenerator:
- if not model:
- model = "gpt-3.5-turbo"
- headers = {
- "authority": "api.catgpt.cc",
- "accept": "application/json",
- "origin": cls.url,
- "referer": f"{cls.url}/chat/",
- 'x-app-version': 'undefined',
- 'x-date': get_datetime(),
- 'x-fingerprint': get_fingerprint(),
- 'x-platform': 'web'
- }
- async with StreamSession(headers=headers, impersonate="chrome107") as session:
- data = {
- "model": model,
- "top_p": 0.8,
- "temperature": 0.5,
- "presence_penalty": 1,
- "frequency_penalty": 0,
- "max_tokens": 2000,
- "stream": True,
- "messages": messages,
- }
- async with session.post("https://api.catgpt.cc/ai/v1/chat/completions", json=data) as response:
- print(await response.text())
- response.raise_for_status()
- async for line in response.iter_lines():
- if line.startswith(b"data: "):
- line = json.loads(line[6:])
- chunk = line["choices"][0]["delta"].get("content")
- if chunk:
- yield chunk
-
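-# MurmurHash3-style 32-bit string hash (e is the input, t the seed); used by
-# get_fingerprint() below to build the x-fingerprint request header.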
-def k(e: str, t: int):
- a = len(e) & 3
- s = len(e) - a
- i = t
- c = 3432918353
- o = 461845907
- n = 0
- r = 0
- while n < s:
- r = (ord(e[n]) & 255) | ((ord(e[n + 1]) & 255) << 8) | ((ord(e[n + 2]) & 255) << 16) | ((ord(e[n + 3]) & 255) << 24)
- n += 4
- r = (r & 65535) * c + (((r >> 16) * c & 65535) << 16) & 4294967295
- r = (r << 15) | (r >> 17)
- r = (r & 65535) * o + (((r >> 16) * o & 65535) << 16) & 4294967295
- i ^= r
- i = (i << 13) | (i >> 19)
- l = (i & 65535) * 5 + (((i >> 16) * 5 & 65535) << 16) & 4294967295
- i = (l & 65535) + 27492 + (((l >> 16) + 58964 & 65535) << 16)
-
- if a == 3:
- r ^= (ord(e[n + 2]) & 255) << 16
- elif a == 2:
- r ^= (ord(e[n + 1]) & 255) << 8
- elif a == 1:
- r ^= ord(e[n]) & 255
- r = (r & 65535) * c + (((r >> 16) * c & 65535) << 16) & 4294967295
- r = (r << 15) | (r >> 17)
- r = (r & 65535) * o + (((r >> 16) * o & 65535) << 16) & 4294967295
- i ^= r
-
- i ^= len(e)
- i ^= i >> 16
- i = (i & 65535) * 2246822507 + (((i >> 16) * 2246822507 & 65535) << 16) & 4294967295
- i ^= i >> 13
- i = (i & 65535) * 3266489909 + (((i >> 16) * 3266489909 & 65535) << 16) & 4294967295
- i ^= i >> 16
- return i & 0xFFFFFFFF
-
-def get_fingerprint() -> str:
- return str(k(str(int(random.random() * 100000)), 256))
-
-def get_datetime() -> str:
- return datetime.now().strftime("%Y-%m-%d %H:%M:%S")
\ No newline at end of file
diff --git a/spaces/Adapter/CoAdapter/t2i_adapters/t2i_adapters_for_style.py b/spaces/Adapter/CoAdapter/t2i_adapters/t2i_adapters_for_style.py
deleted file mode 100644
index e8c98efa7795a7943fb7e7f14f5206971c5c5eac..0000000000000000000000000000000000000000
--- a/spaces/Adapter/CoAdapter/t2i_adapters/t2i_adapters_for_style.py
+++ /dev/null
@@ -1,63 +0,0 @@
-import torch
-import numpy as np
-from transformers import CLIPVisionModel
-
-from ldm.models.diffusion.ddpm import LatentDiffusion, disabled_train
-from ldm.util import instantiate_from_config
-
-
-class T2IAdapterStyleV3(LatentDiffusion):
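- """Latent diffusion with a style T2I-Adapter: CLIP-ViT embeddings of the
- style image are mapped to extra context tokens appended to the text
- conditioning during training."""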
-
- def __init__(self, adapter_config, extra_cond_key, noise_schedule, *args, **kwargs):
- super(T2IAdapterStyleV3, self).__init__(*args, **kwargs)
- self.adapter = instantiate_from_config(adapter_config)
- self.extra_cond_key = extra_cond_key
- self.noise_schedule = noise_schedule
- self.clip_vision_model = CLIPVisionModel.from_pretrained(
- 'openai/clip-vit-large-patch14'
- )
-
- self.clip_vision_model = self.clip_vision_model.eval()
- self.clip_vision_model.train = disabled_train
- for param in self.clip_vision_model.parameters():
- param.requires_grad = False
-
-
- def shared_step(self, batch, **kwargs):
- for k in self.ucg_training:
- if k == self.extra_cond_key:
- continue
- p = self.ucg_training[k]
- for i in range(len(batch[k])):
- if self.ucg_prng.choice(2, p=[1 - p, p]):
- if isinstance(batch[k], list):
- batch[k][i] = ""
- batch['jpg'] = batch['jpg'] * 2 - 1
- x, c = self.get_input(batch, self.first_stage_key)
- extra_cond = super(LatentDiffusion, self).get_input(batch, self.extra_cond_key).to(self.device)
- extra_cond = self.clip_vision_model(extra_cond)['last_hidden_state']
- features_adapter = self.adapter(extra_cond)
- if self.extra_cond_key in self.ucg_training:
- idx = np.random.choice(self.adapter.num_token, np.random.randint(1, self.adapter.num_token+1), replace=False)
- idx_tensor = torch.from_numpy(idx).to(features_adapter.device)
- features_adapter = torch.index_select(features_adapter, 1, idx_tensor)
- t = self.get_time_with_schedule(self.noise_schedule, x.size(0))
- loss, loss_dict = self(x, c, t=t, append_to_context=features_adapter)
- return loss, loss_dict
-
- def configure_optimizers(self):
- lr = self.learning_rate
- params = list(self.adapter.parameters())
- opt = torch.optim.AdamW(params, lr=lr)
- return opt
-
- def on_save_checkpoint(self, checkpoint):
- keys = list(checkpoint['state_dict'].keys())
- for key in keys:
- if 'adapter' not in key:
- del checkpoint['state_dict'][key]
-
- def on_load_checkpoint(self, checkpoint):
- for name in self.state_dict():
- if 'adapter' not in name:
- checkpoint['state_dict'][name] = self.state_dict()[name]
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/pages/Pages.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/pages/Pages.js
deleted file mode 100644
index fa56dce6f1c2c041dd92acb512cbf0671759c9bc..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/pages/Pages.js
+++ /dev/null
@@ -1,65 +0,0 @@
-import OverlapSizer from '../overlapsizer/OverlapSizer.js';
-import Methods from './methods/Methods.js';
-
-const GetValue = Phaser.Utils.Objects.GetValue;
-
-class Pages extends OverlapSizer {
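- // Stacks its child pages on top of each other (via OverlapSizer) and
- // shows one at a time; a swapped-out page is hidden or destroyed
- // depending on swapMode.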
- constructor(scene, config) {
- super(scene, config);
- this.type = 'rexPages';
- this.childrenMap = this.sizerChildren;
- this._previousKey = undefined;
- this._currentKey = undefined;
- this.setSwapMode(GetValue(config, 'swapMode', 0));
- this.setFadeInDuration(GetValue(config, 'fadeIn', 0));
- }
-
- setSwapMode(mode) {
- if (typeof (mode) === 'string') {
- mode = SWAPMODE[mode];
- }
- this.swapMode = mode;
- return this;
- }
-
- setFadeInDuration(duration) {
- this.fadeInDuration = duration;
- return this;
- }
-
- get previousKey() {
- return this._previousKey;
- }
-
- get currentKey() {
- return this._currentKey;
- }
-
- set currentKey(key) {
- this.swapPage(key);
- }
-
- get currentPage() {
- return this.getPage(this.currentKey);
- }
-
- get previousPage() {
- return this.getPage(this.previousKey);
- }
-
- get keys() {
- return Object.keys(this.sizerChildren);
- }
-}
-
-Object.assign(
- Pages.prototype,
- Methods
-);
-
-const SWAPMODE = {
- invisible: 0,
- destroy: 1
-};
-
-export default Pages;
\ No newline at end of file
diff --git a/spaces/Akmyradov/TurkmenTTSweSTT/vits/transforms.py b/spaces/Akmyradov/TurkmenTTSweSTT/vits/transforms.py
deleted file mode 100644
index 4793d67ca5a5630e0ffe0f9fb29445c949e64dae..0000000000000000000000000000000000000000
--- a/spaces/Akmyradov/TurkmenTTSweSTT/vits/transforms.py
+++ /dev/null
@@ -1,193 +0,0 @@
-import torch
-from torch.nn import functional as F
-
-import numpy as np
-
-
-DEFAULT_MIN_BIN_WIDTH = 1e-3
-DEFAULT_MIN_BIN_HEIGHT = 1e-3
-DEFAULT_MIN_DERIVATIVE = 1e-3
-
-
-def piecewise_rational_quadratic_transform(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails=None,
- tail_bound=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
-
- if tails is None:
- spline_fn = rational_quadratic_spline
- spline_kwargs = {}
- else:
- spline_fn = unconstrained_rational_quadratic_spline
- spline_kwargs = {
- 'tails': tails,
- 'tail_bound': tail_bound
- }
-
- outputs, logabsdet = spline_fn(
- inputs=inputs,
- unnormalized_widths=unnormalized_widths,
- unnormalized_heights=unnormalized_heights,
- unnormalized_derivatives=unnormalized_derivatives,
- inverse=inverse,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative,
- **spline_kwargs
- )
- return outputs, logabsdet
-
-
-def searchsorted(bin_locations, inputs, eps=1e-6):
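- # Bin index for each input; the last edge is nudged by eps so boundary values stay in range.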
- bin_locations[..., -1] += eps
- return torch.sum(
- inputs[..., None] >= bin_locations,
- dim=-1
- ) - 1
-
-
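-# Identity ("linear") tails outside [-tail_bound, tail_bound]; inside the
-# interval the transform defers to rational_quadratic_spline below.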
-def unconstrained_rational_quadratic_spline(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails='linear',
- tail_bound=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
- inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound)
- outside_interval_mask = ~inside_interval_mask
-
- outputs = torch.zeros_like(inputs)
- logabsdet = torch.zeros_like(inputs)
-
- if tails == 'linear':
- unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1))
- constant = np.log(np.exp(1 - min_derivative) - 1)
- unnormalized_derivatives[..., 0] = constant
- unnormalized_derivatives[..., -1] = constant
-
- outputs[outside_interval_mask] = inputs[outside_interval_mask]
- logabsdet[outside_interval_mask] = 0
- else:
- raise RuntimeError('{} tails are not implemented.'.format(tails))
-
- outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline(
- inputs=inputs[inside_interval_mask],
- unnormalized_widths=unnormalized_widths[inside_interval_mask, :],
- unnormalized_heights=unnormalized_heights[inside_interval_mask, :],
- unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :],
- inverse=inverse,
- left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative
- )
-
- return outputs, logabsdet
-
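-# Monotonic rational-quadratic spline (Durkan et al., "Neural Spline Flows"):
-# maps [left, right] onto [bottom, top] bin by bin and returns the outputs
-# together with the log-abs-determinant of the Jacobian.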
-def rational_quadratic_spline(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- left=0., right=1., bottom=0., top=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
- if torch.min(inputs) < left or torch.max(inputs) > right:
- raise ValueError('Input to a transform is not within its domain')
-
- num_bins = unnormalized_widths.shape[-1]
-
- if min_bin_width * num_bins > 1.0:
- raise ValueError('Minimal bin width too large for the number of bins')
- if min_bin_height * num_bins > 1.0:
- raise ValueError('Minimal bin height too large for the number of bins')
-
- widths = F.softmax(unnormalized_widths, dim=-1)
- widths = min_bin_width + (1 - min_bin_width * num_bins) * widths
- cumwidths = torch.cumsum(widths, dim=-1)
- cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0)
- cumwidths = (right - left) * cumwidths + left
- cumwidths[..., 0] = left
- cumwidths[..., -1] = right
- widths = cumwidths[..., 1:] - cumwidths[..., :-1]
-
- derivatives = min_derivative + F.softplus(unnormalized_derivatives)
-
- heights = F.softmax(unnormalized_heights, dim=-1)
- heights = min_bin_height + (1 - min_bin_height * num_bins) * heights
- cumheights = torch.cumsum(heights, dim=-1)
- cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0)
- cumheights = (top - bottom) * cumheights + bottom
- cumheights[..., 0] = bottom
- cumheights[..., -1] = top
- heights = cumheights[..., 1:] - cumheights[..., :-1]
-
- if inverse:
- bin_idx = searchsorted(cumheights, inputs)[..., None]
- else:
- bin_idx = searchsorted(cumwidths, inputs)[..., None]
-
- input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0]
- input_bin_widths = widths.gather(-1, bin_idx)[..., 0]
-
- input_cumheights = cumheights.gather(-1, bin_idx)[..., 0]
- delta = heights / widths
- input_delta = delta.gather(-1, bin_idx)[..., 0]
-
- input_derivatives = derivatives.gather(-1, bin_idx)[..., 0]
- input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0]
-
- input_heights = heights.gather(-1, bin_idx)[..., 0]
-
- if inverse:
- a = (((inputs - input_cumheights) * (input_derivatives
- + input_derivatives_plus_one
- - 2 * input_delta)
- + input_heights * (input_delta - input_derivatives)))
- b = (input_heights * input_derivatives
- - (inputs - input_cumheights) * (input_derivatives
- + input_derivatives_plus_one
- - 2 * input_delta))
- c = - input_delta * (inputs - input_cumheights)
-
- discriminant = b.pow(2) - 4 * a * c
- assert (discriminant >= 0).all()
-
- root = (2 * c) / (-b - torch.sqrt(discriminant))
- outputs = root * input_bin_widths + input_cumwidths
-
- theta_one_minus_theta = root * (1 - root)
- denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta)
- derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - root).pow(2))
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, -logabsdet
- else:
- theta = (inputs - input_cumwidths) / input_bin_widths
- theta_one_minus_theta = theta * (1 - theta)
-
- numerator = input_heights * (input_delta * theta.pow(2)
- + input_derivatives * theta_one_minus_theta)
- denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta)
- outputs = input_cumheights + numerator / denominator
-
- derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - theta).pow(2))
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, logabsdet
diff --git a/spaces/AlexWang/lama/fetch_data/places_standard_test_val_prepare.sh b/spaces/AlexWang/lama/fetch_data/places_standard_test_val_prepare.sh
deleted file mode 100644
index 6017e29aa1593c1c66affa4b9081afac2b9fb000..0000000000000000000000000000000000000000
--- a/spaces/AlexWang/lama/fetch_data/places_standard_test_val_prepare.sh
+++ /dev/null
@@ -1,5 +0,0 @@
-mkdir -p places_standard_dataset/original/test/
-tar -xvf test_large.tar --transform='s/.*\///' -C places_standard_dataset/original/test/
-
-mkdir -p places_standard_dataset/original/val/
-tar -xvf val_large.tar --transform='s/.*\///' -C places_standard_dataset/original/val/
diff --git a/spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/src/dataset/base_dataset.py b/spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/src/dataset/base_dataset.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/paint_by_example/image_encoder.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/paint_by_example/image_encoder.py
deleted file mode 100644
index 831489eefed167264c8fd8f57e1ed59610ebb858..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/paint_by_example/image_encoder.py
+++ /dev/null
@@ -1,67 +0,0 @@
-# Copyright 2023 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-import torch
-from torch import nn
-from transformers import CLIPPreTrainedModel, CLIPVisionModel
-
-from ...models.attention import BasicTransformerBlock
-from ...utils import logging
-
-
-logger = logging.get_logger(__name__) # pylint: disable=invalid-name
-
-
-class PaintByExampleImageEncoder(CLIPPreTrainedModel):
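- """CLIP vision tower plus a small transformer mapper and linear head
- producing conditioning embeddings, with a learned unconditional vector
- for classifier-free guidance."""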
- def __init__(self, config, proj_size=768):
- super().__init__(config)
- self.proj_size = proj_size
-
- self.model = CLIPVisionModel(config)
- self.mapper = PaintByExampleMapper(config)
- self.final_layer_norm = nn.LayerNorm(config.hidden_size)
- self.proj_out = nn.Linear(config.hidden_size, self.proj_size)
-
- # uncondition for scaling
- self.uncond_vector = nn.Parameter(torch.randn((1, 1, self.proj_size)))
-
- def forward(self, pixel_values, return_uncond_vector=False):
- clip_output = self.model(pixel_values=pixel_values)
- latent_states = clip_output.pooler_output
- latent_states = self.mapper(latent_states[:, None])
- latent_states = self.final_layer_norm(latent_states)
- latent_states = self.proj_out(latent_states)
- if return_uncond_vector:
- return latent_states, self.uncond_vector
-
- return latent_states
-
-
-class PaintByExampleMapper(nn.Module):
- def __init__(self, config):
- super().__init__()
- num_layers = (config.num_hidden_layers + 1) // 5
- hid_size = config.hidden_size
- num_heads = 1
- self.blocks = nn.ModuleList(
- [
- BasicTransformerBlock(hid_size, num_heads, hid_size, activation_fn="gelu", attention_bias=True)
- for _ in range(num_layers)
- ]
- )
-
- def forward(self, hidden_states):
- for block in self.blocks:
- hidden_states = block(hidden_states)
-
- return hidden_states
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/gcnet/mask_rcnn_r101_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/gcnet/mask_rcnn_r101_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco.py
deleted file mode 100644
index 781dba78d68e77fa7eee15f5bbcc539731f8378d..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/gcnet/mask_rcnn_r101_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco.py
+++ /dev/null
@@ -1,11 +0,0 @@
-_base_ = '../mask_rcnn/mask_rcnn_r101_fpn_1x_coco.py'
-model = dict(
- backbone=dict(
- norm_cfg=dict(type='SyncBN', requires_grad=True),
- norm_eval=False,
- plugins=[
- dict(
- cfg=dict(type='ContextBlock', ratio=1. / 16),
- stages=(False, True, True, True),
- position='after_conv3')
- ]))
diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/match_costs/__init__.py b/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/match_costs/__init__.py
deleted file mode 100644
index add5e0d394034d89b2d47c314ff1938294deb6ea..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/match_costs/__init__.py
+++ /dev/null
@@ -1,7 +0,0 @@
-from .builder import build_match_cost
-from .match_cost import BBoxL1Cost, ClassificationCost, FocalLossCost, IoUCost
-
-__all__ = [
- 'build_match_cost', 'ClassificationCost', 'BBoxL1Cost', 'IoUCost',
- 'FocalLossCost'
-]
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/dmnet/dmnet_r50-d8_512x1024_40k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/dmnet/dmnet_r50-d8_512x1024_40k_cityscapes.py
deleted file mode 100644
index 1f9a917fa4223bd2428f2b2d10eac446f7ecc71a..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/dmnet/dmnet_r50-d8_512x1024_40k_cityscapes.py
+++ /dev/null
@@ -1,4 +0,0 @@
-_base_ = [
- '../_base_/models/dmnet_r50-d8.py', '../_base_/datasets/cityscapes.py',
- '../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py'
-]
diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/docs/WSL-installation-guide.md b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/docs/WSL-installation-guide.md
deleted file mode 100644
index 30b7fa3e6f4613898fbb0d0bd16b77db5d79c14b..0000000000000000000000000000000000000000
--- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/docs/WSL-installation-guide.md
+++ /dev/null
@@ -1,82 +0,0 @@
-Guide created by [@jfryton](https://github.com/jfryton). Thank you jfryton.
-
------
-
-Here's an easy-to-follow, step-by-step guide for installing Windows Subsystem for Linux (WSL) with Ubuntu on Windows 10/11:
-
-## Step 1: Enable WSL
-
-1. Press the Windows key + X and click on "Windows PowerShell (Admin)" or "Windows Terminal (Admin)" to open PowerShell or Terminal with administrator privileges.
-2. In the PowerShell window, type the following command and press Enter:
-
-```
-wsl --install
-```
-
-If this command doesn't work, you can set the default WSL version manually instead. For Windows 10:
-
-```
-wsl --set-default-version 1
-```
-
-For Windows 11, you can use:
-
-```
-wsl --set-default-version 2
-```
-
-You may be prompted to restart your computer. If so, save your work and restart.
-
-## Step 2: Install Ubuntu
-
-1. Open the Microsoft Store.
-2. Search for "Ubuntu" in the search bar.
-3. Choose the desired Ubuntu version (e.g., Ubuntu 20.04 LTS) and click "Get" or "Install" to download and install the Ubuntu app.
-4. Once the installation is complete, click "Launch" or search for "Ubuntu" in the Start menu and open the app.
-
-## Step 3: Set up Ubuntu
-
-1. When you first launch the Ubuntu app, it will take a few minutes to set up. Be patient as it installs the necessary files and sets up your environment.
-2. Once the setup is complete, you will be prompted to create a new UNIX username and password. Choose a username and password, and make sure to remember them, as you will need them for future administrative tasks within the Ubuntu environment.
-
-## Step 4: Update and upgrade packages
-
-1. After setting up your username and password, it's a good idea to update and upgrade your Ubuntu system. Run the following commands in the Ubuntu terminal:
-
-```
-sudo apt update
-sudo apt upgrade
-```
-
-2. Enter your password when prompted. This will update the package list and upgrade any outdated packages.
-
-Congratulations! You have now installed WSL with Ubuntu on your Windows 10/11 system. You can use the Ubuntu terminal for various tasks, like running Linux commands, installing packages, or managing files.
-
-You can launch your WSL Ubuntu installation by selecting the Ubuntu app (like any other program installed on your computer) or typing 'ubuntu' into PowerShell or Terminal.
-
-## Step 5: Proceed with Linux instructions
-
-1. You can now follow the Linux setup instructions. If you receive any error messages about a missing tool or package, just install them using apt:
-
-```
-sudo apt install [missing package]
-```
-
-You will probably need to install build-essential:
-
-```
-sudo apt install build-essential
-```
-
-If you face any issues or need to troubleshoot, you can always refer to the official Microsoft documentation for WSL: https://docs.microsoft.com/en-us/windows/wsl/
-
-#### WSL2 performance using /mnt:
-When you git clone a repository, put it inside the WSL filesystem rather than under /mnt. To understand why, take a look at this [issue](https://github.com/microsoft/WSL/issues/4197#issuecomment-604592340).
-
-## Bonus: Port Forwarding
-
-By default, you won't be able to access the webui from another device on your local network. You will need to set up the appropriate port forwarding using the following command (using PowerShell or Terminal with administrator privileges).
-
-```
-netsh interface portproxy add v4tov4 listenaddress=0.0.0.0 listenport=7860 connectaddress=localhost connectport=7860
-```
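-
-To remove this rule later, you can delete it again (a sketch, assuming you kept the default port 7860):
-
-```
-netsh interface portproxy delete v4tov4 listenaddress=0.0.0.0 listenport=7860
-```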
diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/ui_file_saving.py b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/ui_file_saving.py
deleted file mode 100644
index 7357ac85d45868794c9950ca0e970ac82cf486d6..0000000000000000000000000000000000000000
--- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/ui_file_saving.py
+++ /dev/null
@@ -1,88 +0,0 @@
-import gradio as gr
-
-from modules import chat, presets, shared, ui, utils
-from modules.utils import gradio
-
-
-def create_ui():
- mu = shared.args.multi_user
-
- # Text file saver
- with gr.Box(visible=False, elem_classes='file-saver') as shared.gradio['file_saver']:
- shared.gradio['save_filename'] = gr.Textbox(lines=1, label='File name')
- shared.gradio['save_root'] = gr.Textbox(lines=1, label='File folder', info='For reference. Unchangeable.', interactive=False)
- shared.gradio['save_contents'] = gr.Textbox(lines=10, label='File contents')
- with gr.Row():
- shared.gradio['save_confirm'] = gr.Button('Save', elem_classes="small-button", interactive=not mu)
- shared.gradio['save_cancel'] = gr.Button('Cancel', elem_classes="small-button")
-
- # Text file deleter
- with gr.Box(visible=False, elem_classes='file-saver') as shared.gradio['file_deleter']:
- shared.gradio['delete_filename'] = gr.Textbox(lines=1, label='File name')
- shared.gradio['delete_root'] = gr.Textbox(lines=1, label='File folder', info='For reference. Unchangeable.', interactive=False)
- with gr.Row():
- shared.gradio['delete_confirm'] = gr.Button('Delete', elem_classes="small-button", variant='stop', interactive=not mu)
- shared.gradio['delete_cancel'] = gr.Button('Cancel', elem_classes="small-button")
-
- # Character saver/deleter
- with gr.Box(visible=False, elem_classes='file-saver') as shared.gradio['character_saver']:
- shared.gradio['save_character_filename'] = gr.Textbox(lines=1, label='File name', info='The character will be saved to your characters/ folder with this base filename.')
- with gr.Row():
- shared.gradio['save_character_confirm'] = gr.Button('Save', elem_classes="small-button", interactive=not mu)
- shared.gradio['save_character_cancel'] = gr.Button('Cancel', elem_classes="small-button")
-
- with gr.Box(visible=False, elem_classes='file-saver') as shared.gradio['character_deleter']:
- gr.Markdown('Confirm the character deletion?')
- with gr.Row():
- shared.gradio['delete_character_confirm'] = gr.Button('Delete', elem_classes="small-button", variant='stop', interactive=not mu)
- shared.gradio['delete_character_cancel'] = gr.Button('Cancel', elem_classes="small-button")
-
-
-def create_event_handlers():
- shared.gradio['save_confirm'].click(
- lambda x, y, z: utils.save_file(x + y, z), gradio('save_root', 'save_filename', 'save_contents'), None).then(
- lambda: gr.update(visible=False), None, gradio('file_saver'))
-
- shared.gradio['delete_confirm'].click(
- lambda x, y: utils.delete_file(x + y), gradio('delete_root', 'delete_filename'), None).then(
- lambda: gr.update(visible=False), None, gradio('file_deleter'))
-
- shared.gradio['delete_cancel'].click(lambda: gr.update(visible=False), None, gradio('file_deleter'))
- shared.gradio['save_cancel'].click(lambda: gr.update(visible=False), None, gradio('file_saver'))
-
- shared.gradio['save_character_confirm'].click(
- chat.save_character, gradio('name2', 'greeting', 'context', 'character_picture', 'save_character_filename'), None).then(
- lambda: gr.update(visible=False), None, gradio('character_saver')).then(
- lambda x: gr.update(choices=utils.get_available_characters(), value=x), gradio('save_character_filename'), gradio('character_menu'))
-
- shared.gradio['delete_character_confirm'].click(
- chat.delete_character, gradio('character_menu'), None).then(
- lambda: gr.update(visible=False), None, gradio('character_deleter')).then(
- lambda: gr.update(choices=(characters := utils.get_available_characters()), value=characters[0]), None, gradio('character_menu'))
-
- shared.gradio['save_character_cancel'].click(lambda: gr.update(visible=False), None, gradio('character_saver'))
- shared.gradio['delete_character_cancel'].click(lambda: gr.update(visible=False), None, gradio('character_deleter'))
-
- shared.gradio['save_preset'].click(
- ui.gather_interface_values, gradio(shared.input_elements), gradio('interface_state')).then(
- presets.generate_preset_yaml, gradio('interface_state'), gradio('save_contents')).then(
- lambda: 'presets/', None, gradio('save_root')).then(
- lambda: 'My Preset.yaml', None, gradio('save_filename')).then(
- lambda: gr.update(visible=True), None, gradio('file_saver'))
-
- shared.gradio['delete_preset'].click(
- lambda x: f'{x}.yaml', gradio('preset_menu'), gradio('delete_filename')).then(
- lambda: 'presets/', None, gradio('delete_root')).then(
- lambda: gr.update(visible=True), None, gradio('file_deleter'))
-
- shared.gradio['save_grammar'].click(
- ui.gather_interface_values, gradio(shared.input_elements), gradio('interface_state')).then(
- lambda x: x, gradio('grammar_string'), gradio('save_contents')).then(
- lambda: 'grammars/', None, gradio('save_root')).then(
- lambda: 'My Fancy Grammar.gbnf', None, gradio('save_filename')).then(
- lambda: gr.update(visible=True), None, gradio('file_saver'))
-
- shared.gradio['delete_grammar'].click(
- lambda x: x, gradio('grammar_file'), gradio('delete_filename')).then(
- lambda: 'grammars/', None, gradio('delete_root')).then(
- lambda: gr.update(visible=True), None, gradio('file_deleter'))
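-
-# Wiring pattern used above: create_ui() builds the hidden file-saver/deleter
-# boxes once at startup; create_event_handlers() then binds each button's
-# .click() chain, using .then() to sequence follow-up updates such as hiding
-# the confirmation box again after the action completes.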
diff --git a/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/CLIP/tests/test_consistency.py b/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/CLIP/tests/test_consistency.py
deleted file mode 100644
index f2c6fd4fe9074143803e0eb6c99fa02a47632094..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/CLIP/tests/test_consistency.py
+++ /dev/null
@@ -1,25 +0,0 @@
-import numpy as np
-import pytest
-import torch
-from PIL import Image
-
-import clip
-
-
-@pytest.mark.parametrize('model_name', clip.available_models())
-def test_consistency(model_name):
- device = "cpu"
- jit_model, transform = clip.load(model_name, device=device, jit=True)
- py_model, _ = clip.load(model_name, device=device, jit=False)
-
- image = transform(Image.open("CLIP.png")).unsqueeze(0).to(device)
- text = clip.tokenize(["a diagram", "a dog", "a cat"]).to(device)
-
- with torch.no_grad():
- logits_per_image, _ = jit_model(image, text)
- jit_probs = logits_per_image.softmax(dim=-1).cpu().numpy()
-
- logits_per_image, _ = py_model(image, text)
- py_probs = logits_per_image.softmax(dim=-1).cpu().numpy()
-
- assert np.allclose(jit_probs, py_probs, atol=0.01, rtol=0.1)
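-
-# The test parametrizes over every model name from clip.available_models() and
-# checks that the JIT and pure-Python models agree (within tolerance) on the
-# same image/text batch. It expects a CLIP.png image in the working directory,
-# so it is typically run from the CLIP repository root, e.g. with `pytest`.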
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/bricks/context_block.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/bricks/context_block.py
deleted file mode 100644
index d60fdb904c749ce3b251510dff3cc63cea70d42e..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/bricks/context_block.py
+++ /dev/null
@@ -1,125 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import torch
-from torch import nn
-
-from ..utils import constant_init, kaiming_init
-from .registry import PLUGIN_LAYERS
-
-
-def last_zero_init(m):
- if isinstance(m, nn.Sequential):
- constant_init(m[-1], val=0)
- else:
- constant_init(m, val=0)
-
-
-@PLUGIN_LAYERS.register_module()
-class ContextBlock(nn.Module):
- """ContextBlock module in GCNet.
-
- See 'GCNet: Non-local Networks Meet Squeeze-Excitation Networks and Beyond'
- (https://arxiv.org/abs/1904.11492) for details.
-
- Args:
- in_channels (int): Channels of the input feature map.
-        ratio (float): Ratio of channels of transform bottleneck.
-        pooling_type (str): Pooling method for context modeling.
-            Options are 'att' and 'avg', which stand for attention pooling and
-            average pooling respectively. Default: 'att'.
-        fusion_types (Sequence[str]): Fusion method for feature fusion.
-            Options are 'channel_add' and 'channel_mul', which stand for
-            channelwise addition and multiplication respectively.
-            Default: ('channel_add',)
- """
-
- _abbr_ = 'context_block'
-
- def __init__(self,
- in_channels,
- ratio,
- pooling_type='att',
- fusion_types=('channel_add', )):
- super(ContextBlock, self).__init__()
- assert pooling_type in ['avg', 'att']
- assert isinstance(fusion_types, (list, tuple))
- valid_fusion_types = ['channel_add', 'channel_mul']
- assert all([f in valid_fusion_types for f in fusion_types])
- assert len(fusion_types) > 0, 'at least one fusion should be used'
- self.in_channels = in_channels
- self.ratio = ratio
- self.planes = int(in_channels * ratio)
- self.pooling_type = pooling_type
- self.fusion_types = fusion_types
- if pooling_type == 'att':
- self.conv_mask = nn.Conv2d(in_channels, 1, kernel_size=1)
- self.softmax = nn.Softmax(dim=2)
- else:
- self.avg_pool = nn.AdaptiveAvgPool2d(1)
- if 'channel_add' in fusion_types:
- self.channel_add_conv = nn.Sequential(
- nn.Conv2d(self.in_channels, self.planes, kernel_size=1),
- nn.LayerNorm([self.planes, 1, 1]),
- nn.ReLU(inplace=True), # yapf: disable
- nn.Conv2d(self.planes, self.in_channels, kernel_size=1))
- else:
- self.channel_add_conv = None
- if 'channel_mul' in fusion_types:
- self.channel_mul_conv = nn.Sequential(
- nn.Conv2d(self.in_channels, self.planes, kernel_size=1),
- nn.LayerNorm([self.planes, 1, 1]),
- nn.ReLU(inplace=True), # yapf: disable
- nn.Conv2d(self.planes, self.in_channels, kernel_size=1))
- else:
- self.channel_mul_conv = None
- self.reset_parameters()
-
- def reset_parameters(self):
- if self.pooling_type == 'att':
- kaiming_init(self.conv_mask, mode='fan_in')
- self.conv_mask.inited = True
-
- if self.channel_add_conv is not None:
- last_zero_init(self.channel_add_conv)
- if self.channel_mul_conv is not None:
- last_zero_init(self.channel_mul_conv)
-
- def spatial_pool(self, x):
- batch, channel, height, width = x.size()
- if self.pooling_type == 'att':
- input_x = x
- # [N, C, H * W]
- input_x = input_x.view(batch, channel, height * width)
- # [N, 1, C, H * W]
- input_x = input_x.unsqueeze(1)
- # [N, 1, H, W]
- context_mask = self.conv_mask(x)
- # [N, 1, H * W]
- context_mask = context_mask.view(batch, 1, height * width)
- # [N, 1, H * W]
- context_mask = self.softmax(context_mask)
- # [N, 1, H * W, 1]
- context_mask = context_mask.unsqueeze(-1)
- # [N, 1, C, 1]
- context = torch.matmul(input_x, context_mask)
- # [N, C, 1, 1]
- context = context.view(batch, channel, 1, 1)
- else:
- # [N, C, 1, 1]
- context = self.avg_pool(x)
-
- return context
-
- def forward(self, x):
- # [N, C, 1, 1]
- context = self.spatial_pool(x)
-
- out = x
- if self.channel_mul_conv is not None:
- # [N, C, 1, 1]
- channel_mul_term = torch.sigmoid(self.channel_mul_conv(context))
- out = out * channel_mul_term
- if self.channel_add_conv is not None:
- # [N, C, 1, 1]
- channel_add_term = self.channel_add_conv(context)
- out = out + channel_add_term
-
- return out
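-
-
-# Usage sketch (shape-preserving; run within mmcv, where the relative imports
-# above resolve):
-#   block = ContextBlock(in_channels=16, ratio=0.25)
-#   x = torch.randn(2, 16, 8, 8)
-#   out = block(x)  # out.shape == x.shape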
diff --git a/spaces/AnthonyTruchetPoC/persistent-docker/jupyter/scripts/test_persistent_data.py b/spaces/AnthonyTruchetPoC/persistent-docker/jupyter/scripts/test_persistent_data.py
deleted file mode 100644
index eaa869aaa80d4c5b10b8e1b445f02579f0e11526..0000000000000000000000000000000000000000
--- a/spaces/AnthonyTruchetPoC/persistent-docker/jupyter/scripts/test_persistent_data.py
+++ /dev/null
@@ -1,42 +0,0 @@
-# ---
-# jupyter:
-# jupytext:
-# text_representation:
-# extension: .py
-# format_name: percent
-# format_version: '1.3'
-# jupytext_version: 1.14.7
-# kernelspec:
-# display_name: Python 3 (ipykernel)
-# language: python
-# name: python3
-# ---
-
-# %%
-import os
-from pathlib import Path
-
-import numpy as np
-import pandas as pd
-
-# %%
-from athai.data_utils import cached_download_csv
-
-# %%
-DATE_COLUMN = "date/time"
-DATA_URL = (
- "https://s3-us-west-2.amazonaws.com/"
- "streamlit-demo-data/uber-raw-data-sep14.csv.gz"
-)
-
-# Requires the APP_DATA environment variable to point at a writable directory;
-# a plain .get() would hand Path() a None and raise a confusing TypeError.
-DATA_PATH = Path(os.environ["APP_DATA"])
-
-
-# %%
-nrows = 10
-data = cached_download_csv(DATA_PATH, DATA_URL, nrows=nrows)
-data
-
-# %%
-
-# %%
diff --git a/spaces/Anustup/NS_AI_LABS/src/__init__.py b/spaces/Anustup/NS_AI_LABS/src/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/ArtGAN/Video-Diffusion-WebUI/video_diffusion/tuneavideo/util.py b/spaces/ArtGAN/Video-Diffusion-WebUI/video_diffusion/tuneavideo/util.py
deleted file mode 100644
index 77a5ebbf1f6d7ca002de3bede9d8d98fc24946d2..0000000000000000000000000000000000000000
--- a/spaces/ArtGAN/Video-Diffusion-WebUI/video_diffusion/tuneavideo/util.py
+++ /dev/null
@@ -1,93 +0,0 @@
-import os
-from typing import Union
-
-import imageio
-import numpy as np
-import torch
-import torchvision
-from einops import rearrange
-from tqdm import tqdm
-
-
-def save_videos_grid(
- videos: torch.Tensor, save_path: str = "output", path: str = "output.gif", rescale=False, n_rows=4, fps=3
-):
- videos = rearrange(videos, "b c t h w -> t b c h w")
- outputs = []
- for x in videos:
- x = torchvision.utils.make_grid(x, nrow=n_rows)
- x = x.transpose(0, 1).transpose(1, 2).squeeze(-1)
- if rescale:
- x = (x + 1.0) / 2.0 # -1,1 -> 0,1
- x = (x * 255).numpy().astype(np.uint8)
- outputs.append(x)
-
- if not os.path.exists(save_path):
- os.makedirs(save_path)
-
- imageio.mimsave(os.path.join(save_path, path), outputs, fps=fps)
- return os.path.join(save_path, path)
-
-
-# DDIM Inversion
-@torch.no_grad()
-def init_prompt(prompt, pipeline):
- uncond_input = pipeline.tokenizer(
- [""], padding="max_length", max_length=pipeline.tokenizer.model_max_length, return_tensors="pt"
- )
- uncond_embeddings = pipeline.text_encoder(uncond_input.input_ids.to(pipeline.device))[0]
- text_input = pipeline.tokenizer(
- [prompt],
- padding="max_length",
- max_length=pipeline.tokenizer.model_max_length,
- truncation=True,
- return_tensors="pt",
- )
- text_embeddings = pipeline.text_encoder(text_input.input_ids.to(pipeline.device))[0]
- context = torch.cat([uncond_embeddings, text_embeddings])
-
- return context
-
-
-def next_step(
- model_output: Union[torch.FloatTensor, np.ndarray],
- timestep: int,
- sample: Union[torch.FloatTensor, np.ndarray],
- ddim_scheduler,
-):
-    # Reversed DDIM update used for inversion: treat the incoming timestep as
-    # the *next* one and step from the earlier timestep toward it, adding noise.
-    timestep, next_timestep = (
-        min(timestep - ddim_scheduler.config.num_train_timesteps // ddim_scheduler.num_inference_steps, 999),
-        timestep,
-    )
- alpha_prod_t = ddim_scheduler.alphas_cumprod[timestep] if timestep >= 0 else ddim_scheduler.final_alpha_cumprod
- alpha_prod_t_next = ddim_scheduler.alphas_cumprod[next_timestep]
- beta_prod_t = 1 - alpha_prod_t
- next_original_sample = (sample - beta_prod_t**0.5 * model_output) / alpha_prod_t**0.5
- next_sample_direction = (1 - alpha_prod_t_next) ** 0.5 * model_output
- next_sample = alpha_prod_t_next**0.5 * next_original_sample + next_sample_direction
- return next_sample
-
-
-def get_noise_pred_single(latents, t, context, unet):
- noise_pred = unet(latents, t, encoder_hidden_states=context)["sample"]
- return noise_pred
-
-
-@torch.no_grad()
-def ddim_loop(pipeline, ddim_scheduler, latent, num_inv_steps, prompt):
- context = init_prompt(prompt, pipeline)
- uncond_embeddings, cond_embeddings = context.chunk(2)
- all_latent = [latent]
- latent = latent.clone().detach()
- for i in tqdm(range(num_inv_steps)):
- t = ddim_scheduler.timesteps[len(ddim_scheduler.timesteps) - i - 1]
- noise_pred = get_noise_pred_single(latent, t, cond_embeddings, pipeline.unet)
- latent = next_step(noise_pred, t, latent, ddim_scheduler)
- all_latent.append(latent)
- return all_latent
-
-
-@torch.no_grad()
-def ddim_inversion(pipeline, ddim_scheduler, video_latent, num_inv_steps, prompt=""):
- ddim_latents = ddim_loop(pipeline, ddim_scheduler, video_latent, num_inv_steps, prompt)
- return ddim_latents
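-
-
-if __name__ == "__main__":
-    # Quick sketch: write a random 8-frame clip batch to output/demo.gif.
-    # torch.rand values are already in [0, 1], so rescale stays False; the fps
-    # argument matches what the module itself passes to imageio.mimsave.
-    dummy = torch.rand(2, 3, 8, 64, 64)  # (batch, channel, time, height, width)
-    print(save_videos_grid(dummy, save_path="output", path="demo.gif", n_rows=2, fps=3))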
diff --git a/spaces/AsakuraMizu/moe-tts/text/korean.py b/spaces/AsakuraMizu/moe-tts/text/korean.py
deleted file mode 100644
index edee07429a450c55e3d8e246997faaa1e0b89cc9..0000000000000000000000000000000000000000
--- a/spaces/AsakuraMizu/moe-tts/text/korean.py
+++ /dev/null
@@ -1,210 +0,0 @@
-import re
-from jamo import h2j, j2hcj
-import ko_pron
-
-
-# This is a list of Korean classifiers preceded by pure Korean numerals.
-_korean_classifiers = '군데 권 개 그루 닢 대 두 마리 모 모금 뭇 발 발짝 방 번 벌 보루 살 수 술 시 쌈 움큼 정 짝 채 척 첩 축 켤레 톨 통'
-
-# List of (hangul, hangul divided) pairs:
-_hangul_divided = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('ㄳ', 'ㄱㅅ'),
- ('ㄵ', 'ㄴㅈ'),
- ('ㄶ', 'ㄴㅎ'),
- ('ㄺ', 'ㄹㄱ'),
- ('ㄻ', 'ㄹㅁ'),
- ('ㄼ', 'ㄹㅂ'),
- ('ㄽ', 'ㄹㅅ'),
- ('ㄾ', 'ㄹㅌ'),
- ('ㄿ', 'ㄹㅍ'),
- ('ㅀ', 'ㄹㅎ'),
- ('ㅄ', 'ㅂㅅ'),
- ('ㅘ', 'ㅗㅏ'),
- ('ㅙ', 'ㅗㅐ'),
- ('ㅚ', 'ㅗㅣ'),
- ('ㅝ', 'ㅜㅓ'),
- ('ㅞ', 'ㅜㅔ'),
- ('ㅟ', 'ㅜㅣ'),
- ('ㅢ', 'ㅡㅣ'),
- ('ㅑ', 'ㅣㅏ'),
- ('ㅒ', 'ㅣㅐ'),
- ('ㅕ', 'ㅣㅓ'),
- ('ㅖ', 'ㅣㅔ'),
- ('ㅛ', 'ㅣㅗ'),
- ('ㅠ', 'ㅣㅜ')
-]]
-
-# List of (Latin alphabet, hangul) pairs:
-_latin_to_hangul = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [
- ('a', '에이'),
- ('b', '비'),
- ('c', '시'),
- ('d', '디'),
- ('e', '이'),
- ('f', '에프'),
- ('g', '지'),
- ('h', '에이치'),
- ('i', '아이'),
- ('j', '제이'),
- ('k', '케이'),
- ('l', '엘'),
- ('m', '엠'),
- ('n', '엔'),
- ('o', '오'),
- ('p', '피'),
- ('q', '큐'),
- ('r', '아르'),
- ('s', '에스'),
- ('t', '티'),
- ('u', '유'),
- ('v', '브이'),
- ('w', '더블유'),
- ('x', '엑스'),
- ('y', '와이'),
- ('z', '제트')
-]]
-
-# List of (ipa, lazy ipa) pairs:
-_ipa_to_lazy_ipa = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [
- ('t͡ɕ','ʧ'),
- ('d͡ʑ','ʥ'),
- ('ɲ','n^'),
- ('ɕ','ʃ'),
- ('ʷ','w'),
- ('ɭ','l`'),
- ('ʎ','ɾ'),
- ('ɣ','ŋ'),
- ('ɰ','ɯ'),
- ('ʝ','j'),
- ('ʌ','ə'),
- ('ɡ','g'),
- ('\u031a','#'),
- ('\u0348','='),
- ('\u031e',''),
- ('\u0320',''),
- ('\u0339','')
-]]
-
-
-def latin_to_hangul(text):
- for regex, replacement in _latin_to_hangul:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def divide_hangul(text):
- text = j2hcj(h2j(text))
- for regex, replacement in _hangul_divided:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def hangul_number(num, sino=True):
- '''Reference https://github.com/Kyubyong/g2pK'''
- num = re.sub(',', '', num)
-
- if num == '0':
- return '영'
- if not sino and num == '20':
- return '스무'
-
- digits = '123456789'
- names = '일이삼사오육칠팔구'
- digit2name = {d: n for d, n in zip(digits, names)}
-
- modifiers = '한 두 세 네 다섯 여섯 일곱 여덟 아홉'
- decimals = '열 스물 서른 마흔 쉰 예순 일흔 여든 아흔'
- digit2mod = {d: mod for d, mod in zip(digits, modifiers.split())}
- digit2dec = {d: dec for d, dec in zip(digits, decimals.split())}
-
- spelledout = []
- for i, digit in enumerate(num):
- i = len(num) - i - 1
- if sino:
- if i == 0:
- name = digit2name.get(digit, '')
- elif i == 1:
- name = digit2name.get(digit, '') + '십'
- name = name.replace('일십', '십')
- else:
- if i == 0:
- name = digit2mod.get(digit, '')
- elif i == 1:
- name = digit2dec.get(digit, '')
- if digit == '0':
- if i % 4 == 0:
- last_three = spelledout[-min(3, len(spelledout)):]
- if ''.join(last_three) == '':
- spelledout.append('')
- continue
- else:
- spelledout.append('')
- continue
- if i == 2:
- name = digit2name.get(digit, '') + '백'
- name = name.replace('일백', '백')
- elif i == 3:
- name = digit2name.get(digit, '') + '천'
- name = name.replace('일천', '천')
- elif i == 4:
- name = digit2name.get(digit, '') + '만'
- name = name.replace('일만', '만')
- elif i == 5:
- name = digit2name.get(digit, '') + '십'
- name = name.replace('일십', '십')
- elif i == 6:
- name = digit2name.get(digit, '') + '백'
- name = name.replace('일백', '백')
- elif i == 7:
- name = digit2name.get(digit, '') + '천'
- name = name.replace('일천', '천')
- elif i == 8:
- name = digit2name.get(digit, '') + '억'
- elif i == 9:
- name = digit2name.get(digit, '') + '십'
- elif i == 10:
- name = digit2name.get(digit, '') + '백'
- elif i == 11:
- name = digit2name.get(digit, '') + '천'
- elif i == 12:
- name = digit2name.get(digit, '') + '조'
- elif i == 13:
- name = digit2name.get(digit, '') + '십'
- elif i == 14:
- name = digit2name.get(digit, '') + '백'
- elif i == 15:
- name = digit2name.get(digit, '') + '천'
- spelledout.append(name)
- return ''.join(elem for elem in spelledout)
-
-
-def number_to_hangul(text):
- '''Reference https://github.com/Kyubyong/g2pK'''
- tokens = set(re.findall(r'(\d[\d,]*)([\uac00-\ud71f]+)', text))
- for token in tokens:
- num, classifier = token
- if classifier[:2] in _korean_classifiers or classifier[0] in _korean_classifiers:
- spelledout = hangul_number(num, sino=False)
- else:
- spelledout = hangul_number(num, sino=True)
- text = text.replace(f'{num}{classifier}', f'{spelledout}{classifier}')
- # digit by digit for remaining digits
- digits = '0123456789'
- names = '영일이삼사오육칠팔구'
- for d, n in zip(digits, names):
- text = text.replace(d, n)
- return text
-
-
-def korean_to_lazy_ipa(text):
- text = latin_to_hangul(text)
- text = number_to_hangul(text)
-    text = re.sub(
-        '[\uac00-\ud7af]+',
-        lambda x: ko_pron.romanise(x.group(0), 'ipa').split('] ~ [')[0],
-        text)
- for regex, replacement in _ipa_to_lazy_ipa:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def korean_to_ipa(text):
- text = korean_to_lazy_ipa(text)
- return text.replace('ʧ','tʃ').replace('ʥ','dʑ')
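-
-
-if __name__ == '__main__':
-    # Small usage sketch (assumes the jamo and ko_pron packages are installed):
-    print(latin_to_hangul('abc'))    # -> 에이비시
-    print(number_to_hangul('3마리'))  # -> 세마리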
diff --git a/spaces/Banbri/zcvzcv/src/lib/triggerDownload.ts b/spaces/Banbri/zcvzcv/src/lib/triggerDownload.ts
deleted file mode 100644
index e5627a26a4bba34bdf28279d265c6a71440d8136..0000000000000000000000000000000000000000
--- a/spaces/Banbri/zcvzcv/src/lib/triggerDownload.ts
+++ /dev/null
@@ -1,12 +0,0 @@
-export function triggerDownload(filename: string, text: string) {
-  const element = document.createElement('a');
- element.setAttribute('href', 'data:text/plain;charset=utf-8,' + encodeURIComponent(text));
- element.setAttribute('download', filename);
-
- element.style.display = 'none';
- document.body.appendChild(element);
-
- element.click();
-
- document.body.removeChild(element);
-}
\ No newline at end of file
diff --git a/spaces/Bart92/RVC_HF/infer/modules/vc/__init__.py b/spaces/Bart92/RVC_HF/infer/modules/vc/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Benson/text-generation/Examples/Descarga De Software Ricoh Aficio 2018d.md b/spaces/Benson/text-generation/Examples/Descarga De Software Ricoh Aficio 2018d.md
deleted file mode 100644
index ad0de13c0c4e2f5a77ca016c33ae4c792a61a708..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descarga De Software Ricoh Aficio 2018d.md
+++ /dev/null
@@ -1,75 +0,0 @@
-
-
Download X-Force Keygen for AutoCAD 2019 64-bit
-
If you are looking for a way to activate AutoCAD 2019, one of the most popular and powerful design programs in the world, you may have heard of X-Force Keygen. This is a tool that can generate activation codes for any Autodesk product, including AutoCAD 2019. But what is X-Force Keygen, and how can it be used to download and activate AutoCAD 2019 64-bit? This article answers those questions and more.
-
What is AutoCAD 2019?
-
AutoCAD 2019 is the latest version of the software for creating impressive 2D and 3D designs, drawings, models, and animations. It is used by professionals and hobbyists alike in fields such as architecture, engineering, construction, manufacturing, and entertainment. AutoCAD 2019 offers many new features and improvements that make creating and editing projects easier and faster.
Some of the features of AutoCAD 2019 are:
-
-
DWG Compare: compare two versions of a drawing and highlight the differences. You can also merge changes from one version into another.
-
Shared Views: share your drawings online with anyone, without them needing AutoCAD installed. You can also collect feedback and comments on your shared views.
-
Save to Web and Mobile: save your drawings to the cloud and access them from any device. You can also view and edit them on the web or on your phone or tablet.
-
2D graphics improvements: better performance and quality for 2D graphics. Zooming, panning, and changing draw order is faster and smoother.
-
AutoCAD Web App: access AutoCAD from any browser. You can create, edit, and view your drawings online without installing anything.
-
-
The minimum system requirements for AutoCAD 2019 are:
-
-
Operating system: Windows 7 SP1, Windows 8.1, or Windows 10 (64-bit only)
Display resolution: Basic: 1920 x 1080 with True Color / Recommended: resolutions up to 3840 x 2160 supported on Windows 10 (64-bit) systems
-
Disk space: Basic: 6 GB / Recommended: additional free space required during installation
-
-
What is X-Force Keygen?
-
X-Force Keygen is a program that can generate activation codes for any Autodesk product, including AutoCAD 2019. It is also known as a crack or patch that bypasses the software's activation process. It was created by a group of hackers called X-Force, known for cracking various programs and games.
-
How does X-Force Keygen work?
-
X-Force Keygen works by generating a unique serial number and activation code for the software you want to activate. It does this by modifying your system's registry entries and hosts file. It also blocks the software's online verification and updates, so it can be used without restrictions or interruptions.
-
Benefits of using X-Force Keygen
-
Some of the claimed benefits of X-Force Keygen are:
-
-
Free: you can download and use X-Force Keygen at no cost, with no fees or subscriptions.
-
Easy: you can download and use X-Force Keygen in a few clicks, with no complicated procedures or steps.
-
Effective: you can activate any Autodesk product, including AutoCAD 2019, with X-Force Keygen, without errors or failures.
-
-
Risks of using X-Force Keygen
-
However, using X-Force Keygen also comes with risks and downsides, such as:
-
-
-
-
Unsafe: downloading and using X-Force Keygen can expose your system to malware, viruses, spyware, and other threats. It can damage your system or compromise your data and privacy.
-
Unethical: using X-Force Keygen is unethical, as it deprives software developers of their legitimate revenue and recognition. It can also harm the software industry and innovation.
-
-
How to download and use X-Force Keygen for AutoCAD 2019 64-bit?
-
If you still want to download and use X-Force Keygen for AutoCAD 2019 64-bit, you can follow these steps:
-
Step 1: Download X-Force Keygen from a trusted source
-
You can search for X-Force Keygen on the internet and find many websites offering it for download, but not all of them are trustworthy or safe. Only download X-Force Keygen from a reliable source with positive reviews and feedback from other users, and scan the downloaded file with an antivirus program before opening it.
-
Step 2: Install AutoCAD 2019 from the official website
-
Install AutoCAD 2019 from the official Autodesk website, https://www.autodesk.com/products/autocad/free-trial. Choose the 64-bit version of the software and follow the installation instructions. Do not enter any serial number or product key during installation.
-
Step 3: Run X-Force Keygen as administrator
-
Run X-Force Keygen as administrator by right-clicking the file and selecting "Run as administrator". You should see a window like this:
-
-
Select "AutoCAD 2019" from the drop-down menu and click "Patch". You should see a message saying "Successfully patched".
-
Step 4: Generate and enter the activation code
-
-
Step 5: Enjoy your activated AutoCAD 2019
-
You have successfully activated AutoCAD 2019 with X-Force Keygen. You can now enjoy all of the software's features and functions without limitations or interruptions.
-
Conclusion
-
In this article, we explained what AutoCAD 2019 is, what X-Force Keygen is, how it works, its benefits and risks, and how to download and use it to activate AutoCAD 2019 64-bit, with a step-by-step guide and screenshots. However, we do not recommend or endorse using X-Force Keygen, as it is illegal, unsafe, and unethical. Always respect the software license agreement and support software developers by purchasing software legally. If you have any questions or doubts about AutoCAD 2019 or X-Force Keygen, check the FAQs below or contact the official Autodesk support team.
-
FAQs
-
Here are some frequently asked questions about AutoCAD 2019 and X-Force Keygen:
-
-
Q: How much does AutoCAD 2019 cost?
-
A: AutoCAD 2019 costs $1,610 per year or $210 per month as a subscription. You can also get a free 30-day trial, or a free version for students and educators.
-
Q: Is X-Force Keygen safe to use?
-
A: No. It may contain malware, viruses, spyware, and other threats that can damage your system or compromise your data and privacy. It can also cause errors or crashes in your software or system.
-
Q: Is X-Force Keygen legal to use?
-
A: No. It violates the terms and conditions of the software license agreement and infringes the intellectual property rights of software developers. You can face legal consequences if caught using it.
-
Q: What are the alternatives to X-Force Keygen?
-
-
Q: How can I contact the official Autodesk support team?
-
A: You can contact the official Autodesk support team by visiting https://www.autodesk.com/support/contact-support or by calling 1-855-301-9562.
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Descargar Apkmirror Ar Elementos.md b/spaces/Benson/text-generation/Examples/Descargar Apkmirror Ar Elementos.md
deleted file mode 100644
index 6d8759f5d7c948caecbde92adee6ed5c9e48b982..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descargar Apkmirror Ar Elementos.md
+++ /dev/null
@@ -1,86 +0,0 @@
-
-
How to Download and Use AR Elements from APKMirror
-
Augmented reality (AR) is a technology that enhances the real world with digital elements such as images, text, video, and 3D models. AR can create immersive, interactive experiences that enrich how we perceive and interact with our surroundings. Many AR apps are available for Android and iOS devices, and one of the best is AR Elements.
AR Elements is an app developed by Google that showcases the capabilities of ARCore, Google's platform for building AR apps. It shows you how to set user expectations in AR, how to encourage users to move around and explore the AR world, how to interact with virtual objects, and more. You can learn from the examples the app provides, or create your own AR scenes using the included tools and assets.
-
In this article, we show you how to download and use AR Elements from APKMirror, one of the most popular websites for downloading Android apps. We also explain the features and benefits of AR Elements and how it compares to other AR apps in functionality, quality, and user experience.
-
How to download and use AR Elements from APKMirror
-
APKMirror is a website that lets you download Android apps that are not available in the Google Play Store or that are region-locked. APKMirror hosts thousands of apps across categories such as games, social media, productivity, education, and entertainment. You can find and download any app you want from APKMirror for free, with no registration or subscription.
-
To download and use AR Elements from APKMirror, follow these steps:
Scroll down to find the latest version of AR Elements, or use the search bar to locate it.
-
Click the download button next to the version you want.
-
-
If you see a warning that says "Install blocked", go to your device settings and enable "Unknown sources" or "Install unknown apps".
-
Follow the on-screen instructions to install AR Elements on your device.
-
Launch AR Elements from the app drawer or home screen.
-
-
Features and benefits of AR Elements
-
AR Elements is an app that demonstrates best practices for creating engaging, realistic AR experiences. It has four main features you can use to learn, create, and have fun with AR:
-
-
Design: shows how to design an effective user interface for your AR app. It teaches you to use visual cues, such as arrows, grids, planes, shadows, and occlusion, to guide users through your app, and to use animation, sound, haptics, and voice feedback to improve engagement.
-
Motion: shows how to encourage users to move around and explore the AR world. It teaches you to use spatial audio, portals, trails, waypoints, and anchors to create a sense of depth and scale, and to use physics, collisions, gestures, and touch input to create realistic interactions with virtual objects.
-
Lighting: shows how to use lighting effects to create a realistic, consistent AR scene. It teaches you to use ambient lighting, directional lighting, point lights, spotlights, and shadows to create a sense of realism and immersion, and to use light estimation, reflection probes, and light probes to adjust the lighting to the real world.
-
-
-
The benefits of AR Elements are that it helps you learn the fundamentals of AR development, inspires you to create your own AR apps, and entertains you with fun, interactive AR experiences. You can use AR Elements as a reference, a tool, or a playground for your AR projects.
-
Comparing AR Elements with other AR apps
-
Many other AR apps are available for Android and iOS devices, such as Snapchat, Instagram, Pokemon Go, IKEA Place, Google Maps, and more. How does AR Elements compare to these apps in functionality, quality, and user experience?
-
Here is a summary of some of the differences between AR Elements and other AR apps:
-
AR Elements focuses on teaching and demonstrating best practices for building AR apps; other AR apps focus on providing a specific feature or service using AR technology.
AR Elements provides examples and tools for creating many kinds of AR scenes and interactions; other AR apps provide predefined or limited kinds.
AR Elements uses high-quality 3D models and textures from Google Poly or custom sources; other AR apps use lower-quality or generic models and textures from third-party sources.
AR Elements uses realistic, consistent lighting effects to create immersive AR scenes; other AR apps use simple or inconsistent lighting for basic scenes.
AR Elements uses spatial audio, portals, trails, waypoints, anchors, physics, collisions, gestures, touch input, animation, sound, haptics, voice feedback, visual cues, grids, planes, shadows, occlusion, ambient lighting, directional lighting, point lights, light estimation, reflection probes, and light probes to create realistic, engaging AR experiences; other AR apps use few or none of these features.
-
The advantages of AR Elements are that it offers more functionality, quality, and user experience than other AR apps. It lets you create your own AR scenes and interactions from a variety of features and assets, and it helps you learn best practices for building AR apps.
-
The disadvantages of AR Elements are that it demands more technical skill and knowledge than other AR apps, as well as greater device compatibility and performance. It may not suit casual or novice users who just want simple or fun AR experiences.
-
Conclusion
-
In conclusion, AR Elements is one of the best AR apps for Android and iOS devices. It showcases the capabilities of ARCore, Google's platform for building AR apps. It shows you how to design an effective user interface for your AR app, how to encourage users to move around and explore the AR world, how to use lighting effects to create a realistic, consistent AR scene, and how to use 3D models and textures to build your own AR scenes.
-
To download and use AR Elements from APKMirror, follow the steps given in this article. You can also check the official APKMirror website for more information on downloading Android apps.
-
To learn more about AR Elements and how to build your own AR apps with Google's tools and resources, visit the official Google developers website.
-
We hope you enjoyed this article and learned something new about AR Elements and APKMirror. If you have any questions or comments about this article, or anything related to AR Elements or APKMirror, feel free to leave a comment below. We would love to hear from you!
-
FAQs
-
Here are some frequently asked questions about AR Elements and APKMirror:
-
What is APKMirror?
-
-
Is APKMirror safe and legal?
-
APKMirror is safe and legal to use. APKMirror only hosts apps that are original and unmodified; it does not host any pirated or illegal apps. APKMirror also scans every app for viruses and malware before publishing it, and it does not require any permissions or access to your device or personal data.
-
What is ARCore and why do I need it?
-
ARCore is Google's platform for building AR apps. ARCore lets your device sense its environment, understand the world, and interact with information. ARCore uses three key technologies to enable AR experiences: motion tracking, environmental understanding, and light estimation. ARCore works with Java/OpenGL, Unity, and Unreal engines.
-
You need ARCore to run AR Elements and other apps built with ARCore. You can download ARCore from the Google Play Store or APKMirror. You also need a compatible device that supports ARCore; you can check the list of supported devices on the official Google developers website.
-
How can I build my own AR apps with Google's tools and resources?
-
If you want to build your own AR apps with Google's tools and resources, visit the official Google developers website. You will find tutorials, guides, samples, documentation, and support for developing AR apps with ARCore, Sceneform, Poly, Firebase, ML Kit, and more. You can also join the Google developer community and connect with other developers interested in AR.
-
What are some of the best AR apps for Android and iOS devices?
-
Some of the best AR apps for Android and iOS devices are:
-
-
Snapchat: a social media app that lets you send and receive photos and videos with filters, stickers, lenses, and effects that use AR technology.
-
-
Pokemon Go: a game that lets you catch and battle virtual creatures called Pokemon in the real world using AR technology.
-
IKEA Place: a shopping app that lets you preview and place furniture and home accessories in your space using AR technology.
-
Google Maps: a navigation app that lets you explore the world with directions, landmarks, Street View, and Live View, which use AR technology.
-
-
\ No newline at end of file
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/dynamodb/types.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/dynamodb/types.py
deleted file mode 100644
index f358b12f55ccf5ce5983082bef5df33fc318d994..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/dynamodb/types.py
+++ /dev/null
@@ -1,310 +0,0 @@
-# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"). You
-# may not use this file except in compliance with the License. A copy of
-# the License is located at
-#
-# https://aws.amazon.com/apache2.0/
-#
-# or in the "license" file accompanying this file. This file is
-# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
-# ANY KIND, either express or implied. See the License for the specific
-# language governing permissions and limitations under the License.
-from decimal import (
- Clamped,
- Context,
- Decimal,
- Inexact,
- Overflow,
- Rounded,
- Underflow,
-)
-
-from boto3.compat import collections_abc
-
-STRING = 'S'
-NUMBER = 'N'
-BINARY = 'B'
-STRING_SET = 'SS'
-NUMBER_SET = 'NS'
-BINARY_SET = 'BS'
-NULL = 'NULL'
-BOOLEAN = 'BOOL'
-MAP = 'M'
-LIST = 'L'
-
-
-DYNAMODB_CONTEXT = Context(
- Emin=-128,
- Emax=126,
- prec=38,
- traps=[Clamped, Overflow, Inexact, Rounded, Underflow],
-)
-
-
-BINARY_TYPES = (bytearray, bytes)
-
-
-class Binary:
- """A class for representing Binary in dynamodb
-
- Especially for Python 2, use this class to explicitly specify
- binary data for item in DynamoDB. It is essentially a wrapper around
- binary. Unicode and Python 3 string types are not allowed.
- """
-
- def __init__(self, value):
- if not isinstance(value, BINARY_TYPES):
- types = ', '.join([str(t) for t in BINARY_TYPES])
- raise TypeError(f'Value must be of the following types: {types}')
- self.value = value
-
- def __eq__(self, other):
- if isinstance(other, Binary):
- return self.value == other.value
- return self.value == other
-
- def __ne__(self, other):
- return not self.__eq__(other)
-
- def __repr__(self):
- return f'Binary({self.value!r})'
-
- def __str__(self):
- return self.value
-
- def __bytes__(self):
- return self.value
-
- def __hash__(self):
- return hash(self.value)
-
-
-class TypeSerializer:
- """This class serializes Python data types to DynamoDB types."""
-
- def serialize(self, value):
- """The method to serialize the Python data types.
-
- :param value: A python value to be serialized to DynamoDB. Here are
- the various conversions:
-
- Python DynamoDB
- ------ --------
- None {'NULL': True}
- True/False {'BOOL': True/False}
- int/Decimal {'N': str(value)}
- string {'S': string}
- Binary/bytearray/bytes (py3 only) {'B': bytes}
- set([int/Decimal]) {'NS': [str(value)]}
-            set([string])                       {'SS': [string]}
- set([Binary/bytearray/bytes]) {'BS': [bytes]}
- list {'L': list}
- dict {'M': dict}
-
- For types that involve numbers, it is recommended that ``Decimal``
- objects are used to be able to round-trip the Python type.
- For types that involve binary, it is recommended that ``Binary``
- objects are used to be able to round-trip the Python type.
-
- :rtype: dict
- :returns: A dictionary that represents a dynamoDB data type. These
- dictionaries can be directly passed to botocore methods.
- """
- dynamodb_type = self._get_dynamodb_type(value)
- serializer = getattr(self, f'_serialize_{dynamodb_type}'.lower())
- return {dynamodb_type: serializer(value)}
-
- def _get_dynamodb_type(self, value):
- dynamodb_type = None
-
- if self._is_null(value):
- dynamodb_type = NULL
-
- elif self._is_boolean(value):
- dynamodb_type = BOOLEAN
-
- elif self._is_number(value):
- dynamodb_type = NUMBER
-
- elif self._is_string(value):
- dynamodb_type = STRING
-
- elif self._is_binary(value):
- dynamodb_type = BINARY
-
- elif self._is_type_set(value, self._is_number):
- dynamodb_type = NUMBER_SET
-
- elif self._is_type_set(value, self._is_string):
- dynamodb_type = STRING_SET
-
- elif self._is_type_set(value, self._is_binary):
- dynamodb_type = BINARY_SET
-
- elif self._is_map(value):
- dynamodb_type = MAP
-
- elif self._is_listlike(value):
- dynamodb_type = LIST
-
- else:
- msg = f'Unsupported type "{type(value)}" for value "{value}"'
- raise TypeError(msg)
-
- return dynamodb_type
-
- def _is_null(self, value):
- if value is None:
- return True
- return False
-
- def _is_boolean(self, value):
- if isinstance(value, bool):
- return True
- return False
-
- def _is_number(self, value):
- if isinstance(value, (int, Decimal)):
- return True
- elif isinstance(value, float):
- raise TypeError(
- 'Float types are not supported. Use Decimal types instead.'
- )
- return False
-
- def _is_string(self, value):
- if isinstance(value, str):
- return True
- return False
-
- def _is_binary(self, value):
- if isinstance(value, (Binary, bytearray, bytes)):
- return True
- return False
-
- def _is_set(self, value):
- if isinstance(value, collections_abc.Set):
- return True
- return False
-
- def _is_type_set(self, value, type_validator):
- if self._is_set(value):
- if False not in map(type_validator, value):
- return True
- return False
-
- def _is_map(self, value):
- if isinstance(value, collections_abc.Mapping):
- return True
- return False
-
- def _is_listlike(self, value):
- if isinstance(value, (list, tuple)):
- return True
- return False
-
- def _serialize_null(self, value):
- return True
-
- def _serialize_bool(self, value):
- return value
-
- def _serialize_n(self, value):
- number = str(DYNAMODB_CONTEXT.create_decimal(value))
- if number in ['Infinity', 'NaN']:
- raise TypeError('Infinity and NaN not supported')
- return number
-
- def _serialize_s(self, value):
- return value
-
- def _serialize_b(self, value):
- if isinstance(value, Binary):
- value = value.value
- return value
-
- def _serialize_ss(self, value):
- return [self._serialize_s(s) for s in value]
-
- def _serialize_ns(self, value):
- return [self._serialize_n(n) for n in value]
-
- def _serialize_bs(self, value):
- return [self._serialize_b(b) for b in value]
-
- def _serialize_l(self, value):
- return [self.serialize(v) for v in value]
-
- def _serialize_m(self, value):
- return {k: self.serialize(v) for k, v in value.items()}
-
-
-class TypeDeserializer:
- """This class deserializes DynamoDB types to Python types."""
-
- def deserialize(self, value):
- """The method to deserialize the DynamoDB data types.
-
- :param value: A DynamoDB value to be deserialized to a pythonic value.
- Here are the various conversions:
-
- DynamoDB Python
- -------- ------
- {'NULL': True} None
- {'BOOL': True/False} True/False
- {'N': str(value)} Decimal(str(value))
- {'S': string} string
- {'B': bytes} Binary(bytes)
- {'NS': [str(value)]} set([Decimal(str(value))])
- {'SS': [string]} set([string])
- {'BS': [bytes]} set([bytes])
- {'L': list} list
- {'M': dict} dict
-
- :returns: The pythonic value of the DynamoDB type.
- """
-
- if not value:
- raise TypeError(
- 'Value must be a nonempty dictionary whose key '
- 'is a valid dynamodb type.'
- )
- dynamodb_type = list(value.keys())[0]
- try:
- deserializer = getattr(
- self, f'_deserialize_{dynamodb_type}'.lower()
- )
- except AttributeError:
- raise TypeError(f'Dynamodb type {dynamodb_type} is not supported')
- return deserializer(value[dynamodb_type])
-
- def _deserialize_null(self, value):
- return None
-
- def _deserialize_bool(self, value):
- return value
-
- def _deserialize_n(self, value):
- return DYNAMODB_CONTEXT.create_decimal(value)
-
- def _deserialize_s(self, value):
- return value
-
- def _deserialize_b(self, value):
- return Binary(value)
-
- def _deserialize_ns(self, value):
- return set(map(self._deserialize_n, value))
-
- def _deserialize_ss(self, value):
- return set(map(self._deserialize_s, value))
-
- def _deserialize_bs(self, value):
- return set(map(self._deserialize_b, value))
-
- def _deserialize_l(self, value):
- return [self.deserialize(v) for v in value]
-
- def _deserialize_m(self, value):
- return {k: self.deserialize(v) for k, v in value.items()}
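-
-
-if __name__ == '__main__':
-    # Round-trip sketch: Python item -> DynamoDB wire format -> Python item.
-    serializer = TypeSerializer()
-    deserializer = TypeDeserializer()
-    item = {'id': 'user-1', 'score': Decimal('99.5'), 'tags': {'a', 'b'}}
-    wire = {k: serializer.serialize(v) for k, v in item.items()}
-    # wire == {'id': {'S': 'user-1'}, 'score': {'N': '99.5'}, 'tags': {'SS': [...]}}
-    assert {k: deserializer.deserialize(v) for k, v in wire.items()} == item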
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/urllib3/request.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/urllib3/request.py
deleted file mode 100644
index 398386a5b9f61c13be314e256e671a37d28e3623..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/urllib3/request.py
+++ /dev/null
@@ -1,170 +0,0 @@
-from __future__ import absolute_import
-
-from .filepost import encode_multipart_formdata
-from .packages.six.moves.urllib.parse import urlencode
-
-__all__ = ["RequestMethods"]
-
-
-class RequestMethods(object):
- """
-    Convenience mixin for classes that implement a :meth:`urlopen` method, such
- as :class:`urllib3.HTTPConnectionPool` and
- :class:`urllib3.PoolManager`.
-
- Provides behavior for making common types of HTTP request methods and
- decides which type of request field encoding to use.
-
- Specifically,
-
- :meth:`.request_encode_url` is for sending requests whose fields are
- encoded in the URL (such as GET, HEAD, DELETE).
-
- :meth:`.request_encode_body` is for sending requests whose fields are
- encoded in the *body* of the request using multipart or www-form-urlencoded
- (such as for POST, PUT, PATCH).
-
- :meth:`.request` is for making any kind of request, it will look up the
- appropriate encoding format and use one of the above two methods to make
- the request.
-
- Initializer parameters:
-
- :param headers:
- Headers to include with all requests, unless other headers are given
- explicitly.
- """
-
- _encode_url_methods = {"DELETE", "GET", "HEAD", "OPTIONS"}
-
- def __init__(self, headers=None):
- self.headers = headers or {}
-
- def urlopen(
- self,
- method,
- url,
- body=None,
- headers=None,
- encode_multipart=True,
- multipart_boundary=None,
- **kw
- ): # Abstract
- raise NotImplementedError(
- "Classes extending RequestMethods must implement "
- "their own ``urlopen`` method."
- )
-
- def request(self, method, url, fields=None, headers=None, **urlopen_kw):
- """
- Make a request using :meth:`urlopen` with the appropriate encoding of
- ``fields`` based on the ``method`` used.
-
- This is a convenience method that requires the least amount of manual
- effort. It can be used in most situations, while still having the
- option to drop down to more specific methods when necessary, such as
- :meth:`request_encode_url`, :meth:`request_encode_body`,
- or even the lowest level :meth:`urlopen`.
- """
- method = method.upper()
-
- urlopen_kw["request_url"] = url
-
- if method in self._encode_url_methods:
- return self.request_encode_url(
- method, url, fields=fields, headers=headers, **urlopen_kw
- )
- else:
- return self.request_encode_body(
- method, url, fields=fields, headers=headers, **urlopen_kw
- )
-
- def request_encode_url(self, method, url, fields=None, headers=None, **urlopen_kw):
- """
- Make a request using :meth:`urlopen` with the ``fields`` encoded in
- the url. This is useful for request methods like GET, HEAD, DELETE, etc.
- """
- if headers is None:
- headers = self.headers
-
- extra_kw = {"headers": headers}
- extra_kw.update(urlopen_kw)
-
- if fields:
- url += "?" + urlencode(fields)
-
- return self.urlopen(method, url, **extra_kw)
-
- def request_encode_body(
- self,
- method,
- url,
- fields=None,
- headers=None,
- encode_multipart=True,
- multipart_boundary=None,
- **urlopen_kw
- ):
- """
- Make a request using :meth:`urlopen` with the ``fields`` encoded in
- the body. This is useful for request methods like POST, PUT, PATCH, etc.
-
- When ``encode_multipart=True`` (default), then
- :func:`urllib3.encode_multipart_formdata` is used to encode
- the payload with the appropriate content type. Otherwise
- :func:`urllib.parse.urlencode` is used with the
- 'application/x-www-form-urlencoded' content type.
-
- Multipart encoding must be used when posting files, and it's reasonably
-        safe to use at other times too. However, it may break request
- signing, such as with OAuth.
-
- Supports an optional ``fields`` parameter of key/value strings AND
- key/filetuple. A filetuple is a (filename, data, MIME type) tuple where
- the MIME type is optional. For example::
-
- fields = {
- 'foo': 'bar',
- 'fakefile': ('foofile.txt', 'contents of foofile'),
- 'realfile': ('barfile.txt', open('realfile').read()),
- 'typedfile': ('bazfile.bin', open('bazfile').read(),
- 'image/jpeg'),
- 'nonamefile': 'contents of nonamefile field',
- }
-
- When uploading a file, providing a filename (the first parameter of the
- tuple) is optional but recommended to best mimic behavior of browsers.
-
- Note that if ``headers`` are supplied, the 'Content-Type' header will
- be overwritten because it depends on the dynamic random boundary string
- which is used to compose the body of the request. The random boundary
- string can be explicitly set with the ``multipart_boundary`` parameter.
- """
- if headers is None:
- headers = self.headers
-
- extra_kw = {"headers": {}}
-
- if fields:
- if "body" in urlopen_kw:
- raise TypeError(
- "request got values for both 'fields' and 'body', can only specify one."
- )
-
- if encode_multipart:
- body, content_type = encode_multipart_formdata(
- fields, boundary=multipart_boundary
- )
- else:
- body, content_type = (
- urlencode(fields),
- "application/x-www-form-urlencoded",
- )
-
- extra_kw["body"] = body
- extra_kw["headers"] = {"Content-Type": content_type}
-
- extra_kw["headers"].update(headers)
- extra_kw.update(urlopen_kw)
-
- return self.urlopen(method, url, **extra_kw)
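-
-
-# Usage sketch: PoolManager mixes in RequestMethods, so GET fields are
-# URL-encoded and POST fields are body-encoded automatically, e.g.:
-#   http = PoolManager()
-#   r = http.request('GET', 'http://example.com/', fields={'q': 'demo'})
-#   r = http.request('POST', 'http://example.com/', fields={'q': 'demo'})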
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/py34compat.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/py34compat.py
deleted file mode 100644
index 3ad917222a4e5bb93fe1c9e8fe1713bcab3630b6..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/py34compat.py
+++ /dev/null
@@ -1,13 +0,0 @@
-import importlib
-
-try:
- import importlib.util
-except ImportError:
- pass
-
-
-try:
- module_from_spec = importlib.util.module_from_spec
-except AttributeError:
- def module_from_spec(spec):
- return spec.loader.load_module(spec.name)
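-
-
-# Usage sketch: resolves to importlib.util.module_from_spec on Python >= 3.5
-# and falls back to the loader's load_module() on Python 3.4, e.g.:
-#   spec = importlib.util.spec_from_file_location('mod', '/path/to/mod.py')
-#   module = module_from_spec(spec)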
diff --git a/spaces/BlackCub/ChatGPT4/app.py b/spaces/BlackCub/ChatGPT4/app.py
deleted file mode 100644
index 632f0ee79c2a44a19c299e5965101cad17293e69..0000000000000000000000000000000000000000
--- a/spaces/BlackCub/ChatGPT4/app.py
+++ /dev/null
@@ -1,191 +0,0 @@
-import gradio as gr
-import os
-import json
-import requests
-
-#Streaming endpoint
-API_URL = "https://api.openai.com/v1/chat/completions" #os.getenv("API_URL") + "/generate_stream"
-
-#Inference function
-def predict(openai_gpt4_key, system_msg, inputs, top_p, temperature, chat_counter, chatbot=[], history=[]):
-
- headers = {
- "Content-Type": "application/json",
- "Authorization": f"Bearer {openai_gpt4_key}" #Users will provide their own OPENAI_API_KEY
- }
- print(f"system message is ^^ {system_msg}")
- if system_msg.strip() == '':
- initial_message = [{"role": "user", "content": f"{inputs}"},]
- multi_turn_message = []
- else:
- initial_message= [{"role": "system", "content": system_msg},
- {"role": "user", "content": f"{inputs}"},]
- multi_turn_message = [{"role": "system", "content": system_msg},]
-
- if chat_counter == 0 :
- payload = {
- "model": "gpt-4",
- "messages": initial_message ,
- "temperature" : 1.0,
- "top_p":1.0,
- "n" : 1,
- "stream": True,
- "presence_penalty":0,
- "frequency_penalty":0,
- }
- print(f"chat_counter - {chat_counter}")
-    else:  # chat_counter != 0
-        messages = multi_turn_message  # starts as [{"role": "system", "content": system_msg}] when a system message was given
- for data in chatbot:
- user = {}
- user["role"] = "user"
- user["content"] = data[0]
- assistant = {}
- assistant["role"] = "assistant"
- assistant["content"] = data[1]
- messages.append(user)
- messages.append(assistant)
- temp = {}
- temp["role"] = "user"
- temp["content"] = inputs
- messages.append(temp)
- #messages
- payload = {
- "model": "gpt-4",
- "messages": messages, # Of the type of [{"role": "user", "content": f"{inputs}"}],
- "temperature" : temperature, #1.0,
- "top_p": top_p, #1.0,
- "n" : 1,
- "stream": True,
- "presence_penalty":0,
- "frequency_penalty":0,}
-
- chat_counter+=1
-
- history.append(inputs)
- print(f"Logging : payload is - {payload}")
- # make a POST request to the API endpoint using the requests.post method, passing in stream=True
- response = requests.post(API_URL, headers=headers, json=payload, stream=True)
- print(f"Logging : response code - {response}")
- token_counter = 0
- partial_words = ""
-
- counter=0
- for chunk in response.iter_lines():
- #Skipping first chunk
- if counter == 0:
- counter+=1
- continue
- # check whether each line is non-empty
- if chunk.decode() :
- chunk = chunk.decode()
- # decode each line as response data is in bytes
- if len(chunk) > 12 and "content" in json.loads(chunk[6:])['choices'][0]['delta']:
- partial_words = partial_words + json.loads(chunk[6:])['choices'][0]["delta"]["content"]
- if token_counter == 0:
- history.append(" " + partial_words)
- else:
- history[-1] = partial_words
-                    chat = [(history[i], history[i + 1]) for i in range(0, len(history) - 1, 2)]  # pair up (user, assistant) turns
- token_counter+=1
- yield chat, history, chat_counter, response # resembles {chatbot: chat, state: history}
-
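Each streamed line is a server-sent-events record of the form `data: {json}`; the loop above slices off the 6-character prefix and pulls the token out of `choices[0].delta`. A self-contained sketch of that parsing step, using a hypothetical sample line:

    import json

    raw = b'data: {"choices": [{"delta": {"content": "Hello"}}]}'  # hypothetical chunk
    line = raw.decode()
    if line.startswith("data: ") and line != "data: [DONE]":
        delta = json.loads(line[6:])["choices"][0]["delta"]
        print(delta.get("content", ""))  # -> Hello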
-#Resetting to blank
-def reset_textbox():
- return gr.update(value='')
-
-#to set a component as visible=False
-def set_visible_false():
- return gr.update(visible=False)
-
-#to set a component as visible=True
-def set_visible_true():
- return gr.update(visible=True)
-
-title = """
🔥GPT4 using Chat-Completions API & 🚀Gradio-Streaming
"""
-#display message for themes feature
-theme_addon_msg = """
🌟 This Demo also introduces you to Gradio Themes. Discover more on Gradio website using our Themeing-Guide🎨! You can develop from scratch, modify an existing Gradio theme, and share your themes with community by uploading them to huggingface-hub easily using theme.push_to_hub().
-"""
-
-# Info text describing the System message option in GPT4
-system_msg_info = """A conversation can begin with a system message that gently instructs the assistant.
-A system message helps set the behavior of the AI assistant. For example: 'You are a helpful assistant.'"""
-
-#Modifying existing Gradio Theme
-theme = gr.themes.Soft(primary_hue="zinc", secondary_hue="green", neutral_hue="green",
- text_size=gr.themes.sizes.text_lg)
-
-with gr.Blocks(css = """#col_container { margin-left: auto; margin-right: auto;} #chatbot {height: 520px; overflow: auto;}""",
- theme=theme) as demo:
- gr.HTML(title)
- gr.HTML("""
🔥This Huggingface Gradio Demo provides you access to GPT4 API with System Messages. Please note that you would be needing an OPENAI API key for GPT4 access🙌""")
- gr.HTML(theme_addon_msg)
- gr.HTML('''
Duplicate the Space and run securely with your OpenAI API Key
''')
-
- with gr.Column(elem_id = "col_container"):
-        # Users need to provide their own GPT4 API key; it is no longer provided by Hugging Face
-        with gr.Row():
-            openai_gpt4_key = gr.Textbox(label="OpenAI GPT4 Key", value="", type="password", placeholder="sk..", info="You have to provide your own GPT4 key for this app to function properly")
- with gr.Accordion(label="System message:", open=False):
- system_msg = gr.Textbox(label="Instruct the AI Assistant to set its beaviour", info = system_msg_info, value="",placeholder="Type here..")
- accordion_msg = gr.HTML(value="🚧 To set System message you will have to refresh the app", visible=False)
-
- chatbot = gr.Chatbot(label='GPT4', elem_id="chatbot")
- inputs = gr.Textbox(placeholder= "Hi there!", label= "Type an input and press Enter")
- state = gr.State([])
- with gr.Row():
- with gr.Column(scale=7):
- b1 = gr.Button().style(full_width=True)
- with gr.Column(scale=3):
- server_status_code = gr.Textbox(label="Status code from OpenAI server", )
-
- #top_p, temperature
- with gr.Accordion("Parameters", open=False):
-            top_p = gr.Slider(minimum=0.0, maximum=1.0, value=1.0, step=0.05, interactive=True, label="Top-p (nucleus sampling)")
-            temperature = gr.Slider(minimum=0.0, maximum=5.0, value=1.0, step=0.1, interactive=True, label="Temperature")
- chat_counter = gr.Number(value=0, visible=False, precision=0)
-
- #Event handling
- inputs.submit( predict, [openai_gpt4_key, system_msg, inputs, top_p, temperature, chat_counter, chatbot, state], [chatbot, state, chat_counter, server_status_code],) #openai_api_key
- b1.click( predict, [openai_gpt4_key, system_msg, inputs, top_p, temperature, chat_counter, chatbot, state], [chatbot, state, chat_counter, server_status_code],) #openai_api_key
-
- inputs.submit(set_visible_false, [], [system_msg])
- b1.click(set_visible_false, [], [system_msg])
- inputs.submit(set_visible_true, [], [accordion_msg])
- b1.click(set_visible_true, [], [accordion_msg])
-
- b1.click(reset_textbox, [], [inputs])
- inputs.submit(reset_textbox, [], [inputs])
-
- #Examples
- with gr.Accordion(label="Examples for System message:", open=False):
- gr.Examples(
- examples = [["""You are an AI programming assistant.
-
- - Follow the user's requirements carefully and to the letter.
- - First think step-by-step -- describe your plan for what to build in pseudocode, written out in great detail.
- - Then output the code in a single code block.
- - Minimize any other prose."""], ["""You are ComedianGPT who is a helpful assistant. You answer everything with a joke and witty replies."""],
- ["You are ChefGPT, a helpful assistant who answers questions with culinary expertise and a pinch of humor."],
- ["You are FitnessGuruGPT, a fitness expert who shares workout tips and motivation with a playful twist."],
- ["You are SciFiGPT, an AI assistant who discusses science fiction topics with a blend of knowledge and wit."],
- ["You are PhilosopherGPT, a thoughtful assistant who responds to inquiries with philosophical insights and a touch of humor."],
- ["You are EcoWarriorGPT, a helpful assistant who shares environment-friendly advice with a lighthearted approach."],
- ["You are MusicMaestroGPT, a knowledgeable AI who discusses music and its history with a mix of facts and playful banter."],
- ["You are SportsFanGPT, an enthusiastic assistant who talks about sports and shares amusing anecdotes."],
- ["You are TechWhizGPT, a tech-savvy AI who can help users troubleshoot issues and answer questions with a dash of humor."],
- ["You are FashionistaGPT, an AI fashion expert who shares style advice and trends with a sprinkle of wit."],
- ["You are ArtConnoisseurGPT, an AI assistant who discusses art and its history with a blend of knowledge and playful commentary."],
- ["You are a helpful assistant that provides detailed and accurate information."],
- ["You are an assistant that speaks like Shakespeare."],
- ["You are a friendly assistant who uses casual language and humor."],
- ["You are a financial advisor who gives expert advice on investments and budgeting."],
- ["You are a health and fitness expert who provides advice on nutrition and exercise."],
- ["You are a travel consultant who offers recommendations for destinations, accommodations, and attractions."],
- ["You are a movie critic who shares insightful opinions on films and their themes."],
- ["You are a history enthusiast who loves to discuss historical events and figures."],
- ["You are a tech-savvy assistant who can help users troubleshoot issues and answer questions about gadgets and software."],
- ["You are an AI poet who can compose creative and evocative poems on any given topic."],],
- inputs = system_msg,)
-
-demo.queue(max_size=99, concurrency_count=20).launch(debug=True)
\ No newline at end of file
diff --git a/spaces/CC26011988/Opposition_Analysis/app.py b/spaces/CC26011988/Opposition_Analysis/app.py
deleted file mode 100644
index 5dc1e119861b7aed634c0219e0492856665a2a66..0000000000000000000000000000000000000000
--- a/spaces/CC26011988/Opposition_Analysis/app.py
+++ /dev/null
@@ -1,105 +0,0 @@
-import gradio as gr
-import os
-import sys
-import gensim
-import pandas as pd
-
-from huggingface_hub import hf_hub_download
-file_path = hf_hub_download("CC26011988/lib_repo", "coa_funcs.py", repo_type="model", use_auth_token=os.environ['TOKEN'])
-sys.path.append(os.path.dirname(file_path))
-import coa_funcs as coa
-
-def run_coa(t1, t2, ex1, ex2):
-
-    # Trace problems if needed
-    if True:
-        file_path = os.path.abspath("diagnostics.txt")
-        print(file_path)
-        with open("diagnostics.txt", "a") as f:
-            f.write("New run **********************************************")
-            f.write(t1 + '\n**********' + t2 + '\n**********' + ex1 + '\n**********' + ex2)
-
-    results = coa.do_coa(t1, t2, ex1.split(), ex2.split())
-
-    opps = '<br>'
-
-    for s in results:
-        print(s[0], '<-', s[2], '->', s[1])
-        if s[2] > 0:
-            opps = opps + str(s[0]) + ' <---- ' + str(s[2]) + ' ----> ' + str(s[1]) + '<br>'
-
-    # df, rps_2, rps1_formatted and rps2_formatted come from code elided in this
-    # hunk; only the final formatting loop survives here.
-    for r in rps_2:
-        rps2_formatted = rps2_formatted + r + '<br>'
-
-    return opps, df, rps1_formatted, rps2_formatted
-
-with open("long_eg1.txt") as f:
- long_eg1 = f.read()
-
-with open("long_eg2.txt") as f:
- long_eg2 = f.read()
-
-with gr.Blocks() as demo:
-
- with gr.Row(): # Inputs
-
-        decorative = gr.HTML('Computational Semiotics - Opposition Analysis Demo')
-
- with gr.Row():
-
- text1_proxy = gr.Textbox(lines=5, value="", label="Object 1 Text")
-
- inbtw = gr.Button("Compute Oppositions (Scroll down for results)").style(full_width=False)
-
- text2_proxy = gr.Textbox(lines=5, value="", label="Object 2 Text")
-
- with gr.Row(): # Inputs
-
- text1 = gr.Textbox(value="", label="Filter these words from Text 1:")
-
-        spacer = gr.HTML('''Based on the research paper "Computational opposition analysis using word embeddings" by Cameron Shackell and Laurianne Sitbon, published in
-        Argument & Computation (available here). Please contact Cameron Shackell at the listed email for further information.
-        Opposition analysis is useful in understanding audience thinking around topics and has applications in naming, marketing, and brand positioning.''')
-
- text2 = gr.Textbox(value="", label="Filter these words from Text 2")
-
- #########################################
- examples = gr.Examples(examples=[["Pepsi tastes sweeter.", 'Pepsi',"Coke tastes saltier.", 'Coke'], ["Tesla offer such beautiful engineering and tech. It was expensive but I love my Tesla.", 'Tesla', "Subaru make safe, reliable, affordable vehicles. I made a smart choice to invest in a Subaru!", 'Subaru'],[long_eg1,'treat',long_eg2,'use']],inputs=[ text1_proxy, text1, text2_proxy, text2], label="Click a row below to load example values")
- ###########################################
-
- # drop3 = gr.Dropdown(["a", "b", "c"], label="Similarity model")
-
- with gr.Row(): # Outputs
-
-        results_title = gr.HTML('Object 1 is more about...')
-        results_title = gr.HTML('Oppositions')
-        results_title = gr.HTML('Object 2 is more about...')
-
- with gr.Row():
-
- res_rps_1 = gr.HTML(label="")
-
- opps = gr.HTML(label="") # gr.Textbox(lines=10, placeholder="results", label="RPS 1")
-
- res_rps_2 = gr.HTML(label="")
-
- with gr.Row(): # Outputs
- full_results=gr.Dataframe(row_count = (1, "dynamic"), col_count=(3, "fixed"), label='Full results')
- # gr.Label(num_top_classes=4)
-
- inbtw.click(fn=run_coa, inputs=[text1_proxy, text2_proxy, text1, text2], outputs=[opps, full_results, res_rps_1, res_rps_2])
-
-
-demo.launch()
\ No newline at end of file
diff --git a/spaces/CODEACON/README/README.md b/spaces/CODEACON/README/README.md
deleted file mode 100644
index d407360dcfbebe8ac941b37f006066021fdc946b..0000000000000000000000000000000000000000
--- a/spaces/CODEACON/README/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: README
-emoji: 📚
-colorFrom: yellow
-colorTo: pink
-sdk: static
-pinned: false
----
-
-Edit this `README.md` markdown file to author your organization card 🔥
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/docs/tutorials/extend.md b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/docs/tutorials/extend.md
deleted file mode 100644
index 0145513227e20dcfdb2bf83e14b286b515730c07..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/docs/tutorials/extend.md
+++ /dev/null
@@ -1,45 +0,0 @@
-# Extend Detectron2's Defaults
-
-__Research is about doing things in new ways__.
-This creates a tension in how to design abstractions in code,
-a challenge for any research engineering project of significant size:
-
-1. On one hand, it needs to have very thin abstractions to allow for the possibility of doing
- everything in new ways. It should be reasonably easy to break existing
- abstractions and replace them with new ones.
-
-2. On the other hand, such a project also needs reasonably high-level
- abstractions, so that users can easily do things in standard ways,
- without worrying too much about the details that only certain researchers care about.
-
-In detectron2, there are two types of interfaces that address this tension together:
-
-1. Functions and classes that take only a "config" argument (optionally with a minimal
- set of extra arguments in cases of mature interfaces).
-
- Such functions and classes implement
- the "standard default" behavior: it will read what it needs from the
- config and do the "standard" thing.
- Users only need to load a standard config and pass it around, without having to worry about
- which arguments are used and what they all mean.
-
-2. Functions and classes that have well-defined explicit arguments.
-
- Each of these is a small building block of the entire system.
- They require users' effort to stitch together, but can be stitched together in more flexible ways.
- When you need to implement something different from the "standard defaults"
- included in detectron2, these well-defined components can be reused.
-
-
-If you only need the standard behavior, the [Beginner's Tutorial](getting_started.html)
-should suffice. If you need to extend detectron2 to your own needs,
-see the following tutorials for more details:
-
-* Detectron2 includes a few standard datasets, but you can use custom ones. See
- [Use Custom Datasets](datasets.html).
-* Detectron2 contains the standard logic that creates a data loader from a
- dataset, but you can write your own as well. See [Use Custom Data Loaders](data_loading.html).
-* Detectron2 implements many standard detection models, and provides ways for you
-  to override their behaviors. See [Use Models](models.html) and [Write Models](write-models.html).
-* Detectron2 provides a default training loop that is good for common training tasks.
- You can customize it with hooks, or write your own loop instead. See [training](training.html).
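To make the contrast concrete, here is a minimal sketch of the first, config-driven style (assuming the model zoo config name shown is available); the second style would instead instantiate individual components, such as backbones or heads, directly with explicit arguments:

    from detectron2 import model_zoo
    from detectron2.config import get_cfg
    from detectron2.modeling import build_model

    # Style 1: load a standard config and let build_model read everything from it.
    cfg = get_cfg()
    cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
    model = build_model(cfg)  # the "standard default" model for this config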
diff --git a/spaces/CVPR/WALT/mmdet/core/bbox/samplers/pseudo_sampler.py b/spaces/CVPR/WALT/mmdet/core/bbox/samplers/pseudo_sampler.py
deleted file mode 100644
index 2bd81abcdc62debc14772659d7a171f20bf33364..0000000000000000000000000000000000000000
--- a/spaces/CVPR/WALT/mmdet/core/bbox/samplers/pseudo_sampler.py
+++ /dev/null
@@ -1,41 +0,0 @@
-import torch
-
-from ..builder import BBOX_SAMPLERS
-from .base_sampler import BaseSampler
-from .sampling_result import SamplingResult
-
-
-@BBOX_SAMPLERS.register_module()
-class PseudoSampler(BaseSampler):
- """A pseudo sampler that does not do sampling actually."""
-
- def __init__(self, **kwargs):
- pass
-
- def _sample_pos(self, **kwargs):
- """Sample positive samples."""
- raise NotImplementedError
-
- def _sample_neg(self, **kwargs):
- """Sample negative samples."""
- raise NotImplementedError
-
- def sample(self, assign_result, bboxes, gt_bboxes, **kwargs):
- """Directly returns the positive and negative indices of samples.
-
- Args:
- assign_result (:obj:`AssignResult`): Assigned results
- bboxes (torch.Tensor): Bounding boxes
- gt_bboxes (torch.Tensor): Ground truth boxes
-
- Returns:
- :obj:`SamplingResult`: sampler results
- """
- pos_inds = torch.nonzero(
- assign_result.gt_inds > 0, as_tuple=False).squeeze(-1).unique()
- neg_inds = torch.nonzero(
- assign_result.gt_inds == 0, as_tuple=False).squeeze(-1).unique()
- gt_flags = bboxes.new_zeros(bboxes.shape[0], dtype=torch.uint8)
- sampling_result = SamplingResult(pos_inds, neg_inds, bboxes, gt_bboxes,
- assign_result, gt_flags)
- return sampling_result
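The heart of ``sample`` is the ``torch.nonzero(...).squeeze(-1).unique()`` idiom that splits proposals into positives and negatives; a toy illustration independent of mmdet (the ``gt_inds`` values are made up):

    import torch

    # 1-based GT index per proposal; 0 = background, -1 = ignored.
    gt_inds = torch.tensor([2, 0, 1, 0, -1, 2])
    pos_inds = torch.nonzero(gt_inds > 0, as_tuple=False).squeeze(-1).unique()
    neg_inds = torch.nonzero(gt_inds == 0, as_tuple=False).squeeze(-1).unique()
    print(pos_inds.tolist())  # [0, 2, 5]
    print(neg_inds.tolist())  # [1, 3]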
diff --git a/spaces/CVPR/WALT/mmdet/models/roi_heads/roi_extractors/__init__.py b/spaces/CVPR/WALT/mmdet/models/roi_heads/roi_extractors/__init__.py
deleted file mode 100644
index a6ec0ecc3063cd23c2463f2f53f1c2a83b04d43b..0000000000000000000000000000000000000000
--- a/spaces/CVPR/WALT/mmdet/models/roi_heads/roi_extractors/__init__.py
+++ /dev/null
@@ -1,7 +0,0 @@
-from .generic_roi_extractor import GenericRoIExtractor
-from .single_level_roi_extractor import SingleRoIExtractor
-
-__all__ = [
- 'SingleRoIExtractor',
- 'GenericRoIExtractor',
-]
diff --git a/spaces/CVPR/regionclip-demo/detectron2/layers/deform_conv.py b/spaces/CVPR/regionclip-demo/detectron2/layers/deform_conv.py
deleted file mode 100644
index eca070f59645af4c9ccd003d99678f19538f355d..0000000000000000000000000000000000000000
--- a/spaces/CVPR/regionclip-demo/detectron2/layers/deform_conv.py
+++ /dev/null
@@ -1,501 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import math
-from functools import lru_cache
-import torch
-from torch import nn
-from torch.autograd import Function
-from torch.autograd.function import once_differentiable
-from torch.nn.modules.utils import _pair
-from torchvision.ops import deform_conv2d
-
-from detectron2 import _C
-
-from .wrappers import _NewEmptyTensorOp
-
-
-class _DeformConv(Function):
- @staticmethod
- def forward(
- ctx,
- input,
- offset,
- weight,
- stride=1,
- padding=0,
- dilation=1,
- groups=1,
- deformable_groups=1,
- im2col_step=64,
- ):
- if input is not None and input.dim() != 4:
- raise ValueError(
- "Expected 4D tensor as input, got {}D tensor instead.".format(input.dim())
- )
- ctx.stride = _pair(stride)
- ctx.padding = _pair(padding)
- ctx.dilation = _pair(dilation)
- ctx.groups = groups
- ctx.deformable_groups = deformable_groups
- ctx.im2col_step = im2col_step
-
- ctx.save_for_backward(input, offset, weight)
-
- output = input.new_empty(
- _DeformConv._output_size(input, weight, ctx.padding, ctx.dilation, ctx.stride)
- )
-
- ctx.bufs_ = [input.new_empty(0), input.new_empty(0)] # columns, ones
-
- if not input.is_cuda:
- if deformable_groups != 1:
- raise NotImplementedError(
- "Deformable Conv with deformable_groups != 1 is not supported on CPUs!"
- )
- return deform_conv2d(
- input, offset, weight, stride=stride, padding=padding, dilation=dilation
- )
- else:
- cur_im2col_step = _DeformConv._cal_im2col_step(input.shape[0], ctx.im2col_step)
- assert (input.shape[0] % cur_im2col_step) == 0, "im2col step must divide batchsize"
-
- _C.deform_conv_forward(
- input,
- weight,
- offset,
- output,
- ctx.bufs_[0],
- ctx.bufs_[1],
- weight.size(3),
- weight.size(2),
- ctx.stride[1],
- ctx.stride[0],
- ctx.padding[1],
- ctx.padding[0],
- ctx.dilation[1],
- ctx.dilation[0],
- ctx.groups,
- ctx.deformable_groups,
- cur_im2col_step,
- )
- return output
-
- @staticmethod
- @once_differentiable
- def backward(ctx, grad_output):
- input, offset, weight = ctx.saved_tensors
-
- grad_input = grad_offset = grad_weight = None
-
- if not grad_output.is_cuda:
- raise NotImplementedError("Deformable Conv is not supported on CPUs!")
- else:
- cur_im2col_step = _DeformConv._cal_im2col_step(input.shape[0], ctx.im2col_step)
- assert (input.shape[0] % cur_im2col_step) == 0, "im2col step must divide batchsize"
-
- if ctx.needs_input_grad[0] or ctx.needs_input_grad[1]:
- grad_input = torch.zeros_like(input)
- grad_offset = torch.zeros_like(offset)
- _C.deform_conv_backward_input(
- input,
- offset,
- grad_output,
- grad_input,
- grad_offset,
- weight,
- ctx.bufs_[0],
- weight.size(3),
- weight.size(2),
- ctx.stride[1],
- ctx.stride[0],
- ctx.padding[1],
- ctx.padding[0],
- ctx.dilation[1],
- ctx.dilation[0],
- ctx.groups,
- ctx.deformable_groups,
- cur_im2col_step,
- )
-
- if ctx.needs_input_grad[2]:
- grad_weight = torch.zeros_like(weight)
- _C.deform_conv_backward_filter(
- input,
- offset,
- grad_output,
- grad_weight,
- ctx.bufs_[0],
- ctx.bufs_[1],
- weight.size(3),
- weight.size(2),
- ctx.stride[1],
- ctx.stride[0],
- ctx.padding[1],
- ctx.padding[0],
- ctx.dilation[1],
- ctx.dilation[0],
- ctx.groups,
- ctx.deformable_groups,
- 1,
- cur_im2col_step,
- )
-
- return grad_input, grad_offset, grad_weight, None, None, None, None, None, None
-
- @staticmethod
- def _output_size(input, weight, padding, dilation, stride):
- channels = weight.size(0)
- output_size = (input.size(0), channels)
- for d in range(input.dim() - 2):
- in_size = input.size(d + 2)
- pad = padding[d]
- kernel = dilation[d] * (weight.size(d + 2) - 1) + 1
- stride_ = stride[d]
- output_size += ((in_size + (2 * pad) - kernel) // stride_ + 1,)
- if not all(map(lambda s: s > 0, output_size)):
- raise ValueError(
- "convolution input is too small (output would be {})".format(
- "x".join(map(str, output_size))
- )
- )
- return output_size
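The per-dimension expression above is standard convolution arithmetic; a quick plain-Python sanity check with arbitrary values:

    # out = (in + 2*pad - (dilation*(kernel-1) + 1)) // stride + 1
    in_size, pad, dilation, kernel, stride = 64, 1, 1, 3, 2
    print((in_size + 2 * pad - (dilation * (kernel - 1) + 1)) // stride + 1)  # 32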
-
- @staticmethod
- @lru_cache(maxsize=128)
- def _cal_im2col_step(input_size, default_size):
- """
- Calculate proper im2col step size, which should be divisible by input_size and not larger
- than prefer_size. Meanwhile the step size should be as large as possible to be more
- efficient. So we choose the largest one among all divisors of input_size which are smaller
- than prefer_size.
- :param input_size: input batch size .
- :param default_size: default preferred im2col step size.
- :return: the largest proper step size.
- """
- if input_size <= default_size:
- return input_size
- best_step = 1
- for step in range(2, min(int(math.sqrt(input_size)) + 1, default_size)):
- if input_size % step == 0:
- if input_size // step <= default_size:
- return input_size // step
- best_step = step
-
- return best_step
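A few sample calls make the divisor search concrete (these values follow directly from the method as written):

    print(_DeformConv._cal_im2col_step(100, 64))  # 50: largest divisor of 100 that is <= 64
    print(_DeformConv._cal_im2col_step(64, 64))   # 64: the whole batch fits in one step
    print(_DeformConv._cal_im2col_step(97, 64))   # 1: a prime batch size falls back to 1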
-
-
-class _ModulatedDeformConv(Function):
- @staticmethod
- def forward(
- ctx,
- input,
- offset,
- mask,
- weight,
- bias=None,
- stride=1,
- padding=0,
- dilation=1,
- groups=1,
- deformable_groups=1,
- ):
- ctx.stride = stride
- ctx.padding = padding
- ctx.dilation = dilation
- ctx.groups = groups
- ctx.deformable_groups = deformable_groups
- ctx.with_bias = bias is not None
- if not ctx.with_bias:
- bias = input.new_empty(1) # fake tensor
- if not input.is_cuda:
- raise NotImplementedError("Deformable Conv is not supported on CPUs!")
- if (
- weight.requires_grad
- or mask.requires_grad
- or offset.requires_grad
- or input.requires_grad
- ):
- ctx.save_for_backward(input, offset, mask, weight, bias)
- output = input.new_empty(_ModulatedDeformConv._infer_shape(ctx, input, weight))
- ctx._bufs = [input.new_empty(0), input.new_empty(0)]
- _C.modulated_deform_conv_forward(
- input,
- weight,
- bias,
- ctx._bufs[0],
- offset,
- mask,
- output,
- ctx._bufs[1],
- weight.shape[2],
- weight.shape[3],
- ctx.stride,
- ctx.stride,
- ctx.padding,
- ctx.padding,
- ctx.dilation,
- ctx.dilation,
- ctx.groups,
- ctx.deformable_groups,
- ctx.with_bias,
- )
- return output
-
- @staticmethod
- @once_differentiable
- def backward(ctx, grad_output):
- if not grad_output.is_cuda:
- raise NotImplementedError("Deformable Conv is not supported on CPUs!")
- input, offset, mask, weight, bias = ctx.saved_tensors
- grad_input = torch.zeros_like(input)
- grad_offset = torch.zeros_like(offset)
- grad_mask = torch.zeros_like(mask)
- grad_weight = torch.zeros_like(weight)
- grad_bias = torch.zeros_like(bias)
- _C.modulated_deform_conv_backward(
- input,
- weight,
- bias,
- ctx._bufs[0],
- offset,
- mask,
- ctx._bufs[1],
- grad_input,
- grad_weight,
- grad_bias,
- grad_offset,
- grad_mask,
- grad_output,
- weight.shape[2],
- weight.shape[3],
- ctx.stride,
- ctx.stride,
- ctx.padding,
- ctx.padding,
- ctx.dilation,
- ctx.dilation,
- ctx.groups,
- ctx.deformable_groups,
- ctx.with_bias,
- )
- if not ctx.with_bias:
- grad_bias = None
-
- return (
- grad_input,
- grad_offset,
- grad_mask,
- grad_weight,
- grad_bias,
- None,
- None,
- None,
- None,
- None,
- )
-
- @staticmethod
- def _infer_shape(ctx, input, weight):
- n = input.size(0)
- channels_out = weight.size(0)
- height, width = input.shape[2:4]
- kernel_h, kernel_w = weight.shape[2:4]
- height_out = (
- height + 2 * ctx.padding - (ctx.dilation * (kernel_h - 1) + 1)
- ) // ctx.stride + 1
- width_out = (
- width + 2 * ctx.padding - (ctx.dilation * (kernel_w - 1) + 1)
- ) // ctx.stride + 1
- return n, channels_out, height_out, width_out
-
-
-deform_conv = _DeformConv.apply
-modulated_deform_conv = _ModulatedDeformConv.apply
-
-
-class DeformConv(nn.Module):
- def __init__(
- self,
- in_channels,
- out_channels,
- kernel_size,
- stride=1,
- padding=0,
- dilation=1,
- groups=1,
- deformable_groups=1,
- bias=False,
- norm=None,
- activation=None,
- ):
- """
- Deformable convolution from :paper:`deformconv`.
-
- Arguments are similar to :class:`Conv2D`. Extra arguments:
-
- Args:
- deformable_groups (int): number of groups used in deformable convolution.
- norm (nn.Module, optional): a normalization layer
- activation (callable(Tensor) -> Tensor): a callable activation function
- """
- super(DeformConv, self).__init__()
-
- assert not bias
-        assert in_channels % groups == 0, "in_channels {} is not divisible by groups {}".format(
-            in_channels, groups
-        )
-        assert (
-            out_channels % groups == 0
-        ), "out_channels {} is not divisible by groups {}".format(out_channels, groups)
-
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.kernel_size = _pair(kernel_size)
- self.stride = _pair(stride)
- self.padding = _pair(padding)
- self.dilation = _pair(dilation)
- self.groups = groups
- self.deformable_groups = deformable_groups
- self.norm = norm
- self.activation = activation
-
- self.weight = nn.Parameter(
- torch.Tensor(out_channels, in_channels // self.groups, *self.kernel_size)
- )
- self.bias = None
-
- nn.init.kaiming_uniform_(self.weight, nonlinearity="relu")
-
- def forward(self, x, offset):
- if x.numel() == 0:
-            # When the input is empty, return an empty tensor with the "correct"
-            # shape, so that the following operations will not panic
-            # if they check the shape of the tensor.
-            # The list below computes the height and width of the output tensor.
- output_shape = [
- (i + 2 * p - (di * (k - 1) + 1)) // s + 1
- for i, p, di, k, s in zip(
- x.shape[-2:], self.padding, self.dilation, self.kernel_size, self.stride
- )
- ]
- output_shape = [x.shape[0], self.weight.shape[0]] + output_shape
- return _NewEmptyTensorOp.apply(x, output_shape)
-
- x = deform_conv(
- x,
- offset,
- self.weight,
- self.stride,
- self.padding,
- self.dilation,
- self.groups,
- self.deformable_groups,
- )
- if self.norm is not None:
- x = self.norm(x)
- if self.activation is not None:
- x = self.activation(x)
- return x
-
- def extra_repr(self):
- tmpstr = "in_channels=" + str(self.in_channels)
- tmpstr += ", out_channels=" + str(self.out_channels)
- tmpstr += ", kernel_size=" + str(self.kernel_size)
- tmpstr += ", stride=" + str(self.stride)
- tmpstr += ", padding=" + str(self.padding)
- tmpstr += ", dilation=" + str(self.dilation)
- tmpstr += ", groups=" + str(self.groups)
- tmpstr += ", deformable_groups=" + str(self.deformable_groups)
- tmpstr += ", bias=False"
- return tmpstr
-
-
-class ModulatedDeformConv(nn.Module):
- def __init__(
- self,
- in_channels,
- out_channels,
- kernel_size,
- stride=1,
- padding=0,
- dilation=1,
- groups=1,
- deformable_groups=1,
- bias=True,
- norm=None,
- activation=None,
- ):
- """
- Modulated deformable convolution from :paper:`deformconv2`.
-
- Arguments are similar to :class:`Conv2D`. Extra arguments:
-
- Args:
- deformable_groups (int): number of groups used in deformable convolution.
- norm (nn.Module, optional): a normalization layer
- activation (callable(Tensor) -> Tensor): a callable activation function
- """
- super(ModulatedDeformConv, self).__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.kernel_size = _pair(kernel_size)
- self.stride = stride
- self.padding = padding
- self.dilation = dilation
- self.groups = groups
- self.deformable_groups = deformable_groups
- self.with_bias = bias
- self.norm = norm
- self.activation = activation
-
- self.weight = nn.Parameter(
- torch.Tensor(out_channels, in_channels // groups, *self.kernel_size)
- )
- if bias:
- self.bias = nn.Parameter(torch.Tensor(out_channels))
- else:
- self.bias = None
-
- nn.init.kaiming_uniform_(self.weight, nonlinearity="relu")
- if self.bias is not None:
- nn.init.constant_(self.bias, 0)
-
- def forward(self, x, offset, mask):
- if x.numel() == 0:
- output_shape = [
- (i + 2 * p - (di * (k - 1) + 1)) // s + 1
- for i, p, di, k, s in zip(
- x.shape[-2:], self.padding, self.dilation, self.kernel_size, self.stride
- )
- ]
- output_shape = [x.shape[0], self.weight.shape[0]] + output_shape
- return _NewEmptyTensorOp.apply(x, output_shape)
-
- x = modulated_deform_conv(
- x,
- offset,
- mask,
- self.weight,
- self.bias,
- self.stride,
- self.padding,
- self.dilation,
- self.groups,
- self.deformable_groups,
- )
- if self.norm is not None:
- x = self.norm(x)
- if self.activation is not None:
- x = self.activation(x)
- return x
-
- def extra_repr(self):
- tmpstr = "in_channels=" + str(self.in_channels)
- tmpstr += ", out_channels=" + str(self.out_channels)
- tmpstr += ", kernel_size=" + str(self.kernel_size)
- tmpstr += ", stride=" + str(self.stride)
- tmpstr += ", padding=" + str(self.padding)
- tmpstr += ", dilation=" + str(self.dilation)
- tmpstr += ", groups=" + str(self.groups)
- tmpstr += ", deformable_groups=" + str(self.deformable_groups)
- tmpstr += ", bias=" + str(self.with_bias)
- return tmpstr
diff --git a/spaces/CVPR/regionclip-demo/detectron2/modeling/proposal_generator/build.py b/spaces/CVPR/regionclip-demo/detectron2/modeling/proposal_generator/build.py
deleted file mode 100644
index 34eb12d00d94ff905b796e75e2c4c5845257c8e9..0000000000000000000000000000000000000000
--- a/spaces/CVPR/regionclip-demo/detectron2/modeling/proposal_generator/build.py
+++ /dev/null
@@ -1,24 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-from detectron2.utils.registry import Registry
-
-PROPOSAL_GENERATOR_REGISTRY = Registry("PROPOSAL_GENERATOR")
-PROPOSAL_GENERATOR_REGISTRY.__doc__ = """
-Registry for proposal generator, which produces object proposals from feature maps.
-
-The registered object will be called with `obj(cfg, input_shape)`.
-The call should return a `nn.Module` object.
-"""
-
-from . import rpn, rrpn # noqa F401 isort:skip
-
-
-def build_proposal_generator(cfg, input_shape):
- """
- Build a proposal generator from `cfg.MODEL.PROPOSAL_GENERATOR.NAME`.
- The name can be "PrecomputedProposals" to use no proposal generator.
- """
- name = cfg.MODEL.PROPOSAL_GENERATOR.NAME
- if name == "PrecomputedProposals":
- return None
-
- return PROPOSAL_GENERATOR_REGISTRY.get(name)(cfg, input_shape)
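Registering a custom generator then takes a single decorator; a sketch with a hypothetical class (the registry calls it with ``(cfg, input_shape)``, as documented above):

    import torch.nn as nn

    @PROPOSAL_GENERATOR_REGISTRY.register()
    class MyProposalGenerator(nn.Module):  # hypothetical example
        def __init__(self, cfg, input_shape):
            super().__init__()
            # ... build layers from cfg ...

        def forward(self, images, features, gt_instances=None):
            raise NotImplementedError

    # Select it via: cfg.MODEL.PROPOSAL_GENERATOR.NAME = "MyProposalGenerator"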
diff --git a/spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/resources/common/layout/elem.html b/spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/resources/common/layout/elem.html
deleted file mode 100644
index 16ccaede617f22dccc1c487db65708ca4c921931..0000000000000000000000000000000000000000
--- a/spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/resources/common/layout/elem.html
+++ /dev/null
@@ -1,21 +0,0 @@
-<!DOCTYPE html>
-<html>
-<head>
-  <title>ws-plugin</title>
-  {{block 'css'}}
-  {{/block}}
-</head>
-<body>
-  {{block 'main'}}{{/block}}
-  {{@copyright || sys?.copyright}}
-</body>
-</html>
\ No newline at end of file
diff --git a/spaces/CloseEric/CloseEric/README.md b/spaces/CloseEric/CloseEric/README.md
deleted file mode 100644
index 2c7b182121bfe1b99b46b1a8bfce6969088cf58a..0000000000000000000000000000000000000000
--- a/spaces/CloseEric/CloseEric/README.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-title: CloseEric
-emoji: 🌖
-colorFrom: gray
-colorTo: indigo
-sdk: docker
-pinned: false
----
-
I don't need an idiot in my arms.
\ No newline at end of file
diff --git a/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/app.py b/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/app.py
deleted file mode 100644
index 90c6871afc644b4c072f17f93a29b1a4f58028b1..0000000000000000000000000000000000000000
--- a/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/app.py
+++ /dev/null
@@ -1,198 +0,0 @@
-import random
-
-import gradio as gr
-import torch
-from diffusers.utils import load_image
-from PIL import Image
-import numpy as np
-import base64
-from io import BytesIO
-
-from mediapipe_face_common import generate_annotation
-
-from diffusers import (
- ControlNetModel,
- StableDiffusionControlNetPipeline,
-)
-
-
-# Download the ControlNet model and the SD 2.1 base model from HF
-device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
-controlnet = ControlNetModel.from_pretrained(
- "CrucibleAI/ControlNetMediaPipeFace", torch_dtype=torch.float16, variant="fp16")
-model = StableDiffusionControlNetPipeline.from_pretrained(
- "stabilityai/stable-diffusion-2-1-base", controlnet=controlnet, torch_dtype=torch.float16
-)
-model = model.to(device)
-model.enable_model_cpu_offload()
-
-
-canvas_html = ""
-load_js = """
-async () => {
-const url = "https://huggingface.co/datasets/radames/gradio-components/raw/main/face-canvas.js"
-fetch(url)
- .then(res => res.text())
- .then(text => {
- const script = document.createElement('script');
- script.type = "module"
- script.src = URL.createObjectURL(new Blob([text], { type: 'application/javascript' }));
- document.head.appendChild(script);
- });
-}
-"""
-get_js_image = """
-async (input_image, prompt, a_prompt, n_prompt, max_faces, min_confidence, num_samples, ddim_steps, guess_mode, strength, scale, seed, eta, image_file_live_opt, live_conditioning) => {
- const canvasEl = document.getElementById("canvas-root");
- const imageData = canvasEl? canvasEl._data : null;
- return [input_image, prompt, a_prompt, n_prompt, max_faces, min_confidence, num_samples, ddim_steps, guess_mode, strength, scale, seed, eta, image_file_live_opt, imageData];
-}
-"""
-
-
-def pad_image(input_image):
- pad_w, pad_h = np.max(((2, 2), np.ceil(
- np.array(input_image.size) / 64).astype(int)), axis=0) * 64 - input_image.size
- im_padded = Image.fromarray(
- np.pad(np.array(input_image), ((0, pad_h), (0, pad_w), (0, 0)), mode='edge'))
- w, h = im_padded.size
- if w == h:
- return im_padded
- elif w > h:
- new_image = Image.new(im_padded.mode, (w, w), (0, 0, 0))
- new_image.paste(im_padded, (0, (w - h) // 2))
- return new_image
- else:
- new_image = Image.new(im_padded.mode, (h, h), (0, 0, 0))
- new_image.paste(im_padded, ((h - w) // 2, 0))
- return new_image
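For example, under this logic a 300x130 image is first edge-padded up to the next multiples of 64 (320x192) and then letterboxed into a square:

    from PIL import Image

    img = Image.new("RGB", (300, 130))  # arbitrary test size
    print(pad_image(img).size)  # (320, 320)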
-
-
-def process(input_image: Image.Image, prompt, a_prompt, n_prompt, max_faces: int, min_confidence: float, num_samples, ddim_steps, guess_mode, strength: float, scale, seed: int, eta, image_file_live_opt="file", live_conditioning=None):
- if input_image is None and 'image' not in live_conditioning:
- raise gr.Error("Please provide an image")
- try:
- if image_file_live_opt == 'file':
- # Resize before annotation so that we can keep our line-widths consistent with the training data.
- input_image = pad_image(input_image.convert('RGB')).resize((512, 512))
- empty = generate_annotation(np.array(input_image), max_faces, min_confidence)
- visualization = Image.fromarray(empty)
- elif image_file_live_opt == 'webcam':
- base64_img = live_conditioning['image']
- image_data = base64.b64decode(base64_img.split(',')[1])
- visualization = Image.open(BytesIO(image_data)).convert('RGB').resize((512, 512))
- if seed == -1:
- seed = random.randint(0, 2147483647)
- generator = torch.Generator(device).manual_seed(seed)
-
- output = model(prompt=prompt + ' ' + a_prompt,
- negative_prompt=n_prompt,
- image=visualization,
- generator=generator,
- num_images_per_prompt=num_samples,
- num_inference_steps=ddim_steps,
- controlnet_conditioning_scale=float(strength),
- guidance_scale=scale,
- eta=eta,
- )
- results = [visualization] + output.images
-
- return results
- except Exception as e:
- raise gr.Error(str(e))
-
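Stripped of the Gradio plumbing, the file-upload path through ``process`` boils down to: pad and resize, annotate with MediaPipe, then run the ControlNet pipeline. A minimal sketch reusing the helpers defined in this file (the prompt and settings are illustrative):

    raw = pad_image(load_image("./examples/pedro-512.jpg").convert("RGB")).resize((512, 512))
    control = Image.fromarray(generate_annotation(np.array(raw), 5, 0.5))
    out = model(prompt="Highly detailed photograph, best quality",
                image=control, num_inference_steps=20,
                guidance_scale=9.0).images[0]
    out.save("out.png")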
-# switch between file upload and webcam
-
-
-def toggle(choice):
- if choice == "file":
- return gr.update(visible=True, value=None), gr.update(visible=False, value=None)
- elif choice == "webcam":
- return gr.update(visible=False, value=None), gr.update(visible=True, value=canvas_html)
-
-
-block = gr.Blocks().queue()
-with block:
- # hidden JSON component to store live conditioning
- live_conditioning = gr.JSON(value={}, visible=False)
- with gr.Row():
- gr.Markdown("## Control Stable Diffusion with a Facial Pose")
- with gr.Row():
- with gr.Column():
- image_file_live_opt = gr.Radio(["file", "webcam"], value="file",
- label="How would you like to upload your image?")
- input_image = gr.Image(source="upload", visible=True, type="pil")
- canvas = gr.HTML(None, elem_id="canvas_html", visible=False)
-
- image_file_live_opt.change(fn=toggle,
- inputs=[image_file_live_opt],
- outputs=[input_image, canvas],
- queue=False)
-
- prompt = gr.Textbox(label="Prompt")
- run_button = gr.Button(label="Run")
- with gr.Accordion("Advanced options", open=False):
- num_samples = gr.Slider(
- label="Images", minimum=1, maximum=4, value=1, step=1)
- max_faces = gr.Slider(
- label="Max Faces", minimum=1, maximum=10, value=5, step=1)
- min_confidence = gr.Slider(
- label="Min Confidence", minimum=0.01, maximum=1.0, value=0.5, step=0.01)
- strength = gr.Slider(
- label="Control Strength", minimum=0.0, maximum=2.0, value=1.0, step=0.01)
- guess_mode = gr.Checkbox(label='Guess Mode', value=False)
- ddim_steps = gr.Slider(
- label="Steps", minimum=1, maximum=100, value=20, step=1)
- scale = gr.Slider(label="Guidance Scale",
- minimum=0.1, maximum=30.0, value=9.0, step=0.1)
- seed = gr.Slider(label="Seed", minimum=-1,
- maximum=2147483647, step=1, randomize=True)
- eta = gr.Number(label="eta (DDIM)", value=0.0)
- a_prompt = gr.Textbox(
- label="Added Prompt", value='best quality, extremely detailed')
- n_prompt = gr.Textbox(label="Negative Prompt",
- value='longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality')
- with gr.Column():
- result_gallery = gr.Gallery(
- label='Output', show_label=False, elem_id="gallery").style(grid=2, height='auto')
- ips = [input_image, prompt, a_prompt, n_prompt, max_faces, min_confidence,
- num_samples, ddim_steps, guess_mode, strength, scale, seed, eta]
- run_button.click(fn=process, inputs=ips + [image_file_live_opt, live_conditioning],
- outputs=[result_gallery],
- _js=get_js_image)
-
- # load js for live conditioning
- block.load(None, None, None, _js=load_js)
- gr.Examples(fn=process,
- examples=[
- ["./examples/two2.jpeg",
- "Highly detailed photograph of two clowns",
- "best quality, extremely detailed",
- "cartoon, disfigured, bad art, deformed, poorly drawn, extra limbs, weird colors, blurry, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality",
- 10, 0.4, 3, 20, False, 1.0, 9.0, -1, 0.0],
- ["./examples/two.jpeg",
- "a photo of two silly men",
- "best quality, extremely detailed",
- "cartoon, disfigured, bad art, deformed, poorly drawn, extra limbs, weird colors, blurry, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality",
- 10, 0.4, 3, 20, False, 1.0, 9.0, -1, 0.0],
- ["./examples/pedro-512.jpg",
- "Highly detailed photograph of young woman smiling, with palm trees in the background",
- "best quality, extremely detailed",
- "cartoon, disfigured, bad art, deformed, poorly drawn, extra limbs, weird colors, blurry, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality",
- 10, 0.4, 3, 20, False, 1.0, 9.0, -1, 0.0],
- ["./examples/image1.jpg",
- "Highly detailed photograph of a scary clown",
- "best quality, extremely detailed",
- "cartoon, disfigured, bad art, deformed, poorly drawn, extra limbs, weird colors, blurry, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality",
- 10, 0.4, 3, 20, False, 1.0, 9.0, -1, 0.0],
- ["./examples/image0.jpg",
- "Highly detailed photograph of Madonna",
- "best quality, extremely detailed",
- "cartoon, disfigured, bad art, deformed, poorly drawn, extra limbs, weird colors, blurry, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality",
- 10, 0.4, 3, 20, False, 1.0, 9.0, -1, 0.0],
- ],
- inputs=ips,
- outputs=[result_gallery],
- cache_examples=True)
-
-block.launch(server_name='0.0.0.0')
diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/rpn/fcos/__init__.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/rpn/fcos/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/DEEMOSTECH/ChatAvatar/static/css/main.c638cbcc.css b/spaces/DEEMOSTECH/ChatAvatar/static/css/main.c638cbcc.css
deleted file mode 100644
index 1dad1b94155595178bf6b8aef40fae2a0039877c..0000000000000000000000000000000000000000
--- a/spaces/DEEMOSTECH/ChatAvatar/static/css/main.c638cbcc.css
+++ /dev/null
@@ -1,2 +0,0 @@
-html{overflow-x:hidden;overflow-y:overlay}body{-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale;box-sizing:border-box;color:#cfcfcf;font-family:-apple-system,BlinkMacSystemFont,Segoe UI,Roboto,Oxygen,Ubuntu,Cantarell,Fira Sans,Droid Sans,Helvetica Neue,sans-serif;margin:0}code{font-family:source-code-pro,Menlo,Monaco,Consolas,Courier New,monospace}.root{display:flex;justify-content:center;width:100%}.container{height:100vh;width:100%}.\!container{width:100%!important}@media (min-width:640px){.container{max-width:640px}.\!container{max-width:640px!important}}@media (min-width:768px){.container{max-width:768px}.\!container{max-width:768px!important}}@media (min-width:1024px){.container{max-width:1024px}.\!container{max-width:1024px!important}}@media (min-width:1280px){.container{max-width:1280px}.\!container{max-width:1280px!important}}@media (min-width:1536px){.container{max-width:1536px}.\!container{max-width:1536px!important}}.App{--theme-color:#4a00e0;--font-dark-color:#434343;--font-gray-color:#aaa;--font-light-color:#cfcfcf;--bg-light-color:#fff;--bg-gray0-color:#f8f8f8;--bg-gray1-color:#ececec;--bg-gray2-color:#7c7c7c;--bg-gray3-color:#373737;--bg-theme-color:#e7e3f1;--bg-dark-color:#121317;--side-gap:5rem;--radius:0.5rem;--shadow:-10px 0px 12px 1px hsla(0,0%,53%,.16);display:flex;justify-content:space-between;padding:16px;text-align:center}.App *{box-sizing:border-box;transition:all .3s}.App ::-webkit-scrollbar-thumb{background-color:rgba(0,0,0,.2)}textarea{-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale;border:1px solid transparent;color:var(--font-dark-color);font-family:-apple-system,BlinkMacSystemFont,Segoe UI,Roboto,Oxygen,Ubuntu,Cantarell,Fira Sans,Droid Sans,Helvetica Neue,sans-serif;font-size:1rem;line-height:1.5rem;outline:none;padding:0;resize:none}textarea:focus{border-color:var(--theme-color)}img{-webkit-user-drag:none;-webkit-user-select:none;user-select:none}.gallery_con__Y2mej{align-items:flex-start;display:flex;justify-content:center;margin-top:3rem;padding:0 1.25rem;width:100%}.gallery_menuCon__fVdFJ{margin-right:2rem;width:-webkit-max-content;width:max-content}.gallery_menu__U2btD{align-items:center;background-color:initial;border:2px solid transparent;border-radius:1.5rem;cursor:pointer;display:flex;height:3rem;justify-content:center;line-height:1rem;margin-bottom:1rem;text-align:center;width:6rem}.gallery_menu__U2btD.gallery_selected__T2qcs,.gallery_menu__U2btD:hover{background-color:var(--bg-gray3-color);color:#fff}.gallery_menu__U2btD.gallery_selected__T2qcs{border-color:#fff}.gallery_cardsCon__wAfcp{align-items:flex-start;display:flex;flex-grow:1;flex-shrink:1;flex-wrap:wrap;justify-content:space-between;max-height:100vh;max-width:calc(1600px + 9rem)}.gallery_cardsCon__wAfcp::-webkit-scrollbar-thumb{background-color:hsla(0,0%,100%,.2);border:5px solid #121317;border-radius:8px}.gallery_card__noUoL{background-color:var(--bg-gray3-color);border-radius:var(--radius);cursor:pointer;font-size:.75rem;height:260px;margin-bottom:1rem;overflow:hidden;position:relative;width:200px}.gallery_coverImg__BYj-o,.gallery_coverImg__BYj-o img{height:100%;width:100%}.gallery_prompt__9PEmb{background-color:#f8f8f880;border-radius:var(--radius);bottom:1rem;color:var(--font-dark-color);height:0;left:1rem;overflow:hidden;padding:0 
.5rem;position:absolute;right:1rem;text-align:left;white-space:pre-wrap;word-break:break-all}.gallery_prompt__9PEmb.gallery_show__c2k50{height:-webkit-fit-content;height:-moz-fit-content;height:fit-content;padding:.5rem}.gallery_infoCon__E8oLy{align-items:center;bottom:1rem;color:var(--font-dark-color);display:flex;justify-content:flex-start;left:1rem;position:absolute;right:1rem}.gallery_avatar__KWBmI,.gallery_avatar__KWBmI img{border-radius:12px;height:24px;overflow:hidden;width:24px}.gallery_avatar__KWBmI{margin-right:1rem}.gallery_spaceholder__xJwYU{flex-grow:1;flex-shrink:1}.header_con__M\+u1W{align-items:center;display:flex;justify-content:center;padding:0 var(--side-gap);width:100vw}.header_header__Y7CqP{align-items:center;border-bottom:1px solid hsla(0,0%,100%,.1);display:flex;justify-content:space-between;padding:1rem 0;width:100%}.header_logoCon__MIdGL{align-items:flex-start;display:flex;height:3rem;justify-content:center}.header_logo__90zuC{height:3rem;margin-right:1rem}.header_logoCon__MIdGL>div{font-size:2rem;font-weight:700;line-height:2rem;margin-top:5px}.header_avatar__B3zXB{background:var(--bg-gray2-color);border-radius:50%;overflow:hidden}.header_avatar__B3zXB,.header_avatar__B3zXB img{height:3rem;width:3rem}.login_con__\+RJgQ{background:#000;box-shadow:-5px 0 20px 0 hsla(0,0%,100%,.2);height:100vh;padding:var(--side-gap);position:fixed;right:0;top:0;z-index:9}.login_close__JulM-{cursor:pointer;-webkit-user-select:none;user-select:none}.result_con__gHOU1{align-items:center;color:var(--font-dark-color);justify-content:center;width:50%;z-index:999}.result_con__gHOU1 *{flex-shrink:0}.result_board__PCvVJ{background-color:var(--bg-light-color);border-radius:var(--radius);display:flex;flex-flow:column;height:100%;width:100%}.result_colHead__k0Mk-{background:#f9fafb;border:0 solid #e5e7eb;border-radius:8px;flex:0 1 auto;padding:8px}.result_colInner__9FccK{background:#fff;border:1px solid #e5e7eb;border-radius:8px;box-shadow:0 1px 2px 0 rgba(0,0,0,.05);flex-wrap:wrap;gap:1px;margin-bottom:1rem;overflow:hidden;padding:10px 12px}.result_colDetail__jggqg,.result_colInner__9FccK{align-items:center;flex-direction:column;justify-content:flex-start}.result_colDetail__jggqg{background:#f9fafb;border:0 solid #e5e7eb;border-radius:8px;display:flex;flex:1 1 auto;margin-top:1rem;padding:8px 8px 24px}.result_colContent__FYZno{background:#fff;border:1px solid #e5e7eb;border-radius:8px;height:100%;width:100%}.result_colTitle__R8k\+A{align-items:flex-end;color:#6b7280;display:flex;font-size:.875rem;justify-content:space-between;line-height:1.2rem;margin-bottom:8px;width:100%}.result_passwordCon__OjFSI{border-top:1px solid #e5e7eb;padding:8px 12px 
2px}.result_emailCon__eEqXk{padding-bottom:10px;padding-left:12px;padding-right:12px}.result_colTitle__R8k\+A>div{margin-bottom:.5rem}.result_colTitle__R8k\+A>div.result_restart__fLq8E{border-radius:5px;cursor:pointer;font-size:1rem;font-weight:400;margin-bottom:0;margin-left:1rem;padding:.5rem;-webkit-user-select:none;user-select:none}.result_restart__fLq8E:hover{background-color:var(--bg-gray0-color);color:var(--font-dark-color)}.result_spaceholder__GAxGZ{flex-grow:1;flex-shrink:1}.result_lang__85-De{cursor:pointer;font-weight:400;margin-right:1rem;-webkit-user-select:none;user-select:none}.result_lang__85-De.result_en__n-Jo7{margin-left:1rem;margin-right:0;width:4rem}.result_lang__85-De:hover{font-weight:700}.result_lang__85-De.result_selected__kDzD1{color:var(--font-dark-color);font-weight:700}.result_regene__yKazF{color:var(--theme-color);cursor:pointer;font-weight:400;-webkit-user-select:none;user-select:none}.result_chatCon__Hm\+zJ{background-color:var(--bg-gray0-color);border-radius:var(--radius);height:calc(100% - 4rem);padding:1rem}.result_chatCon__Hm\+zJ,.result_chatMsgCon__x8UTP{align-items:center;display:flex;flex-direction:column;flex-grow:1;flex-shrink:1;justify-content:flex-start;width:100%}.result_chatMsgCon__x8UTP{overflow-y:overlay;text-align:left}.result_chatMsgCon__x8UTP::-webkit-scrollbar-thumb{border:none;border-radius:3px}.result_chatMsgCon__x8UTP::-webkit-scrollbar{width:6px}.result_chatMsgRow__dr9Qg{align-items:flex-start;display:flex;flex-direction:row;justify-content:flex-start;margin-bottom:1rem;width:100%}.result_chatMsgRow__dr9Qg.result_user__bUuRg{flex-direction:row-reverse}.result_avatar__B2zOp{background:var(--bg-gray2-color);border-radius:1.5rem;margin-left:0;margin-right:1rem;overflow:hidden}.result_avatar__B2zOp,.result_avatar__B2zOp img{height:3rem;width:3rem}.result_user__bUuRg .result_avatar__B2zOp{margin-left:1rem;margin-right:0}.result_bubble__GexXm{background:var(--bg-theme-color);border-radius:var(--radius);flex-shrink:1;line-height:1.5rem;padding:.75rem 1rem;white-space:pre-wrap;word-break:break-all}.result_bubble__GexXm.result_unactive__zyVF2{background:var(--bg-gray1-color)}.result_user__bUuRg .result_bubble__GexXm{background:var(--bg-light-color)}.result_chatIptCon__LXDF-{align-items:center;display:flex;flex-direction:column;justify-content:flex-start;width:100%}.result_chatTipsCon__w4uUf{align-items:flex-end;display:flex;flex-direction:row;justify-content:flex-start;margin-top:1rem;max-width:100%;overflow-x:auto;overflow-y:hidden;width:100%}.result_chatTipsCon__w4uUf::-webkit-scrollbar-thumb{border-color:var(--bg-gray0-color)}.result_chatTips__6b9zJ{background:var(--bg-light-color);border-radius:var(--radius);cursor:pointer;margin-right:1rem;padding:1rem;text-align:left;white-space:pre-wrap;width:15.5rem;word-break:break-all}.result_chatTips__6b9zJ:last-child{margin-right:0}.result_chatRowCon__jLGk3{align-items:flex-start;display:flex;flex-direction:row;justify-content:space-between;margin-top:1rem;width:100%}.result_iptLineCon__nLuWa{flex-grow:1;flex-shrink:1;line-height:1.5rem;margin-right:1rem;position:relative;text-align:left}.result_iptSpaceholder__hAkD5{border:1px solid transparent;max-height:calc(9rem + 2px);visibility:hidden}.result_iptSpaceholder__hAkD5,.result_ipt__tA\+g4{padding:.75rem 
1rem;white-space:pre-wrap;word-break:break-all}.result_ipt__tA\+g4{background:var(--bg-light-color);border-radius:var(--radius);bottom:0;left:0;overflow-y:auto;position:absolute;right:0;top:0}.result_ipt__tA\+g4::-webkit-scrollbar-thumb{border-color:var(--bg-light-color)}.result_btn__h5tQr{align-items:center;background-color:var(--theme-color);border:1px solid var(--theme-color);border-radius:1.5rem;color:#fff;cursor:pointer;display:flex;font-weight:700;height:calc(3rem - 2px);justify-content:center;line-height:1rem;padding:0 1.5rem;-webkit-user-select:none;user-select:none}.result_con__gHOU1 .result_btn__h5tQr.result_disabled__lB61-{background:var(--bg-gray2-color);border-color:var(--bg-gray2-color);color:var(--font-light-color);cursor:not-allowed}.result_iptArea__23TZc{background:#fff;border:1px solid #e5e7eb;border-radius:8px;box-shadow:0 0 0 3px transparent,inset 0 2px 4px 0 rgba(0,0,0,.05);color:#1f2937;display:block;font-size:14px;height:42px;line-height:1.4;outline:none!important;padding:10px;position:relative;width:100%}.result_iptArea__23TZc:focus{border-color:#93c5fd;box-shadow:0 0 0 3px #dfedfe,inset 0 2px 4px 0 transparent}.result_iptArea__23TZc::-webkit-scrollbar-thumb{border-color:var(--bg-gray0-color)}.result_clearBtn__r6e0y{background:linear-gradient(to bottom right,#f3f4f6,#e5e7eb);border:1px solid #e5e7eb;border-radius:8px;color:#374151;cursor:pointer;font-size:16px;font-weight:600;height:42px;min-width:max(160px,48%);padding:8px 16px}.result_clearBtn__r6e0y:hover{background:linear-gradient(to bottom right,#f3f4f6,#f3f4f6);border:1px solid #e5e7eb}.result_clearBtnLogin__LOsgV{background:linear-gradient(to bottom right,#f3f4f6,#e5e7eb);border:1px solid #e5e7eb;border-radius:8px;color:#374151;cursor:pointer;font-size:16px;font-weight:700;height:42px;min-width:max(160px,48%);padding:8px 16px}.result_inputError__qtPTq{border-color:#f56565;box-shadow:0 0 0 3px #fed7d7,inset 0 2px 4px 0 transparent}.result_clearBtnLogin__LOsgV:hover{background:linear-gradient(to bottom right,#f3f4f6,#f3f4f6);border:1px solid #e5e7eb}.result_btnCon__LEoi5{display:flex;justify-content:space-between}.result_generateBtn__UGmBG{background:linear-gradient(to bottom right,#ffedd5,#fdba74);border:1px solid #fed7aa;border-radius:8px;color:#ea580c;cursor:pointer;font-size:16px;font-weight:600;height:42px;min-width:max(160px,48%);padding:8px 16px}.result_generateBtn__UGmBG:hover{background:linear-gradient(to bottom right,#ffecd3,#fed7ab);border:1px solid #ffd8b4}.result_generateBtnLogin__nkLOj{background:linear-gradient(to bottom right,#ffedd5,#fdba74);border:1px solid #fed7aa;border-radius:8px;color:#ea580c;cursor:pointer;font-size:16px;font-weight:700;height:42px;min-width:max(160px,48%);padding:8px 16px}.result_generateBtnLogin__nkLOj:hover{background:linear-gradient(to bottom right,#ffecd3,#fed7ab);border:1px solid #ffd8b4}.result_candidateCon__x9kyB{align-items:flex-start;background-color:var(--bg-gray0-color);border-radius:var(--radius);display:flex;flex-direction:row;flex-grow:1;flex-shrink:1;height:100%;justify-content:space-between;overflow-y:overlay;padding:1rem;position:relative;width:100%}.result_candidateCon__x9kyB::-webkit-scrollbar-thumb{border-color:var(--bg-gray0-color)}.result_candidateCol__eoHna{margin-right:1rem;position:relative;width:calc(33.33333% - .66667rem)}.result_candidateCol__eoHna:last-child{margin-right:0}.result_candidateCol__eoHna 
img{border-radius:var(--radius);cursor:pointer;margin-bottom:1rem}.result_creatorCon__tIm3e{align-items:flex-end;color:var(--font-gray-color);display:flex;font-size:1.2rem;font-weight:700;justify-content:flex-start;line-height:1.2rem;margin-bottom:1rem;width:100%}.result_creatorInfoCon__pET8h{text-align:left}.result_creatorName__VLTXL{color:var(--font-dark-color);font-size:1.2rem;font-weight:700;line-height:1.8rem}.result_creatorInfo__CkbWU{color:var(--font-gray-color);font-size:1rem;line-height:1.2rem}.result_modelView__Y25w5{background:var(--bg-gray0-color);border-radius:var(--radius);flex-grow:1;flex-shrink:1;height:100%;overflow:hidden;width:100%}.result_modelInfoCon__bXw5O{align-items:center;display:flex;flex-direction:column;justify-content:flex-end;text-align:left}.result_progressInfo__g9iwR{margin-bottom:.5rem;width:100%}.result_progressTrack__I6zDn{background:var(--bg-light-color);border-radius:2px;height:4px;position:relative;width:100%}.result_progressThumb__mbBQj{background-color:var(--theme-color);border-radius:2px;height:4px;left:0;position:absolute;top:0}.result_modelPrompt__DzUbD{background:var(--bg-light-color);border-radius:var(--radius);margin-top:1rem;min-height:3rem;padding:1rem;width:100%}.result_loadingCon__XVvXD,.result_progressCon__O57XA{font-size:14px;position:absolute;top:calc(50% - 10px)}.result_loadingCon__XVvXD{z-index:-111}.result_icon__dFKnM{height:20px;position:absolute;top:calc(50% - 10px)}.result_hideModel__3phD0{display:none}.result_descriptionLogin__xi7Yx{text-align:start}.welcome_con__o1kmf{align-items:center;background:#121317;border-radius:.5rem;display:flex;flex-direction:column;justify-content:flex-start;padding-bottom:1rem;padding-top:2rem;position:relative;width:45%}.welcome_con__o1kmf>img{position:absolute;top:0;width:100%}.welcome_mainCon__H1gv\+{margin-top:.5rem;z-index:999}.welcome_title__Gd8m4{color:#fff;font-family:Courier New;font-size:5rem;font-weight:700;line-height:5rem}.welcome_ioCon__PQZXU{background-color:#fff;border-radius:1rem;border-style:solid;margin-left:8rem;margin-right:8rem;margin-top:24rem;padding:2rem;width:calc(100% - 16rem)}.welcome_iptCon__KpWEL{align-items:center;background:#ededf2;border-radius:1rem;display:flex;height:4rem;justify-content:space-between;margin-bottom:2rem;width:100%}.welcome_iptCon__KpWEL>img{height:2rem;margin-right:1rem;position:static;width:2rem}.welcome_ipt__ayi9Z{background:#ededf2;border:none;border-radius:1rem;color:var(--font-dark-color);flex-grow:1;font-size:1rem;height:100%;outline:none;padding:0 2rem}.welcome_ipt__ayi9Z::-webkit-input-placeholder{font-size:1rem}.welcome_ipt__ayi9Z::placeholder{font-size:1rem}.welcome_btnCon__Mx-ta,.welcome_btn__jCuoG{align-items:center;display:flex;justify-content:center}.welcome_btn__jCuoG{border:1px solid #8f8f8f;border-radius:1rem;cursor:pointer;height:3rem;line-height:1rem;-webkit-user-select:none;user-select:none;width:100%}.welcome_btn__jCuoG:last-child{background:#4a00e0;border:none;font-weight:700}.welcome_btn__jCuoG.welcome_disabled__pcSzv{cursor:not-allowed}.welcome_btn__jCuoG:hover{color:#fff}
-/*# sourceMappingURL=main.c638cbcc.css.map*/
\ No newline at end of file
diff --git a/spaces/DReAMy-lib/dream_II/app.py b/spaces/DReAMy-lib/dream_II/app.py
deleted file mode 100644
index 01f2b0f8a1f07a9cdfcbcde9ca717825600caac5..0000000000000000000000000000000000000000
--- a/spaces/DReAMy-lib/dream_II/app.py
+++ /dev/null
@@ -1,27 +0,0 @@
-
-import gradio as gr
-from gradio.mix import Series
-
-description = """
- This space mixes DReAMy's large-multilingual [emotion-classification model](https://github.com/lorenzoscottb/DReAMy) with Whisper from OpenAI,
-    to classify recordings by their emotional content according to the Hall and Van de Castle framework. For more details,
- see the [Bertolini et al., 23](https://arxiv.org/abs/2302.14828v1) pre-print.
-
-
-
-    Disclaimer: if you run into the `503, Error: Model [model_name] is currently loading` error message, just wait and/or refresh the page.
- The indicated model still has to be loaded by the API.
-"""
-title = "DReAM v. II"
-
-whisper = gr.Interface.load("models/openai/whisper-large")
-
-interface_model_L = gr.Interface.load("models/DReAMy-lib/xlm-roberta-large-DreamBank-emotion-presence")
-
-Series(
- whisper,
- interface_model_L,
- description = description,
- title = title,
- inputs = gr.Audio(source="microphone"),
-).launch()
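For context, `gradio.mix.Series` was removed in later Gradio releases. Below is a minimal sketch of the same pipeline with manual chaining — hypothetical, assuming the hosted models above are still loadable and that `gr.load(...)` results remain callable as functions:

```python
# Hedged sketch: chain Whisper and the emotion classifier by hand instead
# of gradio.mix.Series. Model names are the ones used above; the callable
# behaviour of gr.load(...) results is an assumption about the current API.
import gradio as gr

asr = gr.load("models/openai/whisper-large")
classifier = gr.load("models/DReAMy-lib/xlm-roberta-large-DreamBank-emotion-presence")

def transcribe_and_classify(audio_path):
    text = asr(audio_path)   # speech -> transcript
    return classifier(text)  # transcript -> emotion label

gr.Interface(
    fn=transcribe_and_classify,
    inputs=gr.Audio(sources=["microphone"], type="filepath"),
    outputs="text",
).launch()
```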
diff --git a/spaces/Dao3/OpenArt/style.css b/spaces/Dao3/OpenArt/style.css
deleted file mode 100644
index fdbef9e64cc6b9f8003698ffa38997ee22a640ac..0000000000000000000000000000000000000000
--- a/spaces/Dao3/OpenArt/style.css
+++ /dev/null
@@ -1,84 +0,0 @@
-#col-container {
- max-width: 800px;
- margin-left: auto;
- margin-right: auto;
-}
-a {
- color: inherit;
- text-decoration: underline;
-}
-.gradio-container {
- font-family: 'IBM Plex Sans', sans-serif;
-}
-.gr-button {
- color: white;
- border-color: #9d66e5;
- background: #9d66e5;
-}
-input[type='range'] {
- accent-color: #9d66e5;
-}
-.dark input[type='range'] {
- accent-color: #dfdfdf;
-}
-.container {
- max-width: 800px;
- margin: auto;
- padding-top: 1.5rem;
-}
-#gallery {
- min-height: 22rem;
- margin-bottom: 15px;
- margin-left: auto;
- margin-right: auto;
- border-bottom-right-radius: .5rem !important;
- border-bottom-left-radius: .5rem !important;
-}
-#gallery>div>.h-full {
- min-height: 20rem;
-}
-.details:hover {
- text-decoration: underline;
-}
-.gr-button {
- white-space: nowrap;
-}
-.gr-button:focus {
- border-color: rgb(147 197 253 / var(--tw-border-opacity));
- outline: none;
- box-shadow: var(--tw-ring-offset-shadow), var(--tw-ring-shadow), var(--tw-shadow, 0 0 #0000);
- --tw-border-opacity: 1;
- --tw-ring-offset-shadow: var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color);
- --tw-ring-shadow: var(--tw-ring-inset) 0 0 0 calc(3px var(--tw-ring-offset-width)) var(--tw-ring-color);
- --tw-ring-color: rgb(191 219 254 / var(--tw-ring-opacity));
- --tw-ring-opacity: .5;
-}
-#advanced-options {
- margin-bottom: 20px;
-}
-.footer {
- margin-bottom: 45px;
- margin-top: 35px;
- text-align: center;
- border-bottom: 1px solid #e5e5e5;
-}
-.footer>p {
- font-size: .8rem;
- display: inline-block;
- padding: 0 10px;
- transform: translateY(10px);
- background: white;
-}
-.dark .logo{ filter: invert(1); }
-.dark .footer {
- border-color: #303030;
-}
-.dark .footer>p {
- background: #0b0f19;
-}
-.acknowledgments h4{
- margin: 1.25em 0 .25em 0;
- font-weight: bold;
- font-size: 115%;
-}
-
diff --git a/spaces/Dauzy/whisper-webui/src/vadParallel.py b/spaces/Dauzy/whisper-webui/src/vadParallel.py
deleted file mode 100644
index c2323c0b632c34014ac1fe7ac79141b5bd9c5731..0000000000000000000000000000000000000000
--- a/spaces/Dauzy/whisper-webui/src/vadParallel.py
+++ /dev/null
@@ -1,298 +0,0 @@
-import multiprocessing
-from queue import Empty
-import threading
-import time
-from src.hooks.progressListener import ProgressListener
-from src.vad import AbstractTranscription, TranscriptionConfig, get_audio_duration
-
-from multiprocessing import Pool, Queue
-
-from typing import Any, Dict, List, Union
-import os
-
-from src.whisper.abstractWhisperContainer import AbstractWhisperCallback
-
-class _ProgressListenerToQueue(ProgressListener):
- def __init__(self, progress_queue: Queue):
- self.progress_queue = progress_queue
- self.progress_total = 0
- self.prev_progress = 0
-
- def on_progress(self, current: Union[int, float], total: Union[int, float]):
- delta = current - self.prev_progress
- self.prev_progress = current
- self.progress_total = total
- self.progress_queue.put(delta)
-
- def on_finished(self):
- if self.progress_total > self.prev_progress:
- delta = self.progress_total - self.prev_progress
- self.progress_queue.put(delta)
- self.prev_progress = self.progress_total
-
-class ParallelContext:
- def __init__(self, num_processes: int = None, auto_cleanup_timeout_seconds: float = None):
- self.num_processes = num_processes
- self.auto_cleanup_timeout_seconds = auto_cleanup_timeout_seconds
- self.lock = threading.Lock()
-
- self.ref_count = 0
- self.pool = None
- self.cleanup_timer = None
-
- def get_pool(self):
- # Initialize pool lazily
- if (self.pool is None):
- context = multiprocessing.get_context('spawn')
- self.pool = context.Pool(self.num_processes)
-
- self.ref_count = self.ref_count + 1
-
- if (self.auto_cleanup_timeout_seconds is not None):
- self._stop_auto_cleanup()
-
- return self.pool
-
- def return_pool(self, pool):
- if (self.pool == pool and self.ref_count > 0):
- self.ref_count = self.ref_count - 1
-
- if (self.ref_count == 0):
- if (self.auto_cleanup_timeout_seconds is not None):
- self._start_auto_cleanup()
-
- def _start_auto_cleanup(self):
- if (self.cleanup_timer is not None):
- self.cleanup_timer.cancel()
- self.cleanup_timer = threading.Timer(self.auto_cleanup_timeout_seconds, self._execute_cleanup)
- self.cleanup_timer.start()
-
- print("Started auto cleanup of pool in " + str(self.auto_cleanup_timeout_seconds) + " seconds")
-
- def _stop_auto_cleanup(self):
- if (self.cleanup_timer is not None):
- self.cleanup_timer.cancel()
- self.cleanup_timer = None
-
- print("Stopped auto cleanup of pool")
-
- def _execute_cleanup(self):
- print("Executing cleanup of pool")
-
- if (self.ref_count == 0):
- self.close()
-
- def close(self):
- self._stop_auto_cleanup()
-
- if (self.pool is not None):
- print("Closing pool of " + str(self.num_processes) + " processes")
- self.pool.close()
- self.pool.join()
- self.pool = None
-
-class ParallelTranscriptionConfig(TranscriptionConfig):
- def __init__(self, device_id: str, override_timestamps, initial_segment_index, copy: TranscriptionConfig = None):
- super().__init__(copy.non_speech_strategy, copy.segment_padding_left, copy.segment_padding_right, copy.max_silent_period, copy.max_merge_size, copy.max_prompt_window, initial_segment_index)
- self.device_id = device_id
- self.override_timestamps = override_timestamps
-
-class ParallelTranscription(AbstractTranscription):
-    # Silero VAD typically takes about 3 seconds per minute, so there's no need to split the chunks
-    # into segments smaller than 2 minutes (at least 6 seconds of work per CPU core)
- MIN_CPU_CHUNK_SIZE_SECONDS = 2 * 60
-
- def __init__(self, sampling_rate: int = 16000):
- super().__init__(sampling_rate=sampling_rate)
-
- def transcribe_parallel(self, transcription: AbstractTranscription, audio: str, whisperCallable: AbstractWhisperCallback, config: TranscriptionConfig,
- cpu_device_count: int, gpu_devices: List[str], cpu_parallel_context: ParallelContext = None, gpu_parallel_context: ParallelContext = None,
- progress_listener: ProgressListener = None):
- total_duration = get_audio_duration(audio)
-
- # First, get the timestamps for the original audio
- if (cpu_device_count > 1 and not transcription.is_transcribe_timestamps_fast()):
- merged = self._get_merged_timestamps_parallel(transcription, audio, config, total_duration, cpu_device_count, cpu_parallel_context)
- else:
- timestamp_segments = transcription.get_transcribe_timestamps(audio, config, 0, total_duration)
- merged = transcription.get_merged_timestamps(timestamp_segments, config, total_duration)
-
- # We must make sure the whisper model is downloaded
- if (len(gpu_devices) > 1):
- whisperCallable.model_container.ensure_downloaded()
-
- # Split into a list for each device
- # TODO: Split by time instead of by number of chunks
- merged_split = list(self._split(merged, len(gpu_devices)))
-
- # Parameters that will be passed to the transcribe function
- parameters = []
- segment_index = config.initial_segment_index
-
- processing_manager = multiprocessing.Manager()
- progress_queue = processing_manager.Queue()
-
- for i in range(len(gpu_devices)):
- # Note that device_segment_list can be empty. But we will still create a process for it,
- # as otherwise we run the risk of assigning the same device to multiple processes.
- device_segment_list = list(merged_split[i]) if i < len(merged_split) else []
- device_id = gpu_devices[i]
-
- print("Device " + str(device_id) + " (index " + str(i) + ") has " + str(len(device_segment_list)) + " segments")
-
- # Create a new config with the given device ID
- device_config = ParallelTranscriptionConfig(device_id, device_segment_list, segment_index, config)
- segment_index += len(device_segment_list)
-
- progress_listener_to_queue = _ProgressListenerToQueue(progress_queue)
-            parameters.append([audio, whisperCallable, device_config, progress_listener_to_queue])
-
- merged = {
- 'text': '',
- 'segments': [],
- 'language': None
- }
-
- created_context = False
-
- perf_start_gpu = time.perf_counter()
-
- # Spawn a separate process for each device
- try:
- if (gpu_parallel_context is None):
- gpu_parallel_context = ParallelContext(len(gpu_devices))
- created_context = True
-
- # Get a pool of processes
- pool = gpu_parallel_context.get_pool()
-
- # Run the transcription in parallel
- results_async = pool.starmap_async(self.transcribe, parameters)
- total_progress = 0
-
- while not results_async.ready():
- try:
- delta = progress_queue.get(timeout=5) # Set a timeout of 5 seconds
- except Empty:
- continue
-
- total_progress += delta
- if progress_listener is not None:
- progress_listener.on_progress(total_progress, total_duration)
-
- results = results_async.get()
-
- # Call the finished callback
- if progress_listener is not None:
- progress_listener.on_finished()
-
- for result in results:
- # Merge the results
- if (result['text'] is not None):
- merged['text'] += result['text']
- if (result['segments'] is not None):
- merged['segments'].extend(result['segments'])
- if (result['language'] is not None):
- merged['language'] = result['language']
-
- finally:
- # Return the pool to the context
- if (gpu_parallel_context is not None):
- gpu_parallel_context.return_pool(pool)
- # Always close the context if we created it
- if (created_context):
- gpu_parallel_context.close()
-
- perf_end_gpu = time.perf_counter()
- print("Parallel transcription took " + str(perf_end_gpu - perf_start_gpu) + " seconds")
-
- return merged
-
- def _get_merged_timestamps_parallel(self, transcription: AbstractTranscription, audio: str, config: TranscriptionConfig, total_duration: float,
- cpu_device_count: int, cpu_parallel_context: ParallelContext = None):
- parameters = []
-
- chunk_size = max(total_duration / cpu_device_count, self.MIN_CPU_CHUNK_SIZE_SECONDS)
- chunk_start = 0
- cpu_device_id = 0
-
- perf_start_time = time.perf_counter()
-
- # Create chunks that will be processed on the CPU
- while (chunk_start < total_duration):
- chunk_end = min(chunk_start + chunk_size, total_duration)
-
- if (chunk_end - chunk_start < 1):
- # No need to process chunks that are less than 1 second
- break
-
- print("Parallel VAD: Executing chunk from " + str(chunk_start) + " to " +
- str(chunk_end) + " on CPU device " + str(cpu_device_id))
-            parameters.append([audio, config, chunk_start, chunk_end])
-
- cpu_device_id += 1
- chunk_start = chunk_end
-
- created_context = False
-
- # Spawn a separate process for each device
- try:
- if (cpu_parallel_context is None):
- cpu_parallel_context = ParallelContext(cpu_device_count)
- created_context = True
-
- # Get a pool of processes
- pool = cpu_parallel_context.get_pool()
-
- # Run the transcription in parallel. Note that transcription must be picklable.
- results = pool.starmap(transcription.get_transcribe_timestamps, parameters)
-
- timestamps = []
-
- # Flatten the results
- for result in results:
- timestamps.extend(result)
-
- merged = transcription.get_merged_timestamps(timestamps, config, total_duration)
-
- perf_end_time = time.perf_counter()
- print("Parallel VAD processing took {} seconds".format(perf_end_time - perf_start_time))
- return merged
-
- finally:
- # Return the pool to the context
- if (cpu_parallel_context is not None):
- cpu_parallel_context.return_pool(pool)
- # Always close the context if we created it
- if (created_context):
- cpu_parallel_context.close()
-
- def get_transcribe_timestamps(self, audio: str, config: ParallelTranscriptionConfig, start_time: float, duration: float):
- return []
-
- def get_merged_timestamps(self, timestamps: List[Dict[str, Any]], config: ParallelTranscriptionConfig, total_duration: float):
- # Override timestamps that will be processed
- if (config.override_timestamps is not None):
- print("(get_merged_timestamps) Using override timestamps of size " + str(len(config.override_timestamps)))
- return config.override_timestamps
- return super().get_merged_timestamps(timestamps, config, total_duration)
-
- def transcribe(self, audio: str, whisperCallable: AbstractWhisperCallback, config: ParallelTranscriptionConfig,
- progressListener: ProgressListener = None):
- # Override device ID the first time
- if (os.environ.get("INITIALIZED", None) is None):
- os.environ["INITIALIZED"] = "1"
-
- # Note that this may be None if the user didn't specify a device. In that case, Whisper will
- # just use the default GPU device.
- if (config.device_id is not None):
- print("Using device " + config.device_id)
- os.environ["CUDA_VISIBLE_DEVICES"] = config.device_id
-
- return super().transcribe(audio, whisperCallable, config, progressListener)
-
- def _split(self, a, n):
- """Split a list into n approximately equal parts."""
- k, m = divmod(len(a), n)
- return (a[i*k+min(i, m):(i+1)*k+min(i+1, m)] for i in range(n))
-
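The `_split` helper above spreads the remainder over the first `m` parts, so part sizes differ by at most one segment. A standalone illustration of the arithmetic:

```python
# Same logic as ParallelTranscription._split: divmod(len(a), n) gives the
# base chunk size k and remainder m; the first m chunks get one extra item.
def split(a, n):
    k, m = divmod(len(a), n)
    return (a[i*k + min(i, m):(i+1)*k + min(i+1, m)] for i in range(n))

print(list(split(list(range(10)), 3)))  # [[0, 1, 2, 3], [4, 5, 6], [7, 8, 9]]
```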
diff --git a/spaces/DeliaPaladines/CursoIA/README.md b/spaces/DeliaPaladines/CursoIA/README.md
deleted file mode 100644
index 0c83ee5efff44eec0aaa0c63b2de8c5493501fda..0000000000000000000000000000000000000000
--- a/spaces/DeliaPaladines/CursoIA/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: CursoIA
-emoji: 🚀
-colorFrom: gray
-colorTo: purple
-sdk: docker
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Detomo/Depth_estimation/README.md b/spaces/Detomo/Depth_estimation/README.md
deleted file mode 100644
index 3e052e30f43796d3240431d8c5dee66fcce658b9..0000000000000000000000000000000000000000
--- a/spaces/Detomo/Depth_estimation/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: Depth Estimation
-emoji: 🌖
-colorFrom: indigo
-colorTo: green
-sdk: gradio
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/Eddycrack864/Applio-Inference/Dockerfile b/spaces/Eddycrack864/Applio-Inference/Dockerfile
deleted file mode 100644
index b81f131c79cc585012b28002f4916491e85f3a33..0000000000000000000000000000000000000000
--- a/spaces/Eddycrack864/Applio-Inference/Dockerfile
+++ /dev/null
@@ -1,29 +0,0 @@
-# syntax=docker/dockerfile:1
-
-FROM python:3.10-bullseye
-
-EXPOSE 7865
-
-WORKDIR /app
-
-COPY . .
-
-RUN apt update && apt install -y -qq ffmpeg aria2 && apt clean
-
-RUN pip3 install --no-cache-dir -r requirements.txt
-
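-# aria2c flags: -c resumes partial downloads, -x 16 / -s 16 use 16 connections and splits per file, -k 1M sets the minimum split size, -d the target directory, -o the output filename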
-RUN aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/D40k.pth -d assets/pretrained_v2/ -o D40k.pth
-RUN aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/G40k.pth -d assets/pretrained_v2/ -o G40k.pth
-RUN aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0D40k.pth -d assets/pretrained_v2/ -o f0D40k.pth
-RUN aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0G40k.pth -d assets/pretrained_v2/ -o f0G40k.pth
-
-RUN aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP2-人声vocals+非人声instrumentals.pth -d assets/uvr5_weights/ -o HP2-人声vocals+非人声instrumentals.pth
-RUN aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP5-主旋律人声vocals+其他instrumentals.pth -d assets/uvr5_weights/ -o HP5-主旋律人声vocals+其他instrumentals.pth
-
-RUN aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/hubert_base.pt -d assets/hubert -o hubert_base.pt
-
-RUN aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/rmvpe.pt -d assets/hubert -o rmvpe.pt
-
-VOLUME [ "/app/weights", "/app/opt" ]
-
-CMD ["python3", "infer-web.py"]
\ No newline at end of file
diff --git a/spaces/Eddycrack864/Applio-Inference/lib/uvr5_pack/lib_v5/spec_utils.py b/spaces/Eddycrack864/Applio-Inference/lib/uvr5_pack/lib_v5/spec_utils.py
deleted file mode 100644
index a3fd46d333da7becc7f09f42c084ac7cde661035..0000000000000000000000000000000000000000
--- a/spaces/Eddycrack864/Applio-Inference/lib/uvr5_pack/lib_v5/spec_utils.py
+++ /dev/null
@@ -1,667 +0,0 @@
-import os, librosa
-import numpy as np
-import soundfile as sf
-from tqdm import tqdm
-import json, math, hashlib
-
-
-def crop_center(h1, h2):
- h1_shape = h1.size()
- h2_shape = h2.size()
-
- if h1_shape[3] == h2_shape[3]:
- return h1
- elif h1_shape[3] < h2_shape[3]:
-        raise ValueError("h1_shape[3] must be greater than or equal to h2_shape[3]")
-
- # s_freq = (h2_shape[2] - h1_shape[2]) // 2
- # e_freq = s_freq + h1_shape[2]
- s_time = (h1_shape[3] - h2_shape[3]) // 2
- e_time = s_time + h2_shape[3]
- h1 = h1[:, :, :, s_time:e_time]
-
- return h1
-
-
-def wave_to_spectrogram(
- wave, hop_length, n_fft, mid_side=False, mid_side_b2=False, reverse=False
-):
- if reverse:
- wave_left = np.flip(np.asfortranarray(wave[0]))
- wave_right = np.flip(np.asfortranarray(wave[1]))
- elif mid_side:
- wave_left = np.asfortranarray(np.add(wave[0], wave[1]) / 2)
- wave_right = np.asfortranarray(np.subtract(wave[0], wave[1]))
- elif mid_side_b2:
- wave_left = np.asfortranarray(np.add(wave[1], wave[0] * 0.5))
- wave_right = np.asfortranarray(np.subtract(wave[0], wave[1] * 0.5))
- else:
- wave_left = np.asfortranarray(wave[0])
- wave_right = np.asfortranarray(wave[1])
-
- spec_left = librosa.stft(wave_left, n_fft, hop_length=hop_length)
- spec_right = librosa.stft(wave_right, n_fft, hop_length=hop_length)
-
- spec = np.asfortranarray([spec_left, spec_right])
-
- return spec
-
-
-def wave_to_spectrogram_mt(
- wave, hop_length, n_fft, mid_side=False, mid_side_b2=False, reverse=False
-):
- import threading
-
- if reverse:
- wave_left = np.flip(np.asfortranarray(wave[0]))
- wave_right = np.flip(np.asfortranarray(wave[1]))
- elif mid_side:
- wave_left = np.asfortranarray(np.add(wave[0], wave[1]) / 2)
- wave_right = np.asfortranarray(np.subtract(wave[0], wave[1]))
- elif mid_side_b2:
- wave_left = np.asfortranarray(np.add(wave[1], wave[0] * 0.5))
- wave_right = np.asfortranarray(np.subtract(wave[0], wave[1] * 0.5))
- else:
- wave_left = np.asfortranarray(wave[0])
- wave_right = np.asfortranarray(wave[1])
-
- def run_thread(**kwargs):
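-        # hand the result back via a module-level global; Thread targets cannot return values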
- global spec_left
- spec_left = librosa.stft(**kwargs)
-
- thread = threading.Thread(
- target=run_thread,
- kwargs={"y": wave_left, "n_fft": n_fft, "hop_length": hop_length},
- )
- thread.start()
- spec_right = librosa.stft(wave_right, n_fft, hop_length=hop_length)
- thread.join()
-
- spec = np.asfortranarray([spec_left, spec_right])
-
- return spec
-
-
-def combine_spectrograms(specs, mp):
- l = min([specs[i].shape[2] for i in specs])
- spec_c = np.zeros(shape=(2, mp.param["bins"] + 1, l), dtype=np.complex64)
- offset = 0
- bands_n = len(mp.param["band"])
-
- for d in range(1, bands_n + 1):
- h = mp.param["band"][d]["crop_stop"] - mp.param["band"][d]["crop_start"]
- spec_c[:, offset : offset + h, :l] = specs[d][
- :, mp.param["band"][d]["crop_start"] : mp.param["band"][d]["crop_stop"], :l
- ]
- offset += h
-
- if offset > mp.param["bins"]:
-        raise ValueError("Too many bins")
-
-    # lowpass filter
- if (
- mp.param["pre_filter_start"] > 0
- ): # and mp.param['band'][bands_n]['res_type'] in ['scipy', 'polyphase']:
- if bands_n == 1:
- spec_c = fft_lp_filter(
- spec_c, mp.param["pre_filter_start"], mp.param["pre_filter_stop"]
- )
- else:
- gp = 1
- for b in range(
- mp.param["pre_filter_start"] + 1, mp.param["pre_filter_stop"]
- ):
- g = math.pow(
- 10, -(b - mp.param["pre_filter_start"]) * (3.5 - gp) / 20.0
- )
- gp = g
- spec_c[:, b, :] *= g
-
- return np.asfortranarray(spec_c)
-
-
-def spectrogram_to_image(spec, mode="magnitude"):
- if mode == "magnitude":
- if np.iscomplexobj(spec):
- y = np.abs(spec)
- else:
- y = spec
- y = np.log10(y**2 + 1e-8)
- elif mode == "phase":
- if np.iscomplexobj(spec):
- y = np.angle(spec)
- else:
- y = spec
-
- y -= y.min()
- y *= 255 / y.max()
- img = np.uint8(y)
-
- if y.ndim == 3:
- img = img.transpose(1, 2, 0)
- img = np.concatenate([np.max(img, axis=2, keepdims=True), img], axis=2)
-
- return img
-
-
-def reduce_vocal_aggressively(X, y, softmask):
- v = X - y
- y_mag_tmp = np.abs(y)
- v_mag_tmp = np.abs(v)
-
- v_mask = v_mag_tmp > y_mag_tmp
- y_mag = np.clip(y_mag_tmp - v_mag_tmp * v_mask * softmask, 0, np.inf)
-
- return y_mag * np.exp(1.0j * np.angle(y))
-
-
-def mask_silence(mag, ref, thres=0.2, min_range=64, fade_size=32):
- if min_range < fade_size * 2:
-        raise ValueError("min_range must be >= fade_size * 2")
-
- mag = mag.copy()
-
- idx = np.where(ref.mean(axis=(0, 1)) < thres)[0]
- starts = np.insert(idx[np.where(np.diff(idx) != 1)[0] + 1], 0, idx[0])
- ends = np.append(idx[np.where(np.diff(idx) != 1)[0]], idx[-1])
- uninformative = np.where(ends - starts > min_range)[0]
- if len(uninformative) > 0:
- starts = starts[uninformative]
- ends = ends[uninformative]
- old_e = None
- for s, e in zip(starts, ends):
- if old_e is not None and s - old_e < fade_size:
- s = old_e - fade_size * 2
-
- if s != 0:
- weight = np.linspace(0, 1, fade_size)
- mag[:, :, s : s + fade_size] += weight * ref[:, :, s : s + fade_size]
- else:
- s -= fade_size
-
- if e != mag.shape[2]:
- weight = np.linspace(1, 0, fade_size)
- mag[:, :, e - fade_size : e] += weight * ref[:, :, e - fade_size : e]
- else:
- e += fade_size
-
- mag[:, :, s + fade_size : e - fade_size] += ref[
- :, :, s + fade_size : e - fade_size
- ]
- old_e = e
-
- return mag
-
-
-def align_wave_head_and_tail(a, b):
- l = min([a[0].size, b[0].size])
-
- return a[:l, :l], b[:l, :l]
-
-
-def cache_or_load(mix_path, inst_path, mp):
- mix_basename = os.path.splitext(os.path.basename(mix_path))[0]
- inst_basename = os.path.splitext(os.path.basename(inst_path))[0]
-
- cache_dir = "mph{}".format(
- hashlib.sha1(json.dumps(mp.param, sort_keys=True).encode("utf-8")).hexdigest()
- )
- mix_cache_dir = os.path.join("cache", cache_dir)
- inst_cache_dir = os.path.join("cache", cache_dir)
-
- os.makedirs(mix_cache_dir, exist_ok=True)
- os.makedirs(inst_cache_dir, exist_ok=True)
-
- mix_cache_path = os.path.join(mix_cache_dir, mix_basename + ".npy")
- inst_cache_path = os.path.join(inst_cache_dir, inst_basename + ".npy")
-
- if os.path.exists(mix_cache_path) and os.path.exists(inst_cache_path):
- X_spec_m = np.load(mix_cache_path)
- y_spec_m = np.load(inst_cache_path)
- else:
- X_wave, y_wave, X_spec_s, y_spec_s = {}, {}, {}, {}
-
- for d in range(len(mp.param["band"]), 0, -1):
- bp = mp.param["band"][d]
-
- if d == len(mp.param["band"]): # high-end band
- X_wave[d], _ = librosa.load(
- mix_path, bp["sr"], False, dtype=np.float32, res_type=bp["res_type"]
- )
- y_wave[d], _ = librosa.load(
- inst_path,
- bp["sr"],
- False,
- dtype=np.float32,
- res_type=bp["res_type"],
- )
- else: # lower bands
- X_wave[d] = librosa.resample(
- X_wave[d + 1],
- mp.param["band"][d + 1]["sr"],
- bp["sr"],
- res_type=bp["res_type"],
- )
- y_wave[d] = librosa.resample(
- y_wave[d + 1],
- mp.param["band"][d + 1]["sr"],
- bp["sr"],
- res_type=bp["res_type"],
- )
-
- X_wave[d], y_wave[d] = align_wave_head_and_tail(X_wave[d], y_wave[d])
-
- X_spec_s[d] = wave_to_spectrogram(
- X_wave[d],
- bp["hl"],
- bp["n_fft"],
- mp.param["mid_side"],
- mp.param["mid_side_b2"],
- mp.param["reverse"],
- )
- y_spec_s[d] = wave_to_spectrogram(
- y_wave[d],
- bp["hl"],
- bp["n_fft"],
- mp.param["mid_side"],
- mp.param["mid_side_b2"],
- mp.param["reverse"],
- )
-
- del X_wave, y_wave
-
- X_spec_m = combine_spectrograms(X_spec_s, mp)
- y_spec_m = combine_spectrograms(y_spec_s, mp)
-
- if X_spec_m.shape != y_spec_m.shape:
- raise ValueError("The combined spectrograms are different: " + mix_path)
-
- _, ext = os.path.splitext(mix_path)
-
- np.save(mix_cache_path, X_spec_m)
- np.save(inst_cache_path, y_spec_m)
-
- return X_spec_m, y_spec_m
-
-
-def spectrogram_to_wave(spec, hop_length, mid_side, mid_side_b2, reverse):
- spec_left = np.asfortranarray(spec[0])
- spec_right = np.asfortranarray(spec[1])
-
- wave_left = librosa.istft(spec_left, hop_length=hop_length)
- wave_right = librosa.istft(spec_right, hop_length=hop_length)
-
- if reverse:
- return np.asfortranarray([np.flip(wave_left), np.flip(wave_right)])
- elif mid_side:
- return np.asfortranarray(
- [np.add(wave_left, wave_right / 2), np.subtract(wave_left, wave_right / 2)]
- )
- elif mid_side_b2:
- return np.asfortranarray(
- [
- np.add(wave_right / 1.25, 0.4 * wave_left),
- np.subtract(wave_left / 1.25, 0.4 * wave_right),
- ]
- )
- else:
- return np.asfortranarray([wave_left, wave_right])
-
-
-def spectrogram_to_wave_mt(spec, hop_length, mid_side, reverse, mid_side_b2):
- import threading
-
- spec_left = np.asfortranarray(spec[0])
- spec_right = np.asfortranarray(spec[1])
-
- def run_thread(**kwargs):
- global wave_left
- wave_left = librosa.istft(**kwargs)
-
- thread = threading.Thread(
- target=run_thread, kwargs={"stft_matrix": spec_left, "hop_length": hop_length}
- )
- thread.start()
- wave_right = librosa.istft(spec_right, hop_length=hop_length)
- thread.join()
-
- if reverse:
- return np.asfortranarray([np.flip(wave_left), np.flip(wave_right)])
- elif mid_side:
- return np.asfortranarray(
- [np.add(wave_left, wave_right / 2), np.subtract(wave_left, wave_right / 2)]
- )
- elif mid_side_b2:
- return np.asfortranarray(
- [
- np.add(wave_right / 1.25, 0.4 * wave_left),
- np.subtract(wave_left / 1.25, 0.4 * wave_right),
- ]
- )
- else:
- return np.asfortranarray([wave_left, wave_right])
-
-
-def cmb_spectrogram_to_wave(spec_m, mp, extra_bins_h=None, extra_bins=None):
- wave_band = {}
- bands_n = len(mp.param["band"])
- offset = 0
-
- for d in range(1, bands_n + 1):
- bp = mp.param["band"][d]
- spec_s = np.ndarray(
- shape=(2, bp["n_fft"] // 2 + 1, spec_m.shape[2]), dtype=complex
- )
- h = bp["crop_stop"] - bp["crop_start"]
- spec_s[:, bp["crop_start"] : bp["crop_stop"], :] = spec_m[
- :, offset : offset + h, :
- ]
-
- offset += h
- if d == bands_n: # higher
- if extra_bins_h: # if --high_end_process bypass
- max_bin = bp["n_fft"] // 2
- spec_s[:, max_bin - extra_bins_h : max_bin, :] = extra_bins[
- :, :extra_bins_h, :
- ]
- if bp["hpf_start"] > 0:
- spec_s = fft_hp_filter(spec_s, bp["hpf_start"], bp["hpf_stop"] - 1)
- if bands_n == 1:
- wave = spectrogram_to_wave(
- spec_s,
- bp["hl"],
- mp.param["mid_side"],
- mp.param["mid_side_b2"],
- mp.param["reverse"],
- )
- else:
- wave = np.add(
- wave,
- spectrogram_to_wave(
- spec_s,
- bp["hl"],
- mp.param["mid_side"],
- mp.param["mid_side_b2"],
- mp.param["reverse"],
- ),
- )
- else:
- sr = mp.param["band"][d + 1]["sr"]
- if d == 1: # lower
- spec_s = fft_lp_filter(spec_s, bp["lpf_start"], bp["lpf_stop"])
- wave = librosa.resample(
- spectrogram_to_wave(
- spec_s,
- bp["hl"],
- mp.param["mid_side"],
- mp.param["mid_side_b2"],
- mp.param["reverse"],
- ),
- bp["sr"],
- sr,
- res_type="sinc_fastest",
- )
- else: # mid
- spec_s = fft_hp_filter(spec_s, bp["hpf_start"], bp["hpf_stop"] - 1)
- spec_s = fft_lp_filter(spec_s, bp["lpf_start"], bp["lpf_stop"])
- wave2 = np.add(
- wave,
- spectrogram_to_wave(
- spec_s,
- bp["hl"],
- mp.param["mid_side"],
- mp.param["mid_side_b2"],
- mp.param["reverse"],
- ),
- )
- # wave = librosa.core.resample(wave2, bp['sr'], sr, res_type="sinc_fastest")
- wave = librosa.core.resample(wave2, bp["sr"], sr, res_type="scipy")
-
- return wave.T
-
-
-def fft_lp_filter(spec, bin_start, bin_stop):
- g = 1.0
- for b in range(bin_start, bin_stop):
- g -= 1 / (bin_stop - bin_start)
- spec[:, b, :] = g * spec[:, b, :]
-
- spec[:, bin_stop:, :] *= 0
-
- return spec
-
-
-def fft_hp_filter(spec, bin_start, bin_stop):
- g = 1.0
- for b in range(bin_start, bin_stop, -1):
- g -= 1 / (bin_start - bin_stop)
- spec[:, b, :] = g * spec[:, b, :]
-
- spec[:, 0 : bin_stop + 1, :] *= 0
-
- return spec
-
-
-def mirroring(a, spec_m, input_high_end, mp):
- if "mirroring" == a:
- mirror = np.flip(
- np.abs(
- spec_m[
- :,
- mp.param["pre_filter_start"]
- - 10
- - input_high_end.shape[1] : mp.param["pre_filter_start"]
- - 10,
- :,
- ]
- ),
- 1,
- )
- mirror = mirror * np.exp(1.0j * np.angle(input_high_end))
-
- return np.where(
- np.abs(input_high_end) <= np.abs(mirror), input_high_end, mirror
- )
-
- if "mirroring2" == a:
- mirror = np.flip(
- np.abs(
- spec_m[
- :,
- mp.param["pre_filter_start"]
- - 10
- - input_high_end.shape[1] : mp.param["pre_filter_start"]
- - 10,
- :,
- ]
- ),
- 1,
- )
- mi = np.multiply(mirror, input_high_end * 1.7)
-
- return np.where(np.abs(input_high_end) <= np.abs(mi), input_high_end, mi)
-
-
-def ensembling(a, specs):
- for i in range(1, len(specs)):
- if i == 1:
- spec = specs[0]
-
- ln = min([spec.shape[2], specs[i].shape[2]])
- spec = spec[:, :, :ln]
- specs[i] = specs[i][:, :, :ln]
-
- if "min_mag" == a:
- spec = np.where(np.abs(specs[i]) <= np.abs(spec), specs[i], spec)
- if "max_mag" == a:
- spec = np.where(np.abs(specs[i]) >= np.abs(spec), specs[i], spec)
-
- return spec
-
-
-def stft(wave, nfft, hl):
- wave_left = np.asfortranarray(wave[0])
- wave_right = np.asfortranarray(wave[1])
- spec_left = librosa.stft(wave_left, nfft, hop_length=hl)
- spec_right = librosa.stft(wave_right, nfft, hop_length=hl)
- spec = np.asfortranarray([spec_left, spec_right])
-
- return spec
-
-
-def istft(spec, hl):
- spec_left = np.asfortranarray(spec[0])
- spec_right = np.asfortranarray(spec[1])
-
- wave_left = librosa.istft(spec_left, hop_length=hl)
- wave_right = librosa.istft(spec_right, hop_length=hl)
-    wave = np.asfortranarray([wave_left, wave_right])
-
-    return wave
-
-
-if __name__ == "__main__":
- import cv2
- import sys
- import time
- import argparse
- from model_param_init import ModelParameters
-
- p = argparse.ArgumentParser()
- p.add_argument(
- "--algorithm",
- "-a",
- type=str,
- choices=["invert", "invert_p", "min_mag", "max_mag", "deep", "align"],
- default="min_mag",
- )
- p.add_argument(
- "--model_params",
- "-m",
- type=str,
- default=os.path.join("modelparams", "1band_sr44100_hl512.json"),
- )
- p.add_argument("--output_name", "-o", type=str, default="output")
- p.add_argument("--vocals_only", "-v", action="store_true")
- p.add_argument("input", nargs="+")
- args = p.parse_args()
-
- start_time = time.time()
-
- if args.algorithm.startswith("invert") and len(args.input) != 2:
- raise ValueError("There should be two input files.")
-
- if not args.algorithm.startswith("invert") and len(args.input) < 2:
- raise ValueError("There must be at least two input files.")
-
- wave, specs = {}, {}
- mp = ModelParameters(args.model_params)
-
- for i in range(len(args.input)):
- spec = {}
-
- for d in range(len(mp.param["band"]), 0, -1):
- bp = mp.param["band"][d]
-
- if d == len(mp.param["band"]): # high-end band
- wave[d], _ = librosa.load(
- args.input[i],
- bp["sr"],
- False,
- dtype=np.float32,
- res_type=bp["res_type"],
- )
-
- if len(wave[d].shape) == 1: # mono to stereo
- wave[d] = np.array([wave[d], wave[d]])
- else: # lower bands
- wave[d] = librosa.resample(
- wave[d + 1],
- mp.param["band"][d + 1]["sr"],
- bp["sr"],
- res_type=bp["res_type"],
- )
-
- spec[d] = wave_to_spectrogram(
- wave[d],
- bp["hl"],
- bp["n_fft"],
- mp.param["mid_side"],
- mp.param["mid_side_b2"],
- mp.param["reverse"],
- )
-
- specs[i] = combine_spectrograms(spec, mp)
-
- del wave
-
- if args.algorithm == "deep":
-        d_spec = np.where(np.abs(specs[0]) <= np.abs(specs[1]), specs[0], specs[1])
- v_spec = d_spec - specs[1]
- sf.write(
- os.path.join("{}.wav".format(args.output_name)),
- cmb_spectrogram_to_wave(v_spec, mp),
- mp.param["sr"],
- )
-
- if args.algorithm.startswith("invert"):
- ln = min([specs[0].shape[2], specs[1].shape[2]])
- specs[0] = specs[0][:, :, :ln]
- specs[1] = specs[1][:, :, :ln]
-
- if "invert_p" == args.algorithm:
- X_mag = np.abs(specs[0])
- y_mag = np.abs(specs[1])
- max_mag = np.where(X_mag >= y_mag, X_mag, y_mag)
- v_spec = specs[1] - max_mag * np.exp(1.0j * np.angle(specs[0]))
- else:
- specs[1] = reduce_vocal_aggressively(specs[0], specs[1], 0.2)
- v_spec = specs[0] - specs[1]
-
- if not args.vocals_only:
- X_mag = np.abs(specs[0])
- y_mag = np.abs(specs[1])
- v_mag = np.abs(v_spec)
-
- X_image = spectrogram_to_image(X_mag)
- y_image = spectrogram_to_image(y_mag)
- v_image = spectrogram_to_image(v_mag)
-
- cv2.imwrite("{}_X.png".format(args.output_name), X_image)
- cv2.imwrite("{}_y.png".format(args.output_name), y_image)
- cv2.imwrite("{}_v.png".format(args.output_name), v_image)
-
- sf.write(
- "{}_X.wav".format(args.output_name),
- cmb_spectrogram_to_wave(specs[0], mp),
- mp.param["sr"],
- )
- sf.write(
- "{}_y.wav".format(args.output_name),
- cmb_spectrogram_to_wave(specs[1], mp),
- mp.param["sr"],
- )
-
- sf.write(
- "{}_v.wav".format(args.output_name),
- cmb_spectrogram_to_wave(v_spec, mp),
- mp.param["sr"],
- )
- else:
-        if args.algorithm != "deep":
- sf.write(
- os.path.join("ensembled", "{}.wav".format(args.output_name)),
- cmb_spectrogram_to_wave(ensembling(args.algorithm, specs), mp),
- mp.param["sr"],
- )
-
- if args.algorithm == "align":
- trackalignment = [
- {
- "file1": '"{}"'.format(args.input[0]),
- "file2": '"{}"'.format(args.input[1]),
- }
- ]
-
- for i, e in tqdm(enumerate(trackalignment), desc="Performing Alignment..."):
- os.system(f"python lib/align_tracks.py {e['file1']} {e['file2']}")
-
- # print('Total time: {0:.{1}f}s'.format(time.time() - start_time, 1))
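The mid/side branches in `wave_to_spectrogram` and `spectrogram_to_wave` above are exact inverses: encoding stores `mid = (L + R) / 2` and `side = L - R`, and decoding recovers `L = mid + side / 2` and `R = mid - side / 2`. A quick numerical check of that round trip:

```python
# Round-trip check of the mid/side encoding used in this file.
import numpy as np

rng = np.random.default_rng(0)
left, right = rng.standard_normal(8), rng.standard_normal(8)

mid, side = (left + right) / 2, left - right  # encode (wave_to_spectrogram)
assert np.allclose(mid + side / 2, left)      # decode (spectrogram_to_wave)
assert np.allclose(mid - side / 2, right)
```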
diff --git a/spaces/Eddycrack864/Applio-Inference/utils/backups.py b/spaces/Eddycrack864/Applio-Inference/utils/backups.py
deleted file mode 100644
index b814f8184792e80e2324685436053d61487110b1..0000000000000000000000000000000000000000
--- a/spaces/Eddycrack864/Applio-Inference/utils/backups.py
+++ /dev/null
@@ -1,141 +0,0 @@
-import os
-import shutil
-import hashlib
-import time
-import base64
-
-
-LOGS_FOLDER = '/content/Applio-RVC-Fork/logs'
-WEIGHTS_FOLDER = '/content/Applio-RVC-Fork/weights'
-GOOGLE_DRIVE_PATH = '/content/drive/MyDrive/RVC_Backup'
-
-def import_google_drive_backup():
- print("Importing Google Drive backup...")
- weights_exist = False
- for root, dirs, files in os.walk(GOOGLE_DRIVE_PATH):
- for filename in files:
- filepath = os.path.join(root, filename)
- if os.path.isfile(filepath) and not filepath.startswith(os.path.join(GOOGLE_DRIVE_PATH, 'weights')):
- backup_filepath = os.path.join(LOGS_FOLDER, os.path.relpath(filepath, GOOGLE_DRIVE_PATH))
- backup_folderpath = os.path.dirname(backup_filepath)
- if not os.path.exists(backup_folderpath):
- os.makedirs(backup_folderpath)
- print(f'Created backup folder: {backup_folderpath}', flush=True)
- shutil.copy2(filepath, backup_filepath) # copy file with metadata
- print(f'Imported file from Google Drive backup: {filename}')
- elif filepath.startswith(os.path.join(GOOGLE_DRIVE_PATH, 'weights')) and filename.endswith('.pth'):
- weights_exist = True
- weights_filepath = os.path.join(WEIGHTS_FOLDER, os.path.relpath(filepath, os.path.join(GOOGLE_DRIVE_PATH, 'weights')))
- weights_folderpath = os.path.dirname(weights_filepath)
- if not os.path.exists(weights_folderpath):
- os.makedirs(weights_folderpath)
- print(f'Created weights folder: {weights_folderpath}', flush=True)
- shutil.copy2(filepath, weights_filepath) # copy file with metadata
- print(f'Imported file from weights: {filename}')
- if weights_exist:
- print("Copied weights from Google Drive backup to local weights folder.")
- else:
- print("No weights found in Google Drive backup.")
- print("Google Drive backup import completed.")
-
-def get_md5_hash(file_path):
- hash_md5 = hashlib.md5()
- with open(file_path, "rb") as f:
- for chunk in iter(lambda: f.read(4096), b""):
- hash_md5.update(chunk)
- return hash_md5.hexdigest()
-
-def copy_weights_folder_to_drive():
- destination_folder = os.path.join(GOOGLE_DRIVE_PATH, 'weights')
- try:
- if not os.path.exists(destination_folder):
- os.makedirs(destination_folder)
-
- num_copied = 0
- for filename in os.listdir(WEIGHTS_FOLDER):
- if filename.endswith('.pth'):
- source_file = os.path.join(WEIGHTS_FOLDER, filename)
- destination_file = os.path.join(destination_folder, filename)
- if not os.path.exists(destination_file):
- shutil.copy2(source_file, destination_file)
- num_copied += 1
- print(f"Copied {filename} to Google Drive!")
-
- if num_copied == 0:
- print("No new finished models found for copying.")
- else:
- print(f"Finished copying {num_copied} files to Google Drive!")
-
- except Exception as e:
- print(f"An error occurred while copying weights: {str(e)}")
- # You can log the error or take appropriate actions here.
-
-def backup_files():
- print("\nStarting backup loop...")
- last_backup_timestamps_path = os.path.join(LOGS_FOLDER, 'last_backup_timestamps.txt')
- fully_updated = False # boolean to track if all files are up to date
-
- while True:
- try:
- updated = False # flag to check if any files were updated
- last_backup_timestamps = {}
-
- try:
- with open(last_backup_timestamps_path, 'r') as f:
- last_backup_timestamps = dict(line.strip().split(':') for line in f)
- except FileNotFoundError:
- pass # File does not exist yet, which is fine
-
- for root, dirs, files in os.walk(LOGS_FOLDER):
- for filename in files:
- if filename != 'last_backup_timestamps.txt':
- filepath = os.path.join(root, filename)
- if os.path.isfile(filepath):
- backup_filepath = os.path.join(GOOGLE_DRIVE_PATH, os.path.relpath(filepath, LOGS_FOLDER))
- backup_folderpath = os.path.dirname(backup_filepath)
- if not os.path.exists(backup_folderpath):
- os.makedirs(backup_folderpath)
- print(f'Created backup folder: {backup_folderpath}', flush=True)
- # check if file has changed since last backup
- last_backup_timestamp = last_backup_timestamps.get(filepath)
- current_timestamp = os.path.getmtime(filepath)
- if last_backup_timestamp is None or float(last_backup_timestamp) < current_timestamp:
- shutil.copy2(filepath, backup_filepath) # copy file with metadata
- last_backup_timestamps[filepath] = str(current_timestamp) # update last backup timestamp
- if last_backup_timestamp is None:
- print(f'Backed up file: {filename}')
- else:
- print(f'Updating backed up file: {filename}')
- updated = True
- fully_updated = False # if a file is updated, all files are not up to date
-
- # check if any files were deleted in Colab and delete them from the backup drive
- for filepath in list(last_backup_timestamps.keys()):
- if not os.path.exists(filepath):
- backup_filepath = os.path.join(GOOGLE_DRIVE_PATH, os.path.relpath(filepath, LOGS_FOLDER))
- if os.path.exists(backup_filepath):
- os.remove(backup_filepath)
- print(f'Deleted file: {filepath}')
- del last_backup_timestamps[filepath]
- updated = True
- fully_updated = False # if a file is deleted, all files are not up to date
-
- if not updated and not fully_updated:
- print("Files are up to date.")
- fully_updated = True # if all files are up to date, set the boolean to True
- copy_weights_folder_to_drive()
- sleep_time = 15
- else:
- sleep_time = 0.1
-
- with open(last_backup_timestamps_path, 'w') as f:
- for filepath, timestamp in last_backup_timestamps.items():
- f.write(f'{filepath}:{timestamp}\n')
-
- time.sleep(sleep_time) # wait for 15 seconds before checking again, or 0.1s if not fully up to date to speed up backups
-
- except Exception as e:
- print(f"An error occurred: {str(e)}")
- # You can log the error or take appropriate actions here.
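`backup_files` persists its state as one `path:timestamp` line per file. A standalone sketch of that round trip — note that splitting on `':'` assumes paths contain no colons, which holds for the Colab paths above; `split(':', 1)` is used here as the safer variant:

```python
# Round trip for the last_backup_timestamps.txt format written above.
records = {"/content/Applio-RVC-Fork/logs/run1/events.log": "1700000000.0"}

with open("last_backup_timestamps.txt", "w") as f:
    for path, ts in records.items():
        f.write(f"{path}:{ts}\n")

with open("last_backup_timestamps.txt") as f:
    loaded = dict(line.strip().split(":", 1) for line in f)

assert loaded == records
```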
diff --git a/spaces/ElainaFanBoy/IRONY-Real-ESRGAN/realesrgan/weights/README.md b/spaces/ElainaFanBoy/IRONY-Real-ESRGAN/realesrgan/weights/README.md
deleted file mode 100644
index 4d7b7e642591ef88575d9e6c360a4d29e0cc1a4f..0000000000000000000000000000000000000000
--- a/spaces/ElainaFanBoy/IRONY-Real-ESRGAN/realesrgan/weights/README.md
+++ /dev/null
@@ -1,3 +0,0 @@
-# Weights
-
-Put the downloaded weights into this folder.
diff --git a/spaces/ElainaFanBoy/MusicGen/audiocraft/modules/streaming.py b/spaces/ElainaFanBoy/MusicGen/audiocraft/modules/streaming.py
deleted file mode 100644
index fdbdf5e90fc0c6560873d66bf273460b38e5ed7e..0000000000000000000000000000000000000000
--- a/spaces/ElainaFanBoy/MusicGen/audiocraft/modules/streaming.py
+++ /dev/null
@@ -1,135 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Streaming module API that should be implemented by all Streaming components.
-"""
-
-from contextlib import contextmanager
-import typing as tp
-from torch import nn
-import torch
-
-
-State = tp.Dict[str, torch.Tensor]
-
-
-class StreamingModule(nn.Module):
- """Common API for streaming components.
-
- Each streaming component has a streaming state, which is just a dict[str, Tensor].
- By convention, the first dim of each tensor must be the batch size.
- Don't use dots in the key names, as this would clash with submodules
- (like in state_dict).
-
- If `self._is_streaming` is True, the component should use and remember
- the proper state inside `self._streaming_state`.
-
-    To put a streaming component into streaming state, use
-
- with module.streaming():
- ...
-
- This will automatically reset the streaming state when exiting the context manager.
-    This also automatically propagates to all streaming child modules.
-
-    Some modules might also implement the `StreamingModule.flush` method, although
-    this one is trickier, as all parent modules must be StreamingModule and implement
-    it as well for it to work properly. See `StreamingSequential` below.
- """
- def __init__(self) -> None:
- super().__init__()
- self._streaming_state: State = {}
- self._is_streaming = False
-
- def _apply_named_streaming(self, fn: tp.Any):
- for name, module in self.named_modules():
- if isinstance(module, StreamingModule):
- fn(name, module)
-
- def _set_streaming(self, streaming: bool):
- def _set_streaming(name, module):
- module._is_streaming = streaming
- self._apply_named_streaming(_set_streaming)
-
- @contextmanager
- def streaming(self):
- """Context manager to enter streaming mode. Reset streaming state on exit.
- """
- self._set_streaming(True)
- try:
- yield
- finally:
- self._set_streaming(False)
- self.reset_streaming()
-
- def reset_streaming(self):
- """Reset the streaming state.
- """
- def _reset(name: str, module: StreamingModule):
- module._streaming_state.clear()
-
- self._apply_named_streaming(_reset)
-
- def get_streaming_state(self) -> State:
- """Return the streaming state, including that of sub-modules.
- """
- state: State = {}
-
- def _add(name: str, module: StreamingModule):
- if name:
- name += "."
- for key, value in module._streaming_state.items():
- state[name + key] = value
-
- self._apply_named_streaming(_add)
- return state
-
- def set_streaming_state(self, state: State):
- """Set the streaming state, including that of sub-modules.
- """
- state = dict(state)
-
- def _set(name: str, module: StreamingModule):
- if name:
- name += "."
- module._streaming_state.clear()
- for key, value in list(state.items()):
- # complexity is not ideal here, but probably fine.
- if key.startswith(name):
- local_key = key[len(name):]
- if '.' not in local_key:
- module._streaming_state[local_key] = value
- del state[key]
-
- self._apply_named_streaming(_set)
- assert len(state) == 0, list(state.keys())
-
- def flush(self, x: tp.Optional[torch.Tensor] = None):
- """Flush any remaining outputs that were waiting for completion.
- Typically, for convolutions, this will add the final padding
- and process the last buffer.
-
- This should take an optional argument `x`, which will be provided
- if a module before this one in the streaming pipeline has already
-        spat out a flushed buffer.
- """
- if x is None:
- return None
- else:
- return self(x)
-
-
-class StreamingSequential(StreamingModule, nn.Sequential):
- """A streaming compatible alternative of `nn.Sequential`.
- """
- def flush(self, x: tp.Optional[torch.Tensor] = None):
- for module in self:
- if isinstance(module, StreamingModule):
- x = module.flush(x)
- elif x is not None:
- x = module(x)
- return x
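A minimal usage sketch of the API described in the `StreamingModule` docstring, with a hypothetical subclass that keeps a running sum while streaming (assumes the class above is importable):

```python
import torch

class RunningSum(StreamingModule):  # hypothetical subclass for illustration
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self._is_streaming:
            total = self._streaming_state.get("total", torch.zeros_like(x))
            self._streaming_state["total"] = total + x  # batch-first, per convention
            return self._streaming_state["total"]
        return x

m = RunningSum()
with m.streaming():            # enter streaming mode for m and all submodules
    m(torch.ones(2, 3))
    out = m(torch.ones(2, 3))  # tensor of 2s: state carried across calls
assert m.get_streaming_state() == {}  # state was reset on context exit
```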
diff --git a/spaces/Enutrof/English-NigerianPidgin-Translator/app.py b/spaces/Enutrof/English-NigerianPidgin-Translator/app.py
deleted file mode 100644
index 6f41145034cdd4208d901c456cfb26eeb32e95b3..0000000000000000000000000000000000000000
--- a/spaces/Enutrof/English-NigerianPidgin-Translator/app.py
+++ /dev/null
@@ -1,68 +0,0 @@
-import os
-import gradio as gr
-from simpletransformers.seq2seq import Seq2SeqModel, Seq2SeqArgs
-
-
-# os.environ["TOKENIZERS_PARALLELISM"] = "false"
-
-
-def load_translator(model_name='Enutrof/marian-mt-en-pcm'):
- '''
-    This function loads the sequence-to-sequence model for translation.
- :return: model
- '''
- pmodel_args = Seq2SeqArgs()
- pmodel_args.max_length = 1024
- pmodel_args.length_penalty = 1
- pmodel_args.num_beams = 20
- pmodel_args.num_return_sequences = 3
-
- pmodel = Seq2SeqModel(
- encoder_decoder_type="marian",
- encoder_decoder_name=model_name,
- args=pmodel_args,
- use_cuda=False
- )
- return pmodel
-
-
-en_pcm_model = load_translator()
-
-
-def predict(text):
-    if isinstance(text, str):
-        text = [text]
-    predictions = en_pcm_model.predict(text)
-    return [i.replace('▁', ' ') for i in predictions[0]]
-
-
-# HF_TOKEN = os.getenv('english-pidgin-flagging')
-# hf_writer = gr.HuggingFaceDatasetSaver(HF_TOKEN,
-# dataset_name="English-NigerianPidgin-Result-Validation",
-# organization="Enutrof",
-# )
-gr.Interface(
- fn=predict,
- inputs=gr.inputs.Textbox(lines=1, label="Input Text in English"),
- outputs=[
- gr.outputs.Textbox(label="Translated texts in 🇳🇬 Pidgin"),
- gr.outputs.Textbox(label=''),
- gr.outputs.Textbox(label=''),
- ],
- # theme="peach",
- title='English to 🇳🇬 Pidgin Automatic Translation',
- description='Type your English text in the left text box to get 🇳🇬 Pidgin translations on the right. '
- 'Tell us the best translation by clicking one of the buttons below.',
- examples=[
- 'Who are you?',
- 'You shall not pervert justice due the stranger or the fatherless, nor take a widow’s garment as a pledge.',
- 'I know every song by that artiste.',
- 'They should not be permitted here.',
- 'What are you looking for?',
- 'I am lost please help me find my way to the market.',
- ],
- allow_flagging="manual",
- flagging_options=["translation 1 ✅", "translation 2 ✅",
- "translation 3 ✅"],
- #flagging_callback=hf_writer,
-).launch(enable_queue=True)
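The `'▁'` character stripped in `predict` above is SentencePiece's word-boundary marker, which the MarianMT tokenizer can leave in raw decoded output; replacing it with a space recovers readable text. An illustration with a made-up string:

```python
# SentencePiece marks word boundaries with '▁'; predict() swaps it for spaces.
raw = "▁How▁far?▁I▁dey▁fine."  # hypothetical raw decoder output
print(raw.replace('▁', ' ').strip())  # -> "How far? I dey fine."
```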
diff --git a/spaces/FDSRashid/Taraf_by_Year/README.md b/spaces/FDSRashid/Taraf_by_Year/README.md
deleted file mode 100644
index efff1b788617ed3af79d16e40432dd25c4101883..0000000000000000000000000000000000000000
--- a/spaces/FDSRashid/Taraf_by_Year/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Taraf By Year
-emoji: 💻
-colorFrom: green
-colorTo: gray
-sdk: gradio
-sdk_version: 3.42.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Farazquraishi/pendora/options/swap_options.py b/spaces/Farazquraishi/pendora/options/swap_options.py
deleted file mode 100644
index 2a90c349bb7078823ddd99ed96700cb2569579cd..0000000000000000000000000000000000000000
--- a/spaces/Farazquraishi/pendora/options/swap_options.py
+++ /dev/null
@@ -1,43 +0,0 @@
-import argparse
-
-
-class SwapOptions():
- def __init__(self):
- self.parser = argparse.ArgumentParser()
- self.initialized = False
-
- def initialize(self):
- # paths (data, models, etc...)
- self.parser.add_argument('--arcface_path', type=str,
- default="arcface_model/arcface/arc_res50.h5",
- help='path to arcface model. Used to extract identity from source.')
-
- # Video/Image necessary models
- self.parser.add_argument('--retina_path', type=str,
- default="retinaface/retinaface_res50.h5",
- help='path to retinaface model.')
- self.parser.add_argument('--compare', type=bool,
- default=True,
- help='If true, concatenates the frame with the manipulated frame')
-
- self.parser.add_argument('--load', type=int,
- default=30,
-                                 help='checkpoint number to load weights from.')
- self.parser.add_argument('--device_id', type=int, default=0,
- help='which device to use')
-
- # logging and checkpointing
- self.parser.add_argument('--log_dir', type=str, default='logs/runs/',
- help='logging directory')
- self.parser.add_argument('--log_name', type=str, default='affa_f',
- help='name of the run, change this to track several experiments')
-
- self.parser.add_argument('--chkp_dir', type=str, default='checkpoints/',
- help='checkpoint directory (will use same name as log_name!)')
- self.initialized = True
-
- def parse(self):
- if not self.initialized:
- self.initialize()
- self.opt = self.parser.parse_args()
- return self.opt
\ No newline at end of file
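Typical usage of the options class above, from a hypothetical caller; `parse()` initializes the parser lazily on first use. Note that `type=bool` on `--compare` is a common argparse pitfall: any non-empty string on the command line parses as `True`.

```python
# Sketch: reading the swap options from the command line.
opts = SwapOptions().parse()          # builds the parser, then parses sys.argv
print(opts.arcface_path, opts.load)   # defaults: arcface model path and 30
```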
diff --git a/spaces/Fengbinbin/gpt-academic/docs/waifu_plugin/jquery.min.js b/spaces/Fengbinbin/gpt-academic/docs/waifu_plugin/jquery.min.js
deleted file mode 100644
index ab28a24729b320bffd3d2f60302af949db39ab85..0000000000000000000000000000000000000000
--- a/spaces/Fengbinbin/gpt-academic/docs/waifu_plugin/jquery.min.js
+++ /dev/null
@@ -1,4 +0,0 @@
-/*! jQuery v1.11.1 | (c) 2005, 2014 jQuery Foundation, Inc. | jquery.org/license */
-/* jQuery JavaScript Library v1.11.1: minified vendor bundle belonging to this
-   deleted file; the remainder of the blob is truncated in the extracted diff
-   and is not recoverable here. */
diff --git a/spaces/Ripaxxs/Tommy/README.md b/spaces/Ripaxxs/Tommy/README.md
deleted file mode 100644
index bae4a2097a3a73bd2927b019f4dad1f4c70454a2..0000000000000000000000000000000000000000
--- a/spaces/Ripaxxs/Tommy/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: Tommy
-emoji: ⚡
-colorFrom: yellow
-colorTo: blue
-sdk: docker
-pinned: false
-license: creativeml-openrail-m
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/ops/correlation.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/ops/correlation.py
deleted file mode 100644
index 3d0b79c301b29915dfaf4d2b1846c59be73127d3..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/ops/correlation.py
+++ /dev/null
@@ -1,196 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import torch
-from torch import Tensor, nn
-from torch.autograd import Function
-from torch.autograd.function import once_differentiable
-from torch.nn.modules.utils import _pair
-
-from ..utils import ext_loader
-
-ext_module = ext_loader.load_ext(
- '_ext', ['correlation_forward', 'correlation_backward'])
-
-
-class CorrelationFunction(Function):
-
- @staticmethod
- def forward(ctx,
- input1,
- input2,
- kernel_size=1,
- max_displacement=1,
- stride=1,
- padding=1,
- dilation=1,
- dilation_patch=1):
-
- ctx.save_for_backward(input1, input2)
-
- kH, kW = ctx.kernel_size = _pair(kernel_size)
- patch_size = max_displacement * 2 + 1
- ctx.patch_size = patch_size
- dH, dW = ctx.stride = _pair(stride)
- padH, padW = ctx.padding = _pair(padding)
- dilationH, dilationW = ctx.dilation = _pair(dilation)
- dilation_patchH, dilation_patchW = ctx.dilation_patch = _pair(
- dilation_patch)
-
- output_size = CorrelationFunction._output_size(ctx, input1)
-
- output = input1.new_zeros(output_size)
-
- ext_module.correlation_forward(
- input1,
- input2,
- output,
- kH=kH,
- kW=kW,
- patchH=patch_size,
- patchW=patch_size,
- padH=padH,
- padW=padW,
- dilationH=dilationH,
- dilationW=dilationW,
- dilation_patchH=dilation_patchH,
- dilation_patchW=dilation_patchW,
- dH=dH,
- dW=dW)
-
- return output
-
- @staticmethod
- @once_differentiable
- def backward(ctx, grad_output):
- input1, input2 = ctx.saved_tensors
-
- kH, kW = ctx.kernel_size
- patch_size = ctx.patch_size
- padH, padW = ctx.padding
- dilationH, dilationW = ctx.dilation
- dilation_patchH, dilation_patchW = ctx.dilation_patch
- dH, dW = ctx.stride
- grad_input1 = torch.zeros_like(input1)
- grad_input2 = torch.zeros_like(input2)
-
- ext_module.correlation_backward(
- grad_output,
- input1,
- input2,
- grad_input1,
- grad_input2,
- kH=kH,
- kW=kW,
- patchH=patch_size,
- patchW=patch_size,
- padH=padH,
- padW=padW,
- dilationH=dilationH,
- dilationW=dilationW,
- dilation_patchH=dilation_patchH,
- dilation_patchW=dilation_patchW,
- dH=dH,
- dW=dW)
- return grad_input1, grad_input2, None, None, None, None, None, None
-
- @staticmethod
- def _output_size(ctx, input1):
- iH, iW = input1.size(2), input1.size(3)
- batch_size = input1.size(0)
- kH, kW = ctx.kernel_size
- patch_size = ctx.patch_size
- dH, dW = ctx.stride
- padH, padW = ctx.padding
- dilationH, dilationW = ctx.dilation
- dilatedKH = (kH - 1) * dilationH + 1
- dilatedKW = (kW - 1) * dilationW + 1
-
- oH = int((iH + 2 * padH - dilatedKH) / dH + 1)
- oW = int((iW + 2 * padW - dilatedKW) / dW + 1)
-
- output_size = (batch_size, patch_size, patch_size, oH, oW)
- return output_size
-
-
-class Correlation(nn.Module):
- r"""Correlation operator
-
- This correlation operator works for optical flow correlation computation.
-
-    There are two batched tensors with shape :math:`(N, C, H, W)`,
-    and the correlation output's shape is :math:`(N, max\_displacement \times
-    2 + 1, max\_displacement \times 2 + 1, H_{out}, W_{out})`
-
- where
-
- .. math::
- H_{out} = \left\lfloor\frac{H_{in} + 2 \times padding -
- dilation \times (kernel\_size - 1) - 1}
- {stride} + 1\right\rfloor
-
- .. math::
- W_{out} = \left\lfloor\frac{W_{in} + 2 \times padding - dilation
- \times (kernel\_size - 1) - 1}
- {stride} + 1\right\rfloor
-
-    the correlation item :math:`(N_i, dx, dy)` is formed by taking the sliding
-    window convolution between input1 and shifted input2,
-
- .. math::
- Corr(N_i, dx, dy) =
- \sum_{c=0}^{C-1}
- input1(N_i, c) \star
- \mathcal{S}(input2(N_i, c), dy, dx)
-
-    where :math:`\star` is the valid 2d sliding window convolution operator,
-    and :math:`\mathcal{S}` means shifting the input features (zero-padded at
-    the margins), and :math:`dx, dy` are the shifting distances, :math:`dx, dy
-    \in [-max\_displacement \times dilation\_patch, max\_displacement \times
-    dilation\_patch]`.
-
-    Args:
-        kernel_size (int): The size of the sliding window, i.e. the local
-            neighborhood representing the center points involved in the
-            correlation computation. Defaults to 1.
-        max_displacement (int): The radius for computing the correlation
-            volume; the actual working space can be dilated by
-            dilation_patch. Defaults to 1.
-        stride (int): The stride of the sliding blocks in the input spatial
-            dimensions. Defaults to 1.
-        padding (int): Zero padding added to all four sides of input1.
-            Defaults to 0.
-        dilation (int): The spacing of the local neighborhood involved in
-            the correlation. Defaults to 1.
-        dilation_patch (int): The spacing between positions at which the
-            correlation is computed. Defaults to 1.
- """
-
- def __init__(self,
- kernel_size: int = 1,
- max_displacement: int = 1,
- stride: int = 1,
- padding: int = 0,
- dilation: int = 1,
- dilation_patch: int = 1) -> None:
- super().__init__()
- self.kernel_size = kernel_size
- self.max_displacement = max_displacement
- self.stride = stride
- self.padding = padding
- self.dilation = dilation
- self.dilation_patch = dilation_patch
-
- def forward(self, input1: Tensor, input2: Tensor) -> Tensor:
- return CorrelationFunction.apply(input1, input2, self.kernel_size,
- self.max_displacement, self.stride,
- self.padding, self.dilation,
- self.dilation_patch)
-
- def __repr__(self) -> str:
- s = self.__class__.__name__
- s += f'(kernel_size={self.kernel_size}, '
- s += f'max_displacement={self.max_displacement}, '
- s += f'stride={self.stride}, '
- s += f'padding={self.padding}, '
- s += f'dilation={self.dilation}, '
- s += f'dilation_patch={self.dilation_patch})'
- return s
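
A brief usage sketch for the `Correlation` module deleted above. This is a hypothetical illustration, assuming an mmcv build with its compiled extensions is available (the op may require a CUDA build); the shape comment follows the `_output_size` formula in the file.

```python
# Hedged sketch: assumes mmcv is installed with its compiled `_ext` ops.
import torch
from mmcv.ops import Correlation

corr = Correlation(max_displacement=4)   # patch_size = 2 * 4 + 1 = 9
feat1 = torch.randn(2, 64, 32, 32)
feat2 = torch.randn(2, 64, 32, 32)
out = corr(feat1, feat2)
# With kernel_size=1, stride=1, padding=0, dilation=1:
# oH = (32 + 2*0 - 1) // 1 + 1 = 32, so out.shape == (2, 9, 9, 32, 32)
```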
diff --git a/spaces/SQSora/VITS-Umamusume-voice-synthesizer/README.md b/spaces/SQSora/VITS-Umamusume-voice-synthesizer/README.md
deleted file mode 100644
index 1b24e6efdb04cb1460e4fe3257d2303677c5a0e1..0000000000000000000000000000000000000000
--- a/spaces/SQSora/VITS-Umamusume-voice-synthesizer/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Multilingual Anime TTS
-emoji: 🎙🐴
-colorFrom: green
-colorTo: gray
-sdk: gradio
-sdk_version: 3.7
-app_file: app.py
-pinned: false
-duplicated_from: Plachta/VITS-Umamusume-voice-synthesizer
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/SShaik/SS-06-SL-AI-Image-Music-Video-UI-UX-URL/Article.md b/spaces/SShaik/SS-06-SL-AI-Image-Music-Video-UI-UX-URL/Article.md
deleted file mode 100644
index c7f042e4c9c0f401731f009842a325e2d1386bf5..0000000000000000000000000000000000000000
--- a/spaces/SShaik/SS-06-SL-AI-Image-Music-Video-UI-UX-URL/Article.md
+++ /dev/null
@@ -1,51 +0,0 @@
-
-# Image Generation for Art, Marketing, Ideation, Design, and Use in Business
-
-A number of AI pipeline strategies have evolved on the open market that allow you to generate images using a combination of image prompts and word prompts. This brief analysis gives an idea of the prompting capabilities, as well as the image rendering techniques, used to generate art from a human understanding of images and of the text used to describe a scene.
-
-First, a top-five list of state-of-the-art generators, both free and paid, is worth consideration.
-
-1) Midjourney - a Discord-server-based chatbot AI that accepts /imagine prompts and can generate multiple images at a time. It is best at parallel creation, with high accuracy even for photoreal creations.
-2) Artbreeder - A multi-capability tool that now features a Collager to assist in starting an image composition. By far the most innovative approach, doing a great job of combining the right partial elements in a scene.
-3) Dreamstudio - A Huggingface-derived art program in beta that uses Stable Diffusion to create highly accurate art and images.
-4) Nightcafe - A credit-based AI creation app that can generate video dives into an AI art piece, producing some of the best experiences in video.
-5) RunwayML - a quintessential tool for processing and morphing audio and video tracks that rivals most high-end video editing tools.
-
-These five tools are among the best cloud-based AI pipeline programs and allow anyone to easily begin building a portfolio of art.
-
-The prompting capabilities often involve having a set of text-based prompts to get started. Most tools also accept a starter image, which could be an example of what you would like to create.
-
-URL Links:
-1) Collager: https://www.artbreeder.com/beta/collage
-2) NightCafe: https://creator.nightcafe.studio/explore
-3) Midjourney: https://www.midjourney.com/app/users/779773261440614430/
-4) Dreamstudio: https://beta.dreamstudio.ai/dream
-5) RunwayML: https://app.runwayml.com/
-
-## Getting Started and Organizing Your AI Pipeline and Process
-
-Any great strategy has a number of steps that combine all the capabilities at your disposal. It is useful to note how you can easily fit these together into a process that works for you.
-
-The techniques worth noting are listed below. Considering how you will use them will make your pipeline easier and more automated, allowing you to spend the majority of your time curating what you have made and ideating what you want to create next.
-
-1) Source materials: Since prompting requires text, and text examples can quickly help you compose good input, it's worth considering and documenting some effective prompts. Nightcafe, with its integration into email, sends you a copy of your creation plus the prompting text, so one option is to use your email account to keep a record of which prompts work for which outputs.
-2) Source materials: Discord, since it is a public chat format, allows you to easily see in bulk what others are using for prompts. There are a number of chat channels designed for people new to the platform, and you can often copy and paste when you see very effective prompts for the material you are looking for.
-3) Source materials: Collager is unique in its ability to add component parts and then dial in the percentage of AI you would like applied. This allows you to add a few image elements that help start your generation.
-4) Source materials: Since images and prompts are going to be your mainstay for inputs, it's worth considering an open standard for storing and retrieving them from anywhere. Github is a good place, since markdown can hold text in table or list format and can reference uploaded images. This is also a good form for portability, since you can later fork and download your repository with a few clicks from anywhere.
-5) Source materials: Google Drive is integrated into the Artbreeder Collager workflow, which allows you to easily expand your work and even compose albums of the ones you like to place in Google Photos. The portfolios you save on different sites offer different degrees of ease when aggregating your collections. Collager, for instance, allows right-click save for instant saving of your creation. Dreamstudio features a history. Midjourney features a profile site for you to store and review creations, and even to trigger Upscales, which are important for getting the highest-resolution output of your creations.
-
-## Social Media integration
-
-Depending on your target "safe for work" exports, it is sometimes important to know which accepted social media outlets you can integrate. Cloud-based interactions are the key to building an audience if you want to scale and share your process with others.
-
-The key social media outlets supported by these tools are listed below, sorted with public open source options first:
-
-1) Github - Github is open at most companies and allows creation of a free space to share your content.
-2) LinkedIn - LinkedIn is acceptable to use at nearly every company.
-3) Twitter - Twitter is supported as a social media outlet at most companies, yet it may also be subject to security restrictions that limit posting but allow read access.
-4) Facebook - Meta's Facebook is a good outlet since it allows creation of large folios of your images along with stories. This venue, however, is locked down at many organizations.
-5) Instagram - Instagram is supported as an output channel for many tools, yet it has decreased in popularity due to the high frequency of ads and pay-for-likes models. While it can still be one of the best places for domain-specific arrangements of images, it is likely locked down in most secure organizations.
-6) Youtube - For video uploads with automated captioning and long-term storage of short- and long-form video, this is essential for any creation you compose as video. It is also useful to review and compose playlists of videos here that speed up your learning: spend some time at Youtube university and keep a record of your keyword searches there, along with your playlists, to accelerate learning.
-7) Gmail - With the ability to move email in and out, it's useful to create and wrap up details within email. Most email policies come with a content limitation (for example, no files larger than 25MB). For this reason, get used to creating project wrap-up archives with WinZip or other compression software. With the convenience of keyword searching, you can usually use this as a base.
-8) Last, worth a mention is Huggingface.com. Like Github, as you become more sophisticated in your public open source capabilities, HuggingFace allows you to wrap up your work using one of three software development kits (Gradio, Streamlit, and HTML5), each with unique AI and UI integration components and features; a minimal Gradio sketch follows this list. If you want to create your own AI pipelines, it also has the open source code and models ready to go to help you on your journey.
-
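
To make the SDK mention in item 8 concrete, here is a minimal Gradio sketch. This is a hypothetical example, not taken from the article; `echo_prompt` is a placeholder for whatever AI pipeline you would wrap.

```python
# Hedged sketch of a minimal Gradio Space; `echo_prompt` is a hypothetical
# stand-in for a real model call.
import gradio as gr

def echo_prompt(prompt: str) -> str:
    # A real pipeline would pass the prompt to an image or text model here.
    return f"You asked for: {prompt}"

demo = gr.Interface(fn=echo_prompt,
                    inputs=gr.Textbox(label="Prompt"),
                    outputs=gr.Textbox(label="Result"))

if __name__ == "__main__":
    demo.launch()
```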
diff --git a/spaces/Sa-m/YoloV5-Party-Symbol-Detector-V1/README.md b/spaces/Sa-m/YoloV5-Party-Symbol-Detector-V1/README.md
deleted file mode 100644
index b1929ed7049baf00b31d9dd70cad22d33dfbd4f6..0000000000000000000000000000000000000000
--- a/spaces/Sa-m/YoloV5-Party-Symbol-Detector-V1/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Political Party Symbol Detector V1
-emoji: 👀
-colorFrom: pink
-colorTo: gray
-sdk: gradio
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/Salama1429/speech-to-speech-translation/app.py b/spaces/Salama1429/speech-to-speech-translation/app.py
deleted file mode 100644
index 3b8a6a06bad8423b6588d1c906fc3ed2f0aba606..0000000000000000000000000000000000000000
--- a/spaces/Salama1429/speech-to-speech-translation/app.py
+++ /dev/null
@@ -1,72 +0,0 @@
-import gradio as gr
-import numpy as np
-import torch
-from datasets import load_dataset
-
-from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor, pipeline
-
-
-device = "cuda:0" if torch.cuda.is_available() else "cpu"
-
-# load speech translation checkpoint
-asr_pipe = pipeline("automatic-speech-recognition", model="openai/whisper-base", device=device)
-
-# load text-to-speech checkpoint and speaker embeddings
-processor = SpeechT5Processor.from_pretrained("Salama1429/TTS_German_Speecht5_finetuned_voxpopuli_nl")
-
-model = SpeechT5ForTextToSpeech.from_pretrained("Salama1429/TTS_German_Speecht5_finetuned_voxpopuli_nl").to(device)
-vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan").to(device)
-
-embeddings_dataset = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
-speaker_embeddings = torch.tensor(embeddings_dataset[7306]["xvector"]).unsqueeze(0)
-
-
-def translate(audio):
- outputs = asr_pipe(audio, max_new_tokens=256, generate_kwargs={"task": "transcribe", "language": "de"})
- return outputs["text"]
-
-
-def synthesise(text):
- inputs = processor(text=text, return_tensors="pt")
- speech = model.generate_speech(inputs["input_ids"].to(device), speaker_embeddings.to(device), vocoder=vocoder)
- return speech.cpu()
-
-
-def speech_to_speech_translation(audio):
- translated_text = translate(audio)
- synthesised_speech = synthesise(translated_text)
- synthesised_speech = (synthesised_speech.numpy() * 32767).astype(np.int16)
- return 16000, synthesised_speech
-
-
-title = "Cascaded STST"
-description = """
-Demo for cascaded speech-to-speech translation (STST), mapping from source speech in any language to target speech in German. The demo uses OpenAI's [Whisper Base](https://huggingface.co/openai/whisper-base) model for speech translation, and the
-[My German TTS](https://huggingface.co/Salama1429/TTS_German_Speecht5_finetuned_voxpopuli_nl) model for text-to-speech.
-
-
-"""
-
-demo = gr.Blocks()
-
-mic_translate = gr.Interface(
- fn=speech_to_speech_translation,
- inputs=gr.Audio(source="microphone", type="filepath"),
- outputs=gr.Audio(label="Generated Speech", type="numpy"),
- title=title,
- description=description,
-)
-
-file_translate = gr.Interface(
- fn=speech_to_speech_translation,
- inputs=gr.Audio(source="upload", type="filepath"),
- outputs=gr.Audio(label="Generated Speech", type="numpy"),
- examples=[["./example.wav"]],
- title=title,
- description=description,
-)
-
-with demo:
- gr.TabbedInterface([mic_translate, file_translate], ["Microphone", "Audio File"])
-
-demo.launch()
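
A note on the audio conversion in `speech_to_speech_translation` above: Gradio's numpy audio output expects 16-bit PCM, so the float waveform in [-1, 1] is scaled by 32767. A minimal sketch of that conversion and its approximate inverse, assuming only numpy:

```python
# Hedged sketch of the PCM16 conversion used in the app above.
import numpy as np

float_wave = np.array([0.0, 0.5, -1.0], dtype=np.float32)  # model output in [-1, 1]
pcm16 = (float_wave * 32767).astype(np.int16)              # -> [0, 16383, -32767]
restored = pcm16.astype(np.float32) / 32767                # approximate inverse
```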
diff --git a/spaces/Salesforce/EDICT/my_half_diffusers/utils/logging.py b/spaces/Salesforce/EDICT/my_half_diffusers/utils/logging.py
deleted file mode 100644
index 1f2d0227b87c66205ceb3391a8e98f5f33285dc4..0000000000000000000000000000000000000000
--- a/spaces/Salesforce/EDICT/my_half_diffusers/utils/logging.py
+++ /dev/null
@@ -1,344 +0,0 @@
-# coding=utf-8
-# Copyright 2020 Optuna, Hugging Face
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-""" Logging utilities."""
-
-import logging
-import os
-import sys
-import threading
-from logging import CRITICAL # NOQA
-from logging import DEBUG # NOQA
-from logging import ERROR # NOQA
-from logging import FATAL # NOQA
-from logging import INFO # NOQA
-from logging import NOTSET # NOQA
-from logging import WARN # NOQA
-from logging import WARNING # NOQA
-from typing import Optional
-
-from tqdm import auto as tqdm_lib
-
-
-_lock = threading.Lock()
-_default_handler: Optional[logging.Handler] = None
-
-log_levels = {
- "debug": logging.DEBUG,
- "info": logging.INFO,
- "warning": logging.WARNING,
- "error": logging.ERROR,
- "critical": logging.CRITICAL,
-}
-
-_default_log_level = logging.WARNING
-
-_tqdm_active = True
-
-
-def _get_default_logging_level():
- """
-    If the DIFFUSERS_VERBOSITY env var is set to one of the valid choices, return that as the new default
-    level. If it is not, fall back to `_default_log_level`.
- """
- env_level_str = os.getenv("DIFFUSERS_VERBOSITY", None)
- if env_level_str:
- if env_level_str in log_levels:
- return log_levels[env_level_str]
- else:
- logging.getLogger().warning(
- f"Unknown option DIFFUSERS_VERBOSITY={env_level_str}, "
- f"has to be one of: { ', '.join(log_levels.keys()) }"
- )
- return _default_log_level
-
-
-def _get_library_name() -> str:
-
- return __name__.split(".")[0]
-
-
-def _get_library_root_logger() -> logging.Logger:
-
- return logging.getLogger(_get_library_name())
-
-
-def _configure_library_root_logger() -> None:
-
- global _default_handler
-
- with _lock:
- if _default_handler:
- # This library has already configured the library root logger.
- return
- _default_handler = logging.StreamHandler() # Set sys.stderr as stream.
- _default_handler.flush = sys.stderr.flush
-
- # Apply our default configuration to the library root logger.
- library_root_logger = _get_library_root_logger()
- library_root_logger.addHandler(_default_handler)
- library_root_logger.setLevel(_get_default_logging_level())
- library_root_logger.propagate = False
-
-
-def _reset_library_root_logger() -> None:
-
- global _default_handler
-
- with _lock:
- if not _default_handler:
- return
-
- library_root_logger = _get_library_root_logger()
- library_root_logger.removeHandler(_default_handler)
- library_root_logger.setLevel(logging.NOTSET)
- _default_handler = None
-
-
-def get_log_levels_dict():
- return log_levels
-
-
-def get_logger(name: Optional[str] = None) -> logging.Logger:
- """
- Return a logger with the specified name.
-
- This function is not supposed to be directly accessed unless you are writing a custom diffusers module.
- """
-
- if name is None:
- name = _get_library_name()
-
- _configure_library_root_logger()
- return logging.getLogger(name)
-
-
-def get_verbosity() -> int:
- """
- Return the current level for the 🤗 Diffusers' root logger as an int.
-
- Returns:
- `int`: The logging level.
-
-
-
-    🤗 Diffusers has the following logging levels:
-
- - 50: `diffusers.logging.CRITICAL` or `diffusers.logging.FATAL`
- - 40: `diffusers.logging.ERROR`
- - 30: `diffusers.logging.WARNING` or `diffusers.logging.WARN`
- - 20: `diffusers.logging.INFO`
- - 10: `diffusers.logging.DEBUG`
-
- """
-
- _configure_library_root_logger()
- return _get_library_root_logger().getEffectiveLevel()
-
-
-def set_verbosity(verbosity: int) -> None:
- """
- Set the verbosity level for the 🤗 Diffusers' root logger.
-
- Args:
- verbosity (`int`):
- Logging level, e.g., one of:
-
- - `diffusers.logging.CRITICAL` or `diffusers.logging.FATAL`
- - `diffusers.logging.ERROR`
- - `diffusers.logging.WARNING` or `diffusers.logging.WARN`
- - `diffusers.logging.INFO`
- - `diffusers.logging.DEBUG`
- """
-
- _configure_library_root_logger()
- _get_library_root_logger().setLevel(verbosity)
-
-
-def set_verbosity_info():
- """Set the verbosity to the `INFO` level."""
- return set_verbosity(INFO)
-
-
-def set_verbosity_warning():
- """Set the verbosity to the `WARNING` level."""
- return set_verbosity(WARNING)
-
-
-def set_verbosity_debug():
- """Set the verbosity to the `DEBUG` level."""
- return set_verbosity(DEBUG)
-
-
-def set_verbosity_error():
- """Set the verbosity to the `ERROR` level."""
- return set_verbosity(ERROR)
-
-
-def disable_default_handler() -> None:
- """Disable the default handler of the HuggingFace Diffusers' root logger."""
-
- _configure_library_root_logger()
-
- assert _default_handler is not None
- _get_library_root_logger().removeHandler(_default_handler)
-
-
-def enable_default_handler() -> None:
- """Enable the default handler of the HuggingFace Diffusers' root logger."""
-
- _configure_library_root_logger()
-
- assert _default_handler is not None
- _get_library_root_logger().addHandler(_default_handler)
-
-
-def add_handler(handler: logging.Handler) -> None:
- """adds a handler to the HuggingFace Diffusers' root logger."""
-
- _configure_library_root_logger()
-
- assert handler is not None
- _get_library_root_logger().addHandler(handler)
-
-
-def remove_handler(handler: logging.Handler) -> None:
- """removes given handler from the HuggingFace Diffusers' root logger."""
-
- _configure_library_root_logger()
-
- assert handler is not None and handler not in _get_library_root_logger().handlers
- _get_library_root_logger().removeHandler(handler)
-
-
-def disable_propagation() -> None:
- """
- Disable propagation of the library log outputs. Note that log propagation is disabled by default.
- """
-
- _configure_library_root_logger()
- _get_library_root_logger().propagate = False
-
-
-def enable_propagation() -> None:
- """
- Enable propagation of the library log outputs. Please disable the HuggingFace Diffusers' default handler to prevent
- double logging if the root logger has been configured.
- """
-
- _configure_library_root_logger()
- _get_library_root_logger().propagate = True
-
-
-def enable_explicit_format() -> None:
- """
- Enable explicit formatting for every HuggingFace Diffusers' logger. The explicit formatter is as follows:
- ```
- [LEVELNAME|FILENAME|LINE NUMBER] TIME >> MESSAGE
- ```
- All handlers currently bound to the root logger are affected by this method.
- """
- handlers = _get_library_root_logger().handlers
-
- for handler in handlers:
- formatter = logging.Formatter("[%(levelname)s|%(filename)s:%(lineno)s] %(asctime)s >> %(message)s")
- handler.setFormatter(formatter)
-
-
-def reset_format() -> None:
- """
- Resets the formatting for HuggingFace Diffusers' loggers.
-
- All handlers currently bound to the root logger are affected by this method.
- """
- handlers = _get_library_root_logger().handlers
-
- for handler in handlers:
- handler.setFormatter(None)
-
-
-def warning_advice(self, *args, **kwargs):
- """
-    This method is identical to `logger.warning()`, but if env var DIFFUSERS_NO_ADVISORY_WARNINGS=1 is set, this
-    warning will not be printed.
- """
- no_advisory_warnings = os.getenv("DIFFUSERS_NO_ADVISORY_WARNINGS", False)
- if no_advisory_warnings:
- return
- self.warning(*args, **kwargs)
-
-
-logging.Logger.warning_advice = warning_advice
-
-
-class EmptyTqdm:
- """Dummy tqdm which doesn't do anything."""
-
- def __init__(self, *args, **kwargs): # pylint: disable=unused-argument
- self._iterator = args[0] if args else None
-
- def __iter__(self):
- return iter(self._iterator)
-
- def __getattr__(self, _):
- """Return empty function."""
-
- def empty_fn(*args, **kwargs): # pylint: disable=unused-argument
- return
-
- return empty_fn
-
- def __enter__(self):
- return self
-
- def __exit__(self, type_, value, traceback):
- return
-
-
-class _tqdm_cls:
- def __call__(self, *args, **kwargs):
- if _tqdm_active:
- return tqdm_lib.tqdm(*args, **kwargs)
- else:
- return EmptyTqdm(*args, **kwargs)
-
- def set_lock(self, *args, **kwargs):
- self._lock = None
- if _tqdm_active:
- return tqdm_lib.tqdm.set_lock(*args, **kwargs)
-
- def get_lock(self):
- if _tqdm_active:
- return tqdm_lib.tqdm.get_lock()
-
-
-tqdm = _tqdm_cls()
-
-
-def is_progress_bar_enabled() -> bool:
- """Return a boolean indicating whether tqdm progress bars are enabled."""
- global _tqdm_active
- return bool(_tqdm_active)
-
-
-def enable_progress_bar():
- """Enable tqdm progress bar."""
- global _tqdm_active
- _tqdm_active = True
-
-
-def disable_progress_bar():
- """Disable tqdm progress bar."""
- global _tqdm_active
- _tqdm_active = False
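
For reference, a short usage sketch of the verbosity API defined in the deleted module above. The import path is an assumption based on this repo's vendored layout (`my_half_diffusers`); in upstream diffusers the module lives at `diffusers.utils.logging`.

```python
# Hedged sketch: the import path assumes this repo's vendored copy of diffusers.
from my_half_diffusers.utils import logging

logging.set_verbosity_info()      # root logger now emits INFO and above
logging.enable_explicit_format()  # [LEVELNAME|FILENAME|LINE NUMBER] TIME >> MESSAGE

logger = logging.get_logger(__name__)
logger.info("pipeline loaded")
```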
diff --git a/spaces/Sapphire-356/Video2MC/joints_detectors/Alphapose/yolo/bbox.py b/spaces/Sapphire-356/Video2MC/joints_detectors/Alphapose/yolo/bbox.py
deleted file mode 100644
index cce22d297c8e20597f04760416e7719c19941e53..0000000000000000000000000000000000000000
--- a/spaces/Sapphire-356/Video2MC/joints_detectors/Alphapose/yolo/bbox.py
+++ /dev/null
@@ -1,113 +0,0 @@
-from __future__ import division
-
-import torch
-import random
-
-import numpy as np
-import cv2
-
-def confidence_filter(result, confidence):
- conf_mask = (result[:,:,4] > confidence).float().unsqueeze(2)
- result = result*conf_mask
-
- return result
-
-def confidence_filter_cls(result, confidence):
- max_scores = torch.max(result[:,:,5:25], 2)[0]
- res = torch.cat((result, max_scores),2)
- print(res.shape)
-
-
- cond_1 = (res[:,:,4] > confidence).float()
- cond_2 = (res[:,:,25] > 0.995).float()
-
- conf = cond_1 + cond_2
- conf = torch.clamp(conf, 0.0, 1.0)
- conf = conf.unsqueeze(2)
- result = result*conf
- return result
-
-
-
-def get_abs_coord(box):
- box[2], box[3] = abs(box[2]), abs(box[3])
- x1 = (box[0] - box[2]/2) - 1
- y1 = (box[1] - box[3]/2) - 1
- x2 = (box[0] + box[2]/2) - 1
- y2 = (box[1] + box[3]/2) - 1
- return x1, y1, x2, y2
-
-
-
-def sanity_fix(box):
- if (box[0] > box[2]):
- box[0], box[2] = box[2], box[0]
-
- if (box[1] > box[3]):
- box[1], box[3] = box[3], box[1]
-
- return box
-
-def bbox_iou(box1, box2):
- """
- Returns the IoU of two bounding boxes
-
-
- """
- #Get the coordinates of bounding boxes
- b1_x1, b1_y1, b1_x2, b1_y2 = box1[:,0], box1[:,1], box1[:,2], box1[:,3]
- b2_x1, b2_y1, b2_x2, b2_y2 = box2[:,0], box2[:,1], box2[:,2], box2[:,3]
-
-    # get the coordinates of the intersection rectangle
- inter_rect_x1 = torch.max(b1_x1, b2_x1)
- inter_rect_y1 = torch.max(b1_y1, b2_y1)
- inter_rect_x2 = torch.min(b1_x2, b2_x2)
- inter_rect_y2 = torch.min(b1_y2, b2_y2)
-
- #Intersection area
-
- inter_area = torch.max(inter_rect_x2 - inter_rect_x1 + 1,torch.zeros(inter_rect_x2.shape))*torch.max(inter_rect_y2 - inter_rect_y1 + 1, torch.zeros(inter_rect_x2.shape))
-
- #Union Area
- b1_area = (b1_x2 - b1_x1 + 1)*(b1_y2 - b1_y1 + 1)
- b2_area = (b2_x2 - b2_x1 + 1)*(b2_y2 - b2_y1 + 1)
-
- iou = inter_area / (b1_area + b2_area - inter_area)
-
- return iou
-
-
-def pred_corner_coord(prediction):
- #Get indices of non-zero confidence bboxes
- ind_nz = torch.nonzero(prediction[:,:,4]).transpose(0,1).contiguous()
-
- box = prediction[ind_nz[0], ind_nz[1]]
-
-
- box_a = box.new(box.shape)
- box_a[:,0] = (box[:,0] - box[:,2]/2)
- box_a[:,1] = (box[:,1] - box[:,3]/2)
- box_a[:,2] = (box[:,0] + box[:,2]/2)
- box_a[:,3] = (box[:,1] + box[:,3]/2)
- box[:,:4] = box_a[:,:4]
-
- prediction[ind_nz[0], ind_nz[1]] = box
-
- return prediction
-
-
-
-
-def write(x, batches, results, colors, classes):
- c1 = tuple(x[1:3].int())
- c2 = tuple(x[3:5].int())
- img = results[int(x[0])]
- cls = int(x[-1])
- label = "{0}".format(classes[cls])
- color = random.choice(colors)
- cv2.rectangle(img, c1, c2,color, 1)
- t_size = cv2.getTextSize(label, cv2.FONT_HERSHEY_PLAIN, 1 , 1)[0]
- c2 = c1[0] + t_size[0] + 3, c1[1] + t_size[1] + 4
- cv2.rectangle(img, c1, c2,color, -1)
-    cv2.putText(img, label, (c1[0], c1[1] + t_size[1] + 4), cv2.FONT_HERSHEY_PLAIN, 1, [225,255,255], 1)
- return img
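
A quick worked check of `bbox_iou` above, under its inclusive +1 pixel convention (a sketch; `bbox_iou` is assumed importable from this module):

```python
import torch

b1 = torch.tensor([[0., 0., 9., 9.]])    # 10x10 box (inclusive corners)
b2 = torch.tensor([[5., 5., 14., 14.]])  # 10x10 box overlapping by 5 pixels in x and y

# intersection = 5 * 5 = 25, union = 100 + 100 - 25 = 175, IoU = 25 / 175
print(bbox_iou(b1, b2))  # tensor([0.1429])
```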
diff --git a/spaces/Saturdays/Student_Experience/README.md b/spaces/Saturdays/Student_Experience/README.md
deleted file mode 100644
index fb1812b18754068c464634f3483b93268280f3be..0000000000000000000000000000000000000000
--- a/spaces/Saturdays/Student_Experience/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Student_Experience
-emoji: 📈
-colorFrom: green
-colorTo: red
-sdk: gradio
-sdk_version: 2.9.4
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/SenY/Civitai/README.md b/spaces/SenY/Civitai/README.md
deleted file mode 100644
index c53fe7e3ada0727e0fb8d363e126f7badbf7c459..0000000000000000000000000000000000000000
--- a/spaces/SenY/Civitai/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Civitai
-emoji: 🔥
-colorFrom: pink
-colorTo: pink
-sdk: gradio
-sdk_version: 3.39.0
-app_file: app.py
-pinned: false
-license: other
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Shreeraj/SEO_APP/README.md b/spaces/Shreeraj/SEO_APP/README.md
deleted file mode 100644
index 5735a2a7022ada18380206dbc31bd927972e0661..0000000000000000000000000000000000000000
--- a/spaces/Shreeraj/SEO_APP/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: SEO APP
-emoji: 🐠
-colorFrom: pink
-colorTo: indigo
-sdk: streamlit
-sdk_version: 1.27.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Silentlin/DiffSinger/modules/parallel_wavegan/optimizers/__init__.py b/spaces/Silentlin/DiffSinger/modules/parallel_wavegan/optimizers/__init__.py
deleted file mode 100644
index a0e0c5932838281e912079e5784d84d43444a61a..0000000000000000000000000000000000000000
--- a/spaces/Silentlin/DiffSinger/modules/parallel_wavegan/optimizers/__init__.py
+++ /dev/null
@@ -1,2 +0,0 @@
-from torch.optim import * # NOQA
-from .radam import * # NOQA
diff --git a/spaces/SjoerdTeunisse/upscaler/app.py b/spaces/SjoerdTeunisse/upscaler/app.py
deleted file mode 100644
index 1f3736667bfd4e5ac6d9ee2ef9b95416cb80f9c0..0000000000000000000000000000000000000000
--- a/spaces/SjoerdTeunisse/upscaler/app.py
+++ /dev/null
@@ -1,85 +0,0 @@
-import numpy as np
-import cv2
-import onnxruntime
-import gradio as gr
-
-
-def pre_process(img: np.ndarray) -> np.ndarray:
- # H, W, C -> C, H, W
- img = np.transpose(img[:, :, 0:3], (2, 0, 1))
- # C, H, W -> 1, C, H, W
- img = np.expand_dims(img, axis=0).astype(np.float32)
- return img
-
-
-def post_process(img: np.ndarray) -> np.ndarray:
- # 1, C, H, W -> C, H, W
- img = np.squeeze(img)
- # C, H, W -> H, W, C
- img = np.transpose(img, (1, 2, 0))[:, :, ::-1].astype(np.uint8)
- return img
-
-
-def inference(model_path: str, img_array: np.ndarray) -> np.ndarray:
-    # Note: a new ONNX Runtime session is created on every call; cache it if calling repeatedly.
-    options = onnxruntime.SessionOptions()
- options.intra_op_num_threads = 1
- options.inter_op_num_threads = 1
- ort_session = onnxruntime.InferenceSession(model_path, options)
- ort_inputs = {ort_session.get_inputs()[0].name: img_array}
- ort_outs = ort_session.run(None, ort_inputs)
-
- return ort_outs[0]
-
-
-def convert_pil_to_cv2(image):
- # pil_image = image.convert("RGB")
- open_cv_image = np.array(image)
- # RGB to BGR
- open_cv_image = open_cv_image[:, :, ::-1].copy()
- return open_cv_image
-
-
-def upscale(image, model):
- model_path = f"models/{model}.ort"
- img = convert_pil_to_cv2(image)
- if img.ndim == 2:
- img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
-
- if img.shape[2] == 4:
- alpha = img[:, :, 3] # GRAY
- alpha = cv2.cvtColor(alpha, cv2.COLOR_GRAY2BGR) # BGR
- alpha_output = post_process(inference(model_path, pre_process(alpha))) # BGR
- alpha_output = cv2.cvtColor(alpha_output, cv2.COLOR_BGR2GRAY) # GRAY
-
- img = img[:, :, 0:3] # BGR
- image_output = post_process(inference(model_path, pre_process(img))) # BGR
- image_output = cv2.cvtColor(image_output, cv2.COLOR_BGR2BGRA) # BGRA
- image_output[:, :, 3] = alpha_output
-
- elif img.shape[2] == 3:
- image_output = post_process(inference(model_path, pre_process(img))) # BGR
-
- return image_output
-
-
-css = ".output-image, .input-image, .image-preview {height: 480px !important} "
-model_choices = ["modelx2", "modelx2 25 JXL", "modelx4", "minecraft_modelx4"]
-
-gr.Interface(
- fn=upscale,
- inputs=[
- gr.inputs.Image(type="pil", label="Input Image"),
- gr.inputs.Radio(
- model_choices,
- type="value",
- default=None,
- label="Choose Upscaler",
- optional=False,
- ),
- ],
- outputs="image",
- title="Image Upscaling 🦆",
- description="Model: [Anchor-based Plain Net for Mobile Image Super-Resolution](https://arxiv.org/abs/2105.09750). Repository: [SR Mobile PyTorch](https://github.com/w11wo/sr_mobile_pytorch)",
- allow_flagging="never",
- css=css,
-).launch()
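
For reference, `pre_process` and `post_process` above are near-inverses around the ONNX call: shapes and dtypes round-trip, though `post_process` additionally reverses the channel order. A small sketch (functions assumed imported from this app module):

```python
import numpy as np

img = np.zeros((64, 64, 3), dtype=np.uint8)  # H, W, C input as OpenCV produces it
x = pre_process(img)
print(x.shape, x.dtype)  # (1, 3, 64, 64) float32 -- the NCHW layout ONNX Runtime expects
y = post_process(x)
print(y.shape, y.dtype)  # (64, 64, 3) uint8; note post_process also flips channel order
```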
diff --git a/spaces/Skyler123/TangGPT/run_Linux.sh b/spaces/Skyler123/TangGPT/run_Linux.sh
deleted file mode 100644
index 62af07283093d8e580763d7acfe493c3d88e7b08..0000000000000000000000000000000000000000
--- a/spaces/Skyler123/TangGPT/run_Linux.sh
+++ /dev/null
@@ -1,25 +0,0 @@
-#!/bin/bash
-
-# Get the directory containing this script
-script_dir=$(dirname "$0")
-
-# Change the working directory to the script's directory
-cd "$script_dir"
-
-# Check whether the Git repository has updates
-git remote update
-pwd
-
-if ! git status -uno | grep 'up to date' > /dev/null; then
-    # If there are updates, stop the currently running server
-    pkill -f ChuanhuChatbot.py
-
-    # Pull the latest changes
-    git pull
-
-    # Install dependencies
-    pip3 install -r requirements.txt
-
-    # Restart the server
- nohup python3 ChuanhuChatbot.py &
-fi
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/urllib3/util/__init__.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/urllib3/util/__init__.py
deleted file mode 100644
index 4547fc522b690ba2697843edd044f2039a4123a9..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/urllib3/util/__init__.py
+++ /dev/null
@@ -1,49 +0,0 @@
-from __future__ import absolute_import
-
-# For backwards compatibility, provide imports that used to be here.
-from .connection import is_connection_dropped
-from .request import SKIP_HEADER, SKIPPABLE_HEADERS, make_headers
-from .response import is_fp_closed
-from .retry import Retry
-from .ssl_ import (
- ALPN_PROTOCOLS,
- HAS_SNI,
- IS_PYOPENSSL,
- IS_SECURETRANSPORT,
- PROTOCOL_TLS,
- SSLContext,
- assert_fingerprint,
- resolve_cert_reqs,
- resolve_ssl_version,
- ssl_wrap_socket,
-)
-from .timeout import Timeout, current_time
-from .url import Url, get_host, parse_url, split_first
-from .wait import wait_for_read, wait_for_write
-
-__all__ = (
- "HAS_SNI",
- "IS_PYOPENSSL",
- "IS_SECURETRANSPORT",
- "SSLContext",
- "PROTOCOL_TLS",
- "ALPN_PROTOCOLS",
- "Retry",
- "Timeout",
- "Url",
- "assert_fingerprint",
- "current_time",
- "is_connection_dropped",
- "is_fp_closed",
- "get_host",
- "parse_url",
- "make_headers",
- "resolve_cert_reqs",
- "resolve_ssl_version",
- "split_first",
- "ssl_wrap_socket",
- "wait_for_read",
- "wait_for_write",
- "SKIP_HEADER",
- "SKIPPABLE_HEADERS",
-)
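
Most of these re-exports are self-explanatory; `parse_url` is the one most often used directly. A quick sketch, importing from the vendored copy shown here (as pip itself does internally):

```python
from pip._vendor.urllib3.util import parse_url

url = parse_url("https://example.com:8443/path?q=1")
print(url.scheme, url.host, url.port, url.path, url.query)
# https example.com 8443 /path q=1
```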
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/wheel/vendored/packaging/__init__.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/wheel/vendored/packaging/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Tekknoman/SG161222-Realistic_Vision_V1.4/README.md b/spaces/Tekknoman/SG161222-Realistic_Vision_V1.4/README.md
deleted file mode 100644
index 14d22b561ea127e7999aa4767f40816895a239a1..0000000000000000000000000000000000000000
--- a/spaces/Tekknoman/SG161222-Realistic_Vision_V1.4/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: SG161222-Realistic Vision V1.4
-emoji: 👁
-colorFrom: purple
-colorTo: pink
-sdk: gradio
-sdk_version: 3.35.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Terminus0501/vits-uma-genshin-honkai/app.py b/spaces/Terminus0501/vits-uma-genshin-honkai/app.py
deleted file mode 100644
index 92ddafdcd240434f58569b0e6964ef331a971dcf..0000000000000000000000000000000000000000
--- a/spaces/Terminus0501/vits-uma-genshin-honkai/app.py
+++ /dev/null
@@ -1,124 +0,0 @@
-import time
-import gradio as gr
-import utils
-import commons
-from models import SynthesizerTrn
-from text import text_to_sequence
-from torch import no_grad, LongTensor
-import torch
-
-hps_ms = utils.get_hparams_from_file(r'./model/config.json')
-device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
-net_g_ms = SynthesizerTrn(
- len(hps_ms.symbols),
- hps_ms.data.filter_length // 2 + 1,
- hps_ms.train.segment_size // hps_ms.data.hop_length,
- n_speakers=hps_ms.data.n_speakers,
- **hps_ms.model).to(device)
-_ = net_g_ms.eval()
-speakers = hps_ms.speakers
-model, optimizer, learning_rate, epochs = utils.load_checkpoint(r'./model/G_953000.pth', net_g_ms, None)
-
-def get_text(text, hps):
- text_norm, clean_text = text_to_sequence(text, hps.symbols, hps.data.text_cleaners)
- if hps.data.add_blank:
- text_norm = commons.intersperse(text_norm, 0)
- text_norm = LongTensor(text_norm)
- return text_norm, clean_text
-
-def vits(text, language, speaker_id, noise_scale, noise_scale_w, length_scale):
- start = time.perf_counter()
- if not len(text):
- return "输入文本不能为空!", None, None
- text = text.replace('\n', ' ').replace('\r', '').replace(" ", "")
- if len(text) > 500:
- return f"输入文字过长!{len(text)}>100", None, None
- if language == 0:
- text = f"[ZH]{text}[ZH]"
- elif language == 1:
- text = f"[JA]{text}[JA]"
- else:
- text = f"{text}"
- stn_tst, clean_text = get_text(text, hps_ms)
- with no_grad():
- x_tst = stn_tst.unsqueeze(0)
- x_tst_lengths = LongTensor([stn_tst.size(0)])
- speaker_id = LongTensor([speaker_id])
- audio = net_g_ms.infer(x_tst, x_tst_lengths, sid=speaker_id, noise_scale=noise_scale, noise_scale_w=noise_scale_w,
- length_scale=length_scale)[0][0, 0].data.cpu().float().numpy()
-
- return "生成成功!", (22050, audio), f"生成耗时 {round(time.perf_counter()-start, 2)} s"
-
-def search_speaker(search_value):
- for s in speakers:
- if search_value == s:
- return s
- for s in speakers:
- if search_value in s:
- return s
-
-def change_lang(language):
- if language == 0:
- return 0.6, 0.668, 1.2
- else:
- return 0.6, 0.668, 1.1
-
-download_audio_js = """
-() =>{{
- let root = document.querySelector("body > gradio-app");
- if (root.shadowRoot != null)
- root = root.shadowRoot;
- let audio = root.querySelector("#tts-audio").querySelector("audio");
- let text = root.querySelector("#input-text").querySelector("textarea");
- if (audio == undefined)
- return;
- text = text.value;
- if (text == undefined)
- text = Math.floor(Math.random()*100000000);
- audio = audio.src;
- let oA = document.createElement("a");
- oA.download = text.substr(0, 20)+'.wav';
- oA.href = audio;
- document.body.appendChild(oA);
- oA.click();
- oA.remove();
-}}
-"""
-
-if __name__ == '__main__':
- with gr.Blocks() as app:
- gr.Markdown(
- "#
- """)
-demo.queue().launch(debug=True)
\ No newline at end of file
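
The `vits()` entry point above wraps the text in language tags before synthesis and returns a status string, a (sample_rate, waveform) pair, and a timing string. A minimal call sketch, assuming the checkpoint and config under `./model/` are present (the scale values are the Chinese defaults returned by `change_lang(0)`):

```python
status, (sr, audio), elapsed = vits(
    "你好，世界", language=0, speaker_id=0,
    noise_scale=0.6, noise_scale_w=0.668, length_scale=1.2,
)
print(status, sr, audio.shape, elapsed)  # e.g. Generation succeeded! 22050 (N,) Generated in 1.23 s
```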
diff --git a/spaces/awacke1/Sentence2Paragraph/app.py b/spaces/awacke1/Sentence2Paragraph/app.py
deleted file mode 100644
index d4166b071a8d943ea832318840ad761620cf0d09..0000000000000000000000000000000000000000
--- a/spaces/awacke1/Sentence2Paragraph/app.py
+++ /dev/null
@@ -1,50 +0,0 @@
-import gradio as gr
-import os
-from transformers import pipeline
-title = "📗Health and Mindful Story Gen❤️"
-examples = [
- ["Mental Body Scan"],
- ["Stretch, Calm, Breath"],
- ["Relaxed Seat Breath"],
- ["Walk Feel"],
- ["Brain gamification"],
- ["alleviating stress"],
- ["helping breathing, satisfaction"],
- ["Relieve Stress, Build Support"],
- ["Relaxation Response"],
- ["Deep Breaths"],
- ["Delete Not Helpful Thoughts"],
- ["Strengthen Helpful"],
- ["Reprogram Pain Stress Reactions"],
- ["Sleep Better and Find Joy"],
- ["Yoga Sleep"],
- ["Being a Happier and Healthier Person"],
- ["Relieve Pain"],
- ["Learn to Use Mindfulness to Affect Well Being"],
- ["Build and Boost Mental Strength"],
- ["Spending Time Outdoors"],
- ["Daily Routine Tasks"],
- ["Eating and Drinking - Find Healthy Nutrition Habits"],
- ["Drinking - Find Reasons and Cut Back or Quit Entirely"],
- ["Feel better each day when you awake by"],
- ["Feel better physically by"],
- ["Practicing mindfulness each day"],
- ["Be happier by"],
- ["Meditation can improve health"],
- ["Spending time outdoors"],
- ["Stress is relieved by quieting your mind, getting exercise and time with nature"],
- ["Break the cycle of stress and anxiety"],
- ["Feel calm in stressful situations"],
- ["Deal with work pressure"],
- ["Learn to reduce feelings of overwhelmed"]
-]
-from gradio import inputs
-from gradio.inputs import Textbox
-from gradio import outputs
-
-HF_TOKEN = os.environ.get("HF_TOKEN")  # read the token from the Space's secrets: copy the HF_TOKEN value from your profile's access-token settings into this repo's settings
-generator2 = gr.Interface.load("huggingface/EleutherAI/gpt-neo-2.7B", api_key=HF_TOKEN)  # passing api_key=HF_TOKEN avoids the anonymous quota error
-generator3 = gr.Interface.load("huggingface/EleutherAI/gpt-j-6B", api_key=HF_TOKEN)
-generator1 = gr.Interface.load("huggingface/gpt2-large", api_key=HF_TOKEN)
-gr.Parallel(generator1, generator2, generator3, inputs=gr.inputs.Textbox(lines=5, label="Enter a sentence to get another sentence."),
- title=title, examples=examples).launch(share=False)
\ No newline at end of file
diff --git a/spaces/awacke1/Whisper2ChatUsingInferenceEndpoints/app.py b/spaces/awacke1/Whisper2ChatUsingInferenceEndpoints/app.py
deleted file mode 100644
index 2a0c1d155f88b52e21b8525bcb22efbd97ed7e99..0000000000000000000000000000000000000000
--- a/spaces/awacke1/Whisper2ChatUsingInferenceEndpoints/app.py
+++ /dev/null
@@ -1,56 +0,0 @@
-import requests
-import pytz
-import streamlit as st
-import os
-
-from datetime import datetime
-from audio_recorder_streamlit import audio_recorder
-
-API_URL = 'https://tonpixzfvq3791u9.us-east-1.aws.endpoints.huggingface.cloud'
-headers = {
- "Authorization": "Bearer XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
- "Content-Type": "audio/wav"
-}
-
-def query(filename):
- with open(filename, "rb") as f:
- data = f.read()
- response = requests.post(API_URL, headers=headers, data=data)
- return response.json()
-
-def generate_filename(prompt, file_type):
- central = pytz.timezone('US/Central')
- safe_date_time = datetime.now(central).strftime("%m%d_%H%M")
- replaced_prompt = prompt.replace(" ", "_").replace("\n", "_")
- safe_prompt = "".join(x for x in replaced_prompt if x.isalnum() or x == "_")[:90]
- return f"{safe_date_time}_{safe_prompt}.{file_type}"
-
-# 10. Audio recorder to Wav file:
-def save_and_play_audio(audio_recorder):
- audio_bytes = audio_recorder()
- if audio_bytes:
- filename = generate_filename("Recording", "wav")
- with open(filename, 'wb') as f:
- f.write(audio_bytes)
- st.audio(audio_bytes, format="audio/wav")
- return filename
-
-# 9B. Speech transcription to file output - OPENAI Whisper
-def transcribe_audio(filename):
- output = query(filename)
- return output
-
-
-def main():
- st.title("Speech to Text")
- st.write("Record your speech and get the text.")
-
- # Audio, transcribe, GPT:
- filename = save_and_play_audio(audio_recorder)
- if filename is not None:
- transcription = transcribe_audio(filename)
- st.write(transcription)
-
-if __name__ == "__main__":
- main()
-
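
`generate_filename` above is the one pure helper here; a worked example of its sanitization (the prefix depends on the current US/Central time, and the function is assumed to run inside this app module):

```python
print(generate_filename("Hello, world!\nSecond line", "wav"))
# e.g. 0413_0935_Hello_world_Second_line.wav -- punctuation dropped, whitespace -> underscores
```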
diff --git a/spaces/azizbarank/Turkish-Sentiment-Analysis/app.py b/spaces/azizbarank/Turkish-Sentiment-Analysis/app.py
deleted file mode 100644
index 93e8281d72a7cc428107c840c8f8d74fcb5c35a2..0000000000000000000000000000000000000000
--- a/spaces/azizbarank/Turkish-Sentiment-Analysis/app.py
+++ /dev/null
@@ -1,48 +0,0 @@
-import os
-os.system("pip install torch")
-os.system("pip install transformers")
-os.system("pip install sentencepiece")
-
-import streamlit as st
-from transformers import pipeline
-from transformers import AutoTokenizer, AutoModelForSequenceClassification
-
-tokenizer = AutoTokenizer.from_pretrained("azizbarank/distilbert-base-turkish-cased-sentiment")
-model = AutoModelForSequenceClassification.from_pretrained("azizbarank/distilbert-base-turkish-cased-sentiment")
-
-def classify(text):
-    # The pipeline is rebuilt on each call; fine for a demo, but cache it for heavier use.
-    cls = pipeline("text-classification", model=model, tokenizer=tokenizer)
-    return cls(text)[0]['label']
-
-
-site_header = st.container()
-text_input = st.container()
-model_results = st.container()
-
-with site_header:
- st.title('Turkish Sentiment Analysis 😀😠')
- st.markdown(
- """
-    A [distilled Turkish BERT model](https://huggingface.co/dbmdz/distilbert-base-turkish-cased) that I fine-tuned on the [sepidmnorozy/Turkish_sentiment](https://huggingface.co/datasets/sepidmnorozy/Turkish_sentiment) dataset, which consists largely of reviews of services and places.
-
- For more information on the dataset:
-
- * [Hugging Face](https://huggingface.co/datasets/sepidmnorozy/Turkish_sentiment)
- """
-)
-
-with text_input:
- st.header('Is Your Review Considered Positive or Negative?')
-    st.write("""*Please note that predictions reflect how the model was trained, so they may not always be accurate.*""")
- user_text = st.text_input('Enter Text', max_chars=300)
-
-with model_results:
- st.subheader('Prediction:')
- if user_text:
- prediction = classify(user_text)
-
- if prediction == "LABEL_0":
- st.subheader('**Negative**')
- else:
- st.subheader('**Positive**')
- st.text('')
\ No newline at end of file
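
`classify()` above returns the raw model label, which the Streamlit blocks then map to a verdict: `LABEL_0` is negative and anything else positive. A sketch (assumes the model download succeeds; the Turkish sample text is illustrative):

```python
label = classify("Harika bir deneyimdi, kesinlikle tavsiye ederim.")  # "It was a great experience..."
print("Negative" if label == "LABEL_0" else "Positive")
```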
diff --git a/spaces/badayvedat/AudioSep/models/CLAP/open_clip/pann_model.py b/spaces/badayvedat/AudioSep/models/CLAP/open_clip/pann_model.py
deleted file mode 100644
index 0d9a8eb0bf897ad6ec04923361b01e5de433b2ef..0000000000000000000000000000000000000000
--- a/spaces/badayvedat/AudioSep/models/CLAP/open_clip/pann_model.py
+++ /dev/null
@@ -1,704 +0,0 @@
-# PANNs: Large-Scale Pretrained Audio Neural Networks for Audio Pattern Recognition
-# Reference from https://github.com/qiuqiangkong/audioset_tagging_cnn
-# Some layers are re-designed for CLAP
-import os
-
-os.environ["NUMBA_CACHE_DIR"] = "/tmp/"
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from torchlibrosa.stft import Spectrogram, LogmelFilterBank
-from torchlibrosa.augmentation import SpecAugmentation
-
-from .utils import do_mixup, interpolate, pad_framewise_output
-from .feature_fusion import iAFF, AFF, DAF
-
-
-def init_layer(layer):
- """Initialize a Linear or Convolutional layer."""
- nn.init.xavier_uniform_(layer.weight)
-
- if hasattr(layer, "bias"):
- if layer.bias is not None:
- layer.bias.data.fill_(0.0)
-
-
-def init_bn(bn):
- """Initialize a Batchnorm layer."""
- bn.bias.data.fill_(0.0)
- bn.weight.data.fill_(1.0)
-
-
-class ConvBlock(nn.Module):
- def __init__(self, in_channels, out_channels):
-
- super(ConvBlock, self).__init__()
-
- self.conv1 = nn.Conv2d(
- in_channels=in_channels,
- out_channels=out_channels,
- kernel_size=(3, 3),
- stride=(1, 1),
- padding=(1, 1),
- bias=False,
- )
-
- self.conv2 = nn.Conv2d(
- in_channels=out_channels,
- out_channels=out_channels,
- kernel_size=(3, 3),
- stride=(1, 1),
- padding=(1, 1),
- bias=False,
- )
-
- self.bn1 = nn.BatchNorm2d(out_channels)
- self.bn2 = nn.BatchNorm2d(out_channels)
-
- self.init_weight()
-
- def init_weight(self):
- init_layer(self.conv1)
- init_layer(self.conv2)
- init_bn(self.bn1)
- init_bn(self.bn2)
-
- def forward(self, input, pool_size=(2, 2), pool_type="avg"):
-
- x = input
- x = F.relu_(self.bn1(self.conv1(x)))
- x = F.relu_(self.bn2(self.conv2(x)))
- if pool_type == "max":
- x = F.max_pool2d(x, kernel_size=pool_size)
- elif pool_type == "avg":
- x = F.avg_pool2d(x, kernel_size=pool_size)
- elif pool_type == "avg+max":
- x1 = F.avg_pool2d(x, kernel_size=pool_size)
- x2 = F.max_pool2d(x, kernel_size=pool_size)
- x = x1 + x2
- else:
-            raise ValueError(f"Unknown pool_type: {pool_type}")
-
- return x
-
-
-class ConvBlock5x5(nn.Module):
- def __init__(self, in_channels, out_channels):
-
- super(ConvBlock5x5, self).__init__()
-
- self.conv1 = nn.Conv2d(
- in_channels=in_channels,
- out_channels=out_channels,
- kernel_size=(5, 5),
- stride=(1, 1),
- padding=(2, 2),
- bias=False,
- )
-
- self.bn1 = nn.BatchNorm2d(out_channels)
-
- self.init_weight()
-
- def init_weight(self):
- init_layer(self.conv1)
- init_bn(self.bn1)
-
- def forward(self, input, pool_size=(2, 2), pool_type="avg"):
-
- x = input
- x = F.relu_(self.bn1(self.conv1(x)))
- if pool_type == "max":
- x = F.max_pool2d(x, kernel_size=pool_size)
- elif pool_type == "avg":
- x = F.avg_pool2d(x, kernel_size=pool_size)
- elif pool_type == "avg+max":
- x1 = F.avg_pool2d(x, kernel_size=pool_size)
- x2 = F.max_pool2d(x, kernel_size=pool_size)
- x = x1 + x2
- else:
-            raise ValueError(f"Unknown pool_type: {pool_type}")
-
- return x
-
-
-class AttBlock(nn.Module):
- def __init__(self, n_in, n_out, activation="linear", temperature=1.0):
- super(AttBlock, self).__init__()
-
- self.activation = activation
- self.temperature = temperature
- self.att = nn.Conv1d(
- in_channels=n_in,
- out_channels=n_out,
- kernel_size=1,
- stride=1,
- padding=0,
- bias=True,
- )
- self.cla = nn.Conv1d(
- in_channels=n_in,
- out_channels=n_out,
- kernel_size=1,
- stride=1,
- padding=0,
- bias=True,
- )
-
- self.bn_att = nn.BatchNorm1d(n_out)
- self.init_weights()
-
- def init_weights(self):
- init_layer(self.att)
- init_layer(self.cla)
- init_bn(self.bn_att)
-
- def forward(self, x):
- # x: (n_samples, n_in, n_time)
- norm_att = torch.softmax(torch.clamp(self.att(x), -10, 10), dim=-1)
- cla = self.nonlinear_transform(self.cla(x))
- x = torch.sum(norm_att * cla, dim=2)
- return x, norm_att, cla
-
- def nonlinear_transform(self, x):
- if self.activation == "linear":
- return x
- elif self.activation == "sigmoid":
- return torch.sigmoid(x)
-
-
-class Cnn14(nn.Module):
- def __init__(
- self,
- sample_rate,
- window_size,
- hop_size,
- mel_bins,
- fmin,
- fmax,
- classes_num,
- enable_fusion=False,
- fusion_type="None",
- ):
-
- super(Cnn14, self).__init__()
-
- window = "hann"
- center = True
- pad_mode = "reflect"
- ref = 1.0
- amin = 1e-10
- top_db = None
-
- self.enable_fusion = enable_fusion
- self.fusion_type = fusion_type
-
- # Spectrogram extractor
- self.spectrogram_extractor = Spectrogram(
- n_fft=window_size,
- hop_length=hop_size,
- win_length=window_size,
- window=window,
- center=center,
- pad_mode=pad_mode,
- freeze_parameters=True,
- )
-
- # Logmel feature extractor
- self.logmel_extractor = LogmelFilterBank(
- sr=sample_rate,
- n_fft=window_size,
- n_mels=mel_bins,
- fmin=fmin,
- fmax=fmax,
- ref=ref,
- amin=amin,
- top_db=top_db,
- freeze_parameters=True,
- )
-
- # Spec augmenter
- self.spec_augmenter = SpecAugmentation(
- time_drop_width=64,
- time_stripes_num=2,
- freq_drop_width=8,
- freq_stripes_num=2,
- )
-
- self.bn0 = nn.BatchNorm2d(64)
-
- if (self.enable_fusion) and (self.fusion_type == "channel_map"):
- self.conv_block1 = ConvBlock(in_channels=4, out_channels=64)
- else:
- self.conv_block1 = ConvBlock(in_channels=1, out_channels=64)
- self.conv_block2 = ConvBlock(in_channels=64, out_channels=128)
- self.conv_block3 = ConvBlock(in_channels=128, out_channels=256)
- self.conv_block4 = ConvBlock(in_channels=256, out_channels=512)
- self.conv_block5 = ConvBlock(in_channels=512, out_channels=1024)
- self.conv_block6 = ConvBlock(in_channels=1024, out_channels=2048)
-
- self.fc1 = nn.Linear(2048, 2048, bias=True)
- self.fc_audioset = nn.Linear(2048, classes_num, bias=True)
-
- if (self.enable_fusion) and (
- self.fusion_type in ["daf_1d", "aff_1d", "iaff_1d"]
- ):
- self.mel_conv1d = nn.Sequential(
- nn.Conv1d(64, 64, kernel_size=5, stride=3, padding=2),
- nn.BatchNorm1d(64), # No Relu
- )
- if self.fusion_type == "daf_1d":
- self.fusion_model = DAF()
- elif self.fusion_type == "aff_1d":
- self.fusion_model = AFF(channels=64, type="1D")
- elif self.fusion_type == "iaff_1d":
- self.fusion_model = iAFF(channels=64, type="1D")
-
- if (self.enable_fusion) and (
- self.fusion_type in ["daf_2d", "aff_2d", "iaff_2d"]
- ):
- self.mel_conv2d = nn.Sequential(
- nn.Conv2d(1, 64, kernel_size=(5, 5), stride=(6, 2), padding=(2, 2)),
- nn.BatchNorm2d(64),
- nn.ReLU(inplace=True),
- )
-
- if self.fusion_type == "daf_2d":
- self.fusion_model = DAF()
- elif self.fusion_type == "aff_2d":
- self.fusion_model = AFF(channels=64, type="2D")
- elif self.fusion_type == "iaff_2d":
- self.fusion_model = iAFF(channels=64, type="2D")
- self.init_weight()
-
- def init_weight(self):
- init_bn(self.bn0)
- init_layer(self.fc1)
- init_layer(self.fc_audioset)
-
- def forward(self, input, mixup_lambda=None, device=None):
- """
- Input: (batch_size, data_length)"""
-
- if self.enable_fusion and input["longer"].sum() == 0:
- # if no audio is longer than 10s, then randomly select one audio to be longer
- input["longer"][torch.randint(0, input["longer"].shape[0], (1,))] = True
-
- if not self.enable_fusion:
- x = self.spectrogram_extractor(
- input["waveform"].to(device=device, non_blocking=True)
- ) # (batch_size, 1, time_steps, freq_bins)
- x = self.logmel_extractor(x) # (batch_size, 1, time_steps, mel_bins)
-
- x = x.transpose(1, 3)
- x = self.bn0(x)
- x = x.transpose(1, 3)
- else:
- longer_list = input["longer"].to(device=device, non_blocking=True)
- x = input["mel_fusion"].to(device=device, non_blocking=True)
- longer_list_idx = torch.where(longer_list)[0]
- x = x.transpose(1, 3)
- x = self.bn0(x)
- x = x.transpose(1, 3)
- if self.fusion_type in ["daf_1d", "aff_1d", "iaff_1d"]:
- new_x = x[:, 0:1, :, :].clone().contiguous()
- # local processing
- if len(longer_list_idx) > 0:
- fusion_x_local = x[longer_list_idx, 1:, :, :].clone().contiguous()
- FB, FC, FT, FF = fusion_x_local.size()
- fusion_x_local = fusion_x_local.view(FB * FC, FT, FF)
- fusion_x_local = torch.permute(
- fusion_x_local, (0, 2, 1)
- ).contiguous()
- fusion_x_local = self.mel_conv1d(fusion_x_local)
- fusion_x_local = fusion_x_local.view(
- FB, FC, FF, fusion_x_local.size(-1)
- )
- fusion_x_local = (
- torch.permute(fusion_x_local, (0, 2, 1, 3))
- .contiguous()
- .flatten(2)
- )
- if fusion_x_local.size(-1) < FT:
- fusion_x_local = torch.cat(
- [
- fusion_x_local,
- torch.zeros(
- (FB, FF, FT - fusion_x_local.size(-1)),
- device=device,
- ),
- ],
- dim=-1,
- )
- else:
- fusion_x_local = fusion_x_local[:, :, :FT]
- # 1D fusion
- new_x = new_x.squeeze(1).permute((0, 2, 1)).contiguous()
- new_x[longer_list_idx] = self.fusion_model(
- new_x[longer_list_idx], fusion_x_local
- )
- x = new_x.permute((0, 2, 1)).contiguous()[:, None, :, :]
- else:
- x = new_x
- elif self.fusion_type in ["daf_2d", "aff_2d", "iaff_2d", "channel_map"]:
- x = x # no change
-
- if self.training:
- x = self.spec_augmenter(x)
- # Mixup on spectrogram
- if self.training and mixup_lambda is not None:
- x = do_mixup(x, mixup_lambda)
- if (self.enable_fusion) and (
- self.fusion_type in ["daf_2d", "aff_2d", "iaff_2d"]
- ):
- global_x = x[:, 0:1, :, :]
-
- # global processing
- B, C, H, W = global_x.shape
- global_x = self.conv_block1(global_x, pool_size=(2, 2), pool_type="avg")
- if len(longer_list_idx) > 0:
- local_x = x[longer_list_idx, 1:, :, :].contiguous()
- TH = global_x.size(-2)
- # local processing
- B, C, H, W = local_x.shape
- local_x = local_x.view(B * C, 1, H, W)
- local_x = self.mel_conv2d(local_x)
- local_x = local_x.view(
- B, C, local_x.size(1), local_x.size(2), local_x.size(3)
- )
- local_x = local_x.permute((0, 2, 1, 3, 4)).contiguous().flatten(2, 3)
- TB, TC, _, TW = local_x.size()
- if local_x.size(-2) < TH:
- local_x = torch.cat(
- [
- local_x,
- torch.zeros(
- (TB, TC, TH - local_x.size(-2), TW),
- device=global_x.device,
- ),
- ],
- dim=-2,
- )
- else:
- local_x = local_x[:, :, :TH, :]
-
- global_x[longer_list_idx] = self.fusion_model(
- global_x[longer_list_idx], local_x
- )
- x = global_x
- else:
- x = self.conv_block1(x, pool_size=(2, 2), pool_type="avg")
-
- x = F.dropout(x, p=0.2, training=self.training)
- x = self.conv_block2(x, pool_size=(2, 2), pool_type="avg")
- x = F.dropout(x, p=0.2, training=self.training)
- x = self.conv_block3(x, pool_size=(2, 2), pool_type="avg")
- x = F.dropout(x, p=0.2, training=self.training)
- x = self.conv_block4(x, pool_size=(2, 2), pool_type="avg")
- x = F.dropout(x, p=0.2, training=self.training)
- x = self.conv_block5(x, pool_size=(2, 2), pool_type="avg")
- x = F.dropout(x, p=0.2, training=self.training)
- x = self.conv_block6(x, pool_size=(1, 1), pool_type="avg")
- x = F.dropout(x, p=0.2, training=self.training)
- x = torch.mean(x, dim=3)
-
- latent_x1 = F.max_pool1d(x, kernel_size=3, stride=1, padding=1)
- latent_x2 = F.avg_pool1d(x, kernel_size=3, stride=1, padding=1)
- latent_x = latent_x1 + latent_x2
- latent_x = latent_x.transpose(1, 2)
- latent_x = F.relu_(self.fc1(latent_x))
- latent_output = interpolate(latent_x, 32)
-
- (x1, _) = torch.max(x, dim=2)
- x2 = torch.mean(x, dim=2)
- x = x1 + x2
- x = F.dropout(x, p=0.5, training=self.training)
- x = F.relu_(self.fc1(x))
- embedding = F.dropout(x, p=0.5, training=self.training)
- clipwise_output = torch.sigmoid(self.fc_audioset(x))
-
- output_dict = {
- "clipwise_output": clipwise_output,
- "embedding": embedding,
- "fine_grained_embedding": latent_output,
- }
- return output_dict
-
-
-class Cnn6(nn.Module):
- def __init__(
- self,
- sample_rate,
- window_size,
- hop_size,
- mel_bins,
- fmin,
- fmax,
- classes_num,
- enable_fusion=False,
- fusion_type="None",
- ):
-
- super(Cnn6, self).__init__()
-
- window = "hann"
- center = True
- pad_mode = "reflect"
- ref = 1.0
- amin = 1e-10
- top_db = None
-
- self.enable_fusion = enable_fusion
- self.fusion_type = fusion_type
-
- # Spectrogram extractor
- self.spectrogram_extractor = Spectrogram(
- n_fft=window_size,
- hop_length=hop_size,
- win_length=window_size,
- window=window,
- center=center,
- pad_mode=pad_mode,
- freeze_parameters=True,
- )
-
- # Logmel feature extractor
- self.logmel_extractor = LogmelFilterBank(
- sr=sample_rate,
- n_fft=window_size,
- n_mels=mel_bins,
- fmin=fmin,
- fmax=fmax,
- ref=ref,
- amin=amin,
- top_db=top_db,
- freeze_parameters=True,
- )
-
- # Spec augmenter
- self.spec_augmenter = SpecAugmentation(
- time_drop_width=64,
- time_stripes_num=2,
- freq_drop_width=8,
- freq_stripes_num=2,
- )
-
- self.bn0 = nn.BatchNorm2d(64)
-
- self.conv_block1 = ConvBlock5x5(in_channels=1, out_channels=64)
- self.conv_block2 = ConvBlock5x5(in_channels=64, out_channels=128)
- self.conv_block3 = ConvBlock5x5(in_channels=128, out_channels=256)
- self.conv_block4 = ConvBlock5x5(in_channels=256, out_channels=512)
-
- self.fc1 = nn.Linear(512, 512, bias=True)
- self.fc_audioset = nn.Linear(512, classes_num, bias=True)
-
- self.init_weight()
-
- def init_weight(self):
- init_bn(self.bn0)
- init_layer(self.fc1)
- init_layer(self.fc_audioset)
-
- def forward(self, input, mixup_lambda=None, device=None):
- """
- Input: (batch_size, data_length)"""
-
- x = self.spectrogram_extractor(input) # (batch_size, 1, time_steps, freq_bins)
- x = self.logmel_extractor(x) # (batch_size, 1, time_steps, mel_bins)
-
- x = x.transpose(1, 3)
- x = self.bn0(x)
- x = x.transpose(1, 3)
-
- if self.training:
- x = self.spec_augmenter(x)
-
- # Mixup on spectrogram
- if self.training and mixup_lambda is not None:
- x = do_mixup(x, mixup_lambda)
-
- x = self.conv_block1(x, pool_size=(2, 2), pool_type="avg")
- x = F.dropout(x, p=0.2, training=self.training)
- x = self.conv_block2(x, pool_size=(2, 2), pool_type="avg")
- x = F.dropout(x, p=0.2, training=self.training)
- x = self.conv_block3(x, pool_size=(2, 2), pool_type="avg")
- x = F.dropout(x, p=0.2, training=self.training)
- x = self.conv_block4(x, pool_size=(2, 2), pool_type="avg")
- x = F.dropout(x, p=0.2, training=self.training)
- x = torch.mean(x, dim=3)
-
- latent_x1 = F.max_pool1d(x, kernel_size=3, stride=1, padding=1)
- latent_x2 = F.avg_pool1d(x, kernel_size=3, stride=1, padding=1)
- latent_x = latent_x1 + latent_x2
- latent_x = latent_x.transpose(1, 2)
- latent_x = F.relu_(self.fc1(latent_x))
- latent_output = interpolate(latent_x, 16)
-
- (x1, _) = torch.max(x, dim=2)
- x2 = torch.mean(x, dim=2)
- x = x1 + x2
- x = F.dropout(x, p=0.5, training=self.training)
- x = F.relu_(self.fc1(x))
- embedding = F.dropout(x, p=0.5, training=self.training)
- clipwise_output = torch.sigmoid(self.fc_audioset(x))
-
- output_dict = {
- "clipwise_output": clipwise_output,
- "embedding": embedding,
- "fine_grained_embedding": latent_output,
- }
-
- return output_dict
-
-
-class Cnn10(nn.Module):
- def __init__(
- self,
- sample_rate,
- window_size,
- hop_size,
- mel_bins,
- fmin,
- fmax,
- classes_num,
- enable_fusion=False,
- fusion_type="None",
- ):
-
- super(Cnn10, self).__init__()
-
- window = "hann"
- center = True
- pad_mode = "reflect"
- ref = 1.0
- amin = 1e-10
- top_db = None
-
- self.enable_fusion = enable_fusion
- self.fusion_type = fusion_type
-
- # Spectrogram extractor
- self.spectrogram_extractor = Spectrogram(
- n_fft=window_size,
- hop_length=hop_size,
- win_length=window_size,
- window=window,
- center=center,
- pad_mode=pad_mode,
- freeze_parameters=True,
- )
-
- # Logmel feature extractor
- self.logmel_extractor = LogmelFilterBank(
- sr=sample_rate,
- n_fft=window_size,
- n_mels=mel_bins,
- fmin=fmin,
- fmax=fmax,
- ref=ref,
- amin=amin,
- top_db=top_db,
- freeze_parameters=True,
- )
-
- # Spec augmenter
- self.spec_augmenter = SpecAugmentation(
- time_drop_width=64,
- time_stripes_num=2,
- freq_drop_width=8,
- freq_stripes_num=2,
- )
-
- self.bn0 = nn.BatchNorm2d(64)
-
- self.conv_block1 = ConvBlock(in_channels=1, out_channels=64)
- self.conv_block2 = ConvBlock(in_channels=64, out_channels=128)
- self.conv_block3 = ConvBlock(in_channels=128, out_channels=256)
- self.conv_block4 = ConvBlock(in_channels=256, out_channels=512)
- self.conv_block5 = ConvBlock(in_channels=512, out_channels=1024)
-
- self.fc1 = nn.Linear(1024, 1024, bias=True)
- self.fc_audioset = nn.Linear(1024, classes_num, bias=True)
-
- self.init_weight()
-
- def init_weight(self):
- init_bn(self.bn0)
- init_layer(self.fc1)
- init_layer(self.fc_audioset)
-
- def forward(self, input, mixup_lambda=None, device=None):
- """
- Input: (batch_size, data_length)"""
-
- x = self.spectrogram_extractor(input) # (batch_size, 1, time_steps, freq_bins)
- x = self.logmel_extractor(x) # (batch_size, 1, time_steps, mel_bins)
-
- x = x.transpose(1, 3)
- x = self.bn0(x)
- x = x.transpose(1, 3)
-
- if self.training:
- x = self.spec_augmenter(x)
-
- # Mixup on spectrogram
- if self.training and mixup_lambda is not None:
- x = do_mixup(x, mixup_lambda)
-
- x = self.conv_block1(x, pool_size=(2, 2), pool_type="avg")
- x = F.dropout(x, p=0.2, training=self.training)
- x = self.conv_block2(x, pool_size=(2, 2), pool_type="avg")
- x = F.dropout(x, p=0.2, training=self.training)
- x = self.conv_block3(x, pool_size=(2, 2), pool_type="avg")
- x = F.dropout(x, p=0.2, training=self.training)
- x = self.conv_block4(x, pool_size=(2, 2), pool_type="avg")
- x = F.dropout(x, p=0.2, training=self.training)
- x = self.conv_block5(x, pool_size=(2, 2), pool_type="avg")
- x = F.dropout(x, p=0.2, training=self.training)
- x = torch.mean(x, dim=3)
-
- latent_x1 = F.max_pool1d(x, kernel_size=3, stride=1, padding=1)
- latent_x2 = F.avg_pool1d(x, kernel_size=3, stride=1, padding=1)
- latent_x = latent_x1 + latent_x2
- latent_x = latent_x.transpose(1, 2)
- latent_x = F.relu_(self.fc1(latent_x))
- latent_output = interpolate(latent_x, 32)
-
- (x1, _) = torch.max(x, dim=2)
- x2 = torch.mean(x, dim=2)
- x = x1 + x2
- x = F.dropout(x, p=0.5, training=self.training)
- x = F.relu_(self.fc1(x))
- embedding = F.dropout(x, p=0.5, training=self.training)
- clipwise_output = torch.sigmoid(self.fc_audioset(x))
-
- output_dict = {
- "clipwise_output": clipwise_output,
- "embedding": embedding,
- "fine_grained_embedding": latent_output,
- }
-
- return output_dict
-
-
-def create_pann_model(audio_cfg, enable_fusion=False, fusion_type="None"):
- try:
- ModelProto = eval(audio_cfg.model_name)
- model = ModelProto(
- sample_rate=audio_cfg.sample_rate,
- window_size=audio_cfg.window_size,
- hop_size=audio_cfg.hop_size,
- mel_bins=audio_cfg.mel_bins,
- fmin=audio_cfg.fmin,
- fmax=audio_cfg.fmax,
- classes_num=audio_cfg.class_num,
- enable_fusion=enable_fusion,
- fusion_type=fusion_type,
- )
- return model
-    except Exception as e:
-        raise RuntimeError(
-            f"Model class {audio_cfg.model_name} not found, or the audio cfg does not provide enough parameters."
-        ) from e
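
`create_pann_model` above resolves the class named in `audio_cfg.model_name` via `eval` and forwards the config fields. A construction sketch (the field values follow common AudioSet settings and are assumptions, not values from this repository; note that `Cnn14` expects a dict with a `"waveform"` key, unlike `Cnn6`/`Cnn10`, and `torchlibrosa` must be installed):

```python
from types import SimpleNamespace
import torch

audio_cfg = SimpleNamespace(
    model_name="Cnn14", sample_rate=32000, window_size=1024, hop_size=320,
    mel_bins=64, fmin=50, fmax=14000, class_num=527,  # illustrative values
)
model = create_pann_model(audio_cfg).eval()
with torch.no_grad():
    out = model({"waveform": torch.randn(2, 32000 * 10)})  # two 10-second clips
print(out["clipwise_output"].shape)  # torch.Size([2, 527])
```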
diff --git a/spaces/banana-projects/datasets-card-creator/build/static/css/main.a2993414.chunk.css b/spaces/banana-projects/datasets-card-creator/build/static/css/main.a2993414.chunk.css
deleted file mode 100644
index 09cc25b2b1c7bef49e4bda3d7c1c63184215345e..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/datasets-card-creator/build/static/css/main.a2993414.chunk.css
+++ /dev/null
@@ -1,2 +0,0 @@
-.space-y-8>:not([hidden])~:not([hidden]){--tw-space-y-reverse:0;margin-top:calc(2rem*(1 - var(--tw-space-y-reverse)));margin-bottom:calc(2rem*var(--tw-space-y-reverse))}.divide-y-2>:not([hidden])~:not([hidden]){--tw-divide-y-reverse:0;border-top-width:calc(2px*(1 - var(--tw-divide-y-reverse)));border-bottom-width:calc(2px*var(--tw-divide-y-reverse))}.divide-y>:not([hidden])~:not([hidden]){--tw-divide-y-reverse:0;border-top-width:calc(1px*(1 - var(--tw-divide-y-reverse)));border-bottom-width:calc(1px*var(--tw-divide-y-reverse))}.divide-gray-200>:not([hidden])~:not([hidden]){--tw-divide-opacity:1;border-color:rgba(237,242,247,var(--tw-divide-opacity))}.bg-white{--tw-bg-opacity:1;background-color:rgba(255,255,255,var(--tw-bg-opacity))}.bg-gray-100{--tw-bg-opacity:1;background-color:rgba(247,250,252,var(--tw-bg-opacity))}.border-gray-200{--tw-border-opacity:1;border-color:rgba(237,242,247,var(--tw-border-opacity))}.border-gray-300{--tw-border-opacity:1;border-color:rgba(226,232,240,var(--tw-border-opacity))}.rounded-md{border-radius:.375rem}.rounded-lg{border-radius:.5rem}.border-solid{border-style:solid}.border-none{border-style:none}.border{border-width:1px}.cursor-pointer{cursor:pointer}.block{display:block}.inline-block{display:inline-block}.flex{display:flex}.inline-flex{display:inline-flex}.table{display:table}.items-center{align-items:center}.justify-end{justify-content:flex-end}.justify-center{justify-content:center}.justify-between{justify-content:space-between}.font-sans{font-family:system-ui,-apple-system,BlinkMacSystemFont,"Segoe UI",Roboto,"Helvetica Neue",Arial,"Noto Sans",sans-serif,"Apple Color Emoji","Segoe UI Emoji","Segoe UI Symbol","Noto Color Emoji"}.font-normal{font-weight:400}.font-medium{font-weight:500}.font-extrabold{font-weight:800}.h-10{height:2.5rem}.h-screen{height:100vh}.text-xs{font-size:.75rem}.text-base{font-size:1rem}.text-lg{font-size:1.125rem}.text-xl{font-size:1.25rem}.text-4xl{font-size:2.25rem}.leading-4{line-height:1rem}.mx-auto{margin-left:auto;margin-right:auto}.mt-1{margin-top:.25rem}.ml-1{margin-left:.25rem}.mt-2{margin-top:.5rem}.ml-2{margin-left:.5rem}.mt-4{margin-top:1rem}.mr-4{margin-right:1rem}.ml-4{margin-left:1rem}.mt-5{margin-top:1.25rem}.mt-12{margin-top:3rem}.mb-32{margin-bottom:8rem}.max-h-screen{max-height:100vh}.max-w-xl{max-width:36rem}.max-w-7xl{max-width:80rem}.min-h-full{min-height:100%}.focus\:outline-none:focus{outline:2px solid transparent;outline-offset:2px}.overflow-hidden{overflow:hidden}.overflow-y-auto{overflow-y:auto}.p-2{padding:.5rem}.p-6{padding:1.5rem}.py-2{padding-top:.5rem;padding-bottom:.5rem}.px-3{padding-left:.75rem;padding-right:.75rem}.py-4{padding-top:1rem;padding-bottom:1rem}.px-4{padding-left:1rem;padding-right:1rem}.py-8{padding-top:2rem;padding-bottom:2rem}.py-12{padding-top:3rem;padding-bottom:3rem}.pt-6{padding-top:1.5rem}.absolute{position:absolute}.bottom-0{bottom:0}.left-0{left:0}*{--tw-shadow:0 0 transparent}.shadow-sm{--tw-shadow:0 1px 2px 0 rgba(0,0,0,0.05)}.shadow,.shadow-sm{box-shadow:var(--tw-ring-offset-shadow,0 0 transparent),var(--tw-ring-shadow,0 0 transparent),var(--tw-shadow)}.shadow{--tw-shadow:0 1px 3px 0 rgba(0,0,0,0.1),0 1px 2px 0 rgba(0,0,0,0.06)}*{--tw-ring-inset:var(--tw-empty,/*!*/ /*!*/);--tw-ring-offset-width:0px;--tw-ring-offset-color:#fff;--tw-ring-color:rgba(66,153,225,0.5);--tw-ring-offset-shadow:0 0 transparent;--tw-ring-shadow:0 0 transparent}.focus\:ring-2:focus{--tw-ring-offset-shadow:var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) 
var(--tw-ring-offset-color);--tw-ring-shadow:var(--tw-ring-inset) 0 0 0 calc(2px + var(--tw-ring-offset-width)) var(--tw-ring-color);box-shadow:var(--tw-ring-offset-shadow),var(--tw-ring-shadow),var(--tw-shadow,0 0 transparent)}.focus\:ring-offset-2:focus{--tw-ring-offset-width:2px}.focus\:ring-gray-500:focus{--tw-ring-opacity:1;--tw-ring-color:rgba(160,174,192,var(--tw-ring-opacity))}.text-left{text-align:left}.text-center{text-align:center}.text-gray-500{--tw-text-opacity:1;color:rgba(160,174,192,var(--tw-text-opacity))}.text-gray-600{--tw-text-opacity:1;color:rgba(113,128,150,var(--tw-text-opacity))}.text-gray-700{--tw-text-opacity:1;color:rgba(74,85,104,var(--tw-text-opacity))}.no-underline{text-decoration:none}.w-80{width:20rem}.w-full{width:100%}.gap-6{gap:1.5rem}.grid-cols-12{grid-template-columns:repeat(12,minmax(0,1fr))}.col-span-4{grid-column:span 4/span 4}.col-span-8{grid-column:span 8/span 8}@-webkit-keyframes spin{to{transform:rotate(1turn)}}@keyframes spin{to{transform:rotate(1turn)}}@-webkit-keyframes ping{75%,to{transform:scale(2);opacity:0}}@keyframes ping{75%,to{transform:scale(2);opacity:0}}@-webkit-keyframes pulse{50%{opacity:.5}}@keyframes pulse{50%{opacity:.5}}@-webkit-keyframes bounce{0%,to{transform:translateY(-25%);-webkit-animation-timing-function:cubic-bezier(.8,0,1,1);animation-timing-function:cubic-bezier(.8,0,1,1)}50%{transform:none;-webkit-animation-timing-function:cubic-bezier(0,0,.2,1);animation-timing-function:cubic-bezier(0,0,.2,1)}}@keyframes bounce{0%,to{transform:translateY(-25%);-webkit-animation-timing-function:cubic-bezier(.8,0,1,1);animation-timing-function:cubic-bezier(.8,0,1,1)}50%{transform:none;-webkit-animation-timing-function:cubic-bezier(0,0,.2,1);animation-timing-function:cubic-bezier(0,0,.2,1)}}.grid{display:grid}.col-span-4{grid-column-start:span 4}.col-span-8{grid-column-start:span 8}@media (min-width:500px){.xs\:max-w-xs{max-width:20rem}}@media (min-width:640px){.sm\:text-sm{font-size:.875rem}.sm\:px-6{padding-left:1.5rem;padding-right:1.5rem}.sm\:py-12{padding-top:3rem;padding-bottom:3rem}.sm\:tracking-tight{letter-spacing:-.025em}}@media (min-width:768px){.md\:max-h-xs{max-height:20rem}.md\:max-w-2xl{max-width:42rem}}@media (min-width:1024px){.lg\:max-h-md{max-height:28rem}.lg\:px-8{padding-left:2rem;padding-right:2rem}.lg\:py-16{padding-top:4rem;padding-bottom:4rem}}@media (min-width:1280px){.xl\:max-h-xl{max-height:36rem}.xl\:max-w-4xl{max-width:56rem}}@media (min-width:1650px){.xxl\:max-h-screen{max-height:100vh}.xxl\:min-w-5xl{min-width:64rem}}
-/*# sourceMappingURL=main.a2993414.chunk.css.map */
\ No newline at end of file
diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/MMDLoader.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/MMDLoader.js
deleted file mode 100644
index 152ed7e4aa130127ce73c5b11ecb597111df7210..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/MMDLoader.js
+++ /dev/null
@@ -1,2020 +0,0 @@
-/**
- * @author takahiro / https://github.com/takahirox
- *
- * Dependencies
- * - mmd-parser https://github.com/takahirox/mmd-parser
- * - THREE.TGALoader
- * - THREE.OutlineEffect
- *
- * MMDLoader creates Three.js Objects from MMD resources as
- * PMD, PMX, VMD, and VPD files.
- *
- * PMD/PMX are model data formats, VMD is a motion data format,
- * and VPD is a posing data format used in MMD (Miku Miku Dance).
- *
- * MMD official site
- * - http://www.geocities.jp/higuchuu4/index_e.htm
- *
- * PMD, VMD format (in Japanese)
- * - http://blog.goo.ne.jp/torisu_tetosuki/e/209ad341d3ece2b1b4df24abf619d6e4
- *
- * PMX format
- * - https://gist.github.com/felixjones/f8a06bd48f9da9a4539f
- *
- * TODO
- * - light motion in vmd support.
- * - SDEF support.
- * - uv/material/bone morphing support.
- * - more precise grant skinning support.
- * - shadow support.
- */
-
-THREE.MMDLoader = ( function () {
-
- /**
- * @param {THREE.LoadingManager} manager
- */
- function MMDLoader( manager ) {
-
- this.manager = ( manager !== undefined ) ? manager : THREE.DefaultLoadingManager;
-
- this.loader = new THREE.FileLoader( this.manager );
-
- this.parser = null; // lazy generation
- this.meshBuilder = new MeshBuilder( this.manager );
- this.animationBuilder = new AnimationBuilder();
-
- }
-
- MMDLoader.prototype = {
-
- constructor: MMDLoader,
-
- crossOrigin: 'anonymous',
-
- /**
- * @param {string} crossOrigin
- * @return {THREE.MMDLoader}
- */
- setCrossOrigin: function ( crossOrigin ) {
-
- this.crossOrigin = crossOrigin;
- return this;
-
- },
-
- /**
- * @param {string} animationPath
- * @return {THREE.MMDLoader}
- */
- setAnimationPath: function ( animationPath ) {
-
- this.animationPath = animationPath;
- return this;
-
- },
-
- /**
- * @param {string} path
- * @return {THREE.MMDLoader}
- */
- setPath: function ( path ) {
-
- this.path = path;
- return this;
-
- },
-
- /**
- * @param {string} resourcePath
- * @return {THREE.MMDLoader}
- */
- setResoucePath: function ( resourcePath ) {
-
- this.resourcePath = resourcePath;
- return this;
-
- },
-
- // Load MMD assets as Three.js Object
-
- /**
- * Loads Model file (.pmd or .pmx) as a THREE.SkinnedMesh.
- *
- * @param {string} url - url to Model(.pmd or .pmx) file
- * @param {function} onLoad
- * @param {function} onProgress
- * @param {function} onError
- */
- load: function ( url, onLoad, onProgress, onError ) {
-
- var builder = this.meshBuilder.setCrossOrigin( this.crossOrigin );
-
- // resource path
-
- var resourcePath;
-
- if ( this.resourcePath !== undefined ) {
-
- resourcePath = this.resourcePath;
-
- } else if ( this.path !== undefined ) {
-
- resourcePath = this.path;
-
- } else {
-
- resourcePath = THREE.LoaderUtils.extractUrlBase( url );
-
- }
-
- var modelExtension = this._extractExtension( url ).toLowerCase();
-
- // Should I detect by seeing header?
- if ( modelExtension !== 'pmd' && modelExtension !== 'pmx' ) {
-
- if ( onError ) onError( new Error( 'THREE.MMDLoader: Unknown model file extension .' + modelExtension + '.' ) );
-
- return;
-
- }
-
- this[ modelExtension === 'pmd' ? 'loadPMD' : 'loadPMX' ]( url, function ( data ) {
-
- onLoad( builder.build( data, resourcePath, onProgress, onError ) );
-
- }, onProgress, onError );
-
- },
-
- /**
- * Loads Motion file(s) (.vmd) as a THREE.AnimationClip.
- * If two or more files are specified, they'll be merged.
- *
- * @param {string|Array} url - url(s) to animation(.vmd) file(s)
- * @param {THREE.SkinnedMesh|THREE.Camera} object - tracks will be fitting to this object
- * @param {function} onLoad
- * @param {function} onProgress
- * @param {function} onError
- */
- loadAnimation: function ( url, object, onLoad, onProgress, onError ) {
-
- var builder = this.animationBuilder;
-
- this.loadVMD( url, function ( vmd ) {
-
- onLoad( object.isCamera
- ? builder.buildCameraAnimation( vmd )
- : builder.build( vmd, object ) );
-
- }, onProgress, onError );
-
- },
-
- /**
- * Loads mode file and motion file(s) as an object containing
- * a THREE.SkinnedMesh and a THREE.AnimationClip.
- * Tracks of THREE.AnimationClip are fitting to the model.
- *
- * @param {string} modelUrl - url to Model(.pmd or .pmx) file
-	 * @param {string|Array<string>} vmdUrl - url(s) to animation(.vmd) file(s)
- * @param {function} onLoad
- * @param {function} onProgress
- * @param {function} onError
- */
- loadWithAnimation: function ( modelUrl, vmdUrl, onLoad, onProgress, onError ) {
-
- var scope = this;
-
- this.load( modelUrl, function ( mesh ) {
-
- scope.loadAnimation( vmdUrl, mesh, function ( animation ) {
-
- onLoad( {
- mesh: mesh,
- animation: animation
- } );
-
- }, onProgress, onError );
-
- }, onProgress, onError );
-
- },
-
- // Load MMD assets as Object data parsed by MMDParser
-
- /**
- * Loads .pmd file as an Object.
- *
- * @param {string} url - url to .pmd file
- * @param {function} onLoad
- * @param {function} onProgress
- * @param {function} onError
- */
- loadPMD: function ( url, onLoad, onProgress, onError ) {
-
- var parser = this._getParser();
-
- this.loader
- .setMimeType( undefined )
- .setPath( this.path )
- .setResponseType( 'arraybuffer' )
- .load( url, function ( buffer ) {
-
- onLoad( parser.parsePmd( buffer, true ) );
-
- }, onProgress, onError );
-
- },
-
- /**
- * Loads .pmx file as an Object.
- *
- * @param {string} url - url to .pmx file
- * @param {function} onLoad
- * @param {function} onProgress
- * @param {function} onError
- */
- loadPMX: function ( url, onLoad, onProgress, onError ) {
-
- var parser = this._getParser();
-
- this.loader
- .setMimeType( undefined )
- .setPath( this.path )
- .setResponseType( 'arraybuffer' )
- .load( url, function ( buffer ) {
-
- onLoad( parser.parsePmx( buffer, true ) );
-
- }, onProgress, onError );
-
- },
-
- /**
- * Loads .vmd file as an Object. If two or more files are specified
- * they'll be merged.
- *
- * @param {string|Array} url - url(s) to .vmd file(s)
- * @param {function} onLoad
- * @param {function} onProgress
- * @param {function} onError
- */
- loadVMD: function ( url, onLoad, onProgress, onError ) {
-
- var urls = Array.isArray( url ) ? url : [ url ];
-
- var vmds = [];
- var vmdNum = urls.length;
-
- var parser = this._getParser();
-
- this.loader
- .setMimeType( undefined )
- .setPath( this.animationPath )
- .setResponseType( 'arraybuffer' );
-
- for ( var i = 0, il = urls.length; i < il; i ++ ) {
-
- this.loader.load( urls[ i ], function ( buffer ) {
-
- vmds.push( parser.parseVmd( buffer, true ) );
-
- if ( vmds.length === vmdNum ) onLoad( parser.mergeVmds( vmds ) );
-
- }, onProgress, onError );
-
- }
-
- },
-
- /**
- * Loads .vpd file as an Object.
- *
- * @param {string} url - url to .vpd file
- * @param {boolean} isUnicode
- * @param {function} onLoad
- * @param {function} onProgress
- * @param {function} onError
- */
- loadVPD: function ( url, isUnicode, onLoad, onProgress, onError ) {
-
- var parser = this._getParser();
-
- this.loader
- .setMimeType( isUnicode ? undefined : 'text/plain; charset=shift_jis' )
- .setPath( this.animationPath )
- .setResponseType( 'text' )
- .load( url, function ( text ) {
-
- onLoad( parser.parseVpd( text, true ) );
-
- }, onProgress, onError );
-
- },
-
- // private methods
-
- _extractExtension: function ( url ) {
-
- var index = url.lastIndexOf( '.' );
- return index < 0 ? '' : url.slice( index + 1 );
-
- },
-
- _getParser: function () {
-
- if ( this.parser === null ) {
-
- if ( typeof MMDParser === 'undefined' ) {
-
- throw new Error( 'THREE.MMDLoader: Import MMDParser https://github.com/takahirox/mmd-parser' );
-
- }
-
- this.parser = new MMDParser.Parser();
-
- }
-
- return this.parser;
-
- }
-
- };
-
- // Utilities
-
- /*
-	 * base64-encoded default toon textures (toon00.bmp - toon10.bmp).
- * We don't need to request external toon image files.
- * This idea is from http://www20.atpages.jp/katwat/three.js_r58/examples/mytest37/mmd.three.js
- */
- var DEFAULT_TOON_TEXTURES = [
- 'data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAgCAYAAABzenr0AAAAL0lEQVRYR+3QQREAAAzCsOFfNJPBJ1XQS9r2hsUAAQIECBAgQIAAAQIECBAgsBZ4MUx/ofm2I/kAAAAASUVORK5CYII=',
- 'data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAgCAYAAABzenr0AAAAN0lEQVRYR+3WQREAMBACsZ5/bWiiMvgEBTt5cW37hjsBBAgQIECAwFwgyfYPCCBAgAABAgTWAh8aBHZBl14e8wAAAABJRU5ErkJggg==',
- 'data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAgCAYAAABzenr0AAAAOUlEQVRYR+3WMREAMAwDsYY/yoDI7MLwIiP40+RJklfcCCBAgAABAgTqArfb/QMCCBAgQIAAgbbAB3z/e0F3js2cAAAAAElFTkSuQmCC',
- 'data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAgCAYAAABzenr0AAAAN0lEQVRYR+3WQREAMBACsZ5/B5ilMvgEBTt5cW37hjsBBAgQIECAwFwgyfYPCCBAgAABAgTWAh81dWyx0gFwKAAAAABJRU5ErkJggg==',
- 'data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAgCAYAAABzenr0AAAAOklEQVRYR+3WoREAMAwDsWb/UQtCy9wxTOQJ/oQ8SXKKGwEECBAgQIBAXeDt7f4BAQQIECBAgEBb4AOz8Hzx7WLY4wAAAABJRU5ErkJggg==',
- 'data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAgCAYAAABzenr0AAABPUlEQVRYR+1XwW7CMAy1+f9fZOMysSEOEweEOPRNdm3HbdOyIhAcklPrOs/PLy9RygBALxzcCDQFmgJNgaZAU6Ap0BR4PwX8gsRMVLssMRH5HcpzJEaWL7EVg9F1IHRlyqQohgVr4FGUlUcMJSjcUlDw0zvjeun70cLWmneoyf7NgBTQSniBTQQSuJAZsOnnaczjIMb5hCiuHKxokCrJfVnrctyZL0PkJAJe1HMil4nxeyi3Ypfn1kX51jpPvo/JeCNC4PhVdHdJw2XjBR8brF8PEIhNVn12AgP7uHsTBguBn53MUZCqv7Lp07Pn5k1Ro+uWmUNn7D+M57rtk7aG0Vo73xyF/fbFf0bPJjDXngnGocDTdFhygZjwUQrMNrDcmZlQT50VJ/g/UwNyHpu778+yW+/ksOz/BFo54P4AsUXMfRq7XWsAAAAASUVORK5CYII=',
- 'data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAgCAYAAABzenr0AAACMElEQVRYR+2Xv4pTQRTGf2dubhLdICiii2KnYKHVolhauKWPoGAnNr6BD6CvIVaihYuI2i1ia0BY0MZGRHQXjZj/mSPnnskfNWiWZUlzJ5k7M2cm833nO5Mziej2DWWJRUoCpQKlAntSQCqgw39/iUWAGmh37jrRnVsKlgpiqmkoGVABA7E57fvY+pJDdgKqF6HzFCSADkDq+F6AHABtQ+UMVE5D7zXod7fFNhTEckTbj5XQgHzNN+5tQvc5NG7C6BNkp6D3EmpXHDR+dQAjFLchW3VS9rlw3JBh+B7ys5Cf9z0GW1C/7P32AyBAOAz1q4jGliIH3YPuBnSfQX4OGreTIgEYQb/pBDtPnEQ4CivXYPAWBk13oHrB54yA9QuSn2H4AcKRpEILDt0BUzj+RLR1V5EqjD66NPRBVpLcQwjHoHYJOhsQv6U4mnzmrIXJCFr4LDwm/xBUoboG9XX4cc9VKdYoSA2yk5NQLJaKDUjTBoveG3Z2TElTxwjNK4M3LEZgUdDdruvcXzKBpStgp2NPiWi3ks9ZXxIoFVi+AvHLdc9TqtjL3/aYjpPlrzOcEnK62Szhimdd7xX232zFDTgtxezOu3WNMRLjiKgjtOhHVMd1loynVHvOgjuIIJMaELEqhJAV/RCSLbWTcfPFakFgFlALTRRvx+ok6Hlp/Q+v3fmx90bMyUzaEAhmM3KvHlXTL5DxnbGf/1M8RNNACLL5MNtPxP/mypJAqcDSFfgFhpYqWUzhTEAAAAAASUVORK5CYII=',
- 'data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAgCAYAAABzenr0AAAAL0lEQVRYR+3QQREAAAzCsOFfNJPBJ1XQS9r2hsUAAQIECBAgQIAAAQIECBAgsBZ4MUx/ofm2I/kAAAAASUVORK5CYII=',
- 'data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAgCAYAAABzenr0AAAAL0lEQVRYR+3QQREAAAzCsOFfNJPBJ1XQS9r2hsUAAQIECBAgQIAAAQIECBAgsBZ4MUx/ofm2I/kAAAAASUVORK5CYII=',
- 'data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAgCAYAAABzenr0AAAAL0lEQVRYR+3QQREAAAzCsOFfNJPBJ1XQS9r2hsUAAQIECBAgQIAAAQIECBAgsBZ4MUx/ofm2I/kAAAAASUVORK5CYII=',
- 'data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAgCAYAAABzenr0AAAAL0lEQVRYR+3QQREAAAzCsOFfNJPBJ1XQS9r2hsUAAQIECBAgQIAAAQIECBAgsBZ4MUx/ofm2I/kAAAAASUVORK5CYII='
- ];
-
- // Builders. They build Three.js object from Object data parsed by MMDParser.
-
- /**
- * @param {THREE.LoadingManager} manager
- */
- function MeshBuilder( manager ) {
-
- this.geometryBuilder = new GeometryBuilder();
- this.materialBuilder = new MaterialBuilder( manager );
-
- }
-
- MeshBuilder.prototype = {
-
- constructor: MeshBuilder,
-
- crossOrigin: 'anonymous',
-
- /**
- * @param {string} crossOrigin
- * @return {MeshBuilder}
- */
- setCrossOrigin: function ( crossOrigin ) {
-
- this.crossOrigin = crossOrigin;
- return this;
-
- },
-
- /**
- * @param {Object} data - parsed PMD/PMX data
- * @param {string} resourcePath
- * @param {function} onProgress
- * @param {function} onError
- * @return {THREE.SkinnedMesh}
- */
- build: function ( data, resourcePath, onProgress, onError ) {
-
- var geometry = this.geometryBuilder.build( data );
- var material = this.materialBuilder
- .setCrossOrigin( this.crossOrigin )
- .setResourcePath( resourcePath )
- .build( data, geometry, onProgress, onError );
-
- var mesh = new THREE.SkinnedMesh( geometry, material );
-
- var skeleton = new THREE.Skeleton( initBones( mesh ) );
- mesh.bind( skeleton );
-
- // console.log( mesh ); // for console debug
-
- return mesh;
-
- }
-
- };
-
- // TODO: Try to remove this function
-
- function initBones( mesh ) {
-
- var geometry = mesh.geometry;
-
- var bones = [], bone, gbone;
- var i, il;
-
- if ( geometry && geometry.bones !== undefined ) {
-
- // first, create array of 'Bone' objects from geometry data
-
- for ( i = 0, il = geometry.bones.length; i < il; i ++ ) {
-
- gbone = geometry.bones[ i ];
-
- // create new 'Bone' object
-
- bone = new THREE.Bone();
- bones.push( bone );
-
- // apply values
-
- bone.name = gbone.name;
- bone.position.fromArray( gbone.pos );
- bone.quaternion.fromArray( gbone.rotq );
- if ( gbone.scl !== undefined ) bone.scale.fromArray( gbone.scl );
-
- }
-
- // second, create bone hierarchy
-
- for ( i = 0, il = geometry.bones.length; i < il; i ++ ) {
-
- gbone = geometry.bones[ i ];
-
- if ( ( gbone.parent !== - 1 ) && ( gbone.parent !== null ) && ( bones[ gbone.parent ] !== undefined ) ) {
-
- // subsequent bones in the hierarchy
-
- bones[ gbone.parent ].add( bones[ i ] );
-
- } else {
-
- // topmost bone, immediate child of the skinned mesh
-
- mesh.add( bones[ i ] );
-
- }
-
- }
-
- }
-
- // now the bones are part of the scene graph and children of the skinned mesh.
- // let's update the corresponding matrices
-
- mesh.updateMatrixWorld( true );
-
- return bones;
-
- }
-
- //
-
- function GeometryBuilder() {
-
- }
-
- GeometryBuilder.prototype = {
-
- constructor: GeometryBuilder,
-
- /**
- * @param {Object} data - parsed PMD/PMX data
- * @return {THREE.BufferGeometry}
- */
- build: function ( data ) {
-
- // for geometry
- var positions = [];
- var uvs = [];
- var normals = [];
-
- var indices = [];
-
- var groups = [];
-
- var bones = [];
- var skinIndices = [];
- var skinWeights = [];
-
- var morphTargets = [];
- var morphPositions = [];
-
- var iks = [];
- var grants = [];
-
- var rigidBodies = [];
- var constraints = [];
-
- // for work
- var offset = 0;
- var boneTypeTable = {};
-
- // positions, normals, uvs, skinIndices, skinWeights
-
- for ( var i = 0; i < data.metadata.vertexCount; i ++ ) {
-
- var v = data.vertices[ i ];
-
- for ( var j = 0, jl = v.position.length; j < jl; j ++ ) {
-
- positions.push( v.position[ j ] );
-
- }
-
- for ( var j = 0, jl = v.normal.length; j < jl; j ++ ) {
-
- normals.push( v.normal[ j ] );
-
- }
-
- for ( var j = 0, jl = v.uv.length; j < jl; j ++ ) {
-
- uvs.push( v.uv[ j ] );
-
- }
-
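-				// Pad skin indices and weights to four entries per vertex, as the skinning attributes expect.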
- for ( var j = 0; j < 4; j ++ ) {
-
- skinIndices.push( v.skinIndices.length - 1 >= j ? v.skinIndices[ j ] : 0.0 );
-
- }
-
- for ( var j = 0; j < 4; j ++ ) {
-
- skinWeights.push( v.skinWeights.length - 1 >= j ? v.skinWeights[ j ] : 0.0 );
-
- }
-
- }
-
- // indices
-
- for ( var i = 0; i < data.metadata.faceCount; i ++ ) {
-
- var face = data.faces[ i ];
-
- for ( var j = 0, jl = face.indices.length; j < jl; j ++ ) {
-
- indices.push( face.indices[ j ] );
-
- }
-
- }
-
- // groups
-
- for ( var i = 0; i < data.metadata.materialCount; i ++ ) {
-
- var material = data.materials[ i ];
-
- groups.push( {
- offset: offset * 3,
- count: material.faceCount * 3
- } );
-
- offset += material.faceCount;
-
- }
-
- // bones
-
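-			// First pass over rigid bodies: record, for each bone, the greatest rigid body type that references it.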
- for ( var i = 0; i < data.metadata.rigidBodyCount; i ++ ) {
-
- var body = data.rigidBodies[ i ];
- var value = boneTypeTable[ body.boneIndex ];
-
-				// keep the greater type value if one is already set (no documented reason for preferring the max)
- value = value === undefined ? body.type : Math.max( body.type, value );
-
- boneTypeTable[ body.boneIndex ] = value;
-
- }
-
- for ( var i = 0; i < data.metadata.boneCount; i ++ ) {
-
- var boneData = data.bones[ i ];
-
- var bone = {
- parent: boneData.parentIndex,
- name: boneData.name,
- pos: boneData.position.slice( 0, 3 ),
- rotq: [ 0, 0, 0, 1 ],
- scl: [ 1, 1, 1 ],
- rigidBodyType: boneTypeTable[ i ] !== undefined ? boneTypeTable[ i ] : - 1
- };
-
- if ( bone.parent !== - 1 ) {
-
- bone.pos[ 0 ] -= data.bones[ bone.parent ].position[ 0 ];
- bone.pos[ 1 ] -= data.bones[ bone.parent ].position[ 1 ];
- bone.pos[ 2 ] -= data.bones[ bone.parent ].position[ 2 ];
-
- }
-
- bones.push( bone );
-
- }
-
- // iks
-
- // TODO: remove duplicated codes between PMD and PMX
- if ( data.metadata.format === 'pmd' ) {
-
- for ( var i = 0; i < data.metadata.ikCount; i ++ ) {
-
- var ik = data.iks[ i ];
-
- var param = {
- target: ik.target,
- effector: ik.effector,
- iteration: ik.iteration,
- maxAngle: ik.maxAngle * 4,
- links: []
- };
-
- for ( var j = 0, jl = ik.links.length; j < jl; j ++ ) {
-
- var link = {};
- link.index = ik.links[ j ].index;
- link.enabled = true;
-
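-						// 'ひざ' means 'knee'; knee IK links are limited to rotation around the X axis.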
- if ( data.bones[ link.index ].name.indexOf( 'ひざ' ) >= 0 ) {
-
- link.limitation = new THREE.Vector3( 1.0, 0.0, 0.0 );
-
- }
-
- param.links.push( link );
-
- }
-
- iks.push( param );
-
- }
-
- } else {
-
- for ( var i = 0; i < data.metadata.boneCount; i ++ ) {
-
- var ik = data.bones[ i ].ik;
-
- if ( ik === undefined ) continue;
-
- var param = {
- target: i,
- effector: ik.effector,
- iteration: ik.iteration,
- maxAngle: ik.maxAngle,
- links: []
- };
-
- for ( var j = 0, jl = ik.links.length; j < jl; j ++ ) {
-
- var link = {};
- link.index = ik.links[ j ].index;
- link.enabled = true;
-
- if ( ik.links[ j ].angleLimitation === 1 ) {
-
- // Revert if rotationMin/Max doesn't work well
- // link.limitation = new THREE.Vector3( 1.0, 0.0, 0.0 );
-
- var rotationMin = ik.links[ j ].lowerLimitationAngle;
- var rotationMax = ik.links[ j ].upperLimitationAngle;
-
-							// Convert from left-handed to right-handed coordinates here because
-							// MMDParser doesn't; this works around an MMDParser bug.
-
- var tmp1 = - rotationMax[ 0 ];
- var tmp2 = - rotationMax[ 1 ];
- rotationMax[ 0 ] = - rotationMin[ 0 ];
- rotationMax[ 1 ] = - rotationMin[ 1 ];
- rotationMin[ 0 ] = tmp1;
- rotationMin[ 1 ] = tmp2;
-
- link.rotationMin = new THREE.Vector3().fromArray( rotationMin );
- link.rotationMax = new THREE.Vector3().fromArray( rotationMax );
-
- }
-
- param.links.push( link );
-
- }
-
- iks.push( param );
-
- }
-
- }
-
- // grants
-
- if ( data.metadata.format === 'pmx' ) {
-
- for ( var i = 0; i < data.metadata.boneCount; i ++ ) {
-
- var boneData = data.bones[ i ];
- var grant = boneData.grant;
-
- if ( grant === undefined ) continue;
-
- var param = {
- index: i,
- parentIndex: grant.parentIndex,
- ratio: grant.ratio,
- isLocal: grant.isLocal,
- affectRotation: grant.affectRotation,
- affectPosition: grant.affectPosition,
- transformationClass: boneData.transformationClass
- };
-
- grants.push( param );
-
- }
-
- grants.sort( function ( a, b ) {
-
- return a.transformationClass - b.transformationClass;
-
- } );
-
- }
-
- // morph
-
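-			// updateAttributes() adds a morph's position deltas, scaled by ratio, onto the given position attribute.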
- function updateAttributes( attribute, morph, ratio ) {
-
- for ( var i = 0; i < morph.elementCount; i ++ ) {
-
- var element = morph.elements[ i ];
-
- var index;
-
- if ( data.metadata.format === 'pmd' ) {
-
- index = data.morphs[ 0 ].elements[ element.index ].index;
-
- } else {
-
- index = element.index;
-
- }
-
- attribute.array[ index * 3 + 0 ] += element.position[ 0 ] * ratio;
- attribute.array[ index * 3 + 1 ] += element.position[ 1 ] * ratio;
- attribute.array[ index * 3 + 2 ] += element.position[ 2 ] * ratio;
-
- }
-
- }
-
- for ( var i = 0; i < data.metadata.morphCount; i ++ ) {
-
- var morph = data.morphs[ i ];
- var params = { name: morph.name };
-
- var attribute = new THREE.Float32BufferAttribute( data.metadata.vertexCount * 3, 3 );
- attribute.name = morph.name;
-
- for ( var j = 0; j < data.metadata.vertexCount * 3; j ++ ) {
-
- attribute.array[ j ] = positions[ j ];
-
- }
-
- if ( data.metadata.format === 'pmd' ) {
-
- if ( i !== 0 ) {
-
- updateAttributes( attribute, morph, 1.0 );
-
- }
-
- } else {
-
- if ( morph.type === 0 ) { // group
-
- for ( var j = 0; j < morph.elementCount; j ++ ) {
-
- var morph2 = data.morphs[ morph.elements[ j ].index ];
- var ratio = morph.elements[ j ].ratio;
-
- if ( morph2.type === 1 ) {
-
- updateAttributes( attribute, morph2, ratio );
-
- } else {
-
- // TODO: implement
-
- }
-
- }
-
- } else if ( morph.type === 1 ) { // vertex
-
- updateAttributes( attribute, morph, 1.0 );
-
- } else if ( morph.type === 2 ) { // bone
-
- // TODO: implement
-
- } else if ( morph.type === 3 ) { // uv
-
- // TODO: implement
-
- } else if ( morph.type === 4 ) { // additional uv1
-
- // TODO: implement
-
- } else if ( morph.type === 5 ) { // additional uv2
-
- // TODO: implement
-
- } else if ( morph.type === 6 ) { // additional uv3
-
- // TODO: implement
-
- } else if ( morph.type === 7 ) { // additional uv4
-
- // TODO: implement
-
- } else if ( morph.type === 8 ) { // material
-
- // TODO: implement
-
- }
-
- }
-
- morphTargets.push( params );
- morphPositions.push( attribute );
-
- }
-
- // rigid bodies from rigidBodies field.
-
- for ( var i = 0; i < data.metadata.rigidBodyCount; i ++ ) {
-
- var rigidBody = data.rigidBodies[ i ];
- var params = {};
-
- for ( var key in rigidBody ) {
-
- params[ key ] = rigidBody[ key ];
-
- }
-
- /*
-				 * The rigid body position in PMX appears to be a global position,
-				 * while in PMD it appears to be an offset from the corresponding bone.
-				 * Normalize both to bone-relative offsets.
- */
- if ( data.metadata.format === 'pmx' ) {
-
- if ( params.boneIndex !== - 1 ) {
-
- var bone = data.bones[ params.boneIndex ];
- params.position[ 0 ] -= bone.position[ 0 ];
- params.position[ 1 ] -= bone.position[ 1 ];
- params.position[ 2 ] -= bone.position[ 2 ];
-
- }
-
- }
-
- rigidBodies.push( params );
-
- }
-
- // constraints from constraints field.
-
- for ( var i = 0; i < data.metadata.constraintCount; i ++ ) {
-
- var constraint = data.constraints[ i ];
- var params = {};
-
- for ( var key in constraint ) {
-
- params[ key ] = constraint[ key ];
-
- }
-
- var bodyA = rigidBodies[ params.rigidBodyIndex1 ];
- var bodyB = rigidBodies[ params.rigidBodyIndex2 ];
-
- // Refer to http://www20.atpages.jp/katwat/wp/?p=4135
- if ( bodyA.type !== 0 && bodyB.type === 2 ) {
-
- if ( bodyA.boneIndex !== - 1 && bodyB.boneIndex !== - 1 &&
- data.bones[ bodyB.boneIndex ].parentIndex === bodyA.boneIndex ) {
-
- bodyB.type = 1;
-
- }
-
- }
-
- constraints.push( params );
-
- }
-
- // build BufferGeometry.
-
- var geometry = new THREE.BufferGeometry();
-
- geometry.addAttribute( 'position', new THREE.Float32BufferAttribute( positions, 3 ) );
- geometry.addAttribute( 'normal', new THREE.Float32BufferAttribute( normals, 3 ) );
- geometry.addAttribute( 'uv', new THREE.Float32BufferAttribute( uvs, 2 ) );
- geometry.addAttribute( 'skinIndex', new THREE.Uint16BufferAttribute( skinIndices, 4 ) );
- geometry.addAttribute( 'skinWeight', new THREE.Float32BufferAttribute( skinWeights, 4 ) );
- geometry.setIndex( indices );
-
- for ( var i = 0, il = groups.length; i < il; i ++ ) {
-
- geometry.addGroup( groups[ i ].offset, groups[ i ].count, i );
-
- }
-
- geometry.bones = bones;
-
- geometry.morphTargets = morphTargets;
- geometry.morphAttributes.position = morphPositions;
-
- geometry.userData.MMD = {
- bones: bones,
- iks: iks,
- grants: grants,
- rigidBodies: rigidBodies,
- constraints: constraints,
- format: data.metadata.format
- };
-
- geometry.computeBoundingSphere();
-
- return geometry;
-
- }
-
- };
-
- //
-
- /**
- * @param {THREE.LoadingManager} manager
- */
- function MaterialBuilder( manager ) {
-
- this.manager = manager;
-
- this.textureLoader = new THREE.TextureLoader( this.manager );
- this.tgaLoader = null; // lazy generation
-
- }
-
- MaterialBuilder.prototype = {
-
- constructor: MaterialBuilder,
-
- crossOrigin: 'anonymous',
-
- resourcePath: undefined,
-
- /**
- * @param {string} crossOrigin
- * @return {MaterialBuilder}
- */
- setCrossOrigin: function ( crossOrigin ) {
-
- this.crossOrigin = crossOrigin;
- return this;
-
- },
-
- /**
- * @param {string} resourcePath
- * @return {MaterialBuilder}
- */
- setResourcePath: function ( resourcePath ) {
-
- this.resourcePath = resourcePath;
- return this;
-
- },
-
- /**
- * @param {Object} data - parsed PMD/PMX data
-		 * @param {THREE.BufferGeometry} geometry - some properties are dependent on geometry
- * @param {function} onProgress
- * @param {function} onError
- * @return {Array}
- */
- build: function ( data, geometry, onProgress, onError ) {
-
- var materials = [];
-
- var textures = {};
-
- this.textureLoader.setCrossOrigin( this.crossOrigin );
-
- // materials
-
- for ( var i = 0; i < data.metadata.materialCount; i ++ ) {
-
- var material = data.materials[ i ];
-
- var params = { userData: {} };
-
- if ( material.name !== undefined ) params.name = material.name;
-
- /*
- * Color
- *
- * MMD MeshToonMaterial
- * diffuse - color
- * specular - specular
- * ambient - emissive * a
- * (a = 1.0 without map texture or 0.2 with map texture)
- *
- * MeshToonMaterial doesn't have ambient. Set it to emissive instead.
- * It'll be too bright if material has map texture so using coef 0.2.
- */
- params.color = new THREE.Color().fromArray( material.diffuse );
- params.opacity = material.diffuse[ 3 ];
- params.specular = new THREE.Color().fromArray( material.specular );
- params.emissive = new THREE.Color().fromArray( material.ambient );
- params.shininess = Math.max( material.shininess, 1e-4 ); // to prevent pow( 0.0, 0.0 )
- params.transparent = params.opacity !== 1.0;
-
- //
-
-				params.skinning = geometry.bones.length > 0;
-				params.morphTargets = geometry.morphTargets.length > 0;
- params.lights = true;
- params.fog = true;
-
- // blend
-
- params.blending = THREE.CustomBlending;
- params.blendSrc = THREE.SrcAlphaFactor;
- params.blendDst = THREE.OneMinusSrcAlphaFactor;
- params.blendSrcAlpha = THREE.SrcAlphaFactor;
- params.blendDstAlpha = THREE.DstAlphaFactor;
-
- // side
-
- if ( data.metadata.format === 'pmx' && ( material.flag & 0x1 ) === 1 ) {
-
- params.side = THREE.DoubleSide;
-
- } else {
-
- params.side = params.opacity === 1.0 ? THREE.FrontSide : THREE.DoubleSide;
-
- }
-
- if ( data.metadata.format === 'pmd' ) {
-
- // map, envMap
-
- if ( material.fileName ) {
-
- var fileName = material.fileName;
- var fileNames = fileName.split( '*' );
-
- // fileNames[ 0 ]: mapFileName
- // fileNames[ 1 ]: envMapFileName( optional )
-
- params.map = this._loadTexture( fileNames[ 0 ], textures );
-
- if ( fileNames.length > 1 ) {
-
- var extension = fileNames[ 1 ].slice( - 4 ).toLowerCase();
-
- params.envMap = this._loadTexture(
- fileNames[ 1 ],
- textures,
- { sphericalReflectionMapping: true }
- );
-
- params.combine = extension === '.sph'
- ? THREE.MultiplyOperation
- : THREE.AddOperation;
-
- }
-
- }
-
- // gradientMap
-
- var toonFileName = ( material.toonIndex === - 1 )
- ? 'toon00.bmp'
- : data.toonTextures[ material.toonIndex ].fileName;
-
- params.gradientMap = this._loadTexture(
- toonFileName,
- textures,
- {
- isToonTexture: true,
- isDefaultToonTexture: this._isDefaultToonTexture( toonFileName )
- }
- );
-
- // parameters for OutlineEffect
-
- params.userData.outlineParameters = {
- thickness: material.edgeFlag === 1 ? 0.003 : 0.0,
- color: [ 0, 0, 0 ],
- alpha: 1.0,
- visible: material.edgeFlag === 1
- };
-
- } else {
-
- // map
-
- if ( material.textureIndex !== - 1 ) {
-
- params.map = this._loadTexture( data.textures[ material.textureIndex ], textures );
-
- }
-
- // envMap TODO: support m.envFlag === 3
-
-					if ( material.envTextureIndex !== - 1 && ( material.envFlag === 1 || material.envFlag === 2 ) ) {
-
- params.envMap = this._loadTexture(
- data.textures[ material.envTextureIndex ],
- textures, { sphericalReflectionMapping: true }
- );
-
- params.combine = material.envFlag === 1
- ? THREE.MultiplyOperation
- : THREE.AddOperation;
-
- }
-
- // gradientMap
-
- var toonFileName, isDefaultToon;
-
- if ( material.toonIndex === - 1 || material.toonFlag !== 0 ) {
-
- toonFileName = 'toon' + ( '0' + ( material.toonIndex + 1 ) ).slice( - 2 ) + '.bmp';
- isDefaultToon = true;
-
- } else {
-
- toonFileName = data.textures[ material.toonIndex ];
- isDefaultToon = false;
-
- }
-
- params.gradientMap = this._loadTexture(
- toonFileName,
- textures,
- {
- isToonTexture: true,
- isDefaultToonTexture: isDefaultToon
- }
- );
-
- // parameters for OutlineEffect
- params.userData.outlineParameters = {
- thickness: material.edgeSize / 300, // TODO: better calculation?
- color: material.edgeColor.slice( 0, 3 ),
- alpha: material.edgeColor[ 3 ],
- visible: ( material.flag & 0x10 ) !== 0 && material.edgeSize > 0.0
- };
-
- }
-
- if ( params.map !== undefined ) {
-
- if ( ! params.transparent ) {
-
- this._checkImageTransparency( params.map, geometry, i );
-
- }
-
- params.emissive.multiplyScalar( 0.2 );
-
- }
-
- materials.push( new THREE.MeshToonMaterial( params ) );
-
- }
-
- if ( data.metadata.format === 'pmx' ) {
-
- // set transparent true if alpha morph is defined.
-
- function checkAlphaMorph( elements, materials ) {
-
- for ( var i = 0, il = elements.length; i < il; i ++ ) {
-
- var element = elements[ i ];
-
- if ( element.index === - 1 ) continue;
-
- var material = materials[ element.index ];
-
- if ( material.opacity !== element.diffuse[ 3 ] ) {
-
- material.transparent = true;
-
- }
-
- }
-
- }
-
- for ( var i = 0, il = data.morphs.length; i < il; i ++ ) {
-
- var morph = data.morphs[ i ];
- var elements = morph.elements;
-
- if ( morph.type === 0 ) {
-
- for ( var j = 0, jl = elements.length; j < jl; j ++ ) {
-
- var morph2 = data.morphs[ elements[ j ].index ];
-
- if ( morph2.type !== 8 ) continue;
-
- checkAlphaMorph( morph2.elements, materials );
-
- }
-
- } else if ( morph.type === 8 ) {
-
- checkAlphaMorph( elements, materials );
-
- }
-
- }
-
- }
-
- return materials;
-
- },
-
- // private methods
-
- _getTGALoader: function () {
-
- if ( this.tgaLoader === null ) {
-
- if ( THREE.TGALoader === undefined ) {
-
- throw new Error( 'THREE.MMDLoader: Import THREE.TGALoader' );
-
- }
-
- this.tgaLoader = new THREE.TGALoader( this.manager );
-
- }
-
- return this.tgaLoader;
-
- },
-
- _isDefaultToonTexture: function ( name ) {
-
- if ( name.length !== 10 ) return false;
-
- return /toon(10|0[0-9])\.bmp/.test( name );
-
- },
-
- _loadTexture: function ( filePath, textures, params, onProgress, onError ) {
-
- params = params || {};
-
- var scope = this;
-
- var fullPath;
-
- if ( params.isDefaultToonTexture === true ) {
-
- var index;
-
- try {
-
-					index = parseInt( filePath.match( /toon([0-9]{2})\.bmp$/ )[ 1 ], 10 ); // regex literal so the dot is actually escaped
-
- } catch ( e ) {
-
-					console.warn( 'THREE.MMDLoader: ' + filePath + ' does not look like a '
-						+ 'valid default toon texture path. Using toon00.bmp instead.' );
-
- index = 0;
-
- }
-
- fullPath = DEFAULT_TOON_TEXTURES[ index ];
-
- } else {
-
- fullPath = this.resourcePath + filePath;
-
- }
-
- if ( textures[ fullPath ] !== undefined ) return textures[ fullPath ];
-
- var loader = THREE.Loader.Handlers.get( fullPath );
-
- if ( loader === null ) {
-
- loader = ( filePath.slice( - 4 ).toLowerCase() === '.tga' )
- ? this._getTGALoader()
- : this.textureLoader;
-
- }
-
- var texture = loader.load( fullPath, function ( t ) {
-
-				// MMD toon textures are oriented along the Y axis,
-				// but Three.js gradient maps are oriented along the X axis,
-				// so replace the toon texture image with a rotated copy.
- if ( params.isToonTexture === true ) {
-
- t.image = scope._getRotatedImage( t.image );
-
- }
-
- t.flipY = false;
- t.wrapS = THREE.RepeatWrapping;
- t.wrapT = THREE.RepeatWrapping;
-
- for ( var i = 0; i < texture.readyCallbacks.length; i ++ ) {
-
- texture.readyCallbacks[ i ]( texture );
-
- }
-
- delete texture.readyCallbacks;
-
- }, onProgress, onError );
-
- if ( params.sphericalReflectionMapping === true ) {
-
- texture.mapping = THREE.SphericalReflectionMapping;
-
- }
-
- texture.readyCallbacks = [];
-
- textures[ fullPath ] = texture;
-
- return texture;
-
- },
-
- _getRotatedImage: function ( image ) {
-
- var canvas = document.createElement( 'canvas' );
- var context = canvas.getContext( '2d' );
-
- var width = image.width;
- var height = image.height;
-
- canvas.width = width;
- canvas.height = height;
-
- context.clearRect( 0, 0, width, height );
- context.translate( width / 2.0, height / 2.0 );
- context.rotate( 0.5 * Math.PI ); // 90.0 * Math.PI / 180.0
- context.translate( - width / 2.0, - height / 2.0 );
- context.drawImage( image, 0, 0 );
-
- return context.getImageData( 0, 0, width, height );
-
- },
-
- // Check if the partial image area used by the texture is transparent.
- _checkImageTransparency: function ( map, geometry, groupIndex ) {
-
- map.readyCallbacks.push( function ( texture ) {
-
-				// TODO: is there a more efficient way to do this?
- function createImageData( image ) {
-
- var canvas = document.createElement( 'canvas' );
- canvas.width = image.width;
- canvas.height = image.height;
-
- var context = canvas.getContext( '2d' );
- context.drawImage( image, 0, 0 );
-
- return context.getImageData( 0, 0, canvas.width, canvas.height );
-
- }
-
- function detectImageTransparency( image, uvs, indices ) {
-
- var width = image.width;
- var height = image.height;
- var data = image.data;
- var threshold = 253;
-
- if ( data.length / ( width * height ) !== 4 ) return false;
-
- for ( var i = 0; i < indices.length; i += 3 ) {
-
- var centerUV = { x: 0.0, y: 0.0 };
-
- for ( var j = 0; j < 3; j ++ ) {
-
-						var index = indices[ i + j ]; // 'i' already advances by 3 per face, so don't multiply by 3 again
- var uv = { x: uvs[ index * 2 + 0 ], y: uvs[ index * 2 + 1 ] };
-
- if ( getAlphaByUv( image, uv ) < threshold ) return true;
-
- centerUV.x += uv.x;
- centerUV.y += uv.y;
-
- }
-
- centerUV.x /= 3;
- centerUV.y /= 3;
-
- if ( getAlphaByUv( image, centerUV ) < threshold ) return true;
-
- }
-
- return false;
-
- }
-
- /*
- * This method expects
- * texture.flipY = false
- * texture.wrapS = THREE.RepeatWrapping
- * texture.wrapT = THREE.RepeatWrapping
- * TODO: more precise
- */
- function getAlphaByUv( image, uv ) {
-
- var width = image.width;
- var height = image.height;
-
- var x = Math.round( uv.x * width ) % width;
- var y = Math.round( uv.y * height ) % height;
-
- if ( x < 0 ) x += width;
- if ( y < 0 ) y += height;
-
- var index = y * width + x;
-
- return image.data[ index * 4 + 3 ];
-
- }
-
- var imageData = texture.image.data !== undefined
- ? texture.image
- : createImageData( texture.image );
-
- var group = geometry.groups[ groupIndex ];
-
- if ( detectImageTransparency(
- imageData,
- geometry.attributes.uv.array,
- geometry.index.array.slice( group.start, group.start + group.count ) ) ) {
-
- map.transparent = true;
-
- }
-
- } );
-
- }
-
- };
-
- //
-
- function AnimationBuilder() {
-
- }
-
- AnimationBuilder.prototype = {
-
- constructor: AnimationBuilder,
-
- /**
- * @param {Object} vmd - parsed VMD data
- * @param {THREE.SkinnedMesh} mesh - tracks will be fitting to mesh
- * @return {THREE.AnimationClip}
- */
- build: function ( vmd, mesh ) {
-
- // combine skeletal and morph animations
-
- var tracks = this.buildSkeletalAnimation( vmd, mesh ).tracks;
- var tracks2 = this.buildMorphAnimation( vmd, mesh ).tracks;
-
- for ( var i = 0, il = tracks2.length; i < il; i ++ ) {
-
- tracks.push( tracks2[ i ] );
-
- }
-
- return new THREE.AnimationClip( '', - 1, tracks );
-
- },
-
- /**
- * @param {Object} vmd - parsed VMD data
- * @param {THREE.SkinnedMesh} mesh - tracks will be fitting to mesh
- * @return {THREE.AnimationClip}
- */
- buildSkeletalAnimation: function ( vmd, mesh ) {
-
- function pushInterpolation( array, interpolation, index ) {
-
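-				// VMD packs the Bezier control points for all channels into one table; offsets 0, 8, 4 and 12 select x1, x2, y1 and y2 for the given channel.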
- array.push( interpolation[ index + 0 ] / 127 ); // x1
- array.push( interpolation[ index + 8 ] / 127 ); // x2
- array.push( interpolation[ index + 4 ] / 127 ); // y1
- array.push( interpolation[ index + 12 ] / 127 ); // y2
-
- }
-
- var tracks = [];
-
- var motions = {};
- var bones = mesh.skeleton.bones;
- var boneNameDictionary = {};
-
- for ( var i = 0, il = bones.length; i < il; i ++ ) {
-
- boneNameDictionary[ bones[ i ].name ] = true;
-
- }
-
- for ( var i = 0; i < vmd.metadata.motionCount; i ++ ) {
-
- var motion = vmd.motions[ i ];
- var boneName = motion.boneName;
-
- if ( boneNameDictionary[ boneName ] === undefined ) continue;
-
- motions[ boneName ] = motions[ boneName ] || [];
- motions[ boneName ].push( motion );
-
- }
-
- for ( var key in motions ) {
-
- var array = motions[ key ];
-
- array.sort( function ( a, b ) {
-
- return a.frameNum - b.frameNum;
-
- } );
-
- var times = [];
- var positions = [];
- var rotations = [];
- var pInterpolations = [];
- var rInterpolations = [];
-
- var basePosition = mesh.skeleton.getBoneByName( key ).position.toArray();
-
- for ( var i = 0, il = array.length; i < il; i ++ ) {
-
- var time = array[ i ].frameNum / 30;
- var position = array[ i ].position;
- var rotation = array[ i ].rotation;
- var interpolation = array[ i ].interpolation;
-
- times.push( time );
-
- for ( var j = 0; j < 3; j ++ ) positions.push( basePosition[ j ] + position[ j ] );
- for ( var j = 0; j < 4; j ++ ) rotations.push( rotation[ j ] );
- for ( var j = 0; j < 3; j ++ ) pushInterpolation( pInterpolations, interpolation, j );
-
- pushInterpolation( rInterpolations, interpolation, 3 );
-
- }
-
- var targetName = '.bones[' + key + ']';
-
- tracks.push( this._createTrack( targetName + '.position', THREE.VectorKeyframeTrack, times, positions, pInterpolations ) );
- tracks.push( this._createTrack( targetName + '.quaternion', THREE.QuaternionKeyframeTrack, times, rotations, rInterpolations ) );
-
- }
-
- return new THREE.AnimationClip( '', - 1, tracks );
-
- },
-
- /**
- * @param {Object} vmd - parsed VMD data
- * @param {THREE.SkinnedMesh} mesh - tracks will be fitting to mesh
- * @return {THREE.AnimationClip}
- */
- buildMorphAnimation: function ( vmd, mesh ) {
-
- var tracks = [];
-
- var morphs = {};
- var morphTargetDictionary = mesh.morphTargetDictionary;
-
- for ( var i = 0; i < vmd.metadata.morphCount; i ++ ) {
-
- var morph = vmd.morphs[ i ];
- var morphName = morph.morphName;
-
- if ( morphTargetDictionary[ morphName ] === undefined ) continue;
-
- morphs[ morphName ] = morphs[ morphName ] || [];
- morphs[ morphName ].push( morph );
-
- }
-
- for ( var key in morphs ) {
-
- var array = morphs[ key ];
-
- array.sort( function ( a, b ) {
-
- return a.frameNum - b.frameNum;
-
- } );
-
- var times = [];
- var values = [];
-
- for ( var i = 0, il = array.length; i < il; i ++ ) {
-
- times.push( array[ i ].frameNum / 30 );
- values.push( array[ i ].weight );
-
- }
-
- tracks.push( new THREE.NumberKeyframeTrack( '.morphTargetInfluences[' + morphTargetDictionary[ key ] + ']', times, values ) );
-
- }
-
- return new THREE.AnimationClip( '', - 1, tracks );
-
- },
-
- /**
- * @param {Object} vmd - parsed VMD data
- * @return {THREE.AnimationClip}
- */
- buildCameraAnimation: function ( vmd ) {
-
- function pushVector3( array, vec ) {
-
- array.push( vec.x );
- array.push( vec.y );
- array.push( vec.z );
-
- }
-
- function pushQuaternion( array, q ) {
-
- array.push( q.x );
- array.push( q.y );
- array.push( q.z );
- array.push( q.w );
-
- }
-
- function pushInterpolation( array, interpolation, index ) {
-
- array.push( interpolation[ index * 4 + 0 ] / 127 ); // x1
- array.push( interpolation[ index * 4 + 1 ] / 127 ); // x2
- array.push( interpolation[ index * 4 + 2 ] / 127 ); // y1
- array.push( interpolation[ index * 4 + 3 ] / 127 ); // y2
-
- }
-
- var tracks = [];
-
- var cameras = vmd.cameras === undefined ? [] : vmd.cameras.slice();
-
- cameras.sort( function ( a, b ) {
-
- return a.frameNum - b.frameNum;
-
- } );
-
- var times = [];
- var centers = [];
- var quaternions = [];
- var positions = [];
- var fovs = [];
-
- var cInterpolations = [];
- var qInterpolations = [];
- var pInterpolations = [];
- var fInterpolations = [];
-
- var quaternion = new THREE.Quaternion();
- var euler = new THREE.Euler();
- var position = new THREE.Vector3();
- var center = new THREE.Vector3();
-
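-			// Derive each keyframe's camera position from the target center, the orbit distance and the rotation.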
- for ( var i = 0, il = cameras.length; i < il; i ++ ) {
-
- var motion = cameras[ i ];
-
- var time = motion.frameNum / 30;
- var pos = motion.position;
- var rot = motion.rotation;
- var distance = motion.distance;
- var fov = motion.fov;
- var interpolation = motion.interpolation;
-
- times.push( time );
-
- position.set( 0, 0, - distance );
- center.set( pos[ 0 ], pos[ 1 ], pos[ 2 ] );
-
- euler.set( - rot[ 0 ], - rot[ 1 ], - rot[ 2 ] );
- quaternion.setFromEuler( euler );
-
- position.add( center );
- position.applyQuaternion( quaternion );
-
- pushVector3( centers, center );
- pushQuaternion( quaternions, quaternion );
- pushVector3( positions, position );
-
- fovs.push( fov );
-
- for ( var j = 0; j < 3; j ++ ) {
-
- pushInterpolation( cInterpolations, interpolation, j );
-
- }
-
- pushInterpolation( qInterpolations, interpolation, 3 );
-
- // use the same parameter for x, y, z axis.
- for ( var j = 0; j < 3; j ++ ) {
-
- pushInterpolation( pInterpolations, interpolation, 4 );
-
- }
-
- pushInterpolation( fInterpolations, interpolation, 5 );
-
- }
-
-
-			// Assumes an object named 'target' exists as a child of the THREE.Camera.
- tracks.push( this._createTrack( 'target.position', THREE.VectorKeyframeTrack, times, centers, cInterpolations ) );
-
- tracks.push( this._createTrack( '.quaternion', THREE.QuaternionKeyframeTrack, times, quaternions, qInterpolations ) );
- tracks.push( this._createTrack( '.position', THREE.VectorKeyframeTrack, times, positions, pInterpolations ) );
- tracks.push( this._createTrack( '.fov', THREE.NumberKeyframeTrack, times, fovs, fInterpolations ) );
-
- return new THREE.AnimationClip( '', - 1, tracks );
-
- },
-
- // private method
-
- _createTrack: function ( node, typedKeyframeTrack, times, values, interpolations ) {
-
- /*
-			 * Optimize (deduplicate) the keyframes here instead of letting
-			 * KeyframeTrackPrototype do it, because KeyframeTrackPrototype
-			 * optimizes times and values but not the interpolation parameters.
- */
- if ( times.length > 2 ) {
-
- times = times.slice();
- values = values.slice();
- interpolations = interpolations.slice();
-
- var stride = values.length / times.length;
- var interpolateStride = interpolations.length / times.length;
-
- var index = 1;
-
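-				// Compact in place: advance 'index' only when values change, dropping keyframes whose values equal both neighbors.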
- for ( var aheadIndex = 2, endIndex = times.length; aheadIndex < endIndex; aheadIndex ++ ) {
-
- for ( var i = 0; i < stride; i ++ ) {
-
- if ( values[ index * stride + i ] !== values[ ( index - 1 ) * stride + i ] ||
- values[ index * stride + i ] !== values[ aheadIndex * stride + i ] ) {
-
- index ++;
- break;
-
- }
-
- }
-
- if ( aheadIndex > index ) {
-
- times[ index ] = times[ aheadIndex ];
-
- for ( var i = 0; i < stride; i ++ ) {
-
- values[ index * stride + i ] = values[ aheadIndex * stride + i ];
-
- }
-
- for ( var i = 0; i < interpolateStride; i ++ ) {
-
- interpolations[ index * interpolateStride + i ] = interpolations[ aheadIndex * interpolateStride + i ];
-
- }
-
- }
-
- }
-
- times.length = index + 1;
- values.length = ( index + 1 ) * stride;
- interpolations.length = ( index + 1 ) * interpolateStride;
-
- }
-
- var track = new typedKeyframeTrack( node, times, values );
-
- track.createInterpolant = function InterpolantFactoryMethodCubicBezier( result ) {
-
- return new CubicBezierInterpolation( this.times, this.values, this.getValueSize(), result, new Float32Array( interpolations ) );
-
- };
-
- return track;
-
- }
-
- };
-
- // interpolation
-
- function CubicBezierInterpolation( parameterPositions, sampleValues, sampleSize, resultBuffer, params ) {
-
- THREE.Interpolant.call( this, parameterPositions, sampleValues, sampleSize, resultBuffer );
-
- this.interpolationParams = params;
-
- }
-
- CubicBezierInterpolation.prototype = Object.assign( Object.create( THREE.Interpolant.prototype ), {
-
- constructor: CubicBezierInterpolation,
-
- interpolate_: function ( i1, t0, t, t1 ) {
-
- var result = this.resultBuffer;
- var values = this.sampleValues;
- var stride = this.valueSize;
- var params = this.interpolationParams;
-
- var offset1 = i1 * stride;
- var offset0 = offset1 - stride;
-
-			// No interpolation if the next keyframe is within one frame at 30fps.
-			// This follows the MMD animation spec.
-			// The factor 1.5 allows for precision loss, since times are Float32 in the Three.js animation system.
- var weight1 = ( ( t1 - t0 ) < 1 / 30 * 1.5 ) ? 0.0 : ( t - t0 ) / ( t1 - t0 );
-
- if ( stride === 4 ) { // Quaternion
-
- var x1 = params[ i1 * 4 + 0 ];
- var x2 = params[ i1 * 4 + 1 ];
- var y1 = params[ i1 * 4 + 2 ];
- var y2 = params[ i1 * 4 + 3 ];
-
- var ratio = this._calculate( x1, x2, y1, y2, weight1 );
-
- THREE.Quaternion.slerpFlat( result, 0, values, offset0, values, offset1, ratio );
-
- } else if ( stride === 3 ) { // Vector3
-
- for ( var i = 0; i !== stride; ++ i ) {
-
- var x1 = params[ i1 * 12 + i * 4 + 0 ];
- var x2 = params[ i1 * 12 + i * 4 + 1 ];
- var y1 = params[ i1 * 12 + i * 4 + 2 ];
- var y2 = params[ i1 * 12 + i * 4 + 3 ];
-
- var ratio = this._calculate( x1, x2, y1, y2, weight1 );
-
- result[ i ] = values[ offset0 + i ] * ( 1 - ratio ) + values[ offset1 + i ] * ratio;
-
- }
-
- } else { // Number
-
- var x1 = params[ i1 * 4 + 0 ];
- var x2 = params[ i1 * 4 + 1 ];
- var y1 = params[ i1 * 4 + 2 ];
- var y2 = params[ i1 * 4 + 3 ];
-
- var ratio = this._calculate( x1, x2, y1, y2, weight1 );
-
- result[ 0 ] = values[ offset0 ] * ( 1 - ratio ) + values[ offset1 ] * ratio;
-
- }
-
- return result;
-
- },
-
- _calculate: function ( x1, x2, y1, y2, x ) {
-
- /*
- * Cubic Bezier curves
- * https://en.wikipedia.org/wiki/B%C3%A9zier_curve#Cubic_B.C3.A9zier_curves
- *
- * B(t) = ( 1 - t ) ^ 3 * P0
- * + 3 * ( 1 - t ) ^ 2 * t * P1
- * + 3 * ( 1 - t ) * t^2 * P2
- * + t ^ 3 * P3
- * ( 0 <= t <= 1 )
- *
- * MMD uses Cubic Bezier curves for bone and camera animation interpolation.
- * http://d.hatena.ne.jp/edvakf/20111016/1318716097
- *
- * x = ( 1 - t ) ^ 3 * x0
- * + 3 * ( 1 - t ) ^ 2 * t * x1
- * + 3 * ( 1 - t ) * t^2 * x2
- * + t ^ 3 * x3
- * y = ( 1 - t ) ^ 3 * y0
- * + 3 * ( 1 - t ) ^ 2 * t * y1
- * + 3 * ( 1 - t ) * t^2 * y2
- * + t ^ 3 * y3
- * ( x0 = 0, y0 = 0 )
- * ( x3 = 1, y3 = 1 )
- * ( 0 <= t, x1, x2, y1, y2 <= 1 )
- *
-			 * Here we solve this equation for t with the bisection method,
-			 * https://en.wikipedia.org/wiki/Bisection_method
-			 * and then calculate y from t.
- *
- * f(t) = 3 * ( 1 - t ) ^ 2 * t * x1
- * + 3 * ( 1 - t ) * t^2 * x2
- * + t ^ 3 - x = 0
- *
- * (Another option: Newton's method
- * https://en.wikipedia.org/wiki/Newton%27s_method)
- */
-
- var c = 0.5;
- var t = c;
- var s = 1.0 - t;
- var loop = 15;
- var eps = 1e-5;
- var math = Math;
-
- var sst3, stt3, ttt;
-
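-			// Bisection: halve the step each iteration and move t toward the root of f(t).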
- for ( var i = 0; i < loop; i ++ ) {
-
- sst3 = 3.0 * s * s * t;
- stt3 = 3.0 * s * t * t;
- ttt = t * t * t;
-
- var ft = ( sst3 * x1 ) + ( stt3 * x2 ) + ( ttt ) - x;
-
- if ( math.abs( ft ) < eps ) break;
-
- c /= 2.0;
-
- t += ( ft < 0 ) ? c : - c;
- s = 1.0 - t;
-
- }
-
- return ( sst3 * y1 ) + ( stt3 * y2 ) + ttt;
-
- }
-
- } );
-
- return MMDLoader;
-
-} )();
diff --git a/spaces/banana-projects/web3d/node_modules/three/src/renderers/webgl/WebGLInfo.d.ts b/spaces/banana-projects/web3d/node_modules/three/src/renderers/webgl/WebGLInfo.d.ts
deleted file mode 100644
index dd13fc337c6aba38e62e8addf70ade973a82cd56..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/src/renderers/webgl/WebGLInfo.d.ts
+++ /dev/null
@@ -1,21 +0,0 @@
-import { WebGLProgram } from './WebGLProgram';
-
-/**
- * An object with a series of statistics about the graphics board memory and the rendering process.
- */
-export class WebGLInfo {
- autoReset: boolean;
- memory: {
- geometries: number;
- textures: number;
- };
- programs: WebGLProgram[] | null;
- render: {
- calls: number;
- frame: number;
- lines: number;
- points: number;
- triangles: number;
- };
- reset(): void;
-}
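-
-// Usage note: an instance of this class is exposed as WebGLRenderer.info; when autoReset
-// is false, reset() must be called manually, typically once per rendered frame.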
diff --git a/spaces/bankholdup/stylegan_petbreeder/e4e/models/encoders/__init__.py b/spaces/bankholdup/stylegan_petbreeder/e4e/models/encoders/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/beastboy/WizardLM-WizardCoder-15B-V1.0/README.md b/spaces/beastboy/WizardLM-WizardCoder-15B-V1.0/README.md
deleted file mode 100644
index 02b8847d9de3510004bde223f9c59535eeaac83d..0000000000000000000000000000000000000000
--- a/spaces/beastboy/WizardLM-WizardCoder-15B-V1.0/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: WizardLM WizardCoder 15B V1.0
-emoji: 🚀
-colorFrom: indigo
-colorTo: indigo
-sdk: static
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/bigjoker/stable-diffusion-webui/html/extra-networks-no-cards.html b/spaces/bigjoker/stable-diffusion-webui/html/extra-networks-no-cards.html
deleted file mode 100644
index 389358d6c4b383fdc3c5686e029e7b3b1ae9a493..0000000000000000000000000000000000000000
--- a/spaces/bigjoker/stable-diffusion-webui/html/extra-networks-no-cards.html
+++ /dev/null
@@ -1,8 +0,0 @@
-
-
Nothing here. Add some content to the following directories:
-
-
-{dirs}
-
-
-
diff --git a/spaces/bigjoker/stable-diffusion-webui/modules/sd_samplers.py b/spaces/bigjoker/stable-diffusion-webui/modules/sd_samplers.py
deleted file mode 100644
index 981702b85cbf5734bc42a3cbfeeedfc2db57b647..0000000000000000000000000000000000000000
--- a/spaces/bigjoker/stable-diffusion-webui/modules/sd_samplers.py
+++ /dev/null
@@ -1,47 +0,0 @@
-from modules import sd_samplers_compvis, sd_samplers_kdiffusion, shared
-
-# imports for functions that previously were here and are used by other modules
-from modules.sd_samplers_common import samples_to_image_grid, sample_to_image
-
-all_samplers = [
- *sd_samplers_kdiffusion.samplers_data_k_diffusion,
- *sd_samplers_compvis.samplers_data_compvis,
-]
-all_samplers_map = {x.name: x for x in all_samplers}
-
-samplers = []
-samplers_for_img2img = []
-samplers_map = {}
-
-
-def create_sampler(name, model):
- if name is not None:
- config = all_samplers_map.get(name, None)
- else:
- config = all_samplers[0]
-
- assert config is not None, f'bad sampler name: {name}'
-
- sampler = config.constructor(model)
- sampler.config = config
-
- return sampler
-
-
-def set_samplers():
- global samplers, samplers_for_img2img
-
- hidden = set(shared.opts.hide_samplers)
- hidden_img2img = set(shared.opts.hide_samplers + ['PLMS'])
-
- samplers = [x for x in all_samplers if x.name not in hidden]
- samplers_for_img2img = [x for x in all_samplers if x.name not in hidden_img2img]
-
- samplers_map.clear()
- for sampler in all_samplers:
- samplers_map[sampler.name.lower()] = sampler.name
- for alias in sampler.aliases:
- samplers_map[alias.lower()] = sampler.name
-
-
-set_samplers()
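-
-# Example usage (a sketch; assumes `sd_model` is an already-loaded model and that the
-# k-diffusion sampler table provides a sampler named 'Euler a'):
-#
-#   sampler = create_sampler('Euler a', sd_model)
-#   assert sampler.config.name == 'Euler a'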
diff --git a/spaces/bioriAsaeru/text-to-voice/Abacre Restaurant Point Of Sale Keygen Generator VERIFIED.md b/spaces/bioriAsaeru/text-to-voice/Abacre Restaurant Point Of Sale Keygen Generator VERIFIED.md
deleted file mode 100644
index f4ffa9eb5f02941e3d900574b22507e1b202ff07..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Abacre Restaurant Point Of Sale Keygen Generator VERIFIED.md
+++ /dev/null
@@ -1,86 +0,0 @@
-
-
Abacre Restaurant Point Of Sale Keygen Generator: What You Need to Know
-
Managing a restaurant business can be challenging and demanding. You need to handle various aspects of your operations, such as taking orders, billing, inventory, and reporting. You also need to keep your customers satisfied and loyal. To do all these tasks efficiently and effectively, you need reliable and powerful software that can help you streamline your workflow and improve your performance.
-
One program that claims to offer a complete solution for your restaurant management needs is Abacre Restaurant Point Of Sale. This is a new generation of restaurant management software for Windows that can handle everything from taking orders from patrons to billing and tax reports. It has a user-friendly interface optimized for high-speed input and error prevention, as well as secure and flexible authorization levels that allow you to use it on multiple computers.
Abacre Restaurant Point Of Sale also has customizable layouts for your guest bills that can accommodate any currencies, taxes, and gratuities. It can accept payments by cash, credit cards, or checks. It also has a comprehensive set of reports that can give you a complete picture of your restaurant operations and life cycles, such as menu consumption, reservation frequency, hours of high restaurant load, busiest tables, most active employees, payment methods, and automatic tax calculations.
-
With Abacre Restaurant Point Of Sale, you can standardize your entire restaurant management process and improve your serving speed. It is easy to install and use, and it has very affordable licensing options that allow you to use it in any environment from small family-owned restaurants to large chains.
-
However, Abacre Restaurant Point Of Sale is not cheap. The official price for this software is $149.99 per computer. That's why some people might be looking for a way to get it for free. One of the ways to do that is to use a keygen generator.
-
What is a keygen generator and how does it work?
-
A keygen generator is a crack tool that can generate serial numbers and activation codes for various software products. A serial number is a unique code that identifies a specific copy of a software product. An activation code is a code that verifies that the serial number is valid and unlocks the full features of the software product.
-
To use a keygen generator, you need to download it from a source that claims to provide it. Then you need to run it on your computer and select the software product that you want to crack. The keygen generator will then generate a serial number and an activation code for you. You need to copy these codes and paste them in the installation or activation window of the software product. If the codes are valid, you will be able to use the software product for free.
-
What is Abacre Restaurant Point Of Sale Keygen Generator?
-
Abacre Restaurant Point Of Sale Keygen Generator is a crack tool that can generate serial numbers and activation codes for Abacre Restaurant Point Of Sale. It is one of the most popular keygen generators for this software product on the internet. It claims to be able to generate valid codes that can bypass the activation process and let you use Abacre Restaurant Point Of Sale for free.
-
How to use Abacre Restaurant Point Of Sale Keygen Generator?
-
To use Abacre Restaurant Point Of Sale Keygen Generator, you need to follow these steps:
-
-
-
Download Abacre Restaurant Point Of Sale Keygen Generator from a reliable source.
-
Install Abacre Restaurant Point Of Sale on your computer.
-
Run Abacre Restaurant Point Of Sale Keygen Generator as administrator.
-
Select Abacre Restaurant Point Of Sale from the product list.
-
Click on Generate to get a serial number and an activation code.
-
Copy the serial number and paste it in the installation window of Abacre Restaurant Point Of Sale.
-
Click on Next and select I have an activation code from Abacre Corporation.
-
Copy the activation code and paste it in the activation window of Abacre Restaurant Point Of Sale.
-
Click on Next and enjoy your cracked software.
-
-
What are the risks of using Abacre Restaurant Point Of Sale Keygen Generator?
-
While using Abacre Restaurant Point Of Sale Keygen Generator might seem like an easy and convenient way to get Abacre Restaurant Point Of Sale for free, it also comes with some serious risks. Here are some of them:
-
-
You might be violating the intellectual property rights of Abacre Corporation by using their software without paying for it.
-
You might be exposing your computer to malware, viruses, or spyware that can harm your system or steal your personal information.
-
You might be compromising the performance and stability of your software by using a cracked version that might not work properly or have some bugs or errors.
-
You might be missing out on the latest updates, features, and support from Abacre Corporation by using an outdated version of their software.
-
-
What are the alternatives to using Abacre Restaurant Point Of Sale Keygen Generator?
-
If you want to use Abacre Restaurant Point Of Sale without risking your security, privacy, or quality of work, you have some alternatives to using Abacre Restaurant Point Of Sale Keygen Generator. Here are some of them:
-
-
You can buy a legitimate license of Abacre Restaurant Point Of Sale from Abacre Corporation or their authorized resellers.
-
You can sign up for a free trial of Abacre Restaurant Point Of Sale from Abacre Corporation's website and test it for 30 days.
-
You can use free or open-source software that performs functions similar to Abacre Restaurant Point Of Sale, such as Floreant POS, SambaPOS, or uniCenta oPOS.
-
-
Conclusion
-
Abacre Restaurant Point Of Sale Keygen Generator is a crack tool that can help you to get Abacre Restaurant Point Of Sale for free. This software is a powerful and versatile tool for restaurant management. However, using a crack tool also comes with some risks, such as legal, ethical, and technical problems that can outweigh the benefits of saving money. Therefore, it is better to use one of the alternatives mentioned above and enjoy your software without any worries.
-
What are the features of Abacre Restaurant Point Of Sale?
-
Abacre Restaurant Point Of Sale is a software that offers you a complete solution for your restaurant management needs. It has many features that can help you improve your customer service, increase your sales, optimize your inventory, monitor your performance, and save time and money. Here are some of the features of Abacre Restaurant Point Of Sale:
-
-
Ordering and billing: You can take orders from patrons using touch screens, keyboards, or mice. You can print guest bills or send them to kitchen printers or displays. You can accept payments by cash, credit cards, or checks. You can also offer loyalty points, gift cards, discounts, and promotions to your customers.
-
Reporting: You can generate various reports that show you a complete picture of your restaurant operations and life cycles. You can see your menu consumption, reservation frequency, hours of high restaurant load, busiest tables, most active employees, payment methods, and automatic tax calculations. You can also export your daily data to third-party inventory software.
-
Inventory: You can track your stock levels, ingredients, and suppliers. You can use weighted average, LIFO, or FIFO inventory calculation methods. You can create composite menu items with ingredients. You can also adjust your physical inventory and make purchase orders.
-
Accounting: You can use double-entry transactions journal to record your income and expenses. You can also use multi-currency support to handle different currencies.
-
Security: You can encrypt your database files and configuration files with password. You can also create different jobs and access levels for your staff. You can use employee swipe cards or barcode cards for quick login.
-
User interface: You can customize your user interface with different color themes, gradient buttons, and panels. You can use full-featured virtual keyboard for touch screens. You can also choose different kinds of views for menu items and categories: buttons, icons, list, and details.
-
-
How to get Abacre Restaurant Point Of Sale Keygen Generator?
-
To get Abacre Restaurant Point Of Sale Keygen Generator, follow the same steps listed above under "How to use Abacre Restaurant Point Of Sale Keygen Generator?"; the procedure is identical.
-
-
What are the testimonials of Abacre Restaurant Point Of Sale users?
-
Abacre Restaurant Point Of Sale has many satisfied users from different countries and backgrounds. They have shared their experiences and opinions about the software on various platforms, such as Capterra, SourceForge, and Abacre's website. Here are some of the testimonials of Abacre Restaurant Point Of Sale users:
-
-
"I'm the owner of a mediun bar, pizzeria and restaurant, I start with a free pos but is not affordable, after I bought the first licence from Abacre and I increased with 2 more. Affordable, easy to use, resonable price per licence and very good support, I really reccomend." - Roberto T., general manager in Thailand
-
-
-
"I deal with abacre software since 2006 ( 10 years now) and really I always satisfied with it and appreciate their after sales service and supporting .thanks for abacre." - Hesham I., operations manager in Egypt
-
-
-
"Abacre Restaurant Point of Sale is the pest program in pos system and all the time make updat to his silf." - Mohamed A., information technology in Kuwait
-
-
-
"I tried several different software options for my gastro pub, they all seemed to be written by people that have never worked in the catering industry. Simple things like splitting bills or moving orders from one table to another were not available. I have now used Abacre for 6 years, and will probably not change for anything else. The only drawback is that it operates only on Windows, which is a shame as most tablets work on Android (I believe they are working on an Android system) . It's easy to move the programme from an old computer to a new one, even I did it with help from their excellent help Dept, I could not get a thermal printer to work and they configured my computer from their end to get it working." - Lee M., Colombia
-
-
Conclusion
-
Abacre Restaurant Point Of Sale Keygen Generator is a crack tool that can help you to get Abacre Restaurant Point Of Sale for free. This software is a powerful and versatile tool for restaurant management. It has many features that can help you improve your customer service, increase your sales, optimize your inventory, monitor your performance, and save time and money. However, using a crack tool also comes with some risks, such as legal, ethical, and technical problems that can outweigh the benefits of saving money. Therefore, it is better to use one of the alternatives mentioned above and enjoy your software without any worries.
-
-
-
\ No newline at end of file
diff --git a/spaces/bioriAsaeru/text-to-voice/Cambiar La Funcion De Las Teclas Ctrl Y Fn Gua Completa Para Windows 10 y 11.md b/spaces/bioriAsaeru/text-to-voice/Cambiar La Funcion De Las Teclas Ctrl Y Fn Gua Completa Para Windows 10 y 11.md
deleted file mode 100644
index 5e50fa144f874e8147f6a42bb3d7018e5e6c68dd..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Cambiar La Funcion De Las Teclas Ctrl Y Fn Gua Completa Para Windows 10 y 11.md
+++ /dev/null
@@ -1,32 +0,0 @@
-
-
En el caso de los teclados para computadoras con Windows, usa la tecla Alt en lugar de la tecla Option y la tecla del logotipo de Windows en lugar de la tecla Command.Algunas teclas de algunos teclados Apple tienen símbolos y funciones especiales, por ejemplo, para el brillo de la pantalla , el brillo del teclado , y mucho más. Si estas funciones no están disponibles en el teclado, es posible que puedas reproducir algunas de ellas creando tus propias funciones rápidas de teclado. Para usar estas teclas como F1, F2, F3 u otras teclas de función estándar, combínalas con la tecla Fn.
Las teclas de F, que están en la parte superior del teclado, sirven normalmente como acceso directo para controlar ciertas funciones del hardware. Así es como vienen predeterminadas en muchos casos, sobre todo si lo que estamos utilizando es un ordenador portátil.
-
La manera que vamos a enseñar a continuación de cómo podemos cambiar las teclas de función, suele funcionar en varias marcas de ordenador, como en los Dell. pero sí que es cierto que hay ciertas marcas en las que este método no está operativo.
-
También podemos intentar cambiar las teclas de función por medio de la UEFI. Ocurre igual que en el caso anterior, no todos tendremos los menús completamente exactos, dependerá de la marca del ordenador.
-
-
Importante: En función del teclado, puedes presionar la tecla de Búsqueda o la del Selector para realizar algunas combinaciones de teclas. Ambas teclas funcionan de la misma manera.
-
Me gustaría cambiar el Fn y Ctrl en mi ThinkPad W500 (como muchos otros! Mira..: ¿Cómo puedo cambiar las teclas de función y control de mi portátil? y Interceptar la tecla Fn en los portátiles )
-
Sin entrenar mi meñique, estoy considerando quitar el teclado y resolver las conexiones para intercambiar esas teclas.
Me encantaría recibir alguna aportación en cuanto a root de los problemas técnicos y las posibles soluciones aquí.
-
¡El cambio de bios para las teclas de función y ctrl ya está implementado por Lenovo! Si tienes un portátil Lenovo más reciente (el mío es un thinkpad x201), puedes encontrar la opción para cambiarlas en las opciones de "configurar el teclado y el ratón". (En mi portátil, accedo a la Bios pulsando el botón azul "ThinkVantage" mientras el ordenador se está iniciando).
-
Tu última frase me ha hecho reír :) En cuanto a tu afirmación "la tecla Fn no genera ningún código de escaneo", creo que en realidad sí lo hace (ver arriba - 57443) genera un código de escaneo de hardware. No sólo eso, sino que utilicé con éxito KeyTweak para asignar Fn a Ctrl y en una base de una sola tecla funcionó de forma idéntica; Windows, de hecho, lo vio. Lo que no parece hacer es generar un valor único de pulsación de teclas ASCII y/o soportar pulsaciones de teclas junto con otra tecla (por ejemplo, Ctrl+c) que requieren un código ASCII único para el combo.
-
Las teclas de acceso rápido del teclado ASUS se pueden usar con la tecla Fn para proporcionar un acceso rápido a ciertas funciones y cambiar entre ciertas funciones. Puede activar la función de teclas de acceso rápido presionando y manteniendo presionadas en combinación con las teclas de acceso rápido (F1 ~ F12).
-
Al seleccionar la opción Teclas de acceso rápido, puede obtener funciones de teclas de acceso rápidopresionando F1-F12. Además, aún puede acceder a las funciones F1-F12 presionando Fn y F1 - F12.
-
Al seleccionar la opción F1-F12, puede obtener funciones F1-F12presionando F1-F12. Aún puede acceder a las funciones de teclas de acceso rápido presionando Fn y F1-F12.
-
A continuación se explica el procedimiento para cambiar el [Modo manejo] en el menú de funciones de imagen fija a [Vis. lín. cuadrícul.]. Para cambiar el menú de funciones de película, seleccione un elemento que quiera cambiar en el menú de funciones de película.
-
La tecla Fn es conocida, por norma general, solo por los usuarios de ordenadores portátiles, pues se encuentra principalmente en teclados pequeños. La razón radica en su función: con la combinación de teclas correspondiente, se activan las asignaciones alternativas de otras teclas. Esta asignación múltiple es necesaria sobre todo en los teclados pequeños de portátiles para poder ofrecer todas las funciones habituales. Te explicamos qué comandos se pueden ejecutar con la tecla Fn y cómo activarla y desactivarla.
-
La tecla Función está marcada con la abreviatura Fn en los teclados de ordenador. Funciona de forma similar a las teclas BloqMayús y AltGr, que permiten acceder a asignaciones secundarias o terciarias en todo tipo de teclados. Del mismo modo, con la tecla Fn también se accede a asignaciones secundarias, aunque esta tecla se encuentra principalmente en los teclados de los ordenadores portátiles. Gracias a la asignación múltiple es posible tener acceso a un sinfín de funciones sin necesidad de una tecla adicional.
-
Si activas la tecla Fn, esta modifica la función de diferentes teclas. Dependiendo del fabricante y del modelo del teclado, los usos secundarios que se activan son distintos. A continuación, te mostramos un resumen de las funciones más comunes:
-
Navegación: si dispones de un teclado numérico, también tienes disponible una asignación secundaria para la navegación en documentos. Las funciones se indican en las teclas de la siguiente forma:
-
Presionando simultáneamente la tecla Fn y las teclas con asignaciones secundarias, accedes a las funciones adicionales de estas. Si, durante periodos largos, utilizas principalmente las asignaciones secundarias, es recomendable activar la tecla Fn. De esta forma, te ahorras tener que estar presionando varias teclas a la vez, lo cual es especialmente práctico para utilizar el teclado numérico opcional. En muchos modelos de ordenador portátil, la tecla Fn está activada por defecto, pero las asignaciones secundarias más utilizadas son las que se asocian con los ajustes de sistema habituales. También en este caso merece la pena activar la tecla Fn.
-
En macOS, mantenga pulsada la tecla FN junto con la tecla de función (F1-F12) para anular funciones de macOS como brillo de pantalla, volumen, etc. Para obtener más información sobre el comportamiento de las teclas de función en su Mac, consulte los siguientes documentos de Apple:
-
In this chapter we introduce VoiceOver, the advanced screen-reading technology built into the Mac OS X operating system. VoiceOver lets users with visual impairments control the computer through a rich set of gestures and keyboard commands. This chapter provides an overview of VoiceOver and its main topics, such as the current focus and the VoiceOver cursor, keyboard shortcuts, and the use of the function keys on some keyboards.
-
The function keys are located above the number keys, at the top of the keyboard. On some keyboards, many of the function keys are programmed to perform hardware-related actions, such as adjusting the volume, muting the sound, and controlling screen brightness. If your keyboard has an Fn (Function) key, you have to press the Fn key and the function key together to use the function key for other actions. If you use VoiceOver all the time or very often, you can change the default behavior of the function keys so that they perform software actions. That way, you only have to press the Fn key to change the volume or other hardware settings.
-
This can end up being somewhat annoying for veteran and advanced users who take it for granted that the functionality offered by Fn is secondary, and that the main thing is for the function keys to do what they were conceived to do decades ago. Fortunately, this can be changed, in the worst case easily, by entering the laptop's BIOS/UEFI.
-
For example, the F11 and F12 keys raise and lower the sound volume without having to press Fn. And as we have already said, pressing Fn provides access to the secondary actions supported by the system or by the application in front of the user. The user can change this mode of operation through the macOS keyboard settings.
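As a sketch of that macOS setting from the command line: the global default below (com.apple.keyboard.fnState) is the key behind the "Use F1, F2, etc. keys as standard function keys" checkbox. It is wrapped in Python here for consistency with the other examples; log out and back in for the change to apply everywhere.

```python
import subprocess

# Make F1-F12 behave as standard function keys; hardware actions
# (brightness, volume, ...) then require holding Fn.
subprocess.run(
    ["defaults", "write", "-g", "com.apple.keyboard.fnState", "-bool", "true"],
    check=True,
)
```

Writing `false` instead restores the default behavior.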
-
Don't get this wrong: because the Fn key is standardized, it normally works correctly on Linux, especially for changing the sound volume and screen brightness, enabling or disabling Bluetooth and Wi-Fi, and toggling the touchpad, as well as for quick display configuration and suspend.
-
However, the user may find, especially on a Windows laptop whose operating system has been replaced, that some key combinations do not work or have changed position. One example is what happens to yours truly with an Acer Aspire 5 A515-54-735N laptop, on which the brightness-adjustment keys are inverted. Luckily they work fine and I can raise and lower the brightness without problems, so this remains a mere anecdote.
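On Linux, swapped or dead Fn combinations like this can often be corrected in user space with a udev hwdb override instead of the BIOS. The sketch below is illustrative only: the DMI match string and the two scan codes (0xce/0xcf) are hypothetical placeholders, so query the real values first (for example with `sudo evtest`), and run the script as root.

```python
import subprocess
from pathlib import Path

# Hypothetical override that swaps two brightness scan codes (0xce/0xcf
# are placeholders). The first line matches the internal keyboard by DMI
# vendor/product; the indented lines reassign the key codes.
HWDB = """\
evdev:atkbd:dmi:bvn*:bvr*:bd*:svnAcer*:pnAspireA515-54*
 KEYBOARD_KEY_ce=brightnessdown
 KEYBOARD_KEY_cf=brightnessup
"""

Path("/etc/udev/hwdb.d/90-fn-brightness-swap.hwdb").write_text(HWDB)
subprocess.run(["systemd-hwdb", "update"], check=True)
subprocess.run(["udevadm", "trigger", "--sysname-match=event*"], check=True)
print("hwdb override installed; the keys should be re-read immediately.")
```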
-
Personally I tried everything with my laptop, which is a Dell, and nothing worked, until I discovered that pressing Fn plus Esc (which has a padlock symbol on it) did the trick, and SURPRISE, now I can use the F keys without needing Fn. I hope this helps someone.
-
Hello, good morning. I have a Dell XPS M1530 laptop and, out of nowhere, the B and N keys, the space bar, the left, right, and down cursor arrows (though up still works), the Function key, and the volume keys all stopped working. I'm using the on-screen keyboard, but it's getting uncomfortable. What is your opinion, and what help could you offer? Could it be the BIOS?
-
-
\ No newline at end of file
diff --git a/spaces/bioriAsaeru/text-to-voice/Crack Volleyball Scoreboard Pro 2 0 2l Download and Install the Best Software for Your Scoreboard.md b/spaces/bioriAsaeru/text-to-voice/Crack Volleyball Scoreboard Pro 2 0 2l Download and Install the Best Software for Your Scoreboard.md
deleted file mode 100644
index 87d0a9c1474c21b837299d9026d253c41b4c49a3..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Crack Volleyball Scoreboard Pro 2 0 2l Download and Install the Best Software for Your Scoreboard.md
+++ /dev/null
@@ -1,14 +0,0 @@
-
-
-
-
-
One big twist with soccer in Nintendo Switch Sports is the Golden Ball that appears usually in the ending moments of matches. Scoring with the Golden Ball is worth two points rather than one, which can very quickly shake up the scoreboard.
-
The way a game plays out is generally the same: during the first 60 seconds, the mice and the cat can safely scout the house and collect up to 5 Experience Cakes, which give a maximum boost of about 4 levels using robots. Robot mice can also pick up an item. However, the robot mice are still vulnerable to cat attacks, and if an attacked mouse is holding an Experience Cake or an item, it drops it, and the mice lose the level boost if it is left unclaimed. Cats, on the other hand, try to stop the robot mice by destroying them or taking the cake from them, which gives the cat a maximum boost of about 4 levels. When robot mice are near, the cat gains a movement and jump speed bonus, which makes finding and damaging robot mice more effective. Once 60 seconds have passed or all 4 robot mice are destroyed, the game starts. The player controlling the cat has ten minutes to chase down the four mice and tie them to a rocket, while those controlling the mice have to push 5 wedges of cheese, rescue teammates, and break the wall crack to escape. After 5 wedges of cheese are pushed through, there are 10 seconds during which the wall crack can be located on the map but cannot be accessed. During the wall-crack phase, rockets burn 4x faster. If either side achieves its goal before the other does, or before time runs out, it wins. The cat also wins if time runs out.
-
A mouse team usually consists of 4 players, who have to cooperate. Their main goal is to push 5 wedges of cheese into mouseholes and break the wall crack to escape without getting caught by the cat. If 2 mice make it through the wall crack, the game ends in the mice's victory; the mouse team loses once 3 mice have been dispatched and only 1 mouse remains. If all the mice are either in a weakened state or tied to rockets, they can surrender by majority vote to end the game immediately, though this counts as a defeat.
-
The cat has to catch and dispatch 3 mice by tying them onto rockets, which leads to victory. The other way to win is to stop the mice from pushing cheese and entering the wall crack until the timer runs out. Once the wall crack has opened, the cat can surrender to end the game immediately, though this counts as a defeat.
-
-
\ No newline at end of file
diff --git a/spaces/bioriAsaeru/text-to-voice/Cricket Coach 2014 Serial Code Free Enjoy the Ultimate Cricket Game for PC and Mac.md b/spaces/bioriAsaeru/text-to-voice/Cricket Coach 2014 Serial Code Free Enjoy the Ultimate Cricket Game for PC and Mac.md
deleted file mode 100644
index b10b65909f0d8dfa0f499e700c13e01c706707cc..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Cricket Coach 2014 Serial Code Free Enjoy the Ultimate Cricket Game for PC and Mac.md
+++ /dev/null
@@ -1,16 +0,0 @@
-
-
This is a cricket simulation game developed by Rockingham Software Limited. With the inclusion of new strategies and tricks, the developers have raised the game to a whole new level. It has been considered the most complete cricket game, with stats available for domestic teams and some Associate members of the ICC, such as the UAE, the Netherlands, and Namibia. Cricket Coach 2014 comes with many new improvements and has been made more user-friendly. Users who are crazy about stats and figures related to cricket teams and players will find this game very handy. Some new tactics and tricks have been included in this version, which have made Cricket Coach 2014 an irresistible product. New stadiums with graphical details have been included, and the crowd is animated and more realistic. Another plus of Cricket Coach 2014 is that you can save an unfinished tournament or game at any time. You can also export data from this game.
-
Overall, Cricket Coach 2014 is one heck of a game, one that will blow your mind with its stunning visuals and details. Cricket 07 and Cricket 2013 lovers should try this game. This game is made for cricketing people.
Windows License Key Dump is a free command-line tool to recover the product/serial keys of all versions of Windows, including the new Windows 10, and of 200+ other popular programs. It automatically detects and decrypts the license/serial keys of over 200 popular programs, including Office, SQL Server, Adobe, Nero, and many more.
-
You can also download MMA Team Manager.
-
Zaheer Khan (born 8 October 1978) is an Indian former professional cricketer who played all forms of the game for the Indian national team from 2000 till 2014. He is a fast-medium left-arm bowler. He was the second-most successful Indian pace bowler in Test cricket, behind Kapil Dev. Zaheer Khan started his domestic career by playing for Baroda. In the early years of his career, Zaheer Khan was known for his hostile seam and pace bowling, especially fast inch-perfect yorkers. He is often considered one of the best Indian fast bowlers.
-
An additional plus of Cricket Coach 2014 is that you can save an unfinished event or game at any time. Add yourself to the game, improve your favorite player, or weaken the players you do not rate in the real world.
-
Another interesting element is the current tour schedule, which lets you play cricket series as they happen in reality. Cricket Coach 2014 also includes the real-world grounds for the 2013 worldwide fixtures. A fine example of this is the 4th Ashes Test at Durham. It is a cricket simulation created specifically for Windows and Mac.
-
International Cricket Captain 2009 is a game in which you have the chance to both captain and manage a team. Click the button below to start the Cricket Coach 2014 free download and install. We have provided a direct link to the full setup of the game. Detailed batting and bowling tactics allow you to control your team on the field. Use your cricketing expertise to help your team succeed. This item has been removed from the community because it violates the Steam Community & Content Guidelines. If you believe your item has been removed by mistake, please contact Steam Support.
-
-
-
-
Belichick and Brady have both claimed ignorance of the matter. But the rhetoric in the US is now that one of the greatest coaches in NFL history, and one of the greatest players, are both serial cheats. The burning question is, if indeed it is proven to be true, how long have they been engaged in this practice?
-
-
\ No newline at end of file
diff --git a/spaces/bioriAsaeru/text-to-voice/Kid Pix Software Free Download For Windows 7.md b/spaces/bioriAsaeru/text-to-voice/Kid Pix Software Free Download For Windows 7.md
deleted file mode 100644
index a67f399f5d256f409dbf109fe6bde7040ca55533..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Kid Pix Software Free Download For Windows 7.md
+++ /dev/null
@@ -1,66 +0,0 @@
-
-
-"Telling a story" lets a child place his hand on the screen to change the story in order to make it easier to understand. The narration remains in the background so children can move on to other activities while listening to the story.
-
-Experiments with 3D effects
-
-Developed for tablets, including those with 3D support.
-
-Animation effects such as "Spiral", "Blast", "Storm", "Fire" and "Particle"
-
-Animated scrolling effects.
-
-Video wallpapers with original animation and great backgrounds
-
-"3D Time Warp" - give the story an unusual start.
-
-Music is a wonderful combination for children. Each of the stories comes with a musical theme.
-
-If you want to change the theme of the child's stories, go to:
-
-Account Settings > Options >3D > Change Theme
-
-Note:
-
-This tool is paid and requires registration.
-
-0.5.2:
-
-Fixed the bug that prevented the automatic detection of the list of the stories.
-
-When you close a window and open it again, you can change the theme and the stories.
-
-0.5.1:
-
-Fixed the bug that prevented the automatic detection of the stories.
-
-0.5.0:
-
-0.4.4:
-
-0.4.3:
-
-0.4.2:
-
-0.4.1:
-
-0.4.0:
-
-Added a 3D effect called "Ripple" that allows the stories to be told at a higher rate.
-
-Added music to "Particle".
-
-Added visual effects for the stories.
-
-Added a feature that lets you share stories and favorite stories with friends.
-
-Added a feature that allows you to make the child's favorite stories appear more frequently.
-
-Improved accessibility.
-
-Added the stories to the list of favorites.
-
-
-
-
diff --git a/spaces/bioriAsaeru/text-to-voice/Kvs Pgt Computer Science Books Pdf Free HOT 15.md b/spaces/bioriAsaeru/text-to-voice/Kvs Pgt Computer Science Books Pdf Free HOT 15.md
deleted file mode 100644
index d3dcd0519dd17ed3a93b4cda653ad9307cbabe8b..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Kvs Pgt Computer Science Books Pdf Free HOT 15.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-
-
-
diff --git a/spaces/cihyFjudo/fairness-paper-search/Nitro Pro 12 Serial Number 2019 Crack Keygen Why You Need It and Where to Get It.md b/spaces/cihyFjudo/fairness-paper-search/Nitro Pro 12 Serial Number 2019 Crack Keygen Why You Need It and Where to Get It.md
deleted file mode 100644
index 59048162ae90fa044954ab2da4f12ab0a988d2b9..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Nitro Pro 12 Serial Number 2019 Crack Keygen Why You Need It and Where to Get It.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-
-
-
diff --git a/spaces/cihyFjudo/fairness-paper-search/Text Bridge Pro 9.iso Full Versionl.md b/spaces/cihyFjudo/fairness-paper-search/Text Bridge Pro 9.iso Full Versionl.md
deleted file mode 100644
index 2b4c604af43333d1d33758740a29d4dd26ab52fa..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Text Bridge Pro 9.iso Full Versionl.md
+++ /dev/null
@@ -1,5 +0,0 @@
-
-
These are simpler and more compact than DSLRs because they lack a lens reflex system. MILCs, or mirrorless cameras for short, come with various sensor sizes depending on the brand and manufacturer. These include: a small 1/2.3-inch sensor, as commonly used in bridge cameras, such as the original Pentax Q (more recent Pentax Q versions have a slightly larger 1/1.7-inch sensor); a 1-inch sensor; a Micro Four Thirds sensor; an APS-C sensor, found in the Sony NEX series and α "DSLR-likes", the Fujifilm X series, the Pentax K-01, and the Canon EOS M; and some, such as the Sony α7, use a full-frame (35 mm) sensor, with the Hasselblad X1D being the first medium-format mirrorless camera. Some MILCs have a separate electronic viewfinder to compensate for the lack of an optical one. In other cameras, the back display is used as the primary viewfinder in the same way as in compact cameras. One disadvantage of mirrorless cameras compared to a typical DSLR is their battery life, due to the energy consumption of the electronic viewfinder, but this can be mitigated by a setting inside the camera in some models.[44]
-
-
\ No newline at end of file
diff --git a/spaces/clem/comparing-captioning-models/README.md b/spaces/clem/comparing-captioning-models/README.md
deleted file mode 100644
index 2c7b6de73fa3a62afe0d0895177cbfe7e1ac0091..0000000000000000000000000000000000000000
--- a/spaces/clem/comparing-captioning-models/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Comparing Captioning Models
-emoji: 🔥
-colorFrom: yellow
-colorTo: pink
-sdk: gradio
-sdk_version: 3.15.0
-app_file: app.py
-pinned: false
-duplicated_from: nielsr/comparing-captioning-models
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/feaLib/variableScalar.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/feaLib/variableScalar.py
deleted file mode 100644
index c97b4354298d7c933fa812084a71a4b6c1ac32b8..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/feaLib/variableScalar.py
+++ /dev/null
@@ -1,112 +0,0 @@
-from fontTools.varLib.models import VariationModel, normalizeValue, piecewiseLinearMap
-
-
-def Location(loc):
- return tuple(sorted(loc.items()))
-
-
-class VariableScalar:
- """A scalar with different values at different points in the designspace."""
-
- def __init__(self, location_value={}):
- self.values = {}
- self.axes = {}
- for location, value in location_value.items():
- self.add_value(location, value)
-
- def __repr__(self):
- items = []
- for location, value in self.values.items():
- loc = ",".join(["%s=%i" % (ax, loc) for ax, loc in location])
- items.append("%s:%i" % (loc, value))
- return "(" + (" ".join(items)) + ")"
-
- @property
- def does_vary(self):
- values = list(self.values.values())
- return any(v != values[0] for v in values[1:])
-
- @property
- def axes_dict(self):
- if not self.axes:
- raise ValueError(
- ".axes must be defined on variable scalar before interpolating"
- )
- return {ax.axisTag: ax for ax in self.axes}
-
- def _normalized_location(self, location):
- location = self.fix_location(location)
- normalized_location = {}
- for axtag in location.keys():
- if axtag not in self.axes_dict:
- raise ValueError("Unknown axis %s in %s" % (axtag, location))
- axis = self.axes_dict[axtag]
- normalized_location[axtag] = normalizeValue(
- location[axtag], (axis.minValue, axis.defaultValue, axis.maxValue)
- )
-
- return Location(normalized_location)
-
- def fix_location(self, location):
- location = dict(location)
- for tag, axis in self.axes_dict.items():
- if tag not in location:
- location[tag] = axis.defaultValue
- return location
-
- def add_value(self, location, value):
- if self.axes:
- location = self.fix_location(location)
-
- self.values[Location(location)] = value
-
- def fix_all_locations(self):
- self.values = {
- Location(self.fix_location(l)): v for l, v in self.values.items()
- }
-
- @property
- def default(self):
- self.fix_all_locations()
- key = Location({ax.axisTag: ax.defaultValue for ax in self.axes})
- if key not in self.values:
- raise ValueError("Default value could not be found")
- # I *guess* we could interpolate one, but I don't know how.
- return self.values[key]
-
- def value_at_location(self, location, model_cache=None, avar=None):
- loc = location
- if loc in self.values.keys():
- return self.values[loc]
- values = list(self.values.values())
- return self.model(model_cache, avar).interpolateFromMasters(loc, values)
-
- def model(self, model_cache=None, avar=None):
- if model_cache is not None:
- key = tuple(self.values.keys())
- if key in model_cache:
- return model_cache[key]
- locations = [dict(self._normalized_location(k)) for k in self.values.keys()]
- if avar is not None:
- mapping = avar.segments
- locations = [
- {
- k: piecewiseLinearMap(v, mapping[k]) if k in mapping else v
- for k, v in location.items()
- }
- for location in locations
- ]
- m = VariationModel(locations)
- if model_cache is not None:
- model_cache[key] = m
- return m
-
- def get_deltas_and_supports(self, model_cache=None, avar=None):
- values = list(self.values.values())
- return self.model(model_cache, avar).getDeltasAndSupports(values)
-
- def add_to_variation_store(self, store_builder, model_cache=None, avar=None):
- deltas, supports = self.get_deltas_and_supports(model_cache, avar)
- store_builder.setSupports(supports)
- index = store_builder.storeDeltas(deltas)
- return int(self.default), index
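For orientation, here is a minimal usage sketch of the class above. It assumes fontTools is installed; the fvar-style `Axis` is just a convenient stand-in for any object exposing `axisTag`, `minValue`, `defaultValue`, and `maxValue`, and the values are invented.

```python
from fontTools.ttLib.tables._f_v_a_r import Axis
from fontTools.feaLib.variableScalar import VariableScalar

# A hypothetical weight axis: min 100, default 400, max 900.
wght = Axis()
wght.axisTag = "wght"
wght.minValue, wght.defaultValue, wght.maxValue = 100, 400, 900

scalar = VariableScalar()
scalar.axes = [wght]
scalar.add_value({"wght": 100}, 20)  # master values at three designspace points
scalar.add_value({"wght": 400}, 30)
scalar.add_value({"wght": 900}, 50)

print(scalar.does_vary)  # True: the masters do not all share one value
print(scalar.default)    # 30: the value at the axis default location
deltas, supports = scalar.get_deltas_and_supports()  # ready for a variation store
```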
diff --git a/spaces/codeparrot/codeparrot-subspace/app.py b/spaces/codeparrot/codeparrot-subspace/app.py
deleted file mode 100644
index 896048b5aa4a45a0c00f7fea3d8769115a92bd60..0000000000000000000000000000000000000000
--- a/spaces/codeparrot/codeparrot-subspace/app.py
+++ /dev/null
@@ -1,57 +0,0 @@
-import gradio as gr
-from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed, pipeline
-
-#https://huggingface.co/spaces/lvwerra/codeparrot-generation
-
-title = "CodeParrot Generator 🦜"
-description = "This is a subspace to make code generation with [CodeParrot](https://huggingface.co/lvwerra/codeparrot), it is used in a larger [space](https://huggingface.co/spaces/loubnabnl/Code-generation-models-v1) for model comparison. For more flexibilty in sampling, you can find another demo for CodeParrot [here](https://huggingface.co/spaces/lvwerra/codeparrot-generation)."
-example = [
- ["def print_hello_world():", 8, 0.6, 42],
- ["def get_file_size(filepath):", 40, 0.6, 42],
- ["def count_lines(filename):", 40, 0.6, 42],
- ["def count_words(filename):", 40, 0.6, 42]]
-tokenizer = AutoTokenizer.from_pretrained("codeparrot/codeparrot")
-model = AutoModelForCausalLM.from_pretrained("codeparrot/codeparrot", low_cpu_mem_usage=True)
-
-
-def code_generation(gen_prompt, max_tokens, temperature=0.6, seed=42):
- set_seed(seed)
- pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
- generated_text = pipe(gen_prompt, do_sample=True, top_p=0.95, temperature=temperature, max_new_tokens=max_tokens)[0]['generated_text']
- return generated_text
-
-
-iface = gr.Interface(
- fn=code_generation,
- inputs=[
- gr.Textbox(lines=10, label="Input code"),
- gr.inputs.Slider(
- minimum=8,
- maximum=256,
- step=1,
- default=8,
- label="Number of tokens to generate",
- ),
- gr.inputs.Slider(
- minimum=0,
- maximum=2,
- step=0.1,
- default=0.6,
- label="Temperature",
- ),
- gr.inputs.Slider(
- minimum=0,
- maximum=1000,
- step=1,
- default=42,
- label="Random seed to use for the generation"
- )
- ],
- outputs=gr.Textbox(label="Predicted code", lines=10),
- examples=example,
- layout="horizontal",
- theme="peach",
- description=description,
- title=title
-)
-iface.launch()
\ No newline at end of file
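One detail worth a sketch here: the `code_generation` function above rebuilds the text-generation pipeline on every request. A hedged refactor, assuming the same `transformers` API, constructs the pipeline once at start-up so that each call only pays for generation:

```python
from transformers import pipeline, set_seed

# Build the pipeline once at import time instead of on every request.
pipe = pipeline("text-generation", model="codeparrot/codeparrot")

def code_generation(gen_prompt, max_tokens, temperature=0.6, seed=42):
    set_seed(seed)
    out = pipe(gen_prompt, do_sample=True, top_p=0.95,
               temperature=temperature, max_new_tokens=max_tokens)
    return out[0]["generated_text"]
```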
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aac_adtstoasc_bsf.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aac_adtstoasc_bsf.c
deleted file mode 100644
index dd5e8b2a31f13a769d89a9c715a2a0e38a3c2808..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aac_adtstoasc_bsf.c
+++ /dev/null
@@ -1,158 +0,0 @@
-/*
- * MPEG-2/4 AAC ADTS to MPEG-4 Audio Specific Configuration bitstream filter
- * Copyright (c) 2009 Alex Converse
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#include "adts_header.h"
-#include "adts_parser.h"
-#include "bsf.h"
-#include "bsf_internal.h"
-#include "put_bits.h"
-#include "get_bits.h"
-#include "mpeg4audio.h"
-#include "mpeg4audio_copy_pce.h"
-
-typedef struct AACBSFContext {
- int first_frame_done;
-} AACBSFContext;
-
-/**
- * This filter creates an MPEG-4 AudioSpecificConfig from an MPEG-2/4
- * ADTS header and removes the ADTS header.
- */
-static int aac_adtstoasc_filter(AVBSFContext *bsfc, AVPacket *pkt)
-{
- AACBSFContext *ctx = bsfc->priv_data;
-
- GetBitContext gb;
- PutBitContext pb;
- AACADTSHeaderInfo hdr;
- int ret;
-
- ret = ff_bsf_get_packet_ref(bsfc, pkt);
- if (ret < 0)
- return ret;
-
- if (bsfc->par_in->extradata && pkt->size >= 2 && (AV_RB16(pkt->data) >> 4) != 0xfff)
- return 0;
-
- if (pkt->size < AV_AAC_ADTS_HEADER_SIZE)
- goto packet_too_small;
-
- init_get_bits(&gb, pkt->data, AV_AAC_ADTS_HEADER_SIZE * 8);
-
- if (ff_adts_header_parse(&gb, &hdr) < 0) {
- av_log(bsfc, AV_LOG_ERROR, "Error parsing ADTS frame header!\n");
- ret = AVERROR_INVALIDDATA;
- goto fail;
- }
-
- if (!hdr.crc_absent && hdr.num_aac_frames > 1) {
- avpriv_report_missing_feature(bsfc,
- "Multiple RDBs per frame with CRC");
- ret = AVERROR_PATCHWELCOME;
- goto fail;
- }
-
- pkt->size -= AV_AAC_ADTS_HEADER_SIZE + 2 * !hdr.crc_absent;
- if (pkt->size <= 0)
- goto packet_too_small;
- pkt->data += AV_AAC_ADTS_HEADER_SIZE + 2 * !hdr.crc_absent;
-
- if (!ctx->first_frame_done) {
- int pce_size = 0;
- uint8_t pce_data[MAX_PCE_SIZE];
- uint8_t *extradata;
-
- if (!hdr.chan_config) {
- init_get_bits(&gb, pkt->data, pkt->size * 8);
- if (get_bits(&gb, 3) != 5) {
- avpriv_report_missing_feature(bsfc,
- "PCE-based channel configuration "
- "without PCE as first syntax "
- "element");
- ret = AVERROR_PATCHWELCOME;
- goto fail;
- }
- init_put_bits(&pb, pce_data, MAX_PCE_SIZE);
- pce_size = ff_copy_pce_data(&pb, &gb) / 8;
- flush_put_bits(&pb);
- pkt->size -= get_bits_count(&gb)/8;
- pkt->data += get_bits_count(&gb)/8;
- }
-
- extradata = av_packet_new_side_data(pkt, AV_PKT_DATA_NEW_EXTRADATA,
- 2 + pce_size);
- if (!extradata) {
- ret = AVERROR(ENOMEM);
- goto fail;
- }
-
- init_put_bits(&pb, extradata, 2 + pce_size);
- put_bits(&pb, 5, hdr.object_type);
- put_bits(&pb, 4, hdr.sampling_index);
- put_bits(&pb, 4, hdr.chan_config);
- put_bits(&pb, 1, 0); //frame length - 1024 samples
- put_bits(&pb, 1, 0); //does not depend on core coder
- put_bits(&pb, 1, 0); //is not extension
- flush_put_bits(&pb);
- if (pce_size) {
- memcpy(extradata + 2, pce_data, pce_size);
- }
-
- ctx->first_frame_done = 1;
- }
-
- return 0;
-
-packet_too_small:
- av_log(bsfc, AV_LOG_ERROR, "Input packet too small\n");
- ret = AVERROR_INVALIDDATA;
-fail:
- av_packet_unref(pkt);
- return ret;
-}
-
-static int aac_adtstoasc_init(AVBSFContext *ctx)
-{
- /* Validate the extradata if the stream is already MPEG-4 AudioSpecificConfig */
- if (ctx->par_in->extradata) {
- MPEG4AudioConfig mp4ac;
- int ret = avpriv_mpeg4audio_get_config2(&mp4ac, ctx->par_in->extradata,
- ctx->par_in->extradata_size, 1, ctx);
- if (ret < 0) {
- av_log(ctx, AV_LOG_ERROR, "Error parsing AudioSpecificConfig extradata!\n");
- return ret;
- }
- }
-
- return 0;
-}
-
-static const enum AVCodecID codec_ids[] = {
- AV_CODEC_ID_AAC, AV_CODEC_ID_NONE,
-};
-
-const FFBitStreamFilter ff_aac_adtstoasc_bsf = {
- .p.name = "aac_adtstoasc",
- .p.codec_ids = codec_ids,
- .priv_data_size = sizeof(AACBSFContext),
- .init = aac_adtstoasc_init,
- .filter = aac_adtstoasc_filter,
-};
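As a cross-check of the extradata that `aac_adtstoasc_filter` writes above (the PCE-less case), here is a small sketch of the same 2-byte AudioSpecificConfig packing; the example values are the standard ones for AAC-LC at 44.1 kHz stereo.

```python
def asc_bytes(object_type: int, sampling_index: int, chan_config: int) -> bytes:
    # 5 bits audioObjectType, 4 bits samplingFrequencyIndex,
    # 4 bits channelConfiguration, then three zero flag bits
    # (1024-sample frames, no core-coder dependency, not an extension).
    bits = (object_type << 11) | (sampling_index << 7) | (chan_config << 3)
    return bits.to_bytes(2, "big")

# AAC-LC (object type 2), 44.1 kHz (index 4), stereo (2) -> b'\x12\x10'
assert asc_bytes(2, 4, 2) == b"\x12\x10"
```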
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aacsbr.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aacsbr.h
deleted file mode 100644
index d70b19e11c68c665cd995d336163e0d5eb361600..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aacsbr.h
+++ /dev/null
@@ -1,96 +0,0 @@
-/*
- * AAC Spectral Band Replication function declarations
- * Copyright (c) 2008-2009 Robert Swain ( rob opendot cl )
- * Copyright (c) 2010 Alex Converse
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-/**
- * @file
- * AAC Spectral Band Replication function declarations
- * @author Robert Swain ( rob opendot cl )
- */
-
-#ifndef AVCODEC_AACSBR_H
-#define AVCODEC_AACSBR_H
-
-#include "get_bits.h"
-#include "aac.h"
-#include "sbr.h"
-
-#define ENVELOPE_ADJUSTMENT_OFFSET 2
-#define NOISE_FLOOR_OFFSET 6
-
-/**
- * SBR VLC tables
- */
-enum {
- T_HUFFMAN_ENV_1_5DB,
- F_HUFFMAN_ENV_1_5DB,
- T_HUFFMAN_ENV_BAL_1_5DB,
- F_HUFFMAN_ENV_BAL_1_5DB,
- T_HUFFMAN_ENV_3_0DB,
- F_HUFFMAN_ENV_3_0DB,
- T_HUFFMAN_ENV_BAL_3_0DB,
- F_HUFFMAN_ENV_BAL_3_0DB,
- T_HUFFMAN_NOISE_3_0DB,
- T_HUFFMAN_NOISE_BAL_3_0DB,
-};
-
-/**
- * bs_frame_class - frame class of current SBR frame (14496-3 sp04 p98)
- */
-enum {
- FIXFIX,
- FIXVAR,
- VARFIX,
- VARVAR,
-};
-
-enum {
- EXTENSION_ID_PS = 2,
-};
-
-static const int8_t vlc_sbr_lav[10] =
- { 60, 60, 24, 24, 31, 31, 12, 12, 31, 12 };
-
-#define SBR_INIT_VLC_STATIC(num, size) \
- INIT_VLC_STATIC(&vlc_sbr[num], 9, sbr_tmp[num].table_size / sbr_tmp[num].elem_size, \
- sbr_tmp[num].sbr_bits , 1, 1, \
- sbr_tmp[num].sbr_codes, sbr_tmp[num].elem_size, sbr_tmp[num].elem_size, \
- size)
-
-#define SBR_VLC_ROW(name) \
- { name ## _codes, name ## _bits, sizeof(name ## _codes), sizeof(name ## _codes[0]) }
-
-/** Initialize SBR. */
-void AAC_RENAME(ff_aac_sbr_init)(void);
-/** Initialize one SBR context. */
-int AAC_RENAME(ff_aac_sbr_ctx_init)(AACContext *ac, SpectralBandReplication *sbr, int id_aac);
-/** Close one SBR context. */
-void AAC_RENAME(ff_aac_sbr_ctx_close)(SpectralBandReplication *sbr);
-/** Decode one SBR element. */
-int AAC_RENAME(ff_decode_sbr_extension)(AACContext *ac, SpectralBandReplication *sbr,
- GetBitContext *gb, int crc, int cnt, int id_aac);
-/** Apply one SBR element to one AAC element. */
-void AAC_RENAME(ff_sbr_apply)(AACContext *ac, SpectralBandReplication *sbr, int id_aac,
- INTFLOAT* L, INTFLOAT *R);
-
-void ff_aacsbr_func_ptr_init_mips(AACSBRContext *c);
-
-#endif /* AVCODEC_AACSBR_H */
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/fastaudio.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/fastaudio.c
deleted file mode 100644
index f5569f5206db6adc96a879b078eb47957ce559da..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/fastaudio.c
+++ /dev/null
@@ -1,199 +0,0 @@
-/*
- * MOFLEX Fast Audio decoder
- * Copyright (c) 2015-2016 Florian Nouwt
- * Copyright (c) 2017 Adib Surani
- * Copyright (c) 2020 Paul B Mahol
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#include "avcodec.h"
-#include "bytestream.h"
-#include "codec_internal.h"
-#include "decode.h"
-
-typedef struct ChannelItems {
- float f[8];
- float last;
-} ChannelItems;
-
-typedef struct FastAudioContext {
- float table[8][64];
-
- ChannelItems *ch;
-} FastAudioContext;
-
-static av_cold int fastaudio_init(AVCodecContext *avctx)
-{
- FastAudioContext *s = avctx->priv_data;
-
- avctx->sample_fmt = AV_SAMPLE_FMT_FLTP;
-
- for (int i = 0; i < 8; i++)
- s->table[0][i] = (i - 159.5f) / 160.f;
- for (int i = 0; i < 11; i++)
- s->table[0][i + 8] = (i - 37.5f) / 40.f;
- for (int i = 0; i < 27; i++)
- s->table[0][i + 8 + 11] = (i - 13.f) / 20.f;
- for (int i = 0; i < 11; i++)
- s->table[0][i + 8 + 11 + 27] = (i + 27.5f) / 40.f;
- for (int i = 0; i < 7; i++)
- s->table[0][i + 8 + 11 + 27 + 11] = (i + 152.5f) / 160.f;
-
- memcpy(s->table[1], s->table[0], sizeof(s->table[0]));
-
- for (int i = 0; i < 7; i++)
- s->table[2][i] = (i - 33.5f) / 40.f;
- for (int i = 0; i < 25; i++)
- s->table[2][i + 7] = (i - 13.f) / 20.f;
-
- for (int i = 0; i < 32; i++)
- s->table[3][i] = -s->table[2][31 - i];
-
- for (int i = 0; i < 16; i++)
- s->table[4][i] = i * 0.22f / 3.f - 0.6f;
-
- for (int i = 0; i < 16; i++)
- s->table[5][i] = i * 0.20f / 3.f - 0.3f;
-
- for (int i = 0; i < 8; i++)
- s->table[6][i] = i * 0.36f / 3.f - 0.4f;
-
- for (int i = 0; i < 8; i++)
- s->table[7][i] = i * 0.34f / 3.f - 0.2f;
-
- s->ch = av_calloc(avctx->ch_layout.nb_channels, sizeof(*s->ch));
- if (!s->ch)
- return AVERROR(ENOMEM);
-
- return 0;
-}
-
-static int read_bits(int bits, int *ppos, unsigned *src)
-{
- int r, pos;
-
- pos = *ppos;
- pos += bits;
- r = src[(pos - 1) / 32] >> ((-pos) & 31);
- *ppos = pos;
-
- return r & ((1 << bits) - 1);
-}
-
-static const uint8_t bits[8] = { 6, 6, 5, 5, 4, 0, 3, 3, };
-
-static void set_sample(int i, int j, int v, float *result, int *pads, float value)
-{
- result[i * 64 + pads[i] + j * 3] = value * (2 * v - 7);
-}
-
-static int fastaudio_decode(AVCodecContext *avctx, AVFrame *frame,
- int *got_frame, AVPacket *pkt)
-{
- FastAudioContext *s = avctx->priv_data;
- GetByteContext gb;
- int subframes;
- int ret;
-
- subframes = pkt->size / (40 * avctx->ch_layout.nb_channels);
- frame->nb_samples = subframes * 256;
- if ((ret = ff_get_buffer(avctx, frame, 0)) < 0)
- return ret;
-
- bytestream2_init(&gb, pkt->data, pkt->size);
-
- for (int subframe = 0; subframe < subframes; subframe++) {
- for (int channel = 0; channel < avctx->ch_layout.nb_channels; channel++) {
- ChannelItems *ch = &s->ch[channel];
- float result[256] = { 0 };
- unsigned src[10];
- int inds[4], pads[4];
- float m[8];
- int pos = 0;
-
- for (int i = 0; i < 10; i++)
- src[i] = bytestream2_get_le32(&gb);
-
- for (int i = 0; i < 8; i++)
- m[7 - i] = s->table[i][read_bits(bits[i], &pos, src)];
-
- for (int i = 0; i < 4; i++)
- inds[3 - i] = read_bits(6, &pos, src);
-
- for (int i = 0; i < 4; i++)
- pads[3 - i] = read_bits(2, &pos, src);
-
- for (int i = 0, index5 = 0; i < 4; i++) {
- float value = av_int2float((inds[i] + 1) << 20) * powf(2.f, 116.f);
-
- for (int j = 0, tmp = 0; j < 21; j++) {
- set_sample(i, j, j == 20 ? tmp / 2 : read_bits(3, &pos, src), result, pads, value);
- if (j % 10 == 9)
- tmp = 4 * tmp + read_bits(2, &pos, src);
- if (j == 20)
- index5 = FFMIN(2 * index5 + tmp % 2, 63);
- }
-
- m[2] = s->table[5][index5];
- }
-
- for (int i = 0; i < 256; i++) {
- float x = result[i];
-
- for (int j = 0; j < 8; j++) {
- x -= m[j] * ch->f[j];
- ch->f[j] += m[j] * x;
- }
-
- memmove(&ch->f[0], &ch->f[1], sizeof(float) * 7);
- ch->f[7] = x;
- ch->last = x + ch->last * 0.86f;
- result[i] = ch->last * 2.f;
- }
-
- memcpy(frame->extended_data[channel] + 1024 * subframe, result, 256 * sizeof(float));
- }
- }
-
- *got_frame = 1;
-
- return pkt->size;
-}
-
-static av_cold int fastaudio_close(AVCodecContext *avctx)
-{
- FastAudioContext *s = avctx->priv_data;
-
- av_freep(&s->ch);
-
- return 0;
-}
-
-const FFCodec ff_fastaudio_decoder = {
- .p.name = "fastaudio",
- CODEC_LONG_NAME("MobiClip FastAudio"),
- .p.type = AVMEDIA_TYPE_AUDIO,
- .p.id = AV_CODEC_ID_FASTAUDIO,
- .priv_data_size = sizeof(FastAudioContext),
- .init = fastaudio_init,
- FF_CODEC_DECODE_CB(fastaudio_decode),
- .close = fastaudio_close,
- .p.capabilities = AV_CODEC_CAP_DR1,
- .p.sample_fmts = (const enum AVSampleFormat[]) { AV_SAMPLE_FMT_FLTP,
- AV_SAMPLE_FMT_NONE },
-};
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/g729_parser.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/g729_parser.c
deleted file mode 100644
index d51a78877d93d5ea135ac11efd4791887d1133f7..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/g729_parser.c
+++ /dev/null
@@ -1,92 +0,0 @@
-/*
- * Copyright (c) 2015 Ganesh Ajjanagadde
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-/**
- * @file
- * G.729 audio parser
- *
- * Splits packets into individual blocks.
- */
-
-#include "parser.h"
-#include "g729.h"
-
-typedef struct G729ParseContext {
- ParseContext pc;
- int block_size;
- int duration;
- int remaining;
-} G729ParseContext;
-
-static int g729_parse(AVCodecParserContext *s1, AVCodecContext *avctx,
- const uint8_t **poutbuf, int *poutbuf_size,
- const uint8_t *buf, int buf_size)
-{
- G729ParseContext *s = s1->priv_data;
- ParseContext *pc = &s->pc;
- int next;
-
- if (!s->block_size) {
- /* FIXME: replace this heuristic block_size with more precise estimate */
- s->block_size = (avctx->bit_rate < 8000) ? G729D_6K4_BLOCK_SIZE : G729_8K_BLOCK_SIZE;
- if (avctx->codec_id == AV_CODEC_ID_ACELP_KELVIN)
- s->block_size++;
- // channels > 2 is invalid, we pass the packet on unchanged
- if (avctx->ch_layout.nb_channels > 2)
- s->block_size = 0;
- s->block_size *= avctx->ch_layout.nb_channels;
- s->duration = avctx->frame_size;
- }
-
- if (!s->block_size) {
- *poutbuf = buf;
- *poutbuf_size = buf_size;
- return buf_size;
- }
-
- if (!s->remaining)
- s->remaining = s->block_size;
- if (s->remaining <= buf_size) {
- next = s->remaining;
- s->remaining = 0;
- } else {
- next = END_NOT_FOUND;
- s->remaining -= buf_size;
- }
-
- if (ff_combine_frame(pc, next, &buf, &buf_size) < 0 || !buf_size) {
- *poutbuf = NULL;
- *poutbuf_size = 0;
- return buf_size;
- }
-
- s1->duration = s->duration;
-
- *poutbuf = buf;
- *poutbuf_size = buf_size;
- return next;
-}
-
-const AVCodecParser ff_g729_parser = {
- .codec_ids = { AV_CODEC_ID_G729, AV_CODEC_ID_ACELP_KELVIN },
- .priv_data_size = sizeof(G729ParseContext),
- .parser_parse = g729_parse,
- .parser_close = ff_parse_close,
-};
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Beyluxe Messenger for Windows 10 Download Install and Chat.md b/spaces/congsaPfin/Manga-OCR/logs/Beyluxe Messenger for Windows 10 Download Install and Chat.md
deleted file mode 100644
index 69c9f52b25f731504f8cb68870d6181404a8e7d8..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Beyluxe Messenger for Windows 10 Download Install and Chat.md
+++ /dev/null
@@ -1,139 +0,0 @@
-
-
How to Download Beyluxe Messenger for Windows 10
-
If you are looking for a fun and reliable chat platform that connects you with people from all over the world, you might want to try Beyluxe Messenger. Beyluxe Messenger is a free internet voice and video chat program that offers you a lot of features and benefits. In this article, we will show you how to download and install Beyluxe Messenger for Windows 10, as well as how to check the compatibility of your device.
-
What is Beyluxe Messenger?
-
Beyluxe Messenger is the first chat platform created in Romania with an international user base. It was launched in 2007 and has since gained millions of users worldwide. It allows you to chat with your friends in text, voice, and video modes, as well as join various chat rooms based on different countries and categories. You can also create your own chat rooms and invite other users to join.
Some of the features that make Beyluxe Messenger stand out from other chat programs are:
-
-
- High-quality voice and video calls
- Multiple chat rooms with different themes and languages
- Private messaging and file sharing
- Customizable avatars and emoticons
- Games and trivia
- Radio and music streaming
- Online status and notifications
- Security and privacy settings
- An "advertise with us" option
-
-
Benefits of Beyluxe Messenger
-
Some of the benefits that you can enjoy by using Beyluxe Messenger are:
-
-
- It is free of charge and easy to use
- It has a friendly and interactive interface
- It supports multiple languages and cultures
- It helps you meet new people and make friends
- It provides you with entertainment and fun
- It enhances your communication skills and knowledge
- It gives you a platform to express yourself and share your opinions
-
-
How to Download and Install Beyluxe Messenger for Windows 10
-
If you are interested in trying out Beyluxe Messenger, here are the steps that you need to follow to download and install it on your Windows 10 device:
-
Step 1: Visit the official website of Beyluxe Messenger
-
The first thing that you need to do is to go to the official website of Beyluxe Messenger at https://messenger.beyluxe.com/. Here you can find more information about the program, as well as the download link.
-
Step 2: Click on the download button and save the file
-
The next thing that you need to do is to click on the download button on the homepage. This will take you to another page where you can choose between two versions of Beyluxe Messenger: version 0.5.7.3 (11.2 MB) or version 0.5.7.2 (11.1 MB). You can choose either one depending on your preference and device compatibility. After you click on the version that you want, you will see a pop-up window that asks you to save the file. Click on save and choose a location where you want to save the file.
-
-
Step 3: Run the setup file and follow the instructions
-
The third thing that you need to do is to run the setup file that you have downloaded. You can do this by double-clicking on the file or right-clicking on it and choosing run as administrator. This will start the installation process of Beyluxe Messenger. You will see a welcome screen that asks you to agree to the terms and conditions of the program. Click on agree and then click on next. You will then see a screen that asks you to choose a destination folder where you want to install the program. You can either keep the default folder or browse for another one. Click on next and then click on install. The installation process will take a few minutes, depending on your device speed and internet connection.
-
Step 4: Create a nickname and register your account
-
The fourth thing that you need to do is to create a nickname and register your account with Beyluxe Messenger. After the installation process is complete, you will see a screen that asks you to enter a nickname and a password. You can choose any nickname that you like, as long as it is not already taken by another user. You can also check the availability of your nickname by clicking on the check button. Once you have chosen a nickname, enter a password that is strong and secure. You can also enter an email address if you want to receive updates and notifications from Beyluxe Messenger. Click on register and then click on finish.
-
Step 5: Start chatting with Beyluxe Messenger
-
The fifth thing that you need to do is to start chatting with Beyluxe Messenger. You can do this by launching the program from your desktop or start menu. You will see a login screen that asks you to enter your nickname and password. Enter them and then click on login. You will then see the main interface of Beyluxe Messenger, where you can access various features and options. You can join chat rooms, send private messages, make voice and video calls, play games, listen to music, and more. You can also customize your profile, avatar, settings, and preferences according to your liking.
-
How to Check Windows 10 Compatibility for Beyluxe Messenger
-
If you are not sure whether your device is compatible with Windows 10 or Beyluxe Messenger, here are some ways that you can check:
-
System requirements for installing Windows 10
-
To install Windows 10 on your device, you need to meet the following minimum system requirements:
-
-
-
| Processor | Memory | Hard disk space | Graphics card | Display |
| --- | --- | --- | --- | --- |
| 1 gigahertz (GHz) or faster compatible processor or System on a Chip (SoC) | 1 gigabyte (GB) for 32-bit or 2 GB for 64-bit | 32 GB for 64-bit or 32-bit | DirectX 9 or later with WDDM 1.0 driver | 800x600 pixels or higher resolution |
-
-
-
You can check your device specifications by going to settings > system > about.
-
System requirements for running Beyluxe Messenger
-
To run Beyluxe Messenger on your device, you need to meet the following minimum system requirements:
-
-
-
| Requirement | Minimum |
| --- | --- |
| Operating system | Windows XP/Vista/7/8/10 (32-bit or 64-bit) |
| Processor | Pentium III 500 MHz or higher |
| Memory | 128 MB RAM or higher |
| Hard disk space | 50 MB free disk space or higher |
| Internet connection | Broadband or dial-up connection (the faster the better) |
| Sound card | Compatible with the Windows sound system |
| Webcam (optional) | Compatible with the Windows video system (for video calls) |
-
-
-
You can check your device specifications by going to settings > system > about.
-
Conclusion
-
Beyluxe Messenger is a great chat platform that allows you to communicate with people from different countries and cultures in text, voice, and video modes. It also offers you a lot of features and benefits that make your chat experience more enjoyable and rewarding. You can download and install Beyluxe Messenger for Windows 10 easily by following the steps that we have outlined in this article. You can also check the compatibility of your device with Windows 10 and Beyluxe Messenger by looking at the system requirements that we have provided. We hope that you have found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below.
-
FAQs
-
Here are some frequently asked questions that you might have about Beyluxe Messenger:
-
-
**Is Beyluxe Messenger safe and secure?**

Yes, Beyluxe Messenger is safe and secure to use. It uses encryption and authentication protocols to protect your data and privacy. It also allows you to block or report any abusive or inappropriate users or chat rooms.

**How can I update Beyluxe Messenger to the latest version?**

You can update Beyluxe Messenger to the latest version by visiting the official website of Beyluxe Messenger at https://messenger.beyluxe.com/ and downloading the latest version from there. You can also check for updates from within the program by going to help > check for updates.

**How can I contact the support team of Beyluxe Messenger?**

You can contact the support team of Beyluxe Messenger by sending an email to support@beyluxe.com. You can also visit the help section of the official website of Beyluxe Messenger at https://messenger.beyluxe.com/help/ for more information and guidance.

**How can I advertise with Beyluxe Messenger?**

You can advertise with Beyluxe Messenger by clicking on the "advertise with us" option on the main interface of the program. This will take you to another page where you can fill out a form with your details and requirements. You can also send an email to advertise@beyluxe.com for more information and assistance.

**How can I uninstall Beyluxe Messenger from my device?**

You can uninstall Beyluxe Messenger from your device by going to Settings > Apps > Apps & features > Beyluxe Messenger > Uninstall. You can also uninstall it by going to Control Panel > Programs > Programs and Features > Beyluxe Messenger > Uninstall.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Cat Go Fishing A Fun and Challenging Game for Cats of All Ages.md b/spaces/congsaPfin/Manga-OCR/logs/Cat Go Fishing A Fun and Challenging Game for Cats of All Ages.md
deleted file mode 100644
index 6ff24a3633ee9f1e52e33831d86c1361c18dc09f..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Cat Go Fishing A Fun and Challenging Game for Cats of All Ages.md
+++ /dev/null
@@ -1,100 +0,0 @@
-
-
Cat Goes Fishing Download Free: A Fun and Relaxing Game for Cat Lovers
-
If you are looking for a casual and cute game that can help you unwind and have fun, you might want to check out Cat Goes Fishing. This game is all about a cat who loves fishing and wants to catch as many fish as possible. In this article, we will tell you what this game is about, how to download it for free, and some tips and tricks to make the most of it.
Cat Goes Fishing is a game developed by Cat5Games, an indie studio based in Australia. The game was released in 2015 and has since gained a lot of popularity among gamers and cat lovers alike. The game allows you to control a cat who has decided to take up fishing as a hobby. You can choose from different modes, such as Relax, Realism, or Quests, depending on your preference and skill level. The game has simple yet engaging gameplay and a cute graphics style that will make you fall in love with the cat and the fish.
-
The gameplay of Cat Goes Fishing
-
The gameplay of Cat Goes Fishing is very easy to learn and play. You just need to use your mouse to cast your line, reel in the fish, and sell them for money. You can also use the keyboard to move your boat or change your rod. The game has a realistic physics system that makes the fishing experience more immersive and challenging. You will encounter different types of fish, each with their own behavior and value. Some fish are easy to catch, while others are more elusive and require more strategy. You will also have to deal with predators, such as sharks or barracudas, that can steal or eat your fish.
-
The features of Cat Goes Fishing
-
Cat Goes Fishing has many features that make it a fun and relaxing game to play. Some of these features are:
-
-
- Complete quests to earn cash. You can accept quests from the menu or from the radio on your boat. Quests can range from catching a certain number of fish, catching a specific fish, or catching a fish with a certain bait.
- Unlock better rods to catch more valuable fish. You can buy new rods from the shop or find them in treasure chests. Each rod has different attributes, such as length, strength, sensitivity, and lure quality.
- Customize your rod with upgrades to suit your play style. You can buy upgrades from the shop or find them in treasure chests. Upgrades can include hooks, bobbers, sinkers, lures, lines, reels, and more.
- Unlock boats to travel out onto the sea. You can buy boats from the shop or find them in treasure chests. Boats can help you access deeper waters where more rare and exotic fish live.
- Find and equip rare hats that dramatically change the dynamic of the game. Hats are hidden throughout the game world and can give you special effects, such as attracting more fish, repelling predators, increasing your money, or changing the weather.
- Fill your catalog with the fish you catch. You can view your catalog from the menu or from the book on your boat. The catalog shows you information about each fish, such as their name, value, weight, length, rarity, habitat, and description.
-
-
How to download Cat Goes Fishing for free?
-
If you are interested in playing Cat Goes Fishing, you might be wondering how to download it for free. There are two main ways to do this:
-
-
Download from Steam
-
The official way to download Cat Goes Fishing is from Steam, an online platform that offers games and software for Windows, Mac, and Linux. However, the game is not free on Steam and costs $6.99. If you want to download it for free, you will need to use a Steam key generator, a tool that can generate valid codes for Steam games. However, this method is not recommended, as it is illegal and risky. You might end up downloading malware or getting banned from Steam. Therefore, use this method at your own risk and discretion.
-
Download from other websites
-
The alternative way to download Cat Goes Fishing for free is from other websites that offer free downloads of games. There are many websites that claim to provide Cat Goes Fishing for free, such as GameTop, Ocean of Games, or Softonic. However, these websites are also not reliable and safe. They might contain viruses, spyware, adware, or other malicious software that can harm your computer or steal your personal information. They might also have outdated or incomplete versions of the game that do not work properly. Therefore, use these websites at your own risk and discretion as well.
-
Tips and tricks for playing Cat Goes Fishing
-
Now that you know how to download Cat Goes Fishing for free, you might want to know some tips and tricks to play the game better and have more fun. Here are some of them:
-
Upgrade your rod and boat
-
One of the most important things to do in Cat Goes Fishing is to upgrade your rod and boat as soon as possible. A better rod will allow you to catch bigger and more valuable fish, while a better boat will allow you to explore deeper and farther waters where rarer and more exotic fish live. You can buy new rods and boats from the shop using the money you earn from selling fish, or find them in treasure chests scattered around the sea. However, be careful not to spend all your money on upgrades, as you will also need some for bait and other supplies.
-
Catch rare and exotic fish
-
Another thing to do in Cat Goes Fishing is to catch rare and exotic fish that are worth a lot of money. These fish are usually found in deeper waters or in specific locations. Some of them are also very hard to catch, as they have special abilities or behaviors that make them more elusive or aggressive. For example, some fish can swim very fast, some can jump out of the water, some can camouflage themselves, some can electrocute you, some can explode, and some can even eat other fish or your bait. To catch these fish, you will need to use the right bait, rod, and strategy. You can also use the radio on your boat to get hints about where to find these fish or what bait to use.
-
Wear hats to get special effects
-
A fun and unique feature of Cat Goes Fishing is that you can wear hats that give you special effects. Hats are hidden throughout the game world and can be found by exploring the sea or completing quests. Each hat has a different effect that can help you in various ways. For example, some hats can attract more fish, some hats can repel predators, some hats can increase your money, some hats can change the weather, and some hats can even transform your cat into a different animal. You can switch between hats by using the menu or by pressing the H key on your keyboard.
-
Conclusion
-
Cat Goes Fishing is a fun and relaxing game that can appeal to anyone who loves cats or fishing. The game has a simple yet engaging gameplay, a cute graphics style, and many features that make it enjoyable and replayable. You can download Cat Goes Fishing for free from Steam or other websites, but be careful of the risks involved. You can also follow some tips and tricks to play the game better and have more fun.
-
Why you should try Cat Goes Fishing
-
If you are looking for a game that can help you unwind and have fun, you should try Cat Goes Fishing. It is perfect for cat lovers who want to see their furry friends enjoy fishing, for fishing enthusiasts who want a realistic and challenging fishing simulation, and for casual gamers who want a game that does not demand too much time or effort. Cat Goes Fishing can suit anyone's taste and mood.
-
FAQs
-
-
Q: How long is Cat Goes Fishing?
-
A: Cat Goes Fishing does not have a fixed length or an end goal. You can play it as long as you want and set your own goals.
-
Q: Is Cat Goes Fishing multiplayer?
-
A: No, Cat Goes Fishing is a single-player game. You can only control one cat at a time.
-
Q: Can I customize my cat in Cat Goes Fishing?
-
A: Yes, you can customize your cat by changing its color, fur pattern, eye color, and name. You can also wear different hats to change its appearance and abilities.
-
Q: What are the system requirements for Cat Goes Fishing?
-
A: The minimum system requirements for Cat Goes Fishing are: Windows XP or later, 1 GB of RAM, 15 MB of available disk space, and DirectX 9.0c compatible graphics card. The recommended system requirements are: Windows 7 or later, 2 GB of RAM, 15 MB of available disk space, and DirectX 9.0c compatible graphics card with 256 MB of VRAM.
-
Q: Where can I get more information about Cat Goes Fishing?
-
A: You can get more information about Cat Goes Fishing from the official website, the Steam page, or the Facebook page. You can also watch gameplay videos on YouTube or read reviews on Steam or other websites.
-
Steam page: https://store.steampowered.com/app/343780/Cat_Goes_Fishing/
Official website: http://www.cat5games.com/
Facebook page: https://www.facebook.com/CatGoesFishing/
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download NBA 2K19 PPSSPP Iso File and Relive the Greatest Moments of the Season.md b/spaces/congsaPfin/Manga-OCR/logs/Download NBA 2K19 PPSSPP Iso File and Relive the Greatest Moments of the Season.md
deleted file mode 100644
index 19093c80c2c32641d3312b118978a55acde896c0..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Download NBA 2K19 PPSSPP Iso File and Relive the Greatest Moments of the Season.md
+++ /dev/null
@@ -1,119 +0,0 @@
-
-
How to Download NBA for PPSSPP: A Guide for Basketball Fans
-
If you are a basketball fan and you want to enjoy some of the best NBA games on your mobile device, then you should try using PPSSPP. PPSSPP is a PSP emulator that lets you play PSP games on various platforms, including Android, Windows, Linux, iOS, and MacOSX. In this article, we will show you how to download and install NBA games for PPSSPP, as well as some tips and tricks to enhance your gaming experience.
-
What is PPSSPP and why you should use it
-
PPSSPP is an open-source project that aims to provide a high-quality PSP emulation for various devices. It was created by Henrik Rydgård, one of the co-founders of Dolphin, the popular GameCube and Wii emulator. PPSSPP has many features that make it a great choice for playing PSP games, such as:
PPSSPP is a PSP emulator for Android, Windows, Linux, iOS, and MacOSX
-
PPSSPP supports a wide range of platforms, so you can play PSP games on almost any device you own. You can download the PPSSPP app from the official website or Google Play Store for free. You can also buy the PPSSPP Gold version to support the development of the project and get some extra features.
-
PPSSPP lets you play PSP games in HD resolution and with various enhancements
-
One of the main advantages of using PPSSPP is that it allows you to play PSP games in HD resolution or even higher. You can also upscale textures, enable post-processing shaders, adjust color and brightness, and apply other effects to improve the graphics quality. Moreover, you can customize the on-screen touch controls or use an external controller or keyboard for better control.
-
-
What are the best NBA games for PPSSPP
-
There are many NBA games that you can play on PPSSPP, but some of them stand out among the rest. Here are some of the best NBA games for PPSSPP that you should try:
-
NBA 2K22 PSP is the latest instalment in the NBA 2K franchise
-
NBA 2K22 PSP is a basketball simulation video game developed by Visual Concepts and published by 2K Sports. It is the 23rd instalment in the NBA 2K franchise and was released on September 7, 2021, for Microsoft Windows, PlayStation 4, Xbox One, Nintendo Switch, and Google Stadia. The game features realistic graphics, gameplay, modes, and rosters based on the 2021-22 NBA season. You can create your own custom player, join a team, compete in various online and offline modes, and experience the thrill of NBA basketball. You can download the ISO file of NBA 2K22 PSP from [here].
-
NBA Live Mobile Basketball is a free-to-play game with realistic graphics and gameplay
-
NBA Live Mobile Basketball is a mobile game developed by EA Sports and released in 2016 for Android and iOS devices. It is a free-to-play game that lets you build your own NBA team, play in season mode, live events, head-to-head matches, and special campaigns. The game features realistic graphics, gameplay, and animations, as well as licensed NBA players, teams, and arenas. You can also customize your players, jerseys, courts, and coaches. You can download the NBA Live Mobile Basketball app from Google Play Store or App Store for free.
-
NBA Jam is a classic arcade-style game with over-the-top action and humor
-
NBA Jam is a basketball video game series that started in 1993 and has been released for various platforms over the years. The latest version of NBA Jam was released in 2010 for PlayStation 3, Xbox 360, Wii, iOS, and Android devices. NBA Jam is an arcade-style game that features two-on-two matches with exaggerated physics, dunks, and moves. The game also has a humorous tone, with witty commentary, catchphrases, and easter eggs. You can play as NBA legends, current stars, celebrities, mascots, and even politicians. You can download the ISO file of NBA Jam for PPSSPP from [here].
-
How to download and install NBA games for PPSSPP
-
Now that you know some of the best NBA games for PPSSPP, you might be wondering how to download and install them on your device. The process is quite simple and straightforward. Here are the steps you need to follow:
-
Download the PPSSPP app from the official website or Google Play Store
-
The first thing you need to do is to download the PPSSPP app on your device. You can get it from the official website or Google Play Store for free. Alternatively, you can buy the PPSSPP Gold version to support the development of the project and get some extra features. Once you have downloaded the app, install it on your device.
-
Download the ISO file of the NBA game you want to play from a trusted source
-
The next thing you need to do is to download the ISO file of the NBA game you want to play on PPSSPP. An ISO file is a disc image file that contains all the data of a PSP game. You can find many websites that offer ISO files of PSP games for free, but be careful not to download from untrusted sources that might contain viruses or malware. You can use the links we provided above to download some of the best NBA games for PPSSPP.
-
Copy the ISO file to your device's storage or SD card
-
After you have downloaded the ISO file of your chosen NBA game, you need to copy it to your device's storage or SD card. You can use a USB cable or a file manager app to transfer the file from your computer or another device to your device. Make sure you remember the location where you saved the file.
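If your device is connected to a computer with USB debugging enabled, adb can do the copy for you. The sketch below is illustrative only: the file name nba2k22.iso and the /sdcard/PSP/GAME destination folder are assumptions, since PPSSPP can load an ISO from any readable location.
```
# Check that the device is detected (requires USB debugging to be enabled).
adb devices

# Copy the ISO to the device; file name and destination folder are assumed here.
adb push nba2k22.iso /sdcard/PSP/GAME/nba2k22.iso
```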
-
Launch the PPSSPP app and browse to the ISO file location
-
Now you are ready to play your NBA game on PPSSPP. Launch the PPSSPP app on your device and tap on the "Games" tab. Then browse to the location where you copied the ISO file and tap on it. The game icon will appear on the screen.
-
Tap on the game icon and enjoy playing NBA on your device
-
The final step is to tap on the game icon and start playing NBA on your device. You can adjust the settings to optimize performance and graphics quality, use an external controller or keyboard for better control and comfort, save and restore game state anytime you want, join online multiplayer modes and compete with other players, and have fun playing NBA on PPSSPP.
-
Tips and tricks for playing NBA games on PPSSPP
-
To make your gaming experience even better, here are some tips and tricks for playing NBA games on PPSSPP:
-
Adjust the settings to optimize performance and graphics quality
-
One of the best things about PPSSPP is that it lets you customize various settings to suit your preferences and device capabilities. You can access the settings menu by tapping on the "Settings" tab on the main screen of PPSSPP. Here are some of the settings you can tweak:
-
-
Graphics: You can change the rendering mode, resolution, frame rate, texture scaling, texture filtering, and other options to improve the graphics quality and performance of the game. You can also enable post-processing shaders, adjust color and brightness, and apply other effects to enhance the visuals.
-
Audio: You can enable or disable the sound effects and music of the game, as well as adjust the volume and latency.
-
Controls: You can customize the on-screen touch controls or use an external controller or keyboard for better control and comfort. You can also map different buttons and gestures to different functions.
-
System: You can change the language, region, time zone, and other settings of the PSP emulator. You can also enable cheats, fast forward, rewind, and other features to modify the game.
-
-
You can experiment with different settings to find the best combination for your device and game. You can also save and load different settings profiles for different games.
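Under the hood, PPSSPP persists these options to a plain-text ppsspp.ini file, so a settings profile is essentially a saved copy of that file. The fragment below is only a sketch of what such a file can look like: the section and key names are recalled from memory and may differ between PPSSPP versions, so treat them as assumptions and compare against the ini file your own installation generates.
```
; Illustrative ppsspp.ini fragment for a graphics profile.
; Section and key names are assumptions; verify against your own file.
[Graphics]
InternalResolution = 2   ; render at 2x the native PSP resolution
FrameSkip = 0            ; disable frame skipping
TexScalingLevel = 2      ; upscale textures 2x
```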
-
Use an external controller or keyboard for better control and comfort
-
While PPSSPP has a decent on-screen touch control system, you might find it more comfortable and convenient to use an external controller or keyboard for playing NBA games on PPSSPP. PPSSPP supports various types of controllers and keyboards, such as Bluetooth, USB, or wireless ones. You can connect your controller or keyboard to your device and map the buttons to the PSP buttons in the PPSSPP settings menu. This way, you can have a more immersive and realistic gaming experience.
-
Save and restore game state anytime you want
-
One of the coolest features of PPSSPP is that it lets you save and restore game state anytime you want. This means that you can pause the game at any point and resume it later from where you left off. You don't have to worry about losing your progress or starting over from the beginning. You can access this feature by tapping on the "Pause" button on the top right corner of the screen. Then you can tap on "Save State" or "Load State" to save or load your game state. You can also use the "Quick Save" or "Quick Load" buttons to do it faster. You can have up to 10 different save states for each game.
-
Join online multiplayer modes and compete with other players
-
If you want to challenge yourself and test your skills against other players, you can join online multiplayer modes and compete with other players using PPSSPP. PPSSPP supports online multiplayer modes for some games, such as NBA 2K22 PSP and NBA Jam. You can use either Wi-Fi or mobile data to connect to other players. You can also use a VPN service to bypass any regional restrictions or network issues. To join online multiplayer modes, you need to follow these steps:
-
-
Launch the PPSSPP app and tap on the "Settings" tab.
-
Tap on "Networking" and enable "Enable networking/WLAN".
-
Enter a valid MAC address or generate a random one.
-
Enter a valid IP address or hostname of the server you want to join or create.
-
Launch the game and go to the online multiplayer mode option.
-
Select or create a room and join or invite other players.
-
Enjoy playing NBA with other players online.
-
-
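As with the graphics options, the networking settings are stored in ppsspp.ini, which makes it easy to copy a working ad hoc setup between devices. The fragment below is a sketch only: the section name, the key names, and the server hostname are assumptions based on commonly cited PPSSPP defaults and may not match your version, so verify them against your own ini file.
```
; Illustrative ppsspp.ini fragment for ad hoc multiplayer.
; Names and values are assumptions; check your own ini file.
[Network]
EnableWLAN = True
MacAddress = 01:02:03:04:05:06          ; any MAC unique among the players works
proAdhocServer = myneighborsushicat.com ; a commonly cited public ad hoc server
```
-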
Conclusion
-
In conclusion, PPSSPP is a great PSP emulator that lets you play NBA games on your device with ease. You can download and install NBA games for PPSSPP from trusted sources, adjust the settings to optimize performance and graphics quality, use an external controller or keyboard for better control and comfort, save and restore game state anytime you want, join online multiplayer modes and compete with other players, and have fun playing NBA on PPSSPP. We hope this guide helped you learn how to download NBA for PPSSPP and enjoy some of the best basketball games ever made.
-
FAQs
-
Here are some of the frequently asked questions about downloading NBA for PPSSPP:
-
Q: Is PPSSPP legal?
-
A: Yes, PPSSPP is legal as long as you own the original PSP games that you want to play on it. You should not download or distribute pirated ISO files of PSP games that you do not own.
-
Q: Is PPSSPP safe?
-
A: Yes, PPSSPP is safe as long as you download it from the official website or Google Play Store. You should also scan any ISO files of PSP games that you download from other sources with an antivirus software before playing them on PPSSPP.
-
Q: How much space do I need to download NBA games for PPSSPP?
-
A: The space required to download NBA games for PPSSPP depends on the size of the ISO file of the game. Generally, NBA games for PPSSPP range from 500 MB to 2 GB in size. You should have enough free space on your device's storage or SD card to download and install the game.
-
Q: Can I play NBA games for PPSSPP offline?
-
A: Yes, you can play NBA games for PPSSPP offline as long as you have the ISO file of the game on your device. You do not need an internet connection to play the game, unless you want to join online multiplayer modes or update the game.
-
Q: Can I play NBA games for PPSSPP with my friends?
-
A: Yes, you can play NBA games for PPSSPP with your friends either online or locally. To play online, you need to join or create a server and invite your friends to join. To play locally, you need to connect your devices to the same Wi-Fi network and enable the ad hoc mode in the PPSSPP settings menu.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Free Cooking Game for Kids Toca Kitchen 2 APK Download.md b/spaces/congsaPfin/Manga-OCR/logs/Free Cooking Game for Kids Toca Kitchen 2 APK Download.md
deleted file mode 100644
index 6fc5576eb6e576b01b927cde96eb3c4cd5cf47fd..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Free Cooking Game for Kids Toca Kitchen 2 APK Download.md
+++ /dev/null
@@ -1,79 +0,0 @@
-
-
How to Download and Play Toca Kitchen 2 on Your Android Device
-
Do you love cooking and playing with food? Do you want to unleash your creativity and imagination in the kitchen? Do you want to have fun with your favorite characters and feed them whatever you want? If you answered yes to any of these questions, then you will love Toca Kitchen 2, a free cooking app for kids that lets you cook however you want!
In this article, we will show you what Toca Kitchen 2 is, how to download it on your Android device, how to play it, and some tips and tricks to make the most out of it. Let's get started!
-
What is Toca Kitchen 2?
-
Toca Kitchen 2 is a sequel to the wildly popular Toca Kitchen, a game that invites all chefs to get messy and start playing. It is developed by Toca Boca, a company that believes in the power of play to spark kids' imaginations and help them learn about the world. Here are some of the features of Toca Kitchen 2:
-
A fun cooking app for kids
-
Toca Kitchen 2 is a cooking app for kids that's perfect for aspiring chefs. Kids can roleplay a chef in a kitchen full of tasty foods and ingredients. They can learn how to cook their favorite dish or experiment with fun combinations to feed hungry customers. They can also explore different cuisines, cultures, and flavors through food.
-
-
A creative and open-ended game
-
Toca Kitchen 2 is a game that has no rules or stress, just open-ended, kid-directed fun. Kids can cook however they want, using six different kitchen tools and over 20 ingredients. They can juice tomatoes, boil the salad, make a burger, or come up with their own recipes. They can also make a mess, add a squeeze of weirdness, and finish off with a pinch of salt.
-
A free and safe app from Toca Boca
-
Toca Kitchen 2 is a free app that can be downloaded from Google Play or APKCombo.com. It has no third-party advertising or in-app purchases, so kids can play without interruptions or worries. It also has no time limits or levels, so kids can play at their own pace and style.
-
How to Download Toca Kitchen 2 APK
-
If you want to download Toca Kitchen 2 on your Android device, you can follow these simple steps:
-
Step 1: Go to APKCombo.com
-
APKCombo.com is a website that offers free APK downloads for Android games and apps. You can access it from your browser or download its app from Google Play.
-
Step 2: Search for Toca Kitchen 2
-
Once you are on APKCombo.com, you can search for Toca Kitchen 2 using the search bar or browse through the categories. You will see a list of results with different versions and sizes of the game.
-
Step 3: Choose the latest version and download the APK
-
Once you find Toca Kitchen 2, choose the latest version of the game and tap the download button. When the APK file has finished downloading, open it and follow the on-screen prompts to install it; you may need to allow installation from unknown sources in your device's settings.
-
Conclusion
-
Toca Kitchen 2 is a cooking game with no rules or stress, just open-ended, kid-directed fun. It is a free and safe app from Toca Boca with no third-party advertising or in-app purchases, and it is easy to download and play on your Android device using APKCombo.com, a website that offers free APK downloads for Android games and apps. The game encourages creativity and experimentation: you can use different ingredients, kitchen tools, and techniques to cook whatever you want, then watch your guests' reactions and preferences as they taste your food. You can make them happy, angry, disgusted, or surprised by feeding them different foods, and have fun with their sounds and expressions. Toca Kitchen 2 is perfect for aspiring chefs, food lovers, and anyone who likes to play with food: it will spark your imagination, help you learn about the world through food, and make you laugh, smile, and have fun. We hope you enjoyed this article and learned how to download and play Toca Kitchen 2 on your Android device. If you have any questions or feedback, please let us know in the comments below. Happy cooking!
-
FAQs
-
Here are some of the frequently asked questions about Toca Kitchen 2:
-
Q: Is Toca Kitchen 2 suitable for all ages?
-
A: Yes, Toca Kitchen 2 is suitable for all ages. It is designed for kids aged 6 to 12, but anyone can enjoy it. It is a game that is fun, educational, and family-friendly.
-
Q: What are the differences between Toca Kitchen and Toca Kitchen 2?
-
A: Toca Kitchen 2 is a sequel to Toca Kitchen, but it has some new features and improvements. Some of the differences are:
-
-
Toca Kitchen 2 has four guests instead of three.
-
Toca Kitchen 2 has more ingredients and kitchen tools to choose from.
-
Toca Kitchen 2 has more realistic graphics and animations.
-
Toca Kitchen 2 has more funny and varied reactions from the guests.
-
-
Q: Can I play Toca Kitchen 2 offline?
-
A: Yes, you can play Toca Kitchen 2 offline. You don't need an internet connection to play the game once you have downloaded it on your device.
-
Q: Can I share my creations with others?
-
A: Yes, you can share your creations with others. You can take screenshots of your food and your guests' reactions and share them with your friends and family via social media or messaging apps.
-
Q: How can I contact Toca Boca for support or feedback?
-
A: You can contact Toca Boca for support or feedback by visiting their website at https://tocaboca.com/ or by emailing them at support@tocaboca.com.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/GBWhatsApp The Best WhatsApp Mod for Android in 2020.md b/spaces/congsaPfin/Manga-OCR/logs/GBWhatsApp The Best WhatsApp Mod for Android in 2020.md
deleted file mode 100644
index c98c5c1378370707b42c08b835b230183754998b..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/GBWhatsApp The Best WhatsApp Mod for Android in 2020.md
+++ /dev/null
@@ -1,108 +0,0 @@
-
-
GB WhatsApp Download in 2020: Features, Risks, and Alternatives
-
WhatsApp is one of the most popular messaging apps in the world, with over 2 billion users as of 2020. However, some people are not satisfied with the features and limitations of the official app, and look for modified versions that offer more options and customizations. One of these versions is GB WhatsApp, a clone app of WhatsApp that claims to provide enhanced functionality and privacy.
But what is GB WhatsApp exactly, and how does it work? Is it safe to use, or does it pose any risks to your device and data? And are there any better alternatives to GB WhatsApp that you can try instead? In this article, we will answer these questions and more, and help you decide whether GB WhatsApp is worth downloading or not.
-
What is GB WhatsApp?
-
GB WhatsApp is a modified version of WhatsApp that offers some extra features, but also carries some risks. It is developed by third-party developers who are not affiliated with WhatsApp Inc, the official owner of the app. It is not available on Google Play Store or Apple App Store, and can only be downloaded from unofficial websites as an APK file.
-
Some of the features that GB WhatsApp offers include:
-
-
Using two WhatsApp accounts on the same device
-
Hiding online status, last seen, blue ticks, double ticks, and typing notifications
-
Setting messages to self-destruct after a certain time
-
Sending larger files and more images at once
-
Customizing themes, fonts, icons, and notifications
-
Adding more emojis and stickers
-
Auto-replying to messages
-
Scheduling messages
-
Sending high-resolution images without compression
-
Viewing deleted messages and statuses
-
Creating longer group names and statuses
-
Supporting multiple languages
-
-
What are the risks of using GB WhatsApp?
-
While GB WhatsApp may sound tempting with its extra features, it also comes with some drawbacks and dangers that you should be aware of before downloading it. Some of the risks include:
-
-
-
It is unsafe and prone to viruses, as it does not have an official license or a trusted source. You may end up downloading a malicious file that can harm your device or steal your data.
-
It does not guarantee end-to-end encryption, user security, or data privacy, which may compromise your personal information and chats. Your messages may be intercepted or accessed by third parties without your consent.
-
It has slower updates and annoying ads, which may affect your user experience and performance. You may miss out on the latest features and bug fixes from the official app, and have to deal with intrusive ads that can drain your battery or data.
-
You risk getting banned by WhatsApp, as it violates the terms of service of the original app. WhatsApp may detect your use of GB WhatsApp and temporarily or permanently suspend your account.
-
-
What are the alternatives to GB WhatsApp?
-
If you are looking for a messaging app that offers similar or better functionality and security than GB WhatsApp, you have plenty of options to choose from. Some of the best alternatives are:
-
Signal
-
Signal is one of the most secure and privacy-focused messaging apps available. It uses end-to-end encryption by default for all messages, calls, and media. It also lets you set messages to disappear after a certain time, verify contacts with safety numbers, lock the app with a passcode or biometrics, blur faces in photos, and more. Signal is free, open-source, and endorsed by experts like Edward Snowden and Elon Musk.
-
Telegram
-
Telegram is another popular and feature-rich messaging app that offers end-to-end encryption for secret chats, self-destructing messages, cloud storage, group chats with up to 200,000 members, channels, bots, stickers, animated emojis, and more. Telegram is free, fast, and secure, and has over 500 million users worldwide.
-
WhatsApp Plus
-
WhatsApp Plus is a modified version of WhatsApp that is similar to GB WhatsApp, but with some differences. It offers more themes, fonts, icons, and wallpapers than GB WhatsApp, and allows you to hide your online status and blue ticks without affecting your ability to see others'. It also lets you send uncompressed images and videos, and download statuses. However, it also has the same risks as GB WhatsApp, such as being unsafe, unencrypted, outdated, and banned.
-
Conclusion
-
GB WhatsApp is a clone app of WhatsApp that offers some extra features, but also carries some risks. It is not an official or licensed app, and it may compromise your device and data security. It may also get you banned by WhatsApp for violating its terms of service. If you want to use a messaging app that offers more functionality and security than the official WhatsApp app, you may want to consider alternatives like Signal, Telegram, or WhatsApp Plus. However, always be careful when downloading any app from unofficial sources, and make sure you backup your data before switching apps.
-
FAQs
-
Is GB WhatsApp legal?
-
No, GB WhatsApp is not legal. It is a modified version of WhatsApp that violates its terms of service and intellectual property rights. It is also not licensed or authorized by WhatsApp Inc.
-
Can GB WhatsApp see my messages?
-
Possibly. GB WhatsApp does not guarantee end-to-end encryption for your messages, which means they may be intercepted or accessed by third parties without your consent. GB WhatsApp may also collect your personal information and data for its own purposes.
-
How can I update GB WhatsApp?
-
You can update GB WhatsApp by visiting its official website or any other website that hosts its APK file. However, be careful when downloading any file from unofficial sources, as they may contain viruses or malware. You may also miss out on the latest updates and features from the official WhatsApp app.
-
How can I backup my GB WhatsApp chats?
-
You can backup your GB WhatsApp chats by using the built-in backup feature in the app settings. However, this backup may not be compatible with the official WhatsApp app or other modified versions. You may also lose your backup if you uninstall GB WhatsApp or switch devices.
-
How can I switch from GB WhatsApp to the official WhatsApp app?
-
You can switch from GB WhatsApp to the official WhatsApp app by following these steps:
-
-
Backup your GB WhatsApp chats using the app settings.
-
Uninstall GB WhatsApp from your device.
-
Download and install the official WhatsApp app from Google Play Store or Apple App Store.
-
Verify your phone number and restore your backup if possible.
-
Enjoy using the official WhatsApp app with its features and security.
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/PF Fusion Sans Pro Black - A Free Font that is Suitable for Long Headlines.md b/spaces/congsaPfin/Manga-OCR/logs/PF Fusion Sans Pro Black - A Free Font that is Suitable for Long Headlines.md
deleted file mode 100644
index 216f09e2218e0e2bb1937e5fa0c73039ae70a6b9..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/PF Fusion Sans Pro Black - A Free Font that is Suitable for Long Headlines.md
+++ /dev/null
@@ -1,107 +0,0 @@
-
-
Download Free Font PF Fusion Sans Pro Black
-
If you are looking for a font that is unique, versatile, and high-quality, you should check out PF Fusion Sans Pro Black. This is a condensed grotesk typeface that takes inspiration from early German designs of the mid-19th century. It has a distinctive style that combines roman and gothic elements, making it suitable for various projects. In this article, we will tell you more about this font, why you should download it, how you can download it, and how you can use it.
PF Fusion Sans Pro Black is one of the four weights of the PF Fusion Sans font family, designed by Panos Vassiliou. This font family is a modern interpretation of the classic grotesk typefaces that were popular in Germany in the 1800s. Here are some of the features of this font:
-
A condensed grotesk typeface inspired by early German designs
-
This font has a narrow and tall structure that gives it a compact and elegant look. It also has some features that are common to roman typefaces, such as the lowercase 'a' and the two-storey 'g'. However, it also has some features that are borrowed from earlier gothics, such as the lowercase 't' that was first seen on a typeface that was developed by Paul Rand for Westinghouse in 1960. The contrast between these elements creates a unique and dynamic style that stands out from other grotesk fonts.
-
A tall family of four weights suitable for long headlines
-
This font family comes in four weights: light, medium, heavy, and black. Each weight has its own character and expression, but they all share the same height and proportions. This makes them ideal for long headlines that need to catch the attention of the readers. The black weight is the boldest and most striking one, perfect for creating a strong visual impact.
-
-
A pro version with support for all European languages and special OpenType features
-
The original version of this font was released in 2006, but it was updated in 2019 to include more features and improvements. The new pro version supports all European languages, including Greek and Cyrillic. It also comes loaded with 19 special OpenType features, such as ligatures, alternates, fractions, ordinals, etc. These features allow you to customize and enhance your typography according to your needs and preferences.
-
Why should you download PF Fusion Sans Pro Black?
-
There are many reasons why you should download this font. Here are some of them:
-
It has a unique and versatile style that combines roman and gothic elements
-
This font has a style that is unlike any other grotesk font. It mixes roman and gothic elements in a harmonious and balanced way, creating a distinctive and original look. It can be used for different purposes and contexts, such as editorial design, branding, advertising, etc. It can also be paired with other fonts to create interesting contrasts and combinations.
-
It has a high-quality design and performance that is suitable for various projects
-
This font has a high-quality design and performance that is suitable for various projects. It has a clean and crisp appearance that works well in both print and digital media. It has good legibility and readability, even at small sizes. It also has a wide range of glyphs and characters that cover all the needs of modern typography.
-
It is free for personal use and easy to download and install
-
This font is free for personal use, which means you can use it for your own projects without paying any fees or royalties. You can also download it easily from the official website of the designer Panos Vassiliou. You just need to choose the font family and the weight you want to download, and click on the download button. You will receive a zip file that contains the font files and a license agreement. You can then install the font on your computer and start using it.
-
How can you download PF Fusion Sans Pro Black?
-
If you want to download this font, you need to follow these steps:
-
Visit the official website of the designer Panos Vassiliou
-
The first step is to visit the official website of the designer Panos Vassiliou, who is the founder and director of Parachute, a type foundry based in Greece. You can find his website at https://www.panosvassiliou.com/. There you can see his portfolio of fonts and other projects.
-
Choose the font family and the weight you want to download
-
The second step is to choose the font family and the weight you want to download. You can find PF Fusion Sans under the category of Sans Serif fonts. You can also use the search bar to find it faster. Once you click on the font family, you will see a page with more information and samples of the font. You can also test the font online with your own text. To download PF Fusion Sans Pro Black, you need to select it from the dropdown menu of weights.
-
Click on the download button and follow the instructions
-
The third step is to click on the download button and follow the instructions. You will be asked to enter your name and email address, and agree to the terms and conditions of use. After that, you will receive an email with a link to download the font. You will get a zip file that contains the font files in OTF format, as well as a license agreement in PDF format. You need to unzip the file and install the font on your computer.
-
How can you use PF Fusion Sans Pro Black?
-
Once you have downloaded and installed this font, you can use it for various purposes. Here are some examples:
-
You can use it for personal projects such as logos, posters, flyers, etc.
-
If you want to use this font for your own projects, such as logos, posters, flyers, etc., you can do so without any restrictions or limitations. You can create stunning designs that showcase your creativity and style. You can also combine this font with other fonts or graphics to create more variety and contrast.
-
You can use it for web design and development with @font-face kit
-
If you want to use this font for web design and development, you can do so with the @font-face kit that is included in the pro version of this font. This kit allows you to embed this font on your website using CSS code. This way, you can ensure that your website visitors will see this font regardless of their browser or device. You can also customize the appearance and behavior of this font using CSS properties such as color, size, alignment, etc.
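As a rough illustration, a typical @font-face rule from such a kit looks like the sketch below. The folder layout, file names, and formats are assumptions for the example, so match them to the files you actually receive with the kit.
```
/* Minimal sketch of embedding the black weight with @font-face.
   File names and formats are assumptions; use the ones from your kit. */
@font-face {
  font-family: "PF Fusion Sans Pro";
  src: url("fonts/PFFusionSansPro-Black.woff2") format("woff2"),
       url("fonts/PFFusionSansPro-Black.woff") format("woff");
  font-weight: 900; /* black */
  font-style: normal;
}

h1 {
  font-family: "PF Fusion Sans Pro", sans-serif;
  font-weight: 900; /* picks up the black face declared above */
}
```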
-
You can use it for commercial projects with a license from the designer
-
If you want to use this font for commercial projects, such as books, magazines, packaging, etc., you need to obtain a license from the designer Panos Vassiliou. You can contact him through his website or email and request a quote for your project. The license fee will depend on the type and scope of your project, as well as the number of users and devices that will use this font. The license agreement will grant you the right to use this font for your specific project and protect you from any legal issues.
-
Conclusion
-
PF Fusion Sans Pro Black is a condensed grotesk typeface that has a unique and versatile style that combines roman and gothic elements. It is designed by Panos Vassiliou, a renowned Greek type designer and founder of Parachute. It is part of the PF Fusion Sans font family, which comes in four weights: light, medium, heavy, and black. It is a pro version that supports all European languages and has 19 special OpenType features. It is free for personal use and easy to download and install from the official website of the designer. It can be used for various projects, such as logos, posters, flyers, web design, etc. It can also be used for commercial projects with a license from the designer.
-
If you are looking for a font that is unique, versatile, and high-quality, you should download PF Fusion Sans Pro Black today and start creating amazing designs with it.
-
FAQs
-
Here are some frequently asked questions about PF Fusion Sans Pro Black:
-
What is the difference between PF Fusion Sans and PF Fusion Sans Pro?
-
The difference between PF Fusion Sans and PF Fusion Sans Pro is that the pro version has more features and improvements than the original version. The pro version supports all European languages, including Greek and Cyrillic. It also has 19 special OpenType features, such as ligatures, alternates, fractions, ordinals, etc. The original version only supports Latin languages and has fewer OpenType features.
-
How many glyphs and characters does PF Fusion Sans Pro Black have?
-
PF Fusion Sans Pro Black has 1,032 glyphs and characters that cover all the needs of modern typography. It has uppercase and lowercase letters, numerals, punctuation marks, symbols, accents, diacritics, etc. It also has special characters such as currency signs, mathematical operators, arrows, etc.
-
Can I use PF Fusion Sans Pro Black for free?
-
You can use PF Fusion Sans Pro Black for free for personal use only. This means you can use it for your own projects that are not intended for commercial purposes or distribution. You cannot use it for projects that involve selling, advertising, or promoting products or services. You also cannot share or distribute this font to others without permission from the designer.
-
How can I get a license for PF Fusion Sans Pro Black?
-
If you want to get a license for PF Fusion Sans Pro Black, you need to contact the designer Panos Vassiliou through his website or email and request a quote for your project. The license fee will depend on the type and scope of your project, as well as the number of users and devices that will use this font. The license agreement will grant you the right to use this font for your specific project and protect you from any legal issues.
-
What are some similar fonts to PF Fusion Sans Pro Black?
-
Some similar fonts to PF Fusion Sans Pro Black are:
-
-
Akzidenz Grotesk by Berthold: A classic grotesk typeface that was designed in Germany in 1896.
-
Helvetica by Max Miedinger: A popular sans serif typeface that was designed in Switzerland in 1957.
-
DIN by Albert-Jan Pool: A geometric sans serif typeface that was based on the German industrial standard in 1936.
-
Gotham by Tobias Frere-Jones: A modern sans serif typeface that was inspired by American signage in 2000.
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/World Soccer Champs mod apk dinheiro infinito o melhor simulador de futebol para Android.md b/spaces/congsaPfin/Manga-OCR/logs/World Soccer Champs mod apk dinheiro infinito o melhor simulador de futebol para Android.md
deleted file mode 100644
index c707f1a9d889ee3f818bbe8a390cae33dc019254..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/World Soccer Champs mod apk dinheiro infinito o melhor simulador de futebol para Android.md
+++ /dev/null
@@ -1,80 +0,0 @@
-
-
World Soccer Champs APK Dinheiro Infinito: How to Download and Play the Most Fun Soccer Simulator of 2023
-
Are you a soccer fan who wants to have fun with a game that lets you control the sport's biggest stars, compete in thrilling championships, and manage your own club? Then you need to check out World Soccer Champs APK Dinheiro Infinito, one of the best soccer games for Android in 2023. In this article, we will show you what this game is, how to download it, and how to play it. Check it out!
-
What is World Soccer Champs APK Dinheiro Infinito?
-
World Soccer Champs APK Dinheiro Infinito is a modified version of the original game World Soccer Champs, developed by Monkey I-Brow Studios. This game is a soccer simulator that lets you take on the role of coach and owner of a soccer club, choosing the players, the tactics, the kits, and even the stadium. You can also control the players on the pitch, making passes, dribbles, and shots with simple taps on the screen.
A soccer game with incredible graphics and addictive gameplay
-
One of the main strengths of World Soccer Champs APK Dinheiro Infinito is its graphics quality. The game has sharp, vibrant visuals that draw you into the electrifying drama of every match. The players are well drawn and have individual attributes, such as skill, speed, and strength. The stadiums are detailed and faithfully reproduce the real venues of the world's major championships.
-
The gameplay of World Soccer Champs APK Dinheiro Infinito is another strong point. The game is easy to learn and fun to play, with intuitive swipe-and-tap controls. You can make precise passes, dribble past opponents, shoot with power, and even take free kicks and penalties. Matches are played in a semi-automatic format, meaning you only need to step in at the right moments to decide the outcome of the game.
-
A coach and club-owner simulator
-
But World Soccer Champs APK Dinheiro Infinito is not just a soccer game. It is also a coach and club-owner simulator that gives you the chance to manage every aspect of your club. You can choose from more than 100 real clubs or create your own, customizing the name, the crest, the colors, and the country. You can also sign and sell players, improve the club's facilities, set ticket prices, negotiate sponsorships, and much more. The game has a progression system that rewards you with money and experience points for every match, league, or cup you win. You can use these resources to improve your team and your club, making them ever more competitive and profitable.
-
A free-to-play game with the option to disable ads
-
World Soccer Champs APK Dinheiro Infinito is free to play, meaning you don't have to pay anything to download and install it on your Android device. However, the original game shows ads during or between matches, which can be a little annoying for some players. That is why the modified version gives you the option to disable ads completely, without spending real money. It also gives you unlimited in-game money, which lets you buy the best players, upgrade the club's facilities, and unlock all of the game's features without limitations.
-
How to download World Soccer Champs APK Dinheiro Infinito?
-
Now that you know what World Soccer Champs APK Dinheiro Infinito is and why it is one of the best soccer games for Android in 2023, you are probably wondering how to download and install it on your device. Don't worry: we will explain everything you need to know to do it quickly and safely. Read on:
-
Minimum requirements to install the game
-
Before downloading World Soccer Champs APK Dinheiro Infinito, check whether your Android device meets the minimum requirements to run the game smoothly. According to the developers of the original game, they are:

- Operating system: Android 5.0 or higher
- RAM: 1 GB or more
- Storage space: 100 MB or more
- Internet connection: not required

If your device meets these requirements, you can proceed to download the modified APK file with unlimited money.
-
Steps to download the modified APK with unlimited money
-
To download World Soccer Champs APK Dinheiro Infinito, follow these steps:

1. Go to a trusted site that offers the modified APK with unlimited money, such as [APKPure], one of the most popular and safest sources of modified Android apps and games.
2. On the site, search for World Soccer Champs and click the download button. You will be redirected to a new page with more information about the game and the APK file.
3. On the new page, check that the APK is the latest version of the game (4.0.5 at the time of writing) and that its description mentions "unlimited money" (dinheiro infinito) or something similar. If everything is correct, click the download button again and wait for the file to finish downloading to your device.
4. After the download, locate the APK file in your device's downloads folder and tap it to start the installation. You may need to enable the option to install apps from unknown sources in your device's settings; otherwise the installation may be blocked.
5. Follow the on-screen instructions to complete the installation. After that, you can open the game and start having fun with World Soccer Champs APK Dinheiro Infinito.
-
Precautions against viruses and malware
-
Although it is a fun, free game, World Soccer Champs APK Dinheiro Infinito can also put your Android device at risk: a modified APK file may carry viruses, malware, or other malicious programs that can damage your device, steal your data, or compromise your security. Take a few precautions to avoid these problems (see also the checksum sketch after this list):

- Before downloading the modified APK, check that the site offering it is trustworthy and safe. Tools such as [VirusTotal] or [Google Safe Browsing] can analyze the site for signs of fraud or malware.
- After downloading the modified APK, check that it has the expected size and signature. Apps such as [APK Editor] or [APK Analyzer] can inspect the file and verify whether it matches the original game or contains injected malicious code.
- Before installing the game, back up your data and your device. Apps such as [Google Drive] or [Titanium Backup] can store your files, photos, contacts, and other important information somewhere safe, so you can restore everything easily if something goes wrong.
- After installing the game, scan your device with an antivirus or anti-malware app such as [Avast] or [Malwarebytes] to verify it is free of any threat.
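-
A quick way to check a downloaded file yourself is to compute its SHA-256 checksum and compare it against the one published by the download site, when available. Below is a minimal Python sketch; the file name is only an example, not a real release:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare this output with the checksum listed on the download page (if any).
print(sha256_of("world-soccer-champs-mod.apk"))  # hypothetical file name
```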
-
How to play World Soccer Champs APK Dinheiro Infinito?
-
Now that you have downloaded and installed World Soccer Champs APK Dinheiro Infinito on your Android device, you are ready to have fun with this impressive soccer simulator. But how do you play it? Which leagues and cups are available? How do you build a championship team and manage your club? Don't worry, we will teach you all of that below. Here are the tips:
-
Intuitive swipe-and-tap controls
-
World Soccer Champs APK Dinheiro Infinito has simple, intuitive controls that let you direct the players on the pitch with ease. You only need to swipe and tap the screen to pass, dribble, and shoot. Here is how it works:

- To pass, swipe toward the player you want to pass to. The longer the swipe, the stronger the pass.
- To dribble, tap the screen when the ball is close to your player. You can tap several times to chain dribbles.
- To shoot, swipe toward the goal. The longer the swipe, the harder the shot. You can also swipe at different angles to put spin on the ball.
- To take free kicks and penalties, swipe toward the goal. Swiping in a curve bends the ball.
-
Leagues and cups available to play
-
World Soccer Champs APK Dinheiro Infinito offers a wide variety of leagues and cups for you to contest with your club. You can choose from more than 100 real clubs from different countries and continents, or create your own custom club, and you can pick a difficulty level from beginner to legendary. Some of the available competitions:

- Champions League: the most prestigious competition in the world, bringing together Europe's best clubs.
- World Cup: the most important competition in the world, bringing together the best national teams.
- Copa Libertadores: the most important competition in South America, bringing together the continent's best clubs.
- Premier League: the most popular league in the world, with England's best clubs.
- La Liga: the most technical league in the world, with Spain's best clubs.
- Bundesliga: the most balanced league in the world, with Germany's best clubs.
- Serie A: the most traditional league in the world, with Italy's best clubs.
- Ligue 1: the fastest-rising league in the world, with France's best clubs.
- And many other leagues and cups from countries and regions such as Brazil, Argentina, Mexico, the United States, Asia, Africa, and Oceania.
-
Tips for building a championship team and managing your club
-
To do well in World Soccer Champs APK Dinheiro Infinito, you need to build a championship team and manage your club wisely. You can use the unlimited money the game gives you to buy the best players on the market, but that is not enough: you also need the right tactics for each match, trained and motivated players, and quick solutions to whatever problems come up. Some tips:

- Choose players according to their attributes and positions. Each player has ratings such as skill, speed, strength, stamina, passing, shooting, and defending. Pick the players that fit your playing style and your tactical formation.
- Choose the formation according to your opponent and your objective. You can pick among formations such as 4-4-2, 4-3-3, 3-5-2, and 5-3-2. If you want to attack more, choose a formation with more forwards; if you want to defend more, choose one with more defenders.
- Train your players regularly to improve their attributes (skill, speed, strength, stamina, passing, shooting, and defending), according to your needs and priorities. For example, if you want to improve your attack, train your forwards' shooting.
- Motivate your players before and during matches to boost their performance. You can praise, encourage, or challenge them, depending on their personality and situation: praise a discouraged player, encourage a confident one, and challenge a complacent one.
- Solve the problems that come up at your club to avoid conflicts and crises. You may face injuries, suspensions, unhappy players, transfer offers, and more; deal with them quickly and effectively. Replace an injured player, respect a suspension, talk to an unhappy player to work things out, and weigh whether a transfer offer is worth accepting or refusing.
-
Conclusion
-
World Soccer Champs APK Dinheiro Infinito is a soccer game that lets you control the sport's stars, compete in thrilling championships, and run your own soccer club. The game has impressive graphics, addictive gameplay, and plenty of customization options. It is also free to play and gives you the option to disable ads and to spend unlimited in-game money. To download it, follow a few simple steps and take some precautions against viruses and malware. To play it, use the intuitive swipe-and-tap controls, pick the leagues and cups you want to contest, and build a championship team while managing your club wisely. If you are a soccer fan who wants a game that lets you live the thrill of the sport, don't waste time: download World Soccer Champs APK Dinheiro Infinito right now!
-
Frequently asked questions
-
Below we answer some of the most frequent questions about World Soccer Champs APK Dinheiro Infinito. See whether yours is among them:
-
What is an APK?
-
APK stands for Android Package Kit, the file format used to install apps and games on the Android operating system. An APK file contains all the components needed to run an app or game on your Android device.
-
What is unlimited money (dinheiro infinito)?
-
Unlimited money is a modification made to some games that gives you an unlimited amount of in-game money to spend. It lets you buy the best items, improve your skills, and unlock all of a game's features without restrictions.
-
What does "modded" mean?
-
Modded is a term for an app or game that has been altered by someone other than the original developer. The changes may add new features, remove restrictions, fix bugs, or change the app's or game's appearance.
-
Is it safe to download and install World Soccer Champs APK Dinheiro Infinito?
-
It can be safe or not, depending on the source you use to download the modified APK file. If you use a trusted, secure site such as [APKPure], you can download and install the game without problems. If you use a dubious or unknown site, you risk downloading an APK infected with viruses, malware, or other malicious programs that can damage your device, steal your data, or compromise your security. That is why it is important to take some precautions before, during, and after downloading and installing the game, as explained in this article.
-
Is it legal to download and install World Soccer Champs APK Dinheiro Infinito?
-
It can be legal or not, depending on the legislation of your country or region. In general, downloading and installing modified apps and games is not considered illegal, as long as you do not use them for commercial or fraudulent purposes. However, modified apps and games can violate the original developers' terms of use, and the developers can take legal action against users who run them without authorization. It is therefore advisable to respect the original developers' copyright and to use modified apps and games at your own risk.
-
-
\ No newline at end of file
diff --git a/spaces/contluForse/HuggingGPT/assets/As Cangaceiras Erticas (1974) Download High Speed Chrmulti WORK.md b/spaces/contluForse/HuggingGPT/assets/As Cangaceiras Erticas (1974) Download High Speed Chrmulti WORK.md
deleted file mode 100644
index 56dbdf8f3f989568294f68146123168b95ff4187..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/As Cangaceiras Erticas (1974) Download High Speed Chrmulti WORK.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
As Cangaceiras Erticas (1974): Download High Speed chrmulti
-
-
-- New interface (change the LAYOUT option in `config.py` to switch between a ``left-right`` layout and a ``top-bottom`` layout)
-
-
-
- All buttons are generated dynamically by reading functional.py and can easily be customized to add custom features, which makes using the clipboard easier.
-
-
-
-
-- Error correction / text polishing.
-
-
-
-
-- If the output contains equations, they are displayed both as TeX source and in rendered form, to make reading and copying easier.
-
-
-
-
-- Don't feel like reading the project's code? The whole project can be explained directly by ChatGPT.
-
-
-
-
-- Calls to a variety of large language models (ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4).
-
-
-
-
----
-# Installation
-## Installation - Method 1: Run directly (Windows, Linux or MacOS)
-
-1. Download the project
-```sh
-git clone https://github.com/binary-husky/chatgpt_academic.git
-cd chatgpt_academic
-```
-
-2. Configure the API key
-
-In `config.py`, configure the API key and other settings. See [Special network environment settings](https://github.com/binary-husky/gpt_academic/issues/1).
-
-(P.S. When the program runs, it first checks for a private configuration file named `config_private.py` and uses its values to override the same-named settings in `config.py`. If you understand how these configurations are read, we therefore strongly recommend creating a new configuration file named `config_private.py` next to `config.py` and moving (copying) the settings from `config.py` into it. `config_private.py` is not tracked by Git, which keeps your private information safe. P.S. The project also supports configuring most options through environment variables; the expected format is shown in the `docker-compose` file. Read priority: environment variables > `config_private.py` > `config.py`.)
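-
-A minimal sketch of that read priority (a hypothetical helper for illustration, not the project's actual loader):
-
-```python
-import os
-
-def read_single_conf(name, default=None):
-    # 1) an environment variable wins
-    if name in os.environ:
-        return os.environ[name]
-    # 2) then config_private.py, if it exists
-    try:
-        import config_private
-        if hasattr(config_private, name):
-            return getattr(config_private, name)
-    except ImportError:
-        pass
-    # 3) finally the tracked config.py
-    import config
-    return getattr(config, name, default)
-```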
-
-
-3. Install the dependencies
-```sh
-# (Option I: installation for Python users) (Python 3.9 or higher; the newer the better). Note: use the official pip source or the Aliyun pip source. To switch sources temporarily: python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
-python -m pip install -r requirements.txt
-
-# (Option II: installation for non-Python users) Use Anaconda; the steps are similar (https://www.bilibili.com/video/BV1rc411W7Dr):
-conda create -n gptac_venv python=3.11 # Create the anaconda env
-conda activate gptac_venv # Activate the anaconda env
-python -m pip install -r requirements.txt # Same step as the pip installation
-```
-
-Click here to expand this section if you want to support THU ChatGLM / FDU MOSS as backends.
-
-
-【Optional】 If you want to support THU ChatGLM / FDU MOSS as backends, additional dependencies must be installed (prerequisites: comfortable with Python + have used PyTorch + a sufficiently powerful machine):
-```sh
-# 【Optional Step I】 Support THU ChatGLM. Note on THU ChatGLM: if you hit the error "Call to ChatGLM failed, ChatGLM parameters cannot be loaded normally", refer to the following: 1: the version installed by default is torch+cpu; to use CUDA, uninstall torch and reinstall torch+cuda; 2: if the model cannot be loaded because the local machine is not powerful enough, you can change the model precision in request_llm/bridge_chatglm.py by replacing AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) with AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
-python -m pip install -r request_llm/requirements_chatglm.txt
-
-# 【Optional Step II】 Support FDU MOSS
-python -m pip install -r request_llm/requirements_moss.txt
-git clone https://github.com/OpenLMLab/MOSS.git request_llm/moss # Note: When running this line of code, you must be in the project root path.
-
-# 【Optional Step III】Make sure the AVAIL_LLM_MODELS in the config.py configuration file contains the desired model. Currently, all models supported are as follows (the jittorllms series currently only supports the docker scheme):
-AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "newbing", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"]
-```
-
-
-
-
-
-
-4. Run
-```sh
-python main.py
-```
-
-5. Test the function plugins
-```
-- Test the function plugin template (asks GPT to answer what happened in history on this day); you can use this function as a template to implement more complex features.
-    Click on "[Function plugin template demo] Today in history"
-```
-
-## Installation - Method 2: Using Docker
-
-1. ChatGPT only (recommended for most people)
-
-``` sh
-git clone https://github.com/binary-husky/chatgpt_academic.git # Download the project
-cd chatgpt_academic # Enter the directory
-nano config.py # Edit config.py with any text editor to configure "Proxy", "API_KEY" and "WEB_PORT" (e.g. 50923)
-docker build -t gpt-academic . # Build the image
-
-# (Last step - option 1) In a Linux environment, using `--net=host` is easier and faster
-docker run --rm -it --net=host gpt-academic
-# (Last step - option 2) In a macOS/Windows environment, only the -p option can expose the container's port (e.g. 50923) to the host port
-docker run --rm -it -e WEB_PORT=50923 -p 50923:50923 gpt-academic
-```
-
-2. ChatGPT + ChatGLM + MOSS (requires familiarity with Docker)
-
-``` sh
-# Edit docker-compose.yml: remove schemes 1 and 3 and keep scheme 2. Then adjust the configuration of scheme 2 in docker-compose.yml following the comments.
-docker-compose up
-```
-
-3. ChatGPT + LLAMA + PanGu + RWKV (requires familiarity with Docker)
-``` sh
-# Edit docker-compose.yml: remove schemes 1 and 2 and keep scheme 3. Then adjust the configuration of scheme 3 in docker-compose.yml following the comments.
-docker-compose up
-```
-
-
-## Installation - Method 3: Other deployment options
-
-1. How to use a reverse-proxy URL / the Microsoft Azure cloud API
-Simply configure API_URL_REDIRECT as instructed in config.py.
-
-2. Remote deployment on a cloud server (requires knowledge of and experience with cloud servers)
-Please see [Deployment wiki-1](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97).
-
-3. Using WSL2 (Windows Subsystem for Linux)
-Please see [Deployment wiki-2](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2).
-
-4. How to run under a sub-path (such as `http://localhost/subpath`)
-Please see the [FastAPI run instructions](docs/WithFastapi.md).
-
-5. Using docker-compose
-Please read docker-compose.yml, then follow the instructions it provides.
-
-# Advanced usage
-## Customizing new convenience buttons / custom function plugins
-
-1. Customizing new convenience buttons (academic shortcuts)
-Open core_functional.py with any text editor, add an entry like the one below, then restart the program. (If the button was added successfully and is visible, the prefix and suffix both support hot-editing and take effect without restarting the program.)
-For example
-```
-"Super translate": {
-    # Prefix: added before your input. For example, it can describe your request, such as translating, explaining code, formatting, etc.
-    "Prefix": "Please translate the following content into Chinese, then explain each proper noun that appears in it with a markdown table:\n\n",
-
-    # Suffix: added after your input. Combined with the prefix, for example, it can wrap your input in quotation marks.
-    "Suffix": "",
-},
-```
-
-
-
-
-2. Custom function plugins
-
-Write powerful function plugins to carry out any task you can or cannot imagine.
-Writing and debugging plugins for this project has a very low barrier: with basic Python knowledge, you can implement your own plugin by following the template we provide.
-Please see the [function plugin guide](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97) for details.
-
----
-# Latest Update
-
-## New features being rolled out.
-
-1. Conversation saving.
-In the function-plugin area, call "Save the current conversation" to save the current conversation as a readable and restorable HTML file. In the function-plugin area (drop-down menu), call "Load a conversation history archive" to restore a previous conversation. Tip: clicking "Load a conversation history archive" without specifying a file lets you browse the cached HTML archives. Click "Delete all local conversation history records" to clear the HTML archive cache.
-
-
-
-
-
-
-
-2. Report generation. Most plugins produce a work report after they finish running.
-
-
-
-
-
-
-3. Modular feature design, with simple interfaces that support powerful functionality.
-
-
-
-
-
-4. An open-source project that can "translate itself".
-
-
-
-
-5. Translating other open-source projects is no problem either.
-
-
-
-
-
-
-
-
-6. Live2D decoration feature (disabled by default; requires editing config.py).
-
-
-
-
-7. Support for the MOSS language model.
-
-
-
-
-8. OpenAI image generation.
-
-
-
-
-9. OpenAI audio analysis and speech synthesis.
-
-
-
-
-10. Whole-document LaTeX error correction.
-
-
-
-
-
-## Versions:
-- version 3.5 (todo): call all of this project's function plugins in natural language (high priority)
-- version 3.4 (todo): improve multi-thread support for locally run chatglm
-- version 3.3: integrated internet information feature
-- version 3.2: function plugins support more parameter interfaces (conversation saving, decoding code in any language + querying any combination of LLMs simultaneously)
-- version 3.1: support querying several GPT models at once! Support for api2d, load balancing across multiple API keys.
-- version 3.0: support for chatglm and other small LLMs
-- version 2.6: reworked the plugin structure, improved interactivity, added more plugins
-- version 2.5: self-updating; fixed overly long text and token overflow when summarizing a whole project
-- version 2.4: (1) new whole-document PDF translation feature; (2) new input-area position-swap feature; (3) new vertical layout option; (4) improved multi-threaded function plugins
-- version 2.3: improved multi-threaded interactivity
-- version 2.2: function plugins support hot reloading
-- version 2.1: collapsible layout
-- version 2.0: introduced modular function plugins
-- version 1.0: basic features
-
-gpt_academic developer QQ group 2: 610599535
-
-- Known issues
-  - Some browser translation plugins interfere with the front-end of this software
-  - Gradio versions that are too new or too old cause many problems
-
-## References and learning
-
-```
-Many other excellent projects are referenced in this project's code, including:
-
-# Project 1: Tsinghua's ChatGLM-6B:
-https://github.com/THUDM/ChatGLM-6B
-
-# Project 2: Tsinghua's JittorLLMs:
-https://github.com/Jittor/JittorLLMs
-
-# Project 3: Edge-GPT:
-https://github.com/acheong08/EdgeGPT
-
-# Project 4: ChuanhuChatGPT:
-https://github.com/GaiZhenbiao/ChuanhuChatGPT
-
-# Project 5: ChatPaper:
-https://github.com/kaixindelele/ChatPaper
-
-# More:
-https://github.com/gradio-app/gradio
-https://github.com/fghrsh/live2d_demo
-```
\ No newline at end of file
diff --git a/spaces/damian0815/Erasing-Concepts-In-Diffusion/memory_efficiency.py b/spaces/damian0815/Erasing-Concepts-In-Diffusion/memory_efficiency.py
deleted file mode 100644
index c913313d2ad40229fd813fea70dff79eb308559c..0000000000000000000000000000000000000000
--- a/spaces/damian0815/Erasing-Concepts-In-Diffusion/memory_efficiency.py
+++ /dev/null
@@ -1,89 +0,0 @@
-# adapted from EveryDream2Trainer
-import contextlib
-import traceback
-
-import torch
-from torch.cuda.amp import GradScaler
-
-from StableDiffuser import StableDiffuser
-
-
-class MemoryEfficiencyWrapper:
-
- def __init__(self,
- diffuser: StableDiffuser,
- use_amp: bool,
- use_xformers: bool,
- use_gradient_checkpointing: bool):
- self.diffuser = diffuser
- self.is_sd1attn = diffuser.unet.config["attention_head_dim"] == [8, 8, 8, 8]
- self.is_sd1attn = diffuser.unet.config["attention_head_dim"] == 8 or self.is_sd1attn
-
- self.use_amp = use_amp
- self.use_xformers = use_xformers
- self.use_gradient_checkpointing = use_gradient_checkpointing
-
- def __enter__(self):
- if self.use_gradient_checkpointing:
- self.diffuser.unet.enable_gradient_checkpointing()
- self.diffuser.text_encoder.gradient_checkpointing_enable()
-
- if self.use_xformers:
- if (self.use_amp and self.is_sd1attn) or (not self.is_sd1attn):
- try:
- self.diffuser.unet.enable_xformers_memory_efficient_attention()
- print("Enabled xformers")
- except Exception as ex:
- print("failed to load xformers, using attention slicing instead")
- self.diffuser.unet.set_attention_slice("auto")
- elif (not self.use_amp and self.is_sd1attn):
- print("AMP is disabled but model is SD1.X, using attention slicing instead of xformers")
- self.diffuser.unet.set_attention_slice("auto")
- else:
- print("xformers disabled via arg, using attention slicing instead")
- self.diffuser.unet.set_attention_slice("auto")
-
- #self.diffuser.vae = self.diffuser.vae.to(self.diffuser.vae.device, dtype=torch.float16 if self.use_amp else torch.float32)
- self.diffuser.unet = self.diffuser.unet.to(self.diffuser.unet.device, dtype=torch.float32)
-
- try:
- # unet = torch.compile(unet)
- # text_encoder = torch.compile(text_encoder)
- # vae = torch.compile(vae)
- torch.set_float32_matmul_precision('high')
- torch.backends.cudnn.allow_tf32 = True
- # logging.info("Successfully compiled models")
- except Exception as ex:
- print(f"Failed to compile model, continuing anyway, ex: {ex}")
-
- self.grad_scaler = GradScaler(
- enabled=self.use_amp,
- init_scale=2 ** 17.5,
- growth_factor=2,
- backoff_factor=1.0 / 2,
- growth_interval=25,
- )
-
- def backward(self, loss):
- self.grad_scaler.scale(loss).backward()
-
- def step(self, optimizer):
- self.grad_scaler.step(optimizer)
- self.grad_scaler.update()
- optimizer.zero_grad(set_to_none=True)
-
- def __exit__(self, exc_type, exc_value, tb):
- if exc_type is not None:
- traceback.print_exception(exc_type, exc_value, tb)
- # return False # uncomment to pass exception through):
- self.diffuser.unet.disable_gradient_checkpointing()
- try:
- self.diffuser.text_encoder.gradient_checkpointing_disable()
- except AttributeError:
- # self.diffuser.text_encoder is likely `del`eted
- pass
-
- self.diffuser.unet.disable_xformers_memory_efficient_attention()
- self.diffuser.unet.set_attention_slice("auto")
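-
-
-# Usage sketch (an illustrative assumption; `diffuser`, `loader`, `compute_loss`
-# and `optimizer` are hypothetical and not defined in this module):
-#
-#   wrapper = MemoryEfficiencyWrapper(diffuser, use_amp=True, use_xformers=True,
-#                                     use_gradient_checkpointing=True)
-#   with wrapper:
-#       for batch in loader:
-#           loss = compute_loss(batch)
-#           wrapper.backward(loss)    # scales the loss through the GradScaler
-#           wrapper.step(optimizer)   # scaler.step + scaler.update + zero_grad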
diff --git a/spaces/danterivers/music-generation-samples/app_batched.py b/spaces/danterivers/music-generation-samples/app_batched.py
deleted file mode 100644
index 769a23deea18b328a911f2b20bd29b28acdfec50..0000000000000000000000000000000000000000
--- a/spaces/danterivers/music-generation-samples/app_batched.py
+++ /dev/null
@@ -1,130 +0,0 @@
-"""
-Copyright (c) Meta Platforms, Inc. and affiliates.
-All rights reserved.
-
-This source code is licensed under the license found in the
-LICENSE file in the root directory of this source tree.
-"""
-
-from tempfile import NamedTemporaryFile
-import torch
-import gradio as gr
-from audiocraft.data.audio_utils import convert_audio
-from audiocraft.data.audio import audio_write
-from audiocraft.models import MusicGen
-
-
-MODEL = None
-
-
-def load_model():
- print("Loading model")
- return MusicGen.get_pretrained("melody")
-
-
-def predict(texts, melodies):
- global MODEL
- if MODEL is None:
- MODEL = load_model()
-
- duration = 12
- MODEL.set_generation_params(duration=duration)
-
- print(texts, melodies)
- processed_melodies = []
-
- target_sr = 32000
- target_ac = 1
- for melody in melodies:
- if melody is None:
- processed_melodies.append(None)
- else:
- sr, melody = melody[0], torch.from_numpy(melody[1]).to(MODEL.device).float().t()
- if melody.dim() == 1:
- melody = melody[None]
- melody = melody[..., :int(sr * duration)]
- melody = convert_audio(melody, sr, target_sr, target_ac)
- processed_melodies.append(melody)
-
- outputs = MODEL.generate_with_chroma(
- descriptions=texts,
- melody_wavs=processed_melodies,
- melody_sample_rate=target_sr,
- progress=False
- )
-
- outputs = outputs.detach().cpu().float()
- out_files = []
- for output in outputs:
- with NamedTemporaryFile("wb", suffix=".wav", delete=False) as file:
- audio_write(file.name, output, MODEL.sample_rate, strategy="loudness", add_suffix=False)
- waveform_video = gr.make_waveform(file.name)
- out_files.append(waveform_video)
- return [out_files]
-
-
-with gr.Blocks() as demo:
- gr.Markdown(
- """
- # MusicGen
-
- This is the demo for [MusicGen](https://github.com/facebookresearch/audiocraft), a simple and controllable model for music generation
- presented at: ["Simple and Controllable Music Generation"](https://huggingface.co/papers/2306.05284).
-
-
-
- for longer sequences, more control and no queue.
- """
- )
- with gr.Row():
- with gr.Column():
- with gr.Row():
- text = gr.Text(label="Describe your music", lines=2, interactive=True)
- melody = gr.Audio(source="upload", type="numpy", label="Condition on a melody (optional)", interactive=True)
- with gr.Row():
- submit = gr.Button("Generate")
- with gr.Column():
- output = gr.Video(label="Generated Music")
- submit.click(predict, inputs=[text, melody], outputs=[output], batch=True, max_batch_size=12)
- gr.Examples(
- fn=predict,
- examples=[
- [
- "An 80s driving pop song with heavy drums and synth pads in the background",
- "./assets/bach.mp3",
- ],
- [
- "A cheerful country song with acoustic guitars",
- "./assets/bolero_ravel.mp3",
- ],
- [
- "90s rock song with electric guitar and heavy drums",
- None,
- ],
- [
- "a light and cheerly EDM track, with syncopated drums, aery pads, and strong emotions bpm: 130",
- "./assets/bach.mp3",
- ],
- [
- "lofi slow bpm electro chill with organic samples",
- None,
- ],
- ],
- inputs=[text, melody],
- outputs=[output]
- )
- gr.Markdown("""
- ### More details
-
- The model will generate 12 seconds of audio based on the description you provided.
-    You can optionally provide a reference audio from which a broad melody will be extracted.
- The model will then try to follow both the description and melody provided.
- All samples are generated with the `melody` model.
-
- You can also use your own GPU or a Google Colab by following the instructions on our repo.
-
- See [github.com/facebookresearch/audiocraft](https://github.com/facebookresearch/audiocraft)
- for more details.
- """)
-
-demo.queue(max_size=15).launch()
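-
-# Usage sketch (illustrative; it reuses the names defined in this file and mirrors
-# the calls made in predict(), just without the Gradio UI):
-#
-#   model = MusicGen.get_pretrained("melody")
-#   model.set_generation_params(duration=12)
-#   wavs = model.generate_with_chroma(descriptions=["90s rock song with electric guitar"],
-#                                     melody_wavs=[None], melody_sample_rate=32000,
-#                                     progress=False)
-#   audio_write("out", wavs[0].detach().cpu().float(), model.sample_rate, strategy="loudness")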
diff --git a/spaces/davila7/llm-vs-llm/app.py b/spaces/davila7/llm-vs-llm/app.py
deleted file mode 100644
index 5b4943f1524e770d12a8f6e667b12914d4ee904d..0000000000000000000000000000000000000000
--- a/spaces/davila7/llm-vs-llm/app.py
+++ /dev/null
@@ -1,39 +0,0 @@
-import os
-import gradio as gr
-import torch
-import numpy as np
-from transformers import pipeline
-
-name_list = ['microsoft/biogpt', 'google/flan-ul2']
-
-examples = [['COVID-19 is'],['A 65-year-old female patient with a past medical history of']]
-
-print(f"Is CUDA available: {torch.cuda.is_available()}")
-print(f"CUDA device: {torch.cuda.get_device_name(torch.cuda.current_device())}")
-
-pipe_biogpt = pipeline("text-generation", model="microsoft/biogpt", device="cuda:0", model_kwargs={"torch_dtype":torch.bfloat16})
-pipe_flan_ul2 = pipeline("text2text-generation", model="google/flan-ul2", device="cuda:0", model_kwargs={"torch_dtype":torch.bfloat16})
-
-title = "LLM vs LLM!"
-description = "**Disclaimer:** this demo was made for research purposes only."
-
-def inference(text):
- output_biogpt = pipe_biogpt(text, max_length=100)[0]["generated_text"]
- output_flan_ul2 = pipe_flan_ul2(text, max_length=100)[0]["generated_text"]
- return [
- output_biogpt,
- output_flan_ul2
- ]
-
-io = gr.Interface(
- inference,
- gr.Textbox(lines=3),
- outputs=[
- gr.Textbox(lines=3, label="microsoft/biogpt"),
- gr.Textbox(lines=3, label="google/flan-ul2"),
- ],
- title=title,
- description=description,
- examples=examples
-)
-io.launch()
\ No newline at end of file
diff --git a/spaces/dawood/audioldm-text-to-audio-generation/README.md b/spaces/dawood/audioldm-text-to-audio-generation/README.md
deleted file mode 100644
index 1481fe9c9e501ad0edd0aaf201f4cded1b7a9593..0000000000000000000000000000000000000000
--- a/spaces/dawood/audioldm-text-to-audio-generation/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Audioldm Text To Audio Generation
-emoji: 🔊
-colorFrom: indigo
-colorTo: red
-sdk: gradio
-sdk_version: 3.16.2
-app_file: app.py
-pinned: false
-license: bigscience-openrail-m
-duplicated_from: haoheliu/audioldm-text-to-audio-generation
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/dawood17/SayBot_Enchancer/CodeFormer/basicsr/utils/misc.py b/spaces/dawood17/SayBot_Enchancer/CodeFormer/basicsr/utils/misc.py
deleted file mode 100644
index 3b444ff3b950e38f43a5451d1330ff1b65951a9e..0000000000000000000000000000000000000000
--- a/spaces/dawood17/SayBot_Enchancer/CodeFormer/basicsr/utils/misc.py
+++ /dev/null
@@ -1,134 +0,0 @@
-import numpy as np
-import os
-import random
-import time
-import torch
-from os import path as osp
-
-from .dist_util import master_only
-from .logger import get_root_logger
-
-
-def set_random_seed(seed):
- """Set random seeds."""
- random.seed(seed)
- np.random.seed(seed)
- torch.manual_seed(seed)
- torch.cuda.manual_seed(seed)
- torch.cuda.manual_seed_all(seed)
-
-
-def get_time_str():
- return time.strftime('%Y%m%d_%H%M%S', time.localtime())
-
-
-def mkdir_and_rename(path):
- """mkdirs. If path exists, rename it with timestamp and create a new one.
-
- Args:
- path (str): Folder path.
- """
- if osp.exists(path):
- new_name = path + '_archived_' + get_time_str()
- print(f'Path already exists. Rename it to {new_name}', flush=True)
- os.rename(path, new_name)
- os.makedirs(path, exist_ok=True)
-
-
-@master_only
-def make_exp_dirs(opt):
- """Make dirs for experiments."""
- path_opt = opt['path'].copy()
- if opt['is_train']:
- mkdir_and_rename(path_opt.pop('experiments_root'))
- else:
- mkdir_and_rename(path_opt.pop('results_root'))
- for key, path in path_opt.items():
- if ('strict_load' not in key) and ('pretrain_network' not in key) and ('resume' not in key):
- os.makedirs(path, exist_ok=True)
-
-
-def scandir(dir_path, suffix=None, recursive=False, full_path=False):
- """Scan a directory to find the interested files.
-
- Args:
- dir_path (str): Path of the directory.
- suffix (str | tuple(str), optional): File suffix that we are
- interested in. Default: None.
- recursive (bool, optional): If set to True, recursively scan the
- directory. Default: False.
- full_path (bool, optional): If set to True, include the dir_path.
- Default: False.
-
- Returns:
-        A generator for all the interested files with relative paths.
- """
-
- if (suffix is not None) and not isinstance(suffix, (str, tuple)):
- raise TypeError('"suffix" must be a string or tuple of strings')
-
- root = dir_path
-
- def _scandir(dir_path, suffix, recursive):
- for entry in os.scandir(dir_path):
- if not entry.name.startswith('.') and entry.is_file():
- if full_path:
- return_path = entry.path
- else:
- return_path = osp.relpath(entry.path, root)
-
- if suffix is None:
- yield return_path
- elif return_path.endswith(suffix):
- yield return_path
- else:
- if recursive:
- yield from _scandir(entry.path, suffix=suffix, recursive=recursive)
- else:
- continue
-
- return _scandir(dir_path, suffix=suffix, recursive=recursive)
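-
-# Usage sketch (illustrative; the directory and suffixes are hypothetical):
-#   for p in scandir('datasets/train', suffix=('.png', '.jpg'), recursive=True):
-#       print(p)  # yields paths relative to 'datasets/train' unless full_path=True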
-
-
-def check_resume(opt, resume_iter):
- """Check resume states and pretrain_network paths.
-
- Args:
- opt (dict): Options.
- resume_iter (int): Resume iteration.
- """
- logger = get_root_logger()
- if opt['path']['resume_state']:
- # get all the networks
- networks = [key for key in opt.keys() if key.startswith('network_')]
- flag_pretrain = False
- for network in networks:
- if opt['path'].get(f'pretrain_{network}') is not None:
- flag_pretrain = True
- if flag_pretrain:
- logger.warning('pretrain_network path will be ignored during resuming.')
- # set pretrained model paths
- for network in networks:
- name = f'pretrain_{network}'
- basename = network.replace('network_', '')
- if opt['path'].get('ignore_resume_networks') is None or (basename
- not in opt['path']['ignore_resume_networks']):
- opt['path'][name] = osp.join(opt['path']['models'], f'net_{basename}_{resume_iter}.pth')
- logger.info(f"Set {name} to {opt['path'][name]}")
-
-
-def sizeof_fmt(size, suffix='B'):
- """Get human readable file size.
-
- Args:
- size (int): File size.
- suffix (str): Suffix. Default: 'B'.
-
- Return:
-        str: Formatted file size.
- """
- for unit in ['', 'K', 'M', 'G', 'T', 'P', 'E', 'Z']:
- if abs(size) < 1024.0:
- return f'{size:3.1f} {unit}{suffix}'
- size /= 1024.0
- return f'{size:3.1f} Y{suffix}'
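-
-# For example (illustrative): sizeof_fmt(1536) returns '1.5 KB',
-# and sizeof_fmt(3 * 1024**3) returns '3.0 GB'.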
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/misc/cython.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/misc/cython.py
deleted file mode 100644
index 2a42d94a3591e0e8e47f184b303e4aec0a6337ef..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/misc/cython.py
+++ /dev/null
@@ -1,27 +0,0 @@
-""" Exports a no-op 'cython' namespace similar to
-https://github.com/cython/cython/blob/master/Cython/Shadow.py
-
-This allows to optionally compile @cython decorated functions
-(when cython is available at built time), or run the same code
-as pure-python, without runtime dependency on cython module.
-
-We only define the symbols that we use. E.g. see fontTools.cu2qu
-"""
-
-from types import SimpleNamespace
-
-
-def _empty_decorator(x):
- return x
-
-
-compiled = False
-
-for name in ("double", "complex", "int"):
- globals()[name] = None
-
-for name in ("cfunc", "inline"):
- globals()[name] = _empty_decorator
-
-locals = lambda **_: _empty_decorator
-returns = lambda _: _empty_decorator
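-
-# Usage sketch (illustrative): a module writes `from fontTools.misc import cython`
-# and guards optimized code with `if cython.compiled: ...`, or decorates functions
-# with `@cython.cfunc` / `@cython.inline`; with this shim those decorators are
-# no-ops and `cython.compiled` is False, so the same code runs as pure Python.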
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/misc/psOperators.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/misc/psOperators.py
deleted file mode 100644
index d0ef432f5243e5ed0c8fa5b02f4c147dfcb032c2..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/misc/psOperators.py
+++ /dev/null
@@ -1,574 +0,0 @@
-_accessstrings = {0: "", 1: "readonly", 2: "executeonly", 3: "noaccess"}
-
-
-class ps_object(object):
-
- literal = 1
- access = 0
- value = None
-
- def __init__(self, value):
- self.value = value
- self.type = self.__class__.__name__[3:] + "type"
-
- def __repr__(self):
- return "<%s %s>" % (self.__class__.__name__[3:], repr(self.value))
-
-
-class ps_operator(ps_object):
-
- literal = 0
-
- def __init__(self, name, function):
- self.name = name
- self.function = function
- self.type = self.__class__.__name__[3:] + "type"
-
- def __repr__(self):
- return "" % self.name
-
-
-class ps_procedure(ps_object):
- literal = 0
-
- def __repr__(self):
- return ""
-
- def __str__(self):
- psstring = "{"
- for i in range(len(self.value)):
- if i:
- psstring = psstring + " " + str(self.value[i])
- else:
- psstring = psstring + str(self.value[i])
- return psstring + "}"
-
-
-class ps_name(ps_object):
- literal = 0
-
- def __str__(self):
- if self.literal:
- return "/" + self.value
- else:
- return self.value
-
-
-class ps_literal(ps_object):
- def __str__(self):
- return "/" + self.value
-
-
-class ps_array(ps_object):
- def __str__(self):
- psstring = "["
- for i in range(len(self.value)):
- item = self.value[i]
- access = _accessstrings[item.access]
- if access:
- access = " " + access
- if i:
- psstring = psstring + " " + str(item) + access
- else:
- psstring = psstring + str(item) + access
- return psstring + "]"
-
- def __repr__(self):
- return ""
-
-
-_type1_pre_eexec_order = [
- "FontInfo",
- "FontName",
- "Encoding",
- "PaintType",
- "FontType",
- "FontMatrix",
- "FontBBox",
- "UniqueID",
- "Metrics",
- "StrokeWidth",
-]
-
-_type1_fontinfo_order = [
- "version",
- "Notice",
- "FullName",
- "FamilyName",
- "Weight",
- "ItalicAngle",
- "isFixedPitch",
- "UnderlinePosition",
- "UnderlineThickness",
-]
-
-_type1_post_eexec_order = ["Private", "CharStrings", "FID"]
-
-
-def _type1_item_repr(key, value):
- psstring = ""
- access = _accessstrings[value.access]
- if access:
- access = access + " "
- if key == "CharStrings":
- psstring = psstring + "/%s %s def\n" % (
- key,
- _type1_CharString_repr(value.value),
- )
- elif key == "Encoding":
- psstring = psstring + _type1_Encoding_repr(value, access)
- else:
- psstring = psstring + "/%s %s %sdef\n" % (str(key), str(value), access)
- return psstring
-
-
-def _type1_Encoding_repr(encoding, access):
- encoding = encoding.value
- psstring = "/Encoding 256 array\n0 1 255 {1 index exch /.notdef put} for\n"
- for i in range(256):
- name = encoding[i].value
- if name != ".notdef":
- psstring = psstring + "dup %d /%s put\n" % (i, name)
- return psstring + access + "def\n"
-
-
-def _type1_CharString_repr(charstrings):
- items = sorted(charstrings.items())
- return "xxx"
-
-
-class ps_font(ps_object):
- def __str__(self):
- psstring = "%d dict dup begin\n" % len(self.value)
- for key in _type1_pre_eexec_order:
- try:
- value = self.value[key]
- except KeyError:
- pass
- else:
- psstring = psstring + _type1_item_repr(key, value)
- items = sorted(self.value.items())
- for key, value in items:
- if key not in _type1_pre_eexec_order + _type1_post_eexec_order:
- psstring = psstring + _type1_item_repr(key, value)
- psstring = psstring + "currentdict end\ncurrentfile eexec\ndup "
- for key in _type1_post_eexec_order:
- try:
- value = self.value[key]
- except KeyError:
- pass
- else:
- psstring = psstring + _type1_item_repr(key, value)
- return (
- psstring
- + "dup/FontName get exch definefont pop\nmark currentfile closefile\n"
- + 8 * (64 * "0" + "\n")
- + "cleartomark"
- + "\n"
- )
-
- def __repr__(self):
- return ""
-
-
-class ps_file(ps_object):
- pass
-
-
-class ps_dict(ps_object):
- def __str__(self):
- psstring = "%d dict dup begin\n" % len(self.value)
- items = sorted(self.value.items())
- for key, value in items:
- access = _accessstrings[value.access]
- if access:
- access = access + " "
- psstring = psstring + "/%s %s %sdef\n" % (str(key), str(value), access)
- return psstring + "end "
-
- def __repr__(self):
- return ""
-
-
-class ps_mark(ps_object):
- def __init__(self):
- self.value = "mark"
- self.type = self.__class__.__name__[3:] + "type"
-
-
-class ps_procmark(ps_object):
- def __init__(self):
- self.value = "procmark"
- self.type = self.__class__.__name__[3:] + "type"
-
-
-class ps_null(ps_object):
- def __init__(self):
- self.type = self.__class__.__name__[3:] + "type"
-
-
-class ps_boolean(ps_object):
- def __str__(self):
- if self.value:
- return "true"
- else:
- return "false"
-
-
-class ps_string(ps_object):
- def __str__(self):
- return "(%s)" % repr(self.value)[1:-1]
-
-
-class ps_integer(ps_object):
- def __str__(self):
- return repr(self.value)
-
-
-class ps_real(ps_object):
- def __str__(self):
- return repr(self.value)
-
-
-class PSOperators(object):
- def ps_def(self):
- obj = self.pop()
- name = self.pop()
- self.dictstack[-1][name.value] = obj
-
- def ps_bind(self):
- proc = self.pop("proceduretype")
- self.proc_bind(proc)
- self.push(proc)
-
- def proc_bind(self, proc):
- for i in range(len(proc.value)):
- item = proc.value[i]
- if item.type == "proceduretype":
- self.proc_bind(item)
- else:
- if not item.literal:
- try:
- obj = self.resolve_name(item.value)
- except:
- pass
- else:
- if obj.type == "operatortype":
- proc.value[i] = obj
-
- def ps_exch(self):
- if len(self.stack) < 2:
- raise RuntimeError("stack underflow")
- obj1 = self.pop()
- obj2 = self.pop()
- self.push(obj1)
- self.push(obj2)
-
- def ps_dup(self):
- if not self.stack:
- raise RuntimeError("stack underflow")
- self.push(self.stack[-1])
-
- def ps_exec(self):
- obj = self.pop()
- if obj.type == "proceduretype":
- self.call_procedure(obj)
- else:
- self.handle_object(obj)
-
- def ps_count(self):
- self.push(ps_integer(len(self.stack)))
-
- def ps_eq(self):
- any1 = self.pop()
- any2 = self.pop()
- self.push(ps_boolean(any1.value == any2.value))
-
- def ps_ne(self):
- any1 = self.pop()
- any2 = self.pop()
- self.push(ps_boolean(any1.value != any2.value))
-
- def ps_cvx(self):
- obj = self.pop()
- obj.literal = 0
- self.push(obj)
-
- def ps_matrix(self):
- matrix = [
- ps_real(1.0),
- ps_integer(0),
- ps_integer(0),
- ps_real(1.0),
- ps_integer(0),
- ps_integer(0),
- ]
- self.push(ps_array(matrix))
-
- def ps_string(self):
- num = self.pop("integertype").value
- self.push(ps_string("\0" * num))
-
- def ps_type(self):
- obj = self.pop()
- self.push(ps_string(obj.type))
-
- def ps_store(self):
- value = self.pop()
- key = self.pop()
- name = key.value
- for i in range(len(self.dictstack) - 1, -1, -1):
- if name in self.dictstack[i]:
- self.dictstack[i][name] = value
- break
- self.dictstack[-1][name] = value
-
- def ps_where(self):
- name = self.pop()
- # XXX
- self.push(ps_boolean(0))
-
- def ps_systemdict(self):
- self.push(ps_dict(self.dictstack[0]))
-
- def ps_userdict(self):
- self.push(ps_dict(self.dictstack[1]))
-
- def ps_currentdict(self):
- self.push(ps_dict(self.dictstack[-1]))
-
- def ps_currentfile(self):
- self.push(ps_file(self.tokenizer))
-
- def ps_eexec(self):
- f = self.pop("filetype").value
- f.starteexec()
-
- def ps_closefile(self):
- f = self.pop("filetype").value
- f.skipwhite()
- f.stopeexec()
-
- def ps_cleartomark(self):
- obj = self.pop()
- while obj != self.mark:
- obj = self.pop()
-
- def ps_readstring(self, ps_boolean=ps_boolean, len=len):
- s = self.pop("stringtype")
- oldstr = s.value
- f = self.pop("filetype")
- # pad = file.value.read(1)
- # for StringIO, this is faster
- f.value.pos = f.value.pos + 1
- newstr = f.value.read(len(oldstr))
- s.value = newstr
- self.push(s)
- self.push(ps_boolean(len(oldstr) == len(newstr)))
-
- def ps_known(self):
- key = self.pop()
- d = self.pop("dicttype", "fonttype")
- self.push(ps_boolean(key.value in d.value))
-
- def ps_if(self):
- proc = self.pop("proceduretype")
- if self.pop("booleantype").value:
- self.call_procedure(proc)
-
- def ps_ifelse(self):
- proc2 = self.pop("proceduretype")
- proc1 = self.pop("proceduretype")
- if self.pop("booleantype").value:
- self.call_procedure(proc1)
- else:
- self.call_procedure(proc2)
-
- def ps_readonly(self):
- obj = self.pop()
- if obj.access < 1:
- obj.access = 1
- self.push(obj)
-
- def ps_executeonly(self):
- obj = self.pop()
- if obj.access < 2:
- obj.access = 2
- self.push(obj)
-
- def ps_noaccess(self):
- obj = self.pop()
- if obj.access < 3:
- obj.access = 3
- self.push(obj)
-
- def ps_not(self):
- obj = self.pop("booleantype", "integertype")
- if obj.type == "booleantype":
- self.push(ps_boolean(not obj.value))
- else:
- self.push(ps_integer(~obj.value))
-
- def ps_print(self):
- str = self.pop("stringtype")
- print("PS output --->", str.value)
-
- def ps_anchorsearch(self):
- seek = self.pop("stringtype")
- s = self.pop("stringtype")
- seeklen = len(seek.value)
- if s.value[:seeklen] == seek.value:
- self.push(ps_string(s.value[seeklen:]))
- self.push(seek)
- self.push(ps_boolean(1))
- else:
- self.push(s)
- self.push(ps_boolean(0))
-
- def ps_array(self):
- num = self.pop("integertype")
- array = ps_array([None] * num.value)
- self.push(array)
-
- def ps_astore(self):
- array = self.pop("arraytype")
- for i in range(len(array.value) - 1, -1, -1):
- array.value[i] = self.pop()
- self.push(array)
-
- def ps_load(self):
- name = self.pop()
- self.push(self.resolve_name(name.value))
-
- def ps_put(self):
- obj1 = self.pop()
- obj2 = self.pop()
- obj3 = self.pop("arraytype", "dicttype", "stringtype", "proceduretype")
- tp = obj3.type
- if tp == "arraytype" or tp == "proceduretype":
- obj3.value[obj2.value] = obj1
- elif tp == "dicttype":
- obj3.value[obj2.value] = obj1
- elif tp == "stringtype":
- index = obj2.value
- obj3.value = obj3.value[:index] + chr(obj1.value) + obj3.value[index + 1 :]
-
- def ps_get(self):
- obj1 = self.pop()
- if obj1.value == "Encoding":
- pass
- obj2 = self.pop(
- "arraytype", "dicttype", "stringtype", "proceduretype", "fonttype"
- )
- tp = obj2.type
- if tp in ("arraytype", "proceduretype"):
- self.push(obj2.value[obj1.value])
- elif tp in ("dicttype", "fonttype"):
- self.push(obj2.value[obj1.value])
- elif tp == "stringtype":
- self.push(ps_integer(ord(obj2.value[obj1.value])))
- else:
- assert False, "shouldn't get here"
-
- def ps_getinterval(self):
- obj1 = self.pop("integertype")
- obj2 = self.pop("integertype")
- obj3 = self.pop("arraytype", "stringtype")
- tp = obj3.type
- if tp == "arraytype":
- self.push(ps_array(obj3.value[obj2.value : obj2.value + obj1.value]))
- elif tp == "stringtype":
- self.push(ps_string(obj3.value[obj2.value : obj2.value + obj1.value]))
-
- def ps_putinterval(self):
- obj1 = self.pop("arraytype", "stringtype")
- obj2 = self.pop("integertype")
- obj3 = self.pop("arraytype", "stringtype")
- tp = obj3.type
- if tp == "arraytype":
- obj3.value[obj2.value : obj2.value + len(obj1.value)] = obj1.value
- elif tp == "stringtype":
- newstr = obj3.value[: obj2.value]
- newstr = newstr + obj1.value
- newstr = newstr + obj3.value[obj2.value + len(obj1.value) :]
- obj3.value = newstr
-
- def ps_cvn(self):
- self.push(ps_name(self.pop("stringtype").value))
-
- def ps_index(self):
- n = self.pop("integertype").value
- if n < 0:
- raise RuntimeError("index may not be negative")
- self.push(self.stack[-1 - n])
-
- def ps_for(self):
- proc = self.pop("proceduretype")
- limit = self.pop("integertype", "realtype").value
- increment = self.pop("integertype", "realtype").value
- i = self.pop("integertype", "realtype").value
- while 1:
- if increment > 0:
- if i > limit:
- break
- else:
- if i < limit:
- break
- if type(i) == type(0.0):
- self.push(ps_real(i))
- else:
- self.push(ps_integer(i))
- self.call_procedure(proc)
- i = i + increment
-
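-
-    # Example (standard PostScript `for` semantics, for illustration): executing
-    # "0 2 6 {} for" pushes 0, 2, 4 and 6 in turn, calling the (here empty)
-    # procedure after each push.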
- def ps_forall(self):
- proc = self.pop("proceduretype")
- obj = self.pop("arraytype", "stringtype", "dicttype")
- tp = obj.type
- if tp == "arraytype":
- for item in obj.value:
- self.push(item)
- self.call_procedure(proc)
- elif tp == "stringtype":
- for item in obj.value:
- self.push(ps_integer(ord(item)))
- self.call_procedure(proc)
- elif tp == "dicttype":
- for key, value in obj.value.items():
- self.push(ps_name(key))
- self.push(value)
- self.call_procedure(proc)
-
- def ps_definefont(self):
- font = self.pop("dicttype")
- name = self.pop()
- font = ps_font(font.value)
- self.dictstack[0]["FontDirectory"].value[name.value] = font
- self.push(font)
-
- def ps_findfont(self):
- name = self.pop()
- font = self.dictstack[0]["FontDirectory"].value[name.value]
- self.push(font)
-
- def ps_pop(self):
- self.pop()
-
- def ps_dict(self):
- self.pop("integertype")
- self.push(ps_dict({}))
-
- def ps_begin(self):
- self.dictstack.append(self.pop("dicttype").value)
-
- def ps_end(self):
- if len(self.dictstack) > 2:
- del self.dictstack[-1]
- else:
- raise RuntimeError("dictstack underflow")
-
-
-notdef = ".notdef"
-from fontTools.encodings.StandardEncoding import StandardEncoding
-
-ps_StandardEncoding = list(map(ps_name, StandardEncoding))
diff --git a/spaces/declare-lab/tango/diffusers/docker/diffusers-flax-tpu/Dockerfile b/spaces/declare-lab/tango/diffusers/docker/diffusers-flax-tpu/Dockerfile
deleted file mode 100644
index 2517da586d74b43c4c94a0eca4651f047345ec4d..0000000000000000000000000000000000000000
--- a/spaces/declare-lab/tango/diffusers/docker/diffusers-flax-tpu/Dockerfile
+++ /dev/null
@@ -1,46 +0,0 @@
-FROM ubuntu:20.04
-LABEL maintainer="Hugging Face"
-LABEL repository="diffusers"
-
-ENV DEBIAN_FRONTEND=noninteractive
-
-RUN apt update && \
- apt install -y bash \
- build-essential \
- git \
- git-lfs \
- curl \
- ca-certificates \
- libsndfile1-dev \
- python3.8 \
- python3-pip \
- python3.8-venv && \
- rm -rf /var/lib/apt/lists
-
-# make sure to use venv
-RUN python3 -m venv /opt/venv
-ENV PATH="/opt/venv/bin:$PATH"
-
-# pre-install the heavy dependencies (these can later be overridden by the deps from setup.py)
-# follow the instructions here: https://cloud.google.com/tpu/docs/run-in-container#train_a_jax_model_in_a_docker_container
-RUN python3 -m pip install --no-cache-dir --upgrade pip && \
- python3 -m pip install --no-cache-dir \
- "jax[tpu]>=0.2.16,!=0.3.2" \
- -f https://storage.googleapis.com/jax-releases/libtpu_releases.html && \
- python3 -m pip install --upgrade --no-cache-dir \
- clu \
- "flax>=0.4.1" \
- "jaxlib>=0.1.65" && \
- python3 -m pip install --no-cache-dir \
- accelerate \
- datasets \
- hf-doc-builder \
- huggingface-hub \
- Jinja2 \
- librosa \
- numpy \
- scipy \
- tensorboard \
- transformers
-
-CMD ["/bin/bash"]
\ No newline at end of file
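-
-As a rough sketch of how an image built from a Dockerfile like the one above would typically be built and started (this assumes Docker is installed and the file is saved as Dockerfile in the current directory; the image tag is a placeholder):
-
-import subprocess
-
-# Build the image from the Dockerfile in the current directory.
-subprocess.run(["docker", "build", "-t", "diffusers-flax-tpu", "."], check=True)
-
-# Start an interactive shell in the container (the image's CMD).
-subprocess.run(["docker", "run", "--rm", "-it", "diffusers-flax-tpu"], check=True)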
diff --git a/spaces/diacanFperku/AutoGPT/Jazbaa VERIFIED Full Movie With English Subtitles 720p.md b/spaces/diacanFperku/AutoGPT/Jazbaa VERIFIED Full Movie With English Subtitles 720p.md
deleted file mode 100644
index 69a4f8416fc6ecb8c43ce2f05549ce9204895bd4..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Jazbaa VERIFIED Full Movie With English Subtitles 720p.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-
-
-
diff --git a/spaces/fatiXbelha/sd/AetherSX2 A Powerful PS2 Emulator for Android that Lets You Download GTA SA Aether 2.md b/spaces/fatiXbelha/sd/AetherSX2 A Powerful PS2 Emulator for Android that Lets You Download GTA SA Aether 2.md
deleted file mode 100644
index 97cc0626b080ecfc0631885a48f9ebcc9aa38a22..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/AetherSX2 A Powerful PS2 Emulator for Android that Lets You Download GTA SA Aether 2.md
+++ /dev/null
@@ -1,140 +0,0 @@
-
-
How to Download and Play GTA SA Aether 2
-
Grand Theft Auto: San Andreas is one of the most popular games ever made, with millions of fans around the world. But did you know that you can enhance your gaming experience with a mod called GTA SA Aether 2? This mod adds a whole new dimension to the game, where you can explore a sky realm full of floating islands, mythical creatures, and dungeons.
-
In this article, we will show you what GTA SA Aether 2 is, how to download it, and how to play it. You will discover a new world of adventure that will make you fall in love with San Andreas all over again.
GTA SA Aether 2 is a mod that combines the worlds of Grand Theft Auto: San Andreas and Minecraft. A mod is a modification that changes aspects of the original game, such as adding new features, graphics, or gameplay elements. This one adds a new dimension called the Aether: a sky realm full of floating islands, mythical creatures, and dungeons.
-
Features of GTA SA Aether 2
-
GTA SA Aether 2 has many features that make it unique and fun to play. Some of the main features are:
-
-
New mobs: The Aether is inhabited by various creatures, some friendly and some hostile. You can encounter flying pigs, sheepuffs, aerwhales, zephyrs, cockatrices, moas, and more.
-
New blocks: The Aether has different types of blocks that have different properties and uses. You can find skyroot, holystone, icestone, quicksoil, aerclouds, ambrosium, zanite, gravitite, and more.
-
New items: The Aether has different types of items that can help you in your adventure. You can craft skyroot tools, zanite armor, gravitite gloves, golden parachutes, cloud staffs, dart shooters, and more.
-
New dungeons: The Aether has different types of dungeons that are filled with traps, puzzles, enemies, and loot. You can explore bronze dungeons, silver dungeons, gold dungeons, and slider's labyrinth.
-
New bosses: The Aether has different types of bosses that are challenging and rewarding. You can fight the valkyrie queen, the sun spirit, the slider boss, and the necromancer.
-
New biomes: The Aether has different types of biomes that have different landscapes and atmospheres. You can find highlands, coldlands, stormlands, icelands, and lostlands.
-
-
Requirements for GTA SA Aether 2
-
GTA SA Aether 2 needs certain hardware and software to run properly. Here are the minimum and recommended requirements for playing the mod:
-
-
Minimum Requirements
Recommended Requirements
-
CPU: Intel Pentium 4 or AMD Athlon XP Processor
CPU: Intel Core 2 Duo or AMD Athlon X2 Processor
-
RAM: 1 GB
RAM: 2 GB or more
-
GPU: 128 MB DirectX 9.0c compatible video card (NVIDIA GeForce 6 Series or better)
GPU: 256 MB DirectX 9.0c compatible video card (NVIDIA GeForce 7 Series or better)
-
OS: Windows XP or higher
OS: Windows Vista or higher
-
HDD: 5 GB of free space
HDD: 10 GB of free space
-
Software: GTA San Andreas (version 1.0), Minecraft (version 1.12.2), Minecraft Forge (version 14.23.5.2854), Aether II (version 0.3.0), Gilded Games Util (version 1.12.2-1.9.4)
Software: GTA San Andreas (version 1.0), Minecraft (version 1.12.2), Minecraft Forge (version 14.23.5.2854), Aether II (version 0.3.0), Gilded Games Util (version 1.12.2-1.9.4)
-
-
How to Download GTA SA Aether 2?
-
To download and install GTA SA Aether 2 on your computer, you need to follow these steps:
-
Step 1: Download and Install Minecraft Forge
-
Minecraft Forge is a tool that allows you to run mods on Minecraft. You need to download and install it for your Minecraft version before you can play GTA SA Aether 2.
-
To download Minecraft Forge, go to https://files.minecraftforge.net/ and select the version that matches your Minecraft version (1.12.2). Then click on the installer link and save the file on your computer.
-
To install Minecraft Forge, double-click on the file you downloaded and follow the instructions on the screen. Make sure you select "Install client" and choose the correct folder where your Minecraft is installed.
After you install Minecraft Forge, you should see a new profile called "Forge" in your Minecraft launcher. You can use this profile to play GTA SA Aether 2 later.
-
Step 2: Download the Aether II and Gilded Games Util Files
-
The Aether II and Gilded Games Util are the files that contain the mod itself and its dependencies. You need to download them from the official website of the mod developers.
-
-
To download the Aether II and Gilded Games Util files, go to https://www.aetherii.com/downloads and click on the "Download for 1.12.2" button. Then save the files on your computer.
-
The files you need are:
-
-
Aether II - 1.12.2-0.3.0.jar
-
Gilded Games Util - 1.12.2-1.9.4.jar
-
-
Step 3: Put the Files into the Mods Folder of Your Minecraft Installation
-
Now that you have downloaded the files, you need to put them into the mods folder of your Minecraft installation. This is where Minecraft Forge will load them when you launch the game.
-
To find the mods folder, go to your Minecraft installation folder, which is usually located at:
-C:\Users\YourName\AppData\Roaming\.minecraft
-
If you don't see the AppData folder, you may need to enable hidden files and folders in your Windows settings.
-
Once you are in the .minecraft folder, look for a folder called "mods". If you don't see it, you can create it yourself by right-clicking and selecting "New > Folder". Then name it "mods".
-
Once you have the mods folder, open it and copy and paste the files you downloaded into it. You should see something like this:
-
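This copy step can also be scripted. Below is a minimal Python sketch of it; it assumes the two jars sit in the folder you run it from and that Minecraft is installed in the default location shown above:

import os
import shutil

# Default Minecraft folder on Windows; %APPDATA% expands per user.
mods_dir = os.path.expandvars(r"%APPDATA%\.minecraft\mods")
os.makedirs(mods_dir, exist_ok=True)  # create the mods folder if it is missing
for jar in ["Aether II - 1.12.2-0.3.0.jar", "Gilded Games Util - 1.12.2-1.9.4.jar"]:
    shutil.copy(jar, mods_dir)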
-
Step 4: Launch Minecraft with Forge Profile and Enjoy the Mod
-
You are almost done! Now you just need to launch Minecraft with the Forge profile and enjoy the mod.
-
To launch Minecraft with the Forge profile, open your Minecraft launcher and select the "Forge" profile from the dropdown menu. Then click on "Play".
-
You should see a loading screen with some information about Forge and the mods you have installed. Wait for it to finish loading and then you will be in the main menu of Minecraft.
-
To check if GTA SA Aether 2 is working, click on "Mods" and look for "Aether II" in the list of mods. If you see it, congratulations! You have successfully installed GTA SA Aether 2.
-
-
How to Play GTA SA Aether 2?
-
Now that you have installed GTA SA Aether 2, you are ready to play it. But how do you enter the Aether dimension and explore its features? Here are some steps to guide you:
-
Icelands: This is an icy biome in the Aether. It has frozen grass, blue sky, white clouds, and icicles. It is home to rare mobs such as ice dragons, frost giants, and yetis.
-
Lostlands: This is a mysterious biome in the Aether. It has dark grass, black sky, red clouds, and blood pools. It is home to evil mobs such as dark angels, blood knights, and vampires.
-
-
As you explore the biomes, you will encounter different mobs, items, and dungeons. Some of the mobs are friendly and can be tamed or ridden, such as flying pigs, sheepuffs, moas, and aerwhales. Some of the mobs are hostile and will attack you on sight, such as zephyrs, cockatrices, tempests, and stormbringers. Some of the mobs are neutral and will only attack you if provoked, such as aerbunnies, phyg, and kirrid.
-
Some of the items are useful and can help you in your adventure, such as golden parachutes, cloud staffs, dart shooters, and gravitite gloves. Some of the items are rare and can only be obtained by defeating bosses or completing dungeons, such as valkyrie lances, sun altars, slider keys, and necromancer staffs.
-
Some of the dungeons are easy and can be completed by anyone, such as bronze dungeons. Some of the dungeons are hard and require skill and strategy, such as silver dungeons. Some of the dungeons are epic and require teamwork and preparation, such as gold dungeons and slider's labyrinth.
-
Step 5: Fight Bosses and Earn Rewards
-
The Aether is not only a place of exploration and discovery, but also a place of challenge and reward. There are four bosses in the Aether that you can fight and earn rewards from. They are:
-
-
The Valkyrie Queen: She is the ruler of the valkyries, a race of warrior women who guard the silver dungeons. She is armed with a valkyrie lance and can fly and teleport. She drops a silver key when defeated, which can be used to open a treasure chest in the silver dungeon.
-
The Sun Spirit: He is the guardian of the sun altar, a sacred device that can control the weather in the Aether. He is made of fire and can shoot fireballs and summon zephyrs. He drops a sun altar when defeated, which can be used to change the weather in the Aether.
-
The Slider Boss: He is the master of the slider's labyrinth, a massive maze that contains many traps, puzzles, enemies, and loot. He is a huge stone creature that can slide on walls and ceilings and crush anything in his way. He drops a dungeon key when defeated, which can be used to open a treasure door in the slider's labyrinth.
-
The Necromancer: He is the lord of the necromancer's tower, a dark and spooky tower that connects San Andreas with the Aether. He is a powerful mage that can summon skeletons, zombies, ghosts, and other undead creatures. He drops a necromancer staff when defeated, which can be used to summon undead minions.
-
-
These bosses are not easy to defeat. You need to prepare well before you challenge them. You need to have good weapons, armor, food, potions, and other items that can help you in combat. You also need to have a good strategy and know the weaknesses and strengths of each boss. You can find more tips and guides on how to fight the bosses online or by asking other players.
-
If you manage to defeat the bosses, you will be rewarded with some of the best items in the mod. You can use these items to enhance your gameplay, or to trade with other players. You can also brag about your achievements and show off your trophies.
-
Conclusion
-
GTA SA Aether 2 is a mod that adds a new dimension to Grand Theft Auto: San Andreas, where you can explore a sky realm full of floating islands, mythical creatures, and dungeons. It is a mod that combines the worlds of GTA and Minecraft, and offers a lot of features, challenges, and rewards.
-
To play GTA SA Aether 2, you need to download and install some files and software, such as Minecraft Forge, Aether II, and Gilded Games Util. Then you need to build an Aether portal with stone and iron ingot, and enter it to travel to the Aether dimension. There you can gather skyroot logs and make basic tools, explore the Aether biomes, mobs, items, and dungeons, and fight the bosses and earn rewards.
-
GTA SA Aether 2 is a mod that is worth playing if you are a fan of GTA or Minecraft, or both. It is a mod that will give you a new perspective on San Andreas, and a new adventure in the Aether. It is a mod that will make you fall in love with San Andreas all over again.
-
FAQs
-
Here are some frequently asked questions about GTA SA Aether 2:
-
-
Q: Is GTA SA Aether 2 compatible with other mods?
-
A: GTA SA Aether 2 is compatible with most mods that do not change the core gameplay or mechanics of GTA or Minecraft. However, some mods may cause conflicts or crashes with GTA SA Aether 2. It is recommended to test the compatibility of the mods before playing them together.
-
Q: Is GTA SA Aether 2 multiplayer?
-
A: GTA SA Aether 2 is multiplayer, meaning that you can play it with other players online or on a local network. You can join or host a server that runs GTA SA Aether 2, and explore the Aether with your friends or strangers. You can also chat, trade, cooperate, or compete with other players in the Aether.
-
Q: Is GTA SA Aether 2 updated?
-
A: GTA SA Aether 2 is updated regularly by the mod developers, who are constantly adding new features, fixing bugs, and improving performance. You can check the official website of the mod for the latest updates and news. You can also follow the mod developers on social media or join their Discord server for more information.
-
Q: Is GTA SA Aether 2 safe?
-
A: GTA SA Aether 2 is safe, meaning that it does not contain any viruses, malware, or spyware that can harm your computer or your privacy. However, you should always download the mod files from trusted sources, such as the official website of the mod developers or reputable mod websites. You should also scan the files with an antivirus program before installing them.
-
Q: Is GTA SA Aether 2 fun?
-
A: Yes. The mod offers plenty of variety, creativity, challenge, and reward, and it appeals to all kinds of players, whether casual or hardcore gamers.
-
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Brotato The Ultimate Potato Shooter Roguelite for iPhone.md b/spaces/fatiXbelha/sd/Brotato The Ultimate Potato Shooter Roguelite for iPhone.md
deleted file mode 100644
index 3fd7dda198ab860bb88cce0661d4911b62be3e8a..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Brotato The Ultimate Potato Shooter Roguelite for iPhone.md
+++ /dev/null
@@ -1,98 +0,0 @@
-
-
Brotato: A Fun and Challenging Shooter Roguelite Game for iPhone
-
If you are looking for a new and exciting game to play on your iPhone, you might want to check out Brotato. Brotato is a top-down arena shooter roguelite where you play as a potato wielding up to 6 weapons at a time to fight off hordes of aliens. You can choose from a variety of traits and items to create unique builds and survive until help arrives. In this article, we will tell you more about what Brotato is, how to download it on your iPhone, what are the benefits of playing it, and what are some alternatives to it.
Brotato is a game developed by Erabit Studios, a Singapore-based indie game studio. It was released in June 2023 for iOS, Android, and Steam platforms. It has received overwhelmingly positive reviews from players and critics alike, who praised its fast-paced action, colorful graphics, humorous tone, and addictive gameplay.
-
The story behind Brotato
-
The game is set in a distant future where potatoes have evolved into intelligent beings and have colonized other planets. However, one day, an alien invasion threatens their peaceful existence. The sole survivor of the attack is Brotato, the only potato capable of handling 6 weapons at the same time. Waiting to be rescued by his mates, Brotato must survive in this hostile environment and fend off the alien menace.
-
The gameplay of Brotato
-
Brotato is a shooter roguelite, which means that each run is different and randomized. You start with a basic weapon and a random trait that affects your stats and abilities. You can find more weapons and items as you progress through the waves of enemies, but you also face more challenges and dangers. You can also unlock more characters with different traits that can change your playstyle. The game has an auto-firing option by default, but you can also manually aim if you prefer. The game has fast runs that last under 30 minutes, so you can enjoy a quick session anytime.
-
The features of Brotato
-
Some of the features that make Brotato stand out are:
-
-
Dozens of characters available to customize your runs (one-handed, crazy, lucky, mage, and many more)
-
Hundreds of items and weapons to choose from (flamethrowers, SMGs, rocket launchers, or sticks and stones)
-
Survive waves lasting 20 to 90 seconds each and kill off as many aliens as you can during that time
-
Collect materials to gain experience and get items from the shop between waves of enemies
-
Supports Game Center to challenge friends and check leaderboards and achievements
-
Offers in-app purchases to get more spuds (the in-game currency) or unlock premium features
-
Has a privacy policy that explains how the developer collects and shares your data
-
-
How to download Brotato on iPhone?
-
If you want to play Brotato on your iPhone, you have two options: download it from the App Store or from the official website.
-
Download from the App Store
-
The easiest way to get Brotato on your iPhone is to download it from the App Store. Here are the steps to do so:
-
- Open the App Store app on your iPhone and tap on the search icon at the bottom right corner
- Type "Brotato" in the search bar and tap on the first result that appears
- Tap on the "Get" button to start downloading the game. You might need to enter your Apple ID and password or use Touch ID or Face ID to confirm the download
- Wait for the download to finish and then tap on the "Open" button to launch the game
- Enjoy playing Brotato on your iPhone!
Download from the official website
-
Another way to get Brotato on your iPhone is to download it from the official website of Erabit Studios. Here are the steps to do so:
- Go to https://erabit.com/brotato/ on your iPhone's browser and scroll down to the bottom of the page
- Tap on the "Download for iOS" button and you will be redirected to a page with a QR code
- Scan the QR code with your iPhone's camera or a QR code scanner app and you will be taken to a page where you can download the game
- Tap on the "Install" button and then tap on "Allow" when prompted to install a profile on your device
- Go to Settings > General > Profile & Device Management and tap on the profile named "Erabit Studios"
- Tap on "Trust Erabit Studios" and then tap on "Trust" again to confirm
- Go back to your home screen and you will see the Brotato icon. Tap on it to launch the game
- Enjoy playing Brotato on your iPhone!
What are the benefits of playing Brotato?
-
Brotato is not only a fun and entertaining game, but also a beneficial one. Here are some of the benefits of playing Brotato:
-
Improve your reflexes and strategy skills
-
Brotato is a fast-paced game that requires quick thinking and reaction. You have to dodge bullets, avoid traps, and shoot enemies while managing your weapons, items, and health. You also have to plan ahead and choose the best traits and items for your build. Playing Brotato can help you improve your reflexes and strategy skills, which can be useful in other aspects of life.
-
Enjoy a variety of characters, items, and weapons
-
Brotato is a game that offers a lot of variety and replay value. You can play as different characters with different traits that affect your gameplay. You can also find hundreds of items and weapons that can change your abilities and performance. You can mix and match different combinations to create unique builds and experiences. Playing Brotato can help you enjoy a variety of characters, items, and weapons, which can keep you entertained for hours.
-
Compete with your friends and other players
-
Brotato is a game that supports Game Center, which means you can challenge your friends and other players online. You can compare your scores, achievements, and rankings with others and see who is the best Brotato player. You can also chat with other players and share tips and tricks. Playing Brotato can help you compete with your friends and other players, which can make you more motivated and social.
-
What are some alternatives to Brotato?
-
If you like Brotato, you might also like some of these alternatives:
-
Super Kill-BOI 9000
-
Super Kill-BOI 9000 is another top-down arena shooter roguelite where you play as a cyborg killing machine who has to survive waves of enemies in a futuristic dystopia. You can upgrade your weapons, abilities, and stats as you progress through the levels. The game has retro-style graphics, synthwave music, and dark humor.
-
Nordic Ashes
-
Nordic Ashes is a top-down action-adventure roguelite where you play as a Viking warrior who has to explore a procedurally generated world full of Norse mythology. You can collect runes, artifacts, and weapons that grant you different powers and effects. The game has pixel-art graphics, atmospheric soundtracks, and epic boss battles.
-
Stickman's Arena
-
Stickman's Arena is a top-down multiplayer shooter where you play as a stickman who has to fight against other stickmen in various arenas. You can customize your stickman with different skins, hats, outfits, and weapons. The game has simple graphics, catchy music, and chaotic gameplay.
-
Conclusion
-
Brotato is a fun and challenging shooter roguelite game for iPhone that you should definitely try out. It has a captivating story, addictive gameplay, colorful graphics, humorous tone, and tons of variety. You can download it from the App Store or from the official website of Erabit Studios. You can also enjoy the benefits of playing Brotato, such as improving your reflexes and strategy skills, enjoying a variety of characters, items, and weapons, and competing with your friends and other players. If you are looking for some alternatives to Brotato, you can try out Super Kill-BOI 9000, Nordic Ashes, or Stickman's Arena. We hope you have fun playing Brotato and other similar games on your iPhone.
-
FAQs
-
Here are some frequently asked questions about Brotato and its answers:
-
Q: How much does Brotato cost?
-
A: Brotato is a free-to-play game, which means you can download and play it without paying anything. However, the game offers in-app purchases that allow you to get more spuds (the in-game currency) or unlock premium features such as removing ads, getting more characters, or getting more materials.
-
Q: Is Brotato compatible with my iPhone?
-
A: Brotato requires iOS 10.0 or later and is compatible with iPhone 5S or newer models. You can check the compatibility of your device by going to the App Store page of Brotato and scrolling down to the "Compatibility" section.
-
Q: Is Brotato safe to download and play?
-
A: Yes, Brotato is safe to download and play. The game has been tested and verified by Apple and does not contain any viruses or malware. The game also has a privacy policy that explains how the developer collects and shares your data. You can read the privacy policy by going to the App Store page of Brotato and tapping on the "Privacy Policy" link.
-
Q: How can I contact the developer of Brotato?
-
A: If you have any questions, feedback, or issues regarding Brotato, you can contact the developer of Brotato by emailing them at support@erabit.com or by visiting their website at https://erabit.com/contact/.
-
Q: How can I learn more about Brotato?
-
A: If you want to learn more about Brotato, you can visit the official website of Erabit Studios at https://erabit.com/brotato/, where you can find more information, screenshots, videos, and news about the game. You can also follow them on social media platforms such as Facebook, Twitter, Instagram, and YouTube.
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Build Manage and Customize Your City with SimCity APK Mod.md b/spaces/fatiXbelha/sd/Build Manage and Customize Your City with SimCity APK Mod.md
deleted file mode 100644
index 2d8242b290a0b8fe78ff1aa2ef2544955bccfc26..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Build Manage and Customize Your City with SimCity APK Mod.md
+++ /dev/null
@@ -1,117 +0,0 @@
-
-
Download SimCity APK Mod: How to Build Your Dream City with Unlimited Resources
-
Do you love playing city-building games? Do you want to create your own metropolis with unlimited possibilities? If yes, then you should try SimCity, one of the most popular and addictive games in this genre. SimCity is a simulation game where you can design, build, and manage your own city. You can choose from various types of buildings, roads, parks, landmarks, and services to make your city unique and attractive. You can also deal with various challenges such as traffic, pollution, crime, disasters, and more.
However, building your dream city is not easy. You need to have enough resources such as money, materials, and energy to construct and upgrade your buildings. You also need to balance your budget and keep your citizens happy. Sometimes, you may feel frustrated or bored by the slow progress or the limited options in the game. That's why many players look for ways to hack or mod the game to get unlimited resources and unlock all the features.
-
In this article, we will show you how to download SimCity APK mod, a modified version of the game that gives you unlimited money, unlocked all buildings and items, and no ads. We will also tell you the pros and cons of using this mod, and answer some frequently asked questions about it. So, if you are ready to build your dream city with ease, read on!
-
What is SimCity APK Mod?
-
SimCity APK mod is a modified version of the original SimCity game that you can download and install on your Android device. It is not an official app from the game developer, but a third-party app that has been modified by some hackers or modders. The main purpose of this mod is to give you unlimited resources and unlock all the features in the game. This way, you can build your city without any restrictions or limitations.
-
Features of SimCity APK Mod
-
Here are some of the features that you can enjoy when you download SimCity APK mod:
-
Unlimited Money
-
Money is the most important resource in SimCity. You need money to buy buildings, roads, services, and other items. You also need money to upgrade your buildings and improve your city's performance. However, earning money in the game is not easy. You have to collect taxes from your citizens, complete quests, sell items, and more. Sometimes, you may run out of money or spend more than you earn.
-
With SimCity APK mod, you don't have to worry about money anymore. You will have unlimited money in your account that you can use for anything you want. You can buy any building or item that you like, upgrade them to the max level, and expand your city as much as you want. You don't have to wait for hours or days to earn enough money for your next project.
-
Unlocked All Buildings and Items
-
Another feature of SimCity APK mod is that it unlocks all the buildings and items in the game. Normally, you have to unlock them by leveling up, completing achievements, or spending real money. Some of the buildings and items are very expensive or hard to get. For example, some of the landmarks such as Eiffel Tower, Statue of Liberty, or Big Ben cost thousands of SimCash (the premium currency in the game) or require a lot of materials and energy to build. Some of the items such as specializations, disasters, or regions are only available for a limited time or in certain events.
-
With SimCity APK mod, you can access all the buildings and items in the game without any restrictions. You can build any landmark or specialization that you want, unleash any disaster or scenario that you like, and explore any region or map that you prefer. You can also customize your city with various themes, styles, and decorations. You can make your city look like Paris, Tokyo, London, or any other place in the world.
-
-
No Ads
-
The last feature of SimCity APK mod is that it removes all the ads in the game. Ads are annoying and distracting, especially when you are trying to enjoy your game. They can also slow down your device or consume your data. Sometimes, they may even contain viruses or malware that can harm your device or steal your information.
-
With SimCity APK mod, you can play the game without any ads. You don't have to watch any videos or click on any banners to get extra rewards or bonuses. You don't have to worry about any pop-ups or redirects that may interrupt your game or affect your device. You can have a smooth and uninterrupted gaming experience.
-
How to Download and Install SimCity APK Mod?
-
Now that you know the features of SimCity APK mod, you may be wondering how to download and install it on your device. Here are the steps that you need to follow:
-
Step 1: Enable Unknown Sources
-
Since SimCity APK mod is not an official app from the Google Play Store, you need to enable unknown sources on your device to install it. This is a security setting that prevents unauthorized apps from being installed on your device. To enable unknown sources, go to your device's settings, then security, then unknown sources, and toggle it on.
-
Step 2: Download the APK File
-
Next, you need to download the APK file of SimCity APK mod from a reliable source. There are many websites that offer this mod, but not all of them are safe or trustworthy. Some of them may contain fake or outdated files that may not work or may harm your device. To avoid this, you should do some research and read some reviews before downloading the file.
-
One of the websites that we recommend is [SimCity APK Mod], which provides the latest and working version of the mod. You can download the file from this website by clicking on the download button and following the instructions. The file size is about 100 MB, so make sure you have enough space on your device and a stable internet connection.
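
If the download page publishes a checksum for the file, it is worth verifying it before installing. Here is a minimal Python sketch; the file name and the expected digest below are placeholders, not real values for this mod:

import hashlib

def sha256_of(path, chunk_size=1 << 20):
    # Stream the file in chunks so a ~100 MB APK never has to fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected = "paste-the-published-checksum-here"  # hypothetical value
if sha256_of("simcity-mod.apk") == expected:  # hypothetical file name
    print("Checksum OK, safe to proceed to Step 3")
else:
    print("Checksum mismatch, do not install the file")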
-
Step 3: Install the APK File
-
After downloading the file, you need to install it on your device. To do this, locate the file in your device's file manager and tap on it. You may see a warning message that says "This type of file can harm your device". Don't worry, this is just a standard message for unknown sources. Just tap on "OK" and proceed with the installation.
-
The installation process may take a few minutes, depending on your device's performance. Wait until it is finished and don't interrupt it.
-
Step 4: Launch the Game and Enjoy
-
Once the installation is done, you can launch the game and enjoy it. You will see a new icon on your home screen or app drawer that says "SimCity Mod". Tap on it and start playing. You will notice that you have unlimited money, unlocked all buildings and items, and no ads in the game. You can build your dream city with ease and have fun.
-
Pros and Cons of SimCity APK Mod
-
SimCity APK mod may sound like a perfect solution for those who want to play SimCity without any limitations or frustrations. However, like anything else in life, it has its pros and cons. Here are some of them:
-
Pros
-
Some of the advantages of using SimCity APK mod are:
-
More Fun and Creativity
-
With SimCity APK mod, you can have more fun and creativity in building your city. You can experiment with different types of buildings, roads, parks, landmarks, and services. You can also try different themes, styles, and decorations for your city. You can make your city look realistic or fantasy-like, modern or ancient, colorful or monochrome. The possibilities are endless.
-
No Need to Spend Real Money
-
Another advantage of using SimCity APK mod is that you don't need to spend real money to enjoy the game. You don't have to buy SimCash or Simoleons with your hard-earned cash. You don't have to watch ads or complete surveys to get free rewards or bonuses. You don't have to wait for hours or days to get enough resources for your next project. You can have everything you want for free.
-
No Annoying Ads
-
A third advantage of using SimCity APK mod is that you don't have to deal with annoying ads in the game. Ads can ruin your gaming experience and waste your time and data. They can also expose you to viruses or malware that can harm your device or steal your information. With SimCity APK mod, you can play the game without any ads. You can have a smooth and uninterrupted gaming experience.
-
Cons
-
Some of the disadvantages of using SimCity APK mod are:
-
Risk of Viruses and Malware
-
One of the risks of using SimCity APK mod is that you may download a file that contains viruses or malware. Since the mod is not an official app from the game developer, it may not be safe or trustworthy. Some of the websites that offer the mod may be malicious or fraudulent. They may infect your device with harmful software that can damage your device or steal your information. To avoid this, you should always download the mod from a reliable source and scan it with an antivirus app before installing it.
-
Possible Ban from the Official Game
-
Another risk of using SimCity APK mod is that you may get banned from the official game. The game developer may detect that you are using a modified version of the game and suspend or terminate your account. This means that you will lose all your progress and achievements in the game. You will also not be able to play online or connect with other players. To avoid this, you should always use the mod at your own risk and discretion.
-
Less Challenge and Satisfaction
-
A third risk of using SimCity APK mod is that you may lose the challenge and satisfaction of playing the game. The game is designed to be challenging and rewarding, where you have to work hard and smart to build your city. You have to plan, strategize, and manage your resources and budget. You have to deal with various problems and crises that may arise in your city. You have to earn your rewards and achievements by completing quests and goals.
-
With SimCity APK mod, you may lose the sense of accomplishment and enjoyment that comes from playing the game. You may feel bored or lazy by having everything handed to you on a silver platter. You may not appreciate the value or beauty of your city because you didn't work for it. You may not learn anything new or improve your skills because you didn't face any challenges or difficulties.
-
Conclusion
-
SimCity APK mod is a modified version of the original SimCity game that gives you unlimited resources and unlocks all the features in the game. It can be a great way to have more fun and creativity in building your city without any limitations or frustrations. However, it also has some risks and drawbacks that you should be aware of before using it.
-
If you decide to download SimCity APK mod, you should always do it from a reliable source and scan it with an antivirus app before installing it. You should also use it at your own risk and discretion, as you may get banned from the official game or lose the challenge and satisfaction of playing the game.
-
We hope this article has helped you understand what SimCity APK mod is, how to download and install it, and what are its pros and cons. If you have any questions or comments, feel free to leave them below.
-
FAQs
-
Here are some frequently asked questions about SimCity APK mod:
-
Is SimCity APK Mod Safe?
-
SimCity APK mod is not an official app from the game developer, but a third-party app that has been modified by some hackers or modders. It may not be safe or trustworthy, as it may contain viruses or malware that can harm your device or steal your information. To ensure your safety, you should always download the mod from a reliable source and scan it with an antivirus app before installing it.
-
Is SimCity APK Mod Legal?
-
SimCity APK mod is not legal, as it violates the terms and conditions of the game developer. It also infringes on their intellectual property rights and copyrights. By using the mod, you are breaking the law and may face legal consequences. To avoid this, you should always play the game with the original version and respect the game developer's rights and policies.
-
Does SimCity APK Mod Work Offline?
-
SimCity APK mod does not work offline, as it requires an internet connection to run. The game needs to connect to the game server and sync your data and progress. If you play the game offline, you may encounter errors or glitches, or lose your data or progress. To ensure a smooth and stable gaming experience, you should always play the game online with a good internet connection.
-
Can I Play SimCity APK Mod with Friends?
-
SimCity APK mod does not support multiplayer mode, as it is a modified version of the game that is not compatible with the official game. You cannot play the game with your friends or other players online, as you may get banned from the game server or face other issues. To enjoy the multiplayer mode, you should play the game with the original version and connect with your friends or other players through Facebook, Google Play, or Game Center.
-
Can I Update SimCity APK Mod?
-
SimCity APK mod does not support automatic updates, as it is a modified version of the game that is not linked to the Google Play Store. You cannot update the game through the app or the store, as you may lose the mod features or face other problems. To update the game, you have to download and install the latest version of the mod from a reliable source. However, you should be careful and backup your data before updating, as you may lose your data or progress in the process.
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Download Age of Fantasy Mod APK and Become the Master of Fantasy.md b/spaces/fatiXbelha/sd/Download Age of Fantasy Mod APK and Become the Master of Fantasy.md
deleted file mode 100644
index 2c835d693f1a2c9ce23b016103270eaac4ffe96b..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Download Age of Fantasy Mod APK and Become the Master of Fantasy.md
+++ /dev/null
@@ -1,88 +0,0 @@
-
-
Download Age of Fantasy Mod APK: A Strategy Game with Epic Battles
-
If you are a fan of strategy games, you might want to check out Age of Fantasy, a turn-based game that lets you command different races and units in epic battles. And if you want to enjoy the game with more features and resources, you might want to download Age of Fantasy Mod APK, a modified version that gives you unlimited money, gems, and more. In this article, we will tell you what Age of Fantasy is, why you should download its mod APK, and how to do it.
-
What is Age of Fantasy?
-
Age of Fantasy is a strategy game developed by Zero Touch group, a team of indie developers who love pixel art and retro games. The game is inspired by classic games like Age of Empires, Warcraft, and Civilization, but with a fantasy twist. You can choose from different races, such as humans, elves, orcs, undead, dwarves, and more, each with their own unique units and abilities. You can also create your own maps and scenarios, or play online with other players in multiplayer mode. The game has a pixel art style and a retro soundtrack that will make you feel nostalgic.
Age of Fantasy has over 10 races to choose from, each with their own strengths and weaknesses. You can play as humans, elves, orcs, undead, dwarves, scaledfolk, merfolk, kobolds, trolls, goblins, and more. Each race has different units, such as warriors, archers, mages, cavalry, siege weapons, dragons, etc. You can also upgrade your units with new skills and abilities as you progress in the game.
-
- Customizable maps and scenarios
-
Age of Fantasy lets you create your own maps and scenarios with its map editor. You can design your own terrain, place buildings and resources, set the starting positions and objectives for each player, and add triggers and events to make your map more dynamic. You can also share your maps with other players online or download maps created by others.
-
- Online multiplayer and campaign mode
-
Age of Fantasy has an online multiplayer mode that lets you play with other players around the world. You can join or create rooms with different settings, such as map size, turn limit, fog of war, etc. You can also chat with other players and send them emojis. If you prefer to play solo, you can also try the campaign mode, which has over 100 missions to complete. The campaign mode will take you through different stories and scenarios involving different races and characters.
-
- Pixel art graphics and retro music
-
Age of Fantasy has a pixel art style that will remind you of the old-school games from the 90s. The game has colorful graphics and detailed animations that bring the fantasy world to life. The game also has a retro soundtrack that matches the mood and atmosphere of the game. You can enjoy the nostalgic sound effects and music while playing the game.
-
Why download Age of Fantasy Mod APK?
-
Age of Fantasy is a fun and addictive strategy game that will keep you entertained for hours. However, if you want to enjoy the game with more features and resources, you might want to download Age of Fantasy Mod APK. This is a modified version of the game that gives you some advantages
that other players don't have. Here are some of the benefits of Age of Fantasy Mod APK:
-
Benefits of Age of Fantasy Mod APK
-
- Unlimited money and gems
-
Money and gems are the main currencies in Age of Fantasy. You need them to buy new units, upgrade your existing ones, and unlock new races. However, earning money and gems can be slow and tedious, especially if you want to get the best units and races. With Age of Fantasy Mod APK, you don't have to worry about that. You will get unlimited money and gems that you can use to buy anything you want in the game. You can also use them to speed up your progress and complete the missions faster.
-
- All races and units unlocked
-
Age of Fantasy has over 10 races to choose from, but not all of them are available at the start. You have to unlock them by completing certain missions or paying with gems. Some of the races are more expensive than others, and some of them are only available for a limited time. With Age of Fantasy Mod APK, you don't have to wait or pay to unlock any race or unit. You will have access to all of them from the beginning, and you can switch between them as you please. You can also try different combinations and strategies with different races and units.
-
-
- No ads and no root required
-
Age of Fantasy is a free game, but it has ads that can interrupt your gameplay and annoy you. Some of the ads are also misleading and can lead you to download unwanted apps or malware. With Age of Fantasy Mod APK, you don't have to deal with any ads. You can enjoy the game without any distractions or risks. Moreover, you don't need to root your device to install Age of Fantasy Mod APK. You can simply download and install it like any other app, without compromising your device's security or warranty.
-
How to download and install Age of Fantasy Mod APK?
-
Now that you know the benefits of Age of Fantasy Mod APK, you might be wondering how to download and install it on your device. Don't worry, it's very easy and simple. Just follow these steps:
-
- Step 1: Download the APK file from a trusted source
-
The first thing you need to do is to download the APK file of Age of Fantasy Mod APK from a trusted source. There are many websites that offer modded apps, but not all of them are safe and reliable. Some of them may contain viruses or malware that can harm your device or steal your personal information. To avoid that, you should only download the APK file from a reputable website that has positive reviews and feedback from other users. You can also scan the file with an antivirus app before installing it.
-
- Step 2: Enable unknown sources on your device
-
The next thing you need to do is to enable unknown sources on your device. This is a security setting that prevents you from installing apps from sources other than the official Google Play Store. However, since Age of Fantasy Mod APK is not available on the Play Store, you need to enable unknown sources to install it. To do that, go to your device's settings, then security, then unknown sources, and toggle it on. You may see a warning message, but don't worry, it's safe as long as you download the APK file from a trusted source.
-
- Step 3: Install the APK file and launch the game
-
The final step is to install the APK file and launch the game. To do that, locate the downloaded APK file on your device's storage, then tap on it to start the installation process. Follow the instructions on the screen and wait for the installation to finish. Once it's done, you can launch the game from your app drawer or home screen. You will see a mod menu where you can enable or disable the mod features as you wish. Enjoy!
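
If your device is plugged into a computer, sideloading with adb is an alternative to tapping the file on the device. A minimal Python sketch follows; it assumes the Android platform tools are installed and USB debugging is enabled, and the file name is a placeholder:

import subprocess

# 'adb install -r' sideloads the APK, reinstalling over an existing copy
# while keeping the app's data.
subprocess.run(["adb", "install", "-r", "age-of-fantasy-mod.apk"], check=True)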
-
Conclusion
-
Age of Fantasy is a strategy game that lets you command different races and units in epic battles. It has multiple features that make it fun and addictive, such as customizable maps, online multiplayer, pixel art graphics, and retro music. However, if you want to enjoy the game with more features and resources, you should download Age of Fantasy Mod APK, a modified version that gives you unlimited money, gems, all races and units unlocked, no ads, and no root required. You can download and install it easily by following our guide above.
-
If you liked this article, please share it with your friends who love strategy games. Also, let us know what you think about Age of Fantasy Mod APK in the comments below. Thank you for reading!
-
FAQs
Here are some of the frequently asked questions about Age of Fantasy Mod APK:
-
- Is Age of Fantasy Mod APK safe to download and install?
-
Yes, Age of Fantasy Mod APK is safe to download and install, as long as you get it from a trusted source. You should also scan the file with an antivirus app before installing it, just to be sure. Age of Fantasy Mod APK does not require root access, so it will not harm your device or void your warranty.
-
- Is Age of Fantasy Mod APK compatible with my device?
-
Age of Fantasy Mod APK is compatible with most Android devices that run on Android 4.1 or higher. However, some devices may have compatibility issues or performance problems due to different hardware specifications or software versions. If you encounter any problems while playing the game, you can try adjusting the settings or contacting the developer for support.
-
- How do I update Age of Fantasy Mod APK?
-
Age of Fantasy Mod APK is not available on the Google Play Store, so you will not receive automatic updates from there. However, you can check for updates from the website where you downloaded the APK file, or from the mod menu in the game. You can also follow the developer's social media accounts or join their community forums to get the latest news and updates about the game.
-
- Can I play online with other players using Age of Fantasy Mod APK?
-
Yes, you can play online with other players using Age of Fantasy Mod APK, but only with those who are using the same modded version as you. You cannot play with players who are using the original version or a different modded version, as they will have different game data and features. You can also create or join private rooms with your friends who are using the same modded version as you.
-
- Can I use Age of Fantasy Mod APK on PC?
-
Yes, you can use Age of Fantasy Mod APK on PC, but you will need an Android emulator to do so. An Android emulator is a software that allows you to run Android apps and games on your PC. There are many Android emulators available online, such as BlueStacks, NoxPlayer, LDPlayer, etc. You can choose one that suits your preferences and system requirements. Once you have installed an Android emulator on your PC, you can download and install Age of Fantasy Mod APK on it and play it like you would on your mobile device.
-
-
\ No newline at end of file
diff --git a/spaces/fengmuxi/ChatGpt-Web/app/api/config/route.ts b/spaces/fengmuxi/ChatGpt-Web/app/api/config/route.ts
deleted file mode 100644
index 2b3bcbf203e9cfbf671b3143dd51160cb3e1f812..0000000000000000000000000000000000000000
--- a/spaces/fengmuxi/ChatGpt-Web/app/api/config/route.ts
+++ /dev/null
@@ -1,26 +0,0 @@
-import { NextResponse } from "next/server";
-
-import { getServerSideConfig } from "../../config/server";
-
-const serverConfig = getServerSideConfig();
-
-// Danger! Do not write any secret value here!
-const DANGER_CONFIG = {
- needCode: serverConfig.needCode,
- hideUserApiKey: serverConfig.hideUserApiKey,
- enableGPT4: serverConfig.enableGPT4,
-};
-
-declare global {
- type DangerConfig = typeof DANGER_CONFIG;
-}
-
-async function handle() {
- return NextResponse.json(DANGER_CONFIG);
-}
-
-export const GET = handle;
-export const POST = handle;
-
-export const runtime = "edge";
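-
-For context, a client can read these non-secret flags with a plain GET or POST. A minimal sketch, assuming the Next.js app is running locally on port 3000 (requests is a third-party Python package):
-
-import requests
-
-config = requests.get("http://localhost:3000/api/config").json()
-print(config["needCode"], config["hideUserApiKey"], config["enableGPT4"])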
diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Game NBA LIVE Mobile and Compete in Live Events and Tournaments.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Game NBA LIVE Mobile and Compete in Live Events and Tournaments.md
deleted file mode 100644
index d73a26b03700572aab59819901692b2533e2f811..0000000000000000000000000000000000000000
--- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Game NBA LIVE Mobile and Compete in Live Events and Tournaments.md
+++ /dev/null
@@ -1,114 +0,0 @@
-
-
How to Download NBA Games on Your Mobile Device
-
If you are a fan of basketball and the NBA, you might want to download some NBA games on your mobile device. Playing NBA games on your mobile device can be a great way to enjoy the thrill of the sport anytime and anywhere. You can also improve your basketball skills, learn new strategies, and compete with other players online.
There are many benefits of playing NBA games on your mobile device. Some of them are:
-
-
You can access a wide range of NBA games that suit your preferences and skill level
-
You can play at your own pace and schedule, without having to worry about missing a live game or a TV broadcast
-
You can save money, since you don't need an expensive console or a subscription
-
You can have fun and relax while playing your favorite sport
-
-
But what are the best NBA games to download on your mobile device? There are many options available, but we have selected two of the most popular and highly rated ones for you. These are:
-
NBA 2K Mobile Basketball Game
-
NBA 2K Mobile Basketball Game is one of the most realistic and immersive basketball games you can play on your mobile device. It is developed by 2K, Inc., a leading company in sports video games. It has over 10 million downloads and a 4.5-star rating on the Google Play Store.
-
download game nba 2k mobile basketball
-download game nba live mobile basketball
-download game nba 2k23
-download game nba 2k23 arcade edition
-download game nba 2k22
-download game nba jam
-download game nba 2k21
-download game nba 2k20
-download game nba 2k19
-download game nba 2k18
-download game nba 2k17
-download game nba 2k16
-download game nba 2k15
-download game nba 2k14
-download game nba 2k13
-download game nba 2k12
-download game nba 2k11
-download game nba live 19
-download game nba live 18
-download game nba live 16
-download game nba live 15
-download game nba live 14
-download game nba live 10
-download game nba live 09
-download game nba live 08
-download game nba live 07
-download game nba live 06
-download game nba live 05
-download game nba live 04
-download game nba live 03
-download game nba live 2001
-download game nba live 2000
-download game nba live 99
-download game nba live 98
-download game nba street vol.3
-download game nba street vol.2
-download game nba street homecourt
-download game nba street showdown
-download game nba ballers phenom
-download game nba ballers rebound
-download game nba ballers chosen one
-download game nba inside drive 2004
-download game nba inside drive 2003
-download game nba inside drive 2002
-download game nba inside drive 2000
-download game nba in the zone '98
-download game nba in the zone '99
-download game nba in the zone '00
-download game nba in the zone '02
-
Features of NBA 2K Mobile Basketball Game
-
Some of the features of NBA 2K Mobile Basketball Game are:
-
-
You can collect and upgrade hundreds of basketball cards featuring the 2022-23 NBA roster, NBA Playoffs Superstars, NBA All-Stars, and NBA MVPs. You can also enjoy new card tiers like Topaz, Pearl, and Jade while leveling up your cards from season to season.
-
You can play 5v5 matchups with console-quality graphics for an authentic ‘on the court’ NBA live basketball experience. You can also play 3v3 freestyle basketball in CREWS mode or join tournaments (7-game championship mode) for more challenges.
-
You can enjoy new themes, events, and rewards throughout the NBA season. You can also participate in limited-time events and complete sets to earn exclusive players, courts, and more.
-
-
How to Download NBA 2K Mobile Basketball Game
-
To download NBA 2K Mobile Basketball Game, follow these steps:
-
- Go to the Google Play Store or the App Store and search for NBA 2K Mobile Basketball Game
-
- Tap on the Install button and wait for the game to download and install on your device
-
- Launch the game and sign in with your Google Play Games or Game Center account
-
- Customize your profile and choose your favorite NBA team
-
- Start playing and enjoy the game
-
NBA LIVE Mobile Basketball
-
NBA LIVE Mobile Basketball is another popular and highly rated basketball game you can play on your mobile device. It is developed by Electronic Arts, a renowned company in video games. It has over 100 million downloads and a 4.3-star rating on the Google Play Store.
-
Features of NBA LIVE Mobile Basketball
-
Some of the features of NBA LIVE Mobile Basketball are:
-
-
You can draft your dream NBA team from a pool of current and legendary players. You can also select your lineup from different positions and play styles.
-
You can compete in live events, campaigns, and tournaments to earn exclusive rewards and players. You can also join a league or create your own to play with friends and other players around the world.
-
You can show off your skills in 3v3 and PvP modes with real-time basketball action. You can also use advanced controls, signature moves, and special abilities to dominate the court.
-
You can experience an enhanced gameplay with new audio and UI. You can also customize your courts, jerseys, logos, and more.
-
-
How to Download NBA LIVE Mobile Basketball
-
To download NBA LIVE Mobile Basketball, follow these steps:
-
- Go to the Google Play Store or the App Store and search for NBA LIVE Mobile Basketball
-
- Tap on the Install button and wait for the game to download and install on your device
-
- Launch the game and sign in with your Facebook, Google Play Games, or Game Center account
-
- Choose your favorite NBA team and customize your jersey and court
-
- Start playing and enjoy the game
Conclusion
-
Downloading NBA games on your mobile device can be a fun and convenient way to enjoy basketball anytime and anywhere. You can choose from a variety of NBA games that offer realistic graphics, exciting gameplay, and online competition. You can also improve your basketball skills, learn new strategies, and collect your favorite NBA players and teams.
-
In this article, we have reviewed two of the best NBA games to download on your mobile device: NBA 2K Mobile Basketball Game and NBA LIVE Mobile Basketball. Both of these games have millions of downloads and high ratings on the Google Play Store and the App Store. They also have many features that make them stand out from other NBA games.
-
To download these games, you just need to follow some simple steps that we have explained in this article. You can also check out the links below for more information and reviews about these games. We hope you enjoy playing these games and have a great time with basketball.
-
FAQs
-
Q: How much space do I need to download these NBA games on my mobile device?
-
A: The size of these NBA games may vary depending on your device and the updates. However, as a general estimate, you will need about 1.5 GB of free space to download NBA 2K Mobile Basketball Game and about 100 MB of free space to download NBA LIVE Mobile Basketball.
-
Q: Do I need an internet connection to play these NBA games on my mobile device?
-
A: Yes, you will need an internet connection to play these NBA games on your mobile device. You will also need an internet connection to download and update these games.
-
Q: Are these NBA games compatible with my mobile device?
-
A: These NBA games are compatible with most Android and iOS devices that meet the minimum requirements. You can check the compatibility of your device on the Google Play Store or the App Store before downloading these games.
-
Q: Are these NBA games free to play?
-
A: Yes, these NBA games are free to play. However, they may contain in-app purchases that allow you to buy extra items, coins, or features. You can disable in-app purchases in your device settings if you do not want to use them.
-
Q: How can I contact the developers of these NBA games if I have any issues or feedback?
-
A: You can contact the developers of these NBA games by using the following methods:
-
-
For NBA 2K Mobile Basketball Game, you can visit their website at https://nba.2k.com/mobile/ or email them at support@2k.com
-
For NBA LIVE Mobile Basketball, you can visit their website at https://www.ea.com/games/nba-live/nba-live-mobile or email them at help@ea.com
-
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/depd/lib/browser/index.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/depd/lib/browser/index.js
deleted file mode 100644
index 6be45cc20b33f20dcdc580b9709f1a4a20bb87a1..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/depd/lib/browser/index.js
+++ /dev/null
@@ -1,77 +0,0 @@
-/*!
- * depd
- * Copyright(c) 2015 Douglas Christopher Wilson
- * MIT Licensed
- */
-
-'use strict'
-
-/**
- * Module exports.
- * @public
- */
-
-module.exports = depd
-
-/**
- * Create deprecate for namespace in caller.
- */
-
-function depd (namespace) {
- if (!namespace) {
- throw new TypeError('argument namespace is required')
- }
-
- function deprecate (message) {
- // no-op in browser
- }
-
- deprecate._file = undefined
- deprecate._ignored = true
- deprecate._namespace = namespace
- deprecate._traced = false
- deprecate._warned = Object.create(null)
-
- deprecate.function = wrapfunction
- deprecate.property = wrapproperty
-
- return deprecate
-}
-
-/**
- * Return a wrapped function in a deprecation message.
- *
- * This is a no-op version of the wrapper, which does nothing but call
- * validation.
- */
-
-function wrapfunction (fn, message) {
- if (typeof fn !== 'function') {
- throw new TypeError('argument fn must be a function')
- }
-
- return fn
-}
-
-/**
- * Wrap property in a deprecation message.
- *
- * This is a no-op version of the wrapper, which does nothing but call
- * validation.
- */
-
-function wrapproperty (obj, prop, message) {
- if (!obj || (typeof obj !== 'object' && typeof obj !== 'function')) {
- throw new TypeError('argument obj must be object')
- }
-
- var descriptor = Object.getOwnPropertyDescriptor(obj, prop)
-
- if (!descriptor) {
- throw new TypeError('must call property on owner object')
- }
-
- if (!descriptor.configurable) {
- throw new TypeError('property must be configurable')
- }
-}
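-
-// Example (illustrative): every deprecation call is a no-op in this browser
-// build.
-//
-//   var deprecate = depd('my-module')
-//   deprecate('old API')                                  // prints nothing
-//   var fn = deprecate.function(function () {}, 'old fn') // returns fn as-is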
diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/escape-html/Readme.md b/spaces/fffiloni/controlnet-animation-doodle/node_modules/escape-html/Readme.md
deleted file mode 100644
index 653d9eaa793317827ce724c4a0756110e9356fc8..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/escape-html/Readme.md
+++ /dev/null
@@ -1,43 +0,0 @@
-
-# escape-html
-
- Escape string for use in HTML
-
-## Example
-
-```js
-var escape = require('escape-html');
-var html = escape('foo & bar');
-// -> foo &amp; bar
-```
-
-## Benchmark
-
-```
-$ npm run-script bench
-
-> escape-html@1.0.3 bench nodejs-escape-html
-> node benchmark/index.js
-
-
- http_parser@1.0
- node@0.10.33
- v8@3.14.5.9
- ares@1.9.0-DEV
- uv@0.10.29
- zlib@1.2.3
- modules@11
- openssl@1.0.1j
-
- 1 test completed.
- 2 tests completed.
- 3 tests completed.
-
- no special characters x 19,435,271 ops/sec ±0.85% (187 runs sampled)
- single special character x 6,132,421 ops/sec ±0.67% (194 runs sampled)
- many special characters x 3,175,826 ops/sec ±0.65% (193 runs sampled)
-```
-
-## License
-
- MIT
\ No newline at end of file
diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/ws/README.md b/spaces/fffiloni/controlnet-animation-doodle/node_modules/ws/README.md
deleted file mode 100644
index 4ae71f6d06437c4217c7423b8f15cdcee383b62b..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/ws/README.md
+++ /dev/null
@@ -1,495 +0,0 @@
-# ws: a Node.js WebSocket library
-
-[](https://www.npmjs.com/package/ws)
-[](https://github.com/websockets/ws/actions?query=workflow%3ACI+branch%3Amaster)
-[](https://coveralls.io/github/websockets/ws)
-
-ws is a simple to use, blazing fast, and thoroughly tested WebSocket client and
-server implementation.
-
-Passes the quite extensive Autobahn test suite: [server][server-report],
-[client][client-report].
-
-**Note**: This module does not work in the browser. The client in the docs is a
-reference to a back end with the role of a client in the WebSocket
-communication. Browser clients must use the native
-[`WebSocket`](https://developer.mozilla.org/en-US/docs/Web/API/WebSocket)
-object. To make the same code work seamlessly on Node.js and the browser, you
-can use one of the many wrappers available on npm, like
-[isomorphic-ws](https://github.com/heineiuo/isomorphic-ws).
-
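-With such a wrapper the same import works in both environments (a minimal
-sketch, assuming `isomorphic-ws` is installed):
-
-```js
-// Resolves to ws on Node.js and to the native WebSocket object in browsers.
-import WebSocket from 'isomorphic-ws';
-
-const ws = new WebSocket('wss://websocket-echo.com/');
-```
-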
-## Table of Contents
-
-- [Protocol support](#protocol-support)
-- [Installing](#installing)
- - [Opt-in for performance](#opt-in-for-performance)
-- [API docs](#api-docs)
-- [WebSocket compression](#websocket-compression)
-- [Usage examples](#usage-examples)
- - [Sending and receiving text data](#sending-and-receiving-text-data)
- - [Sending binary data](#sending-binary-data)
- - [Simple server](#simple-server)
- - [External HTTP/S server](#external-https-server)
- - [Multiple servers sharing a single HTTP/S server](#multiple-servers-sharing-a-single-https-server)
- - [Client authentication](#client-authentication)
- - [Server broadcast](#server-broadcast)
- - [Round-trip time](#round-trip-time)
- - [Use the Node.js streams API](#use-the-nodejs-streams-api)
- - [Other examples](#other-examples)
-- [FAQ](#faq)
- - [How to get the IP address of the client?](#how-to-get-the-ip-address-of-the-client)
- - [How to detect and close broken connections?](#how-to-detect-and-close-broken-connections)
- - [How to connect via a proxy?](#how-to-connect-via-a-proxy)
-- [Changelog](#changelog)
-- [License](#license)
-
-## Protocol support
-
-- **HyBi drafts 07-12** (Use the option `protocolVersion: 8`)
-- **HyBi drafts 13-17** (Current default, alternatively option
- `protocolVersion: 13`)
-
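-To opt in to an older draft, pass the `protocolVersion` option when creating
-the client (a minimal sketch):
-
-```js
-import WebSocket from 'ws';
-
-// Speak HyBi drafts 07-12 instead of the default (13-17).
-const ws = new WebSocket('ws://www.host.com/path', { protocolVersion: 8 });
-```
-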
-## Installing
-
-```
-npm install ws
-```
-
-### Opt-in for performance
-
-There are 2 optional modules that can be installed alongside the ws module.
-These modules are binary addons which improve certain operations. Prebuilt
-binaries are available for the most popular platforms, so you don't
-necessarily need to have a C++ compiler installed on your machine.
-
-- `npm install --save-optional bufferutil`: Lets ws efficiently perform
-  operations such as masking and unmasking the data payload of the WebSocket
-  frames.
-- `npm install --save-optional utf-8-validate`: Lets ws efficiently check
-  whether a message contains valid UTF-8.
-
-To stop ws from even trying to require and use these modules, set the
-[`WS_NO_BUFFER_UTIL`](./doc/ws.md#ws_no_buffer_util) and
-[`WS_NO_UTF_8_VALIDATE`](./doc/ws.md#ws_no_utf_8_validate) environment
-variables. This can be useful for enhancing security in systems where one user
-can put a package in the package search path of another user's application,
-due to how the Node.js resolver algorithm works.
-
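-For example (a sketch, assuming a POSIX shell; any non-empty value works):
-
-```
-WS_NO_BUFFER_UTIL=1 WS_NO_UTF_8_VALIDATE=1 node app.js
-```
-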
-## API docs
-
-See [`/doc/ws.md`](./doc/ws.md) for Node.js-like documentation of ws classes and
-utility functions.
-
-## WebSocket compression
-
-ws supports the [permessage-deflate extension][permessage-deflate] which enables
-the client and server to negotiate a compression algorithm and its parameters,
-and then selectively apply it to the data payloads of each WebSocket message.
-
-The extension is disabled by default on the server and enabled by default on the
-client. It adds a significant overhead in terms of performance and memory
-consumption, so we suggest enabling it only if it is really needed.
-
-Note that Node.js has a variety of issues with high-performance compression,
-where increased concurrency, especially on Linux, can lead to [catastrophic
-memory fragmentation][node-zlib-bug] and slow performance. If you intend to use
-permessage-deflate in production, it is worthwhile to set up a test
-representative of your workload and ensure Node.js/zlib will handle it with
-acceptable performance and memory usage.
-
-Tuning of permessage-deflate can be done via the options defined below. You can
-also use `zlibDeflateOptions` and `zlibInflateOptions`, which are passed directly
-into the creation of [raw deflate/inflate streams][node-zlib-deflaterawdocs].
-
-See [the docs][ws-server-options] for more options.
-
-```js
-import WebSocket, { WebSocketServer } from 'ws';
-
-const wss = new WebSocketServer({
- port: 8080,
- perMessageDeflate: {
- zlibDeflateOptions: {
- // See zlib defaults.
- chunkSize: 1024,
- memLevel: 7,
- level: 3
- },
- zlibInflateOptions: {
- chunkSize: 10 * 1024
- },
- // Other options settable:
- clientNoContextTakeover: true, // Defaults to negotiated value.
- serverNoContextTakeover: true, // Defaults to negotiated value.
- serverMaxWindowBits: 10, // Defaults to negotiated value.
- // Below options specified as default values.
- concurrencyLimit: 10, // Limits zlib concurrency for perf.
- threshold: 1024 // Size (in bytes) below which messages
- // should not be compressed if context takeover is disabled.
- }
-});
-```
-
-The client will only use the extension if it is supported and enabled on the
-server. To always disable the extension on the client set the
-`perMessageDeflate` option to `false`.
-
-```js
-import WebSocket from 'ws';
-
-const ws = new WebSocket('ws://www.host.com/path', {
- perMessageDeflate: false
-});
-```
-
-## Usage examples
-
-### Sending and receiving text data
-
-```js
-import WebSocket from 'ws';
-
-const ws = new WebSocket('ws://www.host.com/path');
-
-ws.on('open', function open() {
- ws.send('something');
-});
-
-ws.on('message', function message(data) {
- console.log('received: %s', data);
-});
-```
-
-### Sending binary data
-
-```js
-import WebSocket from 'ws';
-
-const ws = new WebSocket('ws://www.host.com/path');
-
-ws.on('open', function open() {
- const array = new Float32Array(5);
-
- for (var i = 0; i < array.length; ++i) {
- array[i] = i / 2;
- }
-
- ws.send(array);
-});
-```
-
-### Simple server
-
-```js
-import { WebSocketServer } from 'ws';
-
-const wss = new WebSocketServer({ port: 8080 });
-
-wss.on('connection', function connection(ws) {
- ws.on('message', function message(data) {
- console.log('received: %s', data);
- });
-
- ws.send('something');
-});
-```
-
-### External HTTP/S server
-
-```js
-import { createServer } from 'https';
-import { readFileSync } from 'fs';
-import { WebSocketServer } from 'ws';
-
-const server = createServer({
- cert: readFileSync('/path/to/cert.pem'),
- key: readFileSync('/path/to/key.pem')
-});
-const wss = new WebSocketServer({ server });
-
-wss.on('connection', function connection(ws) {
- ws.on('message', function message(data) {
- console.log('received: %s', data);
- });
-
- ws.send('something');
-});
-
-server.listen(8080);
-```
-
-### Multiple servers sharing a single HTTP/S server
-
-```js
-import { createServer } from 'http';
-import { parse } from 'url';
-import { WebSocketServer } from 'ws';
-
-const server = createServer();
-const wss1 = new WebSocketServer({ noServer: true });
-const wss2 = new WebSocketServer({ noServer: true });
-
-wss1.on('connection', function connection(ws) {
- // ...
-});
-
-wss2.on('connection', function connection(ws) {
- // ...
-});
-
-server.on('upgrade', function upgrade(request, socket, head) {
- const { pathname } = parse(request.url);
-
- if (pathname === '/foo') {
- wss1.handleUpgrade(request, socket, head, function done(ws) {
- wss1.emit('connection', ws, request);
- });
- } else if (pathname === '/bar') {
- wss2.handleUpgrade(request, socket, head, function done(ws) {
- wss2.emit('connection', ws, request);
- });
- } else {
- socket.destroy();
- }
-});
-
-server.listen(8080);
-```
-
-### Client authentication
-
-```js
-import { createServer } from 'http';
-import { WebSocketServer } from 'ws';
-
-const server = createServer();
-const wss = new WebSocketServer({ noServer: true });
-
-wss.on('connection', function connection(ws, request, client) {
- ws.on('message', function message(data) {
- console.log(`Received message ${data} from user ${client}`);
- });
-});
-
-server.on('upgrade', function upgrade(request, socket, head) {
- // This function is not defined on purpose. Implement it with your own logic.
- authenticate(request, function next(err, client) {
- if (err || !client) {
- socket.write('HTTP/1.1 401 Unauthorized\r\n\r\n');
- socket.destroy();
- return;
- }
-
- wss.handleUpgrade(request, socket, head, function done(ws) {
- wss.emit('connection', ws, request, client);
- });
- });
-});
-
-server.listen(8080);
-```
-
-Also see the provided [example][session-parse-example] using `express-session`.
-
-### Server broadcast
-
-A client WebSocket broadcasting to all connected WebSocket clients, including
-itself.
-
-```js
-import WebSocket, { WebSocketServer } from 'ws';
-
-const wss = new WebSocketServer({ port: 8080 });
-
-wss.on('connection', function connection(ws) {
- ws.on('message', function message(data, isBinary) {
- wss.clients.forEach(function each(client) {
- if (client.readyState === WebSocket.OPEN) {
- client.send(data, { binary: isBinary });
- }
- });
- });
-});
-```
-
-A client WebSocket broadcasting to every other connected WebSocket clients,
-excluding itself.
-
-```js
-import WebSocket, { WebSocketServer } from 'ws';
-
-const wss = new WebSocketServer({ port: 8080 });
-
-wss.on('connection', function connection(ws) {
- ws.on('message', function message(data, isBinary) {
- wss.clients.forEach(function each(client) {
- if (client !== ws && client.readyState === WebSocket.OPEN) {
- client.send(data, { binary: isBinary });
- }
- });
- });
-});
-```
-
-### Round-trip time
-
-```js
-import WebSocket from 'ws';
-
-const ws = new WebSocket('wss://websocket-echo.com/');
-
-ws.on('open', function open() {
- console.log('connected');
- ws.send(Date.now());
-});
-
-ws.on('close', function close() {
- console.log('disconnected');
-});
-
-ws.on('message', function message(data) {
- console.log(`Round-trip time: ${Date.now() - data} ms`);
-
- setTimeout(function timeout() {
- ws.send(Date.now());
- }, 500);
-});
-```
-
-### Use the Node.js streams API
-
-```js
-import WebSocket, { createWebSocketStream } from 'ws';
-
-const ws = new WebSocket('wss://websocket-echo.com/');
-
-const duplex = createWebSocketStream(ws, { encoding: 'utf8' });
-
-duplex.pipe(process.stdout);
-process.stdin.pipe(duplex);
-```
-
-### Other examples
-
-For a full example with a browser client communicating with a ws server, see the
-examples folder.
-
-Otherwise, see the test cases.
-
-## FAQ
-
-### How to get the IP address of the client?
-
-The remote IP address can be obtained from the raw socket.
-
-```js
-import { WebSocketServer } from 'ws';
-
-const wss = new WebSocketServer({ port: 8080 });
-
-wss.on('connection', function connection(ws, req) {
- const ip = req.socket.remoteAddress;
-});
-```
-
-When the server runs behind a proxy like NGINX, the de-facto standard is to use
-the `X-Forwarded-For` header.
-
-```js
-wss.on('connection', function connection(ws, req) {
- const ip = req.headers['x-forwarded-for'].split(',')[0].trim();
-});
-```
-
-### How to detect and close broken connections?
-
-Sometimes the link between the server and the client can be interrupted in a way
-that keeps both the server and the client unaware of the broken state of the
-connection (e.g. when pulling the cord).
-
-In these cases ping messages can be used as a means to verify that the remote
-endpoint is still responsive.
-
-```js
-import { WebSocketServer } from 'ws';
-
-function heartbeat() {
- this.isAlive = true;
-}
-
-const wss = new WebSocketServer({ port: 8080 });
-
-wss.on('connection', function connection(ws) {
- ws.isAlive = true;
- ws.on('pong', heartbeat);
-});
-
-const interval = setInterval(function ping() {
- wss.clients.forEach(function each(ws) {
- if (ws.isAlive === false) return ws.terminate();
-
- ws.isAlive = false;
- ws.ping();
- });
-}, 30000);
-
-wss.on('close', function close() {
- clearInterval(interval);
-});
-```
-
-Pong messages are automatically sent in response to ping messages as required by
-the spec.
-
-Just like in the server example above, your clients might lose the connection
-without knowing it. You might want to add a ping listener on your clients to
-detect that. A simple implementation would be:
-
-```js
-import WebSocket from 'ws';
-
-function heartbeat() {
- clearTimeout(this.pingTimeout);
-
- // Use `WebSocket#terminate()`, which immediately destroys the connection,
- // instead of `WebSocket#close()`, which waits for the close timer.
- // Delay should be equal to the interval at which your server
- // sends out pings plus a conservative assumption of the latency.
- this.pingTimeout = setTimeout(() => {
- this.terminate();
- }, 30000 + 1000);
-}
-
-const client = new WebSocket('wss://websocket-echo.com/');
-
-client.on('open', heartbeat);
-client.on('ping', heartbeat);
-client.on('close', function clear() {
- clearTimeout(this.pingTimeout);
-});
-```
-
-### How to connect via a proxy?
-
-Use a custom `http.Agent` implementation like [https-proxy-agent][] or
-[socks-proxy-agent][].
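-
-A minimal sketch using `https-proxy-agent` (the exact import shape differs
-between versions of that package; a v5-style default import is assumed here):
-
-```js
-import WebSocket from 'ws';
-import HttpsProxyAgent from 'https-proxy-agent';
-
-// Tunnel the WebSocket connection through an HTTP proxy.
-const agent = new HttpsProxyAgent('http://proxy.example:8080');
-const ws = new WebSocket('wss://websocket-echo.com/', { agent });
-```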
-
-## Changelog
-
-We're using the GitHub [releases][changelog] for changelog entries.
-
-## License
-
-[MIT](LICENSE)
-
-[changelog]: https://github.com/websockets/ws/releases
-[client-report]: http://websockets.github.io/ws/autobahn/clients/
-[https-proxy-agent]: https://github.com/TooTallNate/node-https-proxy-agent
-[node-zlib-bug]: https://github.com/nodejs/node/issues/8871
-[node-zlib-deflaterawdocs]:
- https://nodejs.org/api/zlib.html#zlib_zlib_createdeflateraw_options
-[permessage-deflate]: https://tools.ietf.org/html/rfc7692
-[server-report]: http://websockets.github.io/ws/autobahn/servers/
-[session-parse-example]: ./examples/express-session-parse
-[socks-proxy-agent]: https://github.com/TooTallNate/node-socks-proxy-agent
-[ws-server-options]: ./doc/ws.md#new-websocketserveroptions-callback
diff --git a/spaces/fffiloni/image-to-sound-fx-debug/app.py b/spaces/fffiloni/image-to-sound-fx-debug/app.py
deleted file mode 100644
index dac7dcc2d72a8f91340d6350fa4d99428b43a853..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/image-to-sound-fx-debug/app.py
+++ /dev/null
@@ -1,81 +0,0 @@
-import gradio as gr
-
-
-caption = gr.Blocks.load(name="spaces/SRDdev/Image-Caption")
-audio_gen = gr.Blocks.load(name="spaces/haoheliu/audioldm-text-to-audio-generation")
-
-def infer(image_input):
- cap = caption(image_input, fn_index=0)
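-    # The positional values mirror the AudioLDM space's inputs (presumably
-    # duration in seconds, guidance scale, seed, and candidate count).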
- sound = audio_gen(cap, 5, 2.5, 45, 3, fn_index=0)
-
- return gr.Textbox.update(value=cap, visible=True), sound
-
-title = """
-
-
-
- Image to Sound Effect
-
-
-
- Convert an image to a corresponding sound effect generated through GPT2 Image Captioning & AudioLDM
-
-
-"""
-
-article = """
-
-
-
-
-
You may also like:
-
-
-
-
-
-
-
-
-
-"""
-
-with gr.Blocks(css="style.css") as demo:
- with gr.Column(elem_id="col-container"):
-
- gr.HTML(title)
-
- input_img = gr.Image(type="filepath", elem_id="input-img")
- caption_output = gr.Textbox(label="Caption", lines=1, visible=False, elem_id="text-caption")
- sound_output = gr.Video(label="Result", elem_id="sound-output")
-
- generate = gr.Button("Generate SFX from Image")
-
-
-
- gr.HTML(article)
-
- generate.click(infer, inputs=[input_img], outputs=[caption_output, sound_output], api_name="i2fx")
-
-
-demo.queue(max_size=32, concurrency_count=20).launch()
diff --git a/spaces/fgenie/scamtext_PAL_self_consistency/funcs/f_43.py b/spaces/fgenie/scamtext_PAL_self_consistency/funcs/f_43.py
deleted file mode 100644
index 2b0025e3ecdf5139930bade493849aaa8693480b..0000000000000000000000000000000000000000
--- a/spaces/fgenie/scamtext_PAL_self_consistency/funcs/f_43.py
+++ /dev/null
@@ -1,26 +0,0 @@
-
-import re
-
-def is_spam(message):
- # Rule 1: Check for the presence of special characters or spaces between characters (common in spam messages)
- if re.search(r'[\W]', message):
- return True
-
-    # Rule 2: Check for non-standard domain names
-    # (non-capturing group, so re.findall returns the full URL rather than
-    # just the scheme)
-    domain_regex = r'(?:http|https)://[^\s/]+'
- domain_matches = re.findall(domain_regex, message)
- for match in domain_matches:
- if not ('.' in match and len(match) > 5): # exclude standard ones
- return True
-
- # Rule 3: Check for unusual percentage signs
- if re.search(r'[%][^ ][^\d]', message):
- return True
-
- # Rule 4: Check for the presence of unusual substrings (광고, 보장, 무료, 무료거부, 등록, SMS, 입장, 1000명, 무조건, 매수)
- spam_keywords = ["광고", "보장", "무료", "무료거부", "등록", "SMS", "입장", "1000명", "무조건", "매수"]
- for word in spam_keywords:
- if word in message:
- return True
-
- return False
diff --git a/spaces/flatindo/generate2/diffusion_webui/utils/preprocces_utils.py b/spaces/flatindo/generate2/diffusion_webui/utils/preprocces_utils.py
deleted file mode 100644
index f1824721e4804eecd48b453a37c1ce0377468773..0000000000000000000000000000000000000000
--- a/spaces/flatindo/generate2/diffusion_webui/utils/preprocces_utils.py
+++ /dev/null
@@ -1,96 +0,0 @@
-from controlnet_aux import (
- CannyDetector,
- ContentShuffleDetector,
- HEDdetector,
- LineartAnimeDetector,
- LineartDetector,
- MediapipeFaceDetector,
- MidasDetector,
- MLSDdetector,
- NormalBaeDetector,
- OpenposeDetector,
- PidiNetDetector,
- SamDetector,
- ZoeDetector,
-)
-
-import numpy as np
-import cv2
-from PIL import Image  # used by none_preprocces below
-
-def pad64(x):
- return int(np.ceil(float(x) / 64.0) * 64 - x)
-
-def HWC3(x):
- assert x.dtype == np.uint8
- if x.ndim == 2:
- x = x[:, :, None]
- assert x.ndim == 3
- H, W, C = x.shape
- assert C == 1 or C == 3 or C == 4
- if C == 3:
- return x
- if C == 1:
- return np.concatenate([x, x, x], axis=2)
- if C == 4:
- color = x[:, :, 0:3].astype(np.float32)
- alpha = x[:, :, 3:4].astype(np.float32) / 255.0
- y = color * alpha + 255.0 * (1.0 - alpha)
- y = y.clip(0, 255).astype(np.uint8)
- return y
-
-def safer_memory(x):
- return np.ascontiguousarray(x.copy()).copy()
-
-
-def resize_image_with_pad(input_image, resolution, skip_hwc3=False):
- if skip_hwc3:
- img = input_image
- else:
- img = HWC3(input_image)
-
- H_raw, W_raw, _ = img.shape
- k = float(resolution) / float(min(H_raw, W_raw))
- interpolation = cv2.INTER_CUBIC if k > 1 else cv2.INTER_AREA
- H_target = int(np.round(float(H_raw) * k))
- W_target = int(np.round(float(W_raw) * k))
- img = cv2.resize(img, (W_target, H_target), interpolation=interpolation)
- H_pad, W_pad = pad64(H_target), pad64(W_target)
- img_padded = np.pad(img, [[0, H_pad], [0, W_pad], [0, 0]], mode='edge')
-
- def remove_pad(x):
- return safer_memory(x[:H_target, :W_target])
-
- return safer_memory(img_padded), remove_pad
-
-
-def scribble_xdog(img, res=512, thr_a=32, **kwargs):
- img, remove_pad = resize_image_with_pad(img, res)
-    # Difference of Gaussians: subtract a fine blur (sigma=0.5) from a coarse
-    # one (sigma=5.0) to get an edge response map.
-    g1 = cv2.GaussianBlur(img.astype(np.float32), (0, 0), 0.5)
-    g2 = cv2.GaussianBlur(img.astype(np.float32), (0, 0), 5.0)
-    dog = (255 - np.min(g2 - g1, axis=2)).clip(0, 255).astype(np.uint8)
-    # Binarize: keep pixels whose edge response exceeds the threshold thr_a.
-    result = np.zeros_like(img, dtype=np.uint8)
-    result[2 * (255 - dog) > thr_a] = 255
- return remove_pad(result), True
-
-def none_preprocces(image_path:str):
- return Image.open(image_path)
-
-PREPROCCES_DICT = {
- "Hed": HEDdetector.from_pretrained("lllyasviel/Annotators"),
- "Midas": MidasDetector.from_pretrained("lllyasviel/Annotators"),
- "MLSD": MLSDdetector.from_pretrained("lllyasviel/Annotators"),
- "Openpose": OpenposeDetector.from_pretrained("lllyasviel/Annotators"),
- "PidiNet": PidiNetDetector.from_pretrained("lllyasviel/Annotators"),
- "NormalBae": NormalBaeDetector.from_pretrained("lllyasviel/Annotators"),
- "Lineart": LineartDetector.from_pretrained("lllyasviel/Annotators"),
- "LineartAnime": LineartAnimeDetector.from_pretrained(
- "lllyasviel/Annotators"
- ),
- "Zoe": ZoeDetector.from_pretrained("lllyasviel/Annotators"),
- "Canny": CannyDetector(),
- "ContentShuffle": ContentShuffleDetector(),
- "MediapipeFace": MediapipeFaceDetector(),
- "ScribbleXDOG": scribble_xdog,
- "None": none_preprocces
-}
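-
-# Example (illustrative): look up a preprocessor by name and run it.
-#   edges = PREPROCCES_DICT["Canny"](image)          # detectors are callable
-#   xdog, _ = PREPROCCES_DICT["ScribbleXDOG"](image, res=512, thr_a=32)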
-
\ No newline at end of file
diff --git a/spaces/foduucom/stockmarket-future-prediction/app.py b/spaces/foduucom/stockmarket-future-prediction/app.py
deleted file mode 100644
index 168086fd80ef8fd2692b567f5d45ae9955e35752..0000000000000000000000000000000000000000
--- a/spaces/foduucom/stockmarket-future-prediction/app.py
+++ /dev/null
@@ -1,61 +0,0 @@
-import gradio as gr
-from gradio import components as gc
-import cv2
-import requests
-import os
-from ultralyticsplus import YOLO, render_result
-
-# Model Heading and Description
-model_heading = "StockMarket: Trends Recognition for Trading Success"
-description = """ 🌟 Elevate Your Trading Odyssey with Trend Predictions! 🌟
-Dive deep into the enigma of market trends with the precision of a seasoned detective. 🕵️♂️ With Foduu AI's unparalleled insights, transition seamlessly from bearish 'Downs' to bullish 'Ups'. 📉📈
-Consider us your trading compass, guiding you through the financial wilderness like a modern-day Gandalf. 🧙♂️ Whether you're a seasoned trader or just embarking on your journey, we're here to illuminate your path. 💡
-Trading with us? It's like possessing the secret recipe to investment success. 🍲💰
-Intrigued? Dive into the world of trading alchemy! 🌌
-💌 Reach Out: info@foddu.com
-👍 Give us a thumbs up and embark on an unparalleled trading escapade! No, you won't gain superpowers, but you'll be one step closer to mastering the markets! 🚀🌍📊!"""
-
-image_path= [['test/1.jpg', 'foduucom/stockmarket-future-prediction', 640, 0.25, 0.45], ['test/2.jpg', 'foduucom/stockmarket-future-prediction', 640, 0.25, 0.45],['test/3.jpg', 'foduucom/stockmarket-future-prediction', 640, 0.25, 0.45]]
-
-# Load YOLO model
-model = YOLO("foduucom/stockmarket-future-prediction")
-
-def yolov8_img_inference(
- image: gc.Image = None,
- model_path: str = "foduucom/stockmarket-future-prediction",
- image_size: gc.Slider = 640,
- conf_threshold: gc.Slider = 0.25,
- iou_threshold: gc.Slider = 0.45
-):
- model = YOLO(model_path)
- model.overrides['conf'] = conf_threshold
- model.overrides['iou'] = iou_threshold
- model.overrides['agnostic_nms'] = False
- model.overrides['max_det'] = 1000
- results = model.predict(image)
- render = render_result(model=model, image=image, result=results[0])
- return render
-
-inputs_image = [
- gc.Image(type="filepath", label="Input Image"),
- gc.Dropdown(["foduucom/stockmarket-future-prediction"], default="foduucom/stockmarket-future-prediction", label="Model"),
- gc.Slider(minimum=320, maximum=1280, default=640, step=32, label="Image Size"),
- gc.Slider(minimum=0.0, maximum=1.0, default=0.25, step=0.05, label="Confidence Threshold"),
- gc.Slider(minimum=0.0, maximum=1.0, default=0.45, step=0.05, label="IOU Threshold"),
-]
-
-outputs_image = gc.Image(type="filepath", label="Output Image")
-
-interface_image = gr.Interface(
- fn=yolov8_img_inference,
- inputs=inputs_image,
- outputs=outputs_image,
- title=model_heading,
- description=description,
- examples=image_path,
- cache_examples=False,
- theme='huggingface'
-)
-
-interface_image.queue()
-interface_image.launch(debug=True)
diff --git a/spaces/freddyaboulton/3.1.4.9-all-demos/demos/blocks_plug/run.py b/spaces/freddyaboulton/3.1.4.9-all-demos/demos/blocks_plug/run.py
deleted file mode 100644
index 97684fa61b5c6a66eb5e07fa4162510f9d155415..0000000000000000000000000000000000000000
--- a/spaces/freddyaboulton/3.1.4.9-all-demos/demos/blocks_plug/run.py
+++ /dev/null
@@ -1,36 +0,0 @@
-import gradio as gr
-
-
-def change_tab():
- return gr.Tabs.update(selected=2)
-
-
-identity_demo, input_demo, output_demo = gr.Blocks(), gr.Blocks(), gr.Blocks()
-
-with identity_demo:
- gr.Interface(lambda x: x, "text", "text")
-
-with input_demo:
- t = gr.Textbox(label="Enter your text here")
- with gr.Row():
- btn = gr.Button("Submit")
- clr = gr.Button("Clear")
- clr.click(lambda x: "", t, t)
-
-with output_demo:
- gr.Textbox("This is a static output")
-
-with gr.Blocks() as demo:
- gr.Markdown("Three demos in one!")
- with gr.Tabs(selected=1) as tabs:
- with gr.TabItem("Text Identity", id=0):
- identity_demo.render()
- with gr.TabItem("Text Input", id=1):
- input_demo.render()
- with gr.TabItem("Text Static", id=2):
- output_demo.render()
- btn = gr.Button("Change tab")
- btn.click(inputs=None, outputs=tabs, fn=change_tab)
-
-if __name__ == "__main__":
- demo.launch()
diff --git a/spaces/fun-research/FC-CLIP/datasets/prepare_pascal_voc_sem_seg.py b/spaces/fun-research/FC-CLIP/datasets/prepare_pascal_voc_sem_seg.py
deleted file mode 100644
index 9b0b0e133caebe60a64a17f923a23cba4c323363..0000000000000000000000000000000000000000
--- a/spaces/fun-research/FC-CLIP/datasets/prepare_pascal_voc_sem_seg.py
+++ /dev/null
@@ -1,65 +0,0 @@
-# ------------------------------------------------------------------------------
-# Copyright (c) 2022-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-#
-# This work is made available under the Nvidia Source Code License.
-# To view a copy of this license, visit
-# https://github.com/NVlabs/ODISE/blob/main/LICENSE
-#
-# Written by Jiarui Xu
-# ------------------------------------------------------------------------------
-
-import os
-from pathlib import Path
-import shutil
-
-import numpy as np
-import tqdm
-from PIL import Image
-
-
-def convert_pas21(input, output):
- img = np.asarray(Image.open(input))
- assert img.dtype == np.uint8
-    # labels already match the 21-class setting; just re-save
- Image.fromarray(img).save(output)
-
-def convert_pas20(input, output):
- img = np.array(Image.open(input))
-    # Remap labels for the 20-class setting: background (0) becomes ignore
-    # (255) and classes 1..20 shift down to 0..19; the original ignore label
-    # (255) also stays 255 after passing through 254.
-    img[img == 0] = 255
-    img = img - 1
-    img[img == 254] = 255
-    assert img.dtype == np.uint8
- Image.fromarray(img).save(output)
-
-
-if __name__ == "__main__":
- dataset_dir = Path(os.getenv("DETECTRON2_DATASETS", "datasets")) / "pascal_voc_d2"
- voc_dir = Path(os.getenv("DETECTRON2_DATASETS", "datasets")) / "VOCdevkit/VOC2012"
- for split in ["training", "validation"]:
- if split == "training":
- img_name_path = voc_dir / "ImageSets/Segmentation/train.txt"
- else:
- img_name_path = voc_dir / "ImageSets/Segmentation/val.txt"
- img_dir = voc_dir / "JPEGImages"
- ann_dir = voc_dir / "SegmentationClass"
-
- output_img_dir = dataset_dir / "images" / split
- output_ann_dir_21 = dataset_dir / "annotations_pascal21" / split
- output_ann_dir_20 = dataset_dir / "annotations_pascal20" / split
-
- output_img_dir.mkdir(parents=True, exist_ok=True)
- output_ann_dir_21.mkdir(parents=True, exist_ok=True)
- output_ann_dir_20.mkdir(parents=True, exist_ok=True)
-
- with open(img_name_path) as f:
- for line in tqdm.tqdm(f.readlines()):
- img_name = line.strip()
- img_path = img_dir / f"{img_name}.jpg"
- ann_path = ann_dir / f"{img_name}.png"
-
- # print(f'copy2 {output_img_dir}')
- shutil.copy2(img_path, output_img_dir)
- # print(f"convert {ann_dir} to {output_ann_dir / f'{img_name}.png'}")
- convert_pas21(ann_path, output_ann_dir_21 / f"{img_name}.png")
- convert_pas20(ann_path, output_ann_dir_20 / f"{img_name}.png")
\ No newline at end of file
diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/models/decode_heads/dnl_head.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/models/decode_heads/dnl_head.py
deleted file mode 100644
index 333280c5947066fd3c7ebcfe302a0e7ad65480d5..0000000000000000000000000000000000000000
--- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/models/decode_heads/dnl_head.py
+++ /dev/null
@@ -1,131 +0,0 @@
-import torch
-from annotator.uniformer.mmcv.cnn import NonLocal2d
-from torch import nn
-
-from ..builder import HEADS
-from .fcn_head import FCNHead
-
-
-class DisentangledNonLocal2d(NonLocal2d):
- """Disentangled Non-Local Blocks.
-
- Args:
- temperature (float): Temperature to adjust attention. Default: 0.05
- """
-
- def __init__(self, *arg, temperature, **kwargs):
- super().__init__(*arg, **kwargs)
- self.temperature = temperature
- self.conv_mask = nn.Conv2d(self.in_channels, 1, kernel_size=1)
-
- def embedded_gaussian(self, theta_x, phi_x):
- """Embedded gaussian with temperature."""
-
- # NonLocal2d pairwise_weight: [N, HxW, HxW]
- pairwise_weight = torch.matmul(theta_x, phi_x)
- if self.use_scale:
- # theta_x.shape[-1] is `self.inter_channels`
- pairwise_weight /= theta_x.shape[-1]**0.5
- pairwise_weight /= self.temperature
- pairwise_weight = pairwise_weight.softmax(dim=-1)
- return pairwise_weight
-
- def forward(self, x):
- # x: [N, C, H, W]
- n = x.size(0)
-
- # g_x: [N, HxW, C]
- g_x = self.g(x).view(n, self.inter_channels, -1)
- g_x = g_x.permute(0, 2, 1)
-
- # theta_x: [N, HxW, C], phi_x: [N, C, HxW]
- if self.mode == 'gaussian':
- theta_x = x.view(n, self.in_channels, -1)
- theta_x = theta_x.permute(0, 2, 1)
- if self.sub_sample:
- phi_x = self.phi(x).view(n, self.in_channels, -1)
- else:
- phi_x = x.view(n, self.in_channels, -1)
- elif self.mode == 'concatenation':
- theta_x = self.theta(x).view(n, self.inter_channels, -1, 1)
- phi_x = self.phi(x).view(n, self.inter_channels, 1, -1)
- else:
- theta_x = self.theta(x).view(n, self.inter_channels, -1)
- theta_x = theta_x.permute(0, 2, 1)
- phi_x = self.phi(x).view(n, self.inter_channels, -1)
-
- # subtract mean
- theta_x -= theta_x.mean(dim=-2, keepdim=True)
- phi_x -= phi_x.mean(dim=-1, keepdim=True)
-
- pairwise_func = getattr(self, self.mode)
- # pairwise_weight: [N, HxW, HxW]
- pairwise_weight = pairwise_func(theta_x, phi_x)
-
- # y: [N, HxW, C]
- y = torch.matmul(pairwise_weight, g_x)
- # y: [N, C, H, W]
- y = y.permute(0, 2, 1).contiguous().reshape(n, self.inter_channels,
- *x.size()[2:])
-
- # unary_mask: [N, 1, HxW]
- unary_mask = self.conv_mask(x)
- unary_mask = unary_mask.view(n, 1, -1)
- unary_mask = unary_mask.softmax(dim=-1)
- # unary_x: [N, 1, C]
- unary_x = torch.matmul(unary_mask, g_x)
- # unary_x: [N, C, 1, 1]
- unary_x = unary_x.permute(0, 2, 1).contiguous().reshape(
- n, self.inter_channels, 1, 1)
-
- output = x + self.conv_out(y + unary_x)
-
- return output
-
-
-@HEADS.register_module()
-class DNLHead(FCNHead):
- """Disentangled Non-Local Neural Networks.
-
- This head is the implementation of `DNLNet
-    <https://arxiv.org/abs/2006.06668>`_.
-
- Args:
- reduction (int): Reduction factor of projection transform. Default: 2.
- use_scale (bool): Whether to scale pairwise_weight by
- sqrt(1/inter_channels). Default: False.
- mode (str): The nonlocal mode. Options are 'embedded_gaussian',
-            'dot_product'. Default: 'embedded_gaussian'.
- temperature (float): Temperature to adjust attention. Default: 0.05
- """
-
- def __init__(self,
- reduction=2,
- use_scale=True,
- mode='embedded_gaussian',
- temperature=0.05,
- **kwargs):
- super(DNLHead, self).__init__(num_convs=2, **kwargs)
- self.reduction = reduction
- self.use_scale = use_scale
- self.mode = mode
- self.temperature = temperature
- self.dnl_block = DisentangledNonLocal2d(
- in_channels=self.channels,
- reduction=self.reduction,
- use_scale=self.use_scale,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- mode=self.mode,
- temperature=self.temperature)
-
- def forward(self, inputs):
- """Forward function."""
- x = self._transform_inputs(inputs)
- output = self.convs[0](x)
- output = self.dnl_block(output)
- output = self.convs[1](output)
- if self.concat_input:
- output = self.conv_cat(torch.cat([x, output], dim=1))
- output = self.cls_seg(output)
- return output
diff --git a/spaces/gradio-discord-bots/Llama-2-70b-chat-hf/README.md b/spaces/gradio-discord-bots/Llama-2-70b-chat-hf/README.md
deleted file mode 100644
index edc0e82b72140b9ee9cf82d1235a9bbcae081cc2..0000000000000000000000000000000000000000
--- a/spaces/gradio-discord-bots/Llama-2-70b-chat-hf/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Llama 2 70b Chat Hf
-emoji: 🦙
-colorFrom: blue
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.38.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/gradio/HuBERT/examples/simultaneous_translation/utils/functions.py b/spaces/gradio/HuBERT/examples/simultaneous_translation/utils/functions.py
deleted file mode 100644
index f795b5f31cee6d9f8387d6402994b9cbb4c98190..0000000000000000000000000000000000000000
--- a/spaces/gradio/HuBERT/examples/simultaneous_translation/utils/functions.py
+++ /dev/null
@@ -1,149 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-
-
-def exclusive_cumprod(tensor, dim: int, eps: float = 1e-10):
- """
- Implementing exclusive cumprod.
- There is cumprod in pytorch, however there is no exclusive mode.
-    cumprod(x) = [x1, x1x2, x1x2x3, ..., prod_{i=1}^n x_i]
- exclusive means cumprod(x) = [1, x1, x1x2, x1x2x3, ..., prod_{i=1}^{n-1} x_i]
- """
- tensor_size = list(tensor.size())
- tensor_size[dim] = 1
- return_tensor = safe_cumprod(
- torch.cat([torch.ones(tensor_size).type_as(tensor), tensor], dim=dim),
- dim=dim,
- eps=eps,
- )
-
- if dim == 0:
- return return_tensor[:-1]
- elif dim == 1:
- return return_tensor[:, :-1]
- elif dim == 2:
- return return_tensor[:, :, :-1]
- else:
- raise RuntimeError("Cumprod on dimension 3 and more is not implemented")
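-
-# Example (illustrative):
-#   exclusive_cumprod(torch.tensor([[2., 3., 4.]]), dim=1)
-#   -> tensor([[1., 2., 6.]])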
-
-
-def safe_cumprod(tensor, dim: int, eps: float = 1e-10):
- """
- An implementation of cumprod to prevent precision issue.
- cumprod(x)
- = [x1, x1x2, x1x2x3, ....]
- = [exp(log(x1)), exp(log(x1) + log(x2)), exp(log(x1) + log(x2) + log(x3)), ...]
- = exp(cumsum(log(x)))
- """
-
- if (tensor + eps < 0).any().item():
- raise RuntimeError(
- "Safe cumprod can only take non-negative tensors as input."
- "Consider use torch.cumprod if you want to calculate negative values."
- )
-
- log_tensor = torch.log(tensor + eps)
- cumsum_log_tensor = torch.cumsum(log_tensor, dim)
- exp_cumsum_log_tensor = torch.exp(cumsum_log_tensor)
- return exp_cumsum_log_tensor
-
-
-def lengths_to_mask(lengths, max_len: int, dim: int = 0, negative_mask: bool = False):
- """
- Convert a tensor of lengths to mask
- For example, lengths = [[2, 3, 4]], max_len = 5
- mask =
- [[1, 1, 1],
- [1, 1, 1],
- [0, 1, 1],
- [0, 0, 1],
- [0, 0, 0]]
- """
- assert len(lengths.size()) <= 2
-    if len(lengths.size()) == 2:
-        if dim == 1:
-            lengths = lengths.t()
- else:
- lengths = lengths.unsqueeze(1)
-
- # lengths : batch_size, 1
- lengths = lengths.view(-1, 1)
-
- batch_size = lengths.size(0)
- # batch_size, max_len
- mask = torch.arange(max_len).expand(batch_size, max_len).type_as(lengths) < lengths
-
- if negative_mask:
- mask = ~mask
-
- if dim == 0:
- # max_len, batch_size
- mask = mask.t()
-
- return mask
-
-
-def moving_sum(x, start_idx: int, end_idx: int):
- """
- From MONOTONIC CHUNKWISE ATTENTION
- https://arxiv.org/pdf/1712.05382.pdf
- Equation (18)
-
- x = [x_1, x_2, ..., x_N]
- MovingSum(x, start_idx, end_idx)_n = Sigma_{m=n−(start_idx−1)}^{n+end_idx-1} x_m
- for n in {1, 2, 3, ..., N}
-
- x : src_len, batch_size
- start_idx : start idx
- end_idx : end idx
-
- Example
- src_len = 5
- batch_size = 3
- x =
- [[ 0, 5, 10],
- [ 1, 6, 11],
- [ 2, 7, 12],
- [ 3, 8, 13],
- [ 4, 9, 14]]
-
- MovingSum(x, 3, 1) =
- [[ 0, 5, 10],
- [ 1, 11, 21],
- [ 3, 18, 33],
- [ 6, 21, 36],
- [ 9, 24, 39]]
-
- MovingSum(x, 1, 3) =
- [[ 3, 18, 33],
- [ 6, 21, 36],
- [ 9, 24, 39],
- [ 7, 17, 27],
- [ 4, 9, 14]]
- """
- assert start_idx > 0 and end_idx > 0
- assert len(x.size()) == 2
- src_len, batch_size = x.size()
- # batch_size, 1, src_len
- x = x.t().unsqueeze(1)
-    # all-ones kernel of length (start_idx + end_idx - 1); conv1d with it
-    # computes a sliding-window sum
- moving_sum_weight = x.new_ones([1, 1, end_idx + start_idx - 1])
-
- moving_sum = (
- torch.nn.functional.conv1d(
- x, moving_sum_weight, padding=start_idx + end_idx - 1
- )
- .squeeze(1)
- .t()
- )
- moving_sum = moving_sum[end_idx:-start_idx]
-
- assert src_len == moving_sum.size(0)
- assert batch_size == moving_sum.size(1)
-
- return moving_sum
diff --git a/spaces/gradio/HuBERT/fairseq/models/wav2vec/wav2vec2.py b/spaces/gradio/HuBERT/fairseq/models/wav2vec/wav2vec2.py
deleted file mode 100644
index 714fd3ab50443b8d15715b1cf5abd4eb517298c4..0000000000000000000000000000000000000000
--- a/spaces/gradio/HuBERT/fairseq/models/wav2vec/wav2vec2.py
+++ /dev/null
@@ -1,1016 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import math
-from dataclasses import dataclass, field
-from typing import List, Tuple
-
-import numpy as np
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from fairseq import utils
-from fairseq.data.data_utils import compute_mask_indices
-from fairseq.dataclass import ChoiceEnum, FairseqDataclass
-from fairseq.models import BaseFairseqModel, register_model
-from fairseq.modules import (
- Fp32GroupNorm,
- Fp32LayerNorm,
- GradMultiply,
- GumbelVectorQuantizer,
- LayerNorm,
- MultiheadAttention,
- SamePad,
- TransposeLast,
-)
-from fairseq.modules.transformer_sentence_encoder import init_bert_params
-from fairseq.utils import buffered_arange, index_put, is_xla_tensor
-
-
-EXTRACTOR_MODE_CHOICES = ChoiceEnum(["default", "layer_norm"])
-MASKING_DISTRIBUTION_CHOICES = ChoiceEnum(["static", "uniform", "normal", "poisson"])
-
-
-@dataclass
-class Wav2Vec2Config(FairseqDataclass):
- extractor_mode: EXTRACTOR_MODE_CHOICES = field(
- default="default",
- metadata={
- "help": "mode for feature extractor. default has a single group norm with d "
- "groups in the first conv block, whereas layer_norm has layer norms in "
- "every block (meant to use with normalize=True)"
- },
- )
- encoder_layers: int = field(
- default=12, metadata={"help": "num encoder layers in the transformer"}
- )
- encoder_embed_dim: int = field(
- default=768, metadata={"help": "encoder embedding dimension"}
- )
- encoder_ffn_embed_dim: int = field(
- default=3072, metadata={"help": "encoder embedding dimension for FFN"}
- )
- encoder_attention_heads: int = field(
- default=12, metadata={"help": "num encoder attention heads"}
- )
- activation_fn: ChoiceEnum(utils.get_available_activation_fns()) = field(
- default="gelu", metadata={"help": "activation function to use"}
- )
-
- # dropouts
- dropout: float = field(
- default=0.1, metadata={"help": "dropout probability for the transformer"}
- )
- attention_dropout: float = field(
- default=0.1, metadata={"help": "dropout probability for attention weights"}
- )
- activation_dropout: float = field(
- default=0.0, metadata={"help": "dropout probability after activation in FFN"}
- )
- encoder_layerdrop: float = field(
- default=0.0, metadata={"help": "probability of dropping a tarnsformer layer"}
- )
- dropout_input: float = field(
- default=0.0,
- metadata={"help": "dropout to apply to the input (after feat extr)"},
- )
- dropout_features: float = field(
- default=0.0,
- metadata={"help": "dropout to apply to the features (after feat extr)"},
- )
-
- final_dim: int = field(
- default=0,
- metadata={
- "help": "project final representations and targets to this many dimensions."
- "set to encoder_embed_dim is <= 0"
- },
- )
- layer_norm_first: bool = field(
- default=False, metadata={"help": "apply layernorm first in the transformer"}
- )
- conv_feature_layers: str = field(
- default="[(512, 10, 5)] + [(512, 3, 2)] * 4 + [(512,2,2)] + [(512,2,2)]",
- metadata={
- "help": "string describing convolutional feature extraction layers in form of a python list that contains "
- "[(dim, kernel_size, stride), ...]"
- },
- )
- conv_bias: bool = field(
- default=False, metadata={"help": "include bias in conv encoder"}
- )
- logit_temp: float = field(
- default=0.1, metadata={"help": "temperature to divide logits by"}
- )
- quantize_targets: bool = field(
- default=False, metadata={"help": "use quantized targets"}
- )
- quantize_input: bool = field(
- default=False, metadata={"help": "use quantized inputs"}
- )
- same_quantizer: bool = field(
- default=False, metadata={"help": "use same quantizer for inputs and targets"}
- )
- target_glu: bool = field(
- default=False, metadata={"help": "adds projection + glu to targets"}
- )
- feature_grad_mult: float = field(
- default=1.0, metadata={"help": "multiply feature extractor var grads by this"}
- )
- quantizer_depth: int = field(
- default=1,
- metadata={"help": "number of quantizer layers"},
- )
- quantizer_factor: int = field(
- default=3,
- metadata={
- "help": "dimensionality increase for inner quantizer layers (if depth > 1)"
- },
- )
- latent_vars: int = field(
- default=320,
- metadata={"help": "number of latent variables V in each group of the codebook"},
- )
- latent_groups: int = field(
- default=2,
- metadata={"help": "number of groups G of latent variables in the codebook"},
- )
- latent_dim: int = field(
- default=0,
- metadata={
- "help": "if > 0, uses this dimensionality for latent variables. "
- "otherwise uses final_dim / latent_groups"
- },
- )
-
- # masking
- mask_length: int = field(default=10, metadata={"help": "mask length"})
- mask_prob: float = field(
- default=0.65, metadata={"help": "probability of replacing a token with mask"}
- )
- mask_selection: MASKING_DISTRIBUTION_CHOICES = field(
- default="static", metadata={"help": "how to choose mask length"}
- )
- mask_other: float = field(
- default=0,
- metadata={
- "help": "secondary mask argument (used for more complex distributions), "
- "see help in compute_mask_indices"
- },
- )
- no_mask_overlap: bool = field(
- default=False, metadata={"help": "whether to allow masks to overlap"}
- )
- mask_min_space: int = field(
- default=1,
- metadata={"help": "min space between spans (if no overlap is enabled)"},
- )
-
- # channel masking
- mask_channel_length: int = field(
- default=10, metadata={"help": "length of the mask for features (channels)"}
- )
- mask_channel_prob: float = field(
- default=0.0, metadata={"help": "probability of replacing a feature with 0"}
- )
- mask_channel_before: bool = False
- mask_channel_selection: MASKING_DISTRIBUTION_CHOICES = field(
- default="static",
- metadata={"help": "how to choose mask length for channel masking"},
- )
- mask_channel_other: float = field(
- default=0,
- metadata={
- "help": "secondary mask argument (used for more complex distributions), "
- "see help in compute_mask_indicesh"
- },
- )
- no_mask_channel_overlap: bool = field(
- default=False, metadata={"help": "whether to allow channel masks to overlap"}
- )
- mask_channel_min_space: int = field(
- default=1,
- metadata={"help": "min space between spans (if no overlap is enabled)"},
- )
-
- # negative selection
- num_negatives: int = field(
- default=100,
- metadata={"help": "number of negative examples from the same sample"},
- )
- negatives_from_everywhere: bool = field(
- default=False,
- metadata={"help": "sample negatives from everywhere, not just masked states"},
- )
- cross_sample_negatives: int = field(
- default=0, metadata={"help": "number of negative examples from the any sample"}
- )
- codebook_negatives: int = field(
- default=0, metadata={"help": "number of negative examples codebook"}
- )
-
- # positional embeddings
- conv_pos: int = field(
- default=128,
- metadata={"help": "number of filters for convolutional positional embeddings"},
- )
- conv_pos_groups: int = field(
- default=16,
- metadata={"help": "number of groups for convolutional positional embedding"},
- )
-
- latent_temp: Tuple[float, float, float] = field(
- default=(2, 0.5, 0.999995),
- metadata={
- "help": "temperature for latent variable sampling. "
- "can be tuple of 3 values (start, end, decay)"
- },
- )
-
-
-@register_model("wav2vec2", dataclass=Wav2Vec2Config)
-class Wav2Vec2Model(BaseFairseqModel):
- def __init__(self, cfg: Wav2Vec2Config):
- super().__init__()
- self.cfg = cfg
-
- feature_enc_layers = eval(cfg.conv_feature_layers)
- self.embed = feature_enc_layers[-1][0]
-
- self.feature_extractor = ConvFeatureExtractionModel(
- conv_layers=feature_enc_layers,
- dropout=0.0,
- mode=cfg.extractor_mode,
- conv_bias=cfg.conv_bias,
- )
-
- self.post_extract_proj = (
- nn.Linear(self.embed, cfg.encoder_embed_dim)
- if self.embed != cfg.encoder_embed_dim and not cfg.quantize_input
- else None
- )
-
- self.mask_prob = cfg.mask_prob
- self.mask_selection = cfg.mask_selection
- self.mask_other = cfg.mask_other
- self.mask_length = cfg.mask_length
- self.no_mask_overlap = cfg.no_mask_overlap
- self.mask_min_space = cfg.mask_min_space
-
- self.mask_channel_prob = cfg.mask_channel_prob
- self.mask_channel_before = cfg.mask_channel_before
- self.mask_channel_selection = cfg.mask_channel_selection
- self.mask_channel_other = cfg.mask_channel_other
- self.mask_channel_length = cfg.mask_channel_length
- self.no_mask_channel_overlap = cfg.no_mask_channel_overlap
- self.mask_channel_min_space = cfg.mask_channel_min_space
-
- self.dropout_input = nn.Dropout(cfg.dropout_input)
- self.dropout_features = nn.Dropout(cfg.dropout_features)
-
- self.feature_grad_mult = cfg.feature_grad_mult
-
- self.quantizer = None
- self.input_quantizer = None
-
- self.n_negatives = cfg.num_negatives
- self.cross_sample_negatives = cfg.cross_sample_negatives
- self.codebook_negatives = cfg.codebook_negatives
- self.negatives_from_everywhere = cfg.negatives_from_everywhere
-
- self.logit_temp = cfg.logit_temp
-
- final_dim = cfg.final_dim if cfg.final_dim > 0 else cfg.encoder_embed_dim
-
- if cfg.quantize_targets:
- vq_dim = cfg.latent_dim if cfg.latent_dim > 0 else final_dim
- self.quantizer = GumbelVectorQuantizer(
- dim=self.embed,
- num_vars=cfg.latent_vars,
- temp=cfg.latent_temp,
- groups=cfg.latent_groups,
- combine_groups=False,
- vq_dim=vq_dim,
- time_first=True,
- weight_proj_depth=cfg.quantizer_depth,
- weight_proj_factor=cfg.quantizer_factor,
- )
- self.project_q = nn.Linear(vq_dim, final_dim)
- else:
- self.project_q = nn.Linear(self.embed, final_dim)
-
- if cfg.quantize_input:
- if cfg.same_quantizer and self.quantizer is not None:
- vq_dim = final_dim
- self.input_quantizer = self.quantizer
- else:
- vq_dim = cfg.latent_dim if cfg.latent_dim > 0 else cfg.encoder_embed_dim
- self.input_quantizer = GumbelVectorQuantizer(
- dim=self.embed,
- num_vars=cfg.latent_vars,
- temp=cfg.latent_temp,
- groups=cfg.latent_groups,
- combine_groups=False,
- vq_dim=vq_dim,
- time_first=True,
- weight_proj_depth=cfg.quantizer_depth,
- weight_proj_factor=cfg.quantizer_factor,
- )
- self.project_inp = nn.Linear(vq_dim, cfg.encoder_embed_dim)
-
- self.mask_emb = nn.Parameter(
- torch.FloatTensor(cfg.encoder_embed_dim).uniform_()
- )
-
- self.encoder = TransformerEncoder(cfg)
- self.layer_norm = LayerNorm(self.embed)
-
- self.target_glu = None
- if cfg.target_glu:
- self.target_glu = nn.Sequential(
- nn.Linear(final_dim, final_dim * 2), nn.GLU()
- )
-
- self.final_proj = nn.Linear(cfg.encoder_embed_dim, final_dim)
-
-    def upgrade_state_dict_named(self, state_dict, name):
-        """Upgrade a (possibly old) state dict for new versions of fairseq."""
-        super().upgrade_state_dict_named(state_dict, name)
-        return state_dict
-
- @classmethod
- def build_model(cls, cfg: Wav2Vec2Config, task=None):
- """Build a new model instance."""
-
- return cls(cfg)
-
- def apply_mask(
- self,
- x,
- padding_mask,
- mask_indices=None,
- mask_channel_indices=None,
- ):
- B, T, C = x.shape
-
- if self.mask_channel_prob > 0 and self.mask_channel_before:
- mask_channel_indices = compute_mask_indices(
- (B, C),
- None,
- self.mask_channel_prob,
- self.mask_channel_length,
- self.mask_channel_selection,
- self.mask_channel_other,
- no_overlap=self.no_mask_channel_overlap,
- min_space=self.mask_channel_min_space,
- )
- mask_channel_indices = (
- torch.from_numpy(mask_channel_indices)
- .to(x.device)
- .unsqueeze(1)
- .expand(-1, T, -1)
- )
- x[mask_channel_indices] = 0
-
- if self.mask_prob > 0:
- if mask_indices is None:
- mask_indices = compute_mask_indices(
- (B, T),
- padding_mask,
- self.mask_prob,
- self.mask_length,
- self.mask_selection,
- self.mask_other,
- min_masks=2,
- no_overlap=self.no_mask_overlap,
- min_space=self.mask_min_space,
- )
- mask_indices = torch.from_numpy(mask_indices).to(x.device)
- x = index_put(x, mask_indices, self.mask_emb)
- else:
- mask_indices = None
-
- if self.mask_channel_prob > 0 and not self.mask_channel_before:
- if mask_channel_indices is None:
- mask_channel_indices = compute_mask_indices(
- (B, C),
- None,
- self.mask_channel_prob,
- self.mask_channel_length,
- self.mask_channel_selection,
- self.mask_channel_other,
- no_overlap=self.no_mask_channel_overlap,
- min_space=self.mask_channel_min_space,
- )
- mask_channel_indices = (
- torch.from_numpy(mask_channel_indices)
- .to(x.device)
- .unsqueeze(1)
- .expand(-1, T, -1)
- )
- x = index_put(x, mask_channel_indices, 0)
-
- return x, mask_indices
-
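-    # sample_negatives draws self.n_negatives distractor frames per (batch,
-    # time) position from the same utterance, plus cross_sample_negatives
-    # frames from anywhere in the batch; the "+= 1" shifts below skip the
-    # positive frame so it is never drawn as its own negative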
- def sample_negatives(self, y, num, padding_count=None):
-
- if self.n_negatives == 0 and self.cross_sample_negatives == 0:
- return y.new(0)
-
- bsz, tsz, fsz = y.shape
- y = y.view(-1, fsz) # BTC => (BxT)C
-
- # FIXME: what happens if padding_count is specified?
- cross_high = tsz * bsz
- high = tsz - (padding_count or 0)
- with torch.no_grad():
- assert high > 1, f"{bsz,tsz,fsz}"
-
- if self.n_negatives > 0:
- tszs = (
- buffered_arange(num)
- .unsqueeze(-1)
- .expand(-1, self.n_negatives)
- .flatten()
- )
-
- neg_idxs = torch.randint(
- low=0, high=high - 1, size=(bsz, self.n_negatives * num)
- )
- neg_idxs[neg_idxs >= tszs] += 1
-
- if self.cross_sample_negatives > 0:
- tszs = (
- buffered_arange(num)
- .unsqueeze(-1)
- .expand(-1, self.cross_sample_negatives)
- .flatten()
- )
-
- cross_neg_idxs = torch.randint(
- low=0,
- high=cross_high - 1,
- size=(bsz, self.cross_sample_negatives * num),
- )
- cross_neg_idxs[cross_neg_idxs >= tszs] += 1
-
- if self.n_negatives > 0:
- for i in range(1, bsz):
- neg_idxs[i] += i * high
- else:
- neg_idxs = cross_neg_idxs
-
- if self.cross_sample_negatives > 0 and self.n_negatives > 0:
- neg_idxs = torch.cat([neg_idxs, cross_neg_idxs], dim=1)
-
- negs = y[neg_idxs.view(-1)]
- negs = negs.view(
- bsz, num, self.n_negatives + self.cross_sample_negatives, fsz
- ).permute(
- 2, 0, 1, 3
- ) # to NxBxTxC
- return negs, neg_idxs
-
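-    # compute_preds stacks the true quantized target (row 0) on top of the
-    # negatives and scores each against the encoder output with cosine
-    # similarity divided by logit_temp; negatives identical to the positive
-    # are filled with -inf so they cannot win the contrastive softmax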
- def compute_preds(self, x, y, negatives):
-
- neg_is_pos = (y == negatives).all(-1)
- y = y.unsqueeze(0)
- targets = torch.cat([y, negatives], dim=0)
-
- logits = torch.cosine_similarity(x.float(), targets.float(), dim=-1).type_as(x)
-
- logits = logits / self.logit_temp
-
- if is_xla_tensor(logits) or neg_is_pos.any():
- fillval = -float(2 ** 30)
- if not hasattr(self, "_inftensor"):
- self._inftensor = (
- torch.tensor(fillval).to(x.device)
- if is_xla_tensor(logits)
- else float("-inf")
- )
- logits[1:] = index_put(logits[1:], neg_is_pos, self._inftensor)
-
- return logits
-
- def _get_feat_extract_output_lengths(self, input_lengths: torch.LongTensor):
- """
- Computes the output length of the convolutional layers
- """
-
- def _conv_out_length(input_length, kernel_size, stride):
- return torch.floor((input_length - kernel_size) / stride + 1)
-
- conv_cfg_list = eval(self.cfg.conv_feature_layers)
-
- for i in range(len(conv_cfg_list)):
- input_lengths = _conv_out_length(
- input_lengths, conv_cfg_list[i][1], conv_cfg_list[i][2]
- )
-
- return input_lengths.to(torch.long)
-
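-    # rough sanity check, assuming the default wav2vec 2.0 feature extractor
-    # (strides [5, 2, 2, 2, 2, 2, 2], an overall hop of 320 samples):
-    # 1 s of 16 kHz audio (16000 samples) maps to roughly 49 frames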
- def forward(
- self,
- source,
- padding_mask=None,
- mask=True,
- features_only=False,
- layer=None,
- mask_indices=None,
- mask_channel_indices=None,
- padding_count=None,
- ):
-
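-        # feature_grad_mult scales the gradient flowing back into the conv
-        # feature extractor (GradMultiply is the identity in the forward
-        # pass); a value of 0 freezes the extractor entirely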
- if self.feature_grad_mult > 0:
- features = self.feature_extractor(source)
- if self.feature_grad_mult != 1.0:
- features = GradMultiply.apply(features, self.feature_grad_mult)
- else:
- with torch.no_grad():
- features = self.feature_extractor(source)
-
- features_pen = features.float().pow(2).mean()
-
- features = features.transpose(1, 2)
- features = self.layer_norm(features)
- unmasked_features = features.clone()
-
- if padding_mask is not None and padding_mask.any():
- input_lengths = (1 - padding_mask.long()).sum(-1)
- # apply conv formula to get real output_lengths
- output_lengths = self._get_feat_extract_output_lengths(input_lengths)
-
- padding_mask = torch.zeros(
- features.shape[:2], dtype=features.dtype, device=features.device
- )
-
-            # these two operations make sure that every position up to and
-            # including the last valid output index is attended to: mark the
-            # last valid frame, then flip+cumsum+flip labels everything at or
-            # before that frame as "not padding"
- padding_mask[
- (
- torch.arange(padding_mask.shape[0], device=padding_mask.device),
- output_lengths - 1,
- )
- ] = 1
- padding_mask = (1 - padding_mask.flip([-1]).cumsum(-1).flip([-1])).bool()
- else:
- padding_mask = None
-
- if self.post_extract_proj is not None:
- features = self.post_extract_proj(features)
-
- features = self.dropout_input(features)
- unmasked_features = self.dropout_features(unmasked_features)
-
- num_vars = None
- code_ppl = None
- prob_ppl = None
- curr_temp = None
-
- if self.input_quantizer:
- q = self.input_quantizer(features, produce_targets=False)
- features = q["x"]
- num_vars = q["num_vars"]
- code_ppl = q["code_perplexity"]
- prob_ppl = q["prob_perplexity"]
- curr_temp = q["temp"]
- features = self.project_inp(features)
-
- if mask:
- x, mask_indices = self.apply_mask(
- features,
- padding_mask,
- mask_indices=mask_indices,
- mask_channel_indices=mask_channel_indices,
- )
- if not is_xla_tensor(x) and mask_indices is not None:
- # tpu-comment: reducing the size in a dynamic way causes
- # too many recompilations on xla.
- y = unmasked_features[mask_indices].view(
- unmasked_features.size(0), -1, unmasked_features.size(-1)
- )
- else:
- y = unmasked_features
- else:
- x = features
- y = unmasked_features
- mask_indices = None
-
- x, layer_results = self.encoder(x, padding_mask=padding_mask, layer=layer)
-
- if features_only:
- return {
- "x": x,
- "padding_mask": padding_mask,
- "features": unmasked_features,
- "layer_results": layer_results,
- }
-
- if self.quantizer:
- q = self.quantizer(y, produce_targets=False)
- y = q["x"]
- num_vars = q["num_vars"]
- code_ppl = q["code_perplexity"]
- prob_ppl = q["prob_perplexity"]
- curr_temp = q["temp"]
-
- y = self.project_q(y)
-
- if self.negatives_from_everywhere:
- neg_cands = self.quantizer(unmasked_features, produce_targets=False)[
- "x"
- ]
- negs, _ = self.sample_negatives(
- neg_cands,
- y.size(1),
- padding_count=padding_count,
- )
- negs = self.project_q(negs)
-
- else:
- negs, _ = self.sample_negatives(
- y,
- y.size(1),
- padding_count=padding_count,
- )
-
- if self.codebook_negatives > 0:
- cb_negs = self.quantizer.sample_from_codebook(
- y.size(0) * y.size(1), self.codebook_negatives
- )
- cb_negs = cb_negs.view(
- self.codebook_negatives, y.size(0), y.size(1), -1
-            ) # order doesn't matter
- cb_negs = self.project_q(cb_negs)
- negs = torch.cat([negs, cb_negs], dim=0)
- else:
- y = self.project_q(y)
-
- if self.negatives_from_everywhere:
- negs, _ = self.sample_negatives(
- unmasked_features,
- y.size(1),
- padding_count=padding_count,
- )
- negs = self.project_q(negs)
- else:
- negs, _ = self.sample_negatives(
- y,
- y.size(1),
- padding_count=padding_count,
- )
-
- if not is_xla_tensor(x):
- # tpu-comment: reducing the size in a dynamic way causes
- # too many recompilations on xla.
- x = x[mask_indices].view(x.size(0), -1, x.size(-1))
-
- if self.target_glu:
- y = self.target_glu(y)
- negs = self.target_glu(negs)
-
- x = self.final_proj(x)
- x = self.compute_preds(x, y, negs)
-
- result = {
- "x": x,
- "padding_mask": padding_mask,
- "features_pen": features_pen,
- }
-
- if prob_ppl is not None:
- result["prob_perplexity"] = prob_ppl
- result["code_perplexity"] = code_ppl
- result["num_vars"] = num_vars
- result["temp"] = curr_temp
-
- return result
-
- def quantize(self, x):
- assert self.quantizer is not None
- x = self.feature_extractor(x)
- x = x.transpose(1, 2)
- x = self.layer_norm(x)
- return self.quantizer.forward_idx(x)
-
- def extract_features(self, source, padding_mask, mask=False, layer=None):
- res = self.forward(
- source, padding_mask, mask=mask, features_only=True, layer=layer
- )
- return res
-
- def get_logits(self, net_output):
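-        # net_output["x"] has shape (1 + num_negatives, B, T); flatten it so
-        # each row is one contrastive decision with the positive at index 0,
-        # matching the all-zero targets returned by get_targets below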
- logits = net_output["x"]
- logits = logits.transpose(0, 2)
- logits = logits.reshape(-1, logits.size(-1))
- return logits
-
- def get_targets(self, sample, net_output, expand_steps=True):
- x = net_output["x"]
- return x.new_zeros(x.size(1) * x.size(2), dtype=torch.long)
-
- def get_extra_losses(self, net_output):
- pen = []
-
- if "prob_perplexity" in net_output:
- pen.append(
- (net_output["num_vars"] - net_output["prob_perplexity"])
- / net_output["num_vars"]
- )
-
- if "features_pen" in net_output:
- pen.append(net_output["features_pen"])
-
- return pen
-
- def remove_pretraining_modules(self):
- self.quantizer = None
- self.project_q = None
- self.target_glu = None
- self.final_proj = None
-
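-# A minimal usage sketch (hypothetical, for illustration only; assumes a
-# default Wav2Vec2Config and a batch of 16 kHz mono waveforms):
-#
-#   cfg = Wav2Vec2Config()
-#   model = Wav2Vec2Model.build_model(cfg)
-#   source = torch.randn(2, 16000)                     # (batch, samples)
-#   out = model(source, mask=True)                     # contrastive outputs
-#   feats = model.extract_features(source, None)["x"]  # (B, T', C) features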
-
-class ConvFeatureExtractionModel(nn.Module):
- def __init__(
- self,
- conv_layers: List[Tuple[int, int, int]],
- dropout: float = 0.0,
- mode: str = "default",
- conv_bias: bool = False,
- ):
- super().__init__()
-
- assert mode in {"default", "layer_norm"}
-
- def block(
- n_in,
- n_out,
- k,
- stride,
- is_layer_norm=False,
- is_group_norm=False,
- conv_bias=False,
- ):
- def make_conv():
- conv = nn.Conv1d(n_in, n_out, k, stride=stride, bias=conv_bias)
- nn.init.kaiming_normal_(conv.weight)
- return conv
-
-            assert not (
-                is_layer_norm and is_group_norm
-            ), "layer norm and group norm are exclusive"
-
- if is_layer_norm:
- return nn.Sequential(
- make_conv(),
- nn.Dropout(p=dropout),
- nn.Sequential(
- TransposeLast(),
- Fp32LayerNorm(dim, elementwise_affine=True),
- TransposeLast(),
- ),
- nn.GELU(),
- )
- elif is_group_norm:
- return nn.Sequential(
- make_conv(),
- nn.Dropout(p=dropout),
- Fp32GroupNorm(dim, dim, affine=True),
- nn.GELU(),
- )
- else:
- return nn.Sequential(make_conv(), nn.Dropout(p=dropout), nn.GELU())
-
- in_d = 1
- self.conv_layers = nn.ModuleList()
- for i, cl in enumerate(conv_layers):
- assert len(cl) == 3, "invalid conv definition: " + str(cl)
- (dim, k, stride) = cl
-
- self.conv_layers.append(
- block(
- in_d,
- dim,
- k,
- stride,
- is_layer_norm=mode == "layer_norm",
- is_group_norm=mode == "default" and i == 0,
- conv_bias=conv_bias,
- )
- )
- in_d = dim
-
- def forward(self, x):
-
- # BxT -> BxCxT
- x = x.unsqueeze(1)
-
- for conv in self.conv_layers:
- x = conv(x)
-
- return x
-
-
-class TransformerEncoder(nn.Module):
- def __init__(self, args):
- super().__init__()
-
-        self.args = args  # max_positions() below reads this
-        self.dropout = args.dropout
- self.embedding_dim = args.encoder_embed_dim
-
- self.pos_conv = nn.Conv1d(
- self.embedding_dim,
- self.embedding_dim,
- kernel_size=args.conv_pos,
- padding=args.conv_pos // 2,
- groups=args.conv_pos_groups,
- )
- dropout = 0
- std = math.sqrt((4 * (1.0 - dropout)) / (args.conv_pos * self.embedding_dim))
- nn.init.normal_(self.pos_conv.weight, mean=0, std=std)
- nn.init.constant_(self.pos_conv.bias, 0)
-
- self.pos_conv = nn.utils.weight_norm(self.pos_conv, name="weight", dim=2)
- self.pos_conv = nn.Sequential(self.pos_conv, SamePad(args.conv_pos), nn.GELU())
-
- self.layers = nn.ModuleList(
- [
- TransformerSentenceEncoderLayer(
- embedding_dim=self.embedding_dim,
- ffn_embedding_dim=args.encoder_ffn_embed_dim,
- num_attention_heads=args.encoder_attention_heads,
- dropout=self.dropout,
- attention_dropout=args.attention_dropout,
- activation_dropout=args.activation_dropout,
- activation_fn=args.activation_fn,
- layer_norm_first=args.layer_norm_first,
- )
- for _ in range(args.encoder_layers)
- ]
- )
-
- self.layer_norm_first = args.layer_norm_first
- self.layer_norm = LayerNorm(self.embedding_dim)
- self.layerdrop = args.encoder_layerdrop
-
- self.apply(init_bert_params)
-
- def forward(self, x, padding_mask=None, layer=None):
- x, layer_results = self.extract_features(x, padding_mask, layer)
-
- if self.layer_norm_first and layer is None:
- x = self.layer_norm(x)
-
- return x, layer_results
-
- def extract_features(self, x, padding_mask=None, tgt_layer=None):
-
- if padding_mask is not None:
- x = index_put(x, padding_mask, 0)
-
- x_conv = self.pos_conv(x.transpose(1, 2))
- x_conv = x_conv.transpose(1, 2)
- x = x + x_conv
-
- if not self.layer_norm_first:
- x = self.layer_norm(x)
-
- x = F.dropout(x, p=self.dropout, training=self.training)
-
- # B x T x C -> T x B x C
- x = x.transpose(0, 1)
-
- layer_results = []
- r = None
- for i, layer in enumerate(self.layers):
- dropout_probability = np.random.random()
- if not self.training or (dropout_probability > self.layerdrop):
- x, z = layer(x, self_attn_padding_mask=padding_mask, need_weights=False)
- if tgt_layer is not None:
- layer_results.append((x, z))
- if i == tgt_layer:
- r = x
- break
-
- if r is not None:
- x = r
-
- # T x B x C -> B x T x C
- x = x.transpose(0, 1)
-
- return x, layer_results
-
- def max_positions(self):
- """Maximum output length supported by the encoder."""
- return self.args.max_positions
-
- def upgrade_state_dict_named(self, state_dict, name):
- """Upgrade a (possibly old) state dict for new versions of fairseq."""
- return state_dict
-
-
-class TransformerSentenceEncoderLayer(nn.Module):
- """
- Implements a Transformer Encoder Layer used in BERT/XLM style pre-trained
- models.
- """
-
- def __init__(
- self,
- embedding_dim: float = 768,
- ffn_embedding_dim: float = 3072,
- num_attention_heads: float = 8,
- dropout: float = 0.1,
- attention_dropout: float = 0.1,
- activation_dropout: float = 0.1,
- activation_fn: str = "relu",
- layer_norm_first: bool = False,
- ) -> None:
-
- super().__init__()
- # Initialize parameters
- self.embedding_dim = embedding_dim
- self.dropout = dropout
- self.activation_dropout = activation_dropout
-
- # Initialize blocks
- self.activation_fn = utils.get_activation_fn(activation_fn)
- self.self_attn = MultiheadAttention(
- self.embedding_dim,
- num_attention_heads,
- dropout=attention_dropout,
- self_attention=True,
- )
-
- self.dropout1 = nn.Dropout(dropout)
- self.dropout2 = nn.Dropout(self.activation_dropout)
- self.dropout3 = nn.Dropout(dropout)
-
- self.layer_norm_first = layer_norm_first
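-        # layer_norm_first selects pre-norm (normalize before attention/FFN,
-        # used by the larger wav2vec 2.0 configs) rather than post-norm
-        # (normalize after each residual, as in the original Transformer)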
-
- # layer norm associated with the self attention layer
- self.self_attn_layer_norm = LayerNorm(self.embedding_dim)
- self.fc1 = nn.Linear(self.embedding_dim, ffn_embedding_dim)
- self.fc2 = nn.Linear(ffn_embedding_dim, self.embedding_dim)
-
- # layer norm associated with the position wise feed-forward NN
- self.final_layer_norm = LayerNorm(self.embedding_dim)
-
- def forward(
- self,
- x: torch.Tensor,
- self_attn_mask: torch.Tensor = None,
- self_attn_padding_mask: torch.Tensor = None,
- need_weights: bool = False,
- att_args=None,
- ):
- """
- LayerNorm is applied either before or after the self-attention/ffn
-        modules, similar to the original Transformer implementation.
- """
- residual = x
-
- if self.layer_norm_first:
- x = self.self_attn_layer_norm(x)
- x, attn = self.self_attn(
- query=x,
- key=x,
- value=x,
- key_padding_mask=self_attn_padding_mask,
- attn_mask=self_attn_mask,
- )
- x = self.dropout1(x)
- x = residual + x
-
- residual = x
- x = self.final_layer_norm(x)
- x = self.activation_fn(self.fc1(x))
- x = self.dropout2(x)
- x = self.fc2(x)
- x = self.dropout3(x)
- x = residual + x
- else:
- x, attn = self.self_attn(
- query=x,
- key=x,
- value=x,
- key_padding_mask=self_attn_padding_mask,
- )
-
- x = self.dropout1(x)
- x = residual + x
-
- x = self.self_attn_layer_norm(x)
-
- residual = x
- x = self.activation_fn(self.fc1(x))
- x = self.dropout2(x)
- x = self.fc2(x)
- x = self.dropout3(x)
- x = residual + x
- x = self.final_layer_norm(x)
-
- return x, attn
diff --git a/spaces/gradio/HuBERT/fairseq/trainer.py b/spaces/gradio/HuBERT/fairseq/trainer.py
deleted file mode 100644
index 1deb14326f90dea246b9a1a8d3b97b95c5472a5e..0000000000000000000000000000000000000000
--- a/spaces/gradio/HuBERT/fairseq/trainer.py
+++ /dev/null
@@ -1,1439 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Train a network across multiple GPUs.
-"""
-
-import contextlib
-import logging
-import sys
-import time
-from argparse import Namespace
-from itertools import chain
-from typing import Any, Dict, List
-
-import torch
-from fairseq import checkpoint_utils, models, optim, utils
-from fairseq.dataclass.configs import FairseqConfig
-from fairseq.dataclass.utils import convert_namespace_to_omegaconf
-from fairseq.distributed import utils as distributed_utils
-from fairseq.file_io import PathManager
-from fairseq.logging import meters, metrics
-from fairseq.nan_detector import NanDetector
-from fairseq.optim import lr_scheduler
-from omegaconf import OmegaConf
-
-logger = logging.getLogger(__name__)
-
-
-class Trainer(object):
- """Main class for data parallel training.
-
- This class supports synchronous distributed data parallel training,
- where multiple workers each have a full model replica and gradients
- are accumulated across workers before each update. We use
- :class:`~torch.nn.parallel.DistributedDataParallel` to handle
- communication of the gradients across workers.
- """
-
- def __init__(self, cfg: FairseqConfig, task, model, criterion, quantizer=None):
-
- if isinstance(cfg, Namespace):
- logger.warning(
- "argparse.Namespace configuration is deprecated! Automatically converting to OmegaConf"
- )
- cfg = convert_namespace_to_omegaconf(cfg)
-
- self.cfg = cfg
- self.task = task
-
- # catalog shared parameters
- shared_params = _catalog_shared_params(model)
- self.tpu = cfg.common.tpu
- self.cuda = torch.cuda.is_available() and not cfg.common.cpu and not self.tpu
- if self.cuda:
- self.device = torch.device("cuda")
- elif self.tpu:
- self.device = utils.get_tpu_device()
- else:
- self.device = torch.device("cpu")
-
- if self.cfg.distributed_training.ddp_backend == "fully_sharded":
- if self.cfg.common.bf16:
- raise ValueError(
- "FullyShardedDataParallel is not compatible with --bf16 or "
- "--memory-efficient-bf16"
- )
- if self.cfg.distributed_training.zero_sharding != "none":
- raise ValueError(
- "FullyShardedDataParallel is not compatible with --zero-sharding "
- "option (it's already built in)"
- )
- else:
- if (
- hasattr(self.cfg.distributed_training, "cpu_offload")
- and self.cfg.distributed_training.cpu_offload
- ):
- raise ValueError("--cpu-offload requires --ddp-backend=fully_sharded")
-
- # copy model and criterion to current device/dtype
- self._criterion = criterion
- self._model = model
- if cfg.distributed_training.ddp_backend != "fully_sharded":
- if cfg.common.fp16:
- assert not cfg.common.amp, "Cannot use fp16 and AMP together"
- self._criterion = self._criterion.half()
- self._model = self._model.half()
- elif cfg.common.bf16:
- self._criterion = self._criterion.to(dtype=torch.bfloat16)
- self._model = self._model.to(dtype=torch.bfloat16)
- elif cfg.common.amp:
- self._amp_retries = 0
- if (
- not cfg.distributed_training.pipeline_model_parallel
- # the DistributedFairseqModel wrapper will handle moving to device,
- # so only handle cases which don't use the wrapper
- and not self.use_distributed_wrapper
- ):
- self._criterion = self._criterion.to(device=self.device)
- self._model = self._model.to(device=self.device)
- self.pipeline_model_parallel = cfg.distributed_training.pipeline_model_parallel
- self.last_device = None
- if self.cuda and self.pipeline_model_parallel:
- self.last_device = torch.device(
- cfg.distributed_training.pipeline_devices[-1]
- )
-
- # check that shared parameters are preserved after device transfer
- for shared_param in shared_params:
- ref = _get_module_by_path(self._model, shared_param[0])
- for path in shared_param[1:]:
- logger.info(
- "detected shared parameter: {} <- {}".format(shared_param[0], path)
- )
- _set_module_by_path(self._model, path, ref)
-
- self._dummy_batch = None # indicates we don't have a dummy batch at first
- self._lr_scheduler = None
- self._num_updates = 0
- self._num_xla_compiles = 0 # for TPUs
- self._optim_history = None
- self._optimizer = None
- self._warn_once = set()
- self._wrapped_criterion = None
- self._wrapped_model = None
-
- # TODO(myleott): support tpu
- if self.cuda and self.data_parallel_world_size > 1:
- self._grad_norm_buf = torch.cuda.DoubleTensor(self.data_parallel_world_size)
- else:
- self._grad_norm_buf = None
-
- self.quantizer = quantizer
- if self.quantizer is not None:
- self.quantizer.set_trainer(self)
-
- # get detailed cuda environment
- if self.cuda:
- self.cuda_env = utils.CudaEnvironment()
- if self.data_parallel_world_size > 1:
- self.cuda_env_arr = distributed_utils.all_gather_list(
- self.cuda_env, group=distributed_utils.get_global_group()
- )
- else:
- self.cuda_env_arr = [self.cuda_env]
- if self.data_parallel_rank == 0:
- utils.CudaEnvironment.pretty_print_cuda_env_list(self.cuda_env_arr)
- else:
- self.cuda_env = None
- self.cuda_env_arr = None
-
- metrics.log_start_time("wall", priority=790, round=0)
-
- self._start_time = time.time()
- self._previous_training_time = 0
- self._cumulative_training_time = None
-
- def reinitialize(self):
- """Reinitialize the Trainer, typically after model params change."""
- self._lr_scheduler = None
- self._optimizer = None
- self._wrapped_criterion = None
- self._wrapped_model = None
-
- @property
- def data_parallel_world_size(self):
- if self.cfg.distributed_training.distributed_world_size == 1:
- return 1
- return distributed_utils.get_data_parallel_world_size()
-
- @property
- def data_parallel_process_group(self):
- return distributed_utils.get_data_parallel_group()
-
- @property
- def data_parallel_rank(self):
- if self.cfg.distributed_training.distributed_world_size == 1:
- return 0
- return distributed_utils.get_data_parallel_rank()
-
- @property
- def is_data_parallel_master(self):
- # NOTE: this returns true for all model parallel replicas with data
- # parallel rank 0
- return self.data_parallel_rank == 0
-
- @property
- def use_distributed_wrapper(self) -> bool:
- return (
- self.data_parallel_world_size > 1 and not self.cfg.optimization.use_bmuf
- ) or (
- self.cfg.distributed_training.ddp_backend == "fully_sharded"
- and self.cfg.distributed_training.cpu_offload
- )
-
- @property
- def should_save_checkpoint_on_current_rank(self) -> bool:
- """Indicates whether to save checkpoints on the current DDP rank."""
- if (
- self.cfg.distributed_training.ddp_backend == "fully_sharded"
- and self.cfg.distributed_training.use_sharded_state
- ) or getattr(self.cfg.model, "base_layers", 0) > 0:
- return True
- else:
- return self.is_data_parallel_master
-
- @property
- def always_call_state_dict_during_save_checkpoint(self) -> bool:
- if (
- self.cfg.distributed_training.ddp_backend == "fully_sharded"
- and not self.cfg.distributed_training.use_sharded_state
- ):
- # FSDP calls communication collective when consolidating checkpoints
- return True
- else:
- return False
-
- @property
- def checkpoint_suffix(self) -> str:
- """Suffix to add to the checkpoint file name."""
- if (
- self.cfg.distributed_training.ddp_backend == "fully_sharded"
- and self.cfg.distributed_training.use_sharded_state
- ):
- return self.cfg.checkpoint.checkpoint_suffix + "-shard{0}".format(
- self.data_parallel_rank
- )
- else:
- return self.cfg.checkpoint.checkpoint_suffix or ""
-
- @property
- def criterion(self):
- if self._wrapped_criterion is None:
- if utils.has_parameters(self._criterion) and self.use_distributed_wrapper:
- self._wrapped_criterion = models.DistributedFairseqModel(
- self.cfg.distributed_training,
- self._criterion,
- process_group=self.data_parallel_process_group,
- device=self.device,
- )
- else:
- self._wrapped_criterion = self._criterion
- return self._wrapped_criterion
-
- @property
- def model(self):
- if self._wrapped_model is None:
- if self.use_distributed_wrapper:
- self._wrapped_model = models.DistributedFairseqModel(
- self.cfg.distributed_training,
- self._model,
- process_group=self.data_parallel_process_group,
- device=self.device,
- )
- else:
- self._wrapped_model = self._model
- return self._wrapped_model
-
- @property
- def optimizer(self):
- if self._optimizer is None:
- self._build_optimizer()
- return self._optimizer
-
- @property
- def lr_scheduler(self):
- if self._lr_scheduler is None:
- self._build_optimizer() # this will initialize self._lr_scheduler
- return self._lr_scheduler
-
- def _build_optimizer(self):
- params = list(
- filter(
- lambda p: p.requires_grad,
- chain(self.model.parameters(), self.criterion.parameters()),
- )
- )
-
- if (
- self.cfg.distributed_training.ddp_backend == "fully_sharded"
- and self.cfg.common.fp16
- ):
- # FullyShardedDataParallel always uses MemoryEfficientFP16 wrapper,
- # mostly for the grad scaling. But if we don't have the
- # --memory-efficient-fp16 flag set, then we're effectively doing
- # regular --fp16 and can allow the use of optimizers that would
- # otherwise be unsupported by MemoryEfficientFP16Optimizer.
- allow_unsupported = not self.cfg.common.memory_efficient_fp16
- self._optimizer = optim.MemoryEfficientFP16Optimizer.build_optimizer(
- self.cfg, params, allow_unsupported=allow_unsupported
- )
- elif self.cfg.common.fp16 or self.cfg.common.bf16 or self.cfg.common.amp:
- if self.cuda and torch.cuda.get_device_capability(0)[0] < 7:
- logger.info(
- "NOTE: your device does NOT support faster training with --fp16 or --amp, "
- "please switch to FP32 which is likely to be faster"
- )
- if (
- self.cfg.common.memory_efficient_fp16
- or self.cfg.common.memory_efficient_bf16
- ):
- self._optimizer = optim.MemoryEfficientFP16Optimizer.build_optimizer(
- self.cfg, params
- )
- elif self.cfg.common.amp:
- self._optimizer = optim.AMPOptimizer.build_optimizer(self.cfg, params)
- else:
- self._optimizer = optim.FP16Optimizer.build_optimizer(self.cfg, params)
- else:
- if self.cuda and torch.cuda.get_device_capability(0)[0] >= 7:
- logger.info("NOTE: your device may support faster training with --fp16 or --amp")
- self._optimizer = optim.build_optimizer(self.cfg.optimizer, params)
-
- if self.cfg.distributed_training.ddp_backend == "fully_sharded":
- assert (
- not self.cfg.optimization.use_bmuf
- ), "--ddp-backend=fully_sharded is not compatible with BMUF"
- assert self._optimizer.supports_flat_params, (
- "--ddp-backend=fully_sharded is only compatible with pointwise "
- "optimizers (e.g., Adam, AdamW, Adadelta, Adamax, SGD, etc.). "
- "However, the sharding will result in slightly different results when "
- "using non-pointwise optimizers (e.g., Adagrad, Adafactor, LAMB)"
- )
-
- if self.cfg.optimization.use_bmuf:
- self._optimizer = optim.FairseqBMUF(
- self.cfg.bmuf,
- self._optimizer,
- )
-
- if self.cfg.distributed_training.zero_sharding == "os":
- if (
- self.cfg.common.fp16
- and not self.cfg.common.memory_efficient_fp16
- and not self.cfg.common.memory_efficient_bf16
- ) and not self.cfg.common.fp16_no_flatten_grads:
- raise ValueError(
- "ZeRO is incomptabile with fp16 and flattened grads. "
- "Please use --fp16-no-flatten-grads"
- )
- else:
- optim.shard_(self._optimizer, self.data_parallel_process_group)
-
- # We should initialize the learning rate scheduler immediately after
- # building the optimizer, so that the initial learning rate is set.
- self._lr_scheduler = lr_scheduler.build_lr_scheduler(
- self.cfg.lr_scheduler,
- self.optimizer,
- )
- self._lr_scheduler.step_update(0)
-
- def consolidate_optimizer(self):
- """For OSS, we need to consolidate the state dict."""
- if self.cfg.checkpoint.no_save_optimizer_state:
- return
- self._gathered_optim_state = None
- if hasattr(self.optimizer.optimizer, "consolidate_state_dict"):
- self.optimizer.optimizer.consolidate_state_dict()
-
- elif (
- self.cfg.distributed_training.ddp_backend == "fully_sharded"
- and not self.model.use_sharded_state
- ):
- st = self.model.gather_full_optim_state_dict(
- self.optimizer
- ) # only returns on rank 0
- self._gathered_optim_state = st
-
- def state_dict(self):
- state_dict = {
- "args": None, # legacy
- "cfg": (
- OmegaConf.to_container(self.cfg, resolve=True, enum_to_str=True)
- if OmegaConf.is_config(self.cfg)
- else self.cfg
- ),
- "model": self.model.state_dict(),
- "criterion": (
- self.criterion.state_dict()
- if utils.has_parameters(self.criterion)
- else None
- ),
- "optimizer_history": (self._optim_history or [])
- + [
- {
- "criterion_name": self.get_criterion().__class__.__name__,
- "optimizer_name": self.optimizer.__class__.__name__,
- "lr_scheduler_state": self.lr_scheduler.state_dict(),
- "num_updates": self.get_num_updates(),
- }
- ],
- "task_state": self.task.state_dict() if self.task is not None else {},
- "extra_state": {
- "metrics": metrics.state_dict(),
- "previous_training_time": self.cumulative_training_time(),
- },
- }
- if not self.cfg.checkpoint.no_save_optimizer_state:
- if self._gathered_optim_state is not None:
- state_dict["last_optimizer_state"] = self._gathered_optim_state
- self._gathered_optim_state = None
- else:
- state_dict["last_optimizer_state"] = self.optimizer.state_dict()
- if self.cfg.distributed_training.ddp_backend == "fully_sharded":
- # save meta data for recombining checkpoint upon loading
- state_dict["fsdp_metadata"] = self.model.local_metadata_dict()
- return state_dict
-
- def save_checkpoint(self, filename, extra_state):
- """Save all training state in a checkpoint file."""
- logger.info(f"Saving checkpoint to {filename}")
- # call state_dict on all ranks in case it needs internal communication
- state_dict = utils.move_to_cpu(self.state_dict())
- state_dict["extra_state"].update(extra_state)
- if self.should_save_checkpoint_on_current_rank:
- checkpoint_utils.torch_persistent_save(
- state_dict,
- filename,
- async_write=self.cfg.checkpoint.write_checkpoints_asynchronously,
- )
- logger.info(f"Finished saving checkpoint to {filename}")
-
- def load_checkpoint(
- self,
- filename,
- reset_optimizer=False,
- reset_lr_scheduler=False,
- optimizer_overrides=None,
- reset_meters=False,
- ):
- """
- Load all training state from a checkpoint file.
- rank = 0 will load the checkpoint, and then broadcast it to all
- other ranks.
- """
- extra_state, self._optim_history, last_optim_state = None, [], None
-
- logger.info(f"Preparing to load checkpoint {filename}")
- is_distributed = self.data_parallel_world_size > 1
- bexists = PathManager.isfile(filename)
- if bexists:
- load_on_all_ranks = (
- self.cfg.checkpoint.load_checkpoint_on_all_dp_ranks
- # TPUs don't support broadcast yet, so load checkpoints
- # on every worker for now
- or self.tpu
- # FSDP requires loading checkpoint shards on all ranks
- or (
- self.cfg.distributed_training.ddp_backend == "fully_sharded"
- and self.cfg.distributed_training.use_sharded_state
- )
- or getattr(self.cfg.model, "base_layers", 0) > 0
- )
-
- if load_on_all_ranks or self.data_parallel_rank == 0:
- state = checkpoint_utils.load_checkpoint_to_cpu(
- filename, load_on_all_ranks=load_on_all_ranks
- )
- last_optim_state = state.get("last_optimizer_state", None)
-
- # If doing zero_sharding, do not broadcast global optimizer
- # state. Later we will broadcast sharded states to each rank
- # to avoid memory from exploding.
- if (
- not load_on_all_ranks
- and self.cfg.distributed_training.zero_sharding == "os"
- and "last_optimizer_state" in state
- and is_distributed
- ):
- state["last_optimizer_state"] = "SHARDED"
- else:
- last_optim_state = None
- state = None
-
- if is_distributed and not load_on_all_ranks:
- state = distributed_utils.broadcast_object(
- state,
- src_rank=0,
- group=self.data_parallel_process_group,
- dist_device=self.device,
- )
- if self.data_parallel_rank > 0:
- last_optim_state = state.get("last_optimizer_state", None)
-
- # load model parameters
- try:
- self.model.load_state_dict(
- state["model"], strict=True, model_cfg=self.cfg.model
- )
- # save memory for later steps
- del state["model"]
- if utils.has_parameters(self.get_criterion()):
- self.get_criterion().load_state_dict(
- state["criterion"], strict=True
- )
- del state["criterion"]
-
- except Exception:
- raise Exception(
- "Cannot load model parameters from checkpoint {}; "
- "please ensure that the architectures match.".format(filename)
- )
- extra_state = state["extra_state"]
- self._optim_history = state["optimizer_history"]
-
- if last_optim_state is not None and not reset_optimizer:
- # rebuild optimizer after loading model, since params may have changed
- self._build_optimizer()
-
- # only reload optimizer and lr_scheduler if they match
- last_optim = self._optim_history[-1]
- assert (
- last_optim["criterion_name"] == self.get_criterion().__class__.__name__
- ), f"Criterion does not match; please reset the optimizer (--reset-optimizer). {last_optim['criterion_name']} vs {self.get_criterion().__class__.__name__}"
- assert (
- last_optim["optimizer_name"] == self.optimizer.__class__.__name__
- ), f"Optimizer does not match; please reset the optimizer (--reset-optimizer). {last_optim['optimizer_name']} vs {self.optimizer.__class__.__name__}"
-
- if not reset_lr_scheduler:
- self.lr_scheduler.load_state_dict(last_optim["lr_scheduler_state"])
-
- if (
- self.cfg.distributed_training.ddp_backend == "fully_sharded"
- and not self.model.use_sharded_state
- ):
- # if use_sharded_state, the last_optim_state is already sharded, skip this
- last_optim_state = self.model.get_shard_from_optim_state_dict(
- last_optim_state
- )
- elif not load_on_all_ranks and is_distributed:
- last_optim_state = self.optimizer.broadcast_global_state_dict(
- last_optim_state
- )
-
- self.optimizer.load_state_dict(last_optim_state, optimizer_overrides)
-
- self.set_num_updates(last_optim["num_updates"])
-
- if extra_state is not None:
- itr_state = extra_state["train_iterator"]
- epoch = itr_state["epoch"]
-
- if "previous_training_time" in extra_state:
- self._previous_training_time = extra_state["previous_training_time"]
- self._start_time = time.time()
-
- self.lr_step(epoch)
-
- if (
- itr_state.get("version", 1) >= 2
- and itr_state["iterations_in_epoch"] == 0
- ):
- # reset meters at start of epoch
- reset_meters = True
-
- if "metrics" in extra_state and not reset_meters:
- metrics.load_state_dict(extra_state["metrics"])
-
- # reset TimeMeters, since their start times don't make sense anymore
- for meter in metrics.get_meters("default"):
- if isinstance(meter, meters.TimeMeter):
- meter.reset()
-
- logger.info(
- "Loaded checkpoint {} (epoch {} @ {} updates)".format(
- filename, epoch, self.get_num_updates()
- )
- )
-
- else:
- logger.info("No existing checkpoint found {}".format(filename))
-
- return extra_state
-
- def get_train_iterator(
- self,
- epoch,
- combine=True,
- load_dataset=True,
- data_selector=None,
- shard_batch_itr=True,
- disable_iterator_cache=False,
- ):
- """Return an EpochBatchIterator over the training set for a given epoch."""
- if load_dataset:
- logger.info("loading train data for epoch {}".format(epoch))
- self.task.load_dataset(
- self.cfg.dataset.train_subset,
- epoch=epoch,
- combine=combine,
- data_selector=data_selector,
- tpu=self.tpu,
- )
- batch_iterator = self.task.get_batch_iterator(
- dataset=self.task.dataset(self.cfg.dataset.train_subset),
- max_tokens=self.cfg.dataset.max_tokens,
- max_sentences=self.cfg.dataset.batch_size,
- max_positions=utils.resolve_max_positions(
- self.task.max_positions(),
- self.model.max_positions(),
- self.cfg.dataset.max_tokens,
- ),
- ignore_invalid_inputs=True,
- required_batch_size_multiple=self.cfg.dataset.required_batch_size_multiple,
- seed=self.cfg.common.seed,
- num_shards=self.data_parallel_world_size if shard_batch_itr else 1,
- shard_id=self.data_parallel_rank if shard_batch_itr else 0,
- num_workers=self.cfg.dataset.num_workers,
- epoch=epoch,
- data_buffer_size=self.cfg.dataset.data_buffer_size,
- disable_iterator_cache=disable_iterator_cache,
- )
- self.reset_dummy_batch(batch_iterator.first_batch)
- return batch_iterator
-
- def get_valid_iterator(
- self,
- subset,
- disable_iterator_cache=False,
- ):
- """Return an EpochBatchIterator over given validation subset for a given epoch."""
- batch_iterator = self.task.get_batch_iterator(
- dataset=self.task.dataset(subset),
- max_tokens=self.cfg.dataset.max_tokens_valid,
- max_sentences=self.cfg.dataset.batch_size_valid,
- max_positions=utils.resolve_max_positions(
- self.task.max_positions(),
- self.model.max_positions(),
- ),
- ignore_invalid_inputs=self.cfg.dataset.skip_invalid_size_inputs_valid_test,
- required_batch_size_multiple=self.cfg.dataset.required_batch_size_multiple,
- seed=self.cfg.common.seed,
- num_shards=self.data_parallel_world_size,
- shard_id=self.data_parallel_rank,
- num_workers=self.cfg.dataset.num_workers,
- # always pass a fixed "epoch" to keep validation data consistent
- # across training epochs
- epoch=1,
- data_buffer_size=self.cfg.dataset.data_buffer_size,
- disable_iterator_cache=disable_iterator_cache,
- )
- self.reset_dummy_batch(batch_iterator.first_batch)
- return batch_iterator
-
- def begin_epoch(self, epoch):
- """Called at the beginning of each epoch."""
- logger.info("begin training epoch {}".format(epoch))
-
- self.lr_step_begin_epoch(epoch)
-
- if self.quantizer is not None:
- self.quantizer.begin_epoch(epoch)
-
- # task specific setup per epoch
- self.task.begin_epoch(epoch, self.get_model())
-
- if self.tpu:
- import torch_xla.core.xla_model as xm
-
- xm.rendezvous("begin_epoch") # wait for all workers
- xm.mark_step()
-
- def begin_valid_epoch(self, epoch):
- """Called at the beginning of each validation epoch."""
-
- # task specific setup per validation epoch
- self.task.begin_valid_epoch(epoch, self.get_model())
-
- def reset_dummy_batch(self, batch):
- self._dummy_batch = batch
-
- @metrics.aggregate("train")
- def train_step(self, samples, raise_oom=False):
- """Do forward, backward and parameter update."""
- self._set_seed()
- self.model.train()
- self.criterion.train()
- self.zero_grad()
-
- metrics.log_start_time("train_wall", priority=800, round=0)
-
- # forward and backward pass
- logging_outputs, sample_size, ooms = [], 0, 0
- for i, sample in enumerate(samples): # delayed update loop
- sample, is_dummy_batch = self._prepare_sample(sample)
-
- def maybe_no_sync():
- """
- Whenever *samples* contains more than one mini-batch, we
- want to accumulate gradients locally and only call
- all-reduce in the last backwards pass.
- """
- if (
- self.data_parallel_world_size > 1
- and hasattr(self.model, "no_sync")
- and i < len(samples) - 1
- ):
- return self.model.no_sync()
- else:
- return contextlib.ExitStack() # dummy contextmanager
-
- try:
- with maybe_no_sync():
- # forward and backward
- loss, sample_size_i, logging_output = self.task.train_step(
- sample=sample,
- model=self.model,
- criterion=self.criterion,
- optimizer=self.optimizer,
- update_num=self.get_num_updates(),
- ignore_grad=is_dummy_batch,
- )
- del loss
-
- logging_outputs.append(logging_output)
- sample_size += sample_size_i
-
- # emptying the CUDA cache after the first step can
- # reduce the chance of OOM
- if self.cuda and self.get_num_updates() == 0:
- torch.cuda.empty_cache()
- except RuntimeError as e:
- if "out of memory" in str(e):
- self._log_oom(e)
- if raise_oom:
- raise e
- logger.warning(
- "attempting to recover from OOM in forward/backward pass"
- )
- ooms += 1
- self.zero_grad()
- if self.cuda:
- torch.cuda.empty_cache()
- if self.cfg.distributed_training.distributed_world_size == 1:
- return None
- else:
- raise e
-
- if self.tpu and i < len(samples) - 1:
- # tpu-comment: every XLA operation before marking step is
- # appended to the IR graph, and processing too many batches
- # before marking step can lead to OOM errors.
- # To handle gradient accumulation use case, we explicitly
- # mark step here for every forward pass without a backward pass
- self._xla_markstep_and_send_to_cpu()
-
- if is_dummy_batch:
- if torch.is_tensor(sample_size):
- sample_size.zero_()
- else:
- sample_size *= 0.0
-
- if torch.is_tensor(sample_size):
- sample_size = sample_size.float()
- else:
- sample_size = float(sample_size)
-
- # gather logging outputs from all replicas
- if self._sync_stats():
- train_time = self._local_cumulative_training_time()
- logging_outputs, (
- sample_size,
- ooms,
- total_train_time,
- ) = self._aggregate_logging_outputs(
- logging_outputs, sample_size, ooms, train_time, ignore=is_dummy_batch
- )
- self._cumulative_training_time = (
- total_train_time / self.data_parallel_world_size
- )
-
- overflow = False
- try:
- with torch.autograd.profiler.record_function("reduce-grads"):
- # reduce gradients across workers
- self.optimizer.all_reduce_grads(self.model)
- if utils.has_parameters(self.criterion):
- self.optimizer.all_reduce_grads(self.criterion)
-
- with torch.autograd.profiler.record_function("multiply-grads"):
- # multiply gradients by (data_parallel_size / sample_size) since
- # DDP normalizes by the number of data parallel workers for
- # improved fp16 precision.
- # Thus we get (sum_of_gradients / sample_size) at the end.
- # In case of fp16, this step also undoes loss scaling.
- # (Debugging note: Some optimizers perform this scaling on the
- # fly, so inspecting model.parameters() or optimizer.params may
- # still show the original, unscaled gradients.)
- numer = (
- self.data_parallel_world_size
- if not self.cfg.optimization.use_bmuf or self._sync_stats()
- else 1
- )
- self.optimizer.multiply_grads(numer / (sample_size or 1.0))
- # Note: (sample_size or 1.0) handles the case of a zero gradient, in a
- # way that avoids CPU/device transfers in case sample_size is a GPU or
- # TPU object. The assumption is that the gradient itself is also 0.
-
- with torch.autograd.profiler.record_function("clip-grads"):
- # clip grads
- grad_norm = self.clip_grad_norm(self.cfg.optimization.clip_norm)
-
- # check that grad norms are consistent across workers
- # on tpu check tensor is slow
- if not self.tpu:
- if (
- not self.cfg.optimization.use_bmuf
- and self.cfg.distributed_training.ddp_backend != "slow_mo"
- ):
- self._check_grad_norms(grad_norm)
- if not torch.isfinite(grad_norm).all():
- # in case of AMP, if gradients are Nan/Inf then
- # optimizer step is still required
- if self.cfg.common.amp:
- overflow = True
- else:
- # check local gradnorm single GPU case, trigger NanDetector
- raise FloatingPointError("gradients are Nan/Inf")
-
- with torch.autograd.profiler.record_function("optimizer"):
- # take an optimization step
- self.task.optimizer_step(
- self.optimizer, model=self.model, update_num=self.get_num_updates()
- )
- if self.cfg.common.amp and overflow:
- if self._amp_retries == self.cfg.common.amp_batch_retries:
- logger.info("AMP: skipping this batch.")
- self._amp_retries = 0
- else:
- self._amp_retries += 1
- return self.train_step(samples, raise_oom) # recursion to feed in same batch
-
- except FloatingPointError:
- # re-run the forward and backward pass with hooks attached to print
- # out where it fails
- self.zero_grad()
- with NanDetector(self.get_model()):
- for _, sample in enumerate(samples):
- sample, _ = self._prepare_sample(sample)
- self.task.train_step(
- sample,
- self.model,
- self.criterion,
- self.optimizer,
- self.get_num_updates(),
- ignore_grad=False,
- )
- raise
- except OverflowError as e:
- overflow = True
- logger.info(
- f"NOTE: gradient overflow detected, ignoring gradient, {str(e)}"
- )
- grad_norm = torch.tensor(0.0).cuda()
- self.zero_grad()
- except RuntimeError as e:
- if "out of memory" in str(e):
- self._log_oom(e)
- logger.error("OOM during optimization, irrecoverable")
- raise e
-
- # Some distributed wrappers (e.g., SlowMo) need access to the optimizer
- # after the step
- if hasattr(self.model, "perform_additional_optimizer_actions"):
- if hasattr(self.optimizer, "fp32_params"):
- self.model.perform_additional_optimizer_actions(
- self.optimizer.optimizer, self.optimizer.fp32_params
- )
- else:
- self.model.perform_additional_optimizer_actions(
- self.optimizer.optimizer
- )
-
- logging_output = None
- if not overflow or self.cfg.distributed_training.ddp_backend == "slow_mo":
- self.set_num_updates(self.get_num_updates() + 1)
-
- if self.tpu:
- import torch_xla.core.xla_model as xm
-
- # mark step on TPUs
- self._xla_markstep_and_send_to_cpu()
-
- # only log stats every log_interval steps
- # this causes wps to be misreported when log_interval > 1
- logging_output = {}
- if self.get_num_updates() % self.cfg.common.log_interval == 0:
- # log memory usage
- mem_info = xm.get_memory_info(self.device)
- gb_free = mem_info["kb_free"] / 1024 / 1024
- gb_total = mem_info["kb_total"] / 1024 / 1024
- metrics.log_scalar(
- "gb_free", gb_free, priority=1500, round=1, weight=0
- )
- metrics.log_scalar(
- "gb_total", gb_total, priority=1600, round=1, weight=0
- )
- logging_outputs = self._xla_markstep_and_send_to_cpu(
- logging_outputs
- )
- logging_output = self._reduce_and_log_stats(
- logging_outputs, sample_size, grad_norm
- )
-
- # log whenever there's an XLA compilation, since these
- # slow down training and may indicate opportunities for
- # optimization
- self._check_xla_compilation()
- else:
- if self.cuda and self.cuda_env is not None:
- # log minimum free memory over the iteration
- gb_used = torch.cuda.max_memory_allocated() / 1024 / 1024 / 1024
- torch.cuda.reset_peak_memory_stats()
- gb_free = self.cuda_env.total_memory_in_GB - gb_used
- metrics.log_scalar(
- "gb_free", gb_free, priority=1500, round=1, weight=0
- )
-
- # log stats
- logging_output = self._reduce_and_log_stats(
- logging_outputs, sample_size, grad_norm
- )
-
- # clear CUDA cache to reduce memory fragmentation
- if (
- self.cuda
- and self.cfg.common.empty_cache_freq > 0
- and (
- (self.get_num_updates() + self.cfg.common.empty_cache_freq - 1)
- % self.cfg.common.empty_cache_freq
- )
- == 0
- ):
- torch.cuda.empty_cache()
-
- if self.cfg.common.fp16 or self.cfg.common.amp:
- metrics.log_scalar(
- "loss_scale",
- (
- self.optimizer.scaler.loss_scale
- if self.cfg.common.fp16
- else self.optimizer.scaler.get_scale()
- ),
- priority=700,
- round=4,
- weight=0,
- )
-
- metrics.log_stop_time("train_wall")
- return logging_output
-
- @metrics.aggregate("valid")
- def valid_step(self, sample, raise_oom=False):
- """Do forward pass in evaluation mode."""
- if self.tpu:
- import torch_xla.core.xla_model as xm
-
- xm.rendezvous("valid_step") # wait for all workers
-
- with torch.no_grad():
- self.model.eval()
- self.criterion.eval()
-
- sample, is_dummy_batch = self._prepare_sample(sample)
-
- try:
- _loss, sample_size, logging_output = self.task.valid_step(
- sample, self.model, self.criterion
- )
- except RuntimeError as e:
- if "out of memory" in str(e):
- self._log_oom(e)
- if not raise_oom:
- logger.warning(
- "ran out of memory in validation step, retrying batch"
- )
- for p in self.model.parameters():
- if p.grad is not None:
- p.grad = None # free some memory
- if self.cuda:
- torch.cuda.empty_cache()
- return self.valid_step(sample, raise_oom=True)
- raise e
-
- logging_outputs = [logging_output]
- if is_dummy_batch:
- if torch.is_tensor(sample_size):
- sample_size.zero_()
- else:
- sample_size *= 0.0
-
- # gather logging outputs from all replicas
- if self.data_parallel_world_size > 1:
- logging_outputs, (sample_size,) = self._aggregate_logging_outputs(
- logging_outputs,
- sample_size,
- ignore=is_dummy_batch,
- )
-
- # log validation stats
- if self.tpu:
- logging_outputs = self._xla_markstep_and_send_to_cpu(logging_outputs)
- logging_output = self._reduce_and_log_stats(logging_outputs, sample_size)
-
- return logging_output
-
- def zero_grad(self):
- self.optimizer.zero_grad()
-
- def lr_step_begin_epoch(self, epoch):
- """Adjust the learning rate at the beginning of the epoch."""
- self.lr_scheduler.step_begin_epoch(epoch)
- # prefer updating the LR based on the number of steps
- return self.lr_step_update()
-
- def lr_step(self, epoch, val_loss=None):
- """Adjust the learning rate at the end of the epoch."""
- self.lr_scheduler.step(epoch, val_loss)
- # prefer updating the LR based on the number of steps
- return self.lr_step_update()
-
- def lr_step_update(self):
- """Update the learning rate after each update."""
- new_lr = self.lr_scheduler.step_update(self.get_num_updates())
- if isinstance(new_lr, dict):
- for k, v in new_lr.items():
- metrics.log_scalar(f"lr_{k}", v, weight=0, priority=300)
- new_lr = new_lr.get("default", next(iter(new_lr.values())))
- else:
- metrics.log_scalar("lr", new_lr, weight=0, priority=300)
- return new_lr
-
- def get_lr(self):
- """Get the current learning rate."""
- return self.optimizer.get_lr()
-
- def get_model(self):
- """Get the (non-wrapped) model instance."""
- return self._model
-
- def get_criterion(self):
- """Get the (non-wrapped) criterion instance."""
- return self._criterion
-
- def get_meter(self, name):
- """[deprecated] Get a specific meter by name."""
- from fairseq import meters
-
- if "get_meter" not in self._warn_once:
- self._warn_once.add("get_meter")
- utils.deprecation_warning(
- "Trainer.get_meter is deprecated. Please use fairseq.metrics instead."
- )
-
- train_meters = metrics.get_meters("train")
- if train_meters is None:
- train_meters = {}
-
- if name == "train_loss" and "loss" in train_meters:
- return train_meters["loss"]
- elif name == "train_nll_loss":
- # support for legacy train.py, which assumed this meter is
- # always initialized
- m = train_meters.get("nll_loss", None)
- return m or meters.AverageMeter()
- elif name == "wall":
- # support for legacy train.py, which assumed this meter is
- # always initialized
- m = metrics.get_meter("default", "wall")
- return m or meters.TimeMeter()
- elif name == "wps":
- m = metrics.get_meter("train", "wps")
- return m or meters.TimeMeter()
- elif name in {"valid_loss", "valid_nll_loss"}:
- # support for legacy train.py, which assumed these meters
- # are always initialized
- k = name[len("valid_") :]
- m = metrics.get_meter("valid", k)
- return m or meters.AverageMeter()
- elif name == "oom":
- return meters.AverageMeter()
- elif name in train_meters:
- return train_meters[name]
- return None
-
- def get_num_updates(self):
- """Get the number of parameters updates."""
- return self._num_updates
-
- def set_num_updates(self, num_updates):
- """Set the number of parameters updates."""
- self._num_updates = num_updates
- self.lr_step_update()
- if self.quantizer:
- self.quantizer.step_update(self._num_updates)
- metrics.log_scalar("num_updates", self._num_updates, weight=0, priority=200)
-
- def clip_grad_norm(self, clip_norm):
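-        # with --ddp-backend=fully_sharded each rank only holds a shard of
-        # the parameters, so local norms are squared, summed across ranks,
-        # and square-rooted to recover the global gradient norm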
- def agg_norm_fn(total_norm):
- total_norm = total_norm.cuda().float() ** 2
- total_norm = distributed_utils.all_reduce(
- total_norm, group=self.data_parallel_process_group
- )
- return total_norm ** 0.5
-
- should_agg_norm = (
- self.cfg.distributed_training.ddp_backend == "fully_sharded"
- and (
- self.data_parallel_process_group is not None
- or torch.distributed.is_initialized()
- )
- )
- return self.optimizer.clip_grad_norm(
- clip_norm, aggregate_norm_fn=agg_norm_fn if should_agg_norm else None
- )
-
- def cumulative_training_time(self):
- if self._cumulative_training_time is None:
- # single GPU
- return self._local_cumulative_training_time()
- else:
- return self._cumulative_training_time
-
- def _local_cumulative_training_time(self):
- """Aggregate training time in seconds."""
- return time.time() - self._start_time + self._previous_training_time
-
- def _fp_convert_sample(self, sample):
- def apply_half(t):
- if t.dtype is torch.float32:
- return t.to(dtype=torch.half)
- return t
-
- def apply_bfloat16(t):
- if t.dtype is torch.float32:
- return t.to(dtype=torch.bfloat16)
- return t
-
- if self.cfg.common.fp16:
- sample = utils.apply_to_sample(apply_half, sample)
-
- if self.cfg.common.bf16:
- sample = utils.apply_to_sample(apply_bfloat16, sample)
-
- return sample
-
- def _prepare_sample(self, sample, is_dummy=False):
- if sample == "DUMMY":
- raise Exception(
- "Trying to use an uninitialized 'dummy' batch. This usually indicates "
- "that the total number of batches is smaller than the number of "
- "participating GPUs. Try reducing the batch size or using fewer GPUs."
- )
-
- if sample is None or len(sample) == 0:
- assert (
- self._dummy_batch is not None and len(self._dummy_batch) > 0
- ), "Invalid dummy batch: {}".format(self._dummy_batch)
- sample, _ = self._prepare_sample(self._dummy_batch, is_dummy=True)
- return sample, True
-
-        # Since PCIe/NVLink bandwidth is significantly smaller than DRAM
-        # bandwidth, it makes sense to do the format conversion on the CPU
-        # and then transfer a smaller buffer to the device. This also saves
-        # GPU memory capacity.
-
- if self.cfg.common.on_cpu_convert_precision:
- sample = self._fp_convert_sample(sample)
-
- if self.cuda:
- if self.pipeline_model_parallel:
- if 'target' in sample:
- sample['target'] = utils.move_to_cuda(sample['target'], device=self.last_device)
- else:
- sample = utils.move_to_cuda(sample)
- elif self.tpu and is_dummy:
- # the dummy batch may not be on the appropriate device
- sample = utils.move_to_cuda(sample, device=self.device)
-
- if not self.cfg.common.on_cpu_convert_precision:
- sample = self._fp_convert_sample(sample)
-
- if self._dummy_batch == "DUMMY":
- self._dummy_batch = sample
-
- return sample, False
-
- def _set_seed(self):
- # Set seed based on args.seed and the update number so that we get
- # reproducible results when resuming from checkpoints
- seed = self.cfg.common.seed + self.get_num_updates()
- utils.set_torch_seed(seed)
-
- def _sync_stats(self):
-        # Return True when training on multiple GPUs with plain DDP, or with
-        # BMUF once this update is a BMUF sync step and the warmup iterations
-        # have completed.
- if self.data_parallel_world_size == 1:
- return False
- elif self.cfg.optimization.use_bmuf:
- return (
- self.get_num_updates() + 1
- ) % self.cfg.bmuf.global_sync_iter == 0 and (
- self.get_num_updates() + 1
- ) > self.cfg.bmuf.warmup_iterations
- else:
- return True
-
- def _log_oom(self, exc):
- msg = "OOM: Ran out of memory with exception: {}".format(exc)
- logger.warning(msg)
- if torch.cuda.is_available() and hasattr(torch.cuda, "memory_summary"):
- for device_idx in range(torch.cuda.device_count()):
- logger.warning(torch.cuda.memory_summary(device=device_idx))
- sys.stderr.flush()
-
- def _aggregate_logging_outputs(
- self,
- logging_outputs: List[Dict[str, Any]],
- *extra_stats_to_sum,
- ignore=False,
- ):
- if self.task.__class__.logging_outputs_can_be_summed(self.get_criterion()):
- return self._fast_stat_sync_sum(
- logging_outputs, *extra_stats_to_sum, ignore=ignore
- )
- else:
- return self._all_gather_list_sync(
- logging_outputs, *extra_stats_to_sum, ignore=ignore
- )
-
- def _all_gather_list_sync(
- self,
- logging_outputs: List[Dict[str, Any]],
- *extra_stats_to_sum,
- ignore=False,
- ):
- """
- Sync logging outputs across workers. all_gather_list_sync is
- suitable when logging outputs are complex types.
- """
- if self.tpu:
- raise NotImplementedError
- if ignore:
- logging_outputs = []
- results = list(
- zip(
- *distributed_utils.all_gather_list(
- [logging_outputs] + list(extra_stats_to_sum),
- max_size=getattr(self.cfg.common, "all_gather_list_size", 16384),
- group=self.data_parallel_process_group,
- )
- )
- )
- logging_outputs, extra_stats_to_sum = results[0], results[1:]
- logging_outputs = list(chain.from_iterable(logging_outputs))
- extra_stats_to_sum = [sum(s) for s in extra_stats_to_sum]
- return logging_outputs, extra_stats_to_sum
-
- def _fast_stat_sync_sum(
- self,
- logging_outputs: List[Dict[str, Any]],
- *extra_stats_to_sum,
- ignore=False,
- ):
- """
- Sync logging outputs across workers. fast_stat_sync_sum is
- faster than all_gather_list_sync, but is only suitable when
- logging outputs are scalars and can be summed. Note that
- *logging_outputs* cannot contain any nested dicts/lists.
- """
- data = {}
- for i, stat in enumerate(extra_stats_to_sum):
- data["extra_stats_" + str(i)] = stat
- if len(logging_outputs) > 0:
- log_keys = list(logging_outputs[0].keys())
- for k in log_keys:
- if not ignore:
- v = sum(log[k] for log in logging_outputs if k in log)
- else:
- v = logging_outputs[0][k]
- v = torch.zeros_like(v) if torch.is_tensor(v) else 0
- data["logging_outputs_" + k] = v
- else:
- log_keys = None
-
- data = distributed_utils.all_reduce_dict(
- data, device=self.device, group=self.data_parallel_process_group
- )
-
- extra_stats_to_sum = [
- data["extra_stats_" + str(i)] for i in range(len(extra_stats_to_sum))
- ]
- if log_keys is not None:
- logging_outputs = [{k: data["logging_outputs_" + k] for k in log_keys}]
- else:
- logging_outputs = []
- return logging_outputs, extra_stats_to_sum
-
- def _check_grad_norms(self, grad_norm):
- """Check that grad norms are consistent across workers."""
- if self._grad_norm_buf is not None:
- self._grad_norm_buf.zero_()
- self._grad_norm_buf[self.data_parallel_rank] = grad_norm
- distributed_utils.all_reduce(
- self._grad_norm_buf, group=self.data_parallel_process_group
- )
-
- def is_consistent(tensor):
- max_abs_diff = torch.max(torch.abs(tensor - tensor[0]))
- return (
- (torch.isfinite(tensor).all()
- and (max_abs_diff / (tensor[0] + 1e-6) < 1e-6).all())
- or
- (self.cfg.common.amp and not torch.isfinite(tensor).all())
- # in case of amp non-finite grads are fine
- )
-
- if not is_consistent(self._grad_norm_buf):
- pretty_detail = "\n".join(
- "rank {:3d} = {:.8f}".format(r, n)
- for r, n in enumerate(self._grad_norm_buf.tolist())
- )
- error_detail = "grad_norm across the workers:\n{}\n".format(
- pretty_detail
- )
- # use FloatingPointError to trigger NanDetector
- raise FloatingPointError(
- "Fatal error: gradients are inconsistent between workers. "
- "Try --ddp-backend=legacy_ddp. "
-                "Or are you mixing up different generations of GPUs in training?"
- + "\n"
- + "-" * 80
- + "\n{}\n".format(error_detail)
- + "-" * 80
- )
-
- def _reduce_and_log_stats(self, logging_outputs, sample_size, grad_norm=None):
- if grad_norm is not None and (
- not torch.is_tensor(grad_norm) or torch.isfinite(grad_norm)
- ):
- metrics.log_speed("ups", 1.0, priority=100, round=2)
- metrics.log_scalar("gnorm", grad_norm, priority=400, round=3)
- if self.cfg.optimization.clip_norm > 0:
- metrics.log_scalar(
- "clip",
- torch.where(
- grad_norm > self.cfg.optimization.clip_norm,
- grad_norm.new_tensor(100),
- grad_norm.new_tensor(0),
- ),
- priority=500,
- round=1,
- )
-
- with metrics.aggregate() as agg:
- if logging_outputs is not None:
- self.task.reduce_metrics(logging_outputs, self.get_criterion())
- del logging_outputs
-
- # extra warning for criterions that don't properly log a loss value
- if "loss" not in agg:
- if "loss" not in self._warn_once:
- self._warn_once.add("loss")
- logger.warning(
- "Criterion.reduce_metrics did not log a 'loss' value, "
- "which may break some functionality"
- )
- metrics.log_scalar("loss", -1)
-
- # support legacy interface
- if self.tpu:
- logging_output = {}
- else:
- logging_output = agg.get_smoothed_values()
- logging_output["sample_size"] = sample_size
- for key_to_delete in ["ppl", "wps", "wpb", "bsz"]:
- if key_to_delete in logging_output:
- del logging_output[key_to_delete]
- return logging_output
-
- def _check_xla_compilation(self):
- import torch_xla.debug.metrics as met
-
- compile_stats = met.metric_data("CompileTime")
- if compile_stats is None:
- return
- num_xla_compiles = compile_stats[0]
- if num_xla_compiles > self._num_xla_compiles:
- logger.warning(
- "XLA compilation detected on device #{}; too many of these can lead "
- "to slow training, but we expect a few in the beginning".format(
- self.cfg.distributed_training.distributed_rank
- )
- )
- self._num_xla_compiles = num_xla_compiles
-
- def _xla_markstep_and_send_to_cpu(self, data=None):
- import torch_xla.core.xla_model as xm
-
- xm.mark_step()
- if data is not None:
- from fairseq.utils import xla_device_to_cpu
-
- return xla_device_to_cpu(data)
-
-
-def _catalog_shared_params(module, memo=None, prefix=""):
- if memo is None:
- first_call = True
- memo = {}
- else:
- first_call = False
- for name, param in module._parameters.items():
- param_prefix = prefix + ("." if prefix else "") + name
- if param not in memo:
- memo[param] = []
- memo[param].append(param_prefix)
- for name, m in module._modules.items():
- if m is None:
- continue
- submodule_prefix = prefix + ("." if prefix else "") + name
- _catalog_shared_params(m, memo, submodule_prefix)
- if first_call:
- return [x for x in memo.values() if len(x) > 1]
-
-
-def _get_module_by_path(module, path):
- path = path.split(".")
- for name in path:
- module = getattr(module, name)
- return module
-
-
-def _set_module_by_path(module, path, value):
- path = path.split(".")
- for name in path[:-1]:
- module = getattr(module, name)
- setattr(module, path[-1], value)
diff --git a/spaces/gradio/HuBERT/scripts/test_fsdp.sh b/spaces/gradio/HuBERT/scripts/test_fsdp.sh
deleted file mode 100644
index 1f428a035e4474427ded991f8e8307ea59f61f69..0000000000000000000000000000000000000000
--- a/spaces/gradio/HuBERT/scripts/test_fsdp.sh
+++ /dev/null
@@ -1,24 +0,0 @@
-#!/usr/bin/env bash
-rm -rf fsdp_dummy
-mkdir -p fsdp_dummy
-CUDA_VISIBLE_DEVICES=0,1,2,3 fairseq-train /private/home/sshleifer/data-bin/stories_mmap \
- --ddp-backend fully_sharded --fp16 --fp16-init-scale 4 \
- --cpu-offload --checkpoint-activations \
- --task language_modeling --tokens-per-sample 256 --batch-size 8 \
- --arch transformer_lm_gpt2_tiny \
- --optimizer cpu_adam --adam-betas "(0.9,0.98)" \
- --lr 0.0001 --lr-scheduler polynomial_decay --warmup-updates 5 --total-num-update 10 \
- --max-update 5 --log-format json --log-interval 1 \
- --save-interval-updates 5 --save-dir fsdp_dummy --disable-validation \
- --restore-file x.pt "$@"
-
-# Now we try to load the checkpoint
-CUDA_VISIBLE_DEVICES=0,1 fairseq-train /private/home/sshleifer/data-bin/stories_mmap \
- --ddp-backend fully_sharded --fp16 --fp16-init-scale 4 \
- --cpu-offload --checkpoint-activations \
- --task language_modeling --tokens-per-sample 256 --batch-size 8 \
- --arch transformer_lm_gpt2_tiny \
- --optimizer cpu_adam --adam-betas "(0.9,0.98)" \
- --lr 0.0001 --lr-scheduler polynomial_decay --warmup-updates 5 --total-num-update 10 \
- --max-update 2 --log-format json --log-interval 1 \
- --save-interval-updates 2 --save-dir fsdp_dummy
diff --git a/spaces/guoyww/AnimateDiff/download_bashscripts/3-RcnzCartoon.sh b/spaces/guoyww/AnimateDiff/download_bashscripts/3-RcnzCartoon.sh
deleted file mode 100644
index 07f4f69d399e10b0a618501d7f72bcf7da571dd0..0000000000000000000000000000000000000000
--- a/spaces/guoyww/AnimateDiff/download_bashscripts/3-RcnzCartoon.sh
+++ /dev/null
@@ -1,2 +0,0 @@
-#!/bin/bash
-wget https://civitai.com/api/download/models/71009 -P models/DreamBooth_LoRA/ --content-disposition --no-check-certificate
\ No newline at end of file
diff --git a/spaces/h2oai/h2ogpt-chatbot/gradio_utils/__init__.py b/spaces/h2oai/h2ogpt-chatbot/gradio_utils/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/h2oai/h2ogpt-chatbot2/README.md b/spaces/h2oai/h2ogpt-chatbot2/README.md
deleted file mode 100644
index 420745b2dcb76877686b17a0e2f090721503fa43..0000000000000000000000000000000000000000
--- a/spaces/h2oai/h2ogpt-chatbot2/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: H2ogpt Chatbot
-emoji: 📚
-colorFrom: yellow
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.41.2
-app_file: app.py
-pinned: false
-license: apache-2.0
-duplicated_from: h2oai/h2ogpt-chatbot
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/h2oai/wave-tour/examples/stat_small_series_area.py b/spaces/h2oai/wave-tour/examples/stat_small_series_area.py
deleted file mode 100644
index dea80fe806c30a5ee95bcf793351c2c049b6f525..0000000000000000000000000000000000000000
--- a/spaces/h2oai/wave-tour/examples/stat_small_series_area.py
+++ /dev/null
@@ -1,44 +0,0 @@
-# Stat / Series / Small / Area
-# Create a small stat card displaying a primary value and a series plot.
-# #stat_card #series
-# ---
-import time
-
-from faker import Faker
-
-from synth import FakeCategoricalSeries
-from h2o_wave import site, ui, data
-
-page = site['/demo']
-
-colors = '$red $pink $blue $azure $cyan $teal $mint $green $lime $yellow $amber $orange $tangerine'.split()
-curves = 'linear smooth step step-after step-before'.split()
-fake = Faker()
-cards = []
-for i in range(len(curves)):
- f = FakeCategoricalSeries()
- cat, val, pc = f.next()
- c = page.add(f'example{i}', ui.small_series_stat_card(
- box=f'1 {i + 1} 1 1',
- title=fake.cryptocurrency_name(),
- value='=${{intl qux minimum_fraction_digits=2 maximum_fraction_digits=2}}',
- data=dict(qux=val, quux=pc),
- plot_category='foo',
- plot_type='area',
- plot_value='qux',
- plot_color=colors[i],
- plot_data=data('foo qux', -15),
- plot_zero_value=0,
- plot_curve=curves[i],
- ))
- cards.append((f, c))
-page.save()
-
-while True:
- time.sleep(1)
- for f, c in cards:
- cat, val, pc = f.next()
- c.data.qux = val
- c.data.quux = pc
- c.plot_data[-1] = [cat, val]
- page.save()
diff --git a/spaces/h2oai/wave-tour/examples/table_groupby.py b/spaces/h2oai/wave-tour/examples/table_groupby.py
deleted file mode 100644
index 4a8c8f80634700a0a98f61ea02c5a124793d0cc2..0000000000000000000000000000000000000000
--- a/spaces/h2oai/wave-tour/examples/table_groupby.py
+++ /dev/null
@@ -1,60 +0,0 @@
-# Table / Group by
-# Allow grouping a table by column values.
-# #table
-# ---
-import random
-from faker import Faker
-from h2o_wave import main, app, Q, ui
-
-fake = Faker()
-
-_id = 0
-
-
-class Issue:
- def __init__(self, text: str, status: str, progress: float, icon: str, notifications: str):
- global _id
- _id += 1
- self.id = f'I{_id}'
- self.text = text
- self.status = status
- self.views = 0
- self.progress = progress
- self.icon = icon
- self.notifications = notifications
-
-
-# Create some issues
-issues = [
- Issue(
- text=fake.sentence(),
- status=('Closed' if i % 2 == 0 else 'Open'),
- progress=random.random(),
- icon=('BoxCheckmarkSolid' if random.random() > 0.5 else 'BoxMultiplySolid'),
- notifications=('Off' if random.random() > 0.5 else 'On')) for i in range(100)
-]
-
-# Create columns for our issue table.
-columns = [
- ui.table_column(name='text', label='Issue'),
- ui.table_column(name='status', label='Status'),
- ui.table_column(name='notifications', label='Notifications'),
- ui.table_column(name='done', label='Done', cell_type=ui.icon_table_cell_type()),
- ui.table_column(name='views', label='Views'),
- ui.table_column(name='progress', label='Progress', cell_type=ui.progress_table_cell_type()),
-]
-
-
-@app('/demo')
-async def serve(q: Q):
- q.page['form'] = ui.form_card(box='1 1 -1 7', items=[
- ui.table(
- name='issues',
- columns=columns,
- rows=[ui.table_row(
- name=issue.id,
- cells=[issue.text, issue.status, issue.notifications, issue.icon, str(issue.views),
- str(issue.progress)]) for issue in issues],
- groupable=True,
- )])
- await q.page.save()
diff --git a/spaces/h4d35/CosineSim/README.md b/spaces/h4d35/CosineSim/README.md
deleted file mode 100644
index 191c5fe5811d2a9a19f947f62d1c0ce0d4f2e770..0000000000000000000000000000000000000000
--- a/spaces/h4d35/CosineSim/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: CosineSim
-emoji: 🌖
-colorFrom: yellow
-colorTo: blue
-sdk: gradio
-sdk_version: 2.9.4
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/hackathon-pln-es/extractive-qa-biomedicine/app.py b/spaces/hackathon-pln-es/extractive-qa-biomedicine/app.py
deleted file mode 100644
index df6a72ff0f132246f5202ca40652273d1067bca5..0000000000000000000000000000000000000000
--- a/spaces/hackathon-pln-es/extractive-qa-biomedicine/app.py
+++ /dev/null
@@ -1,150 +0,0 @@
-import gradio as gr
-from transformers import pipeline
-import torch
-
-title = "Extractive QA Biomedicine"
-description = """
-
-Recent research has made available Spanish Language Models trained on Biomedical corpora. This project explores the use of these new models to generate extractive Question Answering models for Biomedicine, and compares their effectiveness with that of general masked language models.
-
-The models were trained on the SQUAD_ES Dataset (an automatic translation of the Stanford Question Answering Dataset into Spanish). The SQUAD v2 version was chosen in order to include questions that cannot be answered based on the provided context.
-
-The models were evaluated on the BIOMED_SQUAD_ES_V2 Dataset, a subset of the SQUAD_ES evaluation dataset containing questions related to the Biomedical domain.
-
-The project is aligned with goal number 3 of the Sustainable Development Goals promoted by the United Nations: "Ensure a healthy life and promote well-being for all at all ages", since this research can lead to the development of tools that facilitate access to health information for doctors and Spanish-speaking people all over the world.
-
-In the following Demo, the four trained models can be tested to answer a question given a context (the confidence score - from 0 to 1 - of the predicted answer is also displayed):
-"""
-
-# Assumption: the notes below (difficulties, conclusions, author) belong to
-# the page article rather than the demo description; without this split the
-# `article` argument passed to gr.Interface further down would be undefined.
-article = """
Question Answering is a complex task to understand, as it requires not only pre-processing the inputs, but also post-processing the outputs. Moreover, the metrics used are quite specific.
-
There is less documentation and there are fewer tutorials available for QA than for other, more popular NLP tasks. In particular, the examples provided are often focused on the SQUAD v1 format and not on SQUAD v2, the format selected for this project.
-
Before the Hackathon, there was no Biomedical QA dataset in Spanish publicly available (particularly with the SQUAD V2 format). It was necessary to create a validation Biomedical Dataset using the SQUAD_ES Dataset.
-
-
-
Conclusion and Future Work
-
-If F1 Score is considered, the results show that there may be no advantage in using domain-specific masked language models to generate Biomedical QA models.
-
-
-However, the F1 Scores reported for the Biomedical roberta-based models are not far below those of the general roberta-based model.
-
-If only unanswerable questions are taken into account, the model with the best F1 Score is hackathon-pln-es/roberta-base-biomedical-es-squad2-es.
-
-
-The model hackathon-pln-es/biomedtra-small-es-squad2-es, on the contrary, shows an inability to correctly identify unanswerable questions.
-
-As future work, the following experiments could be carried out:
-
-
Create Biomedical masked-language models adapted from a general model, to preserve words and features of Spanish that are also present in Biomedical questions and articles. The Biomedical base models used in the project were trained from scratch from a Biomedical corpus.
-
Create a Biomedical training dataset with SQUAD v2 format.
-
Generate a new and larger Spanish Biomedical validation dataset, not translated from English as in the case of SQUAD_ES Dataset.
-
Ensemble different models.
-
-
-
-
Author
-Santiago Maximo
-"""
-
-device = 0 if torch.cuda.is_available() else -1
-MODEL_NAMES = ["hackathon-pln-es/roberta-base-bne-squad2-es",
- "hackathon-pln-es/roberta-base-biomedical-clinical-es-squad2-es",
- "hackathon-pln-es/roberta-base-biomedical-es-squad2-es",
- "hackathon-pln-es/biomedtra-small-es-squad2-es"]
-
-examples = [
- [MODEL_NAMES[2], "¿Qué cidippido se utiliza como descripción de los ctenóforos en la mayoría de los libros de texto?","Para un filo con relativamente pocas especies, los ctenóforos tienen una amplia gama de planes corporales. Las especies costeras necesitan ser lo suficientemente duras para soportar las olas y remolcar partículas de sedimentos, mientras que algunas especies oceánicas son tan frágiles que es muy difícil capturarlas intactas para su estudio. Además, las especies oceánicas no conservan bien, y son conocidas principalmente por fotografías y notas de observadores. Por lo tanto, la mayor atención se ha concentrado recientemente en tres géneros costeros: Pleurobrachia, Beroe y Mnemiopsis. Al menos dos libros de texto basan sus descripciones de ctenóforos en los cidipépidos Pleurobrachia."],
- [MODEL_NAMES[0], "¿Dónde se atasca un fagocito en un patógeno?", "La fagocitosis es una característica importante de la inmunidad celular innata llevada a cabo por células llamadas fagocitos que absorben, o comen, patógenos o partículas. Los fagocitos generalmente patrullan el cuerpo en busca de patógenos, pero pueden ser llamados a lugares específicos por citoquinas. Una vez que un patógeno ha sido absorbido por un fagocito, queda atrapado en una vesícula intracelular llamada fagosoma, que posteriormente se fusiona con otra vesícula llamada lisosoma para formar un fagocito."],
-
-]
-
-def getanswer(model_name, question, context):
-
- question_answerer = pipeline("question-answering", model=model_name, device=device)
-
- response = question_answerer({
- 'question': question,
- 'context': context
- })
- return response['answer'],response['score']
-
-face = gr.Interface(
- fn=getanswer,
- inputs=[
- gr.inputs.Radio(
- label="Pick a QA Model",
- choices=MODEL_NAMES,
- ),
- gr.inputs.Textbox(lines=1, placeholder="Question Here… "),
- gr.inputs.Textbox(lines=10, placeholder="Context Here… ")
- ],
- outputs=[
- gr.outputs.Textbox(label="Answer"),
- gr.outputs.Textbox(label="Score"),
- ],
- layout="vertical",
- title=title,
- examples=examples,
- description=description,
- article=article,
- allow_flagging ="never"
-)
-face.launch()
\ No newline at end of file
diff --git a/spaces/hamacojr/SAM-CAT-Seg/open_clip/src/training/profile.py b/spaces/hamacojr/SAM-CAT-Seg/open_clip/src/training/profile.py
deleted file mode 100644
index f10372cdef306e5e199db432b23062df1c098cf9..0000000000000000000000000000000000000000
--- a/spaces/hamacojr/SAM-CAT-Seg/open_clip/src/training/profile.py
+++ /dev/null
@@ -1,158 +0,0 @@
-import argparse
-
-import torch
-import open_clip
-import pandas as pd
-from fvcore.nn import FlopCountAnalysis, flop_count_str, ActivationCountAnalysis
-
-
-parser = argparse.ArgumentParser(description='OpenCLIP Profiler')
-
-# benchmark specific args
-parser.add_argument('--model', metavar='NAME', default='',
- help='model(s) to profile')
-parser.add_argument('--results-file', default='', type=str, metavar='FILENAME',
- help='Output csv file for results')
-
-
-def profile_fvcore(
- model,
- image_input_size=(3, 224, 224),
- text_input_size=(77,),
- batch_size=1,
- detailed=False,
- force_cpu=False
-):
- if force_cpu:
- model = model.to('cpu')
- device, dtype = next(model.parameters()).device, next(model.parameters()).dtype
- example_image_input = torch.ones((batch_size,) + image_input_size, device=device, dtype=dtype)
- example_text_input = torch.ones((batch_size,) + text_input_size, device=device, dtype=torch.int64)
- fca = FlopCountAnalysis(model, (example_image_input, example_text_input))
- aca = ActivationCountAnalysis(model, (example_image_input, example_text_input))
- if detailed:
- fcs = flop_count_str(fca)
- print(fcs)
- return fca.total(), aca.total()
-
-
-def profile_fvcore_text(
- model,
- text_input_size=(77,),
- batch_size=1,
- detailed=False,
- force_cpu=False
-):
- if force_cpu:
- model = model.to('cpu')
- device = next(model.parameters()).device
- example_input = torch.ones((batch_size,) + text_input_size, device=device, dtype=torch.int64)
- fca = FlopCountAnalysis(model, example_input)
- aca = ActivationCountAnalysis(model, example_input)
- if detailed:
- fcs = flop_count_str(fca)
- print(fcs)
- return fca.total(), aca.total()
-
-
-def profile_fvcore_image(
- model,
- image_input_size=(3, 224, 224),
- batch_size=1,
- detailed=False,
- force_cpu=False
-):
- if force_cpu:
- model = model.to('cpu')
- device, dtype = next(model.parameters()).device, next(model.parameters()).dtype
- example_input = torch.ones((batch_size,) + image_input_size, device=device, dtype=dtype)
- fca = FlopCountAnalysis(model, example_input)
- aca = ActivationCountAnalysis(model, example_input)
- if detailed:
- fcs = flop_count_str(fca)
- print(fcs)
- return fca.total(), aca.total()
-
-
-def count_params(model):
- return sum([m.numel() for m in model.parameters()])
-
-
-def profile_model(model_name):
- model = open_clip.create_model(model_name, force_custom_text=True, pretrained_hf=False)
- model.eval()
- if torch.cuda.is_available():
- model = model.cuda()
-
- if isinstance(model.visual.image_size, (tuple, list)):
- image_input_size = (3,) + tuple(model.visual.image_size[-2:])
- else:
- image_input_size = (3, model.visual.image_size, model.visual.image_size)
- text_input_size = (77,)
-
- results = {}
- results['model'] = model_name
- results['image_size'] = image_input_size[1]
-
- model_cfg = open_clip.get_model_config(model_name)
- if model_cfg:
- vision_cfg = open_clip.CLIPVisionCfg(**model_cfg['vision_cfg'])
- text_cfg = open_clip.CLIPTextCfg(**model_cfg['text_cfg'])
- results['image_width'] = int(vision_cfg.width)
- results['text_width'] = int(text_cfg.width)
- results['embed_dim'] = int(model_cfg['embed_dim'])
- else:
- results['image_width'] = 0
- results['text_width'] = 0
- results['embed_dim'] = 0
-
- retries = 2
- while retries:
- retries -= 1
- try:
- macs, acts = profile_fvcore(
- model, image_input_size=image_input_size, text_input_size=text_input_size, force_cpu=not retries)
-
- image_macs, image_acts = profile_fvcore_image(
- model.visual, image_input_size=image_input_size, force_cpu=not retries)
-
- text_macs, text_acts = profile_fvcore_text(
- model.text, text_input_size=text_input_size, force_cpu=not retries)
-
- results['gmacs'] = round(macs / 1e9, 2)
- results['macts'] = round(acts / 1e6, 2)
- results['mparams'] = round(count_params(model) / 1e6, 2)
- results['image_gmacs'] = round(image_macs / 1e9, 2)
- results['image_macts'] = round(image_acts / 1e6, 2)
- results['image_mparams'] = round(count_params(model.visual) / 1e6, 2)
- results['text_gmacs'] = round(text_macs / 1e9, 2)
- results['text_macts'] = round(text_acts / 1e6, 2)
-            results['text_mparams'] = round(count_params(model.text) / 1e6, 2)
-            break  # profiling succeeded; skip the CPU fallback retry
-        except RuntimeError as e:
-            pass
- return results
-
-
-def main():
- args = parser.parse_args()
-
- # FIXME accept a text file name to allow lists of models in txt/csv
- if args.model == 'all':
- parsed_model = open_clip.list_models()
- else:
- parsed_model = args.model.split(',')
-
- results = []
- for m in parsed_model:
- row = profile_model(m)
- results.append(row)
-
- df = pd.DataFrame(results, columns=results[0].keys())
- df = df.sort_values('gmacs')
- print(df)
- if args.results_file:
- df.to_csv(args.results_file, index=False)
-
-
-if __name__ == '__main__':
- main()
diff --git a/spaces/hamelcubsfan/AutoGPT/benchmark/__init__.py b/spaces/hamelcubsfan/AutoGPT/benchmark/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/solver/__init__.py b/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/solver/__init__.py
deleted file mode 100644
index 75f40530cccb6b989d33193de92a6c26a07cf751..0000000000000000000000000000000000000000
--- a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/solver/__init__.py
+++ /dev/null
@@ -1,4 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
-from .build import make_optimizer
-from .build import make_lr_scheduler
-from .lr_scheduler import WarmupMultiStepLR
diff --git a/spaces/harshhpareek/bertscore/app.py b/spaces/harshhpareek/bertscore/app.py
deleted file mode 100644
index e24884e8afcbeb0c2a68c59d4e7db4eeb3fd8c43..0000000000000000000000000000000000000000
--- a/spaces/harshhpareek/bertscore/app.py
+++ /dev/null
@@ -1,6 +0,0 @@
-import evaluate
-from evaluate.utils import launch_gradio_widget
-
-
-module = evaluate.load("bertscore")
-launch_gradio_widget(module)
diff --git a/spaces/hilmyblaze/WebUI-Counterfeit-V2.5/Bill-Goldberg-Theme-Song-Free-Download.md b/spaces/hilmyblaze/WebUI-Counterfeit-V2.5/Bill-Goldberg-Theme-Song-Free-Download.md
deleted file mode 100644
index b3a679287c23385f5ceffe52fb1b70283863d66d..0000000000000000000000000000000000000000
--- a/spaces/hilmyblaze/WebUI-Counterfeit-V2.5/Bill-Goldberg-Theme-Song-Free-Download.md
+++ /dev/null
@@ -1,47 +0,0 @@
-## Bill Goldberg Theme Song Free Download
-
-
-
- 
-
-
-
-**Download >>> [https://ditzcosupo.blogspot.com/?d=2twsji](https://ditzcosupo.blogspot.com/?d=2twsji)**
-
-
-
-# How to Download Bill Goldberg Theme Song for Free
-
-
-
-Bill Goldberg is one of the most popular and intense wrestlers of all time. He rose to fame in WCW with a lengthy undefeated streak in singles competition from 1997 to 1998, becoming a one-time WCW World Heavyweight Champion, two-time WCW United States Heavyweight Champion, and one-time WCW World Tag Team Champion. He also wrestled for WWE, becoming a one-time World Heavyweight Champion and a two-time WWE Universal Champion. He is closely identified with the spear, a signature move he popularized and executed with great skill.
-
-
-
-One of the things that made Goldberg stand out was his entrance theme song, "Invasion", which was composed by Jim Johnston. The song featured a powerful guitar riff, a siren sound, and chants of "Goldberg" from the crowd. The song perfectly matched Goldberg's intensity and charisma, and created an atmosphere of anticipation and excitement whenever he appeared.
-
-
-
-If you are a fan of Goldberg and his theme song, you might be wondering how to download it for free. There are many websites that offer free downloads of wrestling theme songs, but not all of them are safe or legal. Some of them might contain viruses, malware, or spyware that can harm your computer or device. Some of them might also violate copyright laws and infringe on the rights of the original creators.
-
-
-
-To avoid these risks, you should only download Goldberg's theme song from reputable and authorized sources. One of them is YouTube, where you can find several videos of Goldberg's entrance with his theme song playing in the background. You can use a YouTube downloader tool or app to convert the video into an MP3 file and save it on your device. However, you should be careful not to download any copyrighted content without permission from the owner.
-
-
-
-Another option is to use a streaming service like Spotify or Apple Music, where you can find Goldberg's theme song as part of various wrestling playlists. You can listen to the song online or offline, depending on your subscription plan. However, you should be aware that you cannot download the song as a separate file or share it with others.
-
-
-
-The best way to enjoy Goldberg's theme song is to watch him perform live in the ring. He is still active as a wrestler and occasionally makes appearances on WWE shows. You can check out his official website [billgoldberg.com](https://billgoldberg.com/) for his latest news and updates. You can also follow him on social media platforms like Twitter, Instagram, and Facebook.
-
-
-
-Goldberg's theme song is more than just a music track. It is a symbol of his legacy and impact on the wrestling industry. It is a tribute to his strength, skill, and passion. It is a reminder of his unforgettable moments and achievements. It is a part of his identity and personality.
-
-
-
-If you want to download Bill Goldberg's theme song for free, you should do it responsibly and legally. You should respect his work and his rights as an artist and a wrestler. You should also support him by watching his matches and cheering for him whenever he steps into the ring.
-
-
\ No newline at end of file
diff --git a/spaces/huybery/deep-thinking/models/meta_optimizer.py b/spaces/huybery/deep-thinking/models/meta_optimizer.py
deleted file mode 100644
index d3fe520ffed657d94e6f7e539f43850ced244420..0000000000000000000000000000000000000000
--- a/spaces/huybery/deep-thinking/models/meta_optimizer.py
+++ /dev/null
@@ -1,78 +0,0 @@
-import torch
-
-
-class MomentumOptim:
- def __init__(self, step_size=0.01, momentum=0.9):
- self.step_size = step_size
- self.momentum = momentum
- self.m = None # velocity
-
- def init(self):
- self.m = None
-
- def upd_m(self, old_m, g):
- return g + self.momentum * old_m
-
- def upd(self, old_x, m):
- return old_x + self.step_size * m
-
- def __call__(self, old_xs, new_xs):
-        pseudo_gs = [new_x - old_x for old_x, new_x in zip(old_xs, new_xs)]
-
-        if not self.m:
-            self.m = pseudo_gs
-        else:
-            self.m = [self.upd_m(old_m, g) for old_m, g in zip(self.m, pseudo_gs)]
-
- updated_kv = [self.upd(old_x, m) for old_x, m in zip(old_xs, self.m)]
- return updated_kv
-
-
-class AttnOptimWrapper:
- def __init__(self, llm, model_type, optimizer="momentum", **optimizer_args):
- self.model = llm
- self.kv = None
- self.model_type = model_type
-
- if optimizer == "momentum":
- self.optim_k = MomentumOptim(**optimizer_args)
- self.optim_v = MomentumOptim(**optimizer_args)
- else:
- raise ValueError()
-
- def init(self):
- self.optim_k.init()
- self.optim_v.init()
-
- @torch.no_grad()
- def step(self, ctx_ids):
- L = len(ctx_ids)
-
- ctx_ids = ctx_ids.unsqueeze(0) # [1, L]
- mask = torch.ones_like(ctx_ids)
- if self.kv is not None:
- mask = mask.repeat(1, 2) # [1, 2*L]
-
- next_kv = self.model(
- input_ids=ctx_ids,
- attention_mask=mask,
- past_key_values=self.kv,
- use_cache=True,
- ).past_key_values # kv @ (old_ctx + new_ctx)
-
- cur_kv = []
- for layer_k, layer_v in next_kv:
- # [B, num_head, 2*L, head_hidden]
- cur_kv.append([layer_k[:, :, -L:, :], layer_v[:, :, -L:, :]]) # kv @ (new_ctx)
-
- if not self.kv:
- self.kv = cur_kv
- else:
- old_ks, old_vs = zip(*self.kv)
- cur_ks, cur_vs = zip(*cur_kv)
-
- upd_ks = self.optim_k(old_ks, cur_ks)
- upd_vs = self.optim_v(old_vs, cur_vs)
- self.kv = list(zip(upd_ks, upd_vs))
-
- return self.kv
diff --git a/spaces/hysts/ControlNet-with-Anything-v4/app_hough.py b/spaces/hysts/ControlNet-with-Anything-v4/app_hough.py
deleted file mode 100644
index ef87a73ca6c757eea4352aeafbd45fdad0189599..0000000000000000000000000000000000000000
--- a/spaces/hysts/ControlNet-with-Anything-v4/app_hough.py
+++ /dev/null
@@ -1,97 +0,0 @@
-# This file is adapted from https://github.com/lllyasviel/ControlNet/blob/f4748e3630d8141d7765e2bd9b1e348f47847707/gradio_hough2image.py
-# The original license file is LICENSE.ControlNet in this repo.
-import gradio as gr
-
-
-def create_demo(process, max_images=12, default_num_images=3):
- with gr.Blocks() as demo:
- with gr.Row():
- gr.Markdown('## Control Stable Diffusion with Hough Line Maps')
- with gr.Row():
- with gr.Column():
- input_image = gr.Image(source='upload', type='numpy')
- prompt = gr.Textbox(label='Prompt')
- run_button = gr.Button(label='Run')
- with gr.Accordion('Advanced options', open=False):
- num_samples = gr.Slider(label='Images',
- minimum=1,
- maximum=max_images,
- value=default_num_images,
- step=1)
- image_resolution = gr.Slider(label='Image Resolution',
- minimum=256,
- maximum=512,
- value=512,
- step=256)
- detect_resolution = gr.Slider(label='Hough Resolution',
- minimum=128,
- maximum=512,
- value=512,
- step=1)
- mlsd_value_threshold = gr.Slider(
- label='Hough value threshold (MLSD)',
- minimum=0.01,
- maximum=2.0,
- value=0.1,
- step=0.01)
- mlsd_distance_threshold = gr.Slider(
- label='Hough distance threshold (MLSD)',
- minimum=0.01,
- maximum=20.0,
- value=0.1,
- step=0.01)
- num_steps = gr.Slider(label='Steps',
- minimum=1,
- maximum=100,
- value=20,
- step=1)
- guidance_scale = gr.Slider(label='Guidance Scale',
- minimum=0.1,
- maximum=30.0,
- value=9.0,
- step=0.1)
- seed = gr.Slider(label='Seed',
- minimum=-1,
- maximum=2147483647,
- step=1,
- randomize=True)
- a_prompt = gr.Textbox(
- label='Added Prompt',
- value='best quality, extremely detailed')
- n_prompt = gr.Textbox(
- label='Negative Prompt',
- value=
- 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality'
- )
- with gr.Column():
- result = gr.Gallery(label='Output',
- show_label=False,
- elem_id='gallery').style(grid=2,
- height='auto')
- inputs = [
- input_image,
- prompt,
- a_prompt,
- n_prompt,
- num_samples,
- image_resolution,
- detect_resolution,
- num_steps,
- guidance_scale,
- seed,
- mlsd_value_threshold,
- mlsd_distance_threshold,
- ]
- prompt.submit(fn=process, inputs=inputs, outputs=result)
- run_button.click(fn=process,
- inputs=inputs,
- outputs=result,
- api_name='hough')
- return demo
-
-
-if __name__ == '__main__':
- from model import Model
- model = Model()
- demo = create_demo(model.process_hough)
- demo.queue().launch()
diff --git a/spaces/ianpan/bone-age-greulich-and-pyle/README.md b/spaces/ianpan/bone-age-greulich-and-pyle/README.md
deleted file mode 100644
index 8b650b68b7bb174814384ec0cd18615f21d9966b..0000000000000000000000000000000000000000
--- a/spaces/ianpan/bone-age-greulich-and-pyle/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Deep Learning Model for Pediatric Bone Age
-emoji: 💻
-colorFrom: red
-colorTo: blue
-sdk: gradio
-sdk_version: 3.8.2
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/idosal/oai-proxy/src/info-page.ts b/spaces/idosal/oai-proxy/src/info-page.ts
deleted file mode 100644
index 20f9a4f9157276da3cbb632bb8cbe51dbe2bca21..0000000000000000000000000000000000000000
--- a/spaces/idosal/oai-proxy/src/info-page.ts
+++ /dev/null
@@ -1,51 +0,0 @@
-import { Request, Response } from "express";
-import showdown from "showdown";
-import { keys } from "./keys";
-
-export const handleInfoPage = (req: Request, res: Response) => {
- // Huggingface puts spaces behind some cloudflare ssl proxy, so `req.protocol` is `http` but the correct URL is actually `https`
- const host = req.get("host");
- const isHuggingface = host?.includes("hf.space");
- const protocol = isHuggingface ? "https" : req.protocol;
- res.send(getInfoPageHtml(protocol + "://" + host));
-};
-
-function getInfoPageHtml(host: string) {
- const keylist = keys.list();
- const info = {
- message: "OpenAI Reverse Proxy",
- uptime: process.uptime(),
- timestamp: Date.now(),
- baseUrl: host,
- kobold: host + "/proxy/kobold" + " (not yet implemented)",
- openai: host + "/proxy/openai",
- keys: {
- all: keylist.length,
- active: keylist.filter((k) => !k.isDisabled).length,
- trial: keylist.filter((k) => k.isTrial).length,
- gpt4: keylist.filter((k) => k.isGpt4).length,
- proompts: keylist.reduce((acc, k) => acc + k.promptCount, 0),
- },
- };
-
- const readme = require("fs").readFileSync("README.md", "utf8");
- const readmeBody = readme.split("---")[2];
- const converter = new showdown.Converter();
- const html = converter.makeHtml(readmeBody);
-
-  // Reconstructed minimal template: it keeps the recoverable pieces (the
-  // page title, the stats object and the rendered README) and closes the
-  // function so that handleInfoPage above receives a complete page.
-  const pageBody = `<!DOCTYPE html>
-<html>
-  <head>
-    <title>OpenAI Reverse Proxy</title>
-  </head>
-  <body>
-    ${html}
-    <hr />
-    <pre>${JSON.stringify(info, null, 2)}</pre>
-  </body>
-</html>`;
-
-  return pageBody;
-}
-
-
-
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Biosoft PrimerPlex V2 11 21103.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Biosoft PrimerPlex V2 11 21103.md
deleted file mode 100644
index 228b6e6ecc5f8ef40e077274b7ee22a8cafc5211..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/Biosoft PrimerPlex V2 11 21103.md
+++ /dev/null
@@ -1,97 +0,0 @@
-
-
Biosoft PrimerPlex V2 11 21103: A Powerful Tool for Multiplex PCR Design
-
Multiplex PCR is a technique that allows simultaneous amplification of multiple targets in a single reaction. This can save time, cost and resources, as well as increase the throughput and accuracy of the analysis. However, designing primers for multiplex PCR can be challenging, as they need to be compatible with each other and avoid cross-reactivity, primer-dimer formation and nonspecific amplification.
-
That's where Biosoft PrimerPlex V2 11 21103 comes in. Biosoft PrimerPlex V2 11 21103 is an efficient and sophisticated software for designing oligos for multiplex analysis on suspension array systems such as Luminex 100, Luminex 200 and Bio-Plex 200. Based on Luminex xMAP® technology, these systems offer a versatile platform for multiplex nucleic acid detection in the 96-well format.
Biosoft PrimerPlex V2 11 21103 uses a proprietary algorithm to design optimal and compatible multiplex primer sets under uniform reaction conditions. It takes into account various factors such as primer melting temperature, GC content, secondary structure, specificity, homology and multiplexing level. It also allows the user to customize the primer design parameters and select the target regions of interest.
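-
To make these screening computations concrete, here is a minimal Python sketch of two of them, GC content and a rule-of-thumb melting temperature. It is an illustration only: PrimerPlex's own scoring algorithm is proprietary, and the sequences below are made-up examples.
-
```python
# Illustrative only: two quick primer checks, GC fraction and the Wallace
# rule for melting temperature (a common rule of thumb for short oligos).
def gc_content(primer: str) -> float:
    """Fraction of G/C bases in a primer sequence."""
    p = primer.upper()
    return (p.count("G") + p.count("C")) / len(p)

def wallace_tm(primer: str) -> float:
    """Wallace rule: Tm = 2*(A+T) + 4*(G+C), in degrees Celsius."""
    p = primer.upper()
    return 2 * (p.count("A") + p.count("T")) + 4 * (p.count("G") + p.count("C"))

for primer in ["ATGGCTAGCTAG", "GCGCGATCGTTA"]:  # made-up candidate primers
    print(primer, f"GC={gc_content(primer):.2f}", f"Tm={wallace_tm(primer):.0f}C")
```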
-
Biosoft PrimerPlex V2 11 21103 can design primers for various applications, such as:
-
-
Multiplex PCR for target amplification for next-generation sequencing (NGS)
-
Allele-specific primer extension (ASPE) primers for high-throughput SNP genotyping
-
Multiplex PCR for pathogen detection and identification
-
Multiplex PCR for gene expression analysis
-
Multiplex PCR for copy number variation (CNV) analysis
-
-
Benefits of Biosoft PrimerPlex V2 11 21103
-
Biosoft PrimerPlex V2 11 21103 offers several benefits for multiplex PCR design, such as:
-
-
It can design up to 1000 primers in a single run
-
It can design primers for both DNA and RNA targets
-
It can design primers for both singleplex and multiplex reactions
-
It can design primers for both conventional and real-time PCR
-
It can design primers for both standard and long-range PCR
-
It can design primers for both forward and reverse orientation
-
It can design primers with or without tails and tags
-
It can design primers with or without mismatches and degeneracy
-
It can design primers with or without restriction sites and overhangs
-
It can design primers with or without GC clamps and spacers
-
It can design primers with or without internal probes and quenchers
-
It can design primers with or without universal adapters and barcodes
-
It can design primers with or without Tm balancing and concentration optimization
-
It can design primers with or without cross-reactivity and primer-dimer checking
-
It can design primers with or without secondary structure and homology analysis
-
It can design primers with or without specificity and sensitivity testing
-
It can design primers with or without BLAST search and alignment
-
It can design primers with or without annotation and visualization
-
It can export the primer design results in various formats, such as Excel, PDF, FASTA, GenBank, etc.
-
It can import the primer design inputs from various sources, such as text files, databases, online servers, etc.
-
-
-
-
What Customers Say About Biosoft PrimerPlex V2 11 21103
-
Biosoft PrimerPlex V2 11 21103 has received positive feedback and reviews from many customers who have used it for their multiplex PCR design needs. Here are some of the testimonials from satisfied users:
-
-
"I have been using PrimerPlex for designing primers for multiplex PCR and NGS target amplification. It is a very useful and user-friendly software that saves me a lot of time and effort. It designs optimal and compatible primers for multiple targets under uniform reaction conditions. It also provides various analyses and tests to ensure primer specificity and quality. I highly recommend PrimerPlex to anyone who needs to design primers for multiplex applications."
-
-Dr. John Smith, Molecular Biologist, ABC Research Institute
-
-
-
"PrimerPlex is a great tool for designing ASPE primers for high-throughput SNP genotyping. It allows me to design primers with different features and options, such as tails, tags, mismatches, degeneracy, etc. It also checks for cross-reactivity and primer-dimer formation among the primer sets. It helps me to achieve accurate and reliable SNP genotyping results with minimal cost and resources."
-Ms. Jane Doe, Geneticist, XYZ Biotech Company
-
-
-
"I have been using PrimerPlex for designing primers for multiplex PCR for pathogen detection and identification. It is a fast and accurate software that can design up to 1000 primers in a single run. It also allows me to customize the primer design parameters and select the target regions of interest. It performs various analyses and tests on the primer sets, such as BLAST search, alignment, annotation and visualization. It helps me to detect and identify multiple pathogens in a single reaction with high sensitivity and specificity."
-Dr. Alice Lee, Microbiologist, LMN Hospital
-
-
-
-
How to Learn Biosoft PrimerPlex V2 11 21103
-
If you want to learn how to use Biosoft PrimerPlex V2 11 21103 for your multiplex PCR design needs, you can access the resources and tutorials that are available online.
-
You can also contact the customer support team, which can assist you with any queries or problems related to the software. You can reach them by email, phone or live chat from https://www.premierbiosoft.com/contactus/index.html.
-
How Biosoft PrimerPlex V2 11 21103 Compares with Other Software
-
Biosoft PrimerPlex V2 11 21103 is not the only software for multiplex PCR design, but it is one of the best and most advanced ones. Here are some of the features that make Biosoft PrimerPlex V2 11 21103 stand out from other software:
-
-
It can design primers for both DNA and RNA targets, while some other software can only design primers for DNA targets.
-
It can design primers for both singleplex and multiplex reactions, while some other software can only design primers for singleplex reactions.
-
It can design primers for both conventional and real-time PCR, while some other software can only design primers for conventional PCR.
-
It can design primers for both standard and long-range PCR, while some other software can only design primers for standard PCR.
-
It can design primers with various features and options, such as tails, tags, mismatches, degeneracy, restriction sites, overhangs, GC clamps, spacers, internal probes, quenchers, universal adapters, barcodes, Tm balancing and concentration optimization, while some other software have limited or no options for these features.
-
It can perform various analyses and tests on the primer sets, such as cross-reactivity and primer-dimer checking, secondary structure and homology analysis, specificity and sensitivity testing, BLAST search and alignment, annotation and visualization, while some other software have limited or no options for these analyses and tests.
-
It can export the primer design results in various formats, such as Excel, PDF, FASTA, GenBank, etc., while some other software have limited or no options for exporting the results.
-
It can import the primer design inputs from various sources, such as text files, databases, online servers, etc., while some other software have limited or no options for importing the inputs.
-
-
-
-
Conclusion
-
In this article, we have introduced Biosoft PrimerPlex V2 11 21103, a powerful tool for multiplex PCR design that can help you achieve your research goals faster and easier. We have explained what multiplex PCR is and why it is useful for various applications. We have also described how Biosoft PrimerPlex V2 11 21103 works and what benefits it offers for multiplex PCR design. We have also shown you how to use Biosoft PrimerPlex V2 11 21103 and how to learn more about it. Finally, we have compared Biosoft PrimerPlex V2 11 21103 with other software and highlighted its unique features and advantages.
-
If you are looking for reliable, trusted software for multiplex PCR design, you should try Biosoft PrimerPlex V2 11 21103. It is fast, accurate, flexible and easy to use, and it is affordable, supported and regularly updated. It can design primers for a wide range of applications, targets and systems, with the features and options listed above; it can run the analyses and tests described above on the primer sets; and it can import and export primer design data in many formats.
-
Biosoft PrimerPlex V2 11 21103 is the ultimate solution for multiplex PCR design that can help you achieve your research goals faster and easier. To learn more about Biosoft PrimerPlex V2 11 21103 and download a free trial version, visit https://www.premierbiosoft.com/primerplex/index.html.
-
-
\ No newline at end of file
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Cara Upgrade Windows 7 Sp1 Ke Sp2.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Cara Upgrade Windows 7 Sp1 Ke Sp2.md
deleted file mode 100644
index 32809026b9c9d1c64d5465fd7c931ba4281d65cd..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/Cara Upgrade Windows 7 Sp1 Ke Sp2.md
+++ /dev/null
@@ -1,22 +0,0 @@
-
-
How to Upgrade Windows 7 SP1 to SP2 Offline
-
Windows 7 is still one of the most widely used operating systems. Although somewhat dated, it has advantages such as compatibility with a wide range of applications and software, and it is not too heavy on internet data. However, Windows 7 also has drawbacks, including several security holes and feature limitations that can put users at risk.
That is why it is important for Windows 7 users to update regularly so that their operating system stays secure and runs optimally. One recommended update is Service Pack 2 (SP2), a bundle of all the security and non-security updates released after Service Pack 1 (SP1). Unfortunately, Microsoft never released an official SP2 for Windows 7; it only provides a convenience rollup update, which is essentially the same thing as SP2.
-
So how do you upgrade Windows 7 SP1 to SP2 offline? Here are the steps:
-
-
Make sure your Windows 7 installation already has SP1. If not, you can download and install SP1 offline from the official Microsoft site[^1^] [^2^]. Choose the version that matches your edition of Windows 7, either 32-bit or 64-bit. You can check this by right-clicking My Computer and choosing Properties (or with the script shown after this list).
-
Download the convenience rollup update for Windows 7 SP1 from the official Microsoft site[^3^]. Choose the version that matches your edition of Windows 7, either 32-bit or 64-bit. The downloaded file is in .MSU format and needs no extra software to run.
-
Once the download finishes, double-click the .MSU file and follow the instructions that appear on screen. This process may take quite a long time depending on your computer's specifications.
-
When the process completes, your computer will ask to restart so that the update can be applied fully.
-
After the restart, you can check whether the update succeeded by opening Control Panel\\System and Security\\Windows Update and looking at the history of installed updates. You can also open Control Panel\\System and Security\\System and see your Windows 7 version at the bottom.
-
-
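As a quick scripted check for the first and last steps, here is a small Python sketch. It is only an illustration and assumes a Python interpreter is installed on the machine; it uses the standard-library platform module, which reports the Windows release, service-pack level and architecture:
-
```python
# Minimal sketch: report the Windows release, service pack and architecture.
# Standard library only; the values in the comments are typical examples.
import platform

release, version, csd, _ = platform.win32_ver()  # e.g. ('7', '6.1.7601', 'SP1', ...)
print(f"Windows release: {release}")
print(f"Build/version:   {version}")
print(f"Service pack:    {csd or 'none'}")       # 'SP1' before the rollup is applied
print(f"Architecture:    {platform.machine()}")  # 'AMD64' = 64-bit, 'x86' = 32-bit
```
-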
Congratulations, you have successfully upgraded Windows 7 SP1 to SP2 offline. Hopefully this article was useful, and good luck!
-
-
Upgrading to SP2 not only improves the security and performance of your Windows 7, it also lets you install applications and software that require SP2 as a minimum. For example, iTunes, the application for managing data on iPhone devices, requires Windows 7 SP1 or higher to run well. So if you want to use iTunes on Windows 7, you should upgrade to SP2 first.
-
-
Upgrading to SP2 also helps you prepare in case you want to upgrade to Windows 10 in the future. Windows 10 is Microsoft's newest operating system, with many advanced and attractive features such as Cortana, Edge, Continuum and more. Windows 10 is also more secure and stable than Windows 7, and receives routine updates from Microsoft.
-
If you are interested in upgrading to Windows 10, you can do it for free as long as you have a genuine license for Windows 7 SP1 or SP2. You can download the upgrade tool from the official Microsoft site and follow the steps given. Before upgrading to Windows 10, though, make sure you back up your important data first and check that your computer meets the minimum specifications to run Windows 10.
-
And that is how to upgrade Windows 7 SP1 to SP2 offline, along with some of the benefits you can get from the upgrade. Hopefully this article was useful, and good luck!
-
-
\ No newline at end of file
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Emergency 2013 Unlock Code.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Emergency 2013 Unlock Code.md
deleted file mode 100644
index 116f20cfe453fac3e07655eab7605c66be014896..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/Emergency 2013 Unlock Code.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-June 2, 2013 - now the phone says "only emergency calls" and does NOT ask me for an unlock code anymore. The original AT&T SIM card is working fine. And one more thing: I just ... oh, damn! It tells me "Your phone is temporarily locked" and I cannot use it. It demands an "unlock" code, but all I want to do is use my phone; I would have to call AT&T to get the code, and I can't call AT&T to get it. I can't even make an emergency call because it's blocked. I can't use the internet because it's blocked. I can't use any apps. It's that simple.
-
-
-
diff --git a/spaces/inreVtussa/clothingai/Examples/Circuit Wizard 1.5 Pro Torrent.md b/spaces/inreVtussa/clothingai/Examples/Circuit Wizard 1.5 Pro Torrent.md
deleted file mode 100644
index 4a42e489492eef25bb1088c47dc14a3d63280372..0000000000000000000000000000000000000000
--- a/spaces/inreVtussa/clothingai/Examples/Circuit Wizard 1.5 Pro Torrent.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
🎶 The most powerful text-to-speech model out there, plus 3-second real-time AI voice conversion, with Chinese supported! Powered by [OpenAI TTS](https://platform.openai.com/docs/guides/text-to-speech) and [KNN-VC](https://github.com/bshall/knn-vc)
-# OpenAI API key is either missing or incorrect.
-#
-# """
-
-# def use_queries(queries):
-# query_str = ", ".join([f"{q}" for q in queries])
-#     return f"Search your Zotero collection for {query_str}"
-
-
-# def update_status(messages):
-# return gr.HTML.update(f"""
-#
-# {("").join(messages)}
-#
-# """)
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Delphi Decompiler Full Crack 14.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Delphi Decompiler Full Crack 14.md
deleted file mode 100644
index 445c3fc36a59297b20d0285c3e66175ccef9d5ff..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/Delphi Decompiler Full Crack 14.md
+++ /dev/null
@@ -1,8 +0,0 @@
-
-
Annotate decompiled code without leaving the source view. The commercial decompiler for Delphi offers more capabilities than the free and open-source decompiler I discuss here. While the free tool may be sufficient for the casual user, the commercial version adds features and support, but it is not free. If you're serious about your project, DeDe might be right for you.
Try for free for 30 days and if you're not comfortable with the result, purchase a license for additional functionality. I found my license to be quite reasonable, at a price slightly higher than a cup of coffee. See for yourself why I like this tool, and if you like it, try the more complete version for your project.
-
Delphi is one of the most popular development environments for the Microsoft Windows operating system and is therefore used by many developers. While Delphi itself is not open source, free and open-source tools are available for programmers working with the Delphi IDE.
The first version of NWN, with its use of DIGICODE C, was released in 1996, and while it was largely considered a commercial, free alternative to NWN Builder, it was released under the GNU General Public License. Since then, two "official" versions of the NWN decompiler have been released, one in 2001 and another in 2003. There is also a third version, called NWN Recompiler, which has since been discontinued. However, third-party vendors such as Jeremy Barnes and The Crazy Bob's have also released unofficial versions of the NWN decompiler.
-
Regardless of whether you're a hobbyist or professional, the first thing you need to know about the Delphi IDE is that you must be patient. Adding a unit makes the IDE crawl at the best of times, so your first units may take a little longer than usual to load. However, once you're finally ready to start coding, you'll see that the IDE runs very smoothly.
899543212b
-
-
\ No newline at end of file
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Fifa 15 Without Origin Crack Download __TOP__.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Fifa 15 Without Origin Crack Download __TOP__.md
deleted file mode 100644
index b641ddc304fccb869a4bb0b461b7064fd39650bd..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/Fifa 15 Without Origin Crack Download __TOP__.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
Search results that reflect historic inequities can amplify stereotypes and perpetuate under-representation. Carefully measuring diversity in data sets can help.
-
-
-
Search, ranking and recommendation systems can help find useful documents in large datasets. However, these datasets reflect the biases of the society in which they were created and the systems risk re-entrenching those biases. For example, if someone who is not a white man searches for “CEO pictures” and sees a page of white men, they may feel that only white men can be CEOs, further perpetuating lack of representation at companies’ executive levels.
The mathematics of all this is a little easier to follow with abstract shapes. Let’s take a look at some of them:
-
-
-
Suppose we want to return about 30% green boxes to reflect the distribution of some larger universe of shapes. Try clicking on the shapes below to select some of them — can you find a better subset to return?
-
-
-
Another diversity metric we care about is the percentage of dots… how close to 35% dots can you get?
-
-
-
If we can only return a single subset, how should we consider multiple diversity metrics? Sometimes it isn’t possible to reduce the difference of every metric to zero. One natural approach: find the selection with the lowest mean difference across all the metrics to get as close as possible to all the targets.
-
In other circumstances, like picking a panel of speakers, avoiding badly representing any single category might be more important. This can be done by finding the subset with the lowest max difference. Try minimizing both below:
-
-
-
Notice that minimizing the mean results in a different subset than minimizing the max; how else might using one over the other change the results?
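-
-To make the comparison concrete, here is a minimal sketch in Python (the shape data, target shares, and subset size are hypothetical stand-ins for the interactive demo) that scores every candidate subset against the targets and picks the best one under each rule:
-
-```python
-# Minimal sketch: subset selection under two aggregation rules.
-# The shapes and target shares below are hypothetical.
-from itertools import combinations
-
-shapes = [
-    {"color": "green", "dotted": True},  {"color": "blue",  "dotted": False},
-    {"color": "green", "dotted": False}, {"color": "red",   "dotted": True},
-    {"color": "blue",  "dotted": True},  {"color": "green", "dotted": True},
-]
-targets = {"green": 0.30, "dotted": 0.35}  # desired shares
-
-def diffs(subset):
-    """Absolute gap between each target share and the subset's actual share."""
-    n = len(subset)
-    green = sum(s["color"] == "green" for s in subset) / n
-    dotted = sum(s["dotted"] for s in subset) / n
-    return [abs(green - targets["green"]), abs(dotted - targets["dotted"])]
-
-candidates = list(combinations(shapes, 3))
-# Minimize the mean difference: get as close as possible to all targets.
-best_mean = min(candidates, key=lambda s: sum(diffs(s)) / len(targets))
-# Minimize the max difference: avoid badly missing any single target.
-best_max = min(candidates, key=lambda s: max(diffs(s)))
-```
-
-The two selections can disagree: the subset with the best average fit may still miss one individual metric by more than the max-difference winner does.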
-
Ranking Measures
-
We can pull out more detail by showing how the mean difference and maximum difference rank lots of sets. Below, there are 20 sets of 10 shapes sorted by the two measures. Try adjusting the target slider on the left to see how the rankings change; each set’s percentage of green, dots and small shapes are shown in the small histograms.
-
-
-
-At the extremes, the choice of measure can have a big impact: if we want to try and return all green results, we can shift the green target up to 100%. With this target, minimizing the max difference basically sorts the sets by the number of green items and uses the other targets as a tiebreaker. In contrast, sorting by the mean difference balances the green target more with the dot and small targets.
-
-
-
Beyond mean and max differences, there are more ways to combine diversity metrics, like taking the cross of two metrics to account for intersectionality. The absolute value of the difference in target and actual percentages can also be quantified in other ways — you might want to penalize undershooting more than overshooting, for example. It’s important to keep in mind what exactly you’re trying to maximize and the dataset that you’re operating on.
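-
-As a sketch of that last point, the absolute difference in a scoring function like `diffs` above could be swapped for an asymmetric penalty; the weight of 2.0 here is an arbitrary illustrative choice:
-
-```python
-def asymmetric_diff(actual, target, under_weight=2.0):
-    """Penalize falling short of the target more heavily than exceeding it."""
-    gap = target - actual
-    return under_weight * gap if gap > 0 else -gap
-```
-
-Squaring the gap instead would penalize large misses more than small ones, another common choice.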
-
Which Measure is Best?
-
In a vacuum, all of these ranking methods are defensible. Picking one requires knowledge of the dataset and broader societal context.
-
-For example, the doctors on the left have more variance along the shirt color attribute, but they're less diverse by gender than the doctors on the right. With the shirt color and gender targets we've picked, the two subsets have the same mean and max differences. However, in most applications, it's more important to have a representative sample of socially relevant characteristics, like gender, rather than something less salient, like clothing color.
-
-
-
-Just selecting a diverse sample isn't sufficient either. Diversity and Inclusion Metrics in Subset Selection introduces a way of measuring “inclusion”: how well does the searcher feel represented in the results?
-
Below, we have gender diversity, without inclusion for women, in the “construction worker” image domain. Masculine-presenting individuals are shown in realistic, modern construction worker situations, while feminine-presenting individuals and other gender presentations are depicted as historic nostalgia, toys, clipart, or passive.
-
-
-
-The context of the query and the searcher also plays into the quality of search results. A search for “work clothing” that shows a mixed palette of colors for men’s clothing and only pink women’s clothing might make the searcher feel that women need to appear stereotypically feminine in a professional setting. But the same set of women’s clothes might be appropriate to show for a “pink women work clothes” search or if the searcher had previously expressed a preference for pink.
-
We saw how a small switch from mean to max made a huge difference in what abstract shapes are returned – and how things can get even more complex when socially salient characteristics are layered in. Defaults and small decisions can encode our priorities and values; intentionally thinking about how diversity and inclusion are being measured and which characteristics are emphasized is a step towards designing more equitable systems.
-
More Reading
-
-The Diversity and Inclusion Metrics paper has a Colab with a detailed description of the metrics, additional visualizations and a reference Python implementation.
Inferring user preferences is also tricky; you can check out ways to design for user feedback and control over queries in the People + AI Guidebook.
-
Credits
-
Adam Pearce, Dylan Baker, Ellen Jiang, Meg Mitchell* and Timnit Gebru* // March 2021
-
*Work done while at Google
-
Thanks to Alex Hanna, Carey Radebaugh, Emily Denton, Fernanda Viégas, James Wexler, Jess Holbrook, Ludovic Peran, Martin Wattenberg, Michael Terry, Yannick Assogba and Zan Armstrong for their help with this piece.
-
More Explorables
-
\ No newline at end of file
diff --git a/spaces/merve/measuring-fairness/source/fill-in-the-blank/init-pair.js b/spaces/merve/measuring-fairness/source/fill-in-the-blank/init-pair.js
deleted file mode 100644
index dbd16d4499ddbcc59234fcdefbf7a5cad6f91a7a..0000000000000000000000000000000000000000
--- a/spaces/merve/measuring-fairness/source/fill-in-the-blank/init-pair.js
+++ /dev/null
@@ -1,360 +0,0 @@
-/* Copyright 2021 Google LLC. All Rights Reserved.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-==============================================================================*/
-
-
-window.initPair = function(pair){
- var isMobile = window.innerWidth <= 820
-
- var sel = d3.select('.' + pair.class).html('')
- .at({role: 'graphics-document', 'aria-label': pair.ariaLabel})
- .on('keydown', function(){
- sel.classed('changed', 1)
- if (d3.event.keyCode != 13) return
- d3.event.preventDefault()
- // return
-
- pair.str0 = ''
- pair.str1 = ''
-
- updateChart()
- })
-
- if (!sel.node()) return
-
- var optionSel = sel.append('div.options')
-
- var inputRow = optionSel.append('div.flex-row.flex-row-textarea')
- var input1Sel = inputRow.append('textarea.input-1')
- .st({color: util.colors[1]}).at({cols: 30})
- input1Sel.node().value = pair.s1.replace('[MASK]', '_')
-
- var input0Sel = inputRow.append('textarea.input-0')
- .st({color: util.colors[0]}).at({cols: 30})
- input0Sel.node().value = pair.s0.replace('[MASK]', '_')
-
- if (isMobile){
- sel.selectAll('textarea').on('change', updateChart)
- }
-
- var countSel = optionSel.append('div')
- .append('b').text('Number of Tokens')
- .append('info').text('ⓘ').call(addLockedTooltip)
-    .datum('The scales are set using the top N tokens for each sentence. "Likelihoods" will show more than N tokens if a top completion for one sentence is unlikely for the other sentence.')
- .parent().parent()
- .append('div.flex-row')
- .appendMany('div.button', [30, 200, 1000, 5000, 99999])
- .text(d => d > 5000 ? 'All' : d)
- .st({textAlign: 'center'})
- .on('click', d => {
- pair.count = d
- updateChart()
- })
-
- var typeSel = optionSel.append('div')
- .append('b').text('Chart Type')
- .append('info').text('ⓘ').call(addLockedTooltip)
-    .datum('"Likelihoods" shows the logits from both models plotted directly with a shared linear scale. To better contrast the outputs, "Differences" shows logitA - logitB on the y-axis and mean(logitA, logitB) on the x-axis with separate linear scales.')
- .parent().parent()
- .append('div.flex-row')
- .appendMany('div.button', ['Likelihoods', 'Differences'])
- .text(d => d)
- .st({textAlign: 'center'})
- .on('click', d => {
- pair.type = d
- updateChart()
- })
-
- var modelSel = optionSel.append('div')
- .st({display: pair.model == 'BERT' ? 'none' : ''})
- .append('b').text('Model')
- .parent()
- .append('div.flex-row')
- .appendMany('div.button', ['BERT', 'Zari'])
- .text(d => d)
- .st({textAlign: 'center'})
- .on('click', d => {
- pair.model = d
- updateChart()
- })
-
- // TODO add loading spinner
- var updateSel = optionSel
- .append('div.flex-row')
- .append('div.button.update').on('click', updateChart)
- .text('Update')
- .st({display: isMobile ? 'none' : ''})
-
- var warningSel = optionSel.append('div.warning')
- .text('⚠️Some of the text this model was trained on includes harmful stereotypes. This is a tool to uncover these associations—not an endorsement of them.')
-
- var resetSel = optionSel.append('div.reset')
- .html('↻ Reset')
- .on('click', () => {
- pair = JSON.parse(pair.pairStr)
- pair.pairStr = JSON.stringify(pair)
-
- input0Sel.node().value = pair.s0
- input1Sel.node().value = pair.s1
-
- updateChart(true)
- })
-
- if (pair.alts){
- d3.select('.' + pair.class + '-alts').html('')
- .classed('alt-block', 1).st({display: 'block'})
- .appendMany('span.p-button-link', pair.alts)
- .html(d => d.str)
- .on('click', d => {
- input0Sel.node().value = d.s0
- input1Sel.node().value = d.s1
-
- updateChart()
- })
- }
-
-
- var margin = {bottom: 50, left: 25, top: 5, right: 20}
- var graphSel = sel.append('div.graph')
- var totalWidth = graphSel.node().offsetWidth
- var width = totalWidth - margin.left - margin.right
-
- var c = d3.conventions({
- sel: graphSel.append('div').st({marginTop: isMobile ? 20 : -5}),
- width,
- height: width,
- margin,
- layers: 'sdds',
- })
-
-
- var nTicks = 4
- var tickScale = d3.scaleLinear().range([0, c.width])
- c.svg.appendMany('path.bg-tick', d3.range(nTicks + 1))
- .at({d: d => `M ${.5 + Math.round(tickScale(d/nTicks))} 0 V ${c.height}`})
- c.svg.appendMany('path.bg-tick', d3.range(nTicks + 1))
- .at({d: d => `M 0 ${.5 + Math.round(tickScale(d/nTicks))} H ${c.width}`})
-
-
- var annotationSel = c.layers[1].appendMany('div.annotations', pair.annotations)
- .translate(d => d.pos)
- .html(d => d.str)
-    .st({color: d => d.color, width: 250, position: 'absolute'})
-
- var scatter = window.initScatter(c)
-
- updateChart(true)
-
-
- async function updateChart(isFirst){
- sel.classed('changed', 0)
- warningSel.st({opacity: isFirst ? 0 : 1})
- resetSel.st({opacity: isFirst ? 0 : 1})
- annotationSel.st({opacity: isFirst ? 1 : 0})
-
- countSel.classed('active', d => d == pair.count)
- typeSel.classed('active', d => d == pair.type)
- modelSel.classed('active', d => d == pair.model)
-
- function getStr(sel){
- return sel.node().value.replace('_', '[MASK]')
- }
-
- var modelPath = pair.model == 'Zari' ? 'embed_zari_cda' : 'embed'
-
- pair.s0 = input0Sel.node().value.replace('_', '[MASK]')
- pair.s1 = input1Sel.node().value.replace('_', '[MASK]')
-
- updateSel.classed('loading', 1)
- var vals0 = await post(modelPath, {sentence: pair.s0})
- var vals1 = await post(modelPath, {sentence: pair.s1})
- updateSel.classed('loading', 0)
-
-
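-    // Pair every vocab token with its logit under each sentence; the per-token
-    // difference and mean computed below drive the two chart types.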
- var allTokens = vals0.map((v0, i) => {
- return {word: tokenizer.vocab[i], v0, i, v1: vals1[i]}
- })
- allTokens.forEach(d => {
- d.dif = d.v0 - d.v1
- d.meanV = (d.v0 + d.v1) / 2
- d.isVisible = false
- })
-
- _.sortBy(allTokens, d => -d.v1).forEach((d, i) => d.v1i = i)
- _.sortBy(allTokens, d => -d.v0).forEach((d, i) => d.v0i = i)
-
- var topTokens = allTokens.filter(d => d.v0i <= pair.count || d.v1i <= pair.count)
-
-
- var logitExtent = d3.extent(topTokens.map(d => d.v0).concat(topTokens.map(d => d.v1)))
-
- var tokens = allTokens
- .filter(d => logitExtent[0] <= d.v0 && logitExtent[0] <= d.v1)
-
- var mag = logitExtent[1] - logitExtent[0]
- logitExtent = [logitExtent[0] - mag*.002, logitExtent[1] + mag*.002]
-
- if (pair.type == 'Differences') tokens = _.sortBy(allTokens, d => -d.meanV).slice(0, pair.count)
-
- tokens.forEach(d => {
- d.isVisible = true
- })
-
- var maxDif = d3.max(d3.extent(tokens, d => d.dif).map(Math.abs))
- var color = palette(-maxDif*.8, maxDif*.8)
-
- updateSentenceLabels()
-
- if (pair.type == 'Likelihoods'){
- drawXY()
- } else{
- drawRotated()
- }
-
- sel.classed('is-xy', pair.type == 'Likelihoods')
- sel.classed('is-rotate', pair.type != 'Likelihoods')
-
-
- function drawXY(){
- c.x.domain(logitExtent)
- c.y.domain(logitExtent)
-
- d3.drawAxis(c)
-
- var s = {30: 4, 200: 3, 1000: 3}[pair.count] || 2
- var scatterData = allTokens.map(d => {
- var x = c.x(d.v0)
- var y = c.y(d.v1)
- var fill = color(d.dif)
- var dif = d.dif
- var word = d.word
- var show = ''
- var isVisible = d.isVisible
-
- return {x, y, s, dif, fill, word, show, isVisible}
- })
-
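-      // Label at most one token per horizontal band, taken from each end of
-      // the difference ranking, to keep the text labels from overlapping.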
- var textCandidates = _.sortBy(scatterData.filter(d => d.isVisible), d => d.dif)
- d3.nestBy(textCandidates.slice(0, 1000), d => Math.round(d.y/10))
- .forEach(d => d[0].show = 'uf')
- d3.nestBy(textCandidates.reverse().slice(0, 1000), d => Math.round(d.y/10))
- .forEach(d => d[0].show = 'lr')
-
- logitExtent.pair = pair
- scatter.draw(c, scatterData, true)
-
- c.svg.selectAppend('text.x-axis-label.xy-only')
- .translate([c.width/2, c.height + 24])
- .text(pair.label0 ? ' __ likelihood, ' + pair.label0 + ' sentence →' : '__ likelihood, sentence two →')
- .st({fill: util.colors[0]})
- .at({textAnchor: 'middle'})
-
-
- c.svg.selectAppend('g.y-axis-label.xy-only')
- .translate([c.width + 20, c.height/2])
- .selectAppend('text')
- .text(pair.label1 ? ' __ likelihood, ' + pair.label1 + ' sentence →' : '__ likelihood, sentence one →')
- .st({fill: util.colors[1]})
- .at({textAnchor: 'middle', transform: 'rotate(-90)'})
- }
-
- function drawRotated(){
- c.x.domain(d3.extent(tokens, d => d.meanV))
- c.y.domain([maxDif, -maxDif])
-
- d3.drawAxis(c)
-
- var scatterData = allTokens.map(d => {
- var x = c.x(d.meanV)
- var y = c.y(d.dif)
- var fill = color(d.dif)
- var word = d.word
- var show = ''
- var isVisible = d.isVisible
-
- return {x, y, s: 2, fill, word, show, isVisible}
- })
-
- scatterData.forEach(d => {
- d.dx = d.x - c.width/2
- d.dy = d.y - c.height/2
- })
-
- var textCandidates = _.sortBy(scatterData, d => -d.dx*d.dx - d.dy*d.dy)
- .filter(d => d.isVisible)
- .slice(0, 5000)
- d3.nestBy(textCandidates, d => Math.round(12*Math.atan2(d.dx, d.dy)))
- .map(d => d[0])
- .forEach(d => d.show = (d.dy < 0 ? 'u' : 'l') + (d.dx < 0 ? 'l' : 'r'))
-
- scatter.draw(c, scatterData, false)
-
- c.svg.selectAppend('text.rotate-only.x-axis-label')
- .translate([c.width/2, c.height + 24])
- .text('__ likelihood, both sentences →')
- .at({textAnchor: 'middle'})
- .st({fill: '#000'})
-
-    c.svg.selectAll('g.rotate-only.sent-1').remove()
- c.svg.selectAppend('g.rotate-only.sent-1')
- .translate([c.width + 20, c.height/2])
- .append('text')
- .text(`Higher likelihood, ${pair.label1 ? pair.label1 + ' sentence ' : 'sentence one'} →`)
- .at({textAnchor: 'start', transform: 'rotate(-90)', x: 20})
- .st({fill: util.colors[1]})
-
- c.svg.selectAppend('g.rotate-only.sent-1')
- .translate([c.width + 20, c.height/2 + 0])
- .append('text')
- .text(`← Higher likelihood, ${pair.label0 ? pair.label0 + ' sentence ' : 'sentence two'}`)
- .at({textAnchor: 'end', transform: 'rotate(-90)', x: -20})
- .st({fill: util.colors[0]})
- }
- }
-
- function updateSentenceLabels(){
- var t0 = tokenizer.tokenize(pair.s0)
- var t1 = tokenizer.tokenize(pair.s1)
-
- var i = 0
- while (t0[i] == t1[i] && i < t0.length) i++
-
- var j = 1
- while (t0[t0.length - j] == t1[t1.length - j] && j < t0.length) j++
-
- pair.label0 = tokens2origStr(t0, pair.s0)
- pair.label1 = tokens2origStr(t1, pair.s1)
-
- function tokens2origStr(t, s){
- var tokenStr = tokenizer.decode(t.slice(i, -j + 1)).trim()
- var lowerStr = s.toLowerCase()
-
- var startI = lowerStr.indexOf(tokenStr)
- return s.slice(startI, startI + tokenStr.length)
- }
-
- if (
- !pair.label0.length ||
- !pair.label1.length ||
- pair.label0.length > 15 ||
- pair.label1.length > 15){
- pair.label0 = ''
- pair.label1 = ''
- }
-
- // console.log(i, j, pair.label0, pair.label1)
- }
-}
-
-if (window.init) init()
diff --git a/spaces/merve/uncertainty-calibration/public/measuring-diversity/columns-height.js b/spaces/merve/uncertainty-calibration/public/measuring-diversity/columns-height.js
deleted file mode 100644
index 3933c17b4bb8abe209b3573bb436c53c47543b1b..0000000000000000000000000000000000000000
--- a/spaces/merve/uncertainty-calibration/public/measuring-diversity/columns-height.js
+++ /dev/null
@@ -1,177 +0,0 @@
-window.initColumns = function(id, metrics, measures){
- var c = d3.conventions({
- sel: d3.select(id).html('').st({width: 775, margin: '0px auto', left: 27}),
- margin: {left: 260, top: 40},
- height: 600,
- })
-
- var sets = d3.range(numRows).map(i => {
- var shapes = columnShapes[i]
- shapes = _.sortBy(shapes, d => d.shape)
- shapes = _.sortBy(shapes, d => d.size)
- shapes = _.sortBy(shapes, d => d.color)
- shapes = _.sortBy(shapes, d => d.color == 'green' ? 0 : 1)
-
-
- shapes.nG = d3.sum(shapes, d => d.color == 'green')
- shapes.nB = d3.sum(shapes, d => d.color == 'blue')
- shapes.nO = d3.sum(shapes, d => d.color == 'orange')
- shapes.nR = d3.sum(shapes, d => d.color == 'red')
-
- shapes.forEach((d, i) => {
- d.i = i
- d.sizeVal = d.sizeVal < 1 ? .6 : 1
- })
- shapes.i = i
- return shapes
- })
-
- var colW = 200
- var colWpad = 50
- var colH = 20
- var colHpad = 10
- var offsetW = -20
-
- var colSel = c.svg.appendMany('g', measures)
- .translate((d, i) => [.5 + i*(colW + colWpad) + offsetW, .5])
-
- colSel.append('text').text(d => d.ranking_display_text)
- .at({y: -20, textAnchor: 'middle', x: colW/2, fontWeight: 600, })
-
- var rowSel = colSel.appendMany('g.row', sets)
- .translate(d => d.i*(colH + colHpad), 1)
-
- var colMean = colSel.filter((d, i) => i === 0)
- var colMin = colSel.filter((d, i) => i === 1)
- var scoreLabelsMean = colMean.selectAll('.row').append('text')
- .at({x: -5, y: 15, textAnchor: 'end'})
- .st({fontSize: '13px', opacity: .7})
- var scoreLabelsMin = colMin.selectAll('.row').append('text')
- .at({x: 222, y: 15, textAnchor: 'end'})
- .st({fontSize: '13px', opacity: .7})
-
- colSel.each(function(d, i){
- d.rowSel = d3.select(this).selectAll('.row')
-
- c.svg.append('marker')
- .attr('id', 'arrow')
- .attr('viewBox', '-10 -10 20 20')
- .attr('markerWidth', 20)
- .attr('markerHeight', 20)
- .attr('orient', 'auto')
- .append('path')
- .attr('d', 'M-6.75,-6.75 L 0,0 L -6.75,6.75')
- .at({fill: '#000'})
-
-
- if (i){
- var pathstr = ['M', 160, -25, 'C', 215, -25, 215, -25, 215, -5].join(' ')
- } else{
- var pathstr = ['M', 35, -25, 'C', -20, -25, -20, -25, -20, -5].join(' ')
- }
- d3.select(this).append('path')
- .at({stroke: '#000', fill: 'none', d: pathstr, markerEnd: 'url(#arrow)', strokeWidth: .6})
- })
-
-
- var s = colH
- var p = 2
-
- var l0Sel = c.svg.appendMany('path.set', sets).classed('set1', true)
- .translate(d => [colW + offsetW, s/2 + .5])
-
- drawRow(rowSel)
- function drawRow(rowSel){
- rowSel.append('rect.set.no-stroke')
- .at({x: -p, y: -p, width: colW + p*2, height: colH + p*2, fill: '#fff'}).classed('set1', true)
-
- rowSel.appendMany('g', d => d)
- .translate(d => [d.i*s + s/2, s/2])
- .each(function(d){
-
- var sOffset = 12
- var classNames = [d.shape, d.size, d.color, 'rank-item'].join(' ')
- var shapeSel = d3.select(this).append('rect')
- .at({
- x: -s/2,
- y: -s/2 + (d.size == 'small' ? sOffset/2 : 0) - .5,
- width: s - .5,
- height: s - (d.size == 'small' ? sOffset : 0),
- fill: d.fill,
- class: classNames
- })
-
- if (d.shape == 'triangle'){
- var shapeSel = d3.select(this).append('circle')
- .at({r: 2, fill: '#fff', stroke: '#000', strokeWidth: .5, class: classNames})
- }
- })
-
- }
-
- var setSel = c.svg.selectAll('.set1')
- .on('mouseover', selectSet)
-
- sets.selected = sets[0]
- function selectSet(set){
- sets.selected = set
- sets.forEach(d => d.selected = d == set)
- setSel
- .classed('selected', d => d.selected)
- .filter(d => d.selected)
- .lower()
-
- rowSel.classed('selected', d => d.selected)
-
- sliders.render()
- }
-
-
- var sliders = makeSliders(metrics, sets, c, selectSet, drawRow, () => {
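-    // On every slider change, rescore each set: a metric's score is the
-    // absolute gap between its target share and the set's actual share.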
- sets.forEach(shapes => {
- shapes.score = metrics.map(m => {
- var v = d3.sum(shapes, (d, i) => shapes[i][m.field] == m.key)
- return Math.abs(m.target - v/shapes.length)
- })
- })
-
- measures.forEach(m => {
- sets.forEach(shapes => {
- shapes[m.str] = m.fn(shapes.score)
- })
- _.sortBy(sets, d => d[m.str] + d.i/10000000)//.reverse()
- .forEach((d, i) => d['i' + m.str] = i)
-
- m.rowSel.translate(d => d['i' + m.str]*(colH + colHpad), 1)
- })
-
- var p = 0
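-    // Connect each set's rank under the mean (Utilitarian) measure to its
-    // rank under the max (Egalitarian) measure.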
- l0Sel.at({d: d => [
- 'M', p, d['iUtilitarian']*(colH + colHpad),
- 'L', colWpad - p, d['iEgalitarian']*(colH + colHpad),
- ].join(' ')})
-
-
- scoreLabelsMean.text(d => {
- return d3.format('.2f')(d['Utilitarian'])// + '%'
- })
- scoreLabelsMin.text(d => {
- return measures[1].ppFn(d['score']).replace('%', '')// + '%'
- })
- })
-
- sliders.render()
- selectSet(_.sortBy(sets, d => d.iEgalitarian)[0])
-}
-window.initColumns('#columns-height', metrics1, measures)
-window.initColumns('#columns-height-disagree', metrics2, measures2)
-
-// Only highlight green items in the second ranking chart.
-d3.select('#columns-height-disagree').selectAll('.rank-item').at({opacity: .3})
-d3.select('#columns-height-disagree').selectAll('.green').at({opacity: 1})
-
-// Only highlight the green slider in the second ranking chart.
-d3.select('#columns-height-disagree').selectAll('.slider').at({opacity: d => {
- return d.key !== 'green' ? 0.35: 1
-}})
-
diff --git a/spaces/mfrashad/CharacterGAN/models/stylegan2/stylegan2-pytorch/lpips/dist_model.py b/spaces/mfrashad/CharacterGAN/models/stylegan2/stylegan2-pytorch/lpips/dist_model.py
deleted file mode 100644
index 4ff0aa4ca6e4b217954c167787eaac1ca1f8e304..0000000000000000000000000000000000000000
--- a/spaces/mfrashad/CharacterGAN/models/stylegan2/stylegan2-pytorch/lpips/dist_model.py
+++ /dev/null
@@ -1,284 +0,0 @@
-
-from __future__ import absolute_import
-
-import sys
-import numpy as np
-import torch
-from torch import nn
-import os
-from collections import OrderedDict
-from torch.autograd import Variable
-import itertools
-from .base_model import BaseModel
-from scipy.ndimage import zoom
-import fractions
-import functools
-import skimage.transform
-from tqdm import tqdm
-
-from IPython import embed
-
-from . import networks_basic as networks
-import lpips as util
-
-class DistModel(BaseModel):
- def name(self):
- return self.model_name
-
- def initialize(self, model='net-lin', net='alex', colorspace='Lab', pnet_rand=False, pnet_tune=False, model_path=None,
- use_gpu=True, printNet=False, spatial=False,
- is_train=False, lr=.0001, beta1=0.5, version='0.1', gpu_ids=[0]):
- '''
- INPUTS
- model - ['net-lin'] for linearly calibrated network
- ['net'] for off-the-shelf network
- ['L2'] for L2 distance in Lab colorspace
- ['SSIM'] for ssim in RGB colorspace
- net - ['squeeze','alex','vgg']
- model_path - if None, will look in weights/[NET_NAME].pth
- colorspace - ['Lab','RGB'] colorspace to use for L2 and SSIM
- use_gpu - bool - whether or not to use a GPU
- printNet - bool - whether or not to print network architecture out
- spatial - bool - whether to output an array containing varying distances across spatial dimensions
- spatial_shape - if given, output spatial shape. if None then spatial shape is determined automatically via spatial_factor (see below).
- spatial_factor - if given, specifies upsampling factor relative to the largest spatial extent of a convolutional layer. if None then resized to size of input images.
- spatial_order - spline order of filter for upsampling in spatial mode, by default 1 (bilinear).
- is_train - bool - [True] for training mode
- lr - float - initial learning rate
- beta1 - float - initial momentum term for adam
- version - 0.1 for latest, 0.0 was original (with a bug)
- gpu_ids - int array - [0] by default, gpus to use
- '''
- BaseModel.initialize(self, use_gpu=use_gpu, gpu_ids=gpu_ids)
-
- self.model = model
- self.net = net
- self.is_train = is_train
- self.spatial = spatial
- self.gpu_ids = gpu_ids
- self.model_name = '%s [%s]'%(model,net)
-
- if(self.model == 'net-lin'): # pretrained net + linear layer
- self.net = networks.PNetLin(pnet_rand=pnet_rand, pnet_tune=pnet_tune, pnet_type=net,
- use_dropout=True, spatial=spatial, version=version, lpips=True)
- kw = {}
- if not use_gpu:
- kw['map_location'] = 'cpu'
- if(model_path is None):
- import inspect
- model_path = os.path.abspath(os.path.join(inspect.getfile(self.initialize), '..', 'weights/v%s/%s.pth'%(version,net)))
-
- if(not is_train):
- print('Loading model from: %s'%model_path)
- self.net.load_state_dict(torch.load(model_path, **kw), strict=False)
-
- elif(self.model=='net'): # pretrained network
- self.net = networks.PNetLin(pnet_rand=pnet_rand, pnet_type=net, lpips=False)
- elif(self.model in ['L2','l2']):
- self.net = networks.L2(use_gpu=use_gpu,colorspace=colorspace) # not really a network, only for testing
- self.model_name = 'L2'
- elif(self.model in ['DSSIM','dssim','SSIM','ssim']):
- self.net = networks.DSSIM(use_gpu=use_gpu,colorspace=colorspace)
- self.model_name = 'SSIM'
- else:
- raise ValueError("Model [%s] not recognized." % self.model)
-
- self.parameters = list(self.net.parameters())
-
- if self.is_train: # training mode
- # extra network on top to go from distances (d0,d1) => predicted human judgment (h*)
- self.rankLoss = networks.BCERankingLoss()
- self.parameters += list(self.rankLoss.net.parameters())
- self.lr = lr
- self.old_lr = lr
- self.optimizer_net = torch.optim.Adam(self.parameters, lr=lr, betas=(beta1, 0.999))
- else: # test mode
- self.net.eval()
-
- if(use_gpu):
- self.net.to(gpu_ids[0])
- self.net = torch.nn.DataParallel(self.net, device_ids=gpu_ids)
- if(self.is_train):
- self.rankLoss = self.rankLoss.to(device=gpu_ids[0]) # just put this on GPU0
-
- if(printNet):
- print('---------- Networks initialized -------------')
- networks.print_network(self.net)
- print('-----------------------------------------------')
-
- def forward(self, in0, in1, retPerLayer=False):
- ''' Function computes the distance between image patches in0 and in1
- INPUTS
- in0, in1 - torch.Tensor object of shape Nx3xXxY - image patch scaled to [-1,1]
- OUTPUT
- computed distances between in0 and in1
- '''
-
- return self.net.forward(in0, in1, retPerLayer=retPerLayer)
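-
-    # A minimal usage sketch (hypothetical tensors, shape Nx3xXxY in [-1,1]):
-    #   model = DistModel()
-    #   model.initialize(model='net-lin', net='alex', use_gpu=False)
-    #   d = model.forward(img0, img1)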
-
- # ***** TRAINING FUNCTIONS *****
- def optimize_parameters(self):
- self.forward_train()
- self.optimizer_net.zero_grad()
- self.backward_train()
- self.optimizer_net.step()
- self.clamp_weights()
-
- def clamp_weights(self):
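-        # Constrain the 1x1 linear-calibration weights to be non-negative.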
- for module in self.net.modules():
- if(hasattr(module, 'weight') and module.kernel_size==(1,1)):
- module.weight.data = torch.clamp(module.weight.data,min=0)
-
- def set_input(self, data):
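-        # Unpack a training example: reference patch, two perturbed patches,
-        # and the human judgement of which patch is closer to the reference.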
- self.input_ref = data['ref']
- self.input_p0 = data['p0']
- self.input_p1 = data['p1']
- self.input_judge = data['judge']
-
- if(self.use_gpu):
- self.input_ref = self.input_ref.to(device=self.gpu_ids[0])
- self.input_p0 = self.input_p0.to(device=self.gpu_ids[0])
- self.input_p1 = self.input_p1.to(device=self.gpu_ids[0])
- self.input_judge = self.input_judge.to(device=self.gpu_ids[0])
-
- self.var_ref = Variable(self.input_ref,requires_grad=True)
- self.var_p0 = Variable(self.input_p0,requires_grad=True)
- self.var_p1 = Variable(self.input_p1,requires_grad=True)
-
- def forward_train(self): # run forward pass
- # print(self.net.module.scaling_layer.shift)
- # print(torch.norm(self.net.module.net.slice1[0].weight).item(), torch.norm(self.net.module.lin0.model[1].weight).item())
-
- self.d0 = self.forward(self.var_ref, self.var_p0)
- self.d1 = self.forward(self.var_ref, self.var_p1)
- self.acc_r = self.compute_accuracy(self.d0,self.d1,self.input_judge)
-
- self.var_judge = Variable(1.*self.input_judge).view(self.d0.size())
-
- self.loss_total = self.rankLoss.forward(self.d0, self.d1, self.var_judge*2.-1.)
-
- return self.loss_total
-
- def backward_train(self):
- torch.mean(self.loss_total).backward()
-
- def compute_accuracy(self,d0,d1,judge):
- ''' d0, d1 are Variables, judge is a Tensor '''
-        d1_lt_d0 = (d1<d0).cpu().data.numpy().flatten()
-        judge_per = judge.cpu().numpy().flatten()
-        return d1_lt_d0*judge_per + (1-d1_lt_d0)*(1-judge_per)
-
-    def update_learning_rate(self,nepoch_decay):
-        lrd = self.lr / nepoch_decay
-        lr = self.old_lr - lrd
-        for param_group in self.optimizer_net.param_groups:
-            param_group['lr'] = lr
-        print('update lr [%s] decay: %f -> %f' % (type,self.old_lr, lr))
-        self.old_lr = lr
-
-def score_2afc_dataset(data_loader, func, name=''):
- ''' Function computes Two Alternative Forced Choice (2AFC) score using
- distance function 'func' in dataset 'data_loader'
- INPUTS
- data_loader - CustomDatasetDataLoader object - contains a TwoAFCDataset inside
- func - callable distance function - calling d=func(in0,in1) should take 2
- pytorch tensors with shape Nx3xXxY, and return numpy array of length N
- OUTPUTS
- [0] - 2AFC score in [0,1], fraction of time func agrees with human evaluators
- [1] - dictionary with following elements
- d0s,d1s - N arrays containing distances between reference patch to perturbed patches
- gts - N array in [0,1], preferred patch selected by human evaluators
- (closer to "0" for left patch p0, "1" for right patch p1,
- "0.6" means 60pct people preferred right patch, 40pct preferred left)
- scores - N array in [0,1], corresponding to what percentage function agreed with humans
- CONSTS
- N - number of test triplets in data_loader
- '''
-
- d0s = []
- d1s = []
- gts = []
-
- for data in tqdm(data_loader.load_data(), desc=name):
- d0s+=func(data['ref'],data['p0']).data.cpu().numpy().flatten().tolist()
- d1s+=func(data['ref'],data['p1']).data.cpu().numpy().flatten().tolist()
- gts+=data['judge'].cpu().numpy().flatten().tolist()
-
- d0s = np.array(d0s)
- d1s = np.array(d1s)
- gts = np.array(gts)
- scores = (d0st in e?hc(e,t,{enumerable:!0,configurable:!0,writable:!0,value:n}):e[t]=n;var Et=(e,t,n)=>(yc(e,typeof t!="symbol"?t+"":t,n),n);(function(){const t=document.createElement("link").relList;if(t&&t.supports&&t.supports("modulepreload"))return;for(const l of document.querySelectorAll('link[rel="modulepreload"]'))r(l);new MutationObserver(l=>{for(const i of l)if(i.type==="childList")for(const o of i.addedNodes)o.tagName==="LINK"&&o.rel==="modulepreload"&&r(o)}).observe(document,{childList:!0,subtree:!0});function n(l){const i={};return l.integrity&&(i.integrity=l.integrity),l.referrerPolicy&&(i.referrerPolicy=l.referrerPolicy),l.crossOrigin==="use-credentials"?i.credentials="include":l.crossOrigin==="anonymous"?i.credentials="omit":i.credentials="same-origin",i}function r(l){if(l.ep)return;l.ep=!0;const i=n(l);fetch(l.href,i)}})();var Mr={},vc={get exports(){return Mr},set exports(e){Mr=e}},ul={},ne={},gc={get exports(){return ne},set exports(e){ne=e}},T={};/**
- * @license React
- * react.production.min.js
- *
- * Copyright (c) Facebook, Inc. and its affiliates.
- *
- * This source code is licensed under the MIT license found in the
- * LICENSE file in the root directory of this source tree.
- */var bn=Symbol.for("react.element"),wc=Symbol.for("react.portal"),kc=Symbol.for("react.fragment"),Sc=Symbol.for("react.strict_mode"),Ec=Symbol.for("react.profiler"),xc=Symbol.for("react.provider"),_c=Symbol.for("react.context"),Cc=Symbol.for("react.forward_ref"),Nc=Symbol.for("react.suspense"),Pc=Symbol.for("react.memo"),zc=Symbol.for("react.lazy"),Qo=Symbol.iterator;function Oc(e){return e===null||typeof e!="object"?null:(e=Qo&&e[Qo]||e["@@iterator"],typeof e=="function"?e:null)}var ns={isMounted:function(){return!1},enqueueForceUpdate:function(){},enqueueReplaceState:function(){},enqueueSetState:function(){}},rs=Object.assign,ls={};function cn(e,t,n){this.props=e,this.context=t,this.refs=ls,this.updater=n||ns}cn.prototype.isReactComponent={};cn.prototype.setState=function(e,t){if(typeof e!="object"&&typeof e!="function"&&e!=null)throw Error("setState(...): takes an object of state variables to update or a function which returns an object of state variables.");this.updater.enqueueSetState(this,e,t,"setState")};cn.prototype.forceUpdate=function(e){this.updater.enqueueForceUpdate(this,e,"forceUpdate")};function is(){}is.prototype=cn.prototype;function Xi(e,t,n){this.props=e,this.context=t,this.refs=ls,this.updater=n||ns}var Yi=Xi.prototype=new is;Yi.constructor=Xi;rs(Yi,cn.prototype);Yi.isPureReactComponent=!0;var Ko=Array.isArray,os=Object.prototype.hasOwnProperty,Gi={current:null},us={key:!0,ref:!0,__self:!0,__source:!0};function ss(e,t,n){var r,l={},i=null,o=null;if(t!=null)for(r in t.ref!==void 0&&(o=t.ref),t.key!==void 0&&(i=""+t.key),t)os.call(t,r)&&!us.hasOwnProperty(r)&&(l[r]=t[r]);var u=arguments.length-2;if(u===1)l.children=n;else if(1]+)>;\s+rel="([^"]+)"/g;return Object.fromEntries([...e.matchAll(t)].map(([,n,r])=>[r,n]))}var Qc=["pipeline_tag","private","gated","downloads","likes"];async function*Kc(e){var r,l;Hc(e==null?void 0:e.credentials);const t=new URLSearchParams([...Object.entries({limit:"500",...(r=e==null?void 0:e.search)!=null&&r.owner?{author:e.search.owner}:void 0,...(l=e==null?void 0:e.search)!=null&&l.task?{pipeline_tag:e.search.task}:void 0}),...Qc.map(i=>["expand",i])]).toString();let n=`${(e==null?void 0:e.hubUrl)||$c}/api/models?${t}`;for(;n;){const i=await fetch(n,{headers:{accept:"application/json",...e!=null&&e.credentials?{Authorization:`Bearer ${e.credentials.accessToken}`}:void 0}});if(!i.ok)throw Vc(i);const o=await i.json();for(const s of o)yield{id:s._id,name:s.id,private:s.private,task:s.pipeline_tag,downloads:s.downloads,gated:s.gated,likes:s.likes,updatedAt:new Date(s.lastModified)};const u=i.headers.get("Link");n=u?Wc(u).next:void 0}}var Xc=Object.defineProperty,Yc=(e,t)=>{for(var n in t)Xc(e,n,{get:t[n],enumerable:!0})},Ji={};Yc(Ji,{audioClassification:()=>bc,automaticSpeechRecognition:()=>ef,conversational:()=>uf,featureExtraction:()=>sf,fillMask:()=>af,imageClassification:()=>tf,imageSegmentation:()=>nf,imageToText:()=>rf,objectDetection:()=>lf,questionAnswering:()=>cf,request:()=>K,sentenceSimilarity:()=>ff,streamingRequest:()=>qi,summarization:()=>df,tableQuestionAnswering:()=>pf,textClassification:()=>mf,textGeneration:()=>hf,textGenerationStream:()=>yf,textToImage:()=>of,tokenClassification:()=>vf,translation:()=>gf,zeroShotClassification:()=>wf});var Gc="https://api-inference.huggingface.co/models/";function cs(e,t){const{model:n,accessToken:r,...l}=e,i={};r&&(i.Authorization=`Bearer ${r}`);const o="data"in e&&!!e.data;o?(t!=null&&t.wait_for_model&&(i["X-Wait-For-Model"]="true"),(t==null?void 
0:t.use_cache)===!1&&(i["X-Use-Cache"]="false"),t!=null&&t.dont_load_model&&(i["X-Load-Model"]="0")):i["Content-Type"]="application/json";const u=/^http(s?):/.test(n)||n.startsWith("/")?n:`${Gc}${n}`,s={headers:i,method:"POST",body:o?e.data:JSON.stringify({...l,options:t}),credentials:t!=null&&t.includeCredentials?"include":"same-origin"};return{url:u,info:s}}async function K(e,t){var i,o;const{url:n,info:r}=cs(e,t),l=await fetch(n,r);if((t==null?void 0:t.retry_on_error)!==!1&&l.status===503&&!(t!=null&&t.wait_for_model))return K(e,{...t,wait_for_model:!0});if(!l.ok){if((i=l.headers.get("Content-Type"))!=null&&i.startsWith("application/json")){const u=await l.json();if(u.error)throw new Error(u.error)}throw new Error("An error occurred while fetching the blob")}return(o=l.headers.get("Content-Type"))!=null&&o.startsWith("application/json")?await l.json():await l.blob()}function Zc(e){let t,n,r,l=!1;return function(o){t===void 0?(t=o,n=0,r=-1):t=qc(t,o);const u=t.length;let s=0;for(;n0){const s=l.decode(o.subarray(0,u)),c=u+(o[u+1]===32?2:1),m=l.decode(o.subarray(c));switch(s){case"data":r.data=r.data?r.data+`
-`+m:m;break;case"event":r.event=m;break;case"id":e(r.id=m);break;case"retry":const h=parseInt(m,10);isNaN(h)||t(r.retry=h);break}}}}function qc(e,t){const n=new Uint8Array(e.length+t.length);return n.set(e),n.set(t,e.length),n}function Yo(){return{data:"",event:"",id:"",retry:void 0}}async function*qi(e,t){var c;const{url:n,info:r}=cs({...e,stream:!0},t),l=await fetch(n,r);if((t==null?void 0:t.retry_on_error)!==!1&&l.status===503&&!(t!=null&&t.wait_for_model))return qi(e,{...t,wait_for_model:!0});if(!l.ok){if((c=l.headers.get("Content-Type"))!=null&&c.startsWith("application/json")){const m=await l.json();if(m.error)throw new Error(m.error)}throw new Error(`Server response contains error: ${l.status}`)}if(l.headers.get("content-type")!=="text/event-stream")throw new Error("Server does not support event stream content type, it returned "+l.headers.get("content-type"));if(!l.body)return;const i=l.body.getReader();let o=[];const s=Zc(Jc(()=>{},()=>{},m=>{o.push(m)}));try{for(;;){const{done:m,value:h}=await i.read();if(m)return;s(h);for(const p of o)p.data.length>0&&(yield JSON.parse(p.data));o=[]}}finally{i.releaseLock()}}var J=class extends TypeError{constructor(e){super(`Invalid inference output: ${e}. Use the 'request' method with the same parameters to do a custom call with no type checking.`),this.name="InferenceOutputError"}};async function bc(e,t){const n=await K(e,t);if(!(Array.isArray(n)&&n.every(l=>typeof l.label=="string"&&typeof l.score=="number")))throw new J("Expected Array<{label: string, score: number}>");return n}async function ef(e,t){const n=await K(e,t);if(!(typeof(n==null?void 0:n.text)=="string"))throw new J("Expected {text: string}");return n}async function tf(e,t){const n=await K(e,t);if(!(Array.isArray(n)&&n.every(l=>typeof l.label=="string"&&typeof l.score=="number")))throw new J("Expected Array<{label: string, score: number}>");return n}async function nf(e,t){const n=await K(e,t);if(!(Array.isArray(n)&&n.every(l=>typeof l.label=="string"&&typeof l.mask=="string"&&typeof l.score=="number")))throw new J("Expected Array<{label: string, mask: string, score: number}>");return n}async function rf(e,t){var r;const n=(r=await K(e,t))==null?void 0:r[0];if(typeof(n==null?void 0:n.generated_text)!="string")throw new J("Expected {generated_text: string}");return n}async function lf(e,t){const n=await K(e,t);if(!(Array.isArray(n)&&n.every(l=>typeof l.label=="string"&&typeof l.score=="number"&&typeof l.box.xmin=="number"&&typeof l.box.ymin=="number"&&typeof l.box.xmax=="number"&&typeof l.box.ymax=="number")))throw new J("Expected Array<{label:string; score:number; box:{xmin:number; ymin:number; xmax:number; ymax:number}}>");return n}async function of(e,t){const n=await K(e,t);if(!(n&&n instanceof Blob))throw new J("Expected Blob");return n}async function uf(e,t){const n=await K(e,t);if(!(Array.isArray(n.conversation.generated_responses)&&n.conversation.generated_responses.every(l=>typeof l=="string")&&Array.isArray(n.conversation.past_user_inputs)&&n.conversation.past_user_inputs.every(l=>typeof l=="string")&&typeof n.generated_text=="string"&&Array.isArray(n.warnings)&&n.warnings.every(l=>typeof l=="string")))throw new J("Expected {conversation: {generated_responses: string[], past_user_inputs: string[]}, generated_text: string, warnings: string[]}");return n}async function sf(e,t){const n=await K(e,t);let r=!0;if(Array.isArray(n)){for(const l of n)if(Array.isArray(l)){if(r=l.every(i=>typeof i=="number"),!r)break}else if(typeof l!="number"){r=!1;break}}else r=!1;if(!r)throw new 
J("Expected Array");return n}async function af(e,t){const n=await K(e,t);if(!(Array.isArray(n)&&n.every(l=>typeof l.score=="number"&&typeof l.sequence=="string"&&typeof l.token=="number"&&typeof l.token_str=="string")))throw new J("Expected Array<{score: number, sequence: string, token: number, token_str: string}>");return n}async function cf(e,t){const n=await K(e,t);if(!(typeof(n==null?void 0:n.answer)=="string"&&typeof n.end=="number"&&typeof n.score=="number"&&typeof n.start=="number"))throw new J("Expected {answer: string, end: number, score: number, start: number}");return n}async function ff(e,t){const n=await K(e,t);if(!(Array.isArray(n)&&n.every(l=>typeof l=="number")))throw new J("Expected number[]");return n}async function df(e,t){const n=await K(e,t);if(!(Array.isArray(n)&&n.every(l=>typeof(l==null?void 0:l.summary_text)=="string")))throw new J("Expected Array<{summary_text: string}>");return n==null?void 0:n[0]}async function pf(e,t){const n=await K(e,t);if(!(typeof(n==null?void 0:n.aggregator)=="string"&&typeof n.answer=="string"&&Array.isArray(n.cells)&&n.cells.every(l=>typeof l=="string")&&Array.isArray(n.coordinates)&&n.coordinates.every(l=>Array.isArray(l)&&l.every(i=>typeof i=="number"))))throw new J("Expected {aggregator: string, answer: string, cells: string[], coordinates: number[][]}");return n}async function mf(e,t){var l;const n=(l=await K(e,t))==null?void 0:l[0];if(!(Array.isArray(n)&&n.every(i=>typeof(i==null?void 0:i.label)=="string"&&typeof i.score=="number")))throw new J("Expected Array<{label: string, score: number}>");return n}async function hf(e,t){const n=await K(e,t);if(!(Array.isArray(n)&&n.every(l=>typeof(l==null?void 0:l.generated_text)=="string")))throw new J("Expected Array<{generated_text: string}>");return n==null?void 0:n[0]}async function*yf(e,t){yield*qi(e,t)}function fs(e){return Array.isArray(e)?e:[e]}async function vf(e,t){const n=fs(await K(e,t));if(!(Array.isArray(n)&&n.every(l=>typeof l.end=="number"&&typeof l.entity_group=="string"&&typeof l.score=="number"&&typeof l.start=="number"&&typeof l.word=="string")))throw new J("Expected Array<{end: number, entity_group: string, score: number, start: number, word: string}>");return n}async function gf(e,t){const n=await K(e,t);if(!(Array.isArray(n)&&n.every(l=>typeof(l==null?void 0:l.translation_text)=="string")))throw new J("Expected type Array<{translation_text: string}>");return n==null?void 0:n[0]}async function wf(e,t){const n=fs(await K(e,t));if(!(Array.isArray(n)&&n.every(l=>Array.isArray(l.labels)&&l.labels.every(i=>typeof i=="string")&&Array.isArray(l.scores)&&l.scores.every(i=>typeof i=="number")&&typeof l.sequence=="string")))throw new J("Expected Array<{labels: string[], scores: number[], sequence: string}>");return n}var kf=class{constructor(e="",t={}){Et(this,"accessToken");Et(this,"defaultOptions");this.accessToken=e,this.defaultOptions=t;for(const[n,r]of Object.entries(Ji))Object.defineProperty(this,n,{enumerable:!1,value:(l,i)=>r({...l,accessToken:e},{...t,...i})})}endpoint(e){return new Sf(e,this.accessToken,this.defaultOptions)}},Sf=class{constructor(e,t="",n={}){for(const[r,l]of Object.entries(Ji))Object.defineProperty(this,r,{enumerable:!1,value:(i,o)=>l({...i,accessToken:t,model:e},{...n,...o})})}},jr=function(){return jr=Object.assign||function(t){for(var n,r=1,l=arguments.length;r0&&n>="0"&&n<="9"?"_"+n+r:""+n.toUpperCase()+r}function Nf(e,t){return t===void 0&&(t={}),Cf(e,jr({delimiter:"",transform:ds},t))}function Pf(e,t){return t===0?e.toLowerCase():ds(e,t)}function 
zf(e,t){return t===void 0&&(t={}),Nf(e,jr({transform:Pf},t))}var bl={},Of={get exports(){return bl},set exports(e){bl=e}},Ee={},ei={},Tf={get exports(){return ei},set exports(e){ei=e}},ps={};/**
- * @license React
- * scheduler.production.min.js
- *
- * Copyright (c) Facebook, Inc. and its affiliates.
- *
- * This source code is licensed under the MIT license found in the
- * LICENSE file in the root directory of this source tree.
- */(function(e){function t(x,z){var O=x.length;x.push(z);e:for(;0>>1,q=x[W];if(0>>1;Wl(Cl,O))Stl(ir,Cl)?(x[W]=ir,x[St]=O,W=St):(x[W]=Cl,x[kt]=O,W=kt);else if(Stl(ir,O))x[W]=ir,x[St]=O,W=St;else break e}}return z}function l(x,z){var O=x.sortIndex-z.sortIndex;return O!==0?O:x.id-z.id}if(typeof performance=="object"&&typeof performance.now=="function"){var i=performance;e.unstable_now=function(){return i.now()}}else{var o=Date,u=o.now();e.unstable_now=function(){return o.now()-u}}var s=[],c=[],m=1,h=null,p=3,g=!1,w=!1,k=!1,D=typeof setTimeout=="function"?setTimeout:null,f=typeof clearTimeout=="function"?clearTimeout:null,a=typeof setImmediate<"u"?setImmediate:null;typeof navigator<"u"&&navigator.scheduling!==void 0&&navigator.scheduling.isInputPending!==void 0&&navigator.scheduling.isInputPending.bind(navigator.scheduling);function d(x){for(var z=n(c);z!==null;){if(z.callback===null)r(c);else if(z.startTime<=x)r(c),z.sortIndex=z.expirationTime,t(s,z);else break;z=n(c)}}function y(x){if(k=!1,d(x),!w)if(n(s)!==null)w=!0,xl(E);else{var z=n(c);z!==null&&_l(y,z.startTime-x)}}function E(x,z){w=!1,k&&(k=!1,f(N),N=-1),g=!0;var O=p;try{for(d(z),h=n(s);h!==null&&(!(h.expirationTime>z)||x&&!Le());){var W=h.callback;if(typeof W=="function"){h.callback=null,p=h.priorityLevel;var q=W(h.expirationTime<=z);z=e.unstable_now(),typeof q=="function"?h.callback=q:h===n(s)&&r(s),d(z)}else r(s);h=n(s)}if(h!==null)var lr=!0;else{var kt=n(c);kt!==null&&_l(y,kt.startTime-z),lr=!1}return lr}finally{h=null,p=O,g=!1}}var _=!1,C=null,N=-1,H=5,L=-1;function Le(){return!(e.unstable_now()-Lx||125W?(x.sortIndex=O,t(c,x),n(s)===null&&x===n(c)&&(k?(f(N),N=-1):k=!0,_l(y,O-W))):(x.sortIndex=q,t(s,x),w||g||(w=!0,xl(E))),x},e.unstable_shouldYield=Le,e.unstable_wrapCallback=function(x){var z=p;return function(){var O=p;p=z;try{return x.apply(this,arguments)}finally{p=O}}}})(ps);(function(e){e.exports=ps})(Tf);/**
- * @license React
- * react-dom.production.min.js
- *
- * Copyright (c) Facebook, Inc. and its affiliates.
- *
- * This source code is licensed under the MIT license found in the
- * LICENSE file in the root directory of this source tree.
- */var ms=ne,Se=ei;function v(e){for(var t="https://reactjs.org/docs/error-decoder.html?invariant="+e,n=1;n"u"||typeof window.document>"u"||typeof window.document.createElement>"u"),ti=Object.prototype.hasOwnProperty,Lf=/^[:A-Z_a-z\u00C0-\u00D6\u00D8-\u00F6\u00F8-\u02FF\u0370-\u037D\u037F-\u1FFF\u200C-\u200D\u2070-\u218F\u2C00-\u2FEF\u3001-\uD7FF\uF900-\uFDCF\uFDF0-\uFFFD][:A-Z_a-z\u00C0-\u00D6\u00D8-\u00F6\u00F8-\u02FF\u0370-\u037D\u037F-\u1FFF\u200C-\u200D\u2070-\u218F\u2C00-\u2FEF\u3001-\uD7FF\uF900-\uFDCF\uFDF0-\uFFFD\-.0-9\u00B7\u0300-\u036F\u203F-\u2040]*$/,Zo={},Jo={};function Rf(e){return ti.call(Jo,e)?!0:ti.call(Zo,e)?!1:Lf.test(e)?Jo[e]=!0:(Zo[e]=!0,!1)}function If(e,t,n,r){if(n!==null&&n.type===0)return!1;switch(typeof t){case"function":case"symbol":return!0;case"boolean":return r?!1:n!==null?!n.acceptsBooleans:(e=e.toLowerCase().slice(0,5),e!=="data-"&&e!=="aria-");default:return!1}}function Af(e,t,n,r){if(t===null||typeof t>"u"||If(e,t,n,r))return!0;if(r)return!1;if(n!==null)switch(n.type){case 3:return!t;case 4:return t===!1;case 5:return isNaN(t);case 6:return isNaN(t)||1>t}return!1}function de(e,t,n,r,l,i,o){this.acceptsBooleans=t===2||t===3||t===4,this.attributeName=r,this.attributeNamespace=l,this.mustUseProperty=n,this.propertyName=e,this.type=t,this.sanitizeURL=i,this.removeEmptyString=o}var le={};"children dangerouslySetInnerHTML defaultValue defaultChecked innerHTML suppressContentEditableWarning suppressHydrationWarning style".split(" ").forEach(function(e){le[e]=new de(e,0,!1,e,null,!1,!1)});[["acceptCharset","accept-charset"],["className","class"],["htmlFor","for"],["httpEquiv","http-equiv"]].forEach(function(e){var t=e[0];le[t]=new de(t,1,!1,e[1],null,!1,!1)});["contentEditable","draggable","spellCheck","value"].forEach(function(e){le[e]=new de(e,2,!1,e.toLowerCase(),null,!1,!1)});["autoReverse","externalResourcesRequired","focusable","preserveAlpha"].forEach(function(e){le[e]=new de(e,2,!1,e,null,!1,!1)});"allowFullScreen async autoFocus autoPlay controls default defer disabled disablePictureInPicture disableRemotePlayback formNoValidate hidden loop noModule noValidate open playsInline readOnly required reversed scoped seamless itemScope".split(" ").forEach(function(e){le[e]=new de(e,3,!1,e.toLowerCase(),null,!1,!1)});["checked","multiple","muted","selected"].forEach(function(e){le[e]=new de(e,3,!0,e,null,!1,!1)});["capture","download"].forEach(function(e){le[e]=new de(e,4,!1,e,null,!1,!1)});["cols","rows","size","span"].forEach(function(e){le[e]=new de(e,6,!1,e,null,!1,!1)});["rowSpan","start"].forEach(function(e){le[e]=new de(e,5,!1,e.toLowerCase(),null,!1,!1)});var bi=/[\-:]([a-z])/g;function eo(e){return e[1].toUpperCase()}"accent-height alignment-baseline arabic-form baseline-shift cap-height clip-path clip-rule color-interpolation color-interpolation-filters color-profile color-rendering dominant-baseline enable-background fill-opacity fill-rule flood-color flood-opacity font-family font-size font-size-adjust font-stretch font-style font-variant font-weight glyph-name glyph-orientation-horizontal glyph-orientation-vertical horiz-adv-x horiz-origin-x image-rendering letter-spacing lighting-color marker-end marker-mid marker-start overline-position overline-thickness paint-order panose-1 pointer-events rendering-intent shape-rendering stop-color stop-opacity strikethrough-position strikethrough-thickness stroke-dasharray stroke-dashoffset stroke-linecap stroke-linejoin stroke-miterlimit stroke-opacity stroke-width text-anchor text-decoration text-rendering 
underline-position underline-thickness unicode-bidi unicode-range units-per-em v-alphabetic v-hanging v-ideographic v-mathematical vector-effect vert-adv-y vert-origin-x vert-origin-y word-spacing writing-mode xmlns:xlink x-height".split(" ").forEach(function(e){var t=e.replace(bi,eo);le[t]=new de(t,1,!1,e,null,!1,!1)});"xlink:actuate xlink:arcrole xlink:role xlink:show xlink:title xlink:type".split(" ").forEach(function(e){var t=e.replace(bi,eo);le[t]=new de(t,1,!1,e,"http://www.w3.org/1999/xlink",!1,!1)});["xml:base","xml:lang","xml:space"].forEach(function(e){var t=e.replace(bi,eo);le[t]=new de(t,1,!1,e,"http://www.w3.org/XML/1998/namespace",!1,!1)});["tabIndex","crossOrigin"].forEach(function(e){le[e]=new de(e,1,!1,e.toLowerCase(),null,!1,!1)});le.xlinkHref=new de("xlinkHref",1,!1,"xlink:href","http://www.w3.org/1999/xlink",!0,!1);["src","href","action","formAction"].forEach(function(e){le[e]=new de(e,1,!1,e.toLowerCase(),null,!0,!0)});function to(e,t,n,r){var l=le.hasOwnProperty(t)?le[t]:null;(l!==null?l.type!==0:r||!(2u||l[o]!==i[u]){var s=`
-`+l[o].replace(" at new "," at ");return e.displayName&&s.includes("")&&(s=s.replace("",e.displayName)),s}while(1<=o&&0<=u);break}}}finally{zl=!1,Error.prepareStackTrace=n}return(e=e?e.displayName||e.name:"")?xn(e):""}function Mf(e){switch(e.tag){case 5:return xn(e.type);case 16:return xn("Lazy");case 13:return xn("Suspense");case 19:return xn("SuspenseList");case 0:case 2:case 15:return e=Ol(e.type,!1),e;case 11:return e=Ol(e.type.render,!1),e;case 1:return e=Ol(e.type,!0),e;default:return""}}function ii(e){if(e==null)return null;if(typeof e=="function")return e.displayName||e.name||null;if(typeof e=="string")return e;switch(e){case Ut:return"Fragment";case Ft:return"Portal";case ni:return"Profiler";case no:return"StrictMode";case ri:return"Suspense";case li:return"SuspenseList"}if(typeof e=="object")switch(e.$$typeof){case vs:return(e.displayName||"Context")+".Consumer";case ys:return(e._context.displayName||"Context")+".Provider";case ro:var t=e.render;return e=e.displayName,e||(e=t.displayName||t.name||"",e=e!==""?"ForwardRef("+e+")":"ForwardRef"),e;case lo:return t=e.displayName||null,t!==null?t:ii(e.type)||"Memo";case tt:t=e._payload,e=e._init;try{return ii(e(t))}catch{}}return null}function jf(e){var t=e.type;switch(e.tag){case 24:return"Cache";case 9:return(t.displayName||"Context")+".Consumer";case 10:return(t._context.displayName||"Context")+".Provider";case 18:return"DehydratedFragment";case 11:return e=t.render,e=e.displayName||e.name||"",t.displayName||(e!==""?"ForwardRef("+e+")":"ForwardRef");case 7:return"Fragment";case 5:return t;case 4:return"Portal";case 3:return"Root";case 6:return"Text";case 16:return ii(t);case 8:return t===no?"StrictMode":"Mode";case 22:return"Offscreen";case 12:return"Profiler";case 21:return"Scope";case 13:return"Suspense";case 19:return"SuspenseList";case 25:return"TracingMarker";case 1:case 0:case 17:case 2:case 14:case 15:if(typeof t=="function")return t.displayName||t.name||null;if(typeof t=="string")return t}return null}function ht(e){switch(typeof e){case"boolean":case"number":case"string":case"undefined":return e;case"object":return e;default:return""}}function ws(e){var t=e.type;return(e=e.nodeName)&&e.toLowerCase()==="input"&&(t==="checkbox"||t==="radio")}function Df(e){var t=ws(e)?"checked":"value",n=Object.getOwnPropertyDescriptor(e.constructor.prototype,t),r=""+e[t];if(!e.hasOwnProperty(t)&&typeof n<"u"&&typeof n.get=="function"&&typeof n.set=="function"){var l=n.get,i=n.set;return Object.defineProperty(e,t,{configurable:!0,get:function(){return l.call(this)},set:function(o){r=""+o,i.call(this,o)}}),Object.defineProperty(e,t,{enumerable:n.enumerable}),{getValue:function(){return r},setValue:function(o){r=""+o},stopTracking:function(){e._valueTracker=null,delete e[t]}}}}function sr(e){e._valueTracker||(e._valueTracker=Df(e))}function ks(e){if(!e)return!1;var t=e._valueTracker;if(!t)return!0;var n=t.getValue(),r="";return e&&(r=ws(e)?e.checked?"true":"false":e.value),e=r,e!==n?(t.setValue(e),!0):!1}function Dr(e){if(e=e||(typeof document<"u"?document:void 0),typeof e>"u")return null;try{return e.activeElement||e.body}catch{return e.body}}function oi(e,t){var n=t.checked;return V({},t,{defaultChecked:void 0,defaultValue:void 0,value:void 0,checked:n??e._wrapperState.initialChecked})}function bo(e,t){var 
n=t.defaultValue==null?"":t.defaultValue,r=t.checked!=null?t.checked:t.defaultChecked;n=ht(t.value!=null?t.value:n),e._wrapperState={initialChecked:r,initialValue:n,controlled:t.type==="checkbox"||t.type==="radio"?t.checked!=null:t.value!=null}}function Ss(e,t){t=t.checked,t!=null&&to(e,"checked",t,!1)}function ui(e,t){Ss(e,t);var n=ht(t.value),r=t.type;if(n!=null)r==="number"?(n===0&&e.value===""||e.value!=n)&&(e.value=""+n):e.value!==""+n&&(e.value=""+n);else if(r==="submit"||r==="reset"){e.removeAttribute("value");return}t.hasOwnProperty("value")?si(e,t.type,n):t.hasOwnProperty("defaultValue")&&si(e,t.type,ht(t.defaultValue)),t.checked==null&&t.defaultChecked!=null&&(e.defaultChecked=!!t.defaultChecked)}function eu(e,t,n){if(t.hasOwnProperty("value")||t.hasOwnProperty("defaultValue")){var r=t.type;if(!(r!=="submit"&&r!=="reset"||t.value!==void 0&&t.value!==null))return;t=""+e._wrapperState.initialValue,n||t===e.value||(e.value=t),e.defaultValue=t}n=e.name,n!==""&&(e.name=""),e.defaultChecked=!!e._wrapperState.initialChecked,n!==""&&(e.name=n)}function si(e,t,n){(t!=="number"||Dr(e.ownerDocument)!==e)&&(n==null?e.defaultValue=""+e._wrapperState.initialValue:e.defaultValue!==""+n&&(e.defaultValue=""+n))}var _n=Array.isArray;function Zt(e,t,n,r){if(e=e.options,t){t={};for(var l=0;l"+t.valueOf().toString()+"",t=ar.firstChild;e.firstChild;)e.removeChild(e.firstChild);for(;t.firstChild;)e.appendChild(t.firstChild)}});function Dn(e,t){if(t){var n=e.firstChild;if(n&&n===e.lastChild&&n.nodeType===3){n.nodeValue=t;return}}e.textContent=t}var Pn={animationIterationCount:!0,aspectRatio:!0,borderImageOutset:!0,borderImageSlice:!0,borderImageWidth:!0,boxFlex:!0,boxFlexGroup:!0,boxOrdinalGroup:!0,columnCount:!0,columns:!0,flex:!0,flexGrow:!0,flexPositive:!0,flexShrink:!0,flexNegative:!0,flexOrder:!0,gridArea:!0,gridRow:!0,gridRowEnd:!0,gridRowSpan:!0,gridRowStart:!0,gridColumn:!0,gridColumnEnd:!0,gridColumnSpan:!0,gridColumnStart:!0,fontWeight:!0,lineClamp:!0,lineHeight:!0,opacity:!0,order:!0,orphans:!0,tabSize:!0,widows:!0,zIndex:!0,zoom:!0,fillOpacity:!0,floodOpacity:!0,stopOpacity:!0,strokeDasharray:!0,strokeDashoffset:!0,strokeMiterlimit:!0,strokeOpacity:!0,strokeWidth:!0},Ff=["Webkit","ms","Moz","O"];Object.keys(Pn).forEach(function(e){Ff.forEach(function(t){t=t+e.charAt(0).toUpperCase()+e.substring(1),Pn[t]=Pn[e]})});function Cs(e,t,n){return t==null||typeof t=="boolean"||t===""?"":n||typeof t!="number"||t===0||Pn.hasOwnProperty(e)&&Pn[e]?(""+t).trim():t+"px"}function Ns(e,t){e=e.style;for(var n in t)if(t.hasOwnProperty(n)){var r=n.indexOf("--")===0,l=Cs(n,t[n],r);n==="float"&&(n="cssFloat"),r?e.setProperty(n,l):e[n]=l}}var Uf=V({menuitem:!0},{area:!0,base:!0,br:!0,col:!0,embed:!0,hr:!0,img:!0,input:!0,keygen:!0,link:!0,meta:!0,param:!0,source:!0,track:!0,wbr:!0});function fi(e,t){if(t){if(Uf[e]&&(t.children!=null||t.dangerouslySetInnerHTML!=null))throw Error(v(137,e));if(t.dangerouslySetInnerHTML!=null){if(t.children!=null)throw Error(v(60));if(typeof t.dangerouslySetInnerHTML!="object"||!("__html"in t.dangerouslySetInnerHTML))throw Error(v(61))}if(t.style!=null&&typeof t.style!="object")throw Error(v(62))}}function di(e,t){if(e.indexOf("-")===-1)return typeof t.is=="string";switch(e){case"annotation-xml":case"color-profile":case"font-face":case"font-face-src":case"font-face-uri":case"font-face-format":case"font-face-name":case"missing-glyph":return!1;default:return!0}}var pi=null;function io(e){return 
e=e.target||e.srcElement||window,e.correspondingUseElement&&(e=e.correspondingUseElement),e.nodeType===3?e.parentNode:e}var mi=null,Jt=null,qt=null;function ru(e){if(e=nr(e)){if(typeof mi!="function")throw Error(v(280));var t=e.stateNode;t&&(t=dl(t),mi(e.stateNode,e.type,t))}}function Ps(e){Jt?qt?qt.push(e):qt=[e]:Jt=e}function zs(){if(Jt){var e=Jt,t=qt;if(qt=Jt=null,ru(e),t)for(e=0;e>>=0,e===0?32:31-(Zf(e)/Jf|0)|0}var cr=64,fr=4194304;function Cn(e){switch(e&-e){case 1:return 1;case 2:return 2;case 4:return 4;case 8:return 8;case 16:return 16;case 32:return 32;case 64:case 128:case 256:case 512:case 1024:case 2048:case 4096:case 8192:case 16384:case 32768:case 65536:case 131072:case 262144:case 524288:case 1048576:case 2097152:return e&4194240;case 4194304:case 8388608:case 16777216:case 33554432:case 67108864:return e&130023424;case 134217728:return 134217728;case 268435456:return 268435456;case 536870912:return 536870912;case 1073741824:return 1073741824;default:return e}}function Vr(e,t){var n=e.pendingLanes;if(n===0)return 0;var r=0,l=e.suspendedLanes,i=e.pingedLanes,o=n&268435455;if(o!==0){var u=o&~l;u!==0?r=Cn(u):(i&=o,i!==0&&(r=Cn(i)))}else o=n&~l,o!==0?r=Cn(o):i!==0&&(r=Cn(i));if(r===0)return 0;if(t!==0&&t!==r&&!(t&l)&&(l=r&-r,i=t&-t,l>=i||l===16&&(i&4194240)!==0))return t;if(r&4&&(r|=n&16),t=e.entangledLanes,t!==0)for(e=e.entanglements,t&=r;0n;n++)t.push(e);return t}function er(e,t,n){e.pendingLanes|=t,t!==536870912&&(e.suspendedLanes=0,e.pingedLanes=0),e=e.eventTimes,t=31-je(t),e[t]=n}function td(e,t){var n=e.pendingLanes&~t;e.pendingLanes=t,e.suspendedLanes=0,e.pingedLanes=0,e.expiredLanes&=t,e.mutableReadLanes&=t,e.entangledLanes&=t,t=e.entanglements;var r=e.eventTimes;for(e=e.expirationTimes;0=On),du=String.fromCharCode(32),pu=!1;function Ys(e,t){switch(e){case"keyup":return Od.indexOf(t.keyCode)!==-1;case"keydown":return t.keyCode!==229;case"keypress":case"mousedown":case"focusout":return!0;default:return!1}}function Gs(e){return e=e.detail,typeof e=="object"&&"data"in e?e.data:null}var $t=!1;function Ld(e,t){switch(e){case"compositionend":return Gs(t);case"keypress":return t.which!==32?null:(pu=!0,du);case"textInput":return e=t.data,e===du&&pu?null:e;default:return null}}function Rd(e,t){if($t)return e==="compositionend"||!mo&&Ys(e,t)?(e=Ks(),Nr=co=it=null,$t=!1,e):null;switch(e){case"paste":return null;case"keypress":if(!(t.ctrlKey||t.altKey||t.metaKey)||t.ctrlKey&&t.altKey){if(t.char&&1=t)return{node:n,offset:t-e};e=r}e:{for(;n;){if(n.nextSibling){n=n.nextSibling;break e}n=n.parentNode}n=void 0}n=vu(n)}}function bs(e,t){return e&&t?e===t?!0:e&&e.nodeType===3?!1:t&&t.nodeType===3?bs(e,t.parentNode):"contains"in e?e.contains(t):e.compareDocumentPosition?!!(e.compareDocumentPosition(t)&16):!1:!1}function ea(){for(var e=window,t=Dr();t instanceof e.HTMLIFrameElement;){try{var n=typeof t.contentWindow.location.href=="string"}catch{n=!1}if(n)e=t.contentWindow;else break;t=Dr(e.document)}return t}function ho(e){var t=e&&e.nodeName&&e.nodeName.toLowerCase();return t&&(t==="input"&&(e.type==="text"||e.type==="search"||e.type==="tel"||e.type==="url"||e.type==="password")||t==="textarea"||e.contentEditable==="true")}function Vd(e){var t=ea(),n=e.focusedElem,r=e.selectionRange;if(t!==n&&n&&n.ownerDocument&&bs(n.ownerDocument.documentElement,n)){if(r!==null&&ho(n)){if(t=r.start,e=r.end,e===void 0&&(e=t),"selectionStart"in n)n.selectionStart=t,n.selectionEnd=Math.min(e,n.value.length);else if(e=(t=n.ownerDocument||document)&&t.defaultView||window,e.getSelection){e=e.getSelection();var 
l=n.textContent.length,i=Math.min(r.start,l);r=r.end===void 0?i:Math.min(r.end,l),!e.extend&&i>r&&(l=r,r=i,i=l),l=gu(n,i);var o=gu(n,r);l&&o&&(e.rangeCount!==1||e.anchorNode!==l.node||e.anchorOffset!==l.offset||e.focusNode!==o.node||e.focusOffset!==o.offset)&&(t=t.createRange(),t.setStart(l.node,l.offset),e.removeAllRanges(),i>r?(e.addRange(t),e.extend(o.node,o.offset)):(t.setEnd(o.node,o.offset),e.addRange(t)))}}for(t=[],e=n;e=e.parentNode;)e.nodeType===1&&t.push({element:e,left:e.scrollLeft,top:e.scrollTop});for(typeof n.focus=="function"&&n.focus(),n=0;n=document.documentMode,Vt=null,ki=null,Ln=null,Si=!1;function wu(e,t,n){var r=n.window===n?n.document:n.nodeType===9?n:n.ownerDocument;Si||Vt==null||Vt!==Dr(r)||(r=Vt,"selectionStart"in r&&ho(r)?r={start:r.selectionStart,end:r.selectionEnd}:(r=(r.ownerDocument&&r.ownerDocument.defaultView||window).getSelection(),r={anchorNode:r.anchorNode,anchorOffset:r.anchorOffset,focusNode:r.focusNode,focusOffset:r.focusOffset}),Ln&&Hn(Ln,r)||(Ln=r,r=Wr(ki,"onSelect"),0Wt||(e.current=Pi[Wt],Pi[Wt]=null,Wt--)}function A(e,t){Wt++,Pi[Wt]=e.current,e.current=t}var yt={},se=gt(yt),he=gt(!1),Tt=yt;function rn(e,t){var n=e.type.contextTypes;if(!n)return yt;var r=e.stateNode;if(r&&r.__reactInternalMemoizedUnmaskedChildContext===t)return r.__reactInternalMemoizedMaskedChildContext;var l={},i;for(i in n)l[i]=t[i];return r&&(e=e.stateNode,e.__reactInternalMemoizedUnmaskedChildContext=t,e.__reactInternalMemoizedMaskedChildContext=l),l}function ye(e){return e=e.childContextTypes,e!=null}function Kr(){j(he),j(se)}function Nu(e,t,n){if(se.current!==yt)throw Error(v(168));A(se,t),A(he,n)}function aa(e,t,n){var r=e.stateNode;if(t=t.childContextTypes,typeof r.getChildContext!="function")return n;r=r.getChildContext();for(var l in r)if(!(l in t))throw Error(v(108,jf(e)||"Unknown",l));return V({},n,r)}function Xr(e){return e=(e=e.stateNode)&&e.__reactInternalMemoizedMergedChildContext||yt,Tt=se.current,A(se,e),A(he,he.current),!0}function Pu(e,t,n){var r=e.stateNode;if(!r)throw Error(v(169));n?(e=aa(e,t,Tt),r.__reactInternalMemoizedMergedChildContext=e,j(he),j(se),A(se,e)):j(he),A(he,n)}var Qe=null,pl=!1,Hl=!1;function ca(e){Qe===null?Qe=[e]:Qe.push(e)}function bd(e){pl=!0,ca(e)}function wt(){if(!Hl&&Qe!==null){Hl=!0;var e=0,t=I;try{var n=Qe;for(I=1;e>=o,l-=o,Ke=1<<32-je(t)+l|n<N?(H=C,C=null):H=C.sibling;var L=p(f,C,d[N],y);if(L===null){C===null&&(C=H);break}e&&C&&L.alternate===null&&t(f,C),a=i(L,a,N),_===null?E=L:_.sibling=L,_=L,C=H}if(N===d.length)return n(f,C),F&&xt(f,N),E;if(C===null){for(;NN?(H=C,C=null):H=C.sibling;var Le=p(f,C,L.value,y);if(Le===null){C===null&&(C=H);break}e&&C&&Le.alternate===null&&t(f,C),a=i(Le,a,N),_===null?E=Le:_.sibling=Le,_=Le,C=H}if(L.done)return n(f,C),F&&xt(f,N),E;if(C===null){for(;!L.done;N++,L=d.next())L=h(f,L.value,y),L!==null&&(a=i(L,a,N),_===null?E=L:_.sibling=L,_=L);return F&&xt(f,N),E}for(C=r(f,C);!L.done;N++,L=d.next())L=g(C,f,N,L.value,y),L!==null&&(e&&L.alternate!==null&&C.delete(L.key===null?N:L.key),a=i(L,a,N),_===null?E=L:_.sibling=L,_=L);return e&&C.forEach(function(pn){return t(f,pn)}),F&&xt(f,N),E}function D(f,a,d,y){if(typeof d=="object"&&d!==null&&d.type===Ut&&d.key===null&&(d=d.props.children),typeof d=="object"&&d!==null){switch(d.$$typeof){case ur:e:{for(var E=d.key,_=a;_!==null;){if(_.key===E){if(E=d.type,E===Ut){if(_.tag===7){n(f,_.sibling),a=l(_,d.props.children),a.return=f,f=a;break e}}else if(_.elementType===E||typeof 
E=="object"&&E!==null&&E.$$typeof===tt&&Au(E)===_.type){n(f,_.sibling),a=l(_,d.props),a.ref=kn(f,_,d),a.return=f,f=a;break e}n(f,_);break}else t(f,_);_=_.sibling}d.type===Ut?(a=Ot(d.props.children,f.mode,y,d.key),a.return=f,f=a):(y=Ar(d.type,d.key,d.props,null,f.mode,y),y.ref=kn(f,a,d),y.return=f,f=y)}return o(f);case Ft:e:{for(_=d.key;a!==null;){if(a.key===_)if(a.tag===4&&a.stateNode.containerInfo===d.containerInfo&&a.stateNode.implementation===d.implementation){n(f,a.sibling),a=l(a,d.children||[]),a.return=f,f=a;break e}else{n(f,a);break}else t(f,a);a=a.sibling}a=Jl(d,f.mode,y),a.return=f,f=a}return o(f);case tt:return _=d._init,D(f,a,_(d._payload),y)}if(_n(d))return w(f,a,d,y);if(hn(d))return k(f,a,d,y);gr(f,d)}return typeof d=="string"&&d!==""||typeof d=="number"?(d=""+d,a!==null&&a.tag===6?(n(f,a.sibling),a=l(a,d),a.return=f,f=a):(n(f,a),a=Zl(d,f.mode,y),a.return=f,f=a),o(f)):n(f,a)}return D}var on=ga(!0),wa=ga(!1),rr={},He=gt(rr),Xn=gt(rr),Yn=gt(rr);function Pt(e){if(e===rr)throw Error(v(174));return e}function _o(e,t){switch(A(Yn,t),A(Xn,e),A(He,rr),e=t.nodeType,e){case 9:case 11:t=(t=t.documentElement)?t.namespaceURI:ci(null,"");break;default:e=e===8?t.parentNode:t,t=e.namespaceURI||null,e=e.tagName,t=ci(t,e)}j(He),A(He,t)}function un(){j(He),j(Xn),j(Yn)}function ka(e){Pt(Yn.current);var t=Pt(He.current),n=ci(t,e.type);t!==n&&(A(Xn,e),A(He,n))}function Co(e){Xn.current===e&&(j(He),j(Xn))}var U=gt(0);function br(e){for(var t=e;t!==null;){if(t.tag===13){var n=t.memoizedState;if(n!==null&&(n=n.dehydrated,n===null||n.data==="$?"||n.data==="$!"))return t}else if(t.tag===19&&t.memoizedProps.revealOrder!==void 0){if(t.flags&128)return t}else if(t.child!==null){t.child.return=t,t=t.child;continue}if(t===e)break;for(;t.sibling===null;){if(t.return===null||t.return===e)return null;t=t.return}t.sibling.return=t.return,t=t.sibling}return null}var Wl=[];function No(){for(var e=0;en?n:4,e(!0);var r=Ql.transition;Ql.transition={};try{e(!1),t()}finally{I=n,Ql.transition=r}}function ja(){return Te().memoizedState}function rp(e,t,n){var r=pt(e);if(n={lane:r,action:n,hasEagerState:!1,eagerState:null,next:null},Da(e))Fa(t,n);else if(n=ma(e,t,n,r),n!==null){var l=ce();De(n,e,r,l),Ua(n,t,r)}}function lp(e,t,n){var r=pt(e),l={lane:r,action:n,hasEagerState:!1,eagerState:null,next:null};if(Da(e))Fa(t,l);else{var i=e.alternate;if(e.lanes===0&&(i===null||i.lanes===0)&&(i=t.lastRenderedReducer,i!==null))try{var o=t.lastRenderedState,u=i(o,n);if(l.hasEagerState=!0,l.eagerState=u,Fe(u,o)){var s=t.interleaved;s===null?(l.next=l,Eo(t)):(l.next=s.next,s.next=l),t.interleaved=l;return}}catch{}finally{}n=ma(e,t,l,r),n!==null&&(l=ce(),De(n,e,r,l),Ua(n,t,r))}}function Da(e){var t=e.alternate;return e===$||t!==null&&t===$}function Fa(e,t){Rn=el=!0;var n=e.pending;n===null?t.next=t:(t.next=n.next,n.next=t),e.pending=t}function Ua(e,t,n){if(n&4194240){var r=t.lanes;r&=e.pendingLanes,n|=r,t.lanes=n,uo(e,n)}}var tl={readContext:Oe,useCallback:ie,useContext:ie,useEffect:ie,useImperativeHandle:ie,useInsertionEffect:ie,useLayoutEffect:ie,useMemo:ie,useReducer:ie,useRef:ie,useState:ie,useDebugValue:ie,useDeferredValue:ie,useTransition:ie,useMutableSource:ie,useSyncExternalStore:ie,useId:ie,unstable_isNewReconciler:!1},ip={readContext:Oe,useCallback:function(e,t){return $e().memoizedState=[e,t===void 0?null:t],e},useContext:Oe,useEffect:ju,useImperativeHandle:function(e,t,n){return n=n!=null?n.concat([e]):null,Tr(4194308,4,La.bind(null,t,e),n)},useLayoutEffect:function(e,t){return 
Tr(4194308,4,e,t)},useInsertionEffect:function(e,t){return Tr(4,2,e,t)},useMemo:function(e,t){var n=$e();return t=t===void 0?null:t,e=e(),n.memoizedState=[e,t],e},useReducer:function(e,t,n){var r=$e();return t=n!==void 0?n(t):t,r.memoizedState=r.baseState=t,e={pending:null,interleaved:null,lanes:0,dispatch:null,lastRenderedReducer:e,lastRenderedState:t},r.queue=e,e=e.dispatch=rp.bind(null,$,e),[r.memoizedState,e]},useRef:function(e){var t=$e();return e={current:e},t.memoizedState=e},useState:Mu,useDebugValue:Lo,useDeferredValue:function(e){return $e().memoizedState=e},useTransition:function(){var e=Mu(!1),t=e[0];return e=np.bind(null,e[1]),$e().memoizedState=e,[t,e]},useMutableSource:function(){},useSyncExternalStore:function(e,t,n){var r=$,l=$e();if(F){if(n===void 0)throw Error(v(407));n=n()}else{if(n=t(),ee===null)throw Error(v(349));Rt&30||xa(r,t,n)}l.memoizedState=n;var i={value:n,getSnapshot:t};return l.queue=i,ju(Ca.bind(null,r,i,e),[e]),r.flags|=2048,Jn(9,_a.bind(null,r,i,n,t),void 0,null),n},useId:function(){var e=$e(),t=ee.identifierPrefix;if(F){var n=Xe,r=Ke;n=(r&~(1<<32-je(r)-1)).toString(32)+n,t=":"+t+"R"+n,n=Gn++,0<\/script>",e=e.removeChild(e.firstChild)):typeof r.is=="string"?e=o.createElement(n,{is:r.is}):(e=o.createElement(n),n==="select"&&(o=e,r.multiple?o.multiple=!0:r.size&&(o.size=r.size))):e=o.createElementNS(e,n),e[Ve]=t,e[Kn]=r,Ya(e,t,!1,!1),t.stateNode=e;e:{switch(o=di(n,r),n){case"dialog":M("cancel",e),M("close",e),l=r;break;case"iframe":case"object":case"embed":M("load",e),l=r;break;case"video":case"audio":for(l=0;lan&&(t.flags|=128,r=!0,Sn(i,!1),t.lanes=4194304)}else{if(!r)if(e=br(o),e!==null){if(t.flags|=128,r=!0,n=e.updateQueue,n!==null&&(t.updateQueue=n,t.flags|=4),Sn(i,!0),i.tail===null&&i.tailMode==="hidden"&&!o.alternate&&!F)return oe(t),null}else 2*Q()-i.renderingStartTime>an&&n!==1073741824&&(t.flags|=128,r=!0,Sn(i,!1),t.lanes=4194304);i.isBackwards?(o.sibling=t.child,t.child=o):(n=i.last,n!==null?n.sibling=o:t.child=o,i.last=o)}return i.tail!==null?(t=i.tail,i.rendering=t,i.tail=t.sibling,i.renderingStartTime=Q(),t.sibling=null,n=U.current,A(U,r?n&1|2:n&1),t):(oe(t),null);case 22:case 23:return Do(),r=t.memoizedState!==null,e!==null&&e.memoizedState!==null!==r&&(t.flags|=8192),r&&t.mode&1?ge&1073741824&&(oe(t),t.subtreeFlags&6&&(t.flags|=8192)):oe(t),null;case 24:return null;case 25:return null}throw Error(v(156,t.tag))}function pp(e,t){switch(vo(t),t.tag){case 1:return ye(t.type)&&Kr(),e=t.flags,e&65536?(t.flags=e&-65537|128,t):null;case 3:return un(),j(he),j(se),No(),e=t.flags,e&65536&&!(e&128)?(t.flags=e&-65537|128,t):null;case 5:return Co(t),null;case 13:if(j(U),e=t.memoizedState,e!==null&&e.dehydrated!==null){if(t.alternate===null)throw Error(v(340));ln()}return e=t.flags,e&65536?(t.flags=e&-65537|128,t):null;case 19:return j(U),null;case 4:return un(),null;case 10:return So(t.type._context),null;case 22:case 23:return Do(),null;case 24:return null;default:return null}}var kr=!1,ue=!1,mp=typeof WeakSet=="function"?WeakSet:Set,S=null;function Yt(e,t){var n=e.ref;if(n!==null)if(typeof n=="function")try{n(null)}catch(r){B(e,t,r)}else n.current=null}function Ui(e,t,n){try{n()}catch(r){B(e,t,r)}}var Qu=!1;function hp(e,t){if(Ei=Br,e=ea(),ho(e)){if("selectionStart"in e)var n={start:e.selectionStart,end:e.selectionEnd};else e:{n=(n=e.ownerDocument)&&n.defaultView||window;var r=n.getSelection&&n.getSelection();if(r&&r.rangeCount!==0){n=r.anchorNode;var l=r.anchorOffset,i=r.focusNode;r=r.focusOffset;try{n.nodeType,i.nodeType}catch{n=null;break e}var 
o=0,u=-1,s=-1,c=0,m=0,h=e,p=null;t:for(;;){for(var g;h!==n||l!==0&&h.nodeType!==3||(u=o+l),h!==i||r!==0&&h.nodeType!==3||(s=o+r),h.nodeType===3&&(o+=h.nodeValue.length),(g=h.firstChild)!==null;)p=h,h=g;for(;;){if(h===e)break t;if(p===n&&++c===l&&(u=o),p===i&&++m===r&&(s=o),(g=h.nextSibling)!==null)break;h=p,p=h.parentNode}h=g}n=u===-1||s===-1?null:{start:u,end:s}}else n=null}n=n||{start:0,end:0}}else n=null;for(xi={focusedElem:e,selectionRange:n},Br=!1,S=t;S!==null;)if(t=S,e=t.child,(t.subtreeFlags&1028)!==0&&e!==null)e.return=t,S=e;else for(;S!==null;){t=S;try{var w=t.alternate;if(t.flags&1024)switch(t.tag){case 0:case 11:case 15:break;case 1:if(w!==null){var k=w.memoizedProps,D=w.memoizedState,f=t.stateNode,a=f.getSnapshotBeforeUpdate(t.elementType===t.type?k:Ie(t.type,k),D);f.__reactInternalSnapshotBeforeUpdate=a}break;case 3:var d=t.stateNode.containerInfo;d.nodeType===1?d.textContent="":d.nodeType===9&&d.documentElement&&d.removeChild(d.documentElement);break;case 5:case 6:case 4:case 17:break;default:throw Error(v(163))}}catch(y){B(t,t.return,y)}if(e=t.sibling,e!==null){e.return=t.return,S=e;break}S=t.return}return w=Qu,Qu=!1,w}function In(e,t,n){var r=t.updateQueue;if(r=r!==null?r.lastEffect:null,r!==null){var l=r=r.next;do{if((l.tag&e)===e){var i=l.destroy;l.destroy=void 0,i!==void 0&&Ui(t,n,i)}l=l.next}while(l!==r)}}function yl(e,t){if(t=t.updateQueue,t=t!==null?t.lastEffect:null,t!==null){var n=t=t.next;do{if((n.tag&e)===e){var r=n.create;n.destroy=r()}n=n.next}while(n!==t)}}function $i(e){var t=e.ref;if(t!==null){var n=e.stateNode;switch(e.tag){case 5:e=n;break;default:e=n}typeof t=="function"?t(e):t.current=e}}function Ja(e){var t=e.alternate;t!==null&&(e.alternate=null,Ja(t)),e.child=null,e.deletions=null,e.sibling=null,e.tag===5&&(t=e.stateNode,t!==null&&(delete t[Ve],delete t[Kn],delete t[Ni],delete t[Jd],delete t[qd])),e.stateNode=null,e.return=null,e.dependencies=null,e.memoizedProps=null,e.memoizedState=null,e.pendingProps=null,e.stateNode=null,e.updateQueue=null}function qa(e){return e.tag===5||e.tag===3||e.tag===4}function Ku(e){e:for(;;){for(;e.sibling===null;){if(e.return===null||qa(e.return))return null;e=e.return}for(e.sibling.return=e.return,e=e.sibling;e.tag!==5&&e.tag!==6&&e.tag!==18;){if(e.flags&2||e.child===null||e.tag===4)continue e;e.child.return=e,e=e.child}if(!(e.flags&2))return e.stateNode}}function Vi(e,t,n){var r=e.tag;if(r===5||r===6)e=e.stateNode,t?n.nodeType===8?n.parentNode.insertBefore(e,t):n.insertBefore(e,t):(n.nodeType===8?(t=n.parentNode,t.insertBefore(e,n)):(t=n,t.appendChild(e)),n=n._reactRootContainer,n!=null||t.onclick!==null||(t.onclick=Qr));else if(r!==4&&(e=e.child,e!==null))for(Vi(e,t,n),e=e.sibling;e!==null;)Vi(e,t,n),e=e.sibling}function Bi(e,t,n){var r=e.tag;if(r===5||r===6)e=e.stateNode,t?n.insertBefore(e,t):n.appendChild(e);else if(r!==4&&(e=e.child,e!==null))for(Bi(e,t,n),e=e.sibling;e!==null;)Bi(e,t,n),e=e.sibling}var te=null,Ae=!1;function et(e,t,n){for(n=n.child;n!==null;)ba(e,t,n),n=n.sibling}function ba(e,t,n){if(Be&&typeof Be.onCommitFiberUnmount=="function")try{Be.onCommitFiberUnmount(sl,n)}catch{}switch(n.tag){case 5:ue||Yt(n,t);case 6:var r=te,l=Ae;te=null,et(e,t,n),te=r,Ae=l,te!==null&&(Ae?(e=te,n=n.stateNode,e.nodeType===8?e.parentNode.removeChild(n):e.removeChild(n)):te.removeChild(n.stateNode));break;case 18:te!==null&&(Ae?(e=te,n=n.stateNode,e.nodeType===8?Bl(e.parentNode,n):e.nodeType===1&&Bl(e,n),Vn(e)):Bl(te,n.stateNode));break;case 4:r=te,l=Ae,te=n.stateNode.containerInfo,Ae=!0,et(e,t,n),te=r,Ae=l;break;case 
0:case 11:case 14:case 15:if(!ue&&(r=n.updateQueue,r!==null&&(r=r.lastEffect,r!==null))){l=r=r.next;do{var i=l,o=i.destroy;i=i.tag,o!==void 0&&(i&2||i&4)&&Ui(n,t,o),l=l.next}while(l!==r)}et(e,t,n);break;case 1:if(!ue&&(Yt(n,t),r=n.stateNode,typeof r.componentWillUnmount=="function"))try{r.props=n.memoizedProps,r.state=n.memoizedState,r.componentWillUnmount()}catch(u){B(n,t,u)}et(e,t,n);break;case 21:et(e,t,n);break;case 22:n.mode&1?(ue=(r=ue)||n.memoizedState!==null,et(e,t,n),ue=r):et(e,t,n);break;default:et(e,t,n)}}function Xu(e){var t=e.updateQueue;if(t!==null){e.updateQueue=null;var n=e.stateNode;n===null&&(n=e.stateNode=new mp),t.forEach(function(r){var l=_p.bind(null,e,r);n.has(r)||(n.add(r),r.then(l,l))})}}function Re(e,t){var n=t.deletions;if(n!==null)for(var r=0;rl&&(l=o),r&=~i}if(r=l,r=Q()-r,r=(120>r?120:480>r?480:1080>r?1080:1920>r?1920:3e3>r?3e3:4320>r?4320:1960*vp(r/1960))-r,10e?16:e,ot===null)var r=!1;else{if(e=ot,ot=null,ll=0,R&6)throw Error(v(331));var l=R;for(R|=4,S=e.current;S!==null;){var i=S,o=i.child;if(S.flags&16){var u=i.deletions;if(u!==null){for(var s=0;sQ()-Mo?zt(e,0):Ao|=n),ve(e,t)}function uc(e,t){t===0&&(e.mode&1?(t=fr,fr<<=1,!(fr&130023424)&&(fr=4194304)):t=1);var n=ce();e=Je(e,t),e!==null&&(er(e,t,n),ve(e,n))}function xp(e){var t=e.memoizedState,n=0;t!==null&&(n=t.retryLane),uc(e,n)}function _p(e,t){var n=0;switch(e.tag){case 13:var r=e.stateNode,l=e.memoizedState;l!==null&&(n=l.retryLane);break;case 19:r=e.stateNode;break;default:throw Error(v(314))}r!==null&&r.delete(t),uc(e,n)}var sc;sc=function(e,t,n){if(e!==null)if(e.memoizedProps!==t.pendingProps||he.current)me=!0;else{if(!(e.lanes&n)&&!(t.flags&128))return me=!1,fp(e,t,n);me=!!(e.flags&131072)}else me=!1,F&&t.flags&1048576&&fa(t,Gr,t.index);switch(t.lanes=0,t.tag){case 2:var r=t.type;Lr(e,t),e=t.pendingProps;var l=rn(t,se.current);en(t,n),l=zo(null,t,r,e,l,n);var i=Oo();return t.flags|=1,typeof l=="object"&&l!==null&&typeof l.render=="function"&&l.$$typeof===void 0?(t.tag=1,t.memoizedState=null,t.updateQueue=null,ye(r)?(i=!0,Xr(t)):i=!1,t.memoizedState=l.state!==null&&l.state!==void 0?l.state:null,xo(t),l.updater=ml,t.stateNode=l,l._reactInternals=t,Ri(t,r,e,n),t=Mi(null,t,r,!0,i,n)):(t.tag=0,F&&i&&yo(t),ae(null,t,l,n),t=t.child),t;case 16:r=t.elementType;e:{switch(Lr(e,t),e=t.pendingProps,l=r._init,r=l(r._payload),t.type=r,l=t.tag=Np(r),e=Ie(r,e),l){case 0:t=Ai(null,t,r,e,n);break e;case 1:t=Bu(null,t,r,e,n);break e;case 11:t=$u(null,t,r,e,n);break e;case 14:t=Vu(null,t,r,Ie(r.type,e),n);break e}throw Error(v(306,r,""))}return t;case 0:return r=t.type,l=t.pendingProps,l=t.elementType===r?l:Ie(r,l),Ai(e,t,r,l,n);case 1:return r=t.type,l=t.pendingProps,l=t.elementType===r?l:Ie(r,l),Bu(e,t,r,l,n);case 3:e:{if(Qa(t),e===null)throw Error(v(387));r=t.pendingProps,i=t.memoizedState,l=i.element,ha(e,t),qr(t,r,null,n);var o=t.memoizedState;if(r=o.element,i.isDehydrated)if(i={element:r,isDehydrated:!1,cache:o.cache,pendingSuspenseBoundaries:o.pendingSuspenseBoundaries,transitions:o.transitions},t.updateQueue.baseState=i,t.memoizedState=i,t.flags&256){l=sn(Error(v(423)),t),t=Hu(e,t,r,n,l);break e}else if(r!==l){l=sn(Error(v(424)),t),t=Hu(e,t,r,n,l);break e}else for(we=ct(t.stateNode.containerInfo.firstChild),ke=t,F=!0,Me=null,n=wa(t,null,r,n),t.child=n;n;)n.flags=n.flags&-3|4096,n=n.sibling;else{if(ln(),r===l){t=qe(e,t,n);break e}ae(e,t,r,n)}t=t.child}return t;case 5:return 
ka(t),e===null&&Oi(t),r=t.type,l=t.pendingProps,i=e!==null?e.memoizedProps:null,o=l.children,_i(r,l)?o=null:i!==null&&_i(r,i)&&(t.flags|=32),Wa(e,t),ae(e,t,o,n),t.child;case 6:return e===null&&Oi(t),null;case 13:return Ka(e,t,n);case 4:return _o(t,t.stateNode.containerInfo),r=t.pendingProps,e===null?t.child=on(t,null,r,n):ae(e,t,r,n),t.child;case 11:return r=t.type,l=t.pendingProps,l=t.elementType===r?l:Ie(r,l),$u(e,t,r,l,n);case 7:return ae(e,t,t.pendingProps,n),t.child;case 8:return ae(e,t,t.pendingProps.children,n),t.child;case 12:return ae(e,t,t.pendingProps.children,n),t.child;case 10:e:{if(r=t.type._context,l=t.pendingProps,i=t.memoizedProps,o=l.value,A(Zr,r._currentValue),r._currentValue=o,i!==null)if(Fe(i.value,o)){if(i.children===l.children&&!he.current){t=qe(e,t,n);break e}}else for(i=t.child,i!==null&&(i.return=t);i!==null;){var u=i.dependencies;if(u!==null){o=i.child;for(var s=u.firstContext;s!==null;){if(s.context===r){if(i.tag===1){s=Ye(-1,n&-n),s.tag=2;var c=i.updateQueue;if(c!==null){c=c.shared;var m=c.pending;m===null?s.next=s:(s.next=m.next,m.next=s),c.pending=s}}i.lanes|=n,s=i.alternate,s!==null&&(s.lanes|=n),Ti(i.return,n,t),u.lanes|=n;break}s=s.next}}else if(i.tag===10)o=i.type===t.type?null:i.child;else if(i.tag===18){if(o=i.return,o===null)throw Error(v(341));o.lanes|=n,u=o.alternate,u!==null&&(u.lanes|=n),Ti(o,n,t),o=i.sibling}else o=i.child;if(o!==null)o.return=i;else for(o=i;o!==null;){if(o===t){o=null;break}if(i=o.sibling,i!==null){i.return=o.return,o=i;break}o=o.return}i=o}ae(e,t,l.children,n),t=t.child}return t;case 9:return l=t.type,r=t.pendingProps.children,en(t,n),l=Oe(l),r=r(l),t.flags|=1,ae(e,t,r,n),t.child;case 14:return r=t.type,l=Ie(r,t.pendingProps),l=Ie(r.type,l),Vu(e,t,r,l,n);case 15:return Ba(e,t,t.type,t.pendingProps,n);case 17:return r=t.type,l=t.pendingProps,l=t.elementType===r?l:Ie(r,l),Lr(e,t),t.tag=1,ye(r)?(e=!0,Xr(t)):e=!1,en(t,n),va(t,r,l),Ri(t,r,l,n),Mi(null,t,r,!0,e,n);case 19:return Xa(e,t,n);case 22:return Ha(e,t,n)}throw Error(v(156,t.tag))};function ac(e,t){return Ms(e,t)}function Cp(e,t,n,r){this.tag=e,this.key=n,this.sibling=this.child=this.return=this.stateNode=this.type=this.elementType=null,this.index=0,this.ref=null,this.pendingProps=t,this.dependencies=this.memoizedState=this.updateQueue=this.memoizedProps=null,this.mode=r,this.subtreeFlags=this.flags=0,this.deletions=null,this.childLanes=this.lanes=0,this.alternate=null}function Ne(e,t,n,r){return new Cp(e,t,n,r)}function Uo(e){return e=e.prototype,!(!e||!e.isReactComponent)}function Np(e){if(typeof e=="function")return Uo(e)?1:0;if(e!=null){if(e=e.$$typeof,e===ro)return 11;if(e===lo)return 14}return 2}function mt(e,t){var n=e.alternate;return n===null?(n=Ne(e.tag,t,e.key,e.mode),n.elementType=e.elementType,n.type=e.type,n.stateNode=e.stateNode,n.alternate=e,e.alternate=n):(n.pendingProps=t,n.type=e.type,n.flags=0,n.subtreeFlags=0,n.deletions=null),n.flags=e.flags&14680064,n.childLanes=e.childLanes,n.lanes=e.lanes,n.child=e.child,n.memoizedProps=e.memoizedProps,n.memoizedState=e.memoizedState,n.updateQueue=e.updateQueue,t=e.dependencies,n.dependencies=t===null?null:{lanes:t.lanes,firstContext:t.firstContext},n.sibling=e.sibling,n.index=e.index,n.ref=e.ref,n}function Ar(e,t,n,r,l,i){var o=2;if(r=e,typeof e=="function")Uo(e)&&(o=1);else if(typeof e=="string")o=5;else e:switch(e){case Ut:return Ot(n.children,l,i,t);case no:o=8,l|=8;break;case ni:return e=Ne(12,n,t,l|2),e.elementType=ni,e.lanes=i,e;case ri:return e=Ne(13,n,t,l),e.elementType=ri,e.lanes=i,e;case li:return 
e=Ne(19,n,t,l),e.elementType=li,e.lanes=i,e;case gs:return gl(n,l,i,t);default:if(typeof e=="object"&&e!==null)switch(e.$$typeof){case ys:o=10;break e;case vs:o=9;break e;case ro:o=11;break e;case lo:o=14;break e;case tt:o=16,r=null;break e}throw Error(v(130,e==null?e:typeof e,""))}return t=Ne(o,n,t,l),t.elementType=e,t.type=r,t.lanes=i,t}function Ot(e,t,n,r){return e=Ne(7,e,r,t),e.lanes=n,e}function gl(e,t,n,r){return e=Ne(22,e,r,t),e.elementType=gs,e.lanes=n,e.stateNode={isHidden:!1},e}function Zl(e,t,n){return e=Ne(6,e,null,t),e.lanes=n,e}function Jl(e,t,n){return t=Ne(4,e.children!==null?e.children:[],e.key,t),t.lanes=n,t.stateNode={containerInfo:e.containerInfo,pendingChildren:null,implementation:e.implementation},t}function Pp(e,t,n,r,l){this.tag=t,this.containerInfo=e,this.finishedWork=this.pingCache=this.current=this.pendingChildren=null,this.timeoutHandle=-1,this.callbackNode=this.pendingContext=this.context=null,this.callbackPriority=0,this.eventTimes=Ll(0),this.expirationTimes=Ll(-1),this.entangledLanes=this.finishedLanes=this.mutableReadLanes=this.expiredLanes=this.pingedLanes=this.suspendedLanes=this.pendingLanes=0,this.entanglements=Ll(0),this.identifierPrefix=r,this.onRecoverableError=l,this.mutableSourceEagerHydrationData=null}function $o(e,t,n,r,l,i,o,u,s){return e=new Pp(e,t,n,u,s),t===1?(t=1,i===!0&&(t|=8)):t=0,i=Ne(3,null,null,t),e.current=i,i.stateNode=e,i.memoizedState={element:r,isDehydrated:n,cache:null,transitions:null,pendingSuspenseBoundaries:null},xo(i),e}function zp(e,t,n){var r=3"u"||typeof __REACT_DEVTOOLS_GLOBAL_HOOK__.checkDCE!="function"))try{__REACT_DEVTOOLS_GLOBAL_HOOK__.checkDCE(t)}catch(n){console.error(n)}}t(),e.exports=Ee})(Of);var pc,ts=bl;pc=ts.createRoot,ts.hydrateRoot;const Y=new kf,Ip=["audio-classification","audio-to-audio","automatic-speech-recognition","conversational","depth-estimation","document-question-answering","feature-extraction","fill-mask","graph-ml","image-classification","image-segmentation","image-to-image","image-to-text","multiple-choice","object-detection","other","question-answering","reinforcement-learning","robotics","sentence-similarity","summarization","table-question-answering","table-to-text","tabular-classification","tabular-regression","tabular-to-text","text-classification","text-generation","text-retrieval","text-to-image","text-to-speech","text2text-generation","time-series-forecasting","token-classification","translation","unconditional-image-generation","video-classification","visual-question-answering","voice-activity-detection","zero-shot-classification","zero-shot-image-classification"].filter(e=>Object.getOwnPropertyNames(Y).includes(zf(e))),ql={},Ap=async e=>{if(ql[e])return ql[e];const t=[];for await(const n of Kc({search:{task:e}}))t.push(n);return t.sort((n,r)=>n.downloads>r.downloads?-1:n.downloadsr.likes?-1:n.likesr.name?-1:n.nameze("div",{className:"w-full",children:[P("p",{className:"text-xl",children:"Task"}),ze("select",{className:"bg-yellow-200 cursor-pointer py-6 text-center w-full",onChange:t=>e.setTask(t.target.value),placeholder:"Select a task",value:e.task,children:[P("option",{children:"Select a task"}),Ip.map(t=>P("option",{value:t,children:t},t))]})]}),jp=e=>{const[t,n]=ne.useState(!1),[r,l]=ne.useState([]);return ne.useEffect(()=>{e.task&&(n(!0),Ap(e.task).then(i=>l(i)).finally(()=>n(!1)))},[e.task]),r.length>0?ze("div",{className:"w-full",children:[P("p",{className:"text-xl",children:"Model"}),ze("select",{className:"bg-yellow-200 cursor-pointer py-6 text-center 
w-full",onChange:i=>e.setModel(i.target.value),placeholder:"Select a model",value:e.model,children:[P("option",{children:"Select a model"}),r.map(i=>P("option",{value:i.name,children:i.name},i.name))]})]}):P("p",{className:"text-center w-full",children:e.task?t?"Loading models for this task":"No models available for this task":"Select a task to view available models"})},Dp=e=>ze("div",{className:"w-full",children:[P("p",{className:"text-xl",children:"Inputs"}),e.inputs?P("audio",{className:"w-full",controls:!0,src:URL.createObjectURL(e.inputs)}):ze("label",{className:"bg-yellow-200 block cursor-pointer py-6 text-center w-full",children:["No file chosen",P("input",{accept:"audio/*",className:"hidden",onChange:t=>{t.target.files&&t.target.files[0]&&e.setInputs(t.target.files[0])},type:"file"})]})]}),Fp=e=>ze("div",{className:"w-full",children:[P("p",{className:"text-xl",children:"Inputs"}),e.inputs?P("img",{className:"w-full",src:URL.createObjectURL(e.inputs)}):ze("label",{className:"bg-yellow-200 block cursor-pointer py-6 text-center w-full",children:["No file chosen",P("input",{accept:"image/*",className:"hidden",onChange:t=>{t.target.files&&t.target.files[0]&&e.setInputs(t.target.files[0])},type:"file"})]})]}),Up=e=>ze("div",{className:"w-full",children:[P("p",{className:"text-xl",children:"Inputs"}),P("input",{className:"bg-yellow-200 py-6 text-center w-full",onChange:t=>{t.target.value?e.setInputs(t.target.value):e.setInputs("")},type:"text",value:e.inputs??""})]}),$p=e=>e.model&&e.task?["audio-classification","automatic-speech-recognition"].includes(e.task)?P(Dp,{inputs:e.inputs,model:e.model,setInputs:e.setInputs,task:e.task}):["image-classification","image-segmentation","image-to-text","object-detection"].includes(e.task)?P(Fp,{inputs:e.inputs,model:e.model,setInputs:e.setInputs,task:e.task}):["conversational","feature-extraction","fill-mask","question-answering","summarization","table-question-answering","text-classification","text-generation","text-to-image","token-classification","translation","zero-shot-classification"].includes(e.task)?P(Up,{inputs:e.inputs,model:e.model,setInputs:e.setInputs,task:e.task}):P("div",{className:"w-full",children:P("p",{className:"text-center",children:"Inference for this task is not yet supported."})}):P(ne.Fragment,{}),Vp=e=>{if(e.inputs&&e.model&&e.task){const t=()=>{e.setInputs(void 0),e.setOutput(void 0)};return P("button",{className:`border-4 border-yellow-200 py-6 text-center w-full ${e.loading?"cursor-not-allowed opacity-50":""}`,disabled:e.loading,onClick:t,children:"Clear"})}return P(ne.Fragment,{})},Bp=e=>{if(e.inputs&&e.model&&e.task){const t=async()=>{if(e.inputs&&e.model&&e.task){e.setLoading(!0);try{switch(e.task){case"audio-classification":{const n=await Y.audioClassification({data:e.inputs,model:e.model});e.setOutput(n);break}case"automatic-speech-recognition":{const n=await Y.automaticSpeechRecognition({data:e.inputs,model:e.model});e.setOutput(n);break}case"conversational":{const n=await Y.conversational({inputs:{text:e.inputs},model:e.model});e.setOutput(n);break}case"feature-extraction":{const n=await Y.featureExtraction({inputs:e.inputs,model:e.model});e.setOutput(n);break}case"fill-mask":{const n=await Y.fillMask({inputs:e.inputs,model:e.model});e.setOutput(n);break}case"image-classification":{const n=await Y.imageClassification({data:e.inputs,model:e.model});e.setOutput(n);break}case"image-segmentation":{const n=await Y.imageSegmentation({data:e.inputs,model:e.model});e.setOutput(n);break}case"image-to-text":{const n=await 
Y.imageToText({data:e.inputs,model:e.model});e.setOutput(n);break}case"object-detection":{const n=await Y.objectDetection({data:e.inputs,model:e.model});e.setOutput(n);break}case"question-answering":{const n=await Y.questionAnswering({inputs:{context:e.inputs,question:e.inputs},model:e.model});e.setOutput(n);break}case"summarization":{const n=await Y.summarization({inputs:e.inputs,model:e.model});e.setOutput(n);break}case"table-question-answering":{const n=await Y.tableQuestionAnswering({inputs:{query:e.inputs,table:{[e.inputs]:[e.inputs]}},model:e.model});e.setOutput(n);break}case"text-classification":{const n=await Y.textClassification({inputs:e.inputs,model:e.model});e.setOutput(n);break}case"text-generation":{const n=await Y.textGeneration({inputs:e.inputs,model:e.model});e.setOutput(n);break}case"text-to-image":{const n=await Y.textToImage({inputs:e.inputs,model:e.model});e.setOutput(n);break}case"token-classification":{const n=await Y.tokenClassification({inputs:e.inputs,model:e.model});e.setOutput(n);break}case"translation":{const n=await Y.translation({inputs:e.inputs,model:e.model});e.setOutput(n);break}case"zero-shot-classification":{const n=await Y.zeroShotClassification({inputs:e.inputs,model:e.model,parameters:{candidate_labels:[e.inputs]}});e.setOutput(n);break}}}catch(n){n instanceof Error&&e.setOutput(n.message)}e.setLoading(!1)}};return P("button",{className:`bg-yellow-200 py-6 text-center w-full ${e.loading?"cursor-not-allowed opacity-50":""}`,disabled:e.loading,onClick:t,children:e.loading?"Submitting":"Submit"})}return P(ne.Fragment,{})},Hp=e=>ze("div",{className:"w-full",children:[P("p",{className:"text-xl",children:"Output"}),P("img",{className:`w-full ${e.loading?"cursor-wait opacity-50":""}`,src:URL.createObjectURL(e.output)})]}),Wp=e=>{const t=(()=>{try{return JSON.stringify(e.output,void 0,2)}catch(n){if(n instanceof Error)return`Error during JSON.stringify: ${n.message}`}})();return ze("div",{className:"w-full",children:[P("p",{className:"text-xl",children:"Output"}),P("pre",{className:`bg-yellow-200 p-6 select-text w-full whitespace-pre-wrap ${e.loading?"cursor-wait opacity-50":""}`,children:t})]})},Qp=e=>e.output&&e.task?["text-to-image"].includes(e.task)?P(Hp,{loading:e.loading,output:e.output}):P(Wp,{loading:e.loading,output:e.output}):P(ne.Fragment,{}),Kp=()=>{const[e,t]=ne.useState(),[n,r]=ne.useState(),[l,i]=ne.useState(),[o,u]=ne.useState(!1),[s,c]=ne.useState();return P("div",{className:"bg-yellow-500 flex flex-col h-full items-center min-h-screen min-w-screen overflow-auto w-full",children:ze("div",{className:"flex flex-col items-center justify-center py-24 space-y-12 w-2/3 lg:w-1/3",children:[P("header",{className:"text-center text-6xl",children:"🤗"}),P(Mp,{setTask:t,task:e}),P(jp,{model:n,setModel:r,task:e}),P($p,{inputs:l,model:n,setInputs:i,task:e}),P(Vp,{inputs:l,loading:o,model:n,setInputs:i,setOutput:c,task:e}),P(Bp,{inputs:l,loading:o,model:n,setLoading:u,setOutput:c,task:e}),P(Qp,{loading:o,output:s,task:e})]})})},Xp=()=>{const e="root",t=document.getElementById(e);if(t){const n=pc(t),r=P(ne.StrictMode,{children:P(Kp,{})});n.render(r)}};Xp();
diff --git a/spaces/milyiyo/reimagine-it/captioning/utils/eval_multi.py b/spaces/milyiyo/reimagine-it/captioning/utils/eval_multi.py
deleted file mode 100644
index 83907410b806a50002aa32db289ca86cff72f45d..0000000000000000000000000000000000000000
--- a/spaces/milyiyo/reimagine-it/captioning/utils/eval_multi.py
+++ /dev/null
@@ -1,218 +0,0 @@
-from __future__ import absolute_import
-from __future__ import division
-from __future__ import print_function
-
-import numpy as np
-import json
-import os
-import sys
-from . import misc as utils
-from eval_utils import getCOCO
-
-from .div_utils import compute_div_n, compute_global_div_n
-
-try:
- sys.path.append("coco-caption")
- annFile = 'coco-caption/annotations/captions_val2014.json'
- from pycocotools.coco import COCO
- from pycocoevalcap.eval import COCOEvalCap
- from pycocoevalcap.eval_spice import COCOEvalCapSpice
- from pycocoevalcap.tokenizer.ptbtokenizer import PTBTokenizer
- from pycocoevalcap.bleu.bleu import Bleu
- sys.path.append("cider")
- from pyciderevalcap.cider.cider import Cider
-except ImportError:
- print('Warning: requirements for eval_multi not satisfied')
-
-
-def eval_allspice(dataset, preds_n, model_id, split):
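-    # AllSPICE: pool all n sampled captions of each image into one submission
-    # and report SPICE over the pooled set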
- coco = getCOCO(dataset)
- valids = coco.getImgIds()
-
- capsById = {}
- for d in preds_n:
- capsById[d['image_id']] = capsById.get(d['image_id'], []) + [d]
-
- # filter results to only those in MSCOCO validation set (will be about a third)
- preds_filt_n = [p for p in preds_n if p['image_id'] in valids]
- print('using %d/%d predictions_n' % (len(preds_filt_n), len(preds_n)))
- cache_path_n = os.path.join('eval_results/', model_id + '_' + split + '_n.json')
- json.dump(preds_filt_n, open(cache_path_n, 'w')) # serialize to temporary json file. Sigh, COCO API...
-
- # Eval AllSPICE
- cocoRes_n = coco.loadRes(cache_path_n)
- cocoEvalAllSPICE = COCOEvalCapSpice(coco, cocoRes_n)
- cocoEvalAllSPICE.params['image_id'] = cocoRes_n.getImgIds()
- cocoEvalAllSPICE.evaluate()
-
- out = {}
- for metric, score in cocoEvalAllSPICE.eval.items():
- out['All'+metric] = score
-
- imgToEvalAllSPICE = cocoEvalAllSPICE.imgToEval
- # collect SPICE_sub_score
- for k in list(imgToEvalAllSPICE.values())[0]['SPICE'].keys():
- if k != 'All':
- out['AllSPICE_'+k] = np.array([v['SPICE'][k]['f'] for v in imgToEvalAllSPICE.values()])
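-            # x == x is False only for NaN, so this drops NaN entries before averaging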
- out['AllSPICE_'+k] = (out['AllSPICE_'+k][out['AllSPICE_'+k]==out['AllSPICE_'+k]]).mean()
-    for p in preds_filt_n:
-        image_id = p['image_id']
-        imgToEvalAllSPICE[image_id]['caption'] = capsById[image_id]
- return {'overall': out, 'imgToEvalAllSPICE': imgToEvalAllSPICE}
-
-def eval_oracle(dataset, preds_n, model_id, split):
- cache_path = os.path.join('eval_results/', model_id + '_' + split + '_n.json')
-
- coco = getCOCO(dataset)
- valids = coco.getImgIds()
-
- capsById = {}
- for d in preds_n:
- capsById[d['image_id']] = capsById.get(d['image_id'], []) + [d]
-
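-    # evaluate the i-th caption of every image together as one prediction set;
-    # oracle (max) and average metrics are computed over these runs below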
-    sample_n = capsById[list(capsById.keys())[0]]
-    for i in range(len(sample_n)):
- preds = [_[i] for _ in capsById.values()]
-
- json.dump(preds, open(cache_path, 'w')) # serialize to temporary json file. Sigh, COCO API...
-
- cocoRes = coco.loadRes(cache_path)
- cocoEval = COCOEvalCap(coco, cocoRes)
- cocoEval.params['image_id'] = cocoRes.getImgIds()
- cocoEval.evaluate()
-
- imgToEval = cocoEval.imgToEval
- for img_id in capsById.keys():
- tmp = imgToEval[img_id]
- for k in tmp['SPICE'].keys():
- if k != 'All':
- tmp['SPICE_'+k] = tmp['SPICE'][k]['f']
- if tmp['SPICE_'+k] != tmp['SPICE_'+k]: # nan
- tmp['SPICE_'+k] = -100
- tmp['SPICE'] = tmp['SPICE']['All']['f']
- if tmp['SPICE'] != tmp['SPICE']: tmp['SPICE'] = -100
- capsById[img_id][i]['scores'] = imgToEval[img_id]
-
- out = {'overall': {}, 'ImgToEval': {}}
- for img_id in capsById.keys():
- out['ImgToEval'][img_id] = {}
- for metric in capsById[img_id][0]['scores'].keys():
- if metric == 'image_id': continue
- out['ImgToEval'][img_id]['oracle_'+metric] = max([_['scores'][metric] for _ in capsById[img_id]])
- out['ImgToEval'][img_id]['avg_'+metric] = sum([_['scores'][metric] for _ in capsById[img_id]]) / len(capsById[img_id])
- out['ImgToEval'][img_id]['captions'] = capsById[img_id]
- for metric in list(out['ImgToEval'].values())[0].keys():
- if metric == 'captions':
- continue
- tmp = np.array([_[metric] for _ in out['ImgToEval'].values()])
- tmp = tmp[tmp!=-100]
- out['overall'][metric] = tmp.mean()
-
- return out
-
-def eval_div_stats(dataset, preds_n, model_id, split):
- tokenizer = PTBTokenizer()
-
- capsById = {}
- for i, d in enumerate(preds_n):
- d['id'] = i
- capsById[d['image_id']] = capsById.get(d['image_id'], []) + [d]
-
- n_caps_perimg = len(capsById[list(capsById.keys())[0]])
-    print('captions per image: %d' % n_caps_perimg)
- _capsById = capsById # save the untokenized version
- capsById = tokenizer.tokenize(capsById)
-
- div_1, adiv_1 = compute_div_n(capsById,1)
- div_2, adiv_2 = compute_div_n(capsById,2)
-
- globdiv_1, _= compute_global_div_n(capsById,1)
-
- print('Diversity Statistics are as follows: \n Div1: %.2f, Div2: %.2f, gDiv1: %d\n'%(div_1,div_2, globdiv_1))
-
- # compute mbleu
- scorer = Bleu(4)
- all_scrs = []
- scrperimg = np.zeros((n_caps_perimg, len(capsById)))
-
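-    # mutual BLEU: caption i of each image is the candidate, the image's
-    # remaining n-1 captions are the references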
- for i in range(n_caps_perimg):
- tempRefsById = {}
- candsById = {}
- for k in capsById:
- tempRefsById[k] = capsById[k][:i] + capsById[k][i+1:]
- candsById[k] = [capsById[k][i]]
-
- score, scores = scorer.compute_score(tempRefsById, candsById)
- all_scrs.append(score)
- scrperimg[i,:] = scores[1]
-
- all_scrs = np.array(all_scrs)
-
- out = {}
- out['overall'] = {'Div1': div_1, 'Div2': div_2, 'gDiv1': globdiv_1}
- for k, score in zip(range(4), all_scrs.mean(axis=0).tolist()):
- out['overall'].update({'mBLeu_%d'%(k+1): score})
- imgToEval = {}
- for i,imgid in enumerate(capsById.keys()):
- imgToEval[imgid] = {'mBleu_2' : scrperimg[:,i].mean()}
- imgToEval[imgid]['individuals'] = []
- for j, d in enumerate(_capsById[imgid]):
- imgToEval[imgid]['individuals'].append(preds_n[d['id']])
- imgToEval[imgid]['individuals'][-1]['mBleu_2'] = scrperimg[j,i]
- out['ImgToEval'] = imgToEval
-
-    print('Mean mutual BLEU scores on this set are:\nmBLeu_1, mBLeu_2, mBLeu_3, mBLeu_4')
- print(all_scrs.mean(axis=0))
-
- return out
-
-def eval_self_cider(dataset, preds_n, model_id, split):
- cache_path = os.path.join('eval_results/', model_id + '_' + split + '_n.json')
-
- coco = getCOCO(dataset)
- valids = coco.getImgIds()
-
- # Get Cider_scorer
- Cider_scorer = Cider(df='corpus')
-
- tokenizer = PTBTokenizer()
- gts = {}
- for imgId in valids:
- gts[imgId] = coco.imgToAnns[imgId]
- gts = tokenizer.tokenize(gts)
-
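-    # accumulate ground-truth references, then precompute document frequencies
-    # and the corpus reference length needed by CIDEr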
- for imgId in valids:
- Cider_scorer.cider_scorer += (None, gts[imgId])
- Cider_scorer.cider_scorer.compute_doc_freq()
- Cider_scorer.cider_scorer.ref_len = np.log(float(len(Cider_scorer.cider_scorer.crefs)))
-
- # Prepare captions
- capsById = {}
- for d in preds_n:
- capsById[d['image_id']] = capsById.get(d['image_id'], []) + [d]
-
- capsById = tokenizer.tokenize(capsById)
- imgIds = list(capsById.keys())
- scores = Cider_scorer.my_self_cider([capsById[_] for _ in imgIds])
-
- def get_div(eigvals):
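-        # diversity = -log(largest singular value / sum of singular values),
-        # normalized by log(n); eigvals come from the pairwise self-CIDEr matrix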
- eigvals = np.clip(eigvals, 0, None)
- return -np.log(np.sqrt(eigvals[-1]) / (np.sqrt(eigvals).sum())) / np.log(len(eigvals))
- sc_scores = [get_div(np.linalg.eigvalsh(_/10)) for _ in scores]
- score = np.mean(np.array(sc_scores))
-
- imgToEval = {}
- for i, image_id in enumerate(imgIds):
- imgToEval[image_id] = {'self_cider': sc_scores[i], 'self_cider_mat': scores[i].tolist()}
- return {'overall': {'self_cider': score}, 'imgToEval': imgToEval}
diff --git a/spaces/mpuig/gpt3-email-generator/app.py b/spaces/mpuig/gpt3-email-generator/app.py
deleted file mode 100644
index 44411c1379ec49bfb3e0dd61397f33a4a73c6579..0000000000000000000000000000000000000000
--- a/spaces/mpuig/gpt3-email-generator/app.py
+++ /dev/null
@@ -1,136 +0,0 @@
-from random import choice
-
-import streamlit as st
-import openai
-
-PROMPT_TEMPLATE = "Write a {tone} email to the customers of a {company_type} offering {offer}"
-
-VOICE_TONE_OPTIONS = "funny,formal,professional,informal,friendly,humorous," \
- "serious,optimistic,motivating,respectful,assertive," \
- "conversational,urgent".split(",")
-
-COMPANY_TYPE_OPTIONS = "bank,insurance,telecommunications (telco),retail,transportation".split(",")
-# "pharmaceutical,energy,automotive,real estate,technology," \
-# "hospitality,food and beverage,healthcare,manufacturing,construction," \
-# "mining,agriculture,e-commerce,entertainment," \
-# "consulting services,accounting services,legal services".split(",")
-
-EXAMPLE_OFFERS = {
- "bank": [
- "Checking Accounts that allows customers to deposit and withdraw funds, write checks, and make electronic transactions",
- "Savings Accounts, where customers can deposit money and earn interest on their savings",
- "Certificates of Deposit, a type of savings account where customers deposit money for a fixed term and earn a higher rate of interest",
- "Personal Loans, a loan offered to individuals for personal use, such as home improvement, debt consolidation, or medical expenses",
- "Home Loans, a loan for the purpose of purchasing or refinancing a home",
- ],
- "insurance": [
- "Auto Insurance: A type of insurance policy that provides coverage for losses related to an individual's car, including liability, collision, and comprehensive coverage",
- "Home Insurance: A type of insurance policy that provides coverage for losses related to an individual's home, including protection for the structure, personal belongings, and liability coverage",
- "Life Insurance: A type of insurance policy that provides financial protection to an individual's family in the event of their death",
- "Health Insurance: A type of insurance policy that provides coverage for medical expenses and treatments, including doctor visits, hospital stays, and prescription drugs",
- "Business Insurance: A type of insurance policy that provides coverage for losses related to a business, including liability, property, and workers' compensation coverage",
- ],
- "telecommunications (telco)": [
- "Postpaid Plan: A postpaid plan provides customers with a monthly bill for services used. The customer typically receives a set amount of data, minutes, and texts for a fixed price, with the option to add extra services for an additional fee",
- "Prepaid Plan: A prepaid plan allows customers to pay for services in advance, before they use them. The customer adds credit to their account, which is then deducted for each call, text, or data usage",
- "Family Plan: A family plan allows multiple users to share a single account, pooling their data, minutes, and texts. This type of plan is often more cost-effective than individual plans and is popular with families or groups of friends",
- "Unlimited Plan: An unlimited plan provides customers with unlimited data, minutes, and texts for a fixed monthly fee. These plans are attractive to customers who use their mobile devices frequently and need a lot of data",
- "Roaming Plan: A roaming plan provides customers with the ability to use their mobile devices while traveling abroad. The customer pays a fee for each day they use their device, and is provided with a set amount of data, minutes, and texts while they are overseas",
- ],
- "retail": [
- "Buy one, get one free: Customers can purchase one product and receive a second product of equal or lesser value for free",
- "Limited-time discount: A temporary reduction in price for a specific product or product line, designed to encourage customers to make a purchase quickly",
- "Bundled offer: A package deal that combines multiple products or services at a discounted price, often as a way to promote complementary products",
- "Loyalty program: A reward system that incentivizes customers to continue making purchases by offering points, coupons, or other benefits for their spending",
- "Free gift with purchase: Customers receive a complimentary item when they make a purchase, often to promote new products or drive sales of slower-moving inventory.",
- ],
- "transportation": [
- "Express Delivery Service - This offer would be ideal for customers who need to have their packages delivered quickly and with a guaranteed delivery time. This could be done through the use of priority shipping, courier services, and specialized delivery vehicles",
- "Freight Shipping - This offer would target customers who need to transport large quantities of goods over long distances. The company would provide the necessary resources, such as shipping containers, trailers, and trucks, to safely transport the goods from point A to point B",
- "Logistics Solutions - This offer would provide customers with a comprehensive set of services for managing their supply chain. This could include warehousing, inventory management, and order fulfillment services, among others",
- "Shuttle Services - This offer would target customers who need to transport groups of people from one location to another, such as airport transfers, school trips, and group tours. The company would provide the necessary vehicles and drivers to safely transport the passengers",
- "Last-Mile Delivery - This offer would be ideal for customers who need to have their packages delivered directly to the end customer. This could be done through the use of delivery vehicles, bicycles, and even drones, depending on the needs of the customer",
- ]
-}
-
-openai.api_key = st.secrets["openai-api-key"]
-
-
-def generate_email(prompt: str, max_tokens: int = 256) -> str:
- """
-    Return an email generated by GPT-3 from the given prompt.
- """
-
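-    # temperature=0.7 keeps the copy varied but coherent; frequency and
-    # presence penalties stay at 0 so repeated offer wording is not penalized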
- completions = openai.Completion.create(
- model="text-davinci-003",
- prompt=prompt,
- temperature=0.7,
- max_tokens=max_tokens,
- top_p=1,
- frequency_penalty=0,
- presence_penalty=0
- )
- message = completions.choices[0].text
- return message
-
-
-def company_type_changed():
- company_type = st.session_state['company_type']
- st.session_state['offer'] = choice(EXAMPLE_OFFERS.get(company_type))
-
-
-def main():
- st.title("Email Generator")
- st.text("by Marc Puig")
-
- st.sidebar.markdown("### :arrow_right: Parameters")
-
-    email_tone = st.sidebar.selectbox(
-        label="Email voice tone",
-        options=sorted(VOICE_TONE_OPTIONS),
-    )
-
- email_company_type = st.sidebar.selectbox(
- label="Company type",
- key="company_type",
- options=(sorted(COMPANY_TYPE_OPTIONS)),
- on_change=company_type_changed,
- )
-
- if 'offer' not in st.session_state:
- st.session_state['offer'] = choice(EXAMPLE_OFFERS.get(email_company_type))
-
-    email_offer = st.sidebar.text_area(
-        label="Offer description",
-        key="offer",  # value is read from st.session_state['offer']
-        height=200,
-    )
-
- email_include_emojis = st.sidebar.checkbox('Include emojis 🤩')
-
- prompt_input = None
-
- if email_tone and email_company_type and email_offer:
- prompt_input = PROMPT_TEMPLATE.format(tone=email_tone, company_type=email_company_type, offer=email_offer)
- if email_include_emojis:
- prompt_input = prompt_input + ", including emojis"
-
- max_tokens_input = st.slider(
- label="How many characters do you want your email to be? ",
- help="A typical email is usually 100-500 characters",
- min_value=64,
- max_value=400,
- value=200
- )
-
- with st.form(key="form"):
- if st.form_submit_button(label='Generate email', disabled=prompt_input is None or len(prompt_input) == 0):
- with st.spinner("Generating email..."):
- output = generate_email(prompt_input, max_tokens=max_tokens_input)
- st.markdown("----")
- st.markdown(output)
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/mrloler/oai-claude/src/utils.js b/spaces/mrloler/oai-claude/src/utils.js
deleted file mode 100644
index df50ddf2150e882f01aebcaa91ed05b60ea97585..0000000000000000000000000000000000000000
--- a/spaces/mrloler/oai-claude/src/utils.js
+++ /dev/null
@@ -1,191 +0,0 @@
-const FormData = require('form-data');
-
-const wait = (duration) => {
- return new Promise((resolve) => {
- setTimeout(() => {
- resolve();
- }, duration);
- });
-};
-
-function buildPrompt(messages) {
- prompt = "\n\n" + preparePrompt(messages);
- return prompt;
- //do not escape for now
- const escapedPrompt = prompt.replace(/\r?\n|\r/g, '\\n').replace(/"/g, '\\"');
- return escapedPrompt;
-};
-
-const readBody = (res, json) => new Promise((resolve, reject) => {
- let buffer = '';
-
- res.on('data', chunk => {
- buffer += chunk;
- });
-
- res.on('end', () => {
- try {
- if (json) buffer = JSON.parse(buffer);
- resolve(buffer);
- } catch (e) {
- console.error(buffer);
- reject(e);
- }
- });
-})
-
-function preparePrompt(messages) {
- return messages.filter(m => m.content?.trim()).map(m => {
- let author = '';
- switch (m.role) {
- case 'user': author = 'Human'; break;
- case 'assistant': author = 'Assistant'; break;
- case 'system': author = 'System Note'; break;
- default: author = m.role; break;
- }
-
- return `${author}: ${m.content.trim()}`;
- }).join('\n\n');
-}
-
-const currentTime = () => {
- const date = new Date();
- const year = date.getFullYear();
- const month = String(date.getMonth() + 1).padStart(2, '0');
- const day = String(date.getDate()).padStart(2, '0');
- const hours = String(date.getHours()).padStart(2, '0');
- const minutes = String(date.getMinutes()).padStart(2, '0');
- const seconds = String(date.getSeconds()).padStart(2, '0');
- const milliseconds = String(date.getMilliseconds()).padStart(3, '0');
-
- return `${year}-${month}-${day} ${hours}:${minutes}:${seconds}.${milliseconds}`;
-};
-
-const genHeaders = (config) => {
- return {
- 'Cookie': `d=${config.cookie};`,
- 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/112.0',
- }
-}
-
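-// Split an OpenAI-style message array into chunks whose serialized JSON stays
-// under maxLength; every chunk but the last ends with an OOC note, and the
-// final chunk is closed with an "Assistant:" turn.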
-function splitJsonArray(jsonArray, maxLength) {
- let result = [];
- let currentChunk = [];
- let currentLength = 2; // Accounts for the opening and closing square brackets in the JSON array
-
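-    // appended to every non-final chunk so the model only acknowledges the
-    // partial context instead of replying to it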
- const jail = '(OOC: This is just a part of the context, reply only with "OOC: understood")';
- const assistant = "\n\nAssistant: ";
-
- const addObjectToChunk = (object, chunk) => {
- chunk.push(object);
- return currentLength + JSON.stringify(object).length + 1;
- };
-
- const appendTextToContent = (object, text) => {
- const newObj = JSON.parse(JSON.stringify(object));
- newObj.content += text;
- return newObj;
- };
-
- for (const obj of jsonArray) {
- const objLength = JSON.stringify(obj).length + 1;
-
- if (currentLength + objLength <= maxLength) {
- currentLength = addObjectToChunk(obj, currentChunk);
- } else {
- const lastObjectInChunk = currentChunk[currentChunk.length - 1];
- if (!lastObjectInChunk) continue;
- const lastObjectWithJail = appendTextToContent(lastObjectInChunk, ` ${jail}`);
- const lastObjectWithJailLength = JSON.stringify(lastObjectWithJail).length + 1;
-
- if (currentLength - JSON.stringify(lastObjectInChunk).length - 1 + lastObjectWithJailLength <= maxLength) {
- currentChunk[currentChunk.length - 1] = lastObjectWithJail;
- }
-
- result.push(currentChunk);
- currentChunk = [obj];
- currentLength = 2 + objLength;
- }
- }
-
- if (currentChunk.length > 0) {
- result.push(currentChunk);
- }
-
- const lastChunk = result[result.length - 1];
- const lastObjectInLastChunk = lastChunk[lastChunk.length - 1];
- const lastObjectWithAssistant = appendTextToContent(lastObjectInLastChunk, assistant);
- const lastObjectWithAssistantLength = JSON.stringify(lastObjectWithAssistant).length + 1;
-
- if (currentLength - JSON.stringify(lastObjectInLastChunk).length - 1 + lastObjectWithAssistantLength <= maxLength) {
- lastChunk[lastChunk.length - 1] = lastObjectWithAssistant;
- }
-
- return result;
-}
-
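-// Render a Date as a message timestamp of the form "<unix seconds>.xxxxx<digit>"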
-function convertToUnixTime(date) {
- const unixTime = Math.floor(date.getTime() / 1000);
- const randomDigit = Math.floor(Math.random() * 10);
- return `${unixTime}.xxxxx${randomDigit}`;
-}
-
-function createBaseForm(config) {
- const form = new FormData();
- form.append('token', config.token);
- form.append('channel', `${config.claudeId}`);
- form.append('_x_mode', 'online');
- form.append('_x_sonic', 'true');
- return form;
-}
-
-
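-// Wrap generated text in an OpenAI-compatible chat completion payload;
-// streaming responses carry the text under `delta`, non-streaming under `message`.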
-const dataToResponse = (
- data,
- promptTokens,
- completionTokens,
- stream = false,
- reason = null
-) => {
- const currDate = new Date();
- const contentData = { content: data, role: 'assistant' };
- const contentName = stream ? 'delta' : 'message';
-
- return {
- choices: [
- {
- [contentName]: !!data ? contentData : {},
- finish_reason: reason,
- index: 0,
- },
- ],
- created: currDate.getTime(),
- id: `chatcmpl-${(Math.random().toString(36).slice(2))}`,
- object: 'chat.completion.chunk',
- usage: {
- prompt_tokens: promptTokens,
- completion_tokens: completionTokens,
- total_tokens: promptTokens + completionTokens,
- },
- };
-};
-
-const stats = {
- prompts: []
-}
-
-module.exports = {
- buildPrompt,
- readBody,
- preparePrompt,
- currentTime,
- genHeaders,
- convertToUnixTime,
- createBaseForm,
- splitJsonArray,
- wait,
- dataToResponse,
- stats,
-};
\ No newline at end of file
diff --git a/spaces/mrm8488/speech-to-diffusion/README.md b/spaces/mrm8488/speech-to-diffusion/README.md
deleted file mode 100644
index 5b7b0535368d21ce73e410b22ee418d1e0f88048..0000000000000000000000000000000000000000
--- a/spaces/mrm8488/speech-to-diffusion/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Speech To Diffusion
-emoji: 🌖
-colorFrom: indigo
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.3
-app_file: app.py
-pinned: false
-license: wtfpl
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/mrrandom123/image_creative_caption_new/app.py b/spaces/mrrandom123/image_creative_caption_new/app.py
deleted file mode 100644
index b0ef195f114baf72e102bb0a67a3ad41db94e55a..0000000000000000000000000000000000000000
--- a/spaces/mrrandom123/image_creative_caption_new/app.py
+++ /dev/null
@@ -1,56 +0,0 @@
-import streamlit as st
-import os
-import cohere
-from PIL import Image
-from transformers import BlipProcessor, BlipForConditionalGeneration
-
-COHERE_API_KEY = os.getenv('COHERE_API_KEY')
-co_client = cohere.Client(COHERE_API_KEY)
-
-
-processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
-model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")
-
-# set up Streamlit app
-st.set_page_config(layout='wide', page_title='Image Hashtag Recommender')
-
-def generate_caption(image_file):
- image = Image.open(image_file).convert('RGB')
- inputs = processor(image, return_tensors="pt")
- output_ids = model.generate(**inputs)
- output_text = processor.decode(output_ids[0], skip_special_tokens=True)
- return output_text
-
-st.title("Image Caption and HashTag Generator")
-image_file = st.file_uploader("Upload an image", type=["jpg", "jpeg", "png"])
-
-def creative_caption(text):
- return co_client.generate(prompt=f"Write some trendy, catchy, exciting, innovative, captivating, creative and engaging instagram captions for the following prompt - {text}").generations[0].text
-
-
-def caption_hashtags(text):
- return co_client.generate(prompt=f"Write 10 trendy instagram hashtags for the following prompt - {text}").generations[0].text
-
-if image_file is not None:
- try:
-        caption = generate_caption(image_file)
- caption_text = creative_caption(caption)
- hashtags = caption_hashtags(caption)
- if len(caption) > 0:
- st.write(f"Caption : {caption}")
- st.write(f"Creative Caption : {caption_text}")
- st.write(f"Creative hashtags : {hashtags}")
-
- else:
- st.write("No caption found for this image.")
- except Exception as e:
- st.write(f"Error: {e}")
diff --git a/spaces/mrwenchen/stabilityai-stable-diffusion-2-1/app.py b/spaces/mrwenchen/stabilityai-stable-diffusion-2-1/app.py
deleted file mode 100644
index 0160420876923d89f2ab5fccb9f4d13725e29972..0000000000000000000000000000000000000000
--- a/spaces/mrwenchen/stabilityai-stable-diffusion-2-1/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/stabilityai/stable-diffusion-2-1").launch()
\ No newline at end of file
diff --git a/spaces/mshkdm/VToonify/vtoonify/model/vgg.py b/spaces/mshkdm/VToonify/vtoonify/model/vgg.py
deleted file mode 100644
index a1043d5bd8bdd0d1484d2270ae0d33c29495856c..0000000000000000000000000000000000000000
--- a/spaces/mshkdm/VToonify/vtoonify/model/vgg.py
+++ /dev/null
@@ -1,60 +0,0 @@
-import torch
-import torch.nn as nn
-import torchvision
-
-# VGG architecture, used for the perceptual loss with a pretrained VGG network
-class VGG19(torch.nn.Module):
- def __init__(self, requires_grad=False):
- super().__init__()
- vgg_pretrained_features = torchvision.models.vgg19(pretrained=True).features
- self.slice1 = torch.nn.Sequential()
- self.slice2 = torch.nn.Sequential()
- self.slice3 = torch.nn.Sequential()
- self.slice4 = torch.nn.Sequential()
- self.slice5 = torch.nn.Sequential()
- self.slice6 = torch.nn.Sequential()
- for x in range(2):
- self.slice1.add_module(str(x), vgg_pretrained_features[x])
- for x in range(2, 7):
- self.slice2.add_module(str(x), vgg_pretrained_features[x])
- for x in range(7, 12):
- self.slice3.add_module(str(x), vgg_pretrained_features[x])
- for x in range(12, 21):
- self.slice4.add_module(str(x), vgg_pretrained_features[x])
- for x in range(21, 32):
- self.slice5.add_module(str(x), vgg_pretrained_features[x])
- for x in range(32, 36):
- self.slice6.add_module(str(x), vgg_pretrained_features[x])
- if not requires_grad:
- for param in self.parameters():
- param.requires_grad = False
-
- self.pool = nn.AdaptiveAvgPool2d(output_size=1)
-
- self.mean = torch.tensor([0.485, 0.456, 0.406]).view(1,-1, 1, 1).cuda() * 2 - 1
- self.std = torch.tensor([0.229, 0.224, 0.225]).view(1,-1, 1, 1).cuda() * 2
-
-    def forward(self, X):  # returns features at the relu_i_1 layers
- X = (X-self.mean)/self.std
- h_relu1 = self.slice1(X)
- h_relu2 = self.slice2(h_relu1)
- h_relu3 = self.slice3(h_relu2)
- h_relu4 = self.slice4(h_relu3)
- h_relu5 = self.slice5[:-2](h_relu4)
- out = [h_relu1, h_relu2, h_relu3, h_relu4, h_relu5]
- return out
-
-# Perceptual loss that uses a pretrained VGG network
-class VGGLoss(nn.Module):
- def __init__(self):
- super(VGGLoss, self).__init__()
- self.vgg = VGG19().cuda()
- self.criterion = nn.L1Loss()
- self.weights = [1.0 / 32, 1.0 / 16, 1.0 / 8, 1.0 / 4, 1.0]
-
- def forward(self, x, y):
- x_vgg, y_vgg = self.vgg(x), self.vgg(y)
- loss = 0
- for i in range(len(x_vgg)):
- loss += self.weights[i] * self.criterion(x_vgg[i], y_vgg[i].detach())
- return loss
\ No newline at end of file
diff --git a/spaces/mshukor/UnIVAL/data/data_utils.py b/spaces/mshukor/UnIVAL/data/data_utils.py
deleted file mode 100644
index d45beb1aca2e55b1ca9b2c01ce1a869ad9a2121d..0000000000000000000000000000000000000000
--- a/spaces/mshukor/UnIVAL/data/data_utils.py
+++ /dev/null
@@ -1,601 +0,0 @@
-# Copyright 2022 The OFA-Sys Team.
-# All rights reserved.
-# This source code is licensed under the Apache 2.0 license
-# found in the LICENSE file in the root directory.
-
-try:
- from collections.abc import Iterable
-except ImportError:
- from collections import Iterable
-import contextlib
-import itertools
-import logging
-import re
-import warnings
-from typing import Optional, Tuple
-
-import numpy as np
-import torch
-
-from fairseq.file_io import PathManager
-from fairseq import utils
-import os
-
-logger = logging.getLogger(__name__)
-
-
-def infer_language_pair(path):
- """Infer language pair from filename: .-.(...).idx"""
- src, dst = None, None
- for filename in PathManager.ls(path):
- parts = filename.split(".")
- if len(parts) >= 3 and len(parts[1].split("-")) == 2:
- return parts[1].split("-")
- return src, dst
-
-
-def collate_tokens(
- values,
- pad_idx,
- eos_idx=None,
- left_pad=False,
- move_eos_to_beginning=False,
- pad_to_length=None,
- pad_to_multiple=1,
- pad_to_bsz=None,
-):
- """Convert a list of 1d tensors into a padded 2d tensor."""
- size = max(v.size(0) for v in values)
- size = size if pad_to_length is None else max(size, pad_to_length)
- if pad_to_multiple != 1 and size % pad_to_multiple != 0:
- size = int(((size - 0.1) // pad_to_multiple + 1) * pad_to_multiple)
-
- def copy_tensor(src, dst):
- assert dst.numel() == src.numel()
- if move_eos_to_beginning:
- if eos_idx is None:
- # if no eos_idx is specified, then use the last token in src
- dst[0] = src[-1]
- else:
- dst[0] = eos_idx
- dst[1:] = src[:-1]
- else:
- dst.copy_(src)
-
- if values[0].dim() == 1:
- res = values[0].new(len(values), size).fill_(pad_idx)
- elif values[0].dim() == 2:
- assert move_eos_to_beginning is False
- res = values[0].new(len(values), size, values[0].size(1)).fill_(pad_idx)
- else:
- raise NotImplementedError
-
- for i, v in enumerate(values):
- copy_tensor(v, res[i][size - len(v) :] if left_pad else res[i][: len(v)])
- return res
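-
-# Usage sketch: right-pad two 1d tensors to a common length with pad_idx=1:
-#   collate_tokens([torch.tensor([4, 5]), torch.tensor([6])], pad_idx=1)
-#   -> tensor([[4, 5],
-#              [6, 1]])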
-
-
-def load_indexed_dataset(
- path, dictionary=None, dataset_impl=None, combine=False, default="cached"
-):
- """A helper function for loading indexed datasets.
-
- Args:
- path (str): path to indexed dataset (e.g., 'data-bin/train')
- dictionary (~fairseq.data.Dictionary): data dictionary
- dataset_impl (str, optional): which dataset implementation to use. If
- not provided, it will be inferred automatically. For legacy indexed
- data we use the 'cached' implementation by default.
- combine (bool, optional): automatically load and combine multiple
- datasets. For example, if *path* is 'data-bin/train', then we will
- combine 'data-bin/train', 'data-bin/train1', ... and return a
- single ConcatDataset instance.
- """
- import fairseq.data.indexed_dataset as indexed_dataset
- from fairseq.data.concat_dataset import ConcatDataset
-
- datasets = []
- for k in itertools.count():
- path_k = path + (str(k) if k > 0 else "")
- try:
- path_k = indexed_dataset.get_indexed_dataset_to_local(path_k)
- except Exception as e:
- if "StorageException: [404] Path not found" in str(e):
- logger.warning(f"path_k: {e} not found")
- else:
- raise e
-
- dataset_impl_k = dataset_impl
- if dataset_impl_k is None:
- dataset_impl_k = indexed_dataset.infer_dataset_impl(path_k)
- dataset = indexed_dataset.make_dataset(
- path_k,
- impl=dataset_impl_k or default,
- fix_lua_indexing=True,
- dictionary=dictionary,
- )
- if dataset is None:
- break
- logger.info("loaded {:,} examples from: {}".format(len(dataset), path_k))
- datasets.append(dataset)
- if not combine:
- break
- if len(datasets) == 0:
- return None
- elif len(datasets) == 1:
- return datasets[0]
- else:
- return ConcatDataset(datasets)
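-
-# Usage sketch (vocab is a placeholder Dictionary): load and concatenate the
-# shards 'data-bin/train', 'data-bin/train1', ... into a single dataset:
-#   ds = load_indexed_dataset("data-bin/train", dictionary=vocab, combine=True)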
-
-
-@contextlib.contextmanager
-def numpy_seed(seed, *addl_seeds):
- """Context manager which seeds the NumPy PRNG with the specified seed and
- restores the state afterward"""
- if seed is None:
- yield
- return
- if len(addl_seeds) > 0:
- seed = int(hash((seed, *addl_seeds)) % 1e6)
- state = np.random.get_state()
- np.random.seed(seed)
- try:
- yield
- finally:
- np.random.set_state(state)
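-
-# Usage sketch (epoch is a placeholder extra seed): shuffle reproducibly without
-# disturbing the global NumPy RNG state:
-#   with numpy_seed(42, epoch):
-#       np.random.shuffle(indices)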
-
-
-def collect_filtered(function, iterable, filtered):
- """
- Similar to :func:`filter` but collects filtered elements in ``filtered``.
-
- Args:
- function (callable): function that returns ``False`` for elements that
- should be filtered
- iterable (iterable): iterable to filter
- filtered (list): list to store filtered elements
- """
- for el in iterable:
- if function(el):
- yield el
- else:
- filtered.append(el)
-
-
-def _filter_by_size_dynamic(indices, size_fn, max_positions, raise_exception=False):
- def compare_leq(a, b):
- return a <= b if not isinstance(a, tuple) else max(a) <= b
-
- def check_size(idx):
- if isinstance(max_positions, float) or isinstance(max_positions, int):
- return size_fn(idx) <= max_positions
- elif isinstance(max_positions, dict):
- idx_size = size_fn(idx)
- assert isinstance(idx_size, dict)
- intersect_keys = set(max_positions.keys()) & set(idx_size.keys())
- return all(
- all(
- a is None or b is None or a <= b
- for a, b in zip(idx_size[key], max_positions[key])
- )
- for key in intersect_keys
- )
- else:
- # For MultiCorpusSampledDataset, will generalize it later
- if not isinstance(size_fn(idx), Iterable):
- return all(size_fn(idx) <= b for b in max_positions)
- return all(
- a is None or b is None or a <= b
- for a, b in zip(size_fn(idx), max_positions)
- )
-
- ignored = []
- itr = collect_filtered(check_size, indices, ignored)
- indices = np.fromiter(itr, dtype=np.int64, count=-1)
- return indices, ignored
-
-
-def filter_by_size(indices, dataset, max_positions, raise_exception=False):
- """
- [deprecated] Filter indices based on their size.
- Use `FairseqDataset::filter_indices_by_size` instead.
-
- Args:
- indices (List[int]): ordered list of dataset indices
- dataset (FairseqDataset): fairseq dataset instance
- max_positions (tuple): filter elements larger than this size.
- Comparisons are done component-wise.
- raise_exception (bool, optional): if ``True``, raise an exception if
- any elements are filtered (default: False).
- """
- warnings.warn(
- "data_utils.filter_by_size is deprecated. "
- "Use `FairseqDataset::filter_indices_by_size` instead.",
- stacklevel=2,
- )
- if isinstance(max_positions, float) or isinstance(max_positions, int):
- if hasattr(dataset, "sizes") and isinstance(dataset.sizes, np.ndarray):
- ignored = indices[dataset.sizes[indices] > max_positions].tolist()
- indices = indices[dataset.sizes[indices] <= max_positions]
- elif (
- hasattr(dataset, "sizes")
- and isinstance(dataset.sizes, list)
- and len(dataset.sizes) == 1
- ):
- ignored = indices[dataset.sizes[0][indices] > max_positions].tolist()
- indices = indices[dataset.sizes[0][indices] <= max_positions]
- else:
- indices, ignored = _filter_by_size_dynamic(
- indices, dataset.size, max_positions
- )
- else:
- indices, ignored = _filter_by_size_dynamic(indices, dataset.size, max_positions)
-
- if len(ignored) > 0 and raise_exception:
- raise Exception(
- (
- "Size of sample #{} is invalid (={}) since max_positions={}, "
- "skip this example with --skip-invalid-size-inputs-valid-test"
- ).format(ignored[0], dataset.size(ignored[0]), max_positions)
- )
- if len(ignored) > 0:
- logger.warning(
- (
- "{} samples have invalid sizes and will be skipped, "
- "max_positions={}, first few sample ids={}"
- ).format(len(ignored), max_positions, ignored[:10])
- )
- return indices
-
-
-def filter_paired_dataset_indices_by_size(src_sizes, tgt_sizes, indices, max_sizes):
- """Filter a list of sample indices. Remove those that are longer
- than specified in max_sizes.
-
- Args:
- indices (np.array): original array of sample indices
- max_sizes (int or list[int] or tuple[int]): max sample size,
- can be defined separately for src and tgt (then list or tuple)
-
- Returns:
- np.array: filtered sample array
- list: list of removed indices
- """
- if max_sizes is None:
- return indices, []
- if type(max_sizes) in (int, float):
- max_src_size, max_tgt_size = max_sizes, max_sizes
- else:
- max_src_size, max_tgt_size = max_sizes
- if tgt_sizes is None:
- ignored = indices[src_sizes[indices] > max_src_size]
- else:
- ignored = indices[
- (src_sizes[indices] > max_src_size) | (tgt_sizes[indices] > max_tgt_size)
- ]
- if len(ignored) > 0:
- if tgt_sizes is None:
- indices = indices[src_sizes[indices] <= max_src_size]
- else:
- indices = indices[
- (src_sizes[indices] <= max_src_size)
- & (tgt_sizes[indices] <= max_tgt_size)
- ]
- return indices, ignored.tolist()
-
-
-def batch_by_size(
- indices,
- num_tokens_fn,
- num_tokens_vec=None,
- max_tokens=None,
- max_sentences=None,
- required_batch_size_multiple=1,
- fixed_shapes=None,
-):
- """
- Yield mini-batches of indices bucketed by size. Batches may contain
- sequences of different lengths.
-
- Args:
- indices (List[int]): ordered list of dataset indices
- num_tokens_fn (callable): function that returns the number of tokens at
- a given index
- num_tokens_vec (List[int], optional): precomputed vector of the number
- of tokens for each index in indices (to enable faster batch generation)
- max_tokens (int, optional): max number of tokens in each batch
- (default: None).
- max_sentences (int, optional): max number of sentences in each
- batch (default: None).
- required_batch_size_multiple (int, optional): require batch size to
- be less than N or a multiple of N (default: 1).
- fixed_shapes (List[Tuple[int, int]], optional): if given, batches will
- only be created with the given shapes. *max_sentences* and
- *required_batch_size_multiple* will be ignored (default: None).
- """
- try:
- from fairseq.data.data_utils_fast import (
- batch_by_size_fn,
- batch_by_size_vec,
- batch_fixed_shapes_fast,
- )
- except ImportError:
- raise ImportError(
- "Please build Cython components with: "
- "`python setup.py build_ext --inplace`"
- )
- except ValueError:
- raise ValueError(
- "Please build (or rebuild) Cython components with `python setup.py build_ext --inplace`."
- )
-
- # added int() to avoid TypeError: an integer is required
- max_tokens = (
- int(max_tokens) if max_tokens is not None else -1
- )
- max_sentences = max_sentences if max_sentences is not None else -1
- bsz_mult = required_batch_size_multiple
-
- if not isinstance(indices, np.ndarray):
- indices = np.fromiter(indices, dtype=np.int64, count=-1)
-
- if num_tokens_vec is not None and not isinstance(num_tokens_vec, np.ndarray):
- num_tokens_vec = np.fromiter(num_tokens_vec, dtype=np.int64, count=-1)
-
- if fixed_shapes is None:
- if num_tokens_vec is None:
- return batch_by_size_fn(
- indices,
- num_tokens_fn,
- max_tokens,
- max_sentences,
- bsz_mult,
- )
- else:
- return batch_by_size_vec(
- indices,
- num_tokens_vec,
- max_tokens,
- max_sentences,
- bsz_mult,
- )
-
- else:
- fixed_shapes = np.array(fixed_shapes, dtype=np.int64)
- sort_order = np.lexsort(
- [
- fixed_shapes[:, 1].argsort(), # length
- fixed_shapes[:, 0].argsort(), # bsz
- ]
- )
- fixed_shapes_sorted = fixed_shapes[sort_order]
- return batch_fixed_shapes_fast(indices, num_tokens_fn, fixed_shapes_sorted)
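-
-# Usage sketch (dataset is a placeholder FairseqDataset): build batches capped at
-# 4096 tokens each, with batch sizes padded to a multiple of 8:
-#   batches = batch_by_size(indices, dataset.num_tokens, max_tokens=4096,
-#                           required_batch_size_multiple=8)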
-
-
-def post_process(sentence: str, symbol: str):
- if symbol == "sentencepiece":
- sentence = sentence.replace(" ", "").replace("\u2581", " ").strip()
- elif symbol == "wordpiece":
- sentence = sentence.replace(" ", "").replace("_", " ").strip()
- elif symbol == "letter":
- sentence = sentence.replace(" ", "").replace("|", " ").strip()
- elif symbol == "silence":
- import re
- sentence = sentence.replace("", "")
- sentence = re.sub(' +', ' ', sentence).strip()
- elif symbol == "_EOW":
- sentence = sentence.replace(" ", "").replace("_EOW", " ").strip()
- elif symbol in {"subword_nmt", "@@ ", "@@"}:
- if symbol == "subword_nmt":
- symbol = "@@ "
- sentence = (sentence + " ").replace(symbol, "").rstrip()
- elif symbol == "none":
- pass
- elif symbol is not None:
- raise NotImplementedError(f"Unknown post_process option: {symbol}")
- return sentence
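-
-# Example: undo sentencepiece segmentation ("\u2581" marks word boundaries):
-#   post_process("\u2581he llo \u2581world", "sentencepiece") -> "hello world"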
-
-
-def compute_mask_indices(
- shape: Tuple[int, int],
- padding_mask: Optional[torch.Tensor],
- mask_prob: float,
- mask_length: int,
- mask_type: str = "static",
- mask_other: float = 0.0,
- min_masks: int = 0,
- no_overlap: bool = False,
- min_space: int = 0,
-) -> np.ndarray:
- """
- Computes random mask spans for a given shape
-
- Args:
-        shape: the shape for which to compute masks;
-            should be of size 2, where the first element is the batch size and the second is the number of timesteps
-        padding_mask: optional padding mask of the same size as shape, which will prevent masking padded elements
-        mask_prob: probability for each token to be chosen as the start of a span to be masked. This will be multiplied by
-            the number of timesteps divided by the length of the mask span to mask approximately this percentage of all elements.
-            However, due to overlaps, the actual number will be smaller (unless no_overlap is True)
-        mask_type: how to compute mask lengths
-            static = fixed size
-            uniform = sample from uniform distribution [mask_other, mask_length*2]
-            normal = sample from normal distribution with mean mask_length and stdev mask_other. mask is min 1 element
-            poisson = sample from Poisson distribution with lambda = mask_length
-        min_masks: minimum number of masked spans
-        no_overlap: if True, uses a recursive algorithm that prevents spans from overlapping
- min_space: only used if no_overlap is True, this is how many elements to keep unmasked between spans
- """
-
- bsz, all_sz = shape
- mask = np.full((bsz, all_sz), False)
-
- all_num_mask = int(
- # add a random number for probabilistic rounding
- mask_prob * all_sz / float(mask_length)
- + np.random.rand()
- )
-
- all_num_mask = max(min_masks, all_num_mask)
-
- mask_idcs = []
- for i in range(bsz):
- if padding_mask is not None:
- sz = all_sz - padding_mask[i].long().sum().item()
- num_mask = int(
- # add a random number for probabilistic rounding
- mask_prob * sz / float(mask_length)
- + np.random.rand()
- )
- num_mask = max(min_masks, num_mask)
- else:
- sz = all_sz
- num_mask = all_num_mask
-
- if mask_type == "static":
- lengths = np.full(num_mask, mask_length)
- elif mask_type == "uniform":
- lengths = np.random.randint(mask_other, mask_length * 2 + 1, size=num_mask)
- elif mask_type == "normal":
- lengths = np.random.normal(mask_length, mask_other, size=num_mask)
- lengths = [max(1, int(round(x))) for x in lengths]
- elif mask_type == "poisson":
- lengths = np.random.poisson(mask_length, size=num_mask)
- lengths = [int(round(x)) for x in lengths]
- else:
- raise Exception("unknown mask selection " + mask_type)
-
- if sum(lengths) == 0:
- lengths[0] = min(mask_length, sz - 1)
-
- if no_overlap:
- mask_idc = []
-
- def arrange(s, e, length, keep_length):
- span_start = np.random.randint(s, e - length)
- mask_idc.extend(span_start + i for i in range(length))
-
- new_parts = []
- if span_start - s - min_space >= keep_length:
- new_parts.append((s, span_start - min_space + 1))
- if e - span_start - keep_length - min_space > keep_length:
- new_parts.append((span_start + length + min_space, e))
- return new_parts
-
- parts = [(0, sz)]
- min_length = min(lengths)
- for length in sorted(lengths, reverse=True):
- lens = np.fromiter(
- (e - s if e - s >= length + min_space else 0 for s, e in parts),
-                    np.int64,  # np.int was removed in NumPy 1.24; use a concrete dtype
- )
- l_sum = np.sum(lens)
- if l_sum == 0:
- break
- probs = lens / np.sum(lens)
- c = np.random.choice(len(parts), p=probs)
- s, e = parts.pop(c)
- parts.extend(arrange(s, e, length, min_length))
- mask_idc = np.asarray(mask_idc)
- else:
- min_len = min(lengths)
- if sz - min_len <= num_mask:
- min_len = sz - num_mask - 1
-
- mask_idc = np.random.choice(sz - min_len, num_mask, replace=False)
-
- mask_idc = np.asarray(
- [
- mask_idc[j] + offset
- for j in range(len(mask_idc))
- for offset in range(lengths[j])
- ]
- )
-
- mask_idcs.append(np.unique(mask_idc[mask_idc < sz]))
-
- min_len = min([len(m) for m in mask_idcs])
- for i, mask_idc in enumerate(mask_idcs):
- if len(mask_idc) > min_len:
- mask_idc = np.random.choice(mask_idc, min_len, replace=False)
- mask[i, mask_idc] = True
-
- return mask
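-
-# Usage sketch: wav2vec 2.0-style masking of a (batch=8, frames=100) input,
-# masking roughly 65% of frames in spans of length 10:
-#   mask = compute_mask_indices((8, 100), None, mask_prob=0.65, mask_length=10)
-#   # mask is a boolean np.ndarray of shape (8, 100)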
-
-
-def get_mem_usage():
- try:
- import psutil
-
- mb = 1024 * 1024
- return f"used={psutil.virtual_memory().used / mb}Mb; avail={psutil.virtual_memory().available / mb}Mb"
- except ImportError:
- return "N/A"
-
-
-# lens: torch.LongTensor
-# returns: torch.BoolTensor
-def lengths_to_padding_mask(lens):
- bsz, max_lens = lens.size(0), torch.max(lens).item()
- mask = torch.arange(max_lens).to(lens.device).view(1, max_lens)
- mask = mask.expand(bsz, -1) >= lens.view(bsz, 1).expand(-1, max_lens)
- return mask
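-
-# Example: lengths_to_padding_mask(torch.tensor([2, 4])) ->
-#   tensor([[False, False,  True,  True],
-#           [False, False, False, False]])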
-
-
-# lens: torch.LongTensor
-# returns: torch.BoolTensor
-def lengths_to_mask(lens):
- return ~lengths_to_padding_mask(lens)
-
-
-def get_buckets(sizes, num_buckets):
- buckets = np.unique(
- np.percentile(
- sizes,
- np.linspace(0, 100, num_buckets + 1),
- interpolation='lower',
- )[1:]
- )
- return buckets
-
-
-def get_bucketed_sizes(orig_sizes, buckets):
- sizes = np.copy(orig_sizes)
- assert np.min(sizes) >= 0
- start_val = -1
- for end_val in buckets:
- mask = (sizes > start_val) & (sizes <= end_val)
- sizes[mask] = end_val
- start_val = end_val
- return sizes
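-
-# Usage sketch: quantize sample sizes to a small set of bucket boundaries so that
-# batches contain fewer distinct padded lengths:
-#   buckets = get_buckets(sizes, num_buckets=8)
-#   bucketed_sizes = get_bucketed_sizes(sizes, buckets)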
-
-
-
-def _find_extra_valid_paths(dataset_path: str) -> set:
- paths = utils.split_paths(dataset_path)
- all_valid_paths = set()
- for sub_dir in paths:
- contents = PathManager.ls(sub_dir)
- valid_paths = [c for c in contents if re.match("valid*[0-9].*", c) is not None]
- all_valid_paths |= {os.path.basename(p) for p in valid_paths}
- # Remove .bin, .idx etc
- roots = {os.path.splitext(p)[0] for p in all_valid_paths}
- return roots
-
-
-def raise_if_valid_subsets_unintentionally_ignored(train_cfg) -> None:
- """Raises if there are paths matching 'valid*[0-9].*' which are not combined or ignored."""
- if (
- train_cfg.dataset.ignore_unused_valid_subsets
- or train_cfg.dataset.combine_valid_subsets
- or train_cfg.dataset.disable_validation
- or not hasattr(train_cfg.task, "data")
- ):
- return
- other_paths = _find_extra_valid_paths(train_cfg.task.data)
- specified_subsets = train_cfg.dataset.valid_subset.split(",")
- ignored_paths = [p for p in other_paths if p not in specified_subsets]
- if ignored_paths:
- advice = "Set --combine-val to combine them or --ignore-unused-valid-subsets to ignore them."
- msg = f"Valid paths {ignored_paths} will be ignored. {advice}"
- raise ValueError(msg)
diff --git a/spaces/mw00/chess-classification/app.py b/spaces/mw00/chess-classification/app.py
deleted file mode 100644
index 1621b7c41ca22d9228a37dddc4aa0e0664411846..0000000000000000000000000000000000000000
--- a/spaces/mw00/chess-classification/app.py
+++ /dev/null
@@ -1,26 +0,0 @@
-# AUTOGENERATED! DO NOT EDIT! File to edit: chess-deployment.ipynb.
-
-# %% auto 0
-__all__ = ['learn', 'categories', 'image', 'label', 'examples', 'intf', 'classify_image']
-
-# %% chess-deployment.ipynb 1
-from fastai.vision.all import *
-import gradio as gr
-
-# %% chess-deployment.ipynb 3
-learn = load_learner("chess-model.pkl")
-
-# %% chess-deployment.ipynb 5
-categories = ("Bishop", "King", "Knight", "Pawn", "Queen", "Rook")
-
-def classify_image(img):
- pred,pred_idx,probs = learn.predict(img)
- return dict(zip(categories, map(float, probs)))
-
-# %% chess-deployment.ipynb 7
-image = gr.inputs.Image(shape=(192,192))
-label = gr.outputs.Label()
-examples = ["bishop.png", "king.jpg", "knight.png", "rook.jpg", "pawn.jpg"]
-
-intf = gr.Interface(fn=classify_image, inputs=image, outputs=label, examples=examples, title="Chess Piece Classifier", description="Classify chess pieces into King, Queen, Bishop, Knight, Rook, or Pawn")
-intf.launch(inline=False)
diff --git a/spaces/mygyasir/Real-Time-Voice-Cloning/utils/__init__.py b/spaces/mygyasir/Real-Time-Voice-Cloning/utils/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/nanglo123/GTSRB-Deployment/create_model.py b/spaces/nanglo123/GTSRB-Deployment/create_model.py
deleted file mode 100644
index 5ba000a50ea6313abcd648ee55f433b37096280f..0000000000000000000000000000000000000000
--- a/spaces/nanglo123/GTSRB-Deployment/create_model.py
+++ /dev/null
@@ -1,17 +0,0 @@
-import torch
-from model_class import CNNTraffic
-
-
-def create_CNN(seed: int = 42):
-    # Seed the RNG so weight initialization is reproducible
-    torch.manual_seed(seed)
-
-    model = CNNTraffic(input_shape=3, output_shape=43)
-
-    # Freeze all parameters; the model is only used for inference
-    for param in model.parameters():
-        param.requires_grad = False
-
-    return model
diff --git a/spaces/nateraw/dino-clips/dino/video_generation.py b/spaces/nateraw/dino-clips/dino/video_generation.py
deleted file mode 100644
index 82b796961b536b67bd3549a38ae82b42fa2769d6..0000000000000000000000000000000000000000
--- a/spaces/nateraw/dino-clips/dino/video_generation.py
+++ /dev/null
@@ -1,388 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-import os
-import glob
-import sys
-import argparse
-import cv2
-
-from tqdm import tqdm
-import matplotlib.pyplot as plt
-import torch
-import torch.nn as nn
-import torchvision
-from torchvision import transforms as pth_transforms
-import numpy as np
-from PIL import Image
-
-import utils
-import vision_transformer as vits
-
-
-FOURCC = {
- "mp4": cv2.VideoWriter_fourcc(*"MP4V"),
- "avi": cv2.VideoWriter_fourcc(*"XVID"),
-}
-DEVICE = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
-
-
-class VideoGenerator:
- def __init__(self, args):
- self.args = args
- # self.model = None
- # Don't need to load model if you only want a video
- if not self.args.video_only:
- self.model = self.__load_model()
-
- def run(self):
- if self.args.input_path is None:
- print(f"Provided input path {self.args.input_path} is non valid.")
- sys.exit(1)
- else:
- if self.args.video_only:
- self._generate_video_from_images(
- self.args.input_path, self.args.output_path
- )
- else:
- # If input path exists
- if os.path.exists(self.args.input_path):
- frames_folder = os.path.join(self.args.output_path, "frames")
- os.makedirs(frames_folder, exist_ok=True)
- # If input is a video file
- if os.path.isfile(self.args.input_path):
- attention_folder = os.path.join(
- self.args.output_path, "attention"
- )
-
- os.makedirs(attention_folder, exist_ok=True)
-
- self._extract_frames_from_video(
- self.args.input_path, frames_folder
- )
-
- self._inference(
- frames_folder,
- attention_folder,
- )
-
- self._generate_video_from_images(
- attention_folder, self.args.output_path
- )
- self._generate_video_from_images(
- frames_folder,
- self.args.output_path,
- file_pattern="reshaped-*.jpg",
- out_video_name="original-reshaped"
- )
- # If input is a folder of already extracted frames
- if os.path.isdir(self.args.input_path):
- attention_folder = os.path.join(
- self.args.output_path, "attention"
- )
-
- os.makedirs(attention_folder, exist_ok=True)
-
- self._inference(self.args.input_path, attention_folder)
-
- self._generate_video_from_images(
- attention_folder, self.args.output_path
- )
- self._generate_video_from_images(
- frames_folder,
- self.args.output_path,
- file_pattern="reshaped-*.jpg",
- out_video_name="original-reshaped"
- )
-            # If the input path doesn't exist
-            else:
-                print(f"Provided input path {self.args.input_path} doesn't exist.")
- sys.exit(1)
-
- def _extract_frames_from_video(self, inp: str, out: str):
- vidcap = cv2.VideoCapture(inp)
- self.args.fps = vidcap.get(cv2.CAP_PROP_FPS)
-
- print(f"Video: {inp} ({self.args.fps} fps)")
- print(f"Extracting frames to {out}")
-
- success, image = vidcap.read()
- count = 0
- while success:
- cv2.imwrite(
- os.path.join(out, f"frame-{count:04}.jpg"),
- image,
- )
- success, image = vidcap.read()
- count += 1
-
- def _generate_video_from_images(self, inp: str, out: str, file_pattern="attn-*.jpg", out_video_name="video"):
- img_array = []
- attention_images_list = sorted(glob.glob(os.path.join(inp, file_pattern)))
-
- # Get size of the first image
- with open(attention_images_list[0], "rb") as f:
- img = Image.open(f)
- img = img.convert("RGB")
- size = (img.width, img.height)
- img_array.append(cv2.cvtColor(np.array(img), cv2.COLOR_RGB2BGR))
-
- print(f"Generating video {size} to {out}")
-
- for filename in tqdm(attention_images_list[1:]):
- with open(filename, "rb") as f:
- img = Image.open(f)
- img = img.convert("RGB")
- img_array.append(cv2.cvtColor(np.array(img), cv2.COLOR_RGB2BGR))
-
- out = cv2.VideoWriter(
- os.path.join(out, f"{out_video_name}." + self.args.video_format),
- FOURCC[self.args.video_format],
- self.args.fps,
- size,
- )
-
- for i in range(len(img_array)):
- out.write(img_array[i])
- out.release()
- print("Done")
-
- def _inference(self, inp: str, out: str):
- print(f"Generating attention images to {out}")
-
- for img_path in tqdm(sorted(glob.glob(os.path.join(inp, "*.jpg")))):
- with open(img_path, "rb") as f:
- img_in = Image.open(f)
- img_in = img_in.convert("RGB")
-
- if self.args.resize is not None:
- transform = pth_transforms.Compose(
- [
- pth_transforms.ToTensor(),
- pth_transforms.Resize(self.args.resize),
- pth_transforms.Normalize(
- (0.485, 0.456, 0.406), (0.229, 0.224, 0.225)
- ),
- ]
- )
- else:
- transform = pth_transforms.Compose(
- [
- pth_transforms.ToTensor(),
- pth_transforms.Normalize(
- (0.485, 0.456, 0.406), (0.229, 0.224, 0.225)
- ),
- ]
- )
-
- img = transform(img_in)
-
- # make the image divisible by the patch size
- w, h = (
- img.shape[1] - img.shape[1] % self.args.patch_size,
- img.shape[2] - img.shape[2] % self.args.patch_size,
- )
- img = img[:, :w, :h].unsqueeze(0)
- w_featmap = img.shape[-2] // self.args.patch_size
- h_featmap = img.shape[-1] // self.args.patch_size
-
- attentions = self.model.get_last_selfattention(img.to(DEVICE))
-            nh = attentions.shape[1]  # number of heads
-
- # we keep only the output patch attention
- attentions = attentions[0, :, 0, 1:].reshape(nh, -1)
-
- # we keep only a certain percentage of the mass
- val, idx = torch.sort(attentions)
- val /= torch.sum(val, dim=1, keepdim=True)
- cumval = torch.cumsum(val, dim=1)
- th_attn = cumval > (1 - self.args.threshold)
- idx2 = torch.argsort(idx)
- for head in range(nh):
- th_attn[head] = th_attn[head][idx2[head]]
- th_attn = th_attn.reshape(nh, w_featmap, h_featmap).float()
- # interpolate
- th_attn = (
- nn.functional.interpolate(
- th_attn.unsqueeze(0),
- scale_factor=self.args.patch_size,
- mode="nearest",
- )[0]
- .cpu()
- .numpy()
- )
-
- attentions = attentions.reshape(nh, w_featmap, h_featmap)
- attentions = (
- nn.functional.interpolate(
- attentions.unsqueeze(0),
- scale_factor=self.args.patch_size,
- mode="nearest",
- )[0]
- .cpu()
- .numpy()
- )
-
- # save attentions heatmaps
- fname = os.path.join(out, "attn-" + os.path.basename(img_path))
- plt.imsave(
- fname=fname,
- arr=sum(
- attentions[i] * 1 / attentions.shape[0]
- for i in range(attentions.shape[0])
- ),
- cmap="inferno",
- format="jpg",
- )
- fname = os.path.join(os.path.dirname(out), "frames/reshaped-" + os.path.basename(img_path))
- img_in = img_in.resize((attentions[0].shape[1], attentions[0].shape[0]))
- img_in.save(fname)
-
- def __load_model(self):
- # build model
- model = vits.__dict__[self.args.arch](
- patch_size=self.args.patch_size, num_classes=0
- )
- for p in model.parameters():
- p.requires_grad = False
- model.eval()
- model.to(DEVICE)
-
- if os.path.isfile(self.args.pretrained_weights):
- state_dict = torch.load(self.args.pretrained_weights, map_location="cpu")
- if (
- self.args.checkpoint_key is not None
- and self.args.checkpoint_key in state_dict
- ):
- print(
- f"Take key {self.args.checkpoint_key} in provided checkpoint dict"
- )
- state_dict = state_dict[self.args.checkpoint_key]
- state_dict = {k.replace("module.", ""): v for k, v in state_dict.items()}
- # remove `backbone.` prefix induced by multicrop wrapper
- state_dict = {k.replace("backbone.", ""): v for k, v in state_dict.items()}
- msg = model.load_state_dict(state_dict, strict=False)
- print(
- "Pretrained weights found at {} and loaded with msg: {}".format(
- self.args.pretrained_weights, msg
- )
- )
- else:
- print(
- "Please use the `--pretrained_weights` argument to indicate the path of the checkpoint to evaluate."
- )
- url = None
- if self.args.arch == "vit_small" and self.args.patch_size == 16:
- url = "dino_deitsmall16_pretrain/dino_deitsmall16_pretrain.pth"
- elif self.args.arch == "vit_small" and self.args.patch_size == 8:
- url = "dino_deitsmall8_300ep_pretrain/dino_deitsmall8_300ep_pretrain.pth" # model used for visualizations in our paper
- elif self.args.arch == "vit_base" and self.args.patch_size == 16:
- url = "dino_vitbase16_pretrain/dino_vitbase16_pretrain.pth"
- elif self.args.arch == "vit_base" and self.args.patch_size == 8:
- url = "dino_vitbase8_pretrain/dino_vitbase8_pretrain.pth"
- if url is not None:
- print(
- "Since no pretrained weights have been provided, we load the reference pretrained DINO weights."
- )
- state_dict = torch.hub.load_state_dict_from_url(
- url="https://dl.fbaipublicfiles.com/dino/" + url
- )
- model.load_state_dict(state_dict, strict=True)
- else:
- print(
- "There is no reference weights available for this model => We use random weights."
- )
- return model
-
-
-def parse_args():
- parser = argparse.ArgumentParser("Generation self-attention video")
- parser.add_argument(
- "--arch",
- default="vit_small",
- type=str,
- choices=["vit_tiny", "vit_small", "vit_base"],
- help="Architecture (support only ViT atm).",
- )
- parser.add_argument(
- "--patch_size", default=8, type=int, help="Patch resolution of the self.model."
- )
- parser.add_argument(
- "--pretrained_weights",
- default="",
- type=str,
- help="Path to pretrained weights to load.",
- )
- parser.add_argument(
- "--checkpoint_key",
- default="teacher",
- type=str,
- help='Key to use in the checkpoint (example: "teacher")',
- )
- parser.add_argument(
- "--input_path",
- required=True,
- type=str,
- help="""Path to a video file if you want to extract frames
- or to a folder of images already extracted by yourself.
- or to a folder of attention images.""",
- )
- parser.add_argument(
- "--output_path",
- default="./",
- type=str,
- help="""Path to store a folder of frames and / or a folder of attention images.
- and / or a final video. Default to current directory.""",
- )
- parser.add_argument(
- "--threshold",
- type=float,
- default=0.6,
- help="""We visualize masks
- obtained by thresholding the self-attention maps to keep xx percent of the mass.""",
- )
- parser.add_argument(
- "--resize",
- default=None,
- type=int,
- nargs="+",
- help="""Apply a resize transformation to input image(s). Use if OOM error.
- Usage (single or W H): --resize 512, --resize 720 1280""",
- )
- parser.add_argument(
- "--video_only",
- action="store_true",
- help="""Use this flag if you only want to generate a video and not all attention images.
- If used, --input_path must be set to the folder of attention images. Ex: ./attention/""",
- )
- parser.add_argument(
- "--fps",
- default=30.0,
- type=float,
- help="FPS of input / output video. Automatically set if you extract frames from a video.",
- )
- parser.add_argument(
- "--video_format",
- default="mp4",
- type=str,
- choices=["mp4", "avi"],
- help="Format of generated video (mp4 or avi).",
- )
-
- return parser.parse_args()
-
-
-if __name__ == "__main__":
- args = parse_args()
-
- vg = VideoGenerator(args)
diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Dvdvideosoft Free Studio 503 Serial Keygen REPACK 14.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Dvdvideosoft Free Studio 503 Serial Keygen REPACK 14.md
deleted file mode 100644
index 4615d1e150584492acefcfee842cbad2a7416a2c..0000000000000000000000000000000000000000
--- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Dvdvideosoft Free Studio 503 Serial Keygen REPACK 14.md
+++ /dev/null
@@ -1,16 +0,0 @@
-
-
-Dvdvideosoft Free Studio 503: A Complete Multimedia Package for DVDs
-
-Dvdvideosoft Free Studio 503 is a software bundle that contains 49 free multimedia applications developed by Dvdvideosoft. These applications are divided into five sections: Downloaders, Uploaders, Converters, Recorders and Editors. You can use them to download and convert video from YouTube to MP4 and MP3, edit video and audio files, record video and audio from Skype, upload video and music to YouTube and Facebook, and more.
-One of the most popular applications in Dvdvideosoft Free Studio 503 is the YouTube to MP3 Converter, which lets you download and convert any YouTube video to MP3 format in high quality. You can also download YouTube playlists, channels, VEVO videos, torrent videos, and premium videos with this application. Another popular application is the Free Video Editor, which lets you cut, rotate, merge, and crop video files without losing quality.
-
-Dvdvideosoft Free Studio 503 is compatible with Windows 11, 10, 8, 7, and XP SP3. You can download it for free from the official Dvdvideosoft website. However, if you want to unlock some premium features and remove the watermark from edited videos, you will need a serial keygen. A serial keygen is a program that generates valid serial numbers for a software product. You can use these serial numbers to activate the software and enjoy its full functionality.
-
-There are many websites that claim to offer serial keygens for Dvdvideosoft Free Studio 503. However, most of them are fake or malicious. They may contain viruses, malware, spyware, or adware that can harm your computer or steal your personal information. Some of them may also ask you to complete surveys or pay money to access the serial keygens. These are scams that you should avoid at all costs.
-
-The only safe and reliable way to get a serial keygen for Dvdvideosoft Free Studio 503 is to purchase it from the official Dvdvideosoft website. The price is only $10 for a lifetime license that covers all the applications in the bundle. You will also get free updates and technical support from Dvdvideosoft. By buying a serial keygen from Dvdvideosoft, you will not only support the developers of this software but also protect your computer from potential threats.
-
-
-If you are looking for a complete multimedia package for DVDs that offers a wide range of free applications for downloading, converting, editing, recording, and uploading video and audio files, you should try Dvdvideosoft Free Studio 503. It is easy to use, fast, and produces high-quality results. And if you want to enjoy its premium features without any limitations or watermarks, you should buy a serial keygen from Dvdvideosoft. It is worth every penny.
-
-
-Dvdvideosoft Free Studio 503 is not only great software for DVDs but also for other devices and platforms. You can use it to convert video and audio files between different formats and for iPhone, iPad, iPod, Windows and Android devices. You can also make screen captures and record videos from the desktop or from Skype. You can even create your own ringtones, GIFs, and slideshows with Dvdvideosoft Free Studio 503.
-
-Dvdvideosoft Free Studio 503 is also very user-friendly and intuitive. All the applications have a simple and clear interface that guides you through the process step by step. You can also customize the settings and preferences according to your needs. Dvdvideosoft Free Studio 503 also supports multiple languages, including English, French, German, Spanish, Italian, Russian, Chinese, Japanese, and more.
-
-Dvdvideosoft Free Studio 503 is a software bundle that you will love to use and recommend to your friends and family. It has everything you need to enjoy and share your multimedia files with ease. Whether you want to download and convert YouTube videos to MP3, edit your own videos and audio files, record Skype calls, or upload your creations to YouTube and Facebook, Dvdvideosoft Free Studio 503 can help you do it all. And with a serial keygen from Dvdvideosoft, you can unlock its full potential and get rid of any limitations or watermarks.
-
-
\ No newline at end of file
diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Mary J. Blige Growing Pains Full Album Zip National Cartographe Fixed.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Mary J. Blige Growing Pains Full Album Zip National Cartographe Fixed.md
deleted file mode 100644
index ab8b18e9723afa84753c81497ad34c47c3221442..0000000000000000000000000000000000000000
--- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Mary J. Blige Growing Pains Full Album Zip National Cartographe Fixed.md
+++ /dev/null
@@ -1,15 +0,0 @@
-
-
-Mary J. Blige's Growing Pains: A Soulful Journey of Self-Discovery
-
-Mary J. Blige is one of the most influential and successful R&B singers of all time. She has sold over 80 million records worldwide and won nine Grammy Awards. Her eighth studio album, Growing Pains, released in 2007, showcases her versatility and strength as an artist and a woman.
-
-Growing Pains is a soulful journey of self-discovery, where Blige reflects on her personal and professional growth, her struggles and triumphs, and her love and happiness. The album features collaborations with Ludacris, Usher, The-Dream, Ne-Yo, and Eve, among others. The production is diverse and polished, ranging from upbeat dance tracks to smooth ballads.
-
-The album's lead single, "Just Fine", is a catchy anthem of positivity and empowerment, where Blige declares that she is "not gon' let nothing get in my way". The song was nominated for two Grammy Awards and became a worldwide hit. Other highlights include "Work That", a motivational song about self-confidence and beauty; "Stay Down", a heartfelt pledge of loyalty to her husband; "Roses", a candid confession of her insecurities and fears; and "Work In Progress (Growing Pains)", a humble acknowledgment of her flaws and mistakes.
-
-Growing Pains received critical acclaim and commercial success, debuting at number two on the Billboard 200 chart and selling over three million copies worldwide. It also won a Grammy Award for Best Contemporary R&B Album. The album is widely regarded as one of Blige's best works, as well as one of the best R&B albums of the 2000s.
-
-
-If you are a fan of Mary J. Blige or R&B music in general, you can download the full album zip file from Apple Music or stream it on Qobuz. You can also read more about the album and its songs on AllMusic.
-
-
-One of the themes that Blige explores on Growing Pains is the importance of self-love and self-care. She sings about finding peace and joy within herself, rather than relying on external sources. She also encourages her listeners to do the same, as she believes that everyone deserves happiness and respect. On "Feel Like a Woman", she celebrates her femininity and sensuality, while on "If You Love Me?", she challenges her partner to show his true feelings and commitment.
-
-Another theme that Blige addresses on Growing Pains is the challenge of overcoming adversity and negativity. She shares her experiences of dealing with criticism, doubt, pain, and betrayal, and how she learned to cope and heal. She also offers hope and inspiration to those who are going through similar situations, as she believes that nothing can stop them from achieving their dreams. On "Hurt Again", she expresses her resilience and courage in the face of heartbreak, while on "Come To Me (Peace)", she prays for harmony and unity in the world.
-
-Growing Pains is a testament to Blige's artistic maturity and personal growth. It showcases her remarkable vocal skills, emotional depth, and musical diversity. It also reveals her vulnerability and honesty, as she opens up about her life and feelings. Growing Pains is not only an album, but a journey, a lesson, and a gift. It is a reflection of Blige's soul, and a reminder of her greatness.
-
-
\ No newline at end of file
diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Oggy And The Cockroaches New Episodes In Hindi Download VERIFIED.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Oggy And The Cockroaches New Episodes In Hindi Download VERIFIED.md
deleted file mode 100644
index 0f9521ca3a265119521fa15a6af9a9a01c0c098f..0000000000000000000000000000000000000000
--- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Oggy And The Cockroaches New Episodes In Hindi Download VERIFIED.md
+++ /dev/null
@@ -1,21 +0,0 @@
-
-
-How to Download Oggy and the Cockroaches New Episodes in Hindi for Free
-
-Oggy and the Cockroaches is a popular animated comedy series that features the adventures of a lazy cat named Oggy and three pesky cockroaches who live in his house. The show has been dubbed into many languages, including Hindi, and has fans all over the world.
-
-If you are looking for a way to download Oggy and the Cockroaches new episodes in Hindi for free, you have come to the right place. In this article, we will show you how to find and download the latest episodes of this hilarious show in a few simple steps.
-
-Step 1: Find a Reliable Website
-
-The first step is to find a reliable website that offers Oggy and the Cockroaches new episodes in Hindi for free download. There are many websites that claim to provide this service, but not all of them are safe or legal. Some of them may contain viruses, malware, or unwanted ads that can harm your device or compromise your privacy.
-
-One of the websites that we recommend is PureToons.Com[^1^]. This website has a large collection of Oggy and the Cockroaches episodes in Hindi, as well as other cartoons and anime. The website is easy to use, fast, and secure. You can also watch the episodes online if you prefer.
-
-Step 2: Choose an Episode
-
-The next step is to choose an episode that you want to download. You can browse through the different seasons and episodes of Oggy and the Cockroaches on PureToons.Com[^1^] by clicking on the links provided on the homepage. You can also use the search bar to find a specific episode by typing its name or number.
-
-For example, if you want to download the latest episode of Oggy and the Cockroaches: Next Generation, which is a spin-off series that features Oggy's son and his friends[^3^], you can type "Oggy Next Generation" in the search bar and click on the result that matches your query.
-
-Step 3: Download the Episode
-
-The final step is to download the episode that you have chosen. Once you click on an episode link, you will be redirected to a page that contains some information about the episode, such as its title, genre, running time, language, quality, and summary. You will also see a download button at the bottom of the page.
-
-Click on the download button and wait for a few seconds until a new window opens. This window will show you some options for downloading the episode in different formats and sizes, such as MP4, 3GP, 720p, 240p, 360p, 480p, or 1080p. Choose the option that suits your device and internet speed and click on it.
-
-
-A new tab will open that will ask you to verify that you are not a robot by completing a captcha. After completing the captcha, click on "click here to continue" and wait for another few seconds until a countdown timer ends. Then click on "get link" and your download will start automatically.
-
-Conclusion
-
-Oggy and the Cockroaches is a fun and entertaining show that you can enjoy with your family and friends. By following these simple steps, you can download Oggy and the Cockroaches new episodes in Hindi for free from PureToons.Com[^1^] without any hassle or risk.
-
-If you liked this article, please share it with your friends who are also fans of Oggy and the Cockroaches. You can also watch some funny clips of Oggy and the Cockroaches on YouTube[^2^] [^4^] or Netflix[^3^]. Happy watching!
-
-
\ No newline at end of file
diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/tests/README.md b/spaces/nikitaPDL2023/assignment4/detectron2/tests/README.md
deleted file mode 100644
index f560384045ab4f6bc2beabef1170308fca117eb3..0000000000000000000000000000000000000000
--- a/spaces/nikitaPDL2023/assignment4/detectron2/tests/README.md
+++ /dev/null
@@ -1,9 +0,0 @@
-## Unit Tests
-
-To run the unit tests, do:
-```
-cd detectron2
-python -m unittest discover -v -s ./tests
-```
-
-There are also end-to-end inference & training tests, in [dev/run_*_tests.sh](../dev).
diff --git a/spaces/nomic-ai/Gustavosta_Stable-Diffusion-Prompts/README.md b/spaces/nomic-ai/Gustavosta_Stable-Diffusion-Prompts/README.md
deleted file mode 100644
index 654ecd58a88135060f2bdd87f5e5e8183ee3f9e9..0000000000000000000000000000000000000000
--- a/spaces/nomic-ai/Gustavosta_Stable-Diffusion-Prompts/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Gustavosta/Stable-Diffusion-Prompts
-emoji: 🗺️
-colorFrom: purple
-colorTo: red
-sdk: static
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/nomic-ai/csebuetnlp_xlsum/style.css b/spaces/nomic-ai/csebuetnlp_xlsum/style.css
deleted file mode 100644
index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000
--- a/spaces/nomic-ai/csebuetnlp_xlsum/style.css
+++ /dev/null
@@ -1,28 +0,0 @@
-body {
- padding: 2rem;
- font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif;
-}
-
-h1 {
- font-size: 16px;
- margin-top: 0;
-}
-
-p {
- color: rgb(107, 114, 128);
- font-size: 15px;
- margin-bottom: 10px;
- margin-top: 5px;
-}
-
-.card {
- max-width: 620px;
- margin: 0 auto;
- padding: 16px;
- border: 1px solid lightgray;
- border-radius: 16px;
-}
-
-.card p:last-child {
- margin-bottom: 0;
-}
diff --git a/spaces/nomic-ai/teknium_GPT4-LLM-Cleaned/style.css b/spaces/nomic-ai/teknium_GPT4-LLM-Cleaned/style.css
deleted file mode 100644
index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000
--- a/spaces/nomic-ai/teknium_GPT4-LLM-Cleaned/style.css
+++ /dev/null
@@ -1,28 +0,0 @@
-body {
- padding: 2rem;
- font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif;
-}
-
-h1 {
- font-size: 16px;
- margin-top: 0;
-}
-
-p {
- color: rgb(107, 114, 128);
- font-size: 15px;
- margin-bottom: 10px;
- margin-top: 5px;
-}
-
-.card {
- max-width: 620px;
- margin: 0 auto;
- padding: 16px;
- border: 1px solid lightgray;
- border-radius: 16px;
-}
-
-.card p:last-child {
- margin-bottom: 0;
-}
diff --git a/spaces/novita-ai/Face-Stylization-Playground/README.md b/spaces/novita-ai/Face-Stylization-Playground/README.md
deleted file mode 100644
index e87c50910d44d2d33efcec252401cc182a77b65a..0000000000000000000000000000000000000000
--- a/spaces/novita-ai/Face-Stylization-Playground/README.md
+++ /dev/null
@@ -1,16 +0,0 @@
----
-title: Face-Stylization-Playground
-emoji: ⚡️
-colorFrom: red
-colorTo: purple
-sdk: gradio
-sdk_version: 3.48.0
-app_file: app.py
-license: mit
-pinned: false
-suggested_hardware: cpu-upgrade
-suggested_storage: small
-hf_oauth: true
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/onursavas/MultilingualOCR/Dockerfile b/spaces/onursavas/MultilingualOCR/Dockerfile
deleted file mode 100644
index c406ad653cbf4491fd0cb2bded00e8b5faa6f98a..0000000000000000000000000000000000000000
--- a/spaces/onursavas/MultilingualOCR/Dockerfile
+++ /dev/null
@@ -1,32 +0,0 @@
-# read the doc: https://huggingface.co/docs/hub/spaces-sdks-docker
-# you will also find guides on how best to write your Dockerfile
-
-FROM ubuntu:22.04
-
-ARG DEBIAN_FRONTEND=noninteractive
-
-RUN useradd -m -u 1000 user
-
-RUN apt-get update && apt-get install -y \
- git \
- curl \
- software-properties-common \
- python3.10 \
- python3.10-dev \
- && rm -rf /var/lib/apt/lists/* \
- && apt-get remove -y --purge python3-blinker
-
-RUN apt-get update && apt-get install -y python3-opencv
-
-WORKDIR /code
-
-COPY --chown=user ./requirements.txt /code/requirements.txt
-
-RUN curl -sS https://bootstrap.pypa.io/get-pip.py | python3.10 \
- && python3.10 -m pip install --no-cache-dir -r /code/requirements.txt
-
-COPY --chown=user . .
-
-USER user
-
-CMD ["python3.10", "main.py"]
diff --git a/spaces/paochoa/DeOldification/app.py b/spaces/paochoa/DeOldification/app.py
deleted file mode 100644
index 2fbb9fb93cd6a73ff6b924e6649dd12225d82a42..0000000000000000000000000000000000000000
--- a/spaces/paochoa/DeOldification/app.py
+++ /dev/null
@@ -1,18 +0,0 @@
-import gradio as gr
-import requests
-import streamlit as st
-
-# Define the function that performs the prediction by calling the DeepAI colorizer API
-def predict(url):
- obj = {'image': url}
- header = {'api-key': st.secrets["api_key"]}
- res = requests.post('https://api.deepai.org/api/colorizer', data = obj, headers = header)
- url_res = res.text.split(",")[1].split(":")[1] + ":" + res.text.split(",")[1].split(":")[2]
- res = url_res.split("\"")[1]
- return res
-
-# Create the interface and launch it.
-gr.Interface(fn=predict, inputs=gr.inputs.Textbox(lines=1), outputs=gr.outputs.Textbox(), examples=[], title="DeOldification of B&W images", description="Colorizes the black-and-white image at the given URL using the DeepAI colorization API. The result is the URL where the colorized image is hosted.").launch(share=False)
-
diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/network/session.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/network/session.py
deleted file mode 100644
index 887dc14e796cad0257e5ccfd51ed3a21b7908821..0000000000000000000000000000000000000000
--- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/network/session.py
+++ /dev/null
@@ -1,519 +0,0 @@
-"""PipSession and supporting code, containing all pip-specific
-network request configuration and behavior.
-"""
-
-import email.utils
-import io
-import ipaddress
-import json
-import logging
-import mimetypes
-import os
-import platform
-import shutil
-import subprocess
-import sys
-import urllib.parse
-import warnings
-from typing import (
- TYPE_CHECKING,
- Any,
- Dict,
- Generator,
- List,
- Mapping,
- Optional,
- Sequence,
- Tuple,
- Union,
-)
-
-from pip._vendor import requests, urllib3
-from pip._vendor.cachecontrol import CacheControlAdapter as _BaseCacheControlAdapter
-from pip._vendor.requests.adapters import DEFAULT_POOLBLOCK, BaseAdapter
-from pip._vendor.requests.adapters import HTTPAdapter as _BaseHTTPAdapter
-from pip._vendor.requests.models import PreparedRequest, Response
-from pip._vendor.requests.structures import CaseInsensitiveDict
-from pip._vendor.urllib3.connectionpool import ConnectionPool
-from pip._vendor.urllib3.exceptions import InsecureRequestWarning
-
-from pip import __version__
-from pip._internal.metadata import get_default_environment
-from pip._internal.models.link import Link
-from pip._internal.network.auth import MultiDomainBasicAuth
-from pip._internal.network.cache import SafeFileCache
-
-# Import ssl from compat so the initial import occurs in only one place.
-from pip._internal.utils.compat import has_tls
-from pip._internal.utils.glibc import libc_ver
-from pip._internal.utils.misc import build_url_from_netloc, parse_netloc
-from pip._internal.utils.urls import url_to_path
-
-if TYPE_CHECKING:
- from ssl import SSLContext
-
- from pip._vendor.urllib3.poolmanager import PoolManager
-
-
-logger = logging.getLogger(__name__)
-
-SecureOrigin = Tuple[str, str, Optional[Union[int, str]]]
-
-
-# Ignore warning raised when using --trusted-host.
-warnings.filterwarnings("ignore", category=InsecureRequestWarning)
-
-
-SECURE_ORIGINS: List[SecureOrigin] = [
- # protocol, hostname, port
- # Taken from Chrome's list of secure origins (See: http://bit.ly/1qrySKC)
- ("https", "*", "*"),
- ("*", "localhost", "*"),
- ("*", "127.0.0.0/8", "*"),
- ("*", "::1/128", "*"),
- ("file", "*", None),
- # ssh is always secure.
- ("ssh", "*", "*"),
-]
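-# Reading the tuples above (explanatory note, not upstream): ("https", "*", "*")
-# trusts HTTPS on any host and port, while ("*", "127.0.0.0/8", "*") treats any
-# protocol and port on a loopback address as secure.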
-
-
-# These are environment variables present when running under various
-# CI systems. For each variable, some CI systems that use the variable
-# are indicated. The collection was chosen so that for each of a number
-# of popular systems, at least one of the environment variables is used.
-# This list is used to provide some indication of and lower bound for
-# CI traffic to PyPI. Thus, it is okay if the list is not comprehensive.
-# For more background, see: https://github.com/pypa/pip/issues/5499
-CI_ENVIRONMENT_VARIABLES = (
- # Azure Pipelines
- "BUILD_BUILDID",
- # Jenkins
- "BUILD_ID",
- # AppVeyor, CircleCI, Codeship, Gitlab CI, Shippable, Travis CI
- "CI",
- # Explicit environment variable.
- "PIP_IS_CI",
-)
-
-
-def looks_like_ci() -> bool:
- """
- Return whether it looks like pip is running under CI.
- """
- # We don't use the method of checking for a tty (e.g. using isatty())
- # because some CI systems mimic a tty (e.g. Travis CI). Thus that
- # method doesn't provide definitive information in either direction.
- return any(name in os.environ for name in CI_ENVIRONMENT_VARIABLES)
-
-
-def user_agent() -> str:
- """
- Return a string representing the user agent.
- """
- data: Dict[str, Any] = {
- "installer": {"name": "pip", "version": __version__},
- "python": platform.python_version(),
- "implementation": {
- "name": platform.python_implementation(),
- },
- }
-
- if data["implementation"]["name"] == "CPython":
- data["implementation"]["version"] = platform.python_version()
- elif data["implementation"]["name"] == "PyPy":
- pypy_version_info = sys.pypy_version_info # type: ignore
- if pypy_version_info.releaselevel == "final":
- pypy_version_info = pypy_version_info[:3]
- data["implementation"]["version"] = ".".join(
- [str(x) for x in pypy_version_info]
- )
- elif data["implementation"]["name"] == "Jython":
- # Complete Guess
- data["implementation"]["version"] = platform.python_version()
- elif data["implementation"]["name"] == "IronPython":
- # Complete Guess
- data["implementation"]["version"] = platform.python_version()
-
- if sys.platform.startswith("linux"):
- from pip._vendor import distro
-
- linux_distribution = distro.name(), distro.version(), distro.codename()
- distro_infos: Dict[str, Any] = dict(
- filter(
- lambda x: x[1],
- zip(["name", "version", "id"], linux_distribution),
- )
- )
- libc = dict(
- filter(
- lambda x: x[1],
- zip(["lib", "version"], libc_ver()),
- )
- )
- if libc:
- distro_infos["libc"] = libc
- if distro_infos:
- data["distro"] = distro_infos
-
- if sys.platform.startswith("darwin") and platform.mac_ver()[0]:
- data["distro"] = {"name": "macOS", "version": platform.mac_ver()[0]}
-
- if platform.system():
- data.setdefault("system", {})["name"] = platform.system()
-
- if platform.release():
- data.setdefault("system", {})["release"] = platform.release()
-
- if platform.machine():
- data["cpu"] = platform.machine()
-
- if has_tls():
- import _ssl as ssl
-
- data["openssl_version"] = ssl.OPENSSL_VERSION
-
- setuptools_dist = get_default_environment().get_distribution("setuptools")
- if setuptools_dist is not None:
- data["setuptools_version"] = str(setuptools_dist.version)
-
- if shutil.which("rustc") is not None:
- # If for any reason `rustc --version` fails, silently ignore it
- try:
- rustc_output = subprocess.check_output(
- ["rustc", "--version"], stderr=subprocess.STDOUT, timeout=0.5
- )
- except Exception:
- pass
- else:
- if rustc_output.startswith(b"rustc "):
- # The format of `rustc --version` is:
- # `b'rustc 1.52.1 (9bc8c42bb 2021-05-09)\n'`
- # We extract just the middle (1.52.1) part
- data["rustc_version"] = rustc_output.split(b" ")[1].decode()
-
- # Use None rather than False so as not to give the impression that
- # pip knows it is not being run under CI. Rather, it is a null or
- # inconclusive result. Also, we include some value rather than no
- # value to make it easier to know that the check has been run.
- data["ci"] = True if looks_like_ci() else None
-
- user_data = os.environ.get("PIP_USER_AGENT_USER_DATA")
- if user_data is not None:
- data["user_data"] = user_data
-
- return "{data[installer][name]}/{data[installer][version]} {json}".format(
- data=data,
- json=json.dumps(data, separators=(",", ":"), sort_keys=True),
- )
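-
-# Illustrative result (values vary by environment): the returned header looks
-# like "pip/23.2 {...}", i.e. "pip/<version>" followed by the compact,
-# key-sorted JSON dump of the `data` dict built above.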
-
-
-class LocalFSAdapter(BaseAdapter):
- def send(
- self,
- request: PreparedRequest,
- stream: bool = False,
- timeout: Optional[Union[float, Tuple[float, float]]] = None,
- verify: Union[bool, str] = True,
- cert: Optional[Union[str, Tuple[str, str]]] = None,
- proxies: Optional[Mapping[str, str]] = None,
- ) -> Response:
- pathname = url_to_path(request.url)
-
- resp = Response()
- resp.status_code = 200
- resp.url = request.url
-
- try:
- stats = os.stat(pathname)
- except OSError as exc:
- # Format the raised exception as an io.BytesIO object
- # to return a more helpful error message:
- resp.status_code = 404
- resp.reason = type(exc).__name__
- resp.raw = io.BytesIO(f"{resp.reason}: {exc}".encode("utf8"))
- else:
- modified = email.utils.formatdate(stats.st_mtime, usegmt=True)
- content_type = mimetypes.guess_type(pathname)[0] or "text/plain"
- resp.headers = CaseInsensitiveDict(
- {
- "Content-Type": content_type,
- "Content-Length": stats.st_size,
- "Last-Modified": modified,
- }
- )
-
- resp.raw = open(pathname, "rb")
- resp.close = resp.raw.close
-
- return resp
-
- def close(self) -> None:
- pass
-
-
-class _SSLContextAdapterMixin:
- """Mixin to add the ``ssl_context`` constructor argument to HTTP adapters.
-
- The additional argument is forwarded directly to the pool manager. This allows us
- to dynamically decide what SSL store to use at runtime, which is used to implement
- the optional ``truststore`` backend.
- """
-
- def __init__(
- self,
- *,
- ssl_context: Optional["SSLContext"] = None,
- **kwargs: Any,
- ) -> None:
- self._ssl_context = ssl_context
- super().__init__(**kwargs)
-
- def init_poolmanager(
- self,
- connections: int,
- maxsize: int,
- block: bool = DEFAULT_POOLBLOCK,
- **pool_kwargs: Any,
- ) -> "PoolManager":
- if self._ssl_context is not None:
- pool_kwargs.setdefault("ssl_context", self._ssl_context)
- return super().init_poolmanager( # type: ignore[misc]
- connections=connections,
- maxsize=maxsize,
- block=block,
- **pool_kwargs,
- )
-
-
-class HTTPAdapter(_SSLContextAdapterMixin, _BaseHTTPAdapter):
- pass
-
-
-class CacheControlAdapter(_SSLContextAdapterMixin, _BaseCacheControlAdapter):
- pass
-
-
-class InsecureHTTPAdapter(HTTPAdapter):
- def cert_verify(
- self,
- conn: ConnectionPool,
- url: str,
- verify: Union[bool, str],
- cert: Optional[Union[str, Tuple[str, str]]],
- ) -> None:
- super().cert_verify(conn=conn, url=url, verify=False, cert=cert)
-
-
-class InsecureCacheControlAdapter(CacheControlAdapter):
- def cert_verify(
- self,
- conn: ConnectionPool,
- url: str,
- verify: Union[bool, str],
- cert: Optional[Union[str, Tuple[str, str]]],
- ) -> None:
- super().cert_verify(conn=conn, url=url, verify=False, cert=cert)
-
-
-class PipSession(requests.Session):
- timeout: Optional[int] = None
-
- def __init__(
- self,
- *args: Any,
- retries: int = 0,
- cache: Optional[str] = None,
- trusted_hosts: Sequence[str] = (),
- index_urls: Optional[List[str]] = None,
- ssl_context: Optional["SSLContext"] = None,
- **kwargs: Any,
- ) -> None:
- """
- :param trusted_hosts: Domains not to emit warnings for when not using
- HTTPS.
- """
- super().__init__(*args, **kwargs)
-
- # Namespace the attribute with "pip_" just in case to prevent
- # possible conflicts with the base class.
- self.pip_trusted_origins: List[Tuple[str, Optional[int]]] = []
-
- # Attach our User Agent to the request
- self.headers["User-Agent"] = user_agent()
-
- # Attach our Authentication handler to the session
- self.auth = MultiDomainBasicAuth(index_urls=index_urls)
-
- # Create our urllib3.Retry instance which will allow us to customize
- # how we handle retries.
- retries = urllib3.Retry(
- # Set the total number of retries that a particular request can
- # have.
- total=retries,
- # A 503 error from PyPI typically means that the Fastly -> Origin
- # connection got interrupted in some way. A 503 error in general
- # is typically considered a transient error so we'll go ahead and
- # retry it.
- # A 500 may indicate transient error in Amazon S3
- # A 520 or 527 - may indicate transient error in CloudFlare
- status_forcelist=[500, 503, 520, 527],
- # Add a small amount of back off between failed requests in
- # order to prevent hammering the service.
- backoff_factor=0.25,
- ) # type: ignore
-
- # Our Insecure HTTPAdapter disables HTTPS validation. It does not
- # support caching so we'll use it for all http:// URLs.
- # If caching is disabled, we will also use it for
- # https:// hosts that we've marked as ignoring
- # TLS errors for (trusted-hosts).
- insecure_adapter = InsecureHTTPAdapter(max_retries=retries)
-
- # We want to _only_ cache responses on securely fetched origins or when
- # the host is specified as trusted. We do this because
- # we can't validate the response of an insecurely/untrusted fetched
- # origin, and we don't want someone to be able to poison the cache and
- # require manual eviction from the cache to fix it.
- if cache:
- secure_adapter = CacheControlAdapter(
- cache=SafeFileCache(cache),
- max_retries=retries,
- ssl_context=ssl_context,
- )
- self._trusted_host_adapter = InsecureCacheControlAdapter(
- cache=SafeFileCache(cache),
- max_retries=retries,
- )
- else:
- secure_adapter = HTTPAdapter(max_retries=retries, ssl_context=ssl_context)
- self._trusted_host_adapter = insecure_adapter
-
- self.mount("https://", secure_adapter)
- self.mount("http://", insecure_adapter)
-
- # Enable file:// urls
- self.mount("file://", LocalFSAdapter())
-
- for host in trusted_hosts:
- self.add_trusted_host(host, suppress_logging=True)
-
- def update_index_urls(self, new_index_urls: List[str]) -> None:
- """
- :param new_index_urls: New index urls to update the authentication
- handler with.
- """
- self.auth.index_urls = new_index_urls
-
- def add_trusted_host(
- self, host: str, source: Optional[str] = None, suppress_logging: bool = False
- ) -> None:
- """
- :param host: It is okay to provide a host that has previously been
- added.
- :param source: An optional source string, for logging where the host
- string came from.
- """
- if not suppress_logging:
- msg = f"adding trusted host: {host!r}"
- if source is not None:
- msg += f" (from {source})"
- logger.info(msg)
-
- parsed_host, parsed_port = parse_netloc(host)
- if parsed_host is None:
- raise ValueError(f"Trusted host URL must include a host part: {host!r}")
- if (parsed_host, parsed_port) not in self.pip_trusted_origins:
- self.pip_trusted_origins.append((parsed_host, parsed_port))
-
- self.mount(
- build_url_from_netloc(host, scheme="http") + "/", self._trusted_host_adapter
- )
- self.mount(build_url_from_netloc(host) + "/", self._trusted_host_adapter)
- if not parsed_port:
- self.mount(
- build_url_from_netloc(host, scheme="http") + ":",
- self._trusted_host_adapter,
- )
- # Mount wildcard ports for the same host.
- self.mount(build_url_from_netloc(host) + ":", self._trusted_host_adapter)
-
- def iter_secure_origins(self) -> Generator[SecureOrigin, None, None]:
- yield from SECURE_ORIGINS
- for host, port in self.pip_trusted_origins:
- yield ("*", host, "*" if port is None else port)
-
- def is_secure_origin(self, location: Link) -> bool:
- # Determine if this url used a secure transport mechanism
- parsed = urllib.parse.urlparse(str(location))
- origin_protocol, origin_host, origin_port = (
- parsed.scheme,
- parsed.hostname,
- parsed.port,
- )
-
- # The protocol to use to see if the protocol matches.
- # Don't count the repository type as part of the protocol: in
- # cases such as "git+ssh", only use "ssh". (I.e., Only verify against
- # the last scheme.)
- origin_protocol = origin_protocol.rsplit("+", 1)[-1]
-
- # Determine if our origin is a secure origin by looking through our
- # hardcoded list of secure origins, as well as any additional ones
- # configured on this PipSession instance.
- for secure_origin in self.iter_secure_origins():
- secure_protocol, secure_host, secure_port = secure_origin
- if origin_protocol != secure_protocol and secure_protocol != "*":
- continue
-
- try:
- addr = ipaddress.ip_address(origin_host or "")
- network = ipaddress.ip_network(secure_host)
- except ValueError:
- # We don't have both a valid address or a valid network, so
- # we'll check this origin against hostnames.
- if (
- origin_host
- and origin_host.lower() != secure_host.lower()
- and secure_host != "*"
- ):
- continue
- else:
- # We have a valid address and network, so see if the address
- # is contained within the network.
- if addr not in network:
- continue
-
- # Check to see if the port matches.
- if (
- origin_port != secure_port
- and secure_port != "*"
- and secure_port is not None
- ):
- continue
-
- # If we've gotten here, then this origin matches the current
- # secure origin and we should return True
- return True
-
- # If we've gotten to this point, then the origin isn't secure and we
- # will not accept it as a valid location to search. We will however
- # log a warning that we are ignoring it.
- logger.warning(
- "The repository located at %s is not a trusted or secure host and "
- "is being ignored. If this repository is available via HTTPS we "
- "recommend you use HTTPS instead, otherwise you may silence "
- "this warning and allow it anyway with '--trusted-host %s'.",
- origin_host,
- origin_host,
- )
-
- return False
-
- def request(self, method: str, url: str, *args: Any, **kwargs: Any) -> Response:
- # Allow setting a default timeout on a session
- kwargs.setdefault("timeout", self.timeout)
- # Allow setting a default proxies on a session
- kwargs.setdefault("proxies", self.proxies)
-
- # Dispatch the actual request
- return super().request(method, url, *args, **kwargs)
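-
-# Illustrative usage (hostnames are hypothetical):
-#   session = PipSession(retries=3, trusted_hosts=["pypi.internal"])
-#   response = session.get("https://pypi.org/simple/", timeout=10)
-# Hosts listed in trusted_hosts are served by the insecure adapters mounted
-# above, so certificate errors are ignored for them.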
diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/protocol.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/protocol.py
deleted file mode 100644
index 12ab23713a70dda46edd300bd975b02bfb2be031..0000000000000000000000000000000000000000
--- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/protocol.py
+++ /dev/null
@@ -1,42 +0,0 @@
-from typing import Any, cast, Set, TYPE_CHECKING
-from inspect import isclass
-
-if TYPE_CHECKING:
- from pip._vendor.rich.console import RenderableType
-
-_GIBBERISH = """aihwerij235234ljsdnp34ksodfipwoe234234jlskjdf"""
-
-
-def is_renderable(check_object: Any) -> bool:
- """Check if an object may be rendered by Rich."""
- return (
- isinstance(check_object, str)
- or hasattr(check_object, "__rich__")
- or hasattr(check_object, "__rich_console__")
- )
-
-
-def rich_cast(renderable: object) -> "RenderableType":
- """Cast an object to a renderable by calling __rich__ if present.
-
- Args:
- renderable (object): A potentially renderable object
-
- Returns:
- object: The result of recursively calling __rich__.
- """
- from pip._vendor.rich.console import RenderableType
-
- rich_visited_set: Set[type] = set() # Prevent potential infinite loop
- while hasattr(renderable, "__rich__") and not isclass(renderable):
- # Detect objects which claim to have all attributes
- if hasattr(renderable, _GIBBERISH):
- return repr(renderable)
- cast_method = getattr(renderable, "__rich__")
- renderable = cast_method()
- renderable_type = type(renderable)
- if renderable_type in rich_visited_set:
- break
- rich_visited_set.add(renderable_type)
-
- return cast(RenderableType, renderable)
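-
-# Illustrative (not upstream): an object whose __rich__() returns another
-# renderable is unwrapped repeatedly by rich_cast until a plain renderable
-# (e.g. a str) or an already-seen type is reached.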
diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/tenacity/wait.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/tenacity/wait.py
deleted file mode 100644
index f9349c028360d541c56962d6a09bd9c2a00e3a37..0000000000000000000000000000000000000000
--- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/tenacity/wait.py
+++ /dev/null
@@ -1,228 +0,0 @@
-# Copyright 2016–2021 Julien Danjou
-# Copyright 2016 Joshua Harlow
-# Copyright 2013-2014 Ray Holder
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import abc
-import random
-import typing
-
-from pip._vendor.tenacity import _utils
-
-if typing.TYPE_CHECKING:
- from pip._vendor.tenacity import RetryCallState
-
-
-class wait_base(abc.ABC):
- """Abstract base class for wait strategies."""
-
- @abc.abstractmethod
- def __call__(self, retry_state: "RetryCallState") -> float:
- pass
-
- def __add__(self, other: "wait_base") -> "wait_combine":
- return wait_combine(self, other)
-
- def __radd__(self, other: "wait_base") -> typing.Union["wait_combine", "wait_base"]:
- # make it possible to use multiple waits with the built-in sum function
- if other == 0: # type: ignore[comparison-overlap]
- return self
- return self.__add__(other)
-
-
-WaitBaseT = typing.Union[wait_base, typing.Callable[["RetryCallState"], typing.Union[float, int]]]
-
-
-class wait_fixed(wait_base):
- """Wait strategy that waits a fixed amount of time between each retry."""
-
- def __init__(self, wait: _utils.time_unit_type) -> None:
- self.wait_fixed = _utils.to_seconds(wait)
-
- def __call__(self, retry_state: "RetryCallState") -> float:
- return self.wait_fixed
-
-
-class wait_none(wait_fixed):
- """Wait strategy that doesn't wait at all before retrying."""
-
- def __init__(self) -> None:
- super().__init__(0)
-
-
-class wait_random(wait_base):
- """Wait strategy that waits a random amount of time between min/max."""
-
- def __init__(self, min: _utils.time_unit_type = 0, max: _utils.time_unit_type = 1) -> None: # noqa
- self.wait_random_min = _utils.to_seconds(min)
- self.wait_random_max = _utils.to_seconds(max)
-
- def __call__(self, retry_state: "RetryCallState") -> float:
- return self.wait_random_min + (random.random() * (self.wait_random_max - self.wait_random_min))
-
-
-class wait_combine(wait_base):
- """Combine several waiting strategies."""
-
- def __init__(self, *strategies: wait_base) -> None:
- self.wait_funcs = strategies
-
- def __call__(self, retry_state: "RetryCallState") -> float:
- return sum(x(retry_state=retry_state) for x in self.wait_funcs)
-
-
-class wait_chain(wait_base):
- """Chain two or more waiting strategies.
-
- If all strategies are exhausted, the very last strategy is used
- thereafter.
-
- For example::
-
- @retry(wait=wait_chain(*[wait_fixed(1) for i in range(3)] +
- [wait_fixed(2) for j in range(5)] +
- [wait_fixed(5) for k in range(4)]))
- def wait_chained():
- print("Wait 1s for 3 attempts, 2s for 5 attempts and 5s thereafter.")
- """
-
- def __init__(self, *strategies: wait_base) -> None:
- self.strategies = strategies
-
- def __call__(self, retry_state: "RetryCallState") -> float:
- wait_func_no = min(max(retry_state.attempt_number, 1), len(self.strategies))
- wait_func = self.strategies[wait_func_no - 1]
- return wait_func(retry_state=retry_state)
-
-
-class wait_incrementing(wait_base):
- """Wait an incremental amount of time after each attempt.
-
- Starting at a starting value and incrementing by a value for each attempt
- (and restricting the upper limit to some maximum value).
- """
-
- def __init__(
- self,
- start: _utils.time_unit_type = 0,
- increment: _utils.time_unit_type = 100,
- max: _utils.time_unit_type = _utils.MAX_WAIT, # noqa
- ) -> None:
- self.start = _utils.to_seconds(start)
- self.increment = _utils.to_seconds(increment)
- self.max = _utils.to_seconds(max)
-
- def __call__(self, retry_state: "RetryCallState") -> float:
- result = self.start + (self.increment * (retry_state.attempt_number - 1))
- return max(0, min(result, self.max))
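-
-# Illustrative: wait_incrementing(start=1, increment=2) waits 1s, 3s, 5s, ...
-# on attempts 1, 2, 3, ..., clamped to the [0, max] range.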
-
-
-class wait_exponential(wait_base):
- """Wait strategy that applies exponential backoff.
-
- It allows for a customized multiplier and an ability to restrict the
- upper and lower limits to some maximum and minimum value.
-
- The intervals are fixed (i.e. there is no jitter), so this strategy is
- suitable for balancing retries against latency when a required resource is
- unavailable for an unknown duration, but *not* suitable for resolving
- contention between multiple processes for a shared resource. Use
- wait_random_exponential for the latter case.
- """
-
- def __init__(
- self,
- multiplier: typing.Union[int, float] = 1,
- max: _utils.time_unit_type = _utils.MAX_WAIT, # noqa
- exp_base: typing.Union[int, float] = 2,
- min: _utils.time_unit_type = 0, # noqa
- ) -> None:
- self.multiplier = multiplier
- self.min = _utils.to_seconds(min)
- self.max = _utils.to_seconds(max)
- self.exp_base = exp_base
-
- def __call__(self, retry_state: "RetryCallState") -> float:
- try:
- exp = self.exp_base ** (retry_state.attempt_number - 1)
- result = self.multiplier * exp
- except OverflowError:
- return self.max
- return max(max(0, self.min), min(result, self.max))
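-
-# Illustrative: with the defaults (multiplier=1, exp_base=2, min=0), attempts
-# wait 1s, 2s, 4s, 8s, ... between retries, capped at `max`.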
-
-
-class wait_random_exponential(wait_exponential):
- """Random wait with exponentially widening window.
-
- An exponential backoff strategy used to mediate contention between multiple
- uncoordinated processes for a shared resource in distributed systems. This
- is the sense in which "exponential backoff" is meant in e.g. Ethernet
- networking, and corresponds to the "Full Jitter" algorithm described in
- this blog post:
-
- https://aws.amazon.com/blogs/architecture/exponential-backoff-and-jitter/
-
- Each retry occurs at a random time in a geometrically expanding interval.
- It allows for a custom multiplier and an ability to restrict the upper
- limit of the random interval to some maximum value.
-
- Example::
-
- wait_random_exponential(multiplier=0.5, # initial window 0.5s
- max=60) # max 60s timeout
-
- When waiting for an unavailable resource to become available again, as
- opposed to trying to resolve contention for a shared resource, the
- wait_exponential strategy (which uses a fixed interval) may be preferable.
-
- """
-
- def __call__(self, retry_state: "RetryCallState") -> float:
- high = super().__call__(retry_state=retry_state)
- return random.uniform(0, high)
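-
-# Illustrative: wait_random_exponential(multiplier=0.5, max=60) draws each wait
-# uniformly from [0, min(0.5 * 2**(attempt - 1), 60)] ("full jitter").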
-
-
-class wait_exponential_jitter(wait_base):
- """Wait strategy that applies exponential backoff and jitter.
-
- It allows for a customized initial wait, maximum wait and jitter.
-
- This implements the strategy described here:
- https://cloud.google.com/storage/docs/retry-strategy
-
- The wait time is min(initial * 2**n + random.uniform(0, jitter), maximum)
- where n is the retry count.
- """
-
- def __init__(
- self,
- initial: float = 1,
- max: float = _utils.MAX_WAIT, # noqa
- exp_base: float = 2,
- jitter: float = 1,
- ) -> None:
- self.initial = initial
- self.max = max
- self.exp_base = exp_base
- self.jitter = jitter
-
- def __call__(self, retry_state: "RetryCallState") -> float:
- jitter = random.uniform(0, self.jitter)
- try:
- exp = self.exp_base ** (retry_state.attempt_number - 1)
- result = self.initial * exp + jitter
- except OverflowError:
- result = self.max
- return max(0, min(result, self.max))
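-
-# Note (explanatory, not upstream): n in the docstring formula corresponds to
-# attempt_number - 1, so the first wait is about initial + uniform(0, jitter).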
diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/__init__.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/__init__.py
deleted file mode 100644
index 1a188c35cb6a82cfb7dfb6d8a813fed35bed0cc4..0000000000000000000000000000000000000000
--- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/__init__.py
+++ /dev/null
@@ -1,14 +0,0 @@
-import sys
-import importlib
-
-__version__, _, _ = sys.version.partition(' ')
-
-
-try:
- # Allow Debian and pkgsrc (only) to customize system
- # behavior. Ref pypa/distutils#2 and pypa/distutils#16.
- # This hook is deprecated and no other environments
- # should use it.
- importlib.import_module('_distutils_system_mod')
-except ImportError:
- pass
diff --git a/spaces/power2/JoJoGan-powerhow2/e4e/utils/model_utils.py b/spaces/power2/JoJoGan-powerhow2/e4e/utils/model_utils.py
deleted file mode 100644
index e51e95578f72b3218d6d832e3b604193cb68c1d7..0000000000000000000000000000000000000000
--- a/spaces/power2/JoJoGan-powerhow2/e4e/utils/model_utils.py
+++ /dev/null
@@ -1,35 +0,0 @@
-import torch
-import argparse
-from models.psp import pSp
-from models.encoders.psp_encoders import Encoder4Editing
-
-
-def setup_model(checkpoint_path, device='cuda'):
- ckpt = torch.load(checkpoint_path, map_location='cpu')
- opts = ckpt['opts']
-
- opts['checkpoint_path'] = checkpoint_path
- opts['device'] = device
- opts = argparse.Namespace(**opts)
-
- net = pSp(opts)
- net.eval()
- net = net.to(device)
- return net, opts
-
-
-def load_e4e_standalone(checkpoint_path, device='cuda'):
- ckpt = torch.load(checkpoint_path, map_location='cpu')
- opts = argparse.Namespace(**ckpt['opts'])
- e4e = Encoder4Editing(50, 'ir_se', opts)
- e4e_dict = {k.replace('encoder.', ''): v for k, v in ckpt['state_dict'].items() if k.startswith('encoder.')}
- e4e.load_state_dict(e4e_dict)
- e4e.eval()
- e4e = e4e.to(device)
- latent_avg = ckpt['latent_avg'].to(device)
-
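- # Forward hook (explanatory comment): add the checkpoint's average latent,
- # broadcast over the batch, back onto every code the encoder outputs.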
- def add_latent_avg(model, inputs, outputs):
- return outputs + latent_avg.repeat(outputs.shape[0], 1, 1)
-
- e4e.register_forward_hook(add_latent_avg)
- return e4e
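-
-# Illustrative usage (checkpoint path and `images` tensor are hypothetical):
-#   e4e = load_e4e_standalone('checkpoints/e4e_ffhq_encode.pt')
-#   with torch.no_grad():
-#       latents = e4e(images)  # average latent added by the hook above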
diff --git a/spaces/prerna9811/Chord/portaudio/examples/paex_pink.c b/spaces/prerna9811/Chord/portaudio/examples/paex_pink.c
deleted file mode 100644
index 519f9797bf5ceafe370aeb1d028f6edd75468f97..0000000000000000000000000000000000000000
--- a/spaces/prerna9811/Chord/portaudio/examples/paex_pink.c
+++ /dev/null
@@ -1,280 +0,0 @@
-/** @file paex_pink.c
- @ingroup examples_src
- @brief Generate Pink Noise using Gardner method.
-
- Optimization suggested by James McCartney uses a tree
- to select which random value to replace.
-
- x x x x x x x x x x x x x x x x
- x x x x x x x x
- x x x x
- x x
- x
-